Need help with SYSVAD Windows audio driver sample

Hi,

I am trying to solve a problem in this sample WDM audio driver from Microsoft: https://github.com/Microsoft/Windows-driver-samples/tree/master/audio/sysvad

The problem is with the microphone: every listener creates a separate stream for its audio (in minwavertstream.cpp, the CMiniportWaveRTStream::Init function), but I want every microphone stream to share the same audio data, which I fill in ToneGenerator.cpp (the ToneGenerator::GenerateSine function). That function is called by a timer, so if there are 2 streams, the timer gets called twice as often overall, once per stream. I have an audio stream at 44100 Hz which I send directly to the microphone buffer, but when I start a Skype call it creates 2 streams (I don't know why), and now the data is consumed at a doubled rate of 88200 Hz; it reads data too fast, so I can't fill it fast enough. How can I share the same timer and buffer between streams? Thanks in advance for any response.

xxxxx@gmail.com wrote:

The problem is with the microphone: every listener creates a separate stream for its audio (in minwavertstream.cpp, the CMiniportWaveRTStream::Init function), but I want every microphone stream to share the same audio data, which I fill in ToneGenerator.cpp (the ToneGenerator::GenerateSine function).

How do you expect that to work?  The two streams are not going to be
synchronized.  At any given point in time, how would you know what the
next byte for stream K is supposed to be?

That function is called by a timer, so if there are 2 streams, the timer gets called twice as often overall, once per stream.

Not exactly.  If there are two streams, there are two separate timers. 
Each one is getting called at the appropriate rate.

I have an audio stream at 44100 Hz which I send directly to the microphone buffer, but when I start a Skype call it creates 2 streams (I don't know why), and now the data is consumed at a doubled rate of 88200 Hz; it reads data too fast, so I can't fill it fast enough.

What you’re saying doesn’t make sense.  Each CMiniportWaveRTStream has
its own timer, its own state, and its own copy of the ToneGenerator,
keeping its own position info.  The two streams should not affect each
other in any way.

How can I share the same timer and buffer between streams?

You don’t really want to do that.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Thank you, Mr. Roberts, for such a nice, extensive response. I am just starting driver development, so sorry for my "dumb questions". I need to get sound into Skype somehow. The best case would be that every listener (stream) outputs the same sound (if that's even possible). If the streams can't share buffers, then they can each have their own, but I would still need to identify the stream that Skype owns. I need to send sound to a Skype call; that's what I am trying to achieve, so how could I do it? What would be your suggestion? I have music streaming at 44100 Hz which I need to send to Skype. I would like every stream to read this audio; I don't need the streams to be synchronized. The main task I am trying to achieve is to send that music to Skype via a virtual microphone. As I wrote earlier, Skype creates 2 streams for the mic. Maybe if I could differentiate the streams, I could fill every buffer with the same data and only advance my audio buffer (to which I write the music data) once every stream has read it.

xxxxx@gmail.com wrote:

I need to get sound into Skype somehow.

The driver has no idea what client is being used.  Even if you trace
things back to the “current process”, you’ll find that it’s the Audio
Engine process.  You can’t hope to treat Skype specially.  All you can
do is produce the data you want all your clients to see.

The best case would be that every listener (stream) outputs the same sound (if that's even possible). If the streams can't share buffers, then they can each have their own, but I would still need to identify the stream that Skype owns. I need to send sound to a Skype call; that's what I am trying to achieve, so how could I do it?

You can’t.  You will never know which stream is connected to Skype.  You
just have to produce your data.

I have music streaming at 44100 Hz which I need to send to Skype. I would like every stream to read this audio; I don't need the streams to be synchronized. The main task I am trying to achieve is to send that music to Skype via a virtual microphone. As I wrote earlier, Skype creates 2 streams for the mic.

That doesn’t make sense.  There’s no reason why Skype would create two
microphone streams, unless you have a configuration problem that caused
you to advertise two microphones and it's trying to do echo
cancellation.  What do you see if you use Audacity or GraphEdit to read
from your microphone?  I assume you have been using something other than
Skype for your testing.

And even if it did create two streams, they wouldn’t interfere.  As I
mentioned, each one will have its own tone generator, so if they are
reading at the same rate, they should see the same data.

Maybe if I could differentiate the streams, I could fill every buffer with the same data and only advance my audio buffer (to which I write the music data) once every stream has read it.

You can certainly buffer your music data in the IAdapterCommon object,
then have each stream object keep track of its current position in that
common buffer.  You then have to deal with the corner cases, like what
happens if one stream gets way behind or pauses?


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Hi again, thank you for responding,

This is how the audio data moves at the moment: I have a program in user space that reads music into a ring buffer shared with the driver. The audio data (bytes) in the ring buffer are filled at 44100 Hz. The buffer is allocated in user space and I access it in the driver via an MDL. In the tone generator's GenerateSine function I just fetch bytes from my buffer: I commented out everything in that function and manually fill the given buffer with data from my ring buffer. If there is only one stream, the audio sounds perfect, but since every stream (as you said) instantiates its own tone generator, when the audio data from the buffer is read by both streams, while I have data for only one, both of them sound horrible. When I test audio, I just enable "Listen to this device" in the Recording Devices tab. I have tried with Audacity, and it too creates only one stream. If I make a call, I can see that my buffer is read twice as often, and when I debug, Skype creates two streams (I don't know why; as you said, it could be for noise cancellation, or maybe because during a call there is a tab where you can see mic activity, its volume in real time, so maybe the second stream is used for that). I don't care if one of the streams pauses or exits; I need every alive and running stream to get roughly the same data from my ring buffer. Could you give an example of how I could use the IAdapterCommon object in my case? I will try to find the best solution for those corner cases by debugging and trying different strategies, but first I need the same audio data coming from all the streams.

xxxxx@gmail.com wrote:

This is how the audio data moves at the moment: I have a program in user space that reads music into a ring buffer shared with the driver. The audio data (bytes) in the ring buffer are filled at 44100 Hz. The buffer is allocated in user space and I access it in the driver via an MDL. In the tone generator's GenerateSine function I just fetch bytes from my buffer.

Where do you keep the buffer information?  Is that in CAdapterCommon? 
How do you keep the driver and the user-mode app synchronized?  That is,
how does the user-mode app know you need more data?

I commented out everything in that function and manually fill the given buffer with data from my ring buffer.

How do you do that?  Are you keeping the ring buffer pointers in
CAdapterCommon?  Or, God forbid, did you put it in a global?  The SYSVAD
sample is very carefully architected with separation of duties.  The
global stuff in adapter.cpp is the minimum required to get the job
done.  The stuff that’s common to all of the devices is kept in the
CAdapterCommon object, which gets passed to the streams, so they can
access common data.

Either way, then the answer should be clear.  Every stream needs to have
its own ring buffer pointer, instead of making it common.

That presents a synchronization problem, of course, because the two
streams might not suck at the same rate.  You’ll have to figure out how
and when to ask for more data.
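
One possible policy for that synchronization problem can be modeled in plain C++ (all names here are invented for illustration, not SYSVAD's): track a single absolute write head advanced by the user-mode feeder, give each stream its own absolute read position, and snap a stream forward when it has lagged so far that the writer has already overwritten its data:

```cpp
#include <cstddef>

// Illustrative model: one write head shared by all streams, one read
// position per stream.
struct Feeder {
    size_t writeHead = 0;        // total bytes ever written
    size_t capacity;             // ring capacity in bytes
    explicit Feeder(size_t cap) : capacity(cap) {}
    void Produce(size_t bytes) { writeHead += bytes; }
};

struct StreamReader {
    size_t readPos = 0;          // total bytes this stream has consumed

    // Before reading, check how far behind we are.  If the writer has
    // lapped us (lag >= capacity), our old data is gone, so skip forward
    // and resume with the oldest bytes that are still valid.
    size_t Resync(const Feeder& f) {
        size_t lag = f.writeHead - readPos;
        if (lag >= f.capacity)
            readPos = f.writeHead - f.capacity;  // jump over the lost region
        return f.writeHead - readPos;            // bytes now safe to read
    }
};
```

A stream that pauses and falls behind loses the overwritten audio but rejoins cleanly, instead of reading a mix of stale and fresh bytes.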

If I make a call, I can see that my buffer is read twice as often, and when I debug, Skype creates two streams (I don't know why; as you said, it could be for noise cancellation, or maybe because during a call there is a tab where you can see mic activity, its volume in real time, so maybe the second stream is used for that).

Are you sure it’s really two streams, and not one stream asking for
stereo when you’re expecting mono?

I don't care if one of the streams pauses or exits; I need every alive and running stream to get roughly the same data from my ring buffer.

Of course you have to care!  The ONLY way that multiple streams can get
the same data is if the streams are synchronized.  If one of them pauses
or gets behind, YOU have to decide what to do.

Could you give an example of how I could use the IAdapterCommon object in my case?

IAdapterCommon (also called PADAPTERCOMMON in the code) is the right way
to handle data that is common to all streams.  The streams have access
to this through m_pAdapterCommon in the miniport object, or
m_pMiniport->GetAdapterCommObj() in the stream object.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Hi, thanks for the response:
If the driver reads too fast, I detect it and send silence until user mode catches up; if the user-mode app fills the buffer faster, it just overwrites old data and the driver reads the newer bytes. The buffer is big, so small glitches fix themselves in the long run. While testing I haven't had any audio glitches, unless I ran a benchmark that loaded the CPU. I keep the shared buffer pointer as a reference in CAdapterCommon (at least that part I did correctly :D).
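
That overwrite/silence scheme can be modeled in plain user-mode C++ (names invented; the real code is split between the driver and the user-mode feeder, and a real version would also need the lag check discussed above plus proper locking):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Rough model of the scheme described above: the writer always succeeds,
// overwriting the oldest data, and the reader pads with silence when it
// outruns the writer.
class CatchUpRing {
public:
    explicit CatchUpRing(size_t cap) : buf_(cap, 0) {}

    // User-mode side: unconditional write, oldest data is overwritten.
    void Write(const uint8_t* src, size_t n) {
        for (size_t i = 0; i < n; ++i)
            buf_[(writeHead_ + i) % buf_.size()] = src[i];
        writeHead_ += n;
    }

    // Driver side: copy what is available, fill the rest with silence (0).
    void Read(uint8_t* dst, size_t n) {
        size_t avail = std::min(n, writeHead_ - readPos_);
        for (size_t i = 0; i < avail; ++i)
            dst[i] = buf_[(readPos_ + i) % buf_.size()];
        std::memset(dst + avail, 0, n - avail);   // silence on underrun
        readPos_ += avail;
    }

private:
    std::vector<uint8_t> buf_;
    size_t writeHead_ = 0;   // total bytes ever written
    size_t readPos_ = 0;     // total bytes ever read
};
```

For zero-filled PCM buffers this silence trick works for 8-bit unsigned only if you substitute the format's actual silence value (0x80 for 8-bit PCM, 0 for 16-bit signed samples).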

About the two streams: as I debugged, the CMiniportWaveRTStream::Init function gets called two times, one after another, moments after I start a Skype call. Both streams are stereo, with the same format, and both have Capture_ set to true. In Audacity, a stereo recording opens only one stream. I really don't know why this is happening :confused:

These are my last questions for today:
How can I enumerate the capture streams currently running? Can I get them from the m_pMiniport object? And lastly, in the CMiniportWaveRTStream::WriteBytes function, how can I tell (differentiate) which stream's timer is being called to fill the buffer?