Advice sought: IOCTL or better for WDM Audio?

I’m trying to add a user-land interface to an MSVAD-based (audio) driver.
The idea is that the device can then be opened via CreateFile() and audio
data obtained via ReadFile().

Is adding an IOCTL interface the best way to do this?

If yes, then are the patterns used in the 8.1 WDK ioctl and hardware-event
samples the right way to go? My understanding is that the driver will need to
handle IRP_MJ_CREATE, IRP_MJ_CLOSE, IRP_MJ_DEVICE_CONTROL and IRP_MJ_READ,
is that right?

If not, can anyone point me towards examples/docs of a better way? Does
KMDF have more to offer in this respect?

Thx++

Jerry.

xxxxx@chordia.co.uk wrote:

I’m trying to add a user-land interface to an MSVAD-based (audio) driver.
The idea is that the device can then be opened via CreateFile() and audio
data obtained via ReadFile().

Let’s take a step back for a moment. There is already a user-land
interface to MSVAD drivers. Indeed, there are many (WASAPI,
DirectSound, DirectShow, waveIn/waveOut, ASIO, DirectKS, …).

So, what are you actually trying to do here? Are you trying to create a
“virtual audio cable” driver that acts like a normal speaker device, but
pumps its data to a user-mode client for more processing? Are you aware
that the Audio Engine already offers that ability? It’s called
“loopback recording”. No driver required.

Is adding an IOCTL interface the best way to do this?

Let’s not talk about a “best way” until we know what it is you want to
do. MSVAD is a kernel streaming driver. As a result, it already
supports CreateFile and ioctls, through the kernel streaming interface.
That’s how all of those user-mode APIs get to the driver in the first
place. Now, it’s possible to use custom KS properties to talk to the
device, without wresting control of the driver callbacks away from KS.
It is also possible to interpose yourself in the dispatch path to check
for your own IRPs before letting KS handle them.

If not, can anyone point me towards examples/docs of a better way? Does
KMDF have more to offer in this respect?

A big part of what KMDF provides is already being provided to KS drivers
by the KS components, including IRP dispatching. It is possible to use
KMDF in “miniport mode”; whether that is a net gain or not is debatable.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Hello Tim,

Fair question. It’s a tad difficult to discuss specifics here, OK? That said:

Imagine the VAD is a multiple channel device driven by a DAW (Cubase/Sonar
for example), but avoiding the mixdown. That data needs to be piped bit
perfect to a single sink, which might be hardware, might be a socket.

Are you aware that the Audio Engine already offers that ability? It’s
called “loopback recording”. No driver required.

Yes, but not in exclusive mode, which is what is needed here.

Now, it’s possible to use custom KS properties to talk to the device,
without wrestling control of the driver callbacks away from KS.

Ah. OK. It’s partly a terminology problem. Does this start to describe
things?
https://msdn.microsoft.com/en-us/library/windows/hardware/ff567673(v=vs.85).aspx

Believe me. I am as keen as possible to do as little kernel mode work as
possible. If there is an existing wheel, it shall be rolled!

Thx++.

VBR,

Jerry

xxxxx@chordia.co.uk wrote:

Imagine the VAD is a multiple channel device driven by a DAW (Cubase/Sonar
for example), but avoiding the mixdown.

What mixdown? If you’re in exclusive mode, there is no mixing or
converting going on. The app will send whatever exact format the device
wants. It’s true that 8 channels will all be delivered as a single
large sample, but that’s usually what you want.

That data needs to be piped bit
perfect to a single sink, which might be hardware, might be a socket.

How is that not what happens already? If your hardware advertises
itself as having 8-channels, then your driver will receive a “bit
perfect” 8-channel signal from the Audio Engine. Is it a USB device?

Sending to a socket is going to involve a VAD, however.

> Now, it’s possible to use custom KS properties to talk to the device,
> without wrestling control of the driver callbacks away from KS.
Ah. OK. It’s partly a terminology problem. Does this start to describe
things?
https://msdn.microsoft.com/en-us/library/windows/hardware/ff567673.aspx

That’s the property interface, yes. Now, it IS possible to override the
dispatching and grab raw, custom ioctls. Whether that’s easier or not
is a question for posterity to answer.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

> There is already a user-land interface to MSVAD drivers. Indeed, there are
> many (WASAPI, DirectSound, DirectShow, waveIn/waveOut, ASIO, DirectKS, …).

Isn’t ASIO using the whole stack of its own, without resorting to KS?


Maxim S. Shatskih
Microsoft MVP on File System And Storage
xxxxx@storagecraft.com
http://www.storagecraft.com

It depends. Sometimes they sit on top of the stack, sometimes they go as low
as possible.




Tim,

Sending to a socket is going to involve a VAD, however.

Indeed. So back to the original question then: what is the ‘best solution’
for doing this?

What mixdown? If you’re in exclusive mode, there is no mixing or
converting going on. The app will send whatever exact format the device
wants. It’s true that 8 channels will all be delivered as a single large
sample, but that’s usually what you want.
How is that not what happens already? If your hardware advertises itself
as having 8 channels, then your driver will receive a “bit perfect”
8-channel signal from the Audio Engine. Is it a USB device?

a) I know all this;
b) it’s not relevant to the original question;
c) I’ve already pointed out I’m not in a position to disclose all details at
this point.

BR.

Maxim S. Shatskih wrote:

> interface to MSVAD drivers. Indeed, there are many (WASAPI,
> DirectSound, DirectShow, waveIn/waveOut, ASIO, DirectKS, …).
Isn’t ASIO using the whole stack of its own, without resorting to KS?

There is an ASIO-to-KS adapter layer for people who have adopted the
ASIO APIs but can’t write a driver.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

jerry evans wrote:

> Sending to a socket is going to involve a VAD, however.
Indeed. So back to the original question then: what is the ‘best solution’
for doing this?

It is possible to do socket I/O from the kernel. I’m not saying that’s
the RIGHT solution; as a rule, I always argue that processing should
never be in the kernel unless there is no other choice.

Either a KS property interface or an ioctl interface can be made to
work. MSVAD already handles standard audio KS properties, and you
should be able to add a custom set. I know some people who are
unfamiliar with KS end up deciding it’s easier to hook the ioctl
interface directly.

Have you looked at the Virtual Audio Cable product? It might not be
your production solution, but the free trial might let you explore some
of your concepts and see if things will work.
http://software.muzychenko.net/eng/vac.htm

a) I know all this,
b) it’s not relevant to the original question

I disagree. You were proposing to write a custom driver for audio
hardware. That’s a hell of a lot of work, and unless your requirements
truly cannot be met by an in-the-box solution, you should want to avoid it.

c) I’ve already pointed out I’m not in a position to disclose all details at
this point.

I’m not asking for details. I’m asking for generalities. When someone
asks about doing something unusual, it is perfectly reasonable for us to
explore whether the need really demands an unusual approach. Many
people have wasted many man-months developing delicate solutions for
problems that could have been solved much more easily.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Hmm.

It is possible to do socket I/O from the kernel. I’m not saying that’s
the RIGHT solution; as a rule, I always argue that processing should
never be in the kernel unless there is no other choice.

Indeed. I’ve already made this perfectly clear.

Either a KS property interface or an ioctl interface can be made to
work. MSVAD already handles standard audio KS properties, and you
should be able to add a custom set. I know some people who are
unfamiliar with KS end up deciding it’s easier to hook the ioctl
interface directly.

Does having two ‘wave’ sub-devices help? I am unclear how this affects
exclusive mode: can a second application open the ‘wave2’ device?

I’m wondering if it would be simpler to have a second, IOCTL-only driver, and
have the two drivers share a suitable kernel-mode queue for data transfer; as
per http://www.osronline.com/article.cfm?id=177 for example.

I disagree. You were proposing to write a custom driver for audio
hardware. That’s a hell of a lot of work, and unless your requirements
truly cannot be met by an in-the-box solution, you should want to avoid it.

No. Absolutely not. I made this perfectly clear from the get-go.

*Virtual* audio: built on an existing (and very long-standing) Microsoft
sample, adding as little kernel-mode code as possible.

I’m not asking for details. I’m asking for generalities. When someone
asks about doing something unusual, it is perfectly reasonable for us to
explore whether the need really demands an unusual approach. Many
people have wasted many man-months developing delicate solutions for
problems that could have been solved much more easily.

Sorry, but no, that is simply a ‘conclusion’ based on a series of incorrect
inferences.

As for suggesting loopback modes and VAC as potential ‘solutions’ well …

BR.

jerry evans wrote:

Indeed. I’ve already made this perfectly clear.

I don’t understand why you’re being defensive. Those of us who have
been answering questions on this forum for a very long time have seen
the way queries go awry based on assumptions and partial information.
You have a model in your brain; the closer we can get to sharing that
model, the more useful this thread will be.

Does having two ‘wave’ sub-devices help? I am unclear how this affects
exclusive mode: can a second application open the ‘wave2’ device?

“Exclusive mode” is strictly a concept of the Audio Engine and the
WASAPI interface. It has no impact on the layers below it. In
particular, it does not mean that the driver itself is opened for
exclusive access. You can still open a handle to the device even while
Audio Engine has its own handle.

Creating a second wave device would only confuse things. Now you have
two sets of state to worry about and communicate between.

I’m wondering if it would be simpler to have a second, IOCTL-only driver, and
have the two drivers share a suitable kernel-mode queue for data transfer.

I would judge that to be an unnecessary complication. Getting into the
driver is not the hard part. If you decide to go with ioctls, it should
be pretty clear in the source code where you need to add the plumbing.
Circular buffer management will be harder.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

> I don’t understand why you’re being defensive. Those of us who have been
> answering questions on this forum for a very long time have seen the way
> queries go awry based on assumptions and partial information.

Ahem. Not defensive in the least. Finding this a bit tiresome, yes. But let
us progress.

“Exclusive mode” is strictly a concept of the Audio Engine and the WASAPI
interface. It has no impact on the layers below it. In particular, it does
not mean that the driver itself is opened for exclusive access. You can
still open a handle to the device even while Audio Engine has its own handle.

Good to know, thanks.

Creating a second wave device would only confuse things. Now you have
two sets of state to worry about and communicate between.
I would judge that to be an unnecessary complication …
Circular buffer management will be harder.

Buffer management will indeed be an issue.

Time for the odd experiment perhaps.

BR.