Re: How to pass buffers to UVCDriver

I'm not sure I understand your answer.
I want NOT to copy, as this affects my performance; I want to provide the
UVC driver my own allocated buffer, instead of getting the buffer allocated by
the UVC driver and then copying it.
In user mode I am receiving the buffers through the MF interface
IMFSourceReaderCallback and its OnReadSample method.
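
For reference, the user-mode delivery path described above might look roughly like the following sketch (not code from this thread; the class name `Capture` and the surrounding error handling are illustrative only):

```cpp
// Sketch of the IMFSourceReaderCallback path: each frame arrives via
// OnReadSample, and the pixel data is reached by locking the sample's
// buffer.  The other callback methods and COM plumbing are omitted.
#include <mfreadwrite.h>

STDMETHODIMP Capture::OnReadSample(HRESULT hrStatus, DWORD streamIndex,
                                   DWORD streamFlags, LONGLONG timestamp,
                                   IMFSample *pSample)
{
    if (SUCCEEDED(hrStatus) && pSample) {
        IMFMediaBuffer *pBuffer = NULL;
        if (SUCCEEDED(pSample->ConvertToContiguousBuffer(&pBuffer))) {
            BYTE *pData = NULL;
            DWORD cbData = 0;
            if (SUCCEEDED(pBuffer->Lock(&pData, NULL, &cbData))) {
                // pData/cbData now describe the frame the pipeline
                // delivered; using it elsewhere still implies a copy.
                pBuffer->Unlock();
            }
            pBuffer->Release();
        }
    }
    return S_OK;
}
```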

On Fri, Sep 19, 2014 at 2:31 AM, wrote:

> This can be done easily with the RtlCopyMemory API. You just need to
> provide a destination address, a source address, and the number of bytes to
> be copied from the source to the destination.
>
> Don’t forget to use exception handling around user-mode buffers, or the copy
> may end in a bugcheck.
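
A minimal sketch of that advice (not from the thread; the function name `CopyFrameToUser` is hypothetical, and it assumes PASSIVE_LEVEL in the context of the requesting process):

```c
/* Copy into a user-mode buffer from kernel mode: probe the range and
 * wrap the access in SEH so a bad pointer raises an exception that we
 * catch, instead of bugchecking the machine. */
#include <ntddk.h>

NTSTATUS CopyFrameToUser(PVOID UserBuffer, const VOID *Frame, SIZE_T Length)
{
    NTSTATUS status = STATUS_SUCCESS;

    __try {
        /* Verify the range really is writable user-mode memory. */
        ProbeForWrite(UserBuffer, Length, sizeof(UCHAR));
        RtlCopyMemory(UserBuffer, Frame, Length);
    }
    __except (EXCEPTION_EXECUTE_HANDLER) {
        status = GetExceptionCode();  /* e.g. STATUS_ACCESS_VIOLATION */
    }
    return status;
}
```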
>
> Best regards.
>
> —
> NTDEV is sponsored by OSR
>
> Visit the list at: http://www.osronline.com/showlists.cfm?list=ntdev
>
> OSR is HIRING!! See http://www.osr.com/careers
>
> For our schedule of WDF, WDM, debugging and other seminars visit:
> http://www.osr.com/seminars
>
> To unsubscribe, visit the List Server section of OSR Online at
> http://www.osronline.com/page.cfm?name=ListServer
>

Miriam Engel wrote:

I’m using UVC drivers via Media Foundation.
However, I need to optimize my buffer copying: I want to send my
buffer to the UVC driver and let it put the new frame in the
memory I provided, instead of having it land in the memory the UVC
driver allocated and then copying it to the user’s buffer.
Is this possible? How can I do this?

There will always be one buffer copy within usbvideo.sys. There are
three reasons for this. First, the USB interface to the camera deals in
sets of USB packets, not in frames. The sizes are different. Second,
the UVC protocol puts a header at the beginning of every packet that is
not part of the frame. That has to be stripped out. Third, usbvideo
has to keep the camera fed with empty buffers at all times, whether you
are ready to receive them or not. Thus, like virtually every USB camera
driver, usbvideo allocates a pool of packet buffers to circulate down to
the camera, and copies the data to the frame buffers that have been
provided from above.

The Media Foundation memory manager will do its darnedest to make sure
there isn’t ANOTHER copy in there. So, if your transform has an
allocator, your buffers will be sent down to usbvideo.sys.
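
As a rough illustration of the allocator hookup described here (a sketch, not code from the thread; error paths are abbreviated and the pool size of four is arbitrary), a component can create a standard MF video sample allocator and pre-allocate samples matching the negotiated media type:

```cpp
// Sketch only: create a Media Foundation video sample allocator and
// initialize a small pool of samples for the pipeline to circulate.
#include <mfapi.h>
#include <mfidl.h>

HRESULT CreateSamplePool(IMFMediaType *pMediaType,
                         IMFVideoSampleAllocator **ppAllocator)
{
    IMFVideoSampleAllocator *pAllocator = NULL;
    HRESULT hr = MFCreateVideoSampleAllocatorEx(IID_PPV_ARGS(&pAllocator));
    if (SUCCEEDED(hr)) {
        // Request four samples of the negotiated output type.
        hr = pAllocator->InitializeSampleAllocator(4, pMediaType);
    }
    if (SUCCEEDED(hr)) {
        *ppAllocator = pAllocator;   // caller releases
    } else if (pAllocator) {
        pAllocator->Release();
    }
    return hr;
}
```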


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Hi, thanks Tim. As always, your answers are precise and helpful.
I was searching around reading about the transform filter allocators, as
well as MF allocators in general.

I saw various allocator APIs such as IMFVideoSampleAllocator and IMemAllocator,
and I read about the samples and allocators here
http:
and about setting transform allocators here
http:
and have a few questions:
1. I do not need a transform filter; I use the source filter directly in
order to get the frames and perform some computations.
I understand IMemAllocator is about the allocators between the
different filters' input and output pins. Is there any way I can add such an
allocator to the source filter, or is it mandatory to have an additional
filter?
2. I couldn’t find any documentation about MF having this optimized. If
I provide a transform with an allocator, is that documented anywhere? Will it
also prevent redundant allocations in USBVideo.sys, since it uses my allocator?
Which API were you referring to? Will the same optimization also apply to a
different filter, not the transform?

thanks a lot!

On Fri, Sep 19, 2014 at 7:18 PM, Tim Roberts wrote:


NtDev mm wrote:

  1. I do not need a transform filter; I use the source filter directly
    in order to get the frames and perform some computations.
    I understand IMemAllocator is about the allocators between the
    different filters' input and output pins. Is there any way I can add
    such an allocator to the source filter, or is it mandatory to have an
    additional filter?

It’s a cost vs benefit tradeoff. Here, essentially, are your alternatives.

The easiest alternative is to set up a DShow graph or MF topology that
instantiates your capture device as a filter and connects it to a dummy
“bitmap renderer” that simply calls back into your application for every
frame it receives. I wrote a bitmap renderer for DirectShow in 350
lines of code, and it’s one of the most useful DShow filters I ever
wrote. It essentially replaces the old ISampleGrabber and null renderer
combination. The same thing could be written as a Media Foundation
sink. The huge advantage of this is that you inherit the timing,
threading, and memory management of the framework, without a lot of
overhead. You don’t have to reinvent that.
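
The Media Foundation analogue mentioned here can be built on the stock sample-grabber sink. A rough sketch (not Tim's actual filter; `FrameGrabber` and `HandleFrame` are hypothetical names, and most of the required COM plumbing is omitted):

```cpp
// Sketch only: a Media Foundation sample-grabber sink callback.  The
// topology delivers every frame to OnProcessSample.
#include <mfidl.h>

class FrameGrabber : public IMFSampleGrabberSinkCallback
{
public:
    STDMETHODIMP OnProcessSample(REFGUID majorType, DWORD flags,
                                 LONGLONG sampleTime, LONGLONG duration,
                                 const BYTE *buffer, DWORD size)
    {
        HandleFrame(buffer, size, sampleTime);  // hypothetical app hook
        return S_OK;
    }
    // The remaining IMFSampleGrabberSinkCallback, IMFClockStateSink,
    // and IUnknown methods must also be implemented (omitted here).
private:
    void HandleFrame(const BYTE *data, DWORD size, LONGLONG time);
};
// The sink is created with MFCreateSampleGrabberSinkActivate(pMediaType,
// pCallback, &pActivate) and wired into the topology like any other sink.
```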

The next alternative is to instantiate your capture device as a filter
and talk directly to it, as if you were the graph manager or the
topology manager, without creating or starting a graph. That’s
possible, but now you have lost all of the timing, threading, and memory
management that the frameworks provide. You need to talk to the proxies
exactly like the frameworks would have, and the exact details of that
interaction are not documented.

The final alternative is to talk directly to the Kernel Streaming driver
from user mode. That’s obviously possible, since ksproxy does it.
There is an SDK sample called “DirectKS” that demonstrates this
technique. However, the sample is an audio application, as are
essentially all DirectKS-derived samples on the web. The video
interface is much more complicated. I have never found a case where
DirectKS was justified for video.

  2. I couldn’t find any documentation about MF having this optimized.
    If I provide a transform with an allocator, is that documented
    anywhere? Will it also prevent redundant allocations in USBVideo.sys,
    since it uses my allocator? Which API were you referring to? Will the
    same optimization also apply to a different filter, not the transform?

Why would you want a custom allocator? What can your memory allocator
do that the standard one can’t?

Like most capture drivers, USBVideo.sys never allocates frame buffers.
It merely fills in the buffers that are handed to it from above. It
does allocate packet buffers to circulate to the USB endpoint, but you
can’t avoid that.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.