Filter Driver to monitor calls into video miniport driver in XPDM???

Hello All,

I would like to develop a filter driver that monitors all entry calls into a video miniport driver on an XP machine. Is there any way to intercept the VideoPortInitialize call and modify the PVIDEO_HW_INITIALIZATION_DATA pointer in a filter driver?
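
For reference, what I would like to see or modify from the filter is roughly what a miniport hands to VideoPortInitialize in its DriverEntry. A minimal sketch (the callback names are just placeholders):

#include <dderror.h>
#include <devioctl.h>
#include <miniport.h>
#include <ntddvdeo.h>
#include <video.h>

/* Placeholder miniport callbacks; a real driver implements these. */
VP_STATUS HwVidFindAdapter(PVOID HwDeviceExtension, PVOID HwContext,
                           PWSTR ArgumentString,
                           PVIDEO_PORT_CONFIG_INFO ConfigInfo, PUCHAR Again);
BOOLEAN   HwVidInitialize(PVOID HwDeviceExtension);
BOOLEAN   HwVidStartIO(PVOID HwDeviceExtension, PVIDEO_REQUEST_PACKET Packet);

ULONG DriverEntry(PVOID Context1, PVOID Context2)
{
    VIDEO_HW_INITIALIZATION_DATA hwInitData;

    VideoPortZeroMemory(&hwInitData, sizeof(hwInitData));
    hwInitData.HwInitDataSize        = sizeof(VIDEO_HW_INITIALIZATION_DATA);
    hwInitData.HwFindAdapter         = HwVidFindAdapter;
    hwInitData.HwInitialize          = HwVidInitialize;
    hwInitData.HwStartIO             = HwVidStartIO;
    hwInitData.HwDeviceExtensionSize = 0;      /* real drivers allocate one */
    hwInitData.AdapterInterfaceType  = PCIBus;

    /* This is the call I would like to intercept from a filter. */
    return VideoPortInitialize(Context1, Context2, &hwInitData, NULL);
}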

Your help is highly appreciated.

Comet48



That is not really a filterable interface. What are you actually
trying to accomplish with this filter?

Mark Roddy


Thank you for the reply.

I am trying to study the display miniport driver interface to explore the possibility of adding a virtual child display device. The virtual display device should be able to take advantage of GPU power (especially DirectX h/w acceleration) in the graphics card to draw the virtual frame buffer.

I understand that the DDK recommendation for frame buffer virtualization is to develop a mirror driver or a display/video miniport driver pair. But in either case, the frame buffer will be drawn either by GDI or by software-assisted drawing routines in the display driver, and all of that is a huge burden on the CPU.
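
For example, the display driver in such a pair typically just punts every drawing call back to GDI's software renderer. A minimal, untested hook for illustration:

#include <stddef.h>
#include <stdarg.h>
#include <windef.h>
#include <wingdi.h>
#include <winddi.h>

/* Typical frame-buffer-only hook: everything is handed back to GDI,
   which rasterizes on the CPU against the (virtual) frame buffer. */
BOOL APIENTRY DrvBitBlt(SURFOBJ *psoTrg, SURFOBJ *psoSrc, SURFOBJ *psoMask,
                        CLIPOBJ *pco, XLATEOBJ *pxlo,
                        RECTL *prclTrg, POINTL *pptlSrc, POINTL *pptlMask,
                        BRUSHOBJ *pbo, POINTL *pptlBrush, ROP4 rop4)
{
    return EngBitBlt(psoTrg, psoSrc, psoMask, pco, pxlo,
                     prclTrg, pptlSrc, pptlMask, pbo, pptlBrush, rop4);
}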

If the video miniport driver is not filterable, what about the display driver DLL?

comet48

Why do you think that filtering is going to give you some extra power or feature?
You can write a virtual XPDM device.
All you have to do is create a miniport that uses videoprt.sys and the matching display-driver DLL.
That device will show up in Device Manager under the Display class, and you can talk to it using any method allowed for that type of device.
Traditionally, you would use CreateDC(\\.\DisplayXX) and call [Ext]Escape to talk to the display driver in a private fashion.
If you want to expose D3D functionality, you would need to expose the DDraw entry points in the display driver and, from there, work on the capabilities to make it suitable for D3D9 to use.
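
A minimal user-mode sketch of that private channel, assuming the display driver handles a private code in its DrvEscape (the escape number below is made up):

#include <windows.h>
#include <stdio.h>

#define MY_PRIVATE_ESCAPE 0x11001   /* hypothetical code handled by DrvEscape */

int main(void)
{
    DWORD versionOut = 0;

    /* Open a DC on the target display device; adjust the device name. */
    HDC hdc = CreateDCW(NULL, L"\\\\.\\DISPLAY2", NULL, NULL);
    if (hdc == NULL) {
        printf("CreateDC failed: %lu\n", GetLastError());
        return 1;
    }

    /* ExtEscape lands in the display driver's DrvEscape entry point. */
    if (ExtEscape(hdc, MY_PRIVATE_ESCAPE, 0, NULL,
                  sizeof(versionOut), (LPSTR)&versionOut) > 0)
        printf("driver answered: 0x%08lx\n", versionOut);

    DeleteDC(hdc);
    return 0;
}
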
In the days of WDDM, a virtual XPDM device is going to have a shrinking market: it will work on WinXP, but poorly on Vista/Win7, given the mutual incompatibility of the WDDM and XPDM stacks on those OSes.

Thank you for your reply, Ivan.

I have already developed a virtual XPDM device (a video miniport driver and display driver pair) that simply creates a virtual frame buffer, and the OS really does treat it as an extended display device. But as I said, all drawing into the virtual frame buffer or DD surfaces uses CPU power.

This virtual device cannot offer many of the capabilities (including D3D) that a GPU can, which means that many applications will not be able to run on the virtual display.

All I'm trying to do is figure out whether it is possible to use GPU power to draw into the virtual frame buffer, or to create virtual surfaces for the virtual display, in XPDM.

I fail to understand the overall intent of this exercise.
If you are in the business of writing a driver that talks to a GPU, don't you already have the ability to see all of the calls to the display driver and the miniport, by virtue of writing the driver for them?
If you are creating a virtual device, then you are on your own to provide the services requested by your device/device-driver pair.

I can only speculate that you are really writing something that makes a few more monitors appear to the user, diverting those monitors to a USB dongle, a network-based display solution, or the like.

In this scenario, I can see how you would be tempted to have an existing device just show a couple of extra monitors, and then intercept the traffic to and from those devices.
In general, this is hard and unsupported, both in the actual hardware and in the OS and IHV software.
For the hardware part: in general, video cards are configured to have a limited number of regions of video memory that can be fed to the RAMDAC. For example, the majority of commodity cards have 2 scanout sources that can be hooked up to 5 targets, with not all paths allowed at the same time.
Assuming that you fake a couple of scanout sources, how would the real hardware react to that?
For the IHV part: in the XPDM days, the DDraw/D3D part of the XPDM stack was fragile at best. I still think that certain IHVs never understood the difference between session space and system space, and their kernel worker threads spawned off by D3D interactions made the system really unstable. Intercepting all of that will make the system even more fragile.
For the OS part: VideoPrt.sys has a very private hardware enumeration contract with the miniport, and that has little to do with PnP. The reason is that VideoPrt is at the bottom of the device stack for \\.\DisplayX, which is what is used by Win32 and the DDraw stack. Then the binding between dxg.sys and win32k.sys is again private and involved, at best.

The semi-sustainable solution is to have a fully virtual miniport/display-driver pair teamed up with a user-mode renderer process.
The miniport would communicate with the worker process over an out-of-band channel.
The worker process would then use traditional mechanisms to talk to the already-installed GPU to perform rasterization and rendering. Upon completion of rendering, the result would be shared back with the virtual display device, and the miniport could complete the relevant operation that triggered the rasterization.
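
Roughly, the worker process would spin on something like this (the escape codes and the RENDER_REQUEST layout are invented for this sketch; a real design would block on a shared event instead of polling):

#include <windows.h>

#define ESC_DEQUEUE_WORK   0x20001   /* hypothetical private escape codes */
#define ESC_SUBMIT_RESULT  0x20002

typedef struct _RENDER_REQUEST {
    RECT  Dirty;        /* region the virtual device wants rendered */
    DWORD FrameId;
} RENDER_REQUEST;

int main(void)
{
    /* DC on the virtual display device; the name depends on your setup. */
    HDC hdcVirtual = CreateDCW(NULL, L"\\\\.\\DISPLAY3", NULL, NULL);
    if (hdcVirtual == NULL)
        return 1;

    for (;;) {
        RENDER_REQUEST req;

        /* Pull the next request from the virtual display driver. */
        if (ExtEscape(hdcVirtual, ESC_DEQUEUE_WORK, 0, NULL,
                      sizeof(req), (LPSTR)&req) <= 0) {
            Sleep(5);
            continue;
        }

        /* Render req.Dirty on the real GPU (DDraw/D3D on the primary
           adapter), then hand the finished bits back. Rendering omitted. */

        ExtEscape(hdcVirtual, ESC_SUBMIT_RESULT,
                  sizeof(req), (LPCSTR)&req, 0, NULL);
    }
}
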
The major problems I can see with all of this are latency of operations, synchronization and lock inversions.
Most of the rendering stack in win32k is expected to run synchronously, top to bottom. The rendering happens while system-wide locks are held, and those locks will not be released until your operation completes. Since you cannot do rendering on the real GPU while your virtual GPU is being used, you'll have to fake successful completion.
That implies that you can never get a correct screen capture, because you have to fake success while you are deferring the work to the renderer process.

If you were ever tempted to issue DDraw/D3D rendering from kernel mode, forwarding from the virtual GPU to the real GPU, again, this has serious locking and re-entrancy implications.

I'm afraid that CPU-based rendering is about as good as this exercise is going to get.

xxxxx@hotmail.com wrote:

> I am trying to study the display miniport driver interface to explore the possibility of adding a virtual child display device. The virtual display device should be able to take advantage of GPU power (especially DirectX h/w acceleration) in the graphics card to draw the virtual frame buffer.

To what end? The GPU cannot draw into main memory, so it wouldn’t be a
“virtual frame buffer”, it would be a REAL frame buffer. If your goal
is to echo the current desktop on another machine, like a mirror driver,
the display driver already has access to the real desktop. It could
just copy the bits.
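
In user mode, "just copy the bits" is literally a single BitBlt from the
screen DC. A rough sketch, minus error handling:

#include <windows.h>

/* Grab the current desktop into a top-down 32bpp DIB section.
   A mirror driver does the equivalent in kernel mode, but the idea is the same. */
HBITMAP CaptureDesktop(void)
{
    HDC hdcScreen = GetDC(NULL);
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);

    BITMAPINFO bmi;
    ZeroMemory(&bmi, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth       = w;
    bmi.bmiHeader.biHeight      = -h;       /* negative height = top-down */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void    *bits = NULL;
    HBITMAP  hbm  = CreateDIBSection(hdcScreen, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);

    HDC     hdcMem = CreateCompatibleDC(hdcScreen);
    HGDIOBJ old    = SelectObject(hdcMem, hbm);
    BitBlt(hdcMem, 0, 0, w, h, hdcScreen, 0, 0, SRCCOPY);   /* copy the bits */
    SelectObject(hdcMem, old);

    DeleteDC(hdcMem);
    ReleaseDC(NULL, hdcScreen);
    return hbm;    /* caller owns the bitmap */
}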

Your idea has intrigued me, but I’m not sure I see the application yet.

> I understand that the DDK recommendation for frame buffer virtualization is to develop a mirror driver or a display/video miniport driver pair. But in either case, the frame buffer will be drawn either by GDI or by software-assisted drawing routines in the display driver, and all of that is a huge burden on the CPU.

I think you underestimate the power of your CPU. Look at the CPU
utilization when VNC is running – the burden is not significant.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Depends on what is to be rendered.

Well, VNC is 2D, which is more of a memory bandwidth problem, provided that the CPU is capable enough. 3D rendering, on the other hand, is very different: one certainly won't want the CPU to decompose and render millions of polygons per second.

Thank you all for sharing your thoughts, especially Ivan for the comprehensive response.

As you mentioned, the application I'm thinking of is a network-based display solution, and the target graphics display device does not have the full-blown capability of a real GPU. In this case, many 2D operations (stretching, shrinking, FOURCC color conversion, …) have to be processed on the very same CPU. Not to mention the FB compression that I may have to do before transmitting across the network.

Also, 3D games and applications require 3D processing, as Calvin mentioned, and all of this will burn a big chunk of the processor power.
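
Just to illustrate the kind of per-pixel work that ends up on the CPU, even a trivial nearest-neighbour stretch of a 32bpp buffer is a loop over every destination pixel (illustrative only, not code from my driver):

#include <stdint.h>

/* Nearest-neighbour stretch of a 32bpp frame buffer: one read and one
   write per destination pixel, all on the CPU. */
void StretchNearest32(const uint32_t *src, int srcW, int srcH,
                      uint32_t *dst, int dstW, int dstH)
{
    for (int y = 0; y < dstH; y++) {
        int sy = y * srcH / dstH;
        for (int x = 0; x < dstW; x++) {
            int sx = x * srcW / dstW;
            dst[y * dstW + x] = src[sy * srcW + sx];
        }
    }
}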