KMDF: how to return STATUS_MORE_PROCESSING_REQUIRED?

Hi,

I’ve successfully written a filter driver using KMDF that is installed as an upper filter in the HID stack of a specific device.
One of the features I’d like to implement is the ability to drop/hide reports coming from the device that match some rule. So far I’ve only implemented the ability to modify the content of these reports, by installing a completion routine on every read request coming down the stack. Since I have no control over a message pump (the I/O manager requests coming from user space are the message pump), the only place I know of to filter the reports in my driver is the completion routine, and the only way I can see to keep such a request from completing back to the I/O manager is to stop completion there.
In the WDM world I’d install a completion routine and, if the report is to be dropped, return STATUS_MORE_PROCESSING_REQUIRED and resubmit that IRP to the lower drivers so they complete it again with some other data.
As far as I understand this is not possible in KMDF, is it? Is what I want to achieve possible at all using the framework?

Thank you in advance,
Stra

KMDF always returns SMPR from the underlying WDM completion routine. Note that the KMDF completion routine signature has a void return; if you want to complete a request from a completion routine, you need to do that explicitly with an API call.

So, to resend the request down the stack, all you need to do is format it with the current type (WdfRequestFormatRequestUsingCurrentType) and then send it.
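
To make that concrete, here is a minimal sketch of the pattern, assuming an upper filter that forwards reads to the local I/O target and decides in the completion routine whether to resend or complete. The routine name FilterEvtReadComplete and the ReportMatchesDropRule() check are hypothetical:

    #include <ntddk.h>
    #include <wdf.h>

    BOOLEAN ReportMatchesDropRule(PWDF_REQUEST_COMPLETION_PARAMS Params); // hypothetical rule check

    // Completion routine set with WdfRequestSetCompletionRoutine() when the
    // read was forwarded. KMDF has already returned SMPR to WDM on our
    // behalf, so the request stays with us until we explicitly complete it
    // or send it down again.
    VOID
    FilterEvtReadComplete(
        _In_ WDFREQUEST Request,
        _In_ WDFIOTARGET Target,
        _In_ PWDF_REQUEST_COMPLETION_PARAMS Params,
        _In_ WDFCONTEXT Context
        )
    {
        UNREFERENCED_PARAMETER(Context);

        if (NT_SUCCESS(Params->IoStatus.Status) && ReportMatchesDropRule(Params)) {
            // Hide this report: reformat the request for the lower target
            // and send it down again instead of completing it.
            WdfRequestFormatRequestUsingCurrentType(Request);
            WdfRequestSetCompletionRoutine(Request, FilterEvtReadComplete, NULL);
            if (WdfRequestSend(Request, Target, NULL)) {
                return;                        // the request is in flight again
            }
            // The resend failed; complete with the failure status.
            WdfRequestComplete(Request, WdfRequestGetStatus(Request));
            return;
        }

        // The report passes the filter (or the read failed): complete it
        // back toward the I/O manager with the lower driver's results.
        WdfRequestCompleteWithInformation(Request,
                                          Params->IoStatus.Status,
                                          Params->IoStatus.Information);
    }

The read dispatch path would have set this routine with WdfRequestSetCompletionRoutine before the original WdfRequestSend, so every read that comes down the stack passes through it on the way back up.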

d

Sent from my phone with no t9, all spilling mistakes are not intentional.


Doesn’t SMPR for a request you did not originate amount to a memory leak?

No. The classic example is a USB driver that bumps the IRP back down to USBD and returns SMPR until the entire transfer has taken place. Eventually it lets the completion return to the upper-level driver, but it does not want the IRP to complete above the current driver until the whole transaction has finished. I’m referring to the classic PnP USB bulk driver that comes with the DDK samples, but it would work the same way for a KMDF driver.

Also note the trick used on the PnP path, where the completion routine does a KeSetEvent to wake the original thread (which blocked in KeWaitForSingleObject after getting STATUS_PENDING back) while the completion itself happens on a separate thread; the completion routine must return SMPR to prevent premature completion of the IRP.
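
For reference, a minimal sketch of that send-down-and-wait trick in WDM, assuming a simple filter forwarding something like IRP_MN_START_DEVICE to its lower device object (the routine names are hypothetical):

    #include <ntddk.h>

    // Completion routine: signal the waiting dispatch thread and keep the
    // IRP from completing past this driver for now.
    NTSTATUS
    SyncForwardCompletion(PDEVICE_OBJECT DeviceObject, PIRP Irp, PVOID Context)
    {
        UNREFERENCED_PARAMETER(DeviceObject);
        UNREFERENCED_PARAMETER(Irp);
        KeSetEvent((PKEVENT)Context, IO_NO_INCREMENT, FALSE);
        return STATUS_MORE_PROCESSING_REQUIRED;
    }

    // Dispatch side: forward the IRP synchronously and wait for it.
    NTSTATUS
    SyncForwardIrp(PDEVICE_OBJECT LowerDevice, PIRP Irp)
    {
        KEVENT   event;
        NTSTATUS status;

        KeInitializeEvent(&event, NotificationEvent, FALSE);
        IoCopyCurrentIrpStackLocationToNext(Irp);
        IoSetCompletionRoutine(Irp, SyncForwardCompletion, &event, TRUE, TRUE, TRUE);

        status = IoCallDriver(LowerDevice, Irp);
        if (status == STATUS_PENDING) {
            KeWaitForSingleObject(&event, Executive, KernelMode, FALSE, NULL);
            status = Irp->IoStatus.Status;
        }
        // The IRP still belongs to this driver here: the caller finishes its
        // own processing and then calls IoCompleteRequest (or IoFreeIrp if
        // it is the Originator of the IRP).
        return status;
    }
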
joe


xxxxx@yahoo.com wrote:

Doesn’t SMPR for a request you did not originate amount to a memory leak?

Not as long as you eventually get around to completing it later. One
can imagine circumstances where you might need to perform additional
processing on an IRP that has already been processed by a lower driver
before allowing it to finally complete.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

> Doesn’t SMPR for a request you did not originate amount to a memory leak?

No.

Returning SMPR just means that the IRP still belongs to your driver.

Some rules on this:

  • an IRP is originated by its Originator and, during its lifetime, belongs to some Owner; the Owner changes over the lifetime of the IRP.

  • the Originator can be your own driver, which is the case when you use the IoAllocateIrp or IoBuildAsynchronousFsdRequest routines.

  • or the Originator can be the IO manager itself. This is so for IRPs created to execute ZwRead/Write/DeviceIoControlFile, including the syscalls from user mode. It is also so for IRPs created by a driver using IoBuildSynchronousFsdRequest or IoBuildDeviceIoControlRequest.

  • the rule is - only the Originator can (and must) destroy the IRP at the end of its lifetime. So, if the IO manager is the Originator, no other code may destroy the IRP (IoFreeIrp). IO-manager-originated IRPs must eventually arrive back at the IO manager and are properly destroyed in the IopCompleteRequest routine (usually queued as a special kernel APC).

  • for your-driver-originated IRPs, your driver must call IoFreeIrp on them sooner or later.

Now about IRP Owners.

  • only the current Owner of an IRP can touch it in any way.
  • an IoCallDriver call transfers ownership of the IRP to the driver specified in the call’s parameter and delivers the IRP to the new Owner’s dispatch routine. So, after IoCallDriver, your code is no longer the Owner and cannot touch this IRP in any way. In other words, IoCallDriver moves the IRP “down”.
  • when the IRP arrives at your dispatch routine, your code becomes the Owner of the IRP and can touch it in any way it wants (except IoFreeIrp - you must be the Originator to call that).
  • when the IRP arrives at your dispatch routine, you are usually not its Originator (barring the dirty hack of calling your own dispatch routine as an ordinary function). So you cannot destroy the IRP. Instead, you must eventually return it back “up” to the previous Owner.
  • an IoCompleteRequest call transfers the IRP back to the previous Owner, delivering it to that Owner’s completion routine. It moves the IRP “up”.
  • you must always call IoCompleteRequest sooner or later for any IRP that arrived at your dispatch routine.
  • if the completion routine returns STATUS_SUCCESS, IoCompleteRequest continues to transfer the IRP one more level “up” and your code ceases to be the Owner. In this case, you must execute the “if( Irp->PendingReturned ) IoMarkIrpPending(Irp);” magic mantra in the completion routine.
  • if the completion routine returns SMPR, IoCompleteRequest is aborted and your driver remains the Owner of the IRP.
  • if the driver is the Originator of the IRP, then it is absolutely OK to IoFreeIrp it in the completion routine and return SMPR.
  • there are only 2 allowed return codes for a completion routine - success and SMPR. Consider it a BOOLEAN function :)
  • it is absolutely safe to put the IRP on some queue (or into a similar context) in the completion routine and then return SMPR. Later, you can either call IoCompleteRequest on it, returning it “one level up”, or call IoCallDriver once more, sending it one level down for a second stage of processing. In the latter case, sooner or later in one of the subsequent completion routine calls you must either return the IRP “up” by returning success (if you are not the Originator of the IRP) or IoFreeIrp it (if you are the Originator). A sketch of this pattern appears right after this list.
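
As an illustration of the last rule, here is a minimal WDM sketch of parking a completed IRP and later either passing it up or resending it. The FILTER_EXTENSION layout and routine names are hypothetical, and the sketch assumes the lower driver pended the read (typical for HID input reports), so the dispatch routine above has already returned STATUS_PENDING:

    #include <ntddk.h>

    // Hypothetical filter device extension; only the fields used here are
    // shown. HeldIrps and HeldIrpsLock are assumed to be initialized in
    // AddDevice.
    typedef struct _FILTER_EXTENSION {
        PDEVICE_OBJECT LowerDevice;
        LIST_ENTRY     HeldIrps;
        KSPIN_LOCK     HeldIrpsLock;
    } FILTER_EXTENSION, *PFILTER_EXTENSION;

    // Completion routine: propagate the pending flag, park the IRP and keep
    // ownership of it by returning SMPR. Tail.Overlay.ListEntry is available
    // to the driver that currently owns the IRP for exactly this kind of
    // queuing.
    NTSTATUS
    FilterReadCompletion(PDEVICE_OBJECT DeviceObject, PIRP Irp, PVOID Context)
    {
        PFILTER_EXTENSION ext = (PFILTER_EXTENSION)Context;

        UNREFERENCED_PARAMETER(DeviceObject);

        if (Irp->PendingReturned) {
            IoMarkIrpPending(Irp);  // it may be completed later from another thread
        }

        ExInterlockedInsertTailList(&ext->HeldIrps,
                                    &Irp->Tail.Overlay.ListEntry,
                                    &ext->HeldIrpsLock);
        return STATUS_MORE_PROCESSING_REQUIRED;
    }

    // Later, from a work item or similar: either return the IRP one level up ...
    VOID
    FilterPassUp(PIRP Irp)
    {
        IoCompleteRequest(Irp, IO_NO_INCREMENT);
    }

    // ... or send it down again for another round of processing.
    NTSTATUS
    FilterResend(PFILTER_EXTENSION Ext, PIRP Irp)
    {
        IoCopyCurrentIrpStackLocationToNext(Irp);
        IoSetCompletionRoutine(Irp, FilterReadCompletion, Ext, TRUE, TRUE, TRUE);
        return IoCallDriver(Ext->LowerDevice, Irp);
    }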

The rules of pending status are another song. They are:

  • you can return a non-pending status from the dispatch routine only if you have already called IoCompleteRequest somewhere on the dispatch path, and the returned status value MUST be the same as the one placed in Irp->IoStatus.Status before calling IoCompleteRequest.
  • if the dispatch path has not called IoCompleteRequest, then it either a) pended the IRP into some queue, structure field, or the like, or b) passed the IRP further down using IoCallDriver.
  • in the latter case, it is OK to use “return IoCallDriver” in the dispatch routine.
  • in the former case, you must call IoMarkIrpPending on the IRP and return STATUS_PENDING from the dispatch routine. Moreover, IoMarkIrpPending must be called before the IRP is delivered to a context where it can be asynchronously picked up and completed - i.e. before it is put on a queue from which a DPC can pick it up.
  • you cannot use IoMarkIrpPending without returning STATUS_PENDING, and vice versa.
  • it is safe to use:
    IoMarkIrpPending(Irp);
    IoCompleteRequest(Irp, …);
    return STATUS_PENDING;
    or:
    IoMarkIrpPending(Irp);
    (VOID)IoCallDriver(LowerDevice, Irp);
    return STATUS_PENDING;
  • but this switches off an important optimization in the IO manager and is not recommended.
  • it is NOT safe to use:
    IoMarkIrpPending(Irp);
    return IoCallDriver…
  • if the completion routine returns STATUS_SUCCESS, then it must contain the code:
    if( Irp->PendingReturned )
        IoMarkIrpPending(Irp);
  • this is just a magic mantra. To understand why it is needed, you can reverse engineer IoCompleteRequest and IopSynchronousServiceTail (the wrapper around the initial IoCallDriver called by the IO manager, used in NtRead/Write/DeviceIoControlFile). If you do not want to do any reverse engineering, then think about the case:
    IoSetCompletionRoutine(Irp, DriverACompletion, …);
    return IoCallDriver(DeviceOfDriverB, Irp);
    where driver B does IoMarkIrpPending/return STATUS_PENDING and later completes the IRP. Driver A will also return STATUS_PENDING (propagated from IoCallDriver and driver B), but there is no agent to set the “pending” flag in driver A’s stack location. The mantra in DriverACompletion fixes this issue. A sketch of this forward-and-complete pattern appears right after this list.
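
A minimal sketch of that forward-and-complete pattern with the mantra in place (DriverACompletion, DriverADispatchRead and the DRIVER_A_EXTENSION layout are hypothetical):

    #include <ntddk.h>

    typedef struct _DRIVER_A_EXTENSION {   // hypothetical extension layout
        PDEVICE_OBJECT LowerDevice;
    } DRIVER_A_EXTENSION, *PDRIVER_A_EXTENSION;

    // Completion routine for driver A: propagate the pending flag into A's
    // own stack location, then let completion continue up the stack.
    NTSTATUS
    DriverACompletion(PDEVICE_OBJECT DeviceObject, PIRP Irp, PVOID Context)
    {
        UNREFERENCED_PARAMETER(DeviceObject);
        UNREFERENCED_PARAMETER(Context);

        if (Irp->PendingReturned) {
            IoMarkIrpPending(Irp);         // the "magic mantra"
        }
        return STATUS_SUCCESS;             // a.k.a. STATUS_CONTINUE_COMPLETION
    }

    // Dispatch routine for driver A: forward the IRP and let driver B's
    // status (possibly STATUS_PENDING) flow straight back to A's caller.
    NTSTATUS
    DriverADispatchRead(PDEVICE_OBJECT DeviceObject, PIRP Irp)
    {
        PDRIVER_A_EXTENSION ext = (PDRIVER_A_EXTENSION)DeviceObject->DeviceExtension;

        IoCopyCurrentIrpStackLocationToNext(Irp);
        IoSetCompletionRoutine(Irp, DriverACompletion, NULL, TRUE, TRUE, TRUE);
        return IoCallDriver(ext->LowerDevice, Irp);
    }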

Also note that most fields of struct _IRP are read-only for direct access as values (you can use functions/macros like IoMarkIrpPending instead), except ->IoStatus, and ->IoStatus is used only on the IoCompleteRequest path. So you must fill it in before calling IoCompleteRequest, but there is no requirement to do this at some earlier point - you are always allowed to fill it in immediately before IoCompleteRequest. You can also switch Irp->MdlAddress to an MDL of your own, but then, before transferring the IRP back up the stack, you must restore the original value of Irp->MdlAddress.
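
A minimal sketch of the Irp->MdlAddress substitution mentioned above. SendWithSubstituteMdl and RestoreMdlCompletion are hypothetical names, the substitute MDL is assumed to have been built with MmBuildMdlForNonPagedPool, and freeing the buffer it describes is left out:

    #include <ntddk.h>

    // Completion routine: put the caller's original MDL (passed as the
    // completion context) back before the IRP travels up the stack.
    NTSTATUS
    RestoreMdlCompletion(PDEVICE_OBJECT DeviceObject, PIRP Irp, PVOID Context)
    {
        PMDL substituteMdl = Irp->MdlAddress;

        UNREFERENCED_PARAMETER(DeviceObject);

        // ... inspect or copy the data described by substituteMdl here ...

        Irp->MdlAddress = (PMDL)Context;   // restore the original MDL
        IoFreeMdl(substituteMdl);          // its nonpaged buffer is freed elsewhere

        if (Irp->PendingReturned) {
            IoMarkIrpPending(Irp);
        }
        return STATUS_SUCCESS;
    }

    // Swap in our own MDL for the trip down the stack, remembering the
    // original in the completion context.
    NTSTATUS
    SendWithSubstituteMdl(PDEVICE_OBJECT LowerDevice, PIRP Irp, PMDL SubstituteMdl)
    {
        PMDL originalMdl = Irp->MdlAddress;

        Irp->MdlAddress = SubstituteMdl;
        IoCopyCurrentIrpStackLocationToNext(Irp);
        IoSetCompletionRoutine(Irp, RestoreMdlCompletion, originalMdl,
                               TRUE, TRUE, TRUE);
        return IoCallDriver(LowerDevice, Irp);
    }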

Also note that some fields of struct _IRP (such as Tail.Overlay and ApcEnvironment) are undocumented and are used only internally by the IO manager to transfer context from NtRead/WriteFile to IopCompleteRequest. Do not touch them. If you need the file object the IRP is associated with, it is in your stack location, not in Tail.Overlay.OriginalFileObject.

Hope this helps.


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com