OSR Online Lists > ntdev
  Message 1 of 10  
14 Jun 17 17:21
wd
xxxxxx@gmail.com
Join Date: 26 Jan 2015
Posts To This List: 16
Best practice for completing pended UM <-> KM buffer sharing IOCTL

I'm trying to determine the best way to initiate completion of an IRP sent on behalf of a DeviceIoControl() call that shares a user buffer with kernel mode using the METHOD_OUT_DIRECT approach mentioned here on NTDEV. The UM app using this driver may need to change the size of the shared buffer periodically based on some UI selections. Neither the app nor the driver unloads as part of this selection, so I'm wondering what the consensus here is on the best way to complete the current IRP and send a new one with a newly sized buffer. I'm currently just sending the same IOCTL again, whereupon the driver completes the previous one, but this seems a bit kluge-a-rific to me. Any help appreciated.
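
For reference, a minimal user-mode sketch of the kind of call described above. The IOCTL code (IOCTL_SHARE_BUFFER) and the helper name are invented for illustration; the device handle is assumed to have been opened with FILE_FLAG_OVERLAPPED and the OVERLAPPED given its own event:

#include <windows.h>

// Hypothetical IOCTL code; the real driver defines its own.
#define IOCTL_SHARE_BUFFER \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_OUT_DIRECT, FILE_ANY_ACCESS)

// Hand "buffer" to the driver; the driver is expected to pend this request,
// so the call should return FALSE with GetLastError() == ERROR_IO_PENDING.
BOOL ShareBufferWithDriver(HANDLE hDevice, PVOID buffer, DWORD cbBuffer, LPOVERLAPPED ov)
{
    DWORD bytesReturned = 0;

    BOOL ok = DeviceIoControl(hDevice, IOCTL_SHARE_BUFFER,
                              NULL, 0,            // no input buffer
                              buffer, cbBuffer,   // buffer the driver will see via its MDL
                              &bytesReturned, ov);

    if (!ok && GetLastError() == ERROR_IO_PENDING) {
        return TRUE;    // pended in the driver, as intended
    }
    return ok;          // immediate completion or a genuine failure
}

To resize, the app would simply call this again with the new buffer, and the driver would complete the old request when the new one arrives.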
  Message 2 of 10  
14 Jun 17 17:45
Tim Roberts
xxxxxx@probo.com
Join Date: 28 Jan 2005
Posts To This List: 11562
Best practice for completing pended UM <-> KM buffer sharing IOCTL

xxxxx@gmail.com wrote:

> I'm trying to determine the best way to initiate completion of an IRP sent on behalf of a DeviceIoControl() call to share a user buffer with km using the METHOD_OUT_DIRECT mentioned here on NTDEV.
> I'm currently just sending the same IOCTL again whereupon the driver completes the previous one, but this seems a bit kluge-a-rific to me.

You can use CancelIo or CancelIoEx, which force all outstanding requests on that file handle to be canceled. Or, you can have another ioctl that specifically says "cancel the last ioctl", but that's not much better than what you're doing.

--
Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.
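
A rough sketch of the CancelIoEx route described above; hDevice and ovShare are assumed names for the device handle and the OVERLAPPED used on the still-pended buffer-sharing IOCTL:

#include <windows.h>

// Cancel just the pended buffer-sharing I/O, then wait for the cancellation to
// actually finish before freeing or resizing the shared buffer.
void CancelSharedBufferIoctl(HANDLE hDevice, OVERLAPPED *ovShare)
{
    DWORD bytes = 0;

    if (!CancelIoEx(hDevice, ovShare) && GetLastError() != ERROR_NOT_FOUND) {
        return;   // ERROR_NOT_FOUND just means the request already completed
    }

    // The canceled request normally comes back with ERROR_OPERATION_ABORTED.
    GetOverlappedResult(hDevice, ovShare, &bytes, TRUE);
}

CancelIoEx with a NULL OVERLAPPED cancels everything outstanding on the handle, so passing the specific OVERLAPPED is the more surgical option here.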
  Message 3 of 10  
14 Jun 17 21:32
Peter Viscarola (OSR)
xxxxxx@osr.com
Join Date:
Posts To This List: 5913
List Moderator
Best practice for completing pended UM <-> KM buffer sharing IOCTL

I like your present approach.... no kludge there that I can see. Just "Here's a new request to take the place of the previous one." I think that's tight, elegant, and conceptually clear.

Having a separate mechanism to close the previous mapping feels unnecessarily kludgy to me. You're creating unnecessary state. What do you do if you get a second "map" IOCTL without getting an "unmap" first? Ewwww.

Peter
OSR
@OSRDrivers
  Message 4 of 10  
15 Jun 17 14:40
anton bassov
xxxxxx@hotmail.com
Join Date: 16 Jul 2006
Posts To This List: 4356
Best practice for completing pended UM <-> KM buffer sharing IOCTL

> I'm currently just sending the same IOCTL again whereupon the driver completes
> the previous one, but this seems a bit kluge-a-rific to me.

You don't really seem to have any better option in this situation, do you? Once you use METHOD_DIRECT, the MDL that your driver gets with the IOCTL describes the buffer as it happens to be at the moment you send that IOCTL. If the size of your target buffer changes, there is nothing you can do about it without submitting a new request (and, hence, either completing or canceling the outstanding one). Completing the outstanding request seems (at least to me) to be the easiest option in existence.

Anton Bassov
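
For what it's worth, a hedged KMDF sketch of the driver side of this (function and variable names are assumptions, not the OP's code): the MDL handed to the driver describes the user buffer exactly as it was when the IOCTL was issued, and the mapping is only good while that request remains uncompleted.

#include <ntddk.h>
#include <wdf.h>

// Map the shared buffer described by a pended METHOD_OUT_DIRECT request.
NTSTATUS MapSharedBuffer(WDFREQUEST Request, PVOID *SystemVa, size_t *Length)
{
    PMDL mdl = NULL;
    NTSTATUS status = WdfRequestRetrieveOutputWdmMdl(Request, &mdl);

    if (!NT_SUCCESS(status)) {
        return status;
    }

    // Map the locked-down user pages into system address space; this pointer
    // stays usable only until the request is completed or canceled.
    *SystemVa = MmGetSystemAddressForMdlSafe(mdl, NormalPagePriority);
    if (*SystemVa == NULL) {
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    *Length = MmGetMdlByteCount(mdl);
    return STATUS_SUCCESS;
}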
  Message 5 of 10  
15 Jun 17 15:01
Peter Viscarola (OSR)
xxxxxx@osr.com
Join Date:
Posts To This List: 5913
List Moderator
Best practice for completing pended UM <-> KM buffer sharing IOCTL

<quote>
Completing the outstanding request seems (at least to me) to be the easiest option in existence..
</quote>

Somebody... QUICK... check to see if the earth is still spinning!! News flash: Mr. Bassov and I agree TWICE in the same decade. I may faint. :-)

Peter
OSR
@OSRDrivers
  Message 6 of 10  
15 Jun 17 17:38
anton bassov
xxxxxx@hotmail.com
Join Date: 16 Jul 2006
Posts To This List: 4356
Best practice for completing pended UM <-> KM buffer sharing IOCTL

> News flash: Mr. Bassov and I agree TWICE in the same decade.

Actually, I think we have agreed more than twice this year alone.... Please note that all our disagreements normally revolve around only certain topics, namely:

1. Windows vs UNIX/Linux
2. Microkernel vs monolithic kernel
3. Use of managed languages in system-level programming and driver development
4. My trolling of the "PUBLISH YOUR NAME.....(etc)" folks
5. Not-so-obvious (at least to me) benefits of "typedef int INT"-style declarations

OTOH, we tend to agree on quite a few things as well. For example, we seem to have the same opinion of C++ - for both of us this opinion is, apparently, well below a rattlesnake's arse/Dan Kyler's intelligence, whichever of the two happens to be lower. Furthermore, we both laugh at the above-mentioned individual's assertion that "the only proper spinlock implementation ever known to anyone in the observable Universe since the Big Bang" happens to be a tight polling loop of interlocked operations. To make it even more interesting, IIRC, we both dismiss "Prof. Flounder's" claim that a shared buffer between an app and a driver is "an awful programming practice that contradicts the very principles of Windows design" as sensationalist and ridiculous, at least from the technical standpoint.

In other words, we don't seem to be THAT different from one another....

Anton Bassov
  Message 7 of 10  
15 Jun 17 19:30
M M
xxxxxx@hotmail.com
Join Date: 21 Oct 2010
Posts To This List: 738
Best practice for completing pended UM <-> KM buffer sharing IOCTL

My theory is that it was an accident. You read the OP's message before Peter's response and inadvertently agreed with his conclusion. Clearly you won't let that happen again 😉
  Message 8 of 10  
16 Jun 17 18:49
wd
xxxxxx@gmail.com
Join Date: 26 Jan 2015
Posts To This List: 16
Best practice for completing pended UM <-> KM buffer sharing IOCTL

I always wanted to do my part in helping to bestow world peace... who knew...

Now that I'm testing this code with other synchronous IOCTLs (completed immediately), I'm seeing an issue. I was (probably incorrectly) under the impression, based on this admittedly old post from Doron at http://www.osronline.com/showThread.CFM?link=90910 , "Requests are automatically pended for you. You must set/clear the cancel routine if you are going to pend it internally for a long time. Since storage doesn't support i/o cancellation, you have nothing left to do. So yes, it is that simple." that it might be possible to get around creating a manual WDFQUEUE (of depth 1) to forward my buffer-sharing IOCTL to.

In the UM code, I open the device with FILE_FLAG_OVERLAPPED and use a separate event for the pended buffer-sharing IOCTL, distinct from the other IOCTLs that are completed immediately in the driver. The UM code calls GetOverlappedResult(...TRUE) to wait for completion of the immediate IOCTLs (but not for my pended one). After sending my pended IOCTL (for which no call to GetOverlappedResult() is issued), any subsequent attempt to send an immediate (non-pended) IOCTL results in the call to GetOverlappedResult(...TRUE) hanging indefinitely. I imagine it's because all I'm doing in the driver to "pend" my buffer-sharing IOCTL is saving the handle and then setting the cancel routine, as per the 10-year-old post referenced above. In other code I am forwarding requests to a manual queue, and all works fine.

So finally to my question: will forwarding this buffer-sharing IOCTL to a manual queue undo the I/O Manager's mapping, or will it remain valid until the request is dequeued and completed some time later?
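
In case it helps to see it, a sketch of the manual-queue variant being asked about (the IOCTL code, context layout, and accessor name are assumptions). Forwarding to a manual queue does not undo anything the I/O Manager set up; the request's buffers and MDL stay valid until the request is actually completed:

#include <ntddk.h>
#include <wdf.h>

// Assumed device context layout and accessor.
typedef struct _DEVICE_CONTEXT {
    WDFQUEUE SharedBufferQueue;   // manual queue, effectively depth 1 by usage
} DEVICE_CONTEXT, *PDEVICE_CONTEXT;
WDF_DECLARE_CONTEXT_TYPE_WITH_NAME(DEVICE_CONTEXT, GetDeviceContext);

#define IOCTL_SHARE_BUFFER \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_OUT_DIRECT, FILE_ANY_ACCESS)   // hypothetical

VOID EvtIoDeviceControl(WDFQUEUE Queue, WDFREQUEST Request, size_t OutputBufferLength,
                        size_t InputBufferLength, ULONG IoControlCode)
{
    PDEVICE_CONTEXT ctx = GetDeviceContext(WdfIoQueueGetDevice(Queue));

    UNREFERENCED_PARAMETER(OutputBufferLength);
    UNREFERENCED_PARAMETER(InputBufferLength);

    if (IoControlCode == IOCTL_SHARE_BUFFER) {
        // Park the buffer-sharing request in the manual queue; it sits there,
        // auto-cancelable, until we retrieve and complete it later.
        NTSTATUS status = WdfRequestForwardToIoQueue(Request, ctx->SharedBufferQueue);
        if (!NT_SUCCESS(status)) {
            WdfRequestComplete(Request, status);
        }
        return;
    }

    // ... the immediately-completed IOCTLs are handled and completed here ...
    WdfRequestComplete(Request, STATUS_SUCCESS);
}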
  Message 9 of 10  
17 Jun 17 10:29
Peter Viscarola (OSR)
xxxxxx@osr.com
Join Date:
Posts To This List: 5913
List Moderator
Best practice for completing pended UM <-> KM buffer sharing IOCTL

No... you're pending the request correctly. You do not have to do anything to pend the request other than not complete it... just stashing the handle to the WDFREQUEST object in a variable in (for example) your device context is fine. There is no magic about saving Requests in Queues, other than the auto-cancel handling magic.

If there's an issue with what you're doing, it's almost certainly a user-mode coding issue.... and I can't help you with that one, I'm afraid.

Peter
OSR
@OSRDrivers
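
A minimal sketch of that "just don't complete it" pattern, with a cancel callback added so a dying app doesn't leave the request stranded (the context field, accessor, and function names are assumptions; the locking a real driver needs around the stashed handle is omitted for brevity):

#include <ntddk.h>
#include <wdf.h>

// Assumed context: the single pended buffer-sharing request lives here.
typedef struct _DEVICE_CONTEXT {
    WDFREQUEST PendedShareRequest;
} DEVICE_CONTEXT, *PDEVICE_CONTEXT;
WDF_DECLARE_CONTEXT_TYPE_WITH_NAME(DEVICE_CONTEXT, GetDeviceContext);

EVT_WDF_REQUEST_CANCEL EvtSharedRequestCancel;

VOID EvtSharedRequestCancel(WDFREQUEST Request)
{
    PDEVICE_CONTEXT ctx =
        GetDeviceContext(WdfIoQueueGetDevice(WdfRequestGetIoQueue(Request)));

    ctx->PendedShareRequest = NULL;            // forget it before completing it
    WdfRequestComplete(Request, STATUS_CANCELLED);
}

NTSTATUS PendSharedBufferRequest(PDEVICE_CONTEXT ctx, WDFREQUEST Request)
{
    NTSTATUS status;

    ctx->PendedShareRequest = Request;         // "pending" == simply not completing it

    status = WdfRequestMarkCancelableEx(Request, EvtSharedRequestCancel);
    if (!NT_SUCCESS(status)) {
        // Already canceled: un-stash it and let the caller complete it.
        ctx->PendedShareRequest = NULL;
    }
    return status;
}

When the replacement IOCTL arrives, the driver calls WdfRequestUnmarkCancelable on the stashed request (checking for STATUS_CANCELLED from that call) before completing it and stashing the new one.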
  Message 10 of 10  
17 Jun 17 11:11
Jamey Kirby
xxxxxx@gmail.com
Join Date: 31 Dec 2014
Posts To This List: 240
Best practice for completing pended UM <-> KM buffer sharing IOCTL

I'm with Peter. Your approach sounds fine to me. A new IOCTL with an updated buffer and size, sent down to replace the currently pending IOCTL, seems tight.

On Wed, Jun 14, 2017 at 9:32 PM <xxxxx@osr.com> wrote:

> I like your present approach.... no kludge there that I can see. Just
> "Here's a new request to take the place of the previous one." I think
> that's tight, elegant, and conceptually clear.
>
> Having a separate mechanism to close the previous mapping feels
> unnecessarily kludgy to me. You're creating unnecessary state. What do
> you do if you get a second "map" IOCTL without getting an "unmap" first?
> Ewwww.
>
> Peter

<...excess quoted lines suppressed...>

--