Crash in netbt.sys while monitoring incoming connections from a TDI filter driver

Hello,

In my TDI filter driver (tdiflt.sys), I am replacing the connect event handler and its context with my own when I receive IRP_MJ_INTERNAL_DEVICE_CONTROL/TDI_SET_EVENT_HANDLER. My goal is to be able to do some pre and post processing on incoming connections.

When my connect event handler is called, I call the original connect handler, and then on the acceptIrp returned by the original connect handler I do something like the following:

irp = *acceptIrp;
if (irp->CurrentLocation <= 1) {
    goto contextfree;
}
IoCopyCurrentIrpStackLocationToNext(irp);
IoSetCompletionRoutine(irp,
                       TdiIrpComplete,
                       tdiIrpContext,
                       TRUE,
                       TRUE,
                       TRUE);
IoSetNextIrpStackLocation(irp);

From TdiIrpComplete, I call IoCompleteRequest(tdiIrpContext->tdiIrp, IO_NO_INCREMENT);

This all usually works fine. However, I sporadically get a crash in netbt.sys when I call IoCompleteRequest.

95f9bc04 82a95222 badb0d00 00000004 00000000 nt!KiTrap0E+0x2cf
95f9bc74 87dd6e7e 865340c3 86534008 00000000 nt!KefAcquireSpinLockAtDpcLevel+0x2
95f9bc94 82a95933 00000000 86534008 00c9f1d0 netbt!AcceptCompletionRoutine+0x35
95f9bcdc 8813c02d 87a858a8 8656d428 95f9bd00 nt!IopfCompleteRequest+0x128
95f9bcec 82c31466 8656d428 988fa008 85f9a8a8 tdiflt!TdiIrpComplete+0x25
95f9bd00 82a9aaab 87a858a8 00000000 85f9a8a8 nt!IopProcessWorkItem+0x23
95f9bd50 82c25f64 80000001 bd13a8c9 00000000 nt!ExpWorkerThread+0x10d
95f9bd90 82ace219 82a9a99e 80000001 00000000 nt!PspSystemThreadStartup+0x9e
00000000 00000000 00000000 00000000 00000000 nt!KiThreadStartup+0x19

Any idea what can cause this? It seems that the netbt completion routine is being passed the wrong context and is crashing when it attempts to dereference it?

Searching various samples available on Google, I found a few alternative ways to do this:

  1. Replace the completion routine and context in the acceptIrp with my own and then call the original completion routine from my completion routine.
  2. Create a new accept IRP and set up its stack location.

So, basically, two questions:

  1. Any thoughts on why the method I am using leads to a crash in netbt?
  2. Are the other approaches mentioned above better than the one I am using?

Would appreciate thoughts from experts.

Thanks.
-Prasad

Hello,

Investigating the crash further, the context that is being passed to netbt!AcceptCompletionRoutine seems to be correct. The memory block that the context points to is tagged with the NbL4 tag, which looks like a netbt tag. AcceptCompletionRoutine expects a pointer to one of its own data structures at context + 0x14, and that data structure apparently has a spin lock at offset 0x228. When the crash happens, context + 0x14 contains NULL. Hence, it ends up calling KefAcquireSpinLockAtDpcLevel with a spin lock address of NULL + 0x228 and then crashes in that function at the following instruction:

lock bts dword ptr [ecx],0 ds:0023:00000228=???

Do I need to mark the acceptIRP pending when I return from the connect event callback?

Any thoughts would be much appreciated.

Thanks.
-Prasad

Please post your full code for the TDI connect and IRP completion handlers.

Hi Cristian,

I created a stripped-down version of the inbound connection handling and it crashes at the same place. The crash is now more deterministic. Basically, I wrote a simple Python script that repeatedly opens a connection to port 139 (the NetBT port) on the machine running my filter driver and then closes it. After a few iterations, I get a crash at the same place I mentioned earlier. I tried the same script against other (non-netbt) ports and it doesn't crash there, which makes me believe it's a netbt-specific interoperability issue? I also confirmed that if my completion routine does not defer processing to a work queue item and instead always returns STATUS_CONTINUE_COMPLETION, I don't get a crash.
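For reference, the test does nothing fancier than a tight connect/close loop against port 139. The actual test is a Python script; a roughly equivalent sketch in C with Winsock (the single argument being the address of the machine running the filter) would be:

#include <winsock2.h>
#include <stdio.h>
#include <string.h>

#pragma comment(lib, "ws2_32.lib")

int main(int argc, char **argv)
{
    WSADATA wsaData;
    struct sockaddr_in addr;
    int i;

    if (argc < 2) {
        printf("usage: %s <target-ip>\n", argv[0]);
        return 1;
    }
    if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0) {
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(139);                 /* NetBT session service port */
    addr.sin_addr.s_addr = inet_addr(argv[1]);  /* machine running the TDI filter */

    for (i = 0; i < 1000; i++) {
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (s == INVALID_SOCKET) {
            break;
        }
        /* Connect and immediately tear the connection down; the quick close
           is what appears to race the deferred accept completion. */
        connect(s, (struct sockaddr *)&addr, sizeof(addr));
        closesocket(s);
    }

    WSACleanup();
    return 0;
}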

Here is the complete code for the same.

typedef struct _TDI_IRP_CONTEXT {
    PIRP tdiIrp;
    PIO_WORKITEM workItem;
} TDI_IRP_CONTEXT, *PTDI_IRP_CONTEXT;

typedef struct _TDI_EVENT_CONTEXT {
    // Address file object on which the handler is registered.
    PFILE_OBJECT addressFileobject;
    // Upper layer TDI client supplied handler function
    // to be invoked with its context parameter.
    PTDI_IND_CONNECT connectHandler;
    PVOID connectContext;
} TDI_EVENT_CONTEXT, *PTDI_EVENT_CONTEXT;

static void PostInboundConnect(PDEVICE_OBJECT deviceObject,
                               PVOID context)
{
    PTDI_IRP_CONTEXT tdiIrpContext = NULL;

    ASSERT(KeGetCurrentIrql() == PASSIVE_LEVEL);
    tdiIrpContext = (PTDI_IRP_CONTEXT)(context);
    ASSERT(NULL != tdiIrpContext);

    if (NULL != tdiIrpContext->workItem) {
        IoFreeWorkItem(tdiIrpContext->workItem);
        tdiIrpContext->workItem = NULL;
        IoCompleteRequest(tdiIrpContext->tdiIrp, IO_NO_INCREMENT);
    }
    ExFreePool(tdiIrpContext);
}

static NTSTATUS
TdiIrpComplete(PDEVICE_OBJECT deviceObject,
               PIRP irp,
               void *context)
{
    PTDI_IRP_CONTEXT tdiIrpContext = NULL;
    NTSTATUS ns = STATUS_CONTINUE_COMPLETION;

    tdiIrpContext = (PTDI_IRP_CONTEXT)(context);

    ASSERT(NULL != deviceObject);
    ASSERT(NULL != tdiIrpContext);
    ASSERT(NULL != irp);

    if (KeGetCurrentIrql() > PASSIVE_LEVEL) {
        tdiIrpContext->workItem = IoAllocateWorkItem(deviceObject);
        if (tdiIrpContext->workItem == NULL) {
            ERROR("Failed to allocate the workitem.");
            goto exit;
        }
        IoQueueWorkItem(tdiIrpContext->workItem,
                        PostInboundConnect,
                        DelayedWorkQueue,
                        tdiIrpContext);
        ns = STATUS_MORE_PROCESSING_REQUIRED;
    } else {
        PostInboundConnect(NULL, tdiIrpContext);
    }
exit:
    return ns;
}

static NTSTATUS
TdiConnectEventHandler(PVOID context,
                       LONG remoteAddressLength,
                       PVOID remoteAddress,
                       LONG userDataLength,
                       PVOID userData,
                       LONG optionsLength,
                       PVOID options,
                       CONNECTION_CONTEXT *connectionContext,
                       PIRP *acceptIrp)
{
    NTSTATUS ns = STATUS_SUCCESS;
    PTDI_EVENT_CONTEXT eventContext = NULL;
    PTDI_IRP_CONTEXT tdiIrpContext = NULL;

    ASSERT(KeGetCurrentIrql() <= DISPATCH_LEVEL);

    eventContext = (PTDI_EVENT_CONTEXT) context;
    ASSERT(NULL != eventContext);
    ASSERT(NULL != eventContext->connectHandler);

    // Call the original connect handler with the original context. This was saved
    // away during IRP_MJ_INTERNAL_DEVICE_CONTROL/TDI_SET_EVENT_HANDLER
    // TDI_EVENT_CONNECT processing.
    ns = eventContext->connectHandler(eventContext->connectContext,
                                      remoteAddressLength,
                                      remoteAddress,
                                      userDataLength,
                                      userData,
                                      optionsLength,
                                      options,
                                      connectionContext,
                                      acceptIrp);
    if ((STATUS_MORE_PROCESSING_REQUIRED == ns) && acceptIrp && *acceptIrp) {
        PIRP irp;
        irp = *acceptIrp;

        if (irp->CurrentLocation <= 1) {
            goto exit;
        }

        tdiIrpContext = ExAllocatePool(NonPagedPool,
                                       sizeof(TDI_IRP_CONTEXT));
        if (NULL == tdiIrpContext) {
            goto exit;
        }
        tdiIrpContext->workItem = NULL;
        tdiIrpContext->tdiIrp = irp;

        IoCopyCurrentIrpStackLocationToNext(irp);
        IoSetCompletionRoutine(irp,
                               TdiIrpComplete,
                               tdiIrpContext,
                               TRUE,
                               TRUE,
                               TRUE);
        IoSetNextIrpStackLocation(irp);
    }
exit:
    return ns;
}

Any help will be greatly appreciated!

Thanks.
-Prasad

Is this the old TDI filter load order problem? Or insufficient stack locations?

IIRC, a TDI filter can only operate properly on AOs (address objects) that it saw being created. It usually builds a private database of these AOs.

NETBT creates some AOs immediately on loading. If your TDI filter loads after NETBT, then it does not have the AO information it needs to filter the early AOs.

Some notes from an old TDI filter…

//
// Handle IRPs With Insufficient I/O Stack Locations
// -------------------------------------------------
// TDI clients determine the number of I/O stack locations that they
// must provide when they open a TDI device. This value is never
// recomputed.
//
// When a TDI filter attaches to a TDI provider device it adds an
// additional layer in the device stack, which in turn requires an
// additional I/O stack location.
//
// If a TDI filter attaches to a TDI provider device AFTER a TDI client
// has opened the device, then the TDI client will provide IRPs with
// insufficient I/O stack locations. The test below will be positive.
//
// The primary example of this case is when this filter is loaded AFTER
// the NetBT TDI client. Certainly this would be the case if the TDI
// filter is loaded dynamically. It can also be the case if the TDI filter
// is loaded automatically - but still not early enough to attach before
// NetBT.
//
// The code below simply skips filter processing for these “bogus” IRPs.
// This allows NetBT to continue to operate. HOWEVER, NetBT IRPs will
// not be filtered.
//
// If it is important to filter NetBT, then there are two solutions:
//
// 1.) Ensure that the filter is loaded before NetBT.
// 2.) Use “repeater” IRPs.
//
if (Irp->CurrentLocation == 1)
{
    KSDBGP(DL_INFO, ("TDIH_DispatchInternalDeviceControl encountered bogus current location\n"));

    IoSkipCurrentIrpStackLocation( Irp );
    RC = IoCallDriver( pLowerDeviceObject, Irp );

    return( RC );
}

Hi Thomas,

I don’t think it’s that problem.

If you check the TdiConnectEventHandler function, I am already checking acceptIrp->CurrentLocation and setting the completion routine only if there are enough stack locations. When I run my test Python script, the crash happens at random times rather than after a specific number of iterations, so I suspect it's tied to concurrency. Also, the issue reproduces more easily on a multiprocessor system than on a single-processor system, which also makes me believe it's tied to concurrency. On a single-processor system, I had to introduce a sleep in the work queue item to reproduce the issue.

Further, as I said before, if I return STATUS_CONTINUE_COMPLETION from my completion routine, I don't run into the crash. If I defer the completion to a work queue item, return STATUS_MORE_PROCESSING_REQUIRED from my completion routine, and then complete the IRP from the work queue item, it results in a crash in netbt's completion routine.

Looking at the crash, it seems like some concurrency issue. Maybe by the time my work queue item calls IoCompleteRequest, the file object has been closed and netbt has released the resources associated with it, and then netbt's completion routine gets called and attempts to access those released resources?

Thanks.
-Prasad

Have you tried running it with Driver Verifier?

I believe the following is true: when a handle is closed, and it is the
last handle referencing the device, the I/O Manager attempts to cancel all
outstanding IRPs before sending the IRP_MJ_CLOSE notification. I believe
this is done precisely to avoid the potential situation you describe.

However, I am not at all sure what happens if you call IoCompleteRequest
on an IRP which has already been completed, so perhaps one of the experts
can clarify this.

One thing that might be interesting: ASSERT(!Irp->Cancelled) [assuming
I’ve remembered the correct name of this field] and see if this produces a
message which is immediately followed by the crash.
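In other words, something like this at the very top of your TdiIrpComplete (I believe the field is actually spelled Irp->Cancel; it is the BOOLEAN the I/O manager sets when IoCancelIrp is called on the IRP):

    // Sketch only: first thing in TdiIrpComplete, before the work-item
    // deferral, to catch the case where the accept IRP was cancelled
    // underneath the deferred completion.
    ASSERT(!irp->Cancel);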

Your analysis is correct; you have some kind of concurrency problem. If
you have to add Sleep calls to make something work, or, in your case, have
to add one to make it fail, then it is definitely a concurrency issue. In
your case, it indicates problems involving PASSIVE_LEVEL are the most
likely cause, but note also that suspending a thread (on a uniprocessor)
might change the timing just enough that an ISR or DPC can mess you over.

Dumping the contents of the IRP might give useful information which might
allow you to write a different ASSERT statement than the one I suggest and
discover a pattern where you get ASSERT-crash and no-ASSERT-no-crash,
which might help track this down.

Otherwise, I can only say I don't know enough about NDIS internals to
suggest more specific actions.
joe

When you get a just-built IRP, there is no current location, and the completion routine may already be set for this level.

The I/O stack location to be used by a lower driver is the next location. If you want to install your own completion routine, you should:

IoSetNextIrpStackLocation(irp);
IoSetCompletionRoutine(irp,
                       TdiIrpComplete,
                       tdiIrpContext,
                       TRUE,
                       TRUE,
                       TRUE);
IoCopyCurrentIrpStackLocationToNext(irp);

I respectfully disagree. IoSetCompletionRoutine sets the completion routine in
the NEXT stack location, so there is no need to advance to the next stack
location just to set a completion routine. Furthermore, the code advances the
stack by one (numerically decreases it), which is dangerous and unnecessary.
It's dangerous because if the stack is not over-allocated and everyone in the
stack (except the bottom driver, of course) tries to use a completion routine,
the last one will corrupt the IRP.
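For the normal case where a driver forwards an IRP it did not create by calling IoCallDriver, the pattern is simply the following (sketch only; "lowerDeviceObject" stands for whatever device the filter is layered over, and TdiIrpComplete/tdiIrpContext are the names from the posted code). The accept IRP handed back from a connect event is a different animal, because the filter returns it rather than calling IoCallDriver on it:

    // Note there is no IoSetNextIrpStackLocation here: IoSetCompletionRoutine
    // already writes into the next stack location, and IoCallDriver is what
    // advances the stack.
    IoCopyCurrentIrpStackLocationToNext(irp);
    IoSetCompletionRoutine(irp,
                           TdiIrpComplete,
                           tdiIrpContext,
                           TRUE,
                           TRUE,
                           TRUE);
    ns = IoCallDriver(lowerDeviceObject, irp);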

Things to watch out for in this case: never call IoMarkIrpPending in the
creator's completion routine if there is no extra stack location allocated for
the topmost driver, because:

  1. There is no I/O stack location left to mark, since IoCompleteRequest does the following
    for each stack location, in order:
    a) fetch the completion routine from the current stack location,
    b) unwind the stack (numerically increase it by one),
    c) THEN invoke the completion routine if there is one.
  2. Doing so makes no sense anyway, since no one else is watching the IRP.

Calvin

Thank you all for the responses!

@Atul, I had already run it with Driver Verifier. It doesn't complain about anything.

@Joseph, since I am deferring the processing to a work queue item from my completion routine, I return STATUS_MORE_PROCESSING_REQUIRED and then call IoCompleteRequest from the work queue item to signal that I am done. I do the same in other places, viz. outbound connections, and it works fine. Also, I am running into issues with inbound connections only with netbt and not with others. I had debugger control at the time of the crash and Irp->Cancel is FALSE, which means the IRP was not cancelled.

@Alex, as I said I am already checking for enough stack locations before setting the completion routine.

@Calvin, I presume that your response is for Alex? Is there anything wrong that you see in the code that I posted?

Thanks.
-Prasad

Try this method in the accept handler:

IRP* req = *AcceptIrp;
IO_STACK_LOCATION* reqSL = IoGetCurrentIrpStackLocation(req);

context->oldRoutine = reqSL->CompletionRoutine;
context->oldContext = reqSL->Context;

reqSL->CompletionRoutine = TdiAcceptComplete;
reqSL->Context = context;
reqSL->Control = SL_INVOKE_ON_SUCCESS | SL_INVOKE_ON_ERROR | SL_INVOKE_ON_CANCEL;

And in the completion routine manually call the previous callback.
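The completion routine would then look roughly like this (a sketch only: MY_ACCEPT_CONTEXT stands in for whatever structure you keep oldRoutine/oldContext in, and a complete version would also honor the saved Control flags rather than always chaining):

typedef struct _MY_ACCEPT_CONTEXT {
    PIO_COMPLETION_ROUTINE oldRoutine;  // saved reqSL->CompletionRoutine
    PVOID                  oldContext;  // saved reqSL->Context
} MY_ACCEPT_CONTEXT, *PMY_ACCEPT_CONTEXT;

static NTSTATUS
TdiAcceptComplete(PDEVICE_OBJECT deviceObject,
                  PIRP irp,
                  PVOID context)
{
    PMY_ACCEPT_CONTEXT ctx = (PMY_ACCEPT_CONTEXT)context;

    // ... the filter's own post-processing of the accepted connection ...

    if (ctx->oldRoutine != NULL) {
        // Hand the IRP back to the original (netbt) completion routine
        // with the context it expects.
        return ctx->oldRoutine(deviceObject, irp, ctx->oldContext);
    }
    return STATUS_CONTINUE_COMPLETION;
}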

@Cristian, this won’t help me.

As I mentioned in my earlier comments and in the code that I posted, I run into the crash only when my completion routine returns STATUS_MORE_PROCESSING_REQUIRED and the IRP is completed from the work queue item. If I return STATUS_CONTINUE_COMPLETION from the completion routine, I don't run into the crash.

I also confirmed that by the time my work queue item gets scheduled, I have already received IRP_MJ_CLEANUP and IRP_MJ_CLOSE on the file object. My client just opens and closes the connection, and it seems that I am receiving the disconnect/close even before the connect completes.

Also, as I said, I don’t run into this issue on other non-netbt ports which makes me believe that it’s a bug on the netbt side?

Thanks.
-Prasad

A few more observations:

The crash happens when I receive IRP_MJ_CLEANUP and IRP_MJ_CLOSE before the acceptIrp has been completed from the work queue item. The file object received in IRP_MJ_CLEANUP/IRP_MJ_CLOSE is the same as the one in the acceptIrp.

If I delay processing of IRP_MJ_CLEANUP, or if I delay the disconnect initiated by the client, I don't run into the crash.

When IRP_MJ_CLEANUP is called, the call stack looks like the following:

fffff88001f708c0 fffff80001b8c844 nt!IopCloseFile+0x11f
fffff88001f70950 fffff80001b8c601 nt!ObpDecrementHandleCount+0xb4
fffff88001f709d0 fffff80001b8cbc4 nt!ObpCloseHandleTableEntry+0xb1
fffff88001f70a60 fffff80001894f93 nt!ObpCloseHandle+0x94
fffff88001f70ab0 fffff80001891530 nt!KiSystemServiceCopyEnd+0x13
fffff88001f70c48 fffff880013bb767 nt!KiServiceLinkage
fffff88001f70c50 fffff880013bae9d netbt!DelayedWipeOutLowerconn+0x4e
fffff88001f70c80 fffff800018a0021 netbt!NTExecuteWorker+0x7b
fffff88001f70cb0 fffff80001b3232e nt!ExpWorkerThread+0x111
fffff88001f70d40 fffff80001887666 nt!PspSystemThreadStartup+0x5a
fffff88001f70d80 0000000000000000 nt!KiStartSystemThread+0x16

Based on this, here is what I think is happening.

  1. My client issues a connect request to port 139.
  2. I trap this in my connect event handler and call the old handler. On the returned acceptIrp, I copy the stack location, set my completion routine, set the next stack location, and return whatever the old handler returned to me.
  3. By the time I call the old handler, netbt has already associated its own data structure with the acceptIrp as the completion context.
  4. When my completion routine is called, I schedule a work queue item and return STATUS_MORE_PROCESSING_REQUIRED.
  5. Before my work queue item gets a chance to run, the client issues a disconnect request, which results in IRP_MJ_CLEANUP/IRP_MJ_CLOSE being issued. In response, netbt releases the data structure it had associated with the connection.
  6. Then, when my work queue item completes the IRP, netbt's completion routine is called; it attempts to access the data structure that has already been released in the IRP_MJ_CLEANUP path, and it crashes…

Considering that it doesn’t happen with other TDI clients like afd, I am more inclined to believe that it’s a netbt specific issue?

Any thoughts or solutions?

Thanks.
-Prasad

Your TDI filter is allowing the IRP_MJ_CLEANUP/IRP_MJ_CLOSE to complete
without ensuring that your deferred completion of the AcceptIrp has
finished?

I don’t run into this issue on other non-netbt ports which makes me
believe that it’s a bug on the netbt side?

No, it is not a NetBT bug. It is simply that other TDI clients are not
impacted in quite as spectacular a way. It is your bug.

Your TDI filter has changed the order of operations by deferring completion.
The Cleanup/Close is being allowed to complete (indeed, being allowed to
*start*) before the “completion of the completion” from the (Accept) IRP you
deferred.

When you insert a TDI filter with deferral behavior you must not violate the
TDI Client / Transport semantic contract.

Good Luck,
Dave Cattley

@Dave, OK, assuming that it's my bug and not NetBT's, what's your suggestion for solving this? Synchronise processing of IRP_MJ_CLEANUP with the completion of the acceptIrp? I can synchronise it up to the point where I call IoCompleteRequest from my work queue item, but not beyond that, right? Can you please point me to any documentation that explains the semantic contract you are referring to in the context of this specific issue?

NOTE: I have never been able to reproduce this crash with thousands of parallel incoming connections on non-NetBT ports on 8-CPU systems. It happens only with NetBT ports. Also, I have never run into this issue with outbound connections on any port.

Thanks
-Prasad

From http://support.microsoft.com/kb/120170:

> A driver that holds pending IRPs internally must implement a routine for
> IRP_MJ_CLEANUP. When the routine is called, the driver should cancel all the
> pending IRPs that belong to the file object identified by the IRP_MJ_CLEANUP
> call. In other words, it should cancel all the IRPs that have the same
> file-object pointer as the one supplied in the current I/O stack location of
> the IRP for the IRP_MJ_CLEANUP call.

While you may not think of it this way, the AcceptIrp whose completion processing you have deferred is now being 'held' by your driver. When you pass that IRP_MJ_CLEANUP down the stack to (ultimately) the TDI transport, it is completed after the transport (and the other filters below you) have satisfied the above requirement. But now *your* layer of the filter stack needs to satisfy that requirement as well. Don't complete IRP_MJ_CLEANUP until the cleanup is done.
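One possible shape for that (a sketch only; PENDING_ACCEPT, MarkAcceptCompleted and WaitForPendingAccept are made-up names, and the initialization, lookup and locking details are omitted): track the deferred accept per file object, have the work item signal an event immediately after its IoCompleteRequest, and have the IRP_MJ_CLEANUP dispatch wait on that event before sending the cleanup down to the transport.

typedef struct _PENDING_ACCEPT {
    LIST_ENTRY   link;            // on a per-address-object or global list
    PFILE_OBJECT fileObject;      // connection file object of the accept IRP
    KEVENT       completedEvent;  // set once the deferred completion is done
} PENDING_ACCEPT, *PPENDING_ACCEPT;

// Called at the tail of the work item, immediately after IoCompleteRequest
// on the deferred accept IRP.
VOID
MarkAcceptCompleted(PPENDING_ACCEPT pending)
{
    KeSetEvent(&pending->completedEvent, IO_NO_INCREMENT, FALSE);
}

// Called from the IRP_MJ_CLEANUP dispatch, after finding the entry whose
// fileObject matches the cleanup's file object and before passing the
// cleanup IRP down to the transport.  The cleanup path runs at
// PASSIVE_LEVEL, so a blocking wait is allowed here.
VOID
WaitForPendingAccept(PPENDING_ACCEPT pending)
{
    KeWaitForSingleObject(&pending->completedEvent,
                          Executive,
                          KernelMode,
                          FALSE,
                          NULL);
}

Good Luck,
Dave Cattley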

Thanks, Dave, for the pointer. So I need to maintain a list of pending IRPs for the file object and then cancel those IRPs when I receive IRP_MJ_CLEANUP for the same file object? I am wondering who will provide the synchronisation for this, e.g. what happens if I attempt to cancel the IRP while my work queue item is executing and/or has already called IoCompleteRequest?

Thanks.
-Prasad

I did not necessarily say that 'cancel' was the correct behavior. This is especially true for an IRP that has been partially completed. All I was saying was that some consideration needs to be given to ensuring that the deferred [completing] AcceptIrp is in fact completed (or canceled, which is a form of completion) before the IRP_MJ_CLEANUP is completed.

You really cannot 'cancel' the AcceptIrp insofar as it has actually been completed with some status already (it may have been completed because it was canceled, of course). You need to just 'deal with it' before completing the IRP_MJ_CLEANUP. And by 'deal with it' I mean do whatever is appropriate for whatever it is your TDI filter is doing that required it to defer the completion in the first place.

Good Luck,
Dave Cattley

Thanks again Dave. I will see how I can handle this.

Having said that, it's still mysterious to me why I run into the crash only with NetBT and not with others. Maybe other TDI clients do not proceed with the disconnect until the acceptIrp completes, and hence IRP_MJ_CLEANUP/IRP_MJ_CLOSE doesn't get triggered before the acceptIrp is completed?

Thanks.
-Prasad