Changing Timer Resolution from within Kernel drivers

I’m currently developing a dedicated kernel driver for an I2C interface
card, for which I need to set timers with millisecond accuracy. I know
about “timeBeginPeriod” and its undocumented native API counterpart,
“NtSetTimerResolution”, which change the default 10 ms granularity.
However, neither can be called from within a kernel driver (or can it?).
Does anyone know of a way to temporarily increase the timer resolution
from within a kernel driver, so that I could use “KeSetTimer” with
millisecond accuracy?
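
For context, the kind of timer in question looks roughly like the minimal
WDM sketch below (names are illustrative); without a finer system clock,
the expiration is quantized to the default 10-15 ms tick:

#include <ntddk.h>

static KTIMER g_Timer;      /* illustrative globals */
static KDPC   g_TimerDpc;

static VOID TimerDpcRoutine(PKDPC Dpc, PVOID Context,
                            PVOID SysArg1, PVOID SysArg2)
{
    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(Context);
    UNREFERENCED_PARAMETER(SysArg1);
    UNREFERENCED_PARAMETER(SysArg2);
    /* Runs at DISPATCH_LEVEL when the timer expires. */
}

static VOID StartOneMillisecondTimer(VOID)
{
    LARGE_INTEGER dueTime;

    KeInitializeTimer(&g_Timer);
    KeInitializeDpc(&g_TimerDpc, TimerDpcRoutine, NULL);

    /* A negative due time is relative; units are 100 ns, so 1 ms = -10,000. */
    dueTime.QuadPart = -10 * 1000;
    KeSetTimer(&g_Timer, dueTime, &g_TimerDpc);
}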

Here’s a snippet taken from a previous reply to another post (the subject was “atapi.sys hangs CPU for many seconds”) which may be of interest:

“I think that KeStallExecutionProcessor runs with interrupts disabled, so
the IRQL would be 31. Just disassemble the function; it’s pretty small.
I looked at it years ago to figure out how it did the high-resolution
timing. It’s kinda clever; it measures how many of a given loop iteration
it can execute in 1/18th of a second or whatever at bootup time, and then
scales that down to the appropriate number of loops in the specified
number of microseconds. (Thus, there’s no timer actually involved when
you call it.)”

You might be able to use the same kind of logic in your driver to obtain higher-resolution timing, depending on your exact needs.
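
As a rough illustration of that calibration idea, here is a sketch of the
technique (not the actual HAL code; the names and the use of
KeQueryPerformanceCounter for calibration are my own assumptions). Note
that it busy-waits, burning the CPU for the whole interval:

#include <ntddk.h>

static ULONGLONG g_LoopsPerMicrosecond;     /* illustrative global */

static VOID SpinLoops(ULONGLONG Loops)
{
    volatile ULONGLONG i;
    for (i = 0; i < Loops; i++) {
        /* Burn cycles; 'volatile' keeps the compiler from deleting the loop. */
    }
}

VOID CalibrateSpinLoop(VOID)                /* call once at driver init */
{
    LARGE_INTEGER freq, start, end;
    ULONGLONG probeLoops = 1000000;
    ULONGLONG elapsedUs;

    start = KeQueryPerformanceCounter(&freq);
    SpinLoops(probeLoops);
    end = KeQueryPerformanceCounter(NULL);

    /* Elapsed microseconds = ticks * 1,000,000 / frequency. */
    elapsedUs = (ULONGLONG)(end.QuadPart - start.QuadPart) * 1000000 /
                (ULONGLONG)freq.QuadPart;
    if (elapsedUs == 0) {
        elapsedUs = 1;
    }
    g_LoopsPerMicrosecond = probeLoops / elapsedUs;
}

VOID SpinForMicroseconds(ULONG Microseconds)
{
    SpinLoops(g_LoopsPerMicrosecond * Microseconds);
}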

Regards,

Ed Lau

MidCore Software, Inc.
900 Straits Tpke
Middlebury, CT 06762

www.midcore.com


Try ExSetTimerResolution().
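
For example, a minimal sketch (DesiredTime is in 100-ns units, and the
requests are reference-counted, so each raise should be paired with a
release):

#include <ntddk.h>

VOID RaiseTimerResolution(VOID)
{
    /* Ask for a 1 ms system clock interval (10,000 x 100 ns).
       The return value is the resolution actually granted. */
    ULONG granted = ExSetTimerResolution(10 * 1000, TRUE);
    UNREFERENCED_PARAMETER(granted);
}

VOID ReleaseTimerResolution(VOID)
{
    /* SetResolution = FALSE withdraws this driver's request; the
       system reverts once no caller still wants the finer rate. */
    ExSetTimerResolution(0, FALSE);
}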

Stephan


The calibrated-loop approach Ed Lau describes is a really bad idea for a couple of reasons.

First, this implies looping, which, if done at DISPATCH_LEVEL, holds off every other DISPATCH_LEVEL operation on that CPU while your loop is in progress. That wouldn’t work too well for this application, unless I am missing something here.

Second, there is no good way for your driver to determine what the HAL determines here, because the HAL uses platform-specific code (read: assembly) to do it. So you could make it work on, say, an x86 platform, but as soon as you move to an IA64 platform you are screwed. That is why it is in the HAL.

Also, KeStallExecutionProcessor does nothing to prevent interrupts, it just loops. It doesn’t even prevent preemption if called at < DISPATCH_LEVEL.
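
For reference, the DDK documents KeStallExecutionProcessor only for very
short stalls, on the order of microseconds:

    KeStallExecutionProcessor(10);  /* busy-waits roughly 10 us; never yields */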


Bill McKenzie



ExSetTimerResolution.

Max

Thanks Stephan,

I will try it out. However, the call is not available in the NT 4.0 DDK
version of Ntoskrnl.lib. Any idea whether the library code from the Windows
2000 version would work on NT 4.0 or, better, exactly which kernel functions
it executes?
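
A hedged sketch of one way to cope: gate the call on the OS version at run
time with PsGetVersion (available since NT 4.0). Note that merely importing
ExSetTimerResolution keeps the driver image from loading on NT 4.0 at all,
so a single source base would still need to avoid the static import there,
e.g. via a separate build:

ULONG major = 0, minor = 0;

/* PsGetVersion reports the running OS version. */
PsGetVersion(&major, &minor, NULL, NULL);
if (major >= 5) {
    /* Windows 2000 or later: ExSetTimerResolution is exported
       by the kernel and safe to call. */
}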

Christiaan

The right way to do this is to add timers to your hardware and have
them generate interrupts. If you can’t, you are likely screwed.
Windows is not real-time. Even if you get it to work on your machine,
it might fail on others in different configurations.


“Ntdev Reader” wrote in message news:xxxxx@ntdev…
>
> The right way to do this is to add timers to your hardware and have
> them generate interrupts. If you can’t, you are likely screwed.
> Windows is not real-time. Even if you get it to work on your machine,
NT is not hard real time, where you’ve got some machine-tool arm
moving at 20 MPH and you must get the CPU within 2 milliseconds to
stop it or the machine will topple over…

But it is good enough to support Voice over IP telephone calls, where the
round-trip delay (phone to phone and back again) must be less than 0.25
seconds. To do this, you need to run your threads at a real-time priority level,
which means they can lock up the machine if you have bugs such as infinite
loops (unless you are watching for starvation of lower-priority threads). You
also must avoid poor-quality drivers and hardware that break the rules on the
systems where you need soft real-time behavior.
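
In user mode that setup looks roughly like the following sketch (error
handling omitted; REALTIME_PRIORITY_CLASS generally requires the
appropriate privilege):

#include <windows.h>

int main(void)
{
    /* Raise the process class and thread priority as described above.
       A bug such as an infinite loop here can starve every
       lower-priority thread on the machine. */
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    /* ... time-sensitive work ... */
    return 0;
}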

The default timer interrupt of 10 (or 15) ms is often too long, and setting the
system down to 1 ms is reasonable. The problem with trying to set it from
the kernel is the interaction with other products that may be trying to do the
same thing.
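
From user mode, the cooperative way to make that request is the multimedia
timer API, which reference-counts requests system-wide, avoiding the
interaction problem (sketch; link with winmm.lib):

#include <windows.h>
#include <mmsystem.h>

void DoTimingSensitiveWork(void)
{
    /* Each timeBeginPeriod(1) must be balanced by a timeEndPeriod(1);
       the system runs at the finest period any caller has outstanding. */
    timeBeginPeriod(1);
    /* ... work that needs ~1 ms timer granularity ... */
    timeEndPeriod(1);
}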

> it might fail on others in different configurations.

But more hardware interrupts are not going to fix the cases where it fails,
because the known cases are where video cards lock up the system
bus when their FIFOs fill, or where other drivers stay in their ISRs for too
long. Configuration restrictions are necessary to ensure real-time behavior
[assuming that you are not willing to invoke the wrath of the list by
patching the interrupt vectors of the other devices].

-DH

P.S. The real-time thread priorities are there to do soft real time, and they work.
The comments recommending that you not use them probably stem from MS having
handled too many support calls caused by buggy real-time code.


Of course, if other drivers disable your interrupts you are
out of luck. My point was that if you have your own interrupt, you can
(1) fire it precisely when you need it, and (2) handle time-critical
stuff directly in the ISR, since average interrupt latency is lower than
DPC (KeSetTimer) latency. So hardware timers are in order.

This is very general advice, but the original poster can’t really hope for
anything more specific, since he stated neither his timing requirements nor
whether he has control over the hardware design.
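
Still, a rough sketch of the hardware-timer approach, assuming the card
exposes a programmable timer interrupt and the driver has received its
translated interrupt resource from the PnP manager (names and resource
plumbing are illustrative):

#include <ntddk.h>

typedef struct _DEVICE_CONTEXT {
    PKINTERRUPT Interrupt;
    /* ... device registers, timing state ... */
} DEVICE_CONTEXT, *PDEVICE_CONTEXT;

static BOOLEAN TimerIsr(PKINTERRUPT Interrupt, PVOID Context)
{
    /* 'Context' is the DEVICE_CONTEXT passed to IoConnectInterrupt.
       Acknowledge the card's timer interrupt here and do only the
       minimal time-critical work; queue a DPC for everything else. */
    UNREFERENCED_PARAMETER(Interrupt);
    UNREFERENCED_PARAMETER(Context);

    return TRUE;    /* the interrupt was ours */
}

/* Called from the start-device path; 'Desc' is the translated
   CmResourceTypeInterrupt descriptor for the card. */
NTSTATUS ConnectTimerInterrupt(PDEVICE_CONTEXT Ctx,
                               PCM_PARTIAL_RESOURCE_DESCRIPTOR Desc)
{
    return IoConnectInterrupt(&Ctx->Interrupt,
                              TimerIsr,
                              Ctx,
                              NULL,                          /* no extra spin lock */
                              Desc->u.Interrupt.Vector,
                              (KIRQL)Desc->u.Interrupt.Level,
                              (KIRQL)Desc->u.Interrupt.Level,
                              LevelSensitive,                /* or Latched */
                              TRUE,                          /* sharable */
                              Desc->u.Interrupt.Affinity,
                              FALSE);                        /* no FP save */
}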
