The NT Insider

What's Your Test Score -- Best Practices for Driver Testing
(By: The NT Insider, Vol 11, Issue 3&4, May-August 2004 | Published: 18-Aug-04 | Modified: 18-Aug-04)

In this article, we'll try to lay out what we consider to be the best practices for testing any driver. This list, and its associated comments, is based on our 12+ years of experience writing drivers here at OSR, as well as lots of information we've seen shared in the driver development community.

This list isn't a "wish list" or a collection of the types of testing that one would perform in a perfect world. Rather, we've tried to keep the list pragmatic. If you aren't doing the tests described in the following list (at least items 1 through 6), then you are not doing an adequate job of testing your code. What we're attempting to establish here are the basic standards of acceptable professional practice. The only reason your mileage should vary is if you're doing more tests, or more stringent testing, than what's described in this article.

1) Test on the Latest OS
Regardless of the intended target for your code, you should always test aggressively on the most recently released version of Windows. As of this writing, the right system to use would be Windows Server 2003. This is true even if you only intend your driver to run (today) on Windows 2000 client systems! That's because the most recent versions of Windows have more capabilities and checks in Driver Verifier, and may also have more (or more appropriate) checks in the checked build.

This isn't to say that you should never test on the OS that'll be your main deployment target. Obviously, you need to test aggressively on that too.

Also, this isn't a suggestion to base your testing on a pre-release or beta release of Windows. It'd be insane to base testing of a driver you're building today on Windows Longhorn: there are just too many unknowns. Stick with released builds for the majority of your testing.

2) Enable All Driver Verifier Options
During all the testing you perform (functional, regression, and stress), always have Driver Verifier enabled for your driver. And be sure that you've selected all the Driver Verifier options except low resource simulation.

Remember (as described in Trust Yet Verify), Driver Verifier watches to ensure that whatever your driver does, it does correctly. The more situations you put your driver through with Driver Verifier enabled, the better testing you're getting.
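As a concrete illustration, Driver Verifier can be configured from the command line as well as from the verifier GUI. The bit values below are the Windows Server 2003-era flag assignments; "mydriver.sys" is, of course, a placeholder for your own driver's name. Treat this as a sketch of the recipe, and check `verifier /?` on your own system for the exact flags it supports.

```shell
# Enable every Driver Verifier option EXCEPT low resource simulation (bit 0x4).
# Server 2003-era bits: 0x01 special pool, 0x02 force IRQL checking,
# 0x04 low resource simulation, 0x08 pool tracking, 0x10 I/O verification,
# 0x20 deadlock detection, 0x40 enhanced I/O verification, 0x80 DMA checking.
# 0xFB = all of the above minus 0x04.
verifier /flags 0xFB /driver mydriver.sys

# Display the settings that will be in effect, then reboot to apply them.
verifier /querysettings
```

The settings take effect at the next boot, so make enabling this part of setting up every test machine, not something you do only when chasing a bug.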

3) Test Under The Checked Kernel and HAL
This is a must. I remember one time talking to a developer who was having trouble with his driver. I asked him if he had run it under the checked build. "Oh no, man," he said, "I tried that once and the system crashed. I never did use the checked build again." Duh!

The checked build of Windows contains a set of reasonableness tests for various parameters passed and actions taken by your driver. These tests are in addition to those provided by Driver Verifier. Therefore, it's important to test with both the checked build and Driver Verifier. If you're not testing with the checked build, you're not adequately testing your driver.
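Note that you don't have to install the entire checked build to get most of the benefit: on Windows XP and Server 2003 you can boot just the checked kernel and HAL over a free build. A sketch of a boot.ini entry, assuming a typical ACPI multiprocessor machine on which you've already copied the checked ntkrnlmp.exe and halmacpi.dll from the DDK's checked binaries into system32 under the .chk names shown (the ARC path and file names are examples; adjust for your system):

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows (checked kernel/HAL)" /fastdetect /kernel=ntkrnlmp.chk /hal=halmacpi.chk
```

This gives you a dual-boot choice between the free and checked kernel/HAL, which makes it painless to run your regular test pass under the checked build.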

4) Test on MultiProcessor Systems
I can't believe I have to write this, but let's be thorough: You must test your code on multiprocessor systems. And the more CPUs, the better. At the very least, and regardless of what you think about hyperthreading, get your test lab a few systems with hyperthreaded CPUs. These are inherently multiprocessor systems.

The more processors you have, the more quickly you'll uncover your MP-specific bugs. So it's a good idea to test on a dual-processor, hyperthreaded system, which presents four logical processors and thus effectively acts like a quad-processor machine.

You cannot ignore testing on MP systems. Most Intel chips sold today are hyperthreaded, so regardless of what you may think, if your code is running on new hardware it'll be running in an MP environment.

5) Use Call Usage Verifier
Test your driver, at least periodically, with Call Usage Verifier (CUV). We recommend using the most recently released version of CUV, such as that provided in the Windows Server 2003 SP1 DDK. This version has had a number of enhancements and improvements made to it.

CUV can find a lot of ugly problems that are otherwise extremely hard to detect. For example, invalid use of I/O stack locations is a class of problem that Driver Verifier can't find at all. A good example of this type of problem: calling IoAllocateIrp and then calling IoGetCurrentIrpStackLocation (instead of IoGetNextIrpStackLocation) to get the I/O stack location to fill in with parameters. Oops!

Get a checked version of your driver built with CUV, enable Driver Verifier, and run it on the checked build. Really put the driver through its paces.

6) PreFAST and Lint the Code
You must run your driver through PreFAST. At the very least, fix any bugs that show up when the "winpft" filter is selected. Yes, even if the bugs are PreFAST complaining about something stupid. Just fix it (so you don't have to look at it the next time you run PreFAST).

Here at OSR, we've also found it very useful to run drivers through pcLint. We find that lint finds different errors than PreFAST, and one does not replace the other. We published an article on how to use lint for driver development a while back (All About Lint). If you haven't read it, you should.

7) Segment Your Testing
Divide and conquer. Any proper driver testing program will need to separate out at least three kinds of tests:

  • Tests for functional correctness
  • Tests for implementation correctness
  • Long-run and stress testing

Never confuse these types of tests.

Functionality tests check to see if your driver does what your customers expect it to do. These tests answer the question, "Does your driver properly fulfill its mission?" Yes, you need to test whether or not the satellite dish turns right five degrees when you send the "turn right" command, with a parameter indicating that the turn should be five degrees. But it would be a serious error to simply exercise all the functionality in your driver, and declare it to be fully tested.

On the other hand, tests for implementation correctness aim at exercising your driver's robustness. These tests answer the question, "In fulfilling its mission, does your driver do what it does in a way that's valid and plays well with the Windows operating system?" You need to check to ensure that your driver robustly validates any parameters passed to it from user mode. You must validate that your driver can properly handle any invalid, inopportune, or malformed requests it might receive. Driver Verifier, CUV, and other test tools provided as part of the DDK (such as DC2, pnpdtest, and the ACPI tests) can help with this category of test.

Finally, there are long-run and stress tests. These tests answer the question, "Can your driver keep it up for a long time, and under heavy system loads?" We can't believe the number of colleagues we run into who run tests for a few minutes and go home, confident that their driver works. Hullo? How about letting it run for two weeks under three times the normally expected load? You must perform these tests to ensure that your driver's code paths are properly exercised.

8) Consider Code Coverage
Consider getting a tool that tracks or measures driver code coverage. Here at OSR, we haven't actually used any of these tools. However, we do know that both Compuware and Bullseye make code coverage tools that'll work for drivers. We haven't heard anything positive or negative about the Compuware tool. We've heard very little about the Bullseye tool, but what we have heard has been positive. Maybe somebody with experience with either or both of these tools will volunteer to write up an article one of these days.

Having a good quality tool that can provide code coverage analysis for your driver would be nothing but a good thing. That way, you would know for sure that all key code paths are being exercised. It's something to consider, right?



Copyright 2017 OSR Open Systems Resources, Inc.