The NT Insider

One Special Case -- Testing File Systems
(By: The NT Insider, Vol 11, Issue 3 & 4, May-August 2004 | Published: 18-Aug-04 | Modified: 18-Aug-04)

When the recent double issue's theme of testing was announced, one of the topics we knew had to be covered was testing in the file systems space. This is a difficult topic to discuss intelligently because so little is actually available for testing in this arena.

Part of this is the nature of this type of driver -- most people developing in the file systems space are building filter drivers, and the typical paradigm for testing any filter driver is to test the underlying device and make sure the filter driver doesn't break anything! For those actually building a file system, the task is more difficult: few "new" file systems are built, many of the APIs that must be tested are not supported by all file systems, and this is such a highly specialized area that it seldom receives much attention.

In this article, we'll start by discussing what already exists for testing file systems, since that will be directly applicable to those building both new file systems and file system filter drivers. Next, we'll cover a couple of key areas of testing to consider. Finally, we'll finish up by discussing additional areas to explore in testing, which will be most useful to those building a new file system.

Existing Test Tools
The existing test tools that are available from Microsoft are all included in the Hardware Compatibility Test (HCT) kit. At the time of this article, you can download this kit from Microsoft's web site. Most of these tests have been around -- in one form or another -- for some time, and are either generic driver tests (e.g., the DC2 tool, which tests to ensure a driver handles IOCTLs properly) or tests specific to file systems.

The IFS Tests are most directly applicable, but there are numerous other, non-specialized tests that should also prove useful -- tests for verifying that I/O works correctly, that disks respond as expected, and so on. Be careful to read the published errata for the IFS Kit tests in particular -- there are a number of test cases that are known to fail under circumstances where the failure is not considered to be "incorrect". This can happen because a file system filter driver might legitimately change the behavior of the underlying file system. For example, an anti-virus scanning filter might scan a file after it has been modified, causing the access time of the file to change from the value expected by the test.
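The timestamp example can be made concrete with a small user-mode sketch. This is Python purely as an illustration (the real IFS tests exercise this through the Win32 API), and the function and file names are our own. The point is that any timestamp assertion of this kind is sensitive to a filter touching the file behind the test's back:

```python
import os
import tempfile
import time

def check_mtime_updates(root):
    """Verify that rewriting a file advances its last-modified time."""
    path = os.path.join(root, "stamp.dat")
    with open(path, "wb") as f:
        f.write(b"version 1")
    before = os.stat(path).st_mtime

    time.sleep(0.1)  # allow for coarse timestamp granularity on some volumes

    with open(path, "wb") as f:
        f.write(b"version 2")
    after = os.stat(path).st_mtime

    os.remove(path)
    return after > before
```

A filter that re-reads or re-scans the file between the two `os.stat` calls could perturb the access time in exactly the way the errata describe, which is why such "failures" are not necessarily incorrect.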

At OSR, we have our own in-house test suite (see Sometimes You Have to Write Your Own). In addition, we have augmented this with the Microsoft HCT tests and a number of other third-party benchmark and test suites, such as IOMETER (a program that performs I/O and measures its performance) and an industry benchmark (Netbench) that we have found exercises another commonly observed range of functionality.

Environment Testing
Just as important as which test suite to use is an understanding of the environment in which your software will operate. A point commonly overlooked by those performing testing in the Windows environment for the first time is the simple fact that how a file system is exercised often depends on the specific type of access used.

The important point, then, is for testing to be done against the type of environment in which the product will be used. If this is primarily a server-side product, there is generally little benefit in focusing all of your testing energy on behavior under local access. Instead, you should focus on remote (network) access, preferably using the same configurations you would anticipate from your customers. Thus, you should determine which of the various file servers will be used: SRV (for LanManager/CIFS access), SFM (for Macintosh access via the Apple File Protocol), SFU (for UNIX client access via NFS), or some other file server component, whether provided by Microsoft or by a third-party vendor. At OSR, we find that each of these sometimes produces subtly different behavior or uses interfaces unique to that particular server product.

Testing for Interoperability
In addition, it is essential to test interoperability. Microsoft's IFS web site includes a list of known products that can be used as a form of interoperability test matrix. There are so many of these products that it is unlikely to be feasible to test against all of them. However, it is important to identify which of these products are likely to be present within your target environment. For instance, unless you are developing your own anti-virus product, it is quite likely that you will find an anti-virus product present on your target system. Thus, testing with one (or more) anti-virus products is essential to ensuring your own product will interoperate properly.

Keep in mind that the order of driver loading can also make a difference. This is especially true with file system filter drivers, where the functionality of one filter might interfere with another filter. However, it is also true with file system drivers, where we have seen instances in which a failed device mount by one file system blocks another from even noticing the mount attempt -- thus looking like a failure of the underlying product. To test these characteristics, you must control the load order of your driver relative to other drivers. You can use the DeviceTree utility to observe the attachment order of filter drivers.

Future Areas to Explore
We are always looking for ways to further expand our own ability to test file systems and file system filter drivers. One interesting technique suggested by the new Filter Manager model is to use an "encapsulation" test mechanism, where I/O operations are intercepted both before and after they are sent to a particular file system filter driver. This provides tremendous insight into how the subject filter driver is handling the I/O operations.
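As a thought experiment, the encapsulation idea can be modeled in user mode by stacking callables: a probe above the subject filter, the filter itself, and a probe below it. Everything here is a hypothetical stand-in (Python functions rather than kernel-mode minifilters bracketing the subject driver), but comparing the "above" and "below" logs shows exactly what the filter did to each operation:

```python
def make_probe(tag, log):
    """Wrap an operation so every call and its result are recorded."""
    def wrap(op):
        def probed(*args):
            log.append((tag, "call", args))
            result = op(*args)
            log.append((tag, "return", result))
            return result
        return probed
    return wrap

def base_read(length):
    """Stand-in for the underlying file system's read path."""
    return b"A" * length

def truncating_filter(next_op):
    """Stand-in for the filter under test: caps every read at 4 bytes."""
    def read(length):
        return next_op(min(length, 4))
    return read

log = []
above = make_probe("above", log)
below = make_probe("below", log)

# Build the stack: probe -> subject filter -> probe -> base file system.
stack = above(truncating_filter(below(base_read)))
data = stack(10)
```

After the call, the log shows a 10-byte read arriving above the filter and only a 4-byte read reaching the layer below it -- precisely the kind of insight into the subject filter's handling of I/O that the kernel-mode version of this technique provides.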

For the file systems developed here at OSR, we use a core set of tests (common functionality) and then build specialized tests that exercise the unique functionality of the file system. Our normal yardstick for "common functionality" is the functionality present in the FAT file system (for read/write) or the CDFS file system (for read-only). Thus, when we are not sure how something should behave, we check what happens when we run the test on one of these file systems. Most of the time we get the same results, but sometimes we find differences between the way that FAT and NTFS process a particular operation. In those cases, we usually choose one behavior or the other for our particular file system. Occasionally, we find a feature that we decide should work differently on our file system than it does on FAT or NTFS.
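This yardstick approach amounts to differential testing: run the same test case against a reference file system and against the file system under test, then compare the observable results. A minimal sketch, in Python, with temporary directories standing in for what would really be mount points on a FAT volume and on the volume being developed:

```python
import os
import tempfile

def run_case(root):
    """One test case: create, stat, read back, and delete a file under root.

    In real use, root would be a directory on the reference file system
    (e.g., FAT) or on the file system under test.
    """
    path = os.path.join(root, "case.dat")
    with open(path, "wb") as f:
        f.write(b"hello, world")
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        data = f.read()
    os.remove(path)
    return {
        "size": size,
        "data": data,
        "exists_after_remove": os.path.exists(path),
    }

reference = run_case(tempfile.mkdtemp())   # the yardstick file system
under_test = run_case(tempfile.mkdtemp())  # the file system being developed
differences = {k for k in reference if reference[k] != under_test[k]}
```

Any key left in `differences` marks an operation whose behavior deviates from the yardstick and therefore needs a deliberate decision, exactly as described above.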

Testing Methodology
In soliciting feedback on this topic, one person suggested that we should review the basics of testing methodology. Since our advice for those developing file system products is to develop their own tests, we think it worthwhile to reiterate these points. Specifically, when developing tests (and developing code, for that matter):

  • Pay particular attention to "edge" conditions (e.g., counter overflow conditions, unusual circumstances, etc.).
  • Ensure you handle extremes properly (e.g., buffer sizes that are too small or too big).
  • Ensure that failure cases and error paths properly handle all conditions -- most of the fantastic failures we see are actually in error paths, even within our own code.
  • Consider the behavior of resource limitations that lead to allocation failure or process starvation. This is a serious issue within file systems because they must not only compete for resources, but must also cooperate with the rest of the system to ensure fairness.
  • Ensure that there are adequate resources for testing -- particularly for file systems, testing is usually more resource intensive than the actual development. At OSR (where we are responsible for both) we spend far more time testing than we do performing just "raw" development.
  • Look for ways to force timing issues to surface, for example through load testing or holding scarce resources for a long time. Our observation here is that as the code base becomes more mature, race conditions begin to surface. Ensure that you test on multi-processor machines, with both large and small memory configurations. One trick we have used (for example) is to force context switches at inconvenient times (e.g., drop a spin lock and then sleep).
  • Build your own test harness, be it a scripting language, GUI script builder, or whatever other tools that you can find.
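To make the first two points concrete, here is a minimal sketch of the kind of boundary-condition checks such a harness might run. The names are illustrative, and the expected results encoded here reflect POSIX-style semantics; in practice you would pin them to whatever your reference file system actually does:

```python
import os
import tempfile

def edge_case_checks(root):
    """Exercise a few boundary conditions; return a list of failure strings."""
    path = os.path.join(root, "edge.dat")
    failures = []

    with open(path, "wb") as f:
        f.write(b"abc")

    with open(path, "rb") as f:
        # Edge: a zero-length read should return no data.
        if f.read(0) != b"":
            failures.append("zero-length read returned data")
        # Edge: reading well past end-of-file should also return no data.
        f.seek(1000)
        if f.read(16) != b"":
            failures.append("read past EOF returned data")

    # Edge: a zero-length append should leave the file size unchanged.
    with open(path, "ab") as f:
        f.write(b"")
    if os.path.getsize(path) != 3:
        failures.append("zero-length write changed file size")

    os.remove(path)
    return failures
```

An empty list means the file system handled these extremes as expected; each entry otherwise is one boundary condition worth a dedicated regression test.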

The important thing to keep in mind is that no matter what you do in terms of testing, finding problems before you ship the product is less expensive for the organization than finding them after you ship the product. Supporting users in the field who are having problems is painful, difficult, and expensive -- and nobody is happy with the outcome!

As much as we wish there were a simple suite of tests to which we could point people building file systems and file system filter drivers, there is not. However, we hope that this article provides you with some basic pointers and gets things moving in the right direction. We trust that in the future we will see more initiatives in this area so that those of us building file systems and file system filter drivers will be able to better ensure our products work properly before we ship them out to our customers!



Copyright 2017 OSR Open Systems Resources, Inc.