Writing a Virtual Storport Miniport Driver

This article is one in a series on writing virtual Storport miniport drivers for Windows. Please find links to each article in the series here: Part I, Part II, Part III

 

Storport is a welcome relief to storage driver writers wishing to develop a driver that exports a virtual device. Until the advent of Storport, driver writers were forced to either modify/hack a SCSIport miniport driver or create a full SCSI port driver. The miniport approach was complicated (due to locking issues), unsupported, hard to maintain, and mostly provided poor performance. The full SCSI port driver approach had the advantage of providing good performance, but was exceptionally difficult to write and was also neither supported nor eligible for the Designed for Windows logo. Thus, development in this area was limited to companies and individuals who were willing to take chances, accept the inherent complexities, and forgo WHQL certification.

Storport, introduced in Windows Server 2003, was created to deliver the performance needed in RAID and SAN environments that the existing SCSIport driver was not capable of. As of Windows Vista SP1 and Server 2008, Microsoft provided an update to Storport, which enabled support for Storport virtual miniport drivers. This addition allows storage driver developers to finally have a supported model for exporting virtual devices.

The Storport model is a very good model for a virtual miniport driver. It provides I/O queue management (e.g., queue depth, holding off I/Os when the queue depth is reached), some I/O error handling (e.g., resets, some retries), all the operations necessary to handle PnP device (LUN) starting and stopping, and all the bus driver duties. One thing it does not provide is help with hardware management (we're implementing a virtual miniport, remember!), since no hardware is assumed to be present. This means that the virtual miniport is totally responsible for handling the I/O request. While this may sound daunting, remember that as a virtual miniport you have the full set of kernel APIs and the other drivers in the system to help you perform your work.

 

This article, "Writing a Virtual Storport Miniport Driver," is based upon the information we gained at OSR while developing a Storport virtual miniport driver. The topic is too big for a single The NT Insider article, so it will be discussed over a series of articles, culminating in the presentation of a fully functional Storport virtual miniport driver. This first article in the series describes the architecture, flow of control, and key routines to be implemented when developing a virtual miniport.

OSR would like to thank James Antognini and Albert Chen from Microsoft for providing us with the information needed to produce the Storport virtual miniport. Without their help, it would have taken a while for information on writing one to be published.

 

Getting Started with Storport

For those of you who have never worked with Storport before, let's go over some basics. When working with the Storport model, we develop a driver that interfaces to the Storport driver (Storport.sys) through a defined set of APIs and data structures; it is not an IRP-based interface. As in the SCSI port/miniport model, the Storport driver, called the port driver, provides the environment that we operate in and the support routines that our driver needs in order to make our adapter available to the Windows storage subsystem. The adapter is assumed to be a device (virtual or physical) that can control one or more SCSI buses and one or more SCSI devices. We call our driver, developed for this model, a miniport driver. Figure 1 illustrates the Storport model.

 

Figure 1 - Storport Driver Model
Adapted from The Windows Storage Driver Stack in Depth (Microsoft WinHEC 2006)

A Storport miniport driver must conform to defined Storport rules in order to provide the services that the adapter, whether virtual or physical, offers to the operating system. Storport calls the miniport with SCSI_REQUEST_BLOCKs (SRBs), which describe the operation to be performed. Like an IRP, the SRB contains all the information that the miniport needs in order to perform the operation: an operation code, buffers, and parameters that describe the request. The driver performs the operation described in the SRB, puts the completion status into the SRB, and then notifies Storport when the request is completed. Again, this is not unlike how a normal driver handles an IRP.

The point here is that Storport does all the operations necessary to interact with the I/O Manager, PnP Manager and Power Manager, in order to make a device available to the system. How that is done is invisible to the miniport driver and it frees the developer to focus on providing the virtual adapter's services to Windows.

 

Designing our Virtual Miniport

As with any development project, a good project begins with a good design. Therefore when designing a virtual Storport miniport (we'll just refer to our driver as a miniport from now on), there are a few items that we need to think about, since they will impact how we define our driver. They are:

  • Are the resources that we're exporting accessible locally or remotely?
  • If remotely, how do we get to it? Via a network, or some other mechanism?
  • If locally, how do we get to it? Via another driver, kernel APIs, or a user mode entity?
  • Is our resource static or dynamic? If dynamic, how will the miniport be notified that it is present? For example, will the miniport be notified by another driver or some user mode component?
  • How are we going to process the requests we receive? Do they have to be performed synchronously or asynchronously? Can we use system worker threads, create our own thread pool, or is somebody else processing the request?
  • How are we going to handle failures?

Let's start a basic discussion of these items.

 

Accessing Resources

While it may be obvious to some, our design is going to be based upon how we get access to the resource that our virtual adapter will export as a local SCSI device. If the resource is local, then we'll have to decide whether we can use kernel APIs to access the resource (Zw calls, memory accesses, or IRP-based requests), or whether we have to communicate with a user mode application or service to get access to it. Likewise, if the resource is remote, we have to decide whether we can communicate with it via the network or via some other driver that has access to the resource. That other driver could be for a special device that exports both network and SCSI functionality and exists as a virtual bus driver, creating a virtual PDO that our miniport will be loaded to handle.

 

Static or Dynamic

How a resource becomes accessible to Storport is worthy of discussion. There are two types of devices: static and dynamic. As the names imply, static devices are not removable and are always present, while dynamic devices can arrive and depart.

If our devices are static then life is easy, and when called upon by Storport to enumerate our devices during initialization, all we have to do is describe our devices to Storport and we're golden. If on the other hand our devices are dynamic, then we'll need to have a way to know when those dynamic devices have arrived or have departed, and notify Storport of those occurrences.

 

Processing Requests

As with any other type of kernel driver, we need to be concerned about how we handle requests. When working with Storport, we need to determine, when we receive a request, whether it can be processed synchronously or asynchronously. If the request must be processed asynchronously, then our driver has to provide the means for this to be accomplished. All this really means is that we either have to use system work items or create our own worker threads to provide the background processing, which probably implies that some queuing is involved.

When processing a request, our driver has to be very cognizant of the IRQL at which it is being called. Unlike most device stacks, drivers in the storage stack have to deal with the fact that their entry points can be called at any IRQL up to and including DISPATCH_LEVEL. So we have to make sure that the operations we need to perform can be performed at that IRQL, or that there is some way to process the request later at a lower IRQL.

 

Finally, we have to determine how the request is going to be performed. Are we performing the operation or are we passing the request to some other entity to perform the operation (e.g., another driver or a user-mode service)?

 

Handling Failure

Failures in the storage stack are not well tolerated. Thus, when designing our driver we must ensure that we have all the resources necessary to process a request in place before initiating the request. In addition, if the resource is accessed across the network or via some other remote mechanism, we must consider the possibility that we may lose access to our device. Bad things can start happening; "Lost delayed write" messages come to mind....

 

Now that we've gotten some of the design issues out in the open, it is time to discuss the driver itself.

 

DriverEntry

DriverEntry, as all driver writers know, is the routine that is called when a driver is first loaded (we'll ignore export drivers at this time). This is the routine where we'll register our miniport with Storport. Since we're writing a virtual Storport miniport driver, we have to create and initialize a VIRTUAL_HW_INITIALIZATION_DATA structure, as defined in Figure 2. This structure exports the entry points that our miniport supports and provides some initialization data that Storport needs in order to understand our virtual adapter. We register this structure with Storport via a call to StorPortInitialize. Once that is done, Storport interacts with our miniport exclusively via the routines that we've defined in the structure.

typedef struct _VIRTUAL_HW_INITIALIZATION_DATA {
  ULONG HwInitializationDataSize;
  INTERFACE_TYPE AdapterInterfaceType;
  PHW_INITIALIZE HwInitialize;
  PHW_STARTIO HwStartIo;
  PHW_INTERRUPT HwInterrupt;
  PVIRTUAL_HW_FIND_ADAPTER HwFindAdapter;
  PHW_RESET_BUS HwResetBus;
  PHW_DMA_STARTED HwDmaStarted;
  PHW_ADAPTER_STATE HwAdapterState;
  ULONG DeviceExtensionSize;
  ULONG SpecificLuExtensionSize;
  ULONG SrbExtensionSize;
  ULONG NumberOfAccessRanges;
  PVOID Reserved;
  UCHAR MapBuffers;
  BOOLEAN NeedPhysicalAddresses;
  BOOLEAN TaggedQueuing;
  BOOLEAN AutoRequestSense;
  BOOLEAN MultipleRequestPerLu;
  BOOLEAN ReceiveEvent;
  USHORT VendorIdLength;
  PVOID VendorId;
  union {
    USHORT ReservedUshort;
    USHORT PortVersionFlags;
  };
  USHORT DeviceIdLength;
  PVOID DeviceId;
  PHW_ADAPTER_CONTROL HwAdapterControl;
  PHW_BUILDIO HwBuildIo;
  PHW_FREE_ADAPTER_RESOURCES HwFreeAdapterResources;
  PHW_PROCESS_SERVICE_REQUEST HwProcessServiceRequest;
  PHW_COMPLETE_SERVICE_IRP HwCompleteServiceIrp;
  PHW_INITIALIZE_TRACING HwInitializeTracing;
  PHW_CLEANUP_TRACING HwCleanupTracing;
} VIRTUAL_HW_INITIALIZATION_DATA, *PVIRTUAL_HW_INITIALIZATION_DATA;

 

Figure 2 - VIRTUAL_HW_INITIALIZATION_DATA Structure
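To make the registration step concrete, here is a hedged, user-mode sketch of the work DriverEntry performs: filling in the initialization data before handing it to Storport. The structure below is a trimmed stand-in carrying only a few of the fields from Figure 2 (the real definition lives in storport.h), the Vmp-prefixed names are hypothetical, and the sizes are illustrative.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Trimmed stand-ins for the WDK types so this sketch is self-contained;
 * the real VIRTUAL_HW_INITIALIZATION_DATA is shown in Figure 2. */
typedef unsigned long ULONG;
typedef unsigned char BOOLEAN;
#define TRUE 1
typedef int (*PHW_ROUTINE)(void *);   /* stand-in for the PHW_* types */

typedef struct _VHW_INIT_DATA_SKETCH {
    ULONG       HwInitializationDataSize;
    PHW_ROUTINE HwFindAdapter;        /* the five required entry points */
    PHW_ROUTINE HwInitialize;
    PHW_ROUTINE HwAdapterControl;
    PHW_ROUTINE HwResetBus;
    PHW_ROUTINE HwStartIo;
    ULONG       DeviceExtensionSize;
    ULONG       SrbExtensionSize;
    BOOLEAN     MultipleRequestPerLu;
} VHW_INIT_DATA_SKETCH;

/* Hypothetical miniport entry points; bodies elided for the sketch. */
static int VmpHwFindAdapter(void *ctx)    { (void)ctx; return 0; }
static int VmpHwInitialize(void *ctx)     { (void)ctx; return 0; }
static int VmpHwAdapterControl(void *ctx) { (void)ctx; return 0; }
static int VmpHwResetBus(void *ctx)       { (void)ctx; return 0; }
static int VmpHwStartIo(void *ctx)        { (void)ctx; return 0; }

/* The DriverEntry-time work: zero the init data (ours, not one Storport
 * owns), describe our entry points, and set our sizes and flags. */
static void VmpBuildInitData(VHW_INIT_DATA_SKETCH *init)
{
    memset(init, 0, sizeof(*init));
    init->HwInitializationDataSize = sizeof(*init);
    init->HwFindAdapter            = VmpHwFindAdapter;
    init->HwInitialize             = VmpHwInitialize;
    init->HwAdapterControl         = VmpHwAdapterControl;
    init->HwResetBus               = VmpHwResetBus;
    init->HwStartIo                = VmpHwStartIo;
    init->DeviceExtensionSize      = 256;  /* illustrative sizes */
    init->SrbExtensionSize         = 64;
    init->MultipleRequestPerLu     = TRUE;
    /* A real DriverEntry would now call StorPortInitialize(...),
     * passing the DriverObject, RegistryPath, and this structure. */
}
```
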

As a virtual Storport miniport, our driver is required to support five entry points:

 

 

  • HwFindAdapter
  • HwInitialize
  • HwAdapterControl
  • HwResetBus
  • HwStartIo

Some entry points, such as HwFreeAdapterResources, are unnecessary since we have no hardware resources, while other entry points are optional. The optional interfaces listed below may be necessary depending upon the architecture of the miniport.

 

 

  • HwProcessServiceRequest
  • HwCompleteServiceIrp
  • HwInitializeTracing
  • HwCleanupTracing

We'll talk about each of the required entry points in subsequent sections of this article and we'll leave the other entry points that we might implement for the follow on articles.

 

Besides filling in our driver's entry points, there are some other fields that need consideration:

 

 

  • HwInitializationDataSize - this needs to be set to sizeof(VIRTUAL_HW_INITIALIZATION_DATA).
  • AdapterInterfaceType - this indicates to Storport the bus that the miniport's virtual adapter resides on. Since we are implementing a miniport for a virtual adapter, we should set this field to Internal.
  • MultipleRequestPerLu - must be set to TRUE, and indicates that the miniport's virtual adapter can queue multiple requests per logical unit.
  • PortVersionFlags - typically set to 0, but can be set to SP_VER_TRACE_SUPPORT to indicate that the virtual adapter supports tracing. Unfortunately, we haven't found any documentation indicating what type of tracing is supported (we would assume WPP) or how to implement it.
  • DeviceExtensionSize - indicates the size, in bytes, of the miniport's adapter-specific storage area (similar to a WDM device's device extension). This is used by the miniport as storage for driver-defined adapter information. This space is allocated from non-paged pool.
  • SpecificLuExtensionSize - indicates the size, in bytes, of the virtual adapter's per-logical-unit storage area. This is used by the miniport as storage for driver-defined logical unit information, and the space is allocated from non-paged pool.
  • SrbExtensionSize - indicates the size, in bytes, of the miniport's per-SRB storage area. Each SRB provided to the miniport by Storport contains a pointer, in the SRB's SrbExtension field, to a non-paged pool block of the size specified in SrbExtensionSize. We can use this area any way we want on a per-SRB basis. For example, if we needed to queue an SRB to a worker thread for processing, we could embed a LIST_ENTRY field in the SRB extension for use in queuing.

Note that you'll also see the VendorIdLength, VendorId, DeviceIdLength, and DeviceId fields in the VIRTUAL_HW_INITIALIZATION_DATA structure. These fields may be supplied but are not required for a virtual miniport driver.
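The SrbExtension idea, parking an SRB on a queue via an embedded LIST_ENTRY, can be sketched in user mode. The list routines below are simplified stand-ins for the ntddk.h versions, and VMP_SRB_EXTENSION is a hypothetical extension layout; a driver would set SrbExtensionSize to sizeof(VMP_SRB_EXTENSION).

```c
#include <assert.h>
#include <stddef.h>

/* User-mode stand-ins for the kernel doubly-linked list primitives. */
typedef struct _LIST_ENTRY {
    struct _LIST_ENTRY *Flink, *Blink;
} LIST_ENTRY;

static void InitializeListHead(LIST_ENTRY *h) { h->Flink = h->Blink = h; }

static void InsertTailList(LIST_ENTRY *h, LIST_ENTRY *e)
{
    e->Blink = h->Blink;  e->Flink = h;
    h->Blink->Flink = e;  h->Blink = e;
}

static LIST_ENTRY *RemoveHeadList(LIST_ENTRY *h)
{
    LIST_ENTRY *e = h->Flink;
    h->Flink = e->Flink;
    e->Flink->Blink = h;
    return e;
}

static int IsListEmpty(const LIST_ENTRY *h) { return h->Flink == h; }

#define CONTAINING_RECORD(addr, type, field) \
    ((type *)((char *)(addr) - offsetof(type, field)))

/* A hypothetical per-SRB extension: Storport hands each SRB a block this
 * big, letting us park the SRB on a work queue until a thread can run it. */
typedef struct _VMP_SRB_EXTENSION {
    LIST_ENTRY QueueEntry;   /* links the SRB onto our pending queue */
    void      *Srb;          /* back-pointer to the owning SRB       */
} VMP_SRB_EXTENSION;
```
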

 

HwFindAdapter

HwFindAdapter is the first routine called by Storport. This routine is called at IRQL PASSIVE_LEVEL and is responsible for determining whether the specified adapter, in our case the virtual adapter, is supported, and if it is, providing additional configuration information about the adapter.

 

To provide this additional configuration information, our miniport must initialize the input PORT_CONFIGURATION_INFORMATION structure that is passed in. When our routine gets passed this structure, the Storport driver will already have initialized some fields in it, based on internal information and on information that was set in our VIRTUAL_HW_INITIALIZATION_DATA structure. Thus, it's important that we do not zero it. All our miniport needs to do is fill in the fields that the driver wants to use. This structure is defined in Figure 3.

typedef struct _PORT_CONFIGURATION_INFORMATION {
  ULONG  Length;
  ULONG  SystemIoBusNumber;
  INTERFACE_TYPE  AdapterInterfaceType;
  ULONG  BusInterruptLevel;
  ULONG  BusInterruptVector;
  KINTERRUPT_MODE  InterruptMode;
  ULONG  MaximumTransferLength;
  ULONG  NumberOfPhysicalBreaks;
  ULONG  DmaChannel;
  ULONG  DmaPort;
  DMA_WIDTH  DmaWidth;
  DMA_SPEED  DmaSpeed;
  ULONG  AlignmentMask;
  ULONG  NumberOfAccessRanges;
  ACCESS_RANGE  (*AccessRanges)[];
  PVOID  Reserved;
  UCHAR  NumberOfBuses;
  CCHAR  InitiatorBusId[8];
  BOOLEAN  ScatterGather;
  BOOLEAN  Master;
  BOOLEAN  CachesData;
  BOOLEAN  AdapterScansDown;
  BOOLEAN  AtdiskPrimaryClaimed;
  BOOLEAN  AtdiskSecondaryClaimed;
  BOOLEAN  Dma32BitAddresses;
  BOOLEAN  DemandMode;
  BOOLEAN  MapBuffers;
  BOOLEAN  NeedPhysicalAddresses;
  BOOLEAN  TaggedQueuing;
  BOOLEAN  AutoRequestSense;
  BOOLEAN  MultipleRequestPerLu;
  BOOLEAN  ReceiveEvent;
  BOOLEAN  RealModeInitialized;
  BOOLEAN  BufferAccessScsiPortControlled;
  UCHAR  MaximumNumberOfTargets;
  UCHAR  ReservedUchars[2];
  ULONG  SlotNumber;
  ULONG  BusInterruptLevel2;
  ULONG  BusInterruptVector2;
  KINTERRUPT_MODE  InterruptMode2;
  ULONG  DmaChannel2;
  ULONG  DmaPort2;
  DMA_WIDTH  DmaWidth2;
  DMA_SPEED  DmaSpeed2;
  ULONG  DeviceExtensionSize;
  ULONG  SpecificLuExtensionSize;
  ULONG  SrbExtensionSize;
  UCHAR  Dma64BitAddresses;
  BOOLEAN  ResetTargetSupported;
  UCHAR  MaximumNumberOfLogicalUnits;
  BOOLEAN  WmiDataProvider;
  STOR_SYNCHRONIZATION_MODEL SynchronizationModel;
  PHW_MESSAGE_SIGNALED_INTERRUPT_ROUTINE  HwMSInterruptRoutine;
  INTERRUPT_SYNCHRONIZATION_MODE  InterruptSynchronizationMode;
  MEMORY_REGION  DumpRegion;
  ULONG  RequestedDumpBufferSize;
  BOOLEAN  VirtualDevice;
  ULONG  ExtendedFlags1;
  ULONG  MaxNumberOfIO;
} PORT_CONFIGURATION_INFORMATION,
  *PPORT_CONFIGURATION_INFORMATION;

Figure 3 - PORT_CONFIGURATION_INFORMATION Structure

 

The initialization of this structure depends on what functionality our miniport is going to provide, so we need to at least consider setting certain fields in the PORT_CONFIGURATION_INFORMATION structure. These fields are:

 

 

  • VirtualDevice - this field must be set to TRUE, since it indicates to Storport that there is no real hardware behind this device (i.e., no DMA channels, no interrupts, no hardware resources). This field tells Storport that it must behave differently for this device.
  • ScatterGather - at present this field must be set to TRUE to indicate to Storport that our miniport's virtual adapter supports scatter/gather. If this is not set to TRUE, the virtual adapter will not fully initialize.
  • Master - setting this field to TRUE indicates that our miniport's virtual adapter is a bus master device. Since our miniport's adapter is virtual, this setting probably does not matter. Note that the documentation on this field does not indicate how its setting will affect the behavior of our miniport's virtual adapter.
  • CachesData - setting this field to TRUE indicates that our miniport's virtual adapter caches data, and causes Storport to notify the miniport when file system cache flushes or shutdowns occur. The design of our miniport determines what we set here.
  • Dma32BitAddresses - this field has no meaning for a virtual Storport miniport.
  • Dma64BitAddresses - this field has no meaning for a virtual Storport miniport.
  • NumberOfBuses - this needs to be set to indicate the number of buses that Storport will query looking for connected devices. By default this number is 0, but it should be set to a number between 1 and SCSI_MAXIMUM_BUSES (defined in "SRB.H").
  • MaximumNumberOfTargets - this field indicates the maximum number of devices that can be found on a bus. This number should be somewhere between 1 and SCSI_MAXIMUM_TARGETS_PER_BUS (defined in "SRB.H"). The default for this value is SCSI_MAXIMUM_TARGETS_PER_BUS.
  • MaximumNumberOfLogicalUnits - this field indicates the maximum number of logical units that can be found on a SCSI device. This number is somewhere between 1 and SCSI_MAXIMUM_LOGICAL_UNITS (defined in "SRB.H"). The default for this value is SCSI_MAXIMUM_LOGICAL_UNITS.
  • SynchronizationModel - the value for this field can be either StorSynchronizeFullDuplex or StorSynchronizeHalfDuplex. For a virtual adapter we would want to select StorSynchronizeFullDuplex, which means that our miniport driver can add new requests to its queue even while it is in the process of completing others. In addition, the miniport driver does not have to synchronize the execution of its HwStartIo and interrupt service routines (which a virtual Storport miniport would not have in the first place). The default is StorSynchronizeHalfDuplex, which is only intended for use when porting a SCSIport miniport driver to Storport. Keep in mind that the setting of this field may vary with what your design is trying to accomplish.
  • MapBuffers - if TRUE, indicates that data buffers must be mapped to system virtual addresses. Since we're a miniport for a virtual adapter and need to access the data directly, setting this to TRUE is mandatory.
  • MaximumTransferLength - this field is set by the miniport to indicate the maximum number of bytes that our virtual adapter can transfer in a single operation. Since our adapter is virtual, we can either leave the default, which is SP_UNINITIALIZED_VALUE (indicating unlimited), or set this field to a value of at least 64KB.
  • AlignmentMask - contains a mask indicating the alignment restrictions for buffers sent to the adapter. The Storport driver will ensure that any buffers provided are aligned on at least this boundary. This can sometimes be a pain for users of ScsiPassThrough, because their requests will fail if the input buffers are not correctly aligned. Since we are implementing a virtual adapter, we can set this to any value we want; however, being less restrictive is probably best. Possible values are 0 (byte aligned), 1 (word aligned), 3 (long aligned) and 7 (longlong aligned).
  • AdapterScansDown - for a virtual adapter the setting of this field does not matter. If TRUE, the adapter scans the bus from target 7 to 0; if FALSE, from target 0 to 7. The default is FALSE.
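Pulling the field list together, here is a sketch of what a virtual miniport's HwFindAdapter might do to the configuration block. PORT_CONFIG_SKETCH is a trimmed, user-mode stand-in carrying only the fields discussed above (the real structure is in Figure 3), and the topology and transfer-size values are purely illustrative.

```c
#include <assert.h>

typedef unsigned long ULONG;
typedef unsigned char UCHAR, BOOLEAN;
enum { FALSE, TRUE };

/* Stand-in for STOR_SYNCHRONIZATION_MODEL. */
typedef enum { StorSynchronizeHalfDuplex, StorSynchronizeFullDuplex }
    STOR_SYNC_MODEL_SKETCH;

/* Trimmed stand-in for PORT_CONFIGURATION_INFORMATION. */
typedef struct {
    ULONG   MaximumTransferLength;
    ULONG   AlignmentMask;
    UCHAR   NumberOfBuses;
    BOOLEAN ScatterGather;
    BOOLEAN Master;
    BOOLEAN MapBuffers;
    UCHAR   MaximumNumberOfTargets;
    UCHAR   MaximumNumberOfLogicalUnits;
    STOR_SYNC_MODEL_SKETCH SynchronizationModel;
    BOOLEAN VirtualDevice;
} PORT_CONFIG_SKETCH;

/* Fill in only the fields we care about -- Storport pre-initializes the
 * rest, so the real routine must never zero the whole structure. */
static void VmpFillPortConfig(PORT_CONFIG_SKETCH *cfg)
{
    cfg->VirtualDevice        = TRUE;   /* no real hardware behind us */
    cfg->ScatterGather        = TRUE;   /* required, or init fails    */
    cfg->Master               = TRUE;
    cfg->MapBuffers           = TRUE;   /* we touch the data directly */
    cfg->SynchronizationModel = StorSynchronizeFullDuplex;
    cfg->NumberOfBuses              = 1;  /* illustrative topology:   */
    cfg->MaximumNumberOfTargets     = 8;  /* 1 bus, 8 targets,        */
    cfg->MaximumNumberOfLogicalUnits = 1; /* 1 LUN per target         */
    cfg->MaximumTransferLength = 1024 * 1024;  /* 1MB per request     */
    cfg->AlignmentMask         = 0;     /* byte aligned: least strict */
}
```
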

HwInitialize

This routine is called after HwFindAdapter successfully returns, and its purpose is to initialize the miniport and to find all devices that are of interest to it. While the documentation says that this routine is called at DIRQL, we did not find this to be the case in the miniport we developed. In our driver, HwInitialize was always called at IRQL PASSIVE_LEVEL. And this makes sense, given that there's no interrupt object and that a driver for a virtual adapter doesn't really have a DIRQL.

 

How the virtual adapter is initialized will depend upon the design of the miniport being developed. For some, there is probably nothing to do, but for others, initialization may entail more work. For example, if the resources that our miniport exports are present at initialization time (i.e., we can access the file that backs our virtual disk) then our initialization may entail setting up the internal structures needed to allow these devices to be enumerated. However, if the devices that our miniport exports are not accessible (i.e., our resources are out on the network and we cannot connect to them), then there may be nothing to do at this time.

 

Follow on articles will address what needs to be set up and how to signal the existence of devices.

 

HwAdapterControl

This routine is called by Storport to perform synchronous operations that control the state or behavior of the adapter. Our miniport's HwAdapterControl routine is called with a SCSI_ADAPTER_CONTROL_TYPE parameter indicating the type of adapter control operation, and our miniport performs the required operation. The control types are:

 

 

  • ScsiQuerySupportedControlTypes - this call asks the miniport which control operations (i.e., the commands following this one in the list) are supported by the virtual adapter.
  • ScsiStopAdapter - this operation is requested when Storport wants to shut down the virtual adapter.
  • ScsiRestartAdapter - this operation is requested when Storport wants to reinitialize the virtual adapter.
  • ScsiSetBootConfig - this operation is requested when Storport wants to restore any settings on a virtual adapter that the BIOS might need to reboot.
  • ScsiSetRunningConfig - this operation is requested when Storport wants to restore any settings on a virtual adapter that the miniport driver might need to control while the system is running.
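A minimal HwAdapterControl might look like the sketch below. The enum and list structure are simplified stand-ins for the WDK definitions (the real query parameter is a SCSI_SUPPORTED_CONTROL_TYPE_LIST), and which operations we claim to support is illustrative.

```c
#include <assert.h>

/* Stand-ins for the WDK enum and status types; names follow the text. */
typedef enum {
    ScsiQuerySupportedControlTypes,
    ScsiStopAdapter,
    ScsiRestartAdapter,
    ScsiSetBootConfig,
    ScsiSetRunningConfig,
    ScsiAdapterControlMax
} SCSI_ADAPTER_CONTROL_TYPE;

typedef enum { ScsiAdapterControlSuccess, ScsiAdapterControlUnsuccessful }
    SCSI_ADAPTER_CONTROL_STATUS;

/* Trimmed stand-in for SCSI_SUPPORTED_CONTROL_TYPE_LIST. */
typedef struct {
    unsigned long MaxControlType;
    unsigned char SupportedTypeList[ScsiAdapterControlMax];
} CONTROL_TYPE_LIST_SKETCH;

static SCSI_ADAPTER_CONTROL_STATUS
VmpHwAdapterControl(SCSI_ADAPTER_CONTROL_TYPE type, void *params)
{
    switch (type) {
    case ScsiQuerySupportedControlTypes: {
        /* Tell Storport which control operations we handle. */
        CONTROL_TYPE_LIST_SKETCH *list = params;
        if (ScsiStopAdapter < list->MaxControlType)
            list->SupportedTypeList[ScsiStopAdapter] = 1;
        if (ScsiRestartAdapter < list->MaxControlType)
            list->SupportedTypeList[ScsiRestartAdapter] = 1;
        return ScsiAdapterControlSuccess;
    }
    case ScsiStopAdapter:
        /* Quiesce: stop issuing work, drain internal queues, etc. */
        return ScsiAdapterControlSuccess;
    case ScsiRestartAdapter:
        /* Re-establish whatever ScsiStopAdapter tore down. */
        return ScsiAdapterControlSuccess;
    default:
        /* Operations we did not claim to support. */
        return ScsiAdapterControlUnsuccessful;
    }
}
```
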

HwResetBus

This routine is called by Storport to clear any error conditions that exist on the bus. HwResetBus is always called at PASSIVE_LEVEL for a miniport supporting a virtual adapter. What we do in this routine again depends upon the architecture of our miniport. For some there may be real work involved; for others, this routine could merely notify Storport that the bus reset completed successfully.

 

HwStartIo

The HwStartIo routine is called by Storport to initiate an I/O request at IRQL DISPATCH_LEVEL. For Storport, an I/O request is described by an SRB. SRBs sent to HwStartIo are expected to complete within the timeout value that is specified in the SRB TimeoutValue field. If the SRB is not completed in the specified time, the request will be completed by Storport and the logical unit, target, and bus will be reset. The SRB, defined in Figure 4, contains a function code field which indicates the function to perform. The functions possible are:

 

 

  • SRB_FUNCTION_EXECUTE_SCSI
  • SRB_FUNCTION_ABORT_COMMAND
  • SRB_FUNCTION_RESET_DEVICE
  • SRB_FUNCTION_RESET_LOGICAL_UNIT
  • SRB_FUNCTION_RESET_BUS
  • SRB_FUNCTION_TERMINATE_IO
  • SRB_FUNCTION_RELEASE_RECOVERY
  • SRB_FUNCTION_RECEIVE_EVENT
  • SRB_FUNCTION_SHUTDOWN
  • SRB_FUNCTION_FLUSH
  • SRB_FUNCTION_IO_CONTROL
  • SRB_FUNCTION_LOCK_QUEUE
  • SRB_FUNCTION_UNLOCK_QUEUE
  • SRB_FUNCTION_WMI
  • SRB_FUNCTION_PNP

typedef struct _SCSI_REQUEST_BLOCK {
  USHORT  Length;
  UCHAR  Function;
  UCHAR  SrbStatus;
  UCHAR  ScsiStatus;
  UCHAR  PathId;
  UCHAR  TargetId;
  UCHAR  Lun;
  UCHAR  QueueTag;
  UCHAR  QueueAction;
  UCHAR  CdbLength;
  UCHAR  SenseInfoBufferLength;
  ULONG  SrbFlags;
  ULONG  DataTransferLength;
  ULONG  TimeOutValue;
  PVOID  DataBuffer;
  PVOID  SenseInfoBuffer;
  struct _SCSI_REQUEST_BLOCK  *NextSrb;
  PVOID  OriginalRequest;
  PVOID  SrbExtension;
  union {
      ULONG  InternalStatus;
      ULONG  QueueSortKey;
  };
  UCHAR  Cdb[16];
} SCSI_REQUEST_BLOCK, *PSCSI_REQUEST_BLOCK;

Figure 4 - SCSI_REQUEST_BLOCK Structure

Of the functions listed above, the most important for the miniport is SRB_FUNCTION_EXECUTE_SCSI. This function indicates to our miniport which SCSI operation is requested for the specified virtual device. While we won't list them all here, the operation to be performed is dictated by the OperationCode field contained within the SRB's CDB (Command Descriptor Block) field, partially depicted in Figure 5. The interpretation of the CDB, like an IRP stack location's Parameters field, depends upon the OperationCode. For example, if the OperationCode is SCSIOP_READ (0x08) or SCSIOP_WRITE (0x0A), then you would interpret the structure as a CDB6READWRITE structure. Because all CDBs start with a UCHAR OperationCode field, you can first cast the CDB to a CDB6GENERIC structure, switch on the OperationCode, and then in that particular command handler reinterpret the CDB as its proper type.

typedef union _CDB {

    //
    // Generic 6-Byte CDB
    //

    struct _CDB6GENERIC {
       UCHAR  OperationCode;
       UCHAR  Immediate : 1;
       UCHAR  CommandUniqueBits : 4;
       UCHAR  LogicalUnitNumber : 3;
       UCHAR  CommandUniqueBytes[3];
       UCHAR  Link : 1;
       UCHAR  Flag : 1;
       UCHAR  Reserved : 4;
       UCHAR  VendorUnique : 2;
    } CDB6GENERIC;

    //
    // Standard 6-byte CDB
    //

    struct _CDB6READWRITE {
        UCHAR OperationCode;    // 0x08, 0x0A - SCSIOP_READ, SCSIOP_WRITE
        UCHAR LogicalBlockMsb1 : 5;
        UCHAR LogicalUnitNumber : 3;
        UCHAR LogicalBlockMsb0;
        UCHAR LogicalBlockLsb;
        UCHAR TransferBlocks;
        UCHAR Control;
    } CDB6READWRITE;

// Lots of other information left out to save space

} CDB, *PCDB;

Figure 5 - Partial CDB Structure
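As an example of interpreting a CDB, the helper below decodes a 6-byte read/write command straight from the raw bytes, mirroring the CDB6READWRITE layout in Figure 5. The function name and byte-wise approach are ours for illustration; a miniport would normally cast the SRB's Cdb field to a PCDB instead. Note the SCSI convention that a transfer length of 0 in a 6-byte read/write means 256 blocks.

```c
#include <assert.h>
#include <stdint.h>

#define SCSIOP_READ  0x08
#define SCSIOP_WRITE 0x0A

/* Decode the 21-bit LBA and block count from a raw 6-byte read/write CDB.
 * Byte 1 holds LogicalBlockMsb1 in its low 5 bits (the high 3 bits are
 * the LogicalUnitNumber, unused here). Returns 1 on success, 0 if the
 * opcode is not a 6-byte read or write. */
static int Cdb6DecodeReadWrite(const uint8_t cdb[6],
                               uint32_t *lba, uint32_t *blocks)
{
    if (cdb[0] != SCSIOP_READ && cdb[0] != SCSIOP_WRITE)
        return 0;
    *lba = ((uint32_t)(cdb[1] & 0x1F) << 16)   /* LogicalBlockMsb1 */
         | ((uint32_t)cdb[2] << 8)             /* LogicalBlockMsb0 */
         |  (uint32_t)cdb[3];                  /* LogicalBlockLsb  */
    *blocks = cdb[4] ? cdb[4] : 256;           /* TransferBlocks   */
    return 1;
}
```
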

 

What we do in our miniport's SRB_FUNCTION_EXECUTE_SCSI handler for each CDB SCSI operation (SCSIOP) depends, again, upon the type of device that we are exporting. If, for example, we were presenting a locally based file as a SCSI disk, then we would probably be doing file operations to satisfy the requested operation. If the storage device we are supporting were located across the network, we might be doing network operations directly via TCP/UDP or Windows Kernel Sockets. In fact, if we wanted to, there would be no reason why we couldn't communicate with a user mode service or some other driver on the system to provide access to our storage. So, again, the operations we perform depend upon the device that our driver exports. All devices would probably support SCSIOP_READ, SCSIOP_WRITE, SCSIOP_INQUIRY, and SCSIOP_MODE_SENSE; disk, tape, CD-ROM, and other devices may support additional operations. To find out which, you need to consult a reference guide or examine the existing Microsoft disk, tape, and CD-ROM class drivers (the source code for these is contained within the WDK) to figure out what needs to be handled.

 

This all probably seems pretty easy and obvious, except for one thing. Remember that this routine is called at IRQL DISPATCH_LEVEL, which makes doing any network or file operations impossible at this point! Thank your native deity here, because when our HwStartIo function is called, Storport does not expect the input SRB to be completed synchronously. All Storport wants to know upon return from HwStartIo is whether or not the command was initiated (i.e., accepted), in which case we return TRUE. If HwStartIo returns FALSE, this indicates that the input SRB was not successfully initiated.

 

An SRB being initiated does not mean completed; it means that the miniport has accepted the SRB for processing. So for any SRB we cannot complete immediately, we must have a way of processing it at a later time. As with any other type of Windows driver, we are responsible for propagating our own execution. If we can't process the command immediately, one way to process it later is to queue the input SRB to a worker thread (that we create), perform the requested operation in the worker thread (which runs at PASSIVE_LEVEL), and then notify Storport that the SRB was completed when our worker thread has finished processing the request. Remember, Storport expects our miniport to complete SRBs that were queued to it (just like the I/O Manager expects us to complete IRPs that were queued to us). When the SRB is eventually completed by our miniport's call to StorPortNotification, specifying RequestComplete, the completed SRB contains the appropriate SRB_STATUS_XXXXXX status indicating whether or not the operation completed successfully.
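The queue-and-complete pattern can be illustrated with a user-mode analogue: pthreads stand in for a system thread, a boolean stands in for the queued SRB, and the completed flag stands in for the StorPortNotification(RequestComplete, ...) call. None of these names are WDK APIs.

```c
#include <pthread.h>
#include <stdbool.h>

/* Shared state between our "HwStartIo" and the worker thread. A real
 * miniport would guard a LIST_ENTRY queue of SRBs with a spin lock. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cv;
    bool            pending;    /* a one-slot "queue" for brevity */
    bool            completed;  /* set when the worker finishes   */
} VMP_WORK_CTX;

/* "HwStartIo": accept the request and return immediately with TRUE --
 * the request is initiated, not completed. */
static int VmpStartIo(VMP_WORK_CTX *ctx)
{
    pthread_mutex_lock(&ctx->lock);
    ctx->pending = true;
    pthread_cond_signal(&ctx->cv);
    pthread_mutex_unlock(&ctx->lock);
    return 1;
}

/* Worker: runs "at PASSIVE_LEVEL", performs the real work, and then
 * reports completion (the StorPortNotification stand-in). Handles one
 * request and exits, to keep the sketch short. */
static void *VmpWorker(void *arg)
{
    VMP_WORK_CTX *ctx = arg;
    pthread_mutex_lock(&ctx->lock);
    while (!ctx->pending)
        pthread_cond_wait(&ctx->cv, &ctx->lock);
    /* ...the file or network I/O would happen here... */
    ctx->pending   = false;
    ctx->completed = true;   /* "RequestComplete" */
    pthread_mutex_unlock(&ctx->lock);
    return NULL;
}
```
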

 

Summary

Storport is a welcome relief to storage driver writers wishing to write a miniport for a virtual adapter, and this article starts us down the path of being able to design and implement one. We discussed the architecture, flow of control, and key routines to be implemented when developing a miniport. In the following articles we will build upon this foundation.

 

 

References:

 

  • The SCSI Bus & IDE Interface, by Friedhelm Schmidt, 2nd Edition, published by Addison-Wesley. ISBN: 0-201-17514-2
  • FWB's Guide To Storage, by Norman Fong, published by FWB Incorporated, Part No: 07-00841-001

Related Articles
Are You Writing a Port Driver?
Writing a Virtual Storport Miniport Driver (Part II)

User Comments

"MapBuffers"
The article says the following about the MapBuffers field of the PORT_CONFIGURATION_INFORMATION structure: "Since we're a miniport for a virtual adapter and need to access the data directly, setting this to TRUE is mandatory."

Actually, the MapBuffers field appears in both the PORT_CONFIGURATION_INFORMATION and VIRTUAL_HW_INITIALIZATION_DATA structures, and the DDK docs say the following about the MapBuffers field of VIRTUAL_HW_INITIALIZATION_DATA: "Not valid for virtual miniport drivers. The virtual miniport driver must map all data buffers into virtual address space."

In fact, setting the field to TRUE (or STOR_MAP_ALL_BUFFERS) in either of the two structures has no effect, and indeed it seems the SRB's DataBuffer is always a virtual address.

20-Nov-09, Valeriu Ilie


"Great reading"

13-Nov-09, pradeep raman


