
Using iSCSI with Virtual Server 2005 R2

The iSCSI protocol, which unifies the TCP/IP networking protocol with the SCSI storage protocol, defines the rules and processes for transmitting and receiving block storage data over TCP/IP networks. Support for iSCSI is provided with Microsoft® Windows Server™ 2003, Microsoft Windows® 2000, and Microsoft Windows XP, and in Microsoft Virtual Server 2005 R2. With iSCSI, the hardware needed for connecting servers to storage is less expensive and less complex than with the common alternative, Fibre Channel. By using iSCSI, you can minimize the amount you spend on hardware for connecting your servers to the storage they use.

iSCSI is not only less expensive than Fibre Channel but also much easier to configure, both when deploying new hardware and when reconfiguring after a hardware component has been replaced (a task that is quite difficult with Fibre Channel).

Support for iSCSI in Microsoft products makes it easier to use attached storage for scenarios that use virtualization, including the following two scenarios:

  • Virtual machines (guests) running on servers. With Virtual Server 2005, you can create and run one or more virtual machines, each with its own operating system, on a single physical computer. The physical computer is called the host, and an operating system running on a host is called a guest.
  • Virtual machines (guests) running within a server cluster. This configuration increases the availability of the services that your guests provide to clients.
    There are several ways to configure a server cluster in which guests run. You can create a host cluster, a guest cluster, or a combination of the two. With host clustering, the physical host is the cluster node. If the host stops running, all of its guests are restarted on another physical host. Host clustering protects against failure of a physical host (hardware failure of a computer). With guest clustering, a guest is a cluster node, and if either the guest operating system or the clustered application on the guest fails, the guest can fail over to another guest, either on the same host or on a different host.
    For more information about host clustering, see "Virtual Server Host Clustering Step-by-Step Guide for Virtual Server 2005 R2" on the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=55644).

Structure of This White Paper

This white paper contains three main sections. The first section provides diagrams showing several architectural choices you can make when implementing iSCSI connections between servers and storage. The second section provides a diagram showing iSCSI being used for a Virtual Server guest cluster. The third section provides more details about how iSCSI communication works.

Architectural Choices for iSCSI Connections Between Servers and Storage

When designing the architecture for iSCSI connections between servers and storage, there are several options for the adapter that you use in the server, and an additional option for the storage hardware:

  • On the server end, you can use a network adapter (also known as a network interface card or NIC). Network adapters interact with the operating system through the network stack by using the Network Driver Interface Specification (NDIS).
    A similar option is to use a TCP Offload Engine (TOE) network adapter, which also uses NDIS.
  • On the server end, you can use a host bus adapter (HBA). Host bus adapters interact with the operating system through the storage stack by using one of the storage port drivers provided by Microsoft.
    A similar option is to use a device called a multifunction offload device that combines the functions of an iSCSI host bus adapter and a network adapter. Multifunction offload devices interact with the operating system through multiple driver stacks and can use multiple types of drivers, including NDIS miniport, Winsock Kernel, and storage miniport drivers.
  • On the storage end, instead of using iSCSI, you can use Fibre Channel. If you use Fibre Channel, you must use an iSCSI-to-Fibre Channel converter somewhere in the connection.

Using a Network Adapter in the Server

For an iSCSI connection between a server and storage, you can use a network adapter instead of an HBA in the server. Network adapters include standard adapters as well as adapters that use TCP Offload Engine (TOE) technology, which allows the adapter to do some of the work formerly done by the computer processor.

Windows Server 2003 with Service Pack 1 (SP1) communicates through iSCSI by using the Microsoft iSCSI Software Initiator 2.0, available on the Microsoft Download Center (http://go.microsoft.com/fwlink/?linkid=44352). The Microsoft iSCSI Software Initiator communicates with the network adapter through the network stack (iSCSI over TCP/IP). For more information about the implementation of iSCSI with Windows operating systems, see Microsoft Storage Technologies - iSCSI (http://go.microsoft.com/fwlink/?linkid=50522).

A network adapter used for iSCSI is no different from any other network adapter. The only requirement is that it must have the "Designed for Windows" logo.

The following figure shows an iSCSI implementation that uses a network adapter rather than an HBA in the server.

Server using network adapter and iSCSI Initiator

An advantage of using a network adapter in the server is that network adapters are a standard component in all computers, and the Microsoft iSCSI Software Initiator is a free download.

Using a Host Bus Adapter in the Server

For an iSCSI connection between a server and storage, you can decide to use a host bus adapter (HBA) in the server. Another alternative is to use a multifunction offload device in the server. Such a device supports the functions of both an iSCSI HBA and a network adapter. In a server running Windows Server 2003 with SP1, you must use the service included in the Microsoft iSCSI Software Initiator 2.0 (or later), and your HBAs or multifunction offload devices must work with this service.

To download the Microsoft iSCSI Software Initiator, go to the Microsoft Download Center (http://go.microsoft.com/fwlink/?linkid=44352). If you want to install just the service, at the beginning of setup, select Initiator Service only.

The following figure shows an iSCSI implementation that uses an HBA in the server.

Server using iSCSI host bus adapter

Using Fibre Channel on the Storage End of the Connection

You can connect a server that communicates through iSCSI to an iSCSI bridge device that supports Fibre Channel (or other storage protocols) at the storage end of the connection. (For this implementation, it does not matter which type of iSCSI device is used on the server, a network adapter or an iSCSI HBA.) The operating system communicates with the iSCSI bridge device as it would with any iSCSI target, without regard to the protocol being used at the storage end of the connection.

Note
A server cluster that uses Fibre Channel storage behind an iSCSI-to-Fibre Channel device is considered an iSCSI cluster, not a Fibre Channel cluster.

The following figure shows an iSCSI implementation that uses Fibre Channel on the storage end of the connection. The figure shows a network adapter in the server, but an HBA in the server would also work.

iSCSI connected through bridge to Fibre Channel

One advantage of using Fibre Channel for the storage is that you can work with Fibre Channel hardware that your organization has already purchased and learned how to use. By connecting your network infrastructure to your Fibre Channel infrastructure, you can create new possibilities for using existing storage in your organization. If you are purchasing new hardware, however, a native iSCSI target is less expensive and simpler to use than Fibre Channel.

Diagram of iSCSI in a Virtual Server Guest Cluster

One valuable use of iSCSI is in a Virtual Server guest cluster. For a Virtual Server guest cluster, iSCSI is required if you want to configure the guests to fail over from one physical host to another. Consider the way a physical cluster node communicates with cluster storage, as contrasted with a guest in a guest cluster. The physical cluster node uses a physical bus or physical network to communicate with cluster storage. The guest in the guest cluster, however, is a virtual machine and can communicate only through a virtual network, not through a physical bus. Therefore, a guest in a guest cluster must communicate with cluster storage through a storage protocol unified with a network protocol, that is, through iSCSI. Neither SCSI nor Fibre Channel will work in this situation because they require a physical storage interconnect.

The following figure illustrates how a guest in a two-node guest cluster can use iSCSI. Note that the virtual network of the guest (shown in the figure) uses a standard network adapter that communicates by using Microsoft iSCSI Software Initiator 2.0.

Virtual Server guest cluster using iSCSI

Configuration Requirements for iSCSI in the Context of a Server Cluster

When you use iSCSI in the context of a server cluster, you must make sure that clustered disks are always remapped when a node is restarted. In addition, clustered disks must be fully mounted by the iSCSI service before the Cluster service attempts to bring them online.

To configure clustered disks as described in the previous paragraph, use the graphical interface included in the Microsoft iSCSI Software Initiator 2.0. In the interface, click the Targets tab and log on to one of the clustered disks. (If Log On is unavailable, click Details to log off so that you can log on again.) Make sure that Automatically restore this connection when the system boots is selected. Then, after using Disk Management to assign a drive letter to the disk, in the interface for the Microsoft iSCSI Software Initiator, click the Bound Volumes/Devices tab, click Add, and type the drive letter of the disk.

The Microsoft iSCSI Software Initiator is available on the Microsoft Download Center (http://go.microsoft.com/fwlink/?linkid=44352).

Understanding iSCSI

This section provides brief descriptions of the following:

  • How a server communicates with storage through iSCSI.
  • How devices are identified through iSCSI.
  • Which security protocols can be used in an iSCSI network.

How a Server Communicates with Storage through iSCSI

When a server is connected to a storage array through iSCSI, the configuration includes the hardware elements in the following list. For diagrams showing different configurations that use these hardware elements, see Architectural Choices for iSCSI Connections Between Servers and Storage.

  • A server that contains an appropriate adapter in one of two categories:
    • A network adapter.
    • An HBA or multifunction offload device. In a server running Windows Server 2003 with SP1, the HBA must work with the iSCSI service in the Microsoft iSCSI Software Initiator 2.0.
  • An Ethernet network.
  • Storage devices connected to the network:
    • One option is storage that uses a native iSCSI interface.
    • Another option is storage that does not use a native iSCSI interface. This option requires an iSCSI bridge device, which translates iSCSI commands to the protocol used by the storage (usually Fibre Channel).
    Note
    A cluster that uses Fibre Channel storage behind an iSCSI bridge device is considered an iSCSI cluster, not a Fibre Channel cluster.

When data is transmitted from server to storage, it is transformed in the following stages:

  • Application on server makes request. The application passes an I/O request to the operating system, which recognizes that the request must be handled by the driver stack.
  • SCSI-class driver prepares the request. The request is converted to SCSI commands and placed in data structures called SCSI command descriptor blocks (CDBs).
  • iSCSI device driver adds target information. SCSI CDBs are packaged in a protocol data unit (PDU), which carries additional information, including the logical unit number (LUN) of the target device.
  • TCP driver encapsulates the information. The PDU is encapsulated in a TCP segment.
  • IP adds routing information. The IP address of the final destination device is added so that the packet can be routed.
  • Network layer (typically Ethernet) prepares and sends the packet across the physical network. The network layer translates logical addresses and names into physical addresses for transmission across the network.
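The stages above can be sketched in code. The following is a minimal, illustrative Python sketch of the first two wrapping steps (SCSI CDB, then iSCSI PDU); the field layouts are greatly simplified, real CDBs and PDUs carry many more fields, and the helper names are invented for this example.

```python
import struct

def build_scsi_read10_cdb(lba: int, blocks: int) -> bytes:
    """Build a 10-byte SCSI READ(10) command descriptor block (opcode 0x28)."""
    # opcode, flags, 4-byte logical block address, group, 2-byte length, control
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

def wrap_in_iscsi_pdu(cdb: bytes, lun: int) -> bytes:
    """Package a CDB in a (simplified) iSCSI PDU that adds target information.

    A real SCSI Command PDU header is 48 bytes; here only an opcode byte and
    the 8-byte LUN field are modeled, to show the layering described above.
    """
    header = struct.pack(">BQ", 0x01, lun)
    return header + cdb.ljust(16, b"\x00")  # CDB field is padded to 16 bytes

# The PDU would next be handed to the TCP driver for encapsulation,
# then to IP for routing, then to Ethernet for physical transmission.
pdu = wrap_in_iscsi_pdu(build_scsi_read10_cdb(lba=2048, blocks=8), lun=0)
print(len(pdu))  # 9-byte simplified header + 16-byte CDB field
```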

How Devices are Identified Through iSCSI

In a network that connects various computers to each other, each computer must be uniquely identified. It is the same in an iSCSI network: all devices must be identified, including each volume (LUN) within each storage array. To do this, iSCSI uses both TCP/IP addressing information and an iSCSI qualified name (IQN). An IQN is permanent and globally unique in an iSCSI installation.
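The general shape of an IQN can be illustrated with a simple validator. This is a hedged sketch: the regular expression below captures only the broad structure of an IQN (the "iqn." prefix, the year and month in which the naming authority registered its domain name, the reversed domain name, and an optional device-specific suffix), not every rule in the iSCSI naming specification.

```python
import re

# Rough shape of an iSCSI qualified name: iqn.yyyy-mm.reversed-domain[:device]
IQN_PATTERN = re.compile(
    r"^iqn\.\d{4}-\d{2}"      # "iqn." plus registration year-month
    r"\.[a-z0-9][a-z0-9.-]*"  # reversed domain name, e.g. com.microsoft
    r"(:[^\s]+)?$"            # optional device-specific suffix
)

def is_valid_iqn(name: str) -> bool:
    return bool(IQN_PATTERN.match(name))

print(is_valid_iqn("iqn.1991-05.com.microsoft:server1.disk2"))  # True
print(is_valid_iqn("server1.disk2"))                            # False
```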

Device Discovery in an iSCSI Configuration

For a relatively simple iSCSI configuration, it is not difficult for an administrator to associate the correct IQN with a volume (LUN). However, as an implementation becomes more complex, the methods and interfaces must scale, or else the administrator's tasks can become unmanageable. The following discovery mechanisms can be used with the Microsoft iSCSI Software Initiator:

  • Manual specification of identifiers. For a relatively simple iSCSI configuration, the administrator can specify the necessary addresses or identifiers.
  • SendTargets. With this method, the administrator must specify part of the configuration manually, but the rest can be communicated automatically between devices.
  • Internet Storage Name Service (iSNS). The Internet Storage Name Service provides scalable naming and resource discovery services for storage devices on the IP network. Microsoft offers the Microsoft iSNS Server as a free download at Microsoft iSNS Server (http://go.microsoft.com/fwlink/?LinkID=55830).

Security in an iSCSI Configuration

Because iSCSI can operate in the Internet environment, security can be critically important. The iSCSI protocol is designed for use with several security protocols including Internet Protocol security (IPsec) and the Challenge Handshake Authentication Protocol (CHAP). For more information about iSCSI security, see "Microsoft Storage Technologies: Deploying iSCSI SANs" on the Microsoft Web Site (http://go.microsoft.com/fwlink/?LinkID=55829).
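As one illustration of how CHAP authenticates an initiator without sending the secret across the network, the following Python sketch computes a CHAP response as defined in RFC 1994: the MD5 hash of the identifier byte, the shared secret, and the random challenge sent by the target. The names and values here are invented for the example.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"example-chap-secret"          # shared secret, configured on both sides
identifier, challenge = 1, os.urandom(16)  # sent by the target to the initiator

# The target computes the same value locally and compares it with the
# initiator's response; a match authenticates the initiator.
assert chap_response(identifier, secret, challenge) == \
       chap_response(identifier, secret, challenge)
```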

