Related Issues

Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2

Shared Disk Versus Shared-Nothing

You may see various documents that use terms like shared disk clusters and non-shared disk or shared-nothing clusters. These terms can be misleading and cause confusion because their meaning depends on the context of the discussion.

When talking about the physical connectivity of devices, shared disk clusters means that multiple computers have direct physical access to any given storage unit (for example, multiple hosts are directly connected to a disk drive on a SCSI bus that the computers are both connected to). Non-shared disk or shared-nothing clusters in this context means that any given disk is only physically connected to one computer. See Figure 17.


Figure 17: Physical view of cluster topologies

In the context of file systems or data access from applications, shared disk means that applications running on multiple computers in a cluster can access the same disk directly at the same time. To support this, the file system must coordinate concurrent access to a single disk from multiple hosts (for example, a cluster file system). Clearly, shared physical access is required for this configuration. When talking about application or data access, non-shared disk means that only applications running on one computer can access data on any given disk directly. In this case, the physical disk may or may not be connected to multiple computers, but if it is, only the connection from one computer is in use at any one time. See Figure 18 below.


Figure 18: Application view of cluster topologies

SAN Versus NAS

There are two industry-wide terms that refer to externally attached storage:

  • Storage Area Networks (SAN)

  • Network Attached Storage (NAS)

Having two similar-sounding terms leads to some confusion, so it is worth discussing the differences between the two technologies before delving into storage area network details.

Storage area networks (SANs), see Figure 19 below, are typically built up using storage-specific network technologies. Fibre Channel is the current technology leader in this space. Servers connect to storage and access data at the block level. In other words, to the server, a disk drive out on the storage area network is accessed using the same read and write disk-block primitives as though it were a locally attached disk. Typically, data and requests are transmitted using a storage-specific protocol (usually based on the SCSI family of protocols). These protocols are tuned for the low-latency, high-bandwidth data transfers required by storage infrastructure.


Figure 19: Storage Area Network

While Fibre Channel is by far the leading technology today, other SAN technologies have been proposed, for example SCSI over InfiniBand and iSCSI (the SCSI protocol running over a standard IP network). All of these technologies allow a pool of devices to be accessed from a set of servers, decoupling the compute needs from the storage needs.
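The distinguishing feature of block-level access is that the server addresses storage by block number, with no file-system structure visible at that layer. The following sketch illustrates the access pattern; a scratch file stands in for the raw device, since on a real system the path would be a device node such as /dev/sdb or \\.\PhysicalDrive1 (hypothetical names, not taken from this document):

```python
import os

BLOCK_SIZE = 512  # classic disk sector size

def write_block(fd, block_num, data):
    """Write one block at the given block number (no file system involved)."""
    assert len(data) == BLOCK_SIZE
    os.pwrite(fd, data, block_num * BLOCK_SIZE)

def read_block(fd, block_num):
    """Read one block at the given block number."""
    return os.pread(fd, BLOCK_SIZE, block_num * BLOCK_SIZE)

# Demo: treat a scratch file as a tiny 8-block "disk".
fd = os.open("scratch.img", os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, 8 * BLOCK_SIZE)
write_block(fd, 3, b"Q" * BLOCK_SIZE)
assert read_block(fd, 3) == b"Q" * BLOCK_SIZE
os.close(fd)
```

On a SAN, these same read/write-block primitives travel over Fibre Channel or iSCSI instead of a local bus, but the server-side access pattern is unchanged.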

In contrast, network attached storage (NAS), see Figure 20 below, is built using standard network components such as Ethernet or other LAN technologies. The application servers access storage using file system functions such as open file, read file, write file, and close file. These higher-level functions are encapsulated in protocols such as CIFS, NFS, or AppleShare and run across standard IP-based connections.


Figure 20: Network Attached Storage

In a NAS solution, the file servers hide the details of how data is stored on disks and present a high level file system view to application servers. In a NAS environment, the file servers provide file system management functions such as the ability to back up a file server.
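The open/read/write/close sequence described above can be sketched as follows. A local path stands in for a NAS share so the sketch is self-contained; on Windows the path would typically be a UNC name such as \\filer\share\report.txt (a hypothetical name), with the redirector translating each call into CIFS or NFS requests:

```python
# Stand-in for a NAS path such as r"\\filer\share\report.txt" (hypothetical).
path = "report.txt"

with open(path, "w") as f:          # open file
    f.write("quarterly results\n")  # write file
                                    # close file happens on leaving the block

with open(path) as f:               # open file
    contents = f.read()             # read file

assert contents == "quarterly results\n"
```

The application never sees disk blocks at all; the file server on the other end of the connection decides how the data is laid out on its disks.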

As SAN technology prices decrease and the need for highly scalable and highly available storage solutions increases, vendors are turning to hybrid solutions that combine the centralized file server simplicity of NAS with the scalability and availability offered by SAN as shown in Figure 21 below.


Figure 21: Hybrid NAS and SAN solution

The following table contrasts the SAN and NAS technologies.

SAN Versus NAS Technologies

  Characteristic               | Storage Area Network (SAN)                                                       | Network Attached Storage (NAS)
  Application server access    | Block-level access                                                               | File-level access
  Communication protocol       | SCSI over Fibre Channel; iSCSI (SCSI over IP)                                    | CIFS, NFS, AppleShare
  Network physical technology  | Typically storage-specific (e.g. Fibre Channel), but may be high-speed Ethernet | General-purpose LAN (e.g. Gigabit Ethernet)
  Example storage vendors      | Compaq StorageWorks SAN family; EMC Symmetrix                                    | Network Appliance NetApp Filers; Maxtor NASxxxx; Compaq TaskSmart N-series

Some in the industry believe that several different technologies will each win a share of the storage space; others believe that, in the end, a single network interconnect will cover SAN and NAS needs as well as basic inter-computer networking needs. Over time, the Windows platform and Windows Clustering technologies will support additional interconnect technologies as they become important to end-customer deployments.