
Appendix B: Additional Information About Quorum Modes

Updated: October 20, 2011

Applies To: Windows Server 2008, Windows Server 2008 R2

This appendix supplements the information in Failover Cluster Step-by-Step Guide: Configuring the Quorum in a Failover Cluster, which we recommend that you read first. The appendix provides additional information about each quorum mode available in failover clusters in Windows Server 2008 and Windows Server 2008 R2, plus quorum recommendations for certain specific types of clusters.

In this appendix

Node Majority quorum mode

Node and Disk Majority quorum mode

Node and File Share Majority quorum mode

No Majority: Disk Only quorum mode

Selecting the appropriate quorum mode for a particular cluster

Local two-node cluster

Single-node cluster

Cluster with no shared storage

Even-node cluster

Multi-site or geographically dispersed cluster

Node Majority quorum mode

On a cluster that uses the Node Majority quorum mode, each node gets a vote, and each node's local system disk stores a copy of the cluster configuration (the replica). When the cluster configuration changes, the change is propagated to each of these disks. The change is considered committed, that is, made persistent, only when it has reached the disks on half the nodes (rounding down) plus one. For example, in a five-node cluster, the change must reach two plus one nodes, or three nodes total.

The Node Majority mode is usually the best quorum mode for a cluster with an odd number of nodes.
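The commit rule described above can be sketched in a few lines of Python. This is an illustrative sketch of the arithmetic only, not any Windows Server API; the function names are invented for this example.

```python
def node_majority_quorum(n_nodes: int) -> int:
    """Votes needed to commit a change: half the nodes (rounding down) plus one."""
    return n_nodes // 2 + 1

def failures_tolerated(n_nodes: int) -> int:
    """Node failures the cluster can sustain while still holding quorum."""
    return n_nodes - node_majority_quorum(n_nodes)

# A five-node cluster needs 3 votes to commit and tolerates 2 node failures;
# a two-node cluster needs 2 votes and tolerates none.
print(node_majority_quorum(5), failures_tolerated(5))  # 3 2
print(node_majority_quorum(2), failures_tolerated(2))  # 2 0
```

Note how the arithmetic explains why odd node counts are preferred: going from three nodes to four raises the votes needed without raising the failures tolerated.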

Recommended for:

  • Clusters with an odd number of nodes

Not recommended for:

  • Clusters with an even number of nodes


Number of nodes (N) | Number of replicas | Default node failures tolerated
--------------------|--------------------|--------------------------------
1                   | 1                  | 0
2                   | 2                  | 0
3                   | 3                  | 1
4                   | 4                  | 1
5                   | 5                  | 2
6                   | 6                  | 2
7                   | 7                  | 3
8                   | 8                  | 3
9                   | 9                  | 4
10                  | 10                 | 4
11                  | 11                 | 5
12                  | 12                 | 5
13                  | 13                 | 6
14                  | 14                 | 6
15                  | 15                 | 7
16                  | 16                 | 7

We do not recommend that the Node Majority mode be used with a two-node cluster because such a cluster cannot tolerate any failures.

Node and Disk Majority quorum mode

In a Node and Disk Majority quorum configuration, each node and a designated physical disk (the "disk witness") each get a vote. The cluster configuration is stored by default on the system disk of each node in the cluster and on the disk witness, and it is kept consistent across the cluster. A change is considered committed, that is, made persistent, only when it has reached half of these disks (rounding down) plus one. For example, in a four-node cluster with a disk witness, the four node disks plus the disk witness make five, so the change must reach two plus one disks, or three disks total.

We recommend using a disk witness instead of a file share witness because the disk witness stores a replica of the cluster configuration while the file share witness does not. The disk witness can therefore prevent a "partition in time": because it holds a copy of the replica, it ensures that the cluster has the most up-to-date configuration, and a "split" scenario is also less likely to occur.

If you create a cluster with an even number of nodes and the cluster software automatically chooses Node and Disk Majority, the cluster software also chooses a disk witness: the smallest disk that is larger than 512 MB. If multiple disks meet that criterion, the cluster software picks the disk listed first by the operating system. You can choose a different disk witness by running the quorum configuration wizard.

The Node and Disk Majority quorum is economical, allowing the addition of another voter without the need to purchase another node.
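Under the same arithmetic as Node Majority, the disk witness simply adds one more voter, which is what buys an even-node cluster an extra tolerated failure. This is an illustrative sketch of that calculation, not a cluster API; the function name is invented for this example.

```python
def node_and_disk_majority_tolerance(n_nodes: int) -> int:
    """With a disk witness there are n_nodes + 1 votes; quorum is a majority of them."""
    voters = n_nodes + 1
    majority = voters // 2 + 1
    # Combined node + disk-witness failures the cluster can sustain:
    return voters - majority

# A four-node cluster plus a disk witness has 5 votes, needs 3 to commit,
# and tolerates 2 failures; two nodes plus a witness tolerate 1 failure.
print(node_and_disk_majority_tolerance(4))  # 2
print(node_and_disk_majority_tolerance(2))  # 1
```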

Recommended for:

  • Clusters with an even number of nodes and available shared storage

Not recommended for:

  • Clusters with no shared storage

  • Multi-site clusters, where no single shared disk is accessible to all nodes


Number of nodes (N) | Number of replicas | Default node + disk failures tolerated (N/2, rounding down)
--------------------|--------------------|-------------------------------------------------------------
1                   | 2                  | 0
2                   | 3                  | 1
3                   | 4                  | 1
4                   | 5                  | 2
5                   | 6                  | 2
6                   | 7                  | 3
7                   | 8                  | 3
8                   | 9                  | 4
9                   | 10                 | 4
10                  | 11                 | 5
11                  | 12                 | 5
12                  | 13                 | 6
13                  | 14                 | 6
14                  | 15                 | 7
15                  | 16                 | 7
16                  | 17                 | 8

We do not recommend using single-node clusters because they cannot tolerate any failures.

Node and File Share Majority quorum mode

In a Node and File Share Majority quorum configuration, each node and the file share witness get a vote. The replica is stored by default on the system disk of each node in the cluster and is kept consistent across those disks. However, no copy is stored on the file share witness, which is the main difference between this mode and the Node and Disk Majority mode. The file share witness keeps track of which node has the most up-to-date replica, but it does not hold a replica itself. This can lead to scenarios in which only a node and the file share witness survive, yet the cluster does not come online because the surviving node lacks the up-to-date replica; bringing it online would cause a "partition in time." This mode therefore does not solve the partition-in-time problem, but it does prevent a "split" scenario from occurring.

For this reason, we recommend a disk witness over a file share witness where shared storage allows it: the disk witness holds a copy of the replica, so it can prevent a partition in time and ensure that the cluster has the most up-to-date configuration.

The cluster configuration is stored by default on the system disk of each node in the cluster, and information about which nodes contain the latest configuration is noted on the file share witness. This information is kept synchronized across the nodes and file share, and a change is only considered to have been committed, that is, made persistent, if that change is made or noted on half of the total locations (rounding down) plus one. For example, in a four-node cluster with a file share witness, the four disks on the nodes plus the file share witness make five, so the change must be made or noted in two plus one locations, or three locations total.

The following describes an example of how a partition in time can occur:

  1. You have a local two-node cluster with NodeA and NodeB. Both nodes are running and their replicas of the cluster configuration are synchronized.

  2. NodeB is turned off.

  3. Changes are made on NodeA. These changes are only reflected on NodeA’s replica of the cluster configuration.

  4. NodeA is turned off.

  5. NodeB is turned on. NodeB does not have the most recent replica of the cluster configuration, and the file share witness does not have a replica at all. However, the file share witness contains the information that NodeA has the most recent replica, so the file share witness prevents NodeB from forming the cluster.
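The five steps above can be modeled as a toy state machine. This is a deliberately simplified sketch of the behavior described, not the Cluster service's actual implementation; all class and function names are invented for this example.

```python
class FileShareWitness:
    """Holds no replica; only records which node has the newest configuration."""
    def __init__(self):
        self.latest_holder = None

class Node:
    def __init__(self, name):
        self.name = name
        self.config_version = 0

def commit_change(node, witness):
    """A change lands on the running node's replica; the witness notes the holder."""
    node.config_version += 1
    witness.latest_holder = node.name

def may_form_cluster(node, witness):
    """The witness blocks any node it does not record as holding the newest replica."""
    return witness.latest_holder in (None, node.name)

node_a, node_b = Node("NodeA"), Node("NodeB")
witness = FileShareWitness()
commit_change(node_a, witness)            # step 3: NodeB is off; change lands on NodeA only
# steps 4-5: NodeA is turned off, NodeB comes back alone
print(may_form_cluster(node_b, witness))  # False: the witness prevents a partition in time
```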

The Node and File Share Majority quorum is economical, allowing the addition of another voter without the need to purchase another node or additional storage.

Recommended for:

  • Multi-site clusters

  • Clusters with an even number of nodes but no shared storage

Not recommended for:

  • Single-node clusters

 

Number of nodes (N) | Number of replicas | Default node + file share failures tolerated (N/2, rounding down)
--------------------|--------------------|-------------------------------------------------------------------
1                   | 1                  | 0
2                   | 2                  | 1
3                   | 3                  | 1
4                   | 4                  | 2
5                   | 5                  | 2
6                   | 6                  | 3
7                   | 7                  | 3
8                   | 8                  | 4
9                   | 9                  | 4
10                  | 10                 | 5
11                  | 11                 | 5
12                  | 12                 | 6
13                  | 13                 | 6
14                  | 14                 | 7
15                  | 15                 | 7
16                  | 16                 | 8

We do not recommend using single-node clusters because they cannot tolerate any failures.

No Majority: Disk Only quorum mode

The No Majority: Disk Only quorum mode behaves much like the legacy quorum configuration in Windows Server 2003. We do not recommend this mode because it presents a single point of failure: loss of the single shared disk causes the entire cluster to fail. The replica stored on the shared disk is considered the primary copy of the cluster configuration and is always kept the most up to date, and the shared storage interconnect must make this disk accessible to all members of the cluster. If a node has been out of communication and may have an out-of-date replica, the data on the shared disk is considered the authoritative copy of the cluster configuration. Keeping a copy on each node allows the shared disk replica to be automatically repaired or replaced if it is lost or becomes corrupted. The authoritative copy contributes the only vote toward quorum; the nodes contribute no votes.

Because the shared disk holds the one and only vote in the cluster, the Cluster service starts (and therefore brings resources online) only if that disk is available and online. If the disk is not online, the cluster does not have quorum, and the Cluster service waits, periodically trying to restart, until the disk comes back online. Because the configuration on the shared disk is authoritative, the cluster is guaranteed to start with the most up-to-date configuration.

In the case of a “split” scenario, any group of nodes that is not connected to the shared disk is prevented from forming a cluster since the nodes will have no votes. This ensures that only a group of nodes connected to the disk forms a cluster, and the nodes can run without the possibility of another subsection of the cluster also running.

In most cases, the No Majority: Disk Only quorum mode is not recommended for use because it presents a single point of failure.

Recommended for:

  • None

Not recommended for:

  • Any production cluster configuration, because the quorum disk is a single point of failure

As shown in the following table, with the No Majority: Disk Only quorum mode, the cluster can tolerate the failure of n-1 nodes, but cannot tolerate any failures of the quorum disk.

 

Number of nodes | Failures of nodes tolerated | Failures of quorum disk tolerated
----------------|-----------------------------|----------------------------------
1               | 0                           | 0
2               | 1                           | 0
n               | n-1                         | 0
16              | 15                          | 0

Selecting the appropriate quorum mode for a particular cluster

It is a best practice to decide which quorum mode you will use before placing the cluster into production. You might want to reevaluate your existing quorum mode when you add additional nodes.

This section provides recommendations for the quorum mode for the following types of clusters:

Local two-node cluster

Single-node cluster

Cluster with no shared storage

Even-node cluster

Multi-site or geographically dispersed cluster

Local two-node cluster

The following table provides recommendations for the quorum mode for a local two-node cluster.

 

Quorum mode                  | Recommended | Not recommended
-----------------------------|-------------|----------------
Node Majority                |             | X
Node and Disk Majority       | X (best)    |
Node and File Share Majority | X           |
No Majority: Disk Only       |             | X

This is the most common cluster configuration. For a standard local two-node cluster, we recommend the Node and Disk Majority mode: it increases the availability of the cluster without requiring a third node. Although either type of witness (disk or file share) works, we recommend a disk witness because a "partition in time" is less likely to occur. The disk witness stores a replica of the cluster configuration, increasing the likelihood that the most up-to-date configuration is available to the cluster at any given time.

However, the Node and File Share Majority mode does not require an additional physical disk in cluster storage. This can be very beneficial if you have budgetary constraints.

Important
If you have a two-node cluster, the Node Majority mode is not recommended, as failure of one node will lead to failure of the entire cluster.

Single-node cluster

The following table provides recommendations for the quorum mode for a single-node cluster.

 

Quorum mode                  | Recommended | Not recommended
-----------------------------|-------------|----------------
Node Majority                | X           |
Node and Disk Majority       |             | X
Node and File Share Majority |             | X

This is the simplest configuration. Because only one node can vote, adding a disk or file share witness consumes additional storage without allowing the cluster to sustain any additional failures. The easiest and cheapest configuration is therefore the Node Majority mode, in which the cluster runs if and only if the single node is running.

This configuration is widely used for development and testing, providing the ability to use the cluster infrastructure without the expense and complexities of a second computer. This solution also enables using the health monitoring and resource management features of the cluster infrastructure on the local node. It can also be used as an initial step when planning to add more nodes at a later date (however the quorum mode should then be adjusted appropriately).

Cluster with no shared storage

The following table provides recommendations for the quorum mode for a cluster with no shared storage.

 

Quorum mode                  | Recommended | Not recommended
-----------------------------|-------------|----------------
Node Majority                | X           |
Node and Disk Majority       |             | X
Node and File Share Majority | X           |
No Majority: Disk Only       |             | X

This is a specialized configuration that does not include shared disks, but has other features that make it consistent with failover cluster requirements. Because there are no shared disks, the Node and Disk Majority mode cannot be used.

This would be used in the following situations:

  • Clusters that host applications that can fail over, but where there is some other, application-specific way to keep data consistent between nodes (for example, database log shipping for keeping database state up-to-date, or file replication for relatively static data).

  • Clusters that host applications that have no persistent data, but where the nodes need to cooperate in a tightly coupled way to provide consistent volatile state.

  • Clusters using solutions from independent software vendors (ISVs). If storage is abstracted from the Cluster service, independent software vendors have much greater flexibility in how they design sophisticated cluster scenarios.

Even-node cluster

The following table provides recommendations for the quorum mode for a cluster with an even number of nodes.

 

Quorum mode                  | Recommended | Not recommended
-----------------------------|-------------|----------------
Node Majority                |             | X
Node and Disk Majority       | X           |
Node and File Share Majority | X           |
No Majority: Disk Only       |             | X

Even-node clusters (with 2, 4, 6, or 8 nodes, for example) that use the Node Majority mode are not entirely economical, because they provide no additional quorum benefit over their (n-1)-node counterparts (of 1, 3, 5, or 7 nodes). For example, a 3-node cluster and a 4-node cluster using Node Majority can each tolerate the failure of only one node while maintaining quorum. With the additional vote from a disk witness or file share witness, a 4-node cluster (using the Node and Disk Majority mode or the Node and File Share Majority mode) can sustain two failures, making it more resilient at little or no additional cost.
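The economics can be checked with the majority formula used throughout this appendix. This is an illustrative sketch, not a cluster API; the function name is invented for this example.

```python
def tolerated(voters: int) -> int:
    """Failures tolerated given a total vote count (majority = floor(voters/2) + 1)."""
    return voters - (voters // 2 + 1)

# Under Node Majority, a 3-node and a 4-node cluster both tolerate one failure...
print(tolerated(3), tolerated(4))  # 1 1
# ...but giving the 4-node cluster a witness vote raises the tolerance to two.
print(tolerated(4 + 1))            # 2
```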

Multi-site or geographically dispersed cluster

The following table provides recommendations for the quorum mode for a multi-site or geographically dispersed cluster.

 

Quorum mode                  | Recommended | Not recommended
-----------------------------|-------------|----------------
Node Majority                | X           |
Node and File Share Majority | X (best)    |
No Majority: Disk Only       |             | X

Many of the benefits of multi-site clusters derive from the fact that they work slightly differently from conventional, local clusters. Setting up a cluster whose nodes are separated by hundreds or even thousands of miles affects the choices you make on everything from the quorum mode to how you configure the cluster's network and data storage. For some business applications, even an event as unlikely as a fire, flood, or earthquake can pose an intolerable risk to business operations. For truly essential workloads, distance can provide the only hedge against catastrophe: by failing server workloads over to servers separated by even a few miles, you can prevent disastrous data loss and application downtime. Windows Server 2008 and Windows Server 2008 R2 support multi-site clustering over unlimited distance, making the solution resilient to local, regional, or even national disasters. This section outlines some of the considerations unique to multi-site clustering and examines what they mean for a disaster recovery strategy.

With Windows Server 2008 and Windows Server 2008 R2, you can deploy a multi-site cluster to automate the failover of applications in situations where the following occurs:

  • Communication between sites has failed.

  • One site is down and is no longer available to run applications.

A multi-site cluster is a failover cluster that has the following attributes:

  • Applications are set up to fail over just as in a single-site cluster. The Cluster service provides health monitoring and failure detection for the applications, the nodes, and the communications links.

  • The cluster has multiple storage arrays, with at least one storage array deployed at each site. This ensures that in the event of a failure of any one site, the other site or sites will have local copies of the data that can be used to continue to provide highly available services and applications.

  • The cluster nodes are connected to storage in such a way that in the event of a failure of a site or the communication links between sites, the nodes on a given site can access the storage on that site. In other words, in a two-site configuration, the nodes in Site A are connected to the storage in Site A directly, and the nodes in Site B are connected to the storage in Site B directly. The nodes in Site A can continue without accessing the storage on Site B and vice versa.

  • The cluster’s storage fabric or host-based software provides a way to mirror or replicate data between the sites so that each site has a copy of the data. There is no shared mass storage that all of the nodes access, which means that data must be replicated between the separate storage arrays to which each node is attached.

In the Node and File Share Majority quorum mode, all nodes and the file share witness get a vote to determine majority for cluster membership. This helps eliminate the failure point in the old model, which assumed that the disk would always be available; if the disk failed, the cluster failed. This makes the Node and File Share Majority quorum mode particularly well suited to multi-site clusters. A single file server can serve as a witness to multiple clusters (with each cluster using a separate file share witness on the file server).

Note
We recommend that you place the file share witness at a third site which does not host a cluster node.

For example, suppose you have two physical sites, Site A and Site B, each with two nodes. You also have a file share witness with a single vote stored at a third physical site, such as a smaller server at a branch office. You now have a total of five votes (two from Site A, two from Site B, and one from the file share witness).

  • Disaster at Site A: You lose two votes from Site A, yet with the two votes from Site B and the one vote from the file share witness, you still have three of five votes and maintain quorum.

  • Disaster at Site B: You lose two votes from Site B, yet with the two votes from Site A and the one vote from the file share witness, you still have three of five votes and maintain quorum.

  • Disaster at the file share witness site: You lose one vote from the file share witness, but with two votes from Site A and two votes from Site B, you still have four of five votes and maintain quorum.

With the use of the file share witness at a third site, you can now sustain complete failure of one site and keep your cluster running.
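The three disaster scenarios above reduce to counting surviving votes against a strict majority. This is an illustrative sketch of that count, not any cluster tooling; the site names and function name are invented for this example.

```python
# Two nodes at each of two sites, plus a file share witness at a third site.
votes = {"SiteA": 2, "SiteB": 2, "FileShareWitness": 1}
total = sum(votes.values())  # 5 votes in all

def keeps_quorum(lost_sites):
    """The cluster survives while the remaining voters form a strict majority."""
    surviving = total - sum(votes[s] for s in lost_sites)
    return surviving > total // 2

print(keeps_quorum(["SiteA"]))             # True: 3 of 5 votes remain
print(keeps_quorum(["FileShareWitness"]))  # True: 4 of 5 votes remain
print(keeps_quorum(["SiteA", "SiteB"]))    # False: only the witness vote remains
```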

If a file share witness at a site independent of your cluster sites is not an option for you, you can still use a multi-site cluster with the Node Majority quorum mode. A majority of the node votes is then necessary to keep the cluster operating.

Note
A multi-site cluster with three nodes at three separate sites is possible. It would continue to function if one of the sites were unavailable, but would cease to function if two sites became unavailable.

For example, suppose you have a multi-site cluster consisting of five nodes, three at Site A and two at Site B. If communication between the two sites breaks down, the three nodes at Site A can still communicate with each other, which is more than half of the total, so all of the nodes at Site A stay up. The two nodes at Site B can communicate with each other but not with the majority, so they drop out of cluster membership. If Site A went down, bringing up the cluster at Site B would require manual intervention to override the lack of a majority (for more information about forcing a cluster to start without quorum, see "Troubleshooting: how to force a cluster to start without quorum" in Failover Cluster Step-by-Step Guide: Configuring the Quorum in a Failover Cluster).

Note
This configuration is less fault-tolerant than the Node and File Share Majority mode, because the loss of the primary site causes the entire multi-site cluster to fail.
