Chapter 15 - High Availability Options

This chapter describes and recommends high availability alternatives for Microsoft® SQL Server™ 2000. Although many other sources detail the technical aspects of implementing high availability options, the unique value of this chapter is that it describes the strategic side of high availability, from SQL Server 2000 failover clustering to replication, and more. The details of this chapter concentrate on SQL Server solutions; however, most of the content will apply generically to any operations plan.

High availability is a phrase used to describe the continuous operation of a computer system. It is a complicated subject because it covers so many different areas within a company—from people, policies, and processes to hardware and software. There are two basic challenges in providing a highly available computer system. The first is to protect the primary server in the best manner possible. The second is to provide a remote copy of the database for the worst-case disaster. Each challenge has specific solutions, but a combination of solutions will yield the best results. The intent of this chapter is to better acquaint you with the high availability solutions available to you, describe optimal uses of each, and even suggest ways of combining solutions for maximum benefit.

The Importance of People, Policies, and Processes

Continuous computing is the ability of an organization to provide computerized application services all the time. You are likely familiar with other terms that describe the same fundamental principle: 24 x 7, uptime, and availability. Continuous computing is a difficult thing for organizations to maintain, and any organization should be aware that the cost and complexity of this undertaking rises the closer to 100 percent uptime a company attempts.

Are There Any 100 Percent Solutions?

No. Although there may be some instances of companies that have achieved a sustained record of 100 percent for some applications on some platforms over a defined period of time, these cases are rare. The majority of computer professionals are realistic and practical enough to attempt some form of 99.x percent solution, where x is some number of nines. The term "three nines" refers to 99.9 percent uptime, or about 8.76 hours of downtime per year. "Five nines" refers to 99.999 percent uptime, or approximately 5 minutes of downtime per year. For more information, see Chapter 16, "Five Nines: The Ultimate in High Availability."
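
As a rough illustration of the arithmetic behind these figures, the allowable downtime for a given uptime target can be calculated directly. The following Transact-SQL is an illustrative sketch only; the uptime percentages are the ones quoted above.

    -- Convert an uptime percentage into allowable downtime per year (8,760 hours per year).
    SELECT '99.9 percent'   AS UptimeTarget, (1.0 - 0.999)   * 8760 AS HoursDownPerYear
    UNION ALL
    SELECT '99.999 percent' AS UptimeTarget, (1.0 - 0.99999) * 8760 AS HoursDownPerYear
    -- Returns approximately 8.76 hours and 0.09 hours (a little over 5 minutes) respectively.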

Meeting High Uptime

Meeting high uptime standards is a challenge that many companies face. This section describes some things to consider when planning to meet these standards.

To plan for high uptime, consider doing the following:

  • Increasing training budgets and making sure the right training and certifications are chosen. 

  • Increasing awareness of the problem at all levels of the company. 

  • Creating a technical operations plan. 

The Microsoft Readiness Framework can be used to assess the technology readiness of a company. Microsoft Operations Framework can help guide operations management and staff in setting policies, procedures, and skill sets.

Uptime Solutions and Risk Management

Investments in uptime solutions are nothing more than basic risk management—from the proactive side. How much are you willing to spend? To answer this, you first need to find out how much downtime costs your company. Do not try to be extremely precise; the calculation is complicated and an approximate figure is good enough. Factors such as payroll costs, lost revenue, overhead allocations, and support costs need to be considered. For example, if your server hosts data for an e-commerce application, the cost of even a small amount of downtime could be high. In that case, you might be willing to invest more in minimizing or eliminating downtime.
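
To make this concrete, the following sketch adds up a few of the factors mentioned above to reach an approximate hourly cost of downtime. Every figure here is a hypothetical placeholder; substitute your own estimates.

    -- Hypothetical figures only; precision is not the goal of this estimate.
    DECLARE @LostRevenuePerHour money, @IdlePayrollPerHour money,
            @SupportCostPerHour money, @OverheadPerHour money
    SELECT  @LostRevenuePerHour = 5000,   -- orders not taken while the application is down
            @IdlePayrollPerHour = 1200,   -- staff who cannot work during the outage
            @SupportCostPerHour = 300,    -- help desk and recovery effort
            @OverheadPerHour    = 250     -- allocated fixed costs
    SELECT  @LostRevenuePerHour + @IdlePayrollPerHour +
            @SupportCostPerHour + @OverheadPerHour AS ApproxCostPerHourOfDowntime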

People: The Best Solution

Continuous operation is not just about technology. It is about people. Hiring the best people is crucial to a smooth running operation. Following closely in importance behind the role of staff are the roles of good policies and processes to help your people perform to their full potential. The role of technology is equally important and is related to the people, policies, and processes in your operation. For instance, the technology used in the company will dictate what skills you need to hire, what training plans you implement, and specific processes you define for managing that technology.

Roles of DBAs

Database administrators (DBAs) have many roles in a data center. For example, they often handle support calls. Sometimes the support call comes in with the message, "My database is broken," because the user sees a message that the database could not be contacted. Studies have shown that 80 percent of downtime can be attributed to network outages, with the remainder caused by problems with applications, operating systems, or databases. So a DBA who gets such a call needs to troubleshoot all of these pieces.

Databases and directories are the primary storage for valuable company information. The other pieces in the process (network, operating systems, and applications) are just mechanisms for connecting to and working with the data. So, an important focus in a disaster recovery plan is the database, and the solutions are sometimes complex. The DBA needs to participate in creating, maintaining, and practicing disaster recovery to be well trained to handle the failover and failback situations.

Some DBAs must also be good at programming so that they can participate in all phases of the development process, including the architecture, design, and development of an application, as well as its administration after the implementation is complete. Design and architecture work requires training and experience in normalization and database design. Development participation includes knowing about application connectivity, connectivity constructs with the many different options available today (ADO, DAO, OLE DB, ODBC, and so on), Transact-SQL coding, version control, quality assurance techniques, and more.

The most difficult problem to solve in a high availability design is the database. The DBA is the one who must design, manage, and practice the recovery scenarios.

The Essentials of an Operations Plan

This chapter will not explain operations plans in depth, but since a good plan is critical to any discussion of high availability, it must be one of the first items mentioned. Products like Microsoft Operations Framework, coupled with the Microsoft Readiness Framework and the Microsoft Solutions Framework, are helpful in formulating an operations plan specifically tailored to IT staff and data centers using the Microsoft family of products. For more information about the Microsoft Operations Framework, documents on the subject of high availability project planning, and details about best practices and detailed project guidelines for disaster recovery and high availability, see https://www.microsoft.com/technet.

Planning Redundancy

The key to high availability is redundancy. This includes people and processes as well as hardware, software, and data. For example, redundant power supplies are now standard; you should also order redundant disk controllers and have at least one extra remote server to fail over to. Unfortunately, this duplication of technical pieces increases the cost of uptime solutions.

Beyond hardware considerations, redundancy planning also includes personnel and training issues. What is your plan for when a night operator calls in sick? Is someone else trained to cover those duties? Or, looking at redundancy from the process and policy point of view, can your sales staff take manual orders when the computer is down? What happens if you do need to fail over to a remote site? Is your plan in place for continuing the business?

For the redundant computers within the same room as the primary servers, here are more things to think about: Can you separate the servers onto a different public power grid? Do you need to do this if you have adequate UPS or generator capacity?

You can create redundant copies of your data in two ways. The first is the asynchronous method, which is sometimes implemented as a store and forward mechanism, and sometimes as a buffering or queuing mechanism. The basic premise is that the data is sent to the primary server and some other mechanism is used to eventually get the same exact data to another server. SQL Server replication and Microsoft Message Queuing use the asynchronous method. The second, or synchronous, method means that the application originally submitting the data must wait until the data is written to a primary server and a secondary server, or even multiple servers, before returning control to the application. Examples of the synchronous method are mirroring in hardware, and two-phase commit in software.

Note Some companies use the word "synchronous" when the product is actually near real time. Terms such as "low latency" and "high keep-up rate" describe near real time solutions, not synchronous ones.

Segmenting Your Solutions

Redundancy is not the only part of the solution. Segmenting your solutions into different pieces can minimize the impact of downtime by relegating it to a particular section of the company. There are different kinds of segmentation: for example, putting the accounting application on a different server from the order entry server, which in turn could be separate from the warehouse server. Another kind is segmentation by business division, where each division has its own server, most likely sharing data through some form of replication. In these scenarios, when one server or division has an outage, the other segments are not affected.

Segmentation from a database perspective can mean different things. The first is to segment by usage by having a primary data collection server and a secondary read-only server for reporting. The next is to segment to multiple servers by application, for example, one server for accounting and another for manufacturing. Another way is to segment a very large database (VLDB) and distribute it over multiple servers with one acting as the primary, making it appear as one database through partitioned views.
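
To illustrate the last approach, a partitioned view presents member tables that hold different ranges of data as a single table. The sketch below is hypothetical: the server, database, and table names are placeholders, ServerB is assumed to be defined as a linked server, and each member table would carry a CHECK constraint on the partitioning column.

    -- Hypothetical distributed partitioned view over a local and a remote member table.
    CREATE VIEW dbo.Customers
    AS
    SELECT * FROM dbo.Customers_1_4999                      -- local member table
    UNION ALL
    SELECT * FROM ServerB.Sales.dbo.Customers_5000_9999     -- remote member table
    GO
    -- Applications query dbo.Customers as if it were one table; SQL Server routes
    -- each request to the member table that owns the qualifying range.
    SELECT * FROM dbo.Customers WHERE CustomerID = 7042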

Be aware that while segmenting your solutions can minimize downtime, it does complicate a disaster recovery plan because you now have to plan redundancy for each segment.

Manual Procedures

Backup plans extend beyond the technical arena. Manual procedures must be in place whenever possible to give some continuity of work in the event of a computer outage. For example, can the company still take orders on paper? This is an important point because not every computer failure can be anticipated and recovered from. At times, companies must fall back to manual procedures to keep a business running.

Increased Corporate Awareness: The Importance of Communication

Creating redundancy with whatever mechanism you have chosen brings an increased complexity to any organization. Everyone involved, especially the upper levels of management, needs to be aware that with the added complexity comes a responsibility to create a support organization that knows how to deal with the solutions. This means planning larger training budgets, spending more time and money on disaster recovery plans, purchasing redundant hardware, and so on.

High Availability and Mobile and Disconnected Devices

Mobile and disconnected users provide special challenges for planning high availability solutions. Some of the data is stored locally on the device and some is stored centrally in a server. Redundancy can be built on the server side to protect the data, but what about the local data on the device? This is a special problem for corporations to solve. Each device will bring its own challenges. For example, local contact numbers might be stored in a cell phone, or hard disks in a laptop can hold valuable information. So, mobile and disconnected devices must be considered when defining high availability for a corporation.

The Technical Side of High Availability

From a technical point of view, high availability is generally considered in two parts: hardware and software alternatives. The best solution for availability is usually a combination of elements from each of these sections. As mentioned earlier, the key to high availability is redundancy, which is applicable not only to hardware and data, but also to software. Be sure to budget for the extra licensing costs of software copies.

Hardware Alternatives

Three basic problems must be solved in the technical arena for high availability. These are protection against physical hardware failures, logical failures, and failures causing the loss of a data center. Physical hardware failures can be at the component level—disk drives, memory, CPU, network, and so on. The most difficult problem to solve is the loss of an entire data center, so we will concentrate on this problem.

To begin, always do research before you make a hardware investment.

Create standard operational policies about who is authorized to install new hardware or work on a computer; even choose specific times of the day to schedule any hardware work. Always have support contracts for every piece of equipment used in a production environment.

To address the possibility of data center loss, organizations typically create a remote data center to gain the security of redundancy. The uses of this remote data center add complexity to redundancy design. Most companies realize the data in the remote databases can still be used for important activities, such as read-only reporting. Many availability options, however, require exclusive use of the remote databases to keep them constantly synchronized with the production databases, and this prevents the remote databases from being used for reporting. Companies wanting to use the remote databases for reporting will choose an availability option that leaves the remote databases available for reports, at least during certain times of the day.

Disk Drives

Always buy the fastest hard disk drives. Most drives are measured by revolutions per minute (RPM), seek time, and latency. The best recommendation for SQL Server is that multiple smaller drives are better than one large one. This goes for disk arrays too: multiple smaller arrays are better than one big one. This creates a configuration that allows the distribution of various files onto separate drives (including transaction log files, operating system files, database files, and even indexes).
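
One way to act on this recommendation is to place data files, indexes, and the transaction log on separate arrays when the database is created. The drive letters, sizes, and names below are hypothetical placeholders for separate physical arrays.

    -- Hypothetical layout: data on one array (E:), a filegroup for indexes on a second (F:),
    -- and the transaction log on a third (G:).
    CREATE DATABASE Sales
    ON PRIMARY
        ( NAME = Sales_Data,    FILENAME = 'E:\SQLData\Sales_Data.mdf',  SIZE = 2000MB ),
    FILEGROUP Indexes
        ( NAME = Sales_Indexes, FILENAME = 'F:\SQLIndex\Sales_Ix.ndf',   SIZE = 1000MB )
    LOG ON
        ( NAME = Sales_Log,     FILENAME = 'G:\SQLLogs\Sales_Log.ldf',   SIZE = 500MB )
    GO
    -- Nonclustered indexes can then be placed on the separate filegroup, for example:
    -- CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID) ON Indexes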

RAID

Striping is the concept of using multiple disk drives to create a single logical disk. For example, your C:\ drive can be an array of multiple physical drives. RAID (redundant array of independent disks) is a technology used for striping disk drives. RAID comes in several different flavors, and special disk controllers are needed to implement them.

RAID 0 is pure striping (with no redundancy). Use this for best performance; it is frequently used, but does not by itself contribute to increased availability and redundancy. A minimum of two physical drives is needed to make one logical RAID 0 device.

RAID 1 is mirroring, meaning that every drive or drive array has another spare drive associated with it. When a block of data is written to the primary drive, the same block is also written to the corresponding mirror drive. Use hardware mirroring because hardware implementations have better performance and stability than software mirroring implementations, such as the software mirroring in Microsoft Windows® 2000.

RAID 2, 3, 4, and 6 are not typically used in the latest database environments.

RAID 5 is striping with parity. For every byte written across a striped set, an extra byte, called the parity byte, is written to another disk. The parity byte is similar to a checksum. RAID 5 takes a minimum of three physical disks to create a striped set. Most RAID 5 systems today are hot-swappable, meaning that while the system is running, one of the drives in the array can be removed and replaced with no interruption of service. RAID 5 is best used in a single server environment whether or not another redundancy solution is being used. RAID 5 is considered the simplest solution to manage and maintain; however, there is a performance penalty to pay for writing the extra parity data. The bottom line is that larger databases that need the extra speed will opt for RAID 1+0 instead of RAID 5.

RAID 1+0 is a combination of striping (RAID 0) and mirroring (RAID 1). The idea is to have a corresponding mirror array for every primary array. This way you get the performance of striping and the redundancy protection of using mirrored drives. With the new technology, the mirrored set no longer has to be attached to the same server as the original drives. This is called distance mirroring and can be used to get an instantaneous backup of your database. This is one of the best solutions in a high availability scenario because it speeds up your primary goal of getting a redundant copy of your database.

SANs

Storage Area Networks, or SANs, are disk storage solutions delivered over high-speed networks, usually some type of fibre channel. The I/O requests from the applications go through the Windows operating system to the disk drives through network protocols.

For most applications, this is the easiest and most cost effective method of getting a redundant copy of your database. A high availability architecture calls for mirrored SAN drives. This gives you an instantaneous copy of the database in a failover situation, but the second copy of the database must be idle and cannot be used by another copy of SQL Server. This instantaneous copy is called a snapshot backup and is explained later in this chapter.

Some SAN technology is delivered through TCP/IP over regular network speeds. This is not as fast as a dedicated disk channel and is not used where the highest I/O performance is required. SANs are currently used in heterogeneous environments and for storing redundant copies of the database.

Disk Configuration

What does disk configuration have to do with high availability? Using multiple drive arrays gives you different choices for managing file groups, configuring mirrored redundancy, creating secondary servers, and so on. Added flexibility means additional complexity, especially if something goes wrong.

RAID Solutions

Most storage vendors offer high availability solutions. These solutions have redundant copies of everything—hardware, software, and data. A typical solution is two fibre channel–based disk arrays running RAID 1+0 for the largest databases, and a fibre channel–based disk array running RAID 5 for the smaller databases. Each fibre channel node has one hub, duplicate RAID controllers per server, and associated cabling. Each RAID array has two ports used to attach to the two fibre channels.

The RAID 5 solution is less expensive because it only adds one extra drive in the array to get the redundancy. For the RAID 1+0 solution, every drive purchased must have another drive for its mirror, so you are doubling your disk drive costs. Again, the tradeoff of using RAID 5 is slower performance.

Figure: SQL Server with Windows Clustering and duplicate disks

Full redundancy is added by using two- or four-node clustering in Windows Clustering (available in Windows 2000) or Microsoft Cluster Service (MSCS, available in Windows NT® 4.0), or by using Windows Network Load Balancing. Each of these is explained later in this chapter. Using this solution, you are protected against the following failures:

  • Server failure. Clustering will automatically restart the application on another server in the cluster and the next time the users connect, they will be connected to the secondary server. If your applications are smart enough to detect a disconnect and attempt reconnection, the users may never know the primary server was down. 

  • Fibre channel hub failure. When a host adapter, hub, or part of the cabling fails, the Windows NT or Windows 2000 drivers will force all I/O requests to take the alternate path to the other fibre channel. 

  • RAID controller failure. If a controller fails, the MSCS software will detect this and redirect all I/O requests to the duplicate controller. 

  • Disk drive failure. The RAID 5 solutions will continue running if one drive fails. And by using hot-swappable drives, the replacement is inserted and automatically resynchronized to become part of the array again. RAID 1 or RAID 1+0 solutions will fail over to the mirror. When the primary disks are fixed, the mirror is automatically resynchronized. 

This configuration provides the best protection against failure from any component on one server, but it does not make a geographically remote copy of your database.

Fiber Distances

With current technologies, creating geographically remote copies of databases is fastest using fibre channel technologies. However, there are limits of 500 meters when using shortwave lasers over multimode optical fiber, and up to 10 kilometers when using longwave lasers with at least two fabric switches.

Software Alternatives

Two major challenges must be faced to achieve high availability: protecting the servers and getting a redundant copy of the database. This section talks about the required bridge between the two challenges—namely, how to switch to the redundant server. Windows 2000 offers two primary methods for managing a failover, Windows Clustering and Network Load Balancing. These solutions are designed to protect against component, server, or application failure. They are not intended to protect against an entire data center loss, as in a fire.

Windows Clustering and SQL Server 2000 Failover Clustering

The Microsoft clustering solutions are a combination of hardware and software components. In a database environment, all servers share the same set of disk drives, so clustering in its simplest form does not protect against drive failures. The solution for protecting against disk drive failures is to have redundant hard disk sets. Work with your hardware vendor to set up one of the redundant disk solutions, like the duplicate fibre channels and disk sets described earlier.

Windows Clustering is designed to facilitate smooth software and hardware upgrades, because one of the computers in the cluster can be taken offline and upgraded while the others are still active.

For clustering, you will need to buy specific hardware on the Hardware Compatibility List (HCL). To find this list, search under "clusters" at the following Microsoft Web site: https://www.microsoft.com/whdc/hcl/default.mspx. It is best to purchase the entire server as a unit—do not substitute components. Work with a preferred vendor for assistance in setting up the clusters.

Decide what your service level expectations are on the secondary servers. Do you need 100 percent of the capacity of the original server after a failover? Perhaps you can have less horsepower and still give a high degree of service to the business users.

Windows 2000 Datacenter Server is designed to scale up with support for up to 32 processors and up to 64 GB of RAM. It is also designed to scale out with N+1 clustering. Winsock Direct support gives technology-agnostic connections, so you can upgrade to new technologies without having to rewrite applications. Operations management will like the Process Control tool, which takes advantage of the Job Object API. More important than the technology is the support and testing behind Windows 2000 Datacenter Server. The testing program to get on its HCL is very strict, and the entire system is tested, not just the individual components. Part of the testing rules is a seven-day mandatory retest for firmware changes, software patches, and service packs. Joint support teams between Microsoft and the hardware OEM provide joint problem resolution and reproduction labs. Multiple parties will provide new value-added services for change control, on-site support, system evaluation, and so forth.

Advantages

Windows Clustering has a lot of features that make it very attractive for high availability solutions. The first is the flexibility of architecture and design choices, such as Active/Active or Active/Passive (or in a four-node cluster Active/Active/Active/Active or Active/Active/Active/Passive). The Passive computer can be designated as the failover computer for all the active servers. You can even plan for less excess capacity than if each computer were to have its own failover node. And the last node does not really have to be passive. It can be active and still be the designated failover computer for each of the other servers.

Clustering protects against failures of services on the primary computer, such as SQL Server, SQL Server Agent, and the full-text search engine, as well as against Windows operating system failures or errors and server hardware failures. For the database transactions, committed transactions are not lost; partially completed transactions are rolled back after the failover occurs. The typical SQL Server recovery rules apply during the failover: the log is read from the last checkpoint, committed transactions are reapplied, and uncommitted transactions are rolled back.

Clients do not have to know a failover occurred. Applications with retry logic written into them will simply reconnect to the same IP address or network name and will be connected to the new computer in the cluster. Those applications having short timeout settings may get a disconnect message because their request timed out. But, since most failovers can happen in less than one minute, and one minute is the default timeout for most connection APIs, the users may not experience a time-out error. However, recovery time depends upon database recovery times and possibly on network connectivity times to your DNS and/or WINS servers.

Considerations

When using Windows clustering in a SQL Server environment, consider the following:

  • The disks are shared between all the computers in a cluster. Therefore, if any of the disk components fail, it does no good to switch to another server in the cluster, because it is using the same disks. The solution for this problem is to use redundant fibre channels and disk drives, as mentioned in the section in this chapter about disk drives. 

    Database corruption, either within the database (internal corruption) or introduced by the user (logical corruption), is not prevented by Windows Clustering or redundant disks. Logical corruption can be avoided by designing the application so that a user is prevented from corrupting the data. 

  • If you are using full-text search, you must make sure you have a replicated copy of the search indexes on the other computer in the cluster. 

  • A database can only be accessed by one SQL Server at a time. So, when a failure occurs on the primary server, the SQL Server on the secondary server is started during the failover process and must be given time to start. If the cause of the failure on the primary server is a corrupted database, then the failover to the secondary computer in the cluster is not going to work. 

  • In a cluster, the second server may be passive (Active/Passive), or may be used for other functions (Active/Active). If SQL Server is running on the second computer, it must be serving another application and linking to different databases than the primary server. In an Active/Active cluster, if one computer fails, another instance of SQL Server must be installed to manage the database from the primary server. 

  • Clustering is potentially expensive, especially when servers are separated by distance. The cluster computers must be on the same subnet, so the maximum distance is limited and depends upon your choice of networking components. Clustering can also be expensive in terms of the amount of increased knowledge the administrators must have to support the environment. The Active/Passive environments are easier to administer, but the Active/Active environments take a while to learn, and you should practice the administration tasks on a test computer. Therefore, there is an additional cost of providing a lab setup similar to the production environment.

  • You need to think about the software development, quality assurance, and staging environments. Perhaps the development SQL Servers do not need to be clustered, but in order for the quality assurance team to certify the application as ready for production, they must have a duplicate of the production environment. And the staging servers are almost always a duplicate of the production environment. So, when you are planning a clustering environment, you need to think about all the servers involved. 

Cluster Option 1 – Shared Disk Backup

This is the easiest configuration supported in SQL Server 2000 and SQL Server version 7.0. In the illustration below, SQL Server is installed on each node in the cluster. Each node also has its own disk drives for its own databases. The shared disk is used for database backup files only. When a server in the cluster fails, SQL Server is started on another node and the backup copy of the database is used on the shared drive. This solution is better in SQL Server 2000 as another instance of SQL Server can be prepared on the other cluster nodes specifically for this purpose.

Figure: Shared disk backup configuration
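
A minimal sketch of this option, with hypothetical database names and paths: routine backups are written to the shared cluster drive (S: here), and after a failover the surviving node restores from that same drive.

    -- On the node that owns the database: back up to the shared cluster drive.
    BACKUP DATABASE Orders
        TO DISK = 'S:\Backups\Orders_full.bak'
        WITH INIT

    -- After a failure, on the surviving node: restore the most recent backup
    -- from the shared drive into its own instance of SQL Server.
    RESTORE DATABASE Orders
        FROM DISK = 'S:\Backups\Orders_full.bak'
        WITH RECOVERY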

Cluster Option 2 – Snapshot Backup

A new feature of SQL Server 2000, snapshot backup is actually a hardware-assisted backup. The illustration below shows how this is implemented using mirrored disk volumes. When a backup is requested, writes on that server are suspended until the mirror is broken (usually a matter of milliseconds). Another copy of SQL Server on another server is started, linking to the previously mirrored disk drives. The second copy of the database can now be used for reporting, database consistency checker (DBCC) checks, volume testing, and/or backup to tape. At some later point, the drives are reattached to create a mirror again. The mirror gets synchronized again over a period of time, and the backup process can start over again.

As stated earlier, redundancy is the key. What happens if the server fails while the snapshot is occurring? Most companies choose to install two mirror sets to solve this problem. This way, an active hardware mirror is always on the server while the second mirror is involved in the snapshot backup.

Full and file backups are supported, but differential backups are not. The hardware vendors provide the hardware and software components to control this type of backup. The typical SQL Server backup and restore commands are not used.

Figure: Snapshot backup using mirrored disk volumes

Cluster Option 3 – Failover Clustering

SQL Server 2000 is aware of clusters, and SQL Server Setup automates the installation and configuration on all nodes in a cluster. Failover cluster nodes can now be easily added and removed, and operating systems can be upgraded without affecting the remaining nodes in the cluster. To accomplish this, all user and system databases are created on shared disk drives. See the illustration below for a sample configuration.

Figure: Failover clustering sample configuration

This option costs approximately 2.5 times that of a two-node Active/Passive configuration. SQL Server upgrades will still require minimal downtime.

Detail Configuration Showing Database Placement

The illustration below shows where to place the different SQL Server databases in any of the above configurations.

Figure: Windows 2000 / SQL Server 2000 Detail Clustering Solution

In this diagram you see the log disk for Server A, the log disk for Server B, the database disk for Server A, and the database disk for Server B across both RAID controllers and both RAID chassis.

By doing this you get the advantage of utilizing the bandwidth of the SCSI channels to their fullest extent, as well as having true redundancy for your hardware. This would work best if you have redundant processors, power supplies, and network interface cards (NICs). This is also assuming that the operating system is located on a RAID 1 set of drives housed within the server chassis, not the external disk storage chassis. Another set of RAID 1 mirrored disks can be installed in the chassis to hold the page file for Windows 2000. Both of these RAID Arrays can be driven from one of the RAID cards. These cards typically support two or three SCSI channels, so you should have a few extra SCSI channels left over.

The CD-ROM drive and the tape backup drive would be connected to a standard SCSI interface, not the RAID controllers. These are usually built into the motherboard of the server.

Many configurations start out with a single external chassis with several disk drives (12 total) as follows:

  • 2 drives for Windows NT 4.0 and Windows 2000 (RAID 1) 

  • 2 drives for logs from database A (RAID 1) 

  • 2 drives for logs from database B (RAID 1) 

  • 3 drives for database A (RAID 5) 

  • 3 drives for database B (RAID 5) 

  • 0 drives remaining for a hot spare 

This is a typical configuration, but one that in fact is not recommended, for the following reasons:

  • Too much contention on the SCSI bus. 

  • Too much disk I/O contention. 

  • It becomes a single point of failure for the external disk chassis. 

Make sure that database server performance is adequate when considering disk queuing and disk I/O contention.

Instead, configure the disk layout as shown in the illustration above.

Network Load Balancing

The Network Load Balancing technology is designed to spread the load between the different nodes of a cluster. Network Load Balancing is available as an option in Windows 2000 Advanced Server and Windows 2000 Datacenter Server. With Network Load Balancing, administrators can simply add another server to the node as traffic increases.

Network Load Balancing supports TCP/IP-based services such as Web, Terminal Services, proxy, streaming media services, and virtual private networks (VPNs). Network Load Balancing services in a cluster communicate among themselves to provide the following key benefits:

  • High availability: A server failure is automatically detected by Network Load Balancing and traffic is routed among the remaining servers in the cluster within 10 seconds. However, any databases on the failed server are not immediately available to a SQL Server installation on the other servers. One of the remaining servers must start SQL Server to connect to the database. 

  • Scalability: Client requests are distributed across all servers in a cluster. 

Network Load Balancing is used primarily as a performance enhancement to spread requests between servers. One key point about Network Load Balancing is that it is only used in a non-clustered environment.

Network Load Balancing gives a virtual IP address for the multiple servers, allowing a potential failover. Like clustering, Network Load Balancing servers must be in the same subnet, which limits the geographic distance between the servers.

Network Load Balancing must be set up in a dual-NIC environment to work properly. The primary NIC is for Network Load Balancing, and the secondary NIC is for SQL Server on both computers to do the log shipping. Network Load Balancing is also good for load balancing on reporting databases. In this case, the dual-NIC setup is not required. In either scenario, the secondary NIC can boost overall network throughput.

Only the primary server is enabled for clients to connect to. The failover is not instantaneous, especially if the clients do not have retry logic. OLE DB or ODBC clients without specifically written retry logic will have to be stopped and restarted to connect to the failover server. Microsoft Internet Information Services (IIS) is an example of a program that has built-in retry logic.

Here are some configuration tips to get this scenario to work. Network Load Balancing must be installed on only one NIC. TCP/IP should be the only protocol bound to that adapter. The Network Load Balancing settings and port rules must be identical on all the NIC cards, but the host parameters must be unique for each server. Port rules must be set for all ports used by the application. In this case, SQL Server is the application, and since its normal port is 1433, it must have a port rule of "1433 to 1433". Both a dedicated IP address and the virtual IP address must be entered, and the dedicated IP address must be listed first. Both IP addresses must be static; Dynamic Host Configuration Protocol (DHCP) cannot be used.

In sum, you can use Network Load Balancing instead of clustering to provide failover capabilities in case of a server or component failure. Network Load Balancing has no provision for getting a redundant copy of the database to the other server, so another method, like log shipping or replication, must be used.

For more information on Network Load Balancing, see https://www.microsoft.com/windows2000/.

SQL Server Alternatives

This section discusses specific high availability solutions for SQL Server, and is written for DBAs and others who need to support the SQL Server portion of a data center. Up to this point we have discussed the best server protection methods with hardware redundancy, disk drive configurations, and managing the failover process with clustering or Network Load Balancing. The final step is to discuss methods of getting database copies to other servers.

Database Maintenance and Availability

Database systems need special attention from time to time. Although some of the new features in SQL Server 2000, such as online analysis and index maintenance, will help to diminish this need, a good operations plan will account for a certain amount of database maintenance.

Here is a list of maintenance tasks that need to be done occasionally, and that will either require some downtime or have some negative impact on performance (a Transact-SQL sketch follows the list):

  • Rebuilding indexes 

  • Running DBCC consistency checks 

  • Performing fragmentation analysis 
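
The sketch below shows how these tasks are commonly run in SQL Server 2000; the database and table names are placeholders, and each command should be scheduled for a low-activity window because of its I/O impact.

    -- Rebuild all indexes on a table (takes locks; schedule during a quiet period).
    DBCC DBREINDEX ('dbo.Orders')

    -- Check the logical and physical consistency of the database.
    DBCC CHECKDB ('Sales')

    -- Report fragmentation so you can decide whether a rebuild is needed.
    DBCC SHOWCONTIG ('Orders')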

For more information about the various features in SQL Server 2000 designed to minimize downtime, see SQL Server Books Online.

Backup and Restore

The most basic way to create a redundant copy of your server is to use the backup and restore solutions available through SQL Server and other backup software programs. The flexibility of the SQL Server backup solutions—including (but not limited to) full backups, filegroup backups, striped backups, differential backups, and transaction log backups—gives a solution for everyone.

One fundamental recommendation is to do SQL Server backups to disk, preferably to a different disk drive/array than the one your database resides on. Then, using another Windows backup program, copy the database backup files to tape.
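
A brief sketch of that recommendation, with hypothetical paths: the database and its transaction log are backed up to files on a separate array, and the backup files are then picked up by the tape backup software.

    -- Full backup to a disk array separate from the data files.
    BACKUP DATABASE Sales
        TO DISK = 'H:\SQLBackups\Sales_full.bak'
        WITH INIT

    -- A differential backup captures only the changes since the last full backup.
    BACKUP DATABASE Sales
        TO DISK = 'H:\SQLBackups\Sales_diff.bak'
        WITH DIFFERENTIAL

    -- Frequent transaction log backups limit the amount of work that can be lost.
    BACKUP LOG Sales
        TO DISK = 'H:\SQLBackups\Sales_log.trn'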

More complex solutions may be preferable, as your database gets larger and larger. These solutions include striped backups and snapshot backups (both described earlier in this chapter, in "Hardware Alternatives").

Still greater complexity is added when your application has other dependent databases that need to also have synchronized backups and restores. This can happen, for example, between an order entry database, a warehouse database, and an accounting database. You cannot restore one without getting out of sync with the others. Additionally, adding replication to this scenario can introduce a cross-database dependency that will not occur with non-replicated deployments. The Publisher and Distributor databases should have synchronized backups. A new option in transactional replication in SQL Server 2000, sync with backup, ensures that the Log Reader Agent will not propagate any transactions to the distribution database that have not been backed up at the Publisher. For more information, see "Strategies for Backing Up and Restoring Transactional Replication" in SQL Server Books Online.
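
The sync with backup option is set on the publication database with a system stored procedure; a minimal sketch, assuming a publication database named Sales, looks like the following.

    -- Enable 'sync with backup' on the publication database (name is a placeholder).
    -- The Log Reader Agent then propagates only transactions that have been
    -- backed up at the Publisher.
    EXEC sp_replicationdboption
        @dbname  = N'Sales',
        @optname = N'sync with backup',
        @value   = N'true'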

Finally, be aware that if you are using Active Directory™ as an additional data store, and you must restore to an older copy of the database, you will probably be out-of-sync with the Active Directory. So, make sure you consider this in your planning stages.

Another option is distance mirroring, which allows you to do almost instantaneous snapshot backups. This is where the data is collected with hardware mirroring and the mirrored disks actually exist in another location, attached to another server. Refer to the snapshot backup discussion earlier in this chapter in the hardware section. To perform a snapshot backup, the mirror link is disconnected from the primary server, a backup is taken from the mirrored files, and then the mirror link is reconnected. And to protect against failures during the time the mirror link is disconnected, a third set of disks is employed as another mirror. This way, the primary system always has at least one active mirror. For more information about this feature, see SQL Server Books Online.

Two-Phase Commit

Two-phase commit (2PC) is a term that describes a synchronous transaction: An application starts a distributed transaction, writes to the primary database, writes to the remote database, then commits both writes. If either part fails, the entire transaction fails. Using 2PC for creating a remote copy of a database is considered a form of front-end replication.

Using 2PC to maintain a redundant database is not recommended for most applications, but it is the only 100 percent guarantee that the primary and redundant copies are always synchronized. The only possible loss is the active transaction, which for most applications is an acceptable loss.

There are various ways to do a distributed transaction. The first is from the application, where the front-end program (or the ASP code in a web application) controls the links to both computers and controls the transaction. The second is for a middle-tier object to write to the different databases, in which case the transaction can be either explicitly controlled by the application or implicitly controlled using COM+ and Component Services, available in Windows 2000, in conjunction with the Microsoft Distributed Transaction Coordinator (MS DTC). The third way is to write to a single database and have a trigger send the data to another server. A fourth method is to use any of the forms of SQL Server replication that can employ 2PC. A fifth method is to use transacted message queues in the Microsoft Message Queuing (also known as MSMQ) product.
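
Regardless of which tier controls it, the underlying distributed transaction can be expressed directly in Transact-SQL. The following sketch assumes a linked server named RemoteServer has already been defined and that MS DTC is running; all object names are placeholders.

    -- Synchronous write to two servers under MS DTC control.
    -- If either INSERT fails, both are rolled back as a unit.
    BEGIN DISTRIBUTED TRANSACTION

        INSERT INTO Sales.dbo.Orders (OrderID, CustomerID, Amount)
        VALUES (1001, 42, 250.00)                                -- primary server (local)

        INSERT INTO RemoteServer.Sales.dbo.Orders (OrderID, CustomerID, Amount)
        VALUES (1001, 42, 250.00)                                -- redundant server (linked)

    COMMIT TRANSACTION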

There can be more than one computer to handle the redundancy. You can send to a primary server, a redundant/backup server, and perhaps a reporting server, or as many other servers as you need. The performance of each transaction slows down with each server added.

Another potential problem with this method is that all computers maintaining copies of the data must be well connected at the time any INSERT, UPDATE, or DELETE operation occurs. If any of the computers participating in the distributed transaction are not connected, the transaction fails.

Replication

Replication is another option for many high availability solutions. Replication is extremely flexible, and offers multiple solutions that can be combined into many different scenarios. The following table shows different replication applications and their relative autonomy and connectivity requirements.

Replication Choices 

  • Merge: All databases can be updated. Full autonomy, low connectivity requirements. 

  • Queued Updating Subscriber: The Subscriber queues transactions to be applied to the Publisher at a later time. Full autonomy, low connectivity requirements. 

  • Transaction Publish & Subscribe: Store and forward. Full consistency, but with latency. 

  • Immediate Updating Subscriber: One Subscriber updates the Publisher; other Subscribers receive the changes through transactional replication. Connections required during updates. 

  • Microsoft Transaction Server: Two-phase commit for all transactions. Full consistency, but little autonomy. 

Scenario 1: Reporting. If you intend to use your redundant copy of your database for reporting, then replication is your best bet since people can be running reports against the subscription database while replication is occurring. Most of the other solutions need exclusive access to the redundant copy/copies of your database, thereby locking out anyone wishing to do read-only reporting or other work. Consider the use of transactional replication when setting up a redundant copy of your database for reporting purposes. Transactional replication is well suited to deployments requiring read-only copies of data and can generally achieve replication of incremental changes with very low latency.
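
Setting up such a reporting copy is done through SQL Server Enterprise Manager or the replication system stored procedures. The fragment below is a minimal, hypothetical sketch: all names are placeholders, the server is assumed to already be configured as its own Distributor, and additional steps (such as creating the Snapshot Agent) are omitted; see SQL Server Books Online for the complete procedure.

    -- On the Publisher: enable the database for transactional publishing.
    EXEC sp_replicationdboption @dbname = N'Sales', @optname = N'publish', @value = N'true'

    -- Create a transactional publication and add one table as an article.
    EXEC sp_addpublication @publication = N'SalesReporting', @repl_freq = N'continuous'
    EXEC sp_addarticle     @publication = N'SalesReporting', @article = N'Orders',
                           @source_object = N'Orders'

    -- Add a push subscription for the reporting server.
    EXEC sp_addsubscription @publication = N'SalesReporting',
                            @subscriber = N'REPORTSRV', @destination_db = N'SalesReports'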

Scenario 2: Mobile users. Replication is a very strong solution for mobile/disconnected users. Use merge replication when it is imperative to give mobile users the autonomy to make updates while offline. This solution automatically tracks changes to data on a laptop and merges them with the main database when the users are again connected to the network. So, disconnected users have a highly available application without being directly connected to the network.

Scenario 3: SQL farms. Although a slightly different concept than a Web farm using Network Load Balancing, a SQL farm is made possible by replicating the read-only (or low volatile) data from the primary database to database servers specifically assigned to certain IIS or middle-tier servers. In this scenario there is still usually one central data collection database and multiple read-only databases for lookup tables and other non-volatile or low-volatile data. Using transactional replication in this scenario will assist in giving higher availability to the Web application, but does not directly address the issue of getting geographically remote copies of your transactional data. This scenario is one form of scale-out architecture designed to improve aggregate read performance and to keep the volume on the central Publisher lower. In the illustration below, three Web servers work with one SQL Server to do all the data collection and modification. The read-only SQL Server on the left supports one Web server and the read-only SQL Server on the right supports two Web servers. This is just a sample layout showing one possible scenario. Of course, you will want to set up the read-only SQL Servers with Network Load Balancing so if one server fails, the other will take over.

Figure: SQL Farm

Scenario 4: Many Publishers, centralized Subscriber. Autonomous divisions that need to do their own data collection and eventually copy it to a corporate office or share it with other divisions will use some form of replication. If one division experiences problems, the others are not affected, giving a slightly different view of high availability. This is sometimes called the roll-up scenario.

Scenario 5: Standby servers with replication. Creating a standby server with a redundant copy of the data could be done with replication, although there might be better solutions, like failover clustering or distance mirroring, depending on your needs. Replication does not replicate code changes (views, triggers, and so on), security changes, or msdb changes such as jobs. These and other issues are solvable problems if you want to use replication, but other solutions are easier for creating geographically remote copies of your server.

As with other forms of creating redundancy, when using replication, administrative overhead increases as soon as you make a copy of the data. Companies implementing replication should prepare to support the additional complexity. This usually means additional budget allocations for training, additional expenditures in personnel, cost and time to add to the disaster recovery plan, as well as additional time for each recovery practice session.

Be aware of some additional technical concerns with certain replication scenarios. For example, upgrading 100 Subscribers to the next version of SQL Server is a manually intensive process, even with the improvements in remote installation and upgrade options. The issue for high availability solutions using replication is the same for any asynchronous (store and forward) solution. In the worst-case scenario where the publication and the distribution servers are lost at the same time, transaction data could be lost. This is one reason for the recommendation to write your disaster recovery plan before installing replication and to include in that plan the possible loss of servers involved in replication.

Latency may not be a big problem for some applications using replication, but a large queue or store of waiting transactions can develop if the network is down. Or if a very large transaction is done, such as an update of 50,000 rows to fix a data problem, the replication transaction queue will grow until this abnormal transaction is complete, causing abnormal latency.

SQL Server replication offers both synchronous and asynchronous solutions; choosing between them is the most basic question to be answered when designing a copy of a database. Transactional replication is an asynchronous solution, using the store-and-forward methodology, where the potential loss of data comes from transactions still waiting to be distributed when a disaster occurs. Transactional replication with immediate updating allows synchronous updates to be made at Subscribers, using a 2PC method, where the only potential loss in an update made at the Subscriber is the current transaction.

Merge replication is a low-cost way of having two synchronized, geographically separate servers. New features in SQL Server 2000 merge replication have made this type of replication easier to install and support. It can be more complex because of the need to learn about conflict resolution. And once again, you need to upgrade your disaster recovery plan before implementing merge replication. The best recommendation when considering merge replication is to include sufficient time for additional quality assurance tests.

The companies that have the best chance to succeed with replication will have:

  • Increased the awareness of the difficulty of distribution to the highest levels of management. 

  • Written the additions to their disaster recovery plans before implementing replication. 

  • Increased the training budget for DBAs and operations/support staff. 

Replication: Immediate Updating with Queued Updating as a Failover

The option immediate updating with queued updating as a failover can be used in transactional replication to synchronize the updates between a remote Subscriber and the Publisher. Immediate updating is where a 2PC transaction is done and both the Subscriber and Publisher are always synchronized. Using queued updating as a failover allows updates to continue at the Subscriber even if network connectivity between Publisher and Subscriber is lost.

Immediate updating typically requires a reliable connection because 2PC is the primary controller for the transaction on the Subscriber. Queued updating as a failover can be set to use SQL Server queues or Message Queuing to provide the ability to queue data modifications made at the Subscriber. The record modifications are saved in a local message queue and transmitted to the primary publishing server when the network is available again. Subscribers other than the originating one will receive the updates through the normal latent distribution method.
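
At the publication level, this combination corresponds to enabling both the immediate updating and queued updating options when the publication is created. The fragment below is a hypothetical sketch; names are placeholders and the rest of the publication setup is omitted.

    -- Transactional publication that allows immediate updating Subscribers,
    -- with queued updating available as a failover if the connection is lost.
    EXEC sp_addpublication
        @publication       = N'SalesUpdating',
        @repl_freq         = N'continuous',
        @allow_sync_tran   = N'true',    -- immediate updating (2PC back to the Publisher)
        @allow_queued_tran = N'true'     -- queue updates at the Subscriber on failure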

Applications must be made aware of a new error message that arises when a row version has been changed at the Publisher by another session. The error handling for this situation should be written to get the latest changes from the Publisher, and then have the user attempt the update again. For more information about immediate updating with queued updating as a failover, see SQL Server Books Online.

Log Shipping

Log shipping is another solution for creating redundant copies of databases on other servers. The basic premise is that the transaction logs are backed up on your primary server, copied across to another server, and restored. This is done using regular SQL Server backup and restore commands, with additional tables and stored procedures for tracking and administering the process.
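
The mechanism is a repeating sequence of ordinary backup and restore commands, as in the hypothetical sketch below (database names and paths are placeholders; the copy between servers would be a scheduled file-copy job).

    -- On the primary server: back up the transaction log to a file.
    BACKUP LOG Sales
        TO DISK = 'E:\LogShip\Sales_20010115_2215.trn'

    -- (The file is then copied to the secondary server by a scheduled job.)

    -- On the secondary server: restore the log, leaving the database able to accept
    -- further log restores. Using WITH STANDBY instead of NORECOVERY leaves the copy
    -- readable between restores, so it can also be used for DBCC checks or reporting.
    RESTORE LOG Sales
        FROM DISK = 'F:\LogShip\Sales_20010115_2215.trn'
        WITH NORECOVERY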

In SQL Server 2000, log shipping is very robust due to new technology improvements in the backup/recovery features as well as the new recovery models. It is even incorporated into the Database Maintenance Plan Wizard for easy setup. The other improvements include easier administration through the Monitor features to track status, SQL Server Enterprise Manager additions to get to any server, and the ability to set up flexible actions and alerts. Additional features for flexible choices are the different frequencies for backup, copy, and restore—they do not have to be the same.

Log shipping for high availability solutions is attractive because it is inexpensive for most applications and is relatively easy to administer. This is the lowest cost method of creating remote databases. The basic problem of using log shipping is that your recoverability is only as good as your last backup. For example, if you are doing log shipping once per minute, how many potential transactions will you lose if you cannot recover that one last minute before a failover occurred? The second problem is that it needs exclusive use of the secondary database to do a restore.

If the time between restores is long enough, you can do DBCC commands to check the consistency of the database, and you can then do full backups of the data. Doing this removes these burdens from the primary server.

Log shipping may not be the best solution because it is a slower failover process than with Windows Clustering. The failover and recovery process on the second server may involve some manual processes. There are definitely manual processes if you need to get the last transaction log from the primary server. Most of the steps can be automated, but an administrator will want to run through a predefined checklist to validate the second server before allowing clients to access it.

Another problem with log shipping is that the full-text search will fail when the primary system fails. Another copy of the search indexes must be on the second server. As soon as the search service is restarted on the second computer, clients can use it again. This can be automated, but be aware that this is not an instantaneous failover.

Log shipping without Network Load Balancing means that the clients must either manually reconnect, or have connection retry logic built into the application.

Figure: Log Shipping

Log shipping can be used together with replication, clustering, and hardware solutions to provide the ultimate in architecture and design choices. For more information about log shipping, see Chapter 13, "Log Shipping."

Message Queuing

Message Queuing (also known as MSMQ) actually has several intriguing reasons for consideration in architecting high availability solutions, especially using MSMQ version 3.0. It can exist as a guaranteed transport mechanism between the client computers and the database, it can exist between a middle tier layer and the database, or it can exist behind the database layer in a specific replication scenario. In any case, the primary reason for using it in high availability designs is to get a redundant copy of the database.

The basic premise switches from having a service like replication or log shipping create a redundant copy of the database after the initial write has been done, to having a service, like Message Queuing, that captures the input and sends it to two different servers.

Any time a message comes into the queue, there can be multiple Subscribers targeted to accept those messages. Using Message Queuing in the client layer, one Subscriber can be the primary database, and the other Subscribers can be your failover database or databases.

Message Queuing is primarily a store-and-forward strategy, similar to some forms of SQL Server replication. The latest version of Message Queuing has many features that make it an attractive consideration for creating a redundant copy of your data. For example, Message Queuing allows primary and backup queues. If the primary computer is lost, a failover occurs to a backup controller, with no loss of data.

The best solution for using Message Queuing in a high availability environment is to put the Primary Enterprise Controller (PEC) on a clustered computer and the Backup Site Controller (BSC) on another clustered computer in a remote location, with a very big pipeline connecting the two.

The downside is that applications must be written to use Message Queuing. This is not a solution for buying an off-the-shelf application, like an accounting system, and adding message queues to help solve your uptime problem. It takes design and development work to employ the Message Queuing technology.

Combining SQL Server Solutions

The preceding alternatives are meant to give you an overview of the different features of SQL Server and their strengths and weaknesses when used individually. This section combines different strategies so that each compensates for the others' weaknesses. Even in these combined solutions, remember that the first priority is to focus on the people, policies, and processes involved in your high availability solution.

All of these combined solutions assume that you are providing adequate hardware protection on each database server, usually through RAID and/or fibre channel redundancy, as well as the usual redundant power supplies, disk controllers, and so on. This protects you from the loss of individual components within a system or cluster.

There are several challenges to be solved. The first is to get a redundant copy of a database to at least one remote server, even if that copy is always idle. The second is to get a read-only copy of the database to a remote server to remove reporting burdens from the primary server. What does creating a reporting server have to do with high availability? By shifting the reporting burden to a secondary server, you make the primary server more available because there are fewer locking conflicts.

The following is a step-by-step approach to designing a solution that will give your company the right mix of recoverability, failover, and reporting databases.

  1. Protect the individual components. 

    Solution: Hardware redundancy is the key. Purchase duplicates of everything, from network interface cards (NICs) to power supplies and disk drive sets. Every component must be duplicated. 

  2. Protect the server. 

    Solution: Use failover clustering or Network Load Balancing. Different companies will assess the pros and cons of these methods differently, according to their particular applications.

  3. Create an idle copy of the database. 

    Solution: Backup and restore, replication, log shipping (per minute), Message Queuing, and hardware mirroring can be used (a backup-and-restore sketch follows this list).

  4. Create a read-only copy of the database (if needed). 

    Solution: Replication, log shipping (daily), and hardware mirroring. 
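As an illustration of step 3, the following minimal sketch uses backup and restore to create an idle standby copy that can still accept later log restores; the database name, logical file names, paths, and server share are all hypothetical.

    -- On the primary server, back up the database to a share on the standby server.
    BACKUP DATABASE Sales
        TO DISK = N'\\StandbyServer\Backups\Sales.bak' WITH INIT

    -- On the standby server, restore in standby mode so further log backups can be applied.
    RESTORE DATABASE Sales
        FROM DISK = N'D:\Backups\Sales.bak'
        WITH STANDBY = N'D:\Backups\Sales_undo.dat',
             MOVE N'Sales_Data' TO N'E:\Data\Sales.mdf',
             MOVE N'Sales_Log'  TO N'F:\Log\Sales.ldf'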

Server Clusters, Hardware Mirroring, and Replication

Use server clusters, with fully redundant hardware, to give the best protection for your primary server. Then use hardware mirroring to create the failover copy of your database, and use replication to create a read-only reporting server. This is probably the best combination of all the technologies for companies that can afford the hardware, the licenses, and well-trained administrators.

Using this configuration assumes that there is one primary cluster for the database server and two secondary servers, one local and one geographically remote. This solution also assumes that a reporting server is required with 24 x 7 read-only access.

This solution protects against both logical and physical failures. Failover clustering is a good enterprise solution as long as the hardware in a cluster is in close physical proximity, and clustering combined with mirroring and dual fibre channel disk managers offers a very good choice for near-instantaneous database failover. As discussed earlier in the clustering section of this chapter, however, it does not provide a geographically remote failover server. Replication or periodic log shipping is a good way to fill that gap.
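The replication piece of this design could be set up along the lines of the following minimal sketch for a transactional publication; the database, publication, article, and server names are hypothetical, and the Distributor is assumed to be configured already.

    -- In the publication database on the primary (Publisher) server:
    USE Sales
    EXEC sp_replicationdboption @dbname = N'Sales', @optname = N'publish', @value = N'true'
    EXEC sp_addpublication @publication = N'SalesReporting',
         @repl_freq = N'continuous', @status = N'active'
    EXEC sp_addarticle @publication = N'SalesReporting',
         @article = N'Orders', @source_object = N'Orders'
    -- Push a subscription to the read-only reporting server:
    EXEC sp_addsubscription @publication = N'SalesReporting',
         @subscriber = N'REPORTSRV', @destination_db = N'SalesReports'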

Log Shipping with Network Load Balancing

Network Load Balancing in conjunction with log shipping can be a good substitute for clustering. For a failover to be successful, there must be a current database on the other server. Log shipping will give you that other copy.

When Network Load Balancing and log shipping are used together, both servers are set to inactive at startup, and the primary node is started manually. When Network Load Balancing is used for read-only reporting SQL Servers, all servers can be set to auto-start. In dual-homed configurations, the NICs must not be routable to each other. Although direct IP communication can be used, the preferred method is a WINS server entry, with an LMHOSTS entry as an alternative.

Here is a sample failure scenario and one possible recovery process. This is just a sample process and should be tailored for each application. As you can see, this is not an instantaneous failover.

  1. Something happens to the primary server. 

  2. Attempt to retrieve the last transaction log from the failed server. If it is accessible, copy it to the secondary server and perform a final restore (a T-SQL sketch follows this list). This brings the secondary database to the latest state with no loss of transactions. 

  3. Start SQL Server on the secondary server and take the database out of standby mode. 

  4. Start Network Load Balancing on the secondary server, allowing clients to log back on. 
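The restore portion of steps 2 and 3 might look like the following minimal sketch, run on the secondary server; the database name and file paths are hypothetical.

    -- Apply the final transaction log (if it was retrievable) and recover the database,
    -- which takes it out of standby mode and makes it available to clients.
    RESTORE LOG Sales
        FROM DISK = N'D:\LogShip\Sales_tail.trn' WITH RECOVERY

    -- If the final log could not be retrieved, recover with what has already been applied:
    -- RESTORE DATABASE Sales WITH RECOVERY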

Variations to Account for Different Scenarios

The solutions above do not necessarily cover all environments. Additional considerations apply in other scenarios, especially if your systems are configured for the highest performance.

  • Scenario 1: VLDB with high number of active connections 

    Companies in this scenario want 24 x 7 coverage with very high availability requirements, usually five nines (99.999 percent), which allows only about five minutes of downtime per year. The solution described here assumes the following for the largest database: 

    One primary database for data collection, segmented over X number of databases on Y number of servers. Each server is a two-node cluster (minimum), and each database is replicated to a standby server using SQL Server replication as the delivery mechanism. 

    There is also one reporting server for read-only access, segmented over X number of databases on Y number of servers. This reporting server can be a data warehouse with or without the additional OLAP storage. 

    The idea of segmenting your data collection into X number of databases (on the same server) is to increase performance and simplify administration. Large databases are difficult to administer because they have long backup times, long DBCC times, and long index rebuild times; if you can split your application into multiple databases, each one is easier to administer. The increased performance is achieved by placing the databases on different filegroups on different disk arrays. You can also split tables and indexes over different filegroups and disk arrays within the same database (a filegroup sketch follows these scenarios). 

    If the databases are still too big for one computer to handle, SQL Server 2000 gives you the option of spreading the database over Y number of servers. The added performance comes with additional administrative challenges, such as having to coordinate backups. This coordination can be minimized or avoided by architecting the split so that one transaction never crosses a server boundary.

  • Scenario 1a: Very Large Database (VLDB) with high number of active connections

    Instead of using replication to get a redundant copy of the database, use the new snapshot backup option, discussed earlier in this chapter. To address the exposure of a disaster occurring while the mirror is broken, add a second mirror; that way, you always have a mirrored copy of the data on the primary server. 

  • Scenario 2: Typical Web scenario 

    In this Web site scenario, everything is duplicated. Networks, SQL Server computers, and SMTP computers are all duplicated. Even the SQL Server disk arrays are duplicated, as are the fibre channel controllers for those disks. 


  • Scenario 3: Typical small/medium company (or department server) 

    The companies in this scenario will most likely have a single server, with some form of RAID protection, usually RAID 5. The backup/restore strategy is the primary means of disaster recovery. In this scenario, off-site tape rotation is critical. Companies with this design most likely have another server that could be used as a spare, but it may not be configured for that use until a problem occurs. The requirements are usually for 8 x 5 or 12 x 5 instead of 24 x 7, giving plenty of opportunities for planned maintenance. 
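To illustrate the filegroup placement described in Scenario 1, the following minimal sketch spreads a database, a table, and an index across separate disk arrays; all drive letters and object names are hypothetical.

    CREATE DATABASE Orders
    ON PRIMARY ( NAME = Orders_Primary, FILENAME = N'E:\Data\Orders_Primary.mdf' ),
    FILEGROUP CurrentFG ( NAME = Orders_Current, FILENAME = N'F:\Data\Orders_Current.ndf' ),
    FILEGROUP HistoryFG ( NAME = Orders_History, FILENAME = N'G:\Data\Orders_History.ndf' )
    LOG ON ( NAME = Orders_Log, FILENAME = N'H:\Log\Orders_Log.ldf' )
    GO

    USE Orders
    GO

    -- Place a table and its nonclustered index on different filegroups (and disk arrays).
    CREATE TABLE dbo.OrderHistory
        (OrderID int NOT NULL, OrderDate datetime NOT NULL) ON HistoryFG
    CREATE NONCLUSTERED INDEX IX_OrderHistory_OrderDate
        ON dbo.OrderHistory (OrderDate) ON CurrentFG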

Conclusion

Achieving high availability is mostly a matter of setting and following the best practices for your environment. This is done through the use of proper policies and procedures, as well as by choosing the best redundancy options for your hardware, software, and application. Distributing data through log shipping is best when a geographically remote server is available that is not used for other purposes. Distributing data through replication is best when your remote server needs to be accessed for reporting, analysis, and/or data entry. Hardware redundancy is required to give the best protection for a server in a specific location.