Lessons Learned

Author: Mike Ruthruff

Summary: This paper discusses lessons learned during test scenarios in which Microsoft consolidated Microsoft® SQL Server™ databases that previously resided on separate physical servers onto a single 32-bit clustered Windows environment.

On This Page

Overview
Configuration of the Consolidation Environment
Managing Workload Performance in a Consolidated Environment
Using Multiple Instances of SQL Server
AWE Considerations for Clustered Environments
Conclusion
Appendix A: Storage Configuration and Performance
Appendix B: References
Appendix C: Hardware Configuration

Overview

The performance of Microsoft® SQL Server™ in consolidated environments depends greatly on the hardware being used and on the characteristics of the SQL Server workloads involved. The architecture of the 32-bit platform imposes restrictions that affect the ability of SQL Server to fully utilize systems with large amounts of physical memory. Using multiple instances is one approach to fully utilizing large memory resources on a single machine; however, it may require more thought about which SQL Server tuning options to use. This is especially true when using the Cluster service in conjunction with SQL Server.

This paper discusses lessons learned during test scenarios in which Microsoft built and operated a server that consolidated SQL Server instances, which previously resided on separate physical servers, onto a 32-bit clustered environment. Specifically, this paper provides the following:

  • An example of a 32-bit configuration that may be used for database consolidation.

  • Information about clustering SQL Server in a storage area network (SAN) environment.

  • Information about running concurrent multiple instances of SQL Server, including tuning options you can use to optimize performance.

  • Additional considerations for clustering SQL Server in a large memory environment.

This paper is not intended to address all of the questions you may encounter when consolidating SQL Server databases. It is intended to present the lessons learned during testing on a specific configuration. In addition, the results of the scenarios presented in this paper may not necessarily apply to all consolidated environments. For more information about the planning necessary to successfully select and deploy environments for SQL Server consolidation, see the resource "Planning for Consolidation with Microsoft SQL Server 2000" in Appendix B.

Configuration of the Consolidation Environment

Host Hardware Configuration

For the consolidation server in the test scenarios presented in this paper, Microsoft used a Unisys ES7000 server. This server was divided into two partitions, with each partition owning one-half of the total machine resources. Each partition was configured with 16 CPUs, 32 gigabytes (GB) of RAM, and four Emulex LP9000 Host Bus Adapters (HBAs). Both partitions were running the Microsoft Windows® Server™ 2003, Datacenter Edition, operating system and SQL Server 2000 with Service Pack 3 (SP3). The partitions were clustered using Windows Clustering. EMC PowerPath software was used to balance I/O across the four HBAs on each of the two nodes in the Windows Cluster.

Storage Configuration

The requirements of Windows Clustering dictated the storage configuration used in the test environment. There are two main requirements for storage used with clustered instances of SQL Server:

  • At the time of writing, SQL Server does not support mount point volumes for use with clustered instances of SQL Server; consequently, drive letters must be used for any shared disk resources. For more information about SQL Server support for mount point volumes, see Appendix B.

  • Each instance of SQL Server must own at least one of the shared disk resources.

In addition to these requirements, there are other special considerations when implementing clustering using a SAN. For more information about clustering in a SAN environment, see Appendix B.

The storage used for the test scenarios in this paper was an EMC Symmetrix 5.5 model 8530 storage array. During testing, the array was also being used by another host for unrelated SQL Server testing. Total storage for the portion of the SAN used for the test scenarios was approximately 1.6 terabytes spread across 22 logical unit numbers (LUNs): nine 108 GB LUNs and thirteen 54 GB LUNs. These LUNs were mapped to Windows drive letters, and each LUN was configured as a separate volume. In the test scenarios, there were a total of 16 instances of SQL Server, each owning one or two volumes. Database sizes ranged from 10 GB at the low end to approximately 150 GB at the high end.

For a more detailed look at the storage configuration and performance results in the test scenarios presented in this paper, see "Appendix A: Storage Configuration and Performance".

Databases and Consolidation Workloads

Microsoft SQL Server 2000 can support up to 16 instances of the SQL Server database engine on a single computer running Windows. For the test scenarios discussed in this paper, Microsoft installed 16 named instances of SQL Server as clustered virtual servers on the two clustered nodes in the configuration. Each instance of SQL Server contained between 1 and 25 databases, with an average of five databases per instance.

Of the 16 instances of SQL Server, 12 contained real production databases gathered from a variety of internal and external Microsoft deployments. Workload for these databases was created by using SQL Profiler to play back previously captured traces of production workloads. The workload types captured in these traces were primarily online transaction processing (OLTP) workloads, although some traces contained a few maintenance-related queries or other queries created for a particular purpose. OLTP workloads consist primarily of insertions, deletions, and updates to the data, along with some selective queries.

The remaining four instances of SQL Server consisted of two databases with representative OLTP workloads, and two databases with representative decision support system (DSS) workloads, which may also be referred to as relational data warehouse (RDW) workloads. The DSS workloads consisted of complex read-only queries with parallel execution plans.

Before defining the consolidation workloads, Microsoft first determined how many concurrent instances of SQL Server could be supported using these workloads in the specific hardware environment. The decision about how many simultaneous workloads would run at any one time was based on the hardware capacity of a single node in the cluster. For the workloads and hardware configuration in the test scenarios, CPU was the limiting resource; CPU resources were exhausted before any of the other main hardware resources. Running workloads against 10 concurrent instances of SQL Server resulted in an average CPU utilization of almost 80 percent across all 16 CPUs on a single node. This became the baseline consolidation workload, because it was close to the capacity of a single node, with just enough CPU resources remaining to add one or two additional workloads.

During performance testing, Microsoft ran four different consolidation workload scenarios, shown in Table 1.1. These scenarios were selected in order to measure the effect of adding additional OLTP, DSS, and Database Console Commands (DBCC) maintenance workloads to the baseline.

Table 1.1   Consolidation workload details

Workload A (the baseline workload): 10 SQL Profiler traces played back concurrently. The traces contained primarily OLTP workloads, with a few maintenance-style queries or other queries created for a particular purpose.

Workload B: Workload A, plus two additional SQL Profiler traces replaying OLTP workloads. OLTP workloads were added because their combined additional CPU utilization was comparable to that of adding a DSS or maintenance workload.

Workload C: Workload A, plus one instance of SQL Server running a DSS-style workload (consisting of complex select queries).

Workload D: Workload A, plus one instance of SQL Server running a purely maintenance-related workload: DBCC CHECKDB, DBCC SHOWCONTIG, and DBCC DBREINDEX of all of the user tables in the database. (A hedged T-SQL sketch of such a maintenance pass follows the table.)
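To make the maintenance workload concrete, the following T-SQL is a minimal sketch of the kind of pass Workload D describes. The database name is a hypothetical placeholder, and the cursor over sysobjects is just one of several ways to drive DBCC DBREINDEX across the user tables.

```sql
-- Hypothetical sketch of a Workload D-style maintenance pass.
USE SomeUserDatabase;   -- placeholder database name
GO
DBCC CHECKDB ('SomeUserDatabase');  -- logical and physical integrity check
DBCC SHOWCONTIG;                    -- fragmentation report for the current database
GO
-- Rebuild the indexes of every user table, as Workload D describes.
DECLARE @table sysname, @sql nvarchar(400);
DECLARE tables_cur CURSOR FOR
    SELECT name FROM sysobjects WHERE type = 'U';
OPEN tables_cur;
FETCH NEXT FROM tables_cur INTO @table;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'DBCC DBREINDEX (''' + @table + N''')';
    EXEC (@sql);
    FETCH NEXT FROM tables_cur INTO @table;
END
CLOSE tables_cur;
DEALLOCATE tables_cur;
```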

Managing Workload Performance in a Consolidated Environment

SQL Server Tuning Options

On larger systems, significant performance gains can be realized by tuning SQL Server using the options of the SP_CONFIGURE stored procedure. To examine the effect that different configuration settings had on performance in the consolidated environment, Microsoft applied different settings during the test scenarios and then measured the resulting gains or losses in performance. Each of the following configurations was tested across all four of the workloads described in Table 1.1 (a brief T-SQL sketch of applying these options follows the list):

  • Default Server Settings. All of the instances of SQL Server were run using the default server settings.

  • Affinity Mask (Using CPU Affinity to evenly divide CPU resources). Each group of three instances of SQL Server was assigned to a group of four CPUs (a total of 12 active instances at any one time). The groups were determined based on observed CPU utilization during previous runs. The instances of SQL Server that demanded the most CPU resources were allocated into separate groups.

  • I/O Affinity (Using I/O and CPU Affinity to set function of each CPU). The 16 CPUs in the configuration were divided so that 12 CPUs were used for SQL Server processing and 4 CPUs were dedicated to I/O. All of the instances of SQL Server in the configuration could use any of the 12 CPUs for processing.

  • Max Degree of Parallelism (Limiting the amount of parallelism in query plans). The MAXDOP option was set to four for all of the instances of SQL Server. All other options were set to default values.

  • /3GB, Max Server Memory, and Min Server Memory. No testing was done using these settings on the 32-bit system.

    In the test scenarios presented in this paper, the /3GB boot option was not used because the system used in the configuration had 32 GB of RAM. To utilize more than 4 GB of physical memory, you must start Windows using the /PAE boot option in the Boot.ini file. Enabling /3GB with /PAE on systems with more than 16 GB of physical memory will result in the operating system recognizing only 16 GB of available memory when it is started.

    The test scenarios also did not require any adjustment to the min server memory or max server memory settings, because the system used in the configuration had 32 GB of physical memory and only 12 active instances of SQL Server were running at any one time. The practical memory limit of each instance of SQL Server on a 32-bit platform is just under 2 GB, so cumulative memory consumption by the SQL Server processes in the test scenarios was no more than 24 GB. Setting upper memory limits for SQL Server is considered a best practice, and should be done in consolidated environments if the possibility exists for memory contention between instances. Bringing an instance of SQL Server online when other instances have consumed nearly all of the available system memory could result in lowered performance across all instances.
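As a rough illustration of how these options are applied, the following sketch uses sp_configure with values matching the scenarios above (a four-CPU group and a MAXDOP of four). The CPU mask is an assumed example for an instance bound to CPUs 0 through 3. Note that in SQL Server 2000, changing the affinity mask takes effect only after the instance restarts, and I/O affinity is set with a server startup option rather than through sp_configure.

```sql
-- Enable advanced options so the settings below are visible to sp_configure.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
-- Bind this instance to CPUs 0-3: bits 0 through 3 set, so the mask is 15 (0x0F).
-- (Assumed example grouping; takes effect after the instance is restarted.)
EXEC sp_configure 'affinity mask', 15;
RECONFIGURE;
GO
-- Cap parallel query plans at four CPUs, as in the MAXDOP test configuration.
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
GO
```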

Multiple-Instance Workload Performance Results

The results of the workload performance tests indicate that, for consolidated environments, using the default server settings generally leads to very good overall performance compared with other SQL Server tuning options. Figure 1.1 and Figure 1.2 show the system CPU utilization and total transactions per second across all of the instances of SQL Server that had transactional workloads. (For details about each workload type, see Table 1.1.)

Figure 1.1   Cumulative transactions per second

Figure 1.2   Total CPU utilization

Figure 1.1 and Figure 1.2 demonstrate that, in all of the workload scenarios, setting either the CPU affinity or the I/O affinity resulted in lower transactional throughput and in the CPU resources not being fully utilized. Using these settings also did not result in any significant reduction in context switching; for all of the workload scenarios, context switches ranged from 20,000 to 25,000 per second. CPU affinity and I/O affinity are set infrequently, and usually only in select deployments with a large number of CPUs, to tune the performance of a single instance of SQL Server or to manage the coexistence of disparate instances. For the consolidated environment in the test scenarios, however, allowing SQL Server to manage the CPU resources simplified the configuration and resulted in better overall throughput across all of the workloads. It is recommended that you test on a case-by-case basis when deciding whether to use either of these settings, because the performance benefit may vary based on the characteristics of the workloads involved.

Two of the workload scenarios shown resulted in a higher transactional throughput when the MAXDOP option was set to four instead of the default (zero). Higher transactional throughput occurred despite the lower total CPU utilization. For certain queries, the SQL Server query processor will create a parallel execution plan that may result in the query executing across all of the available CPUs. A parallel execution of one query across all available CPUs may be beneficial to the performance of that particular query; however, it may require more CPU resources and thus may hinder the performance of other queries that are also executing on the system. Setting a limit for the MAXDOP option on a system with many CPUs may be beneficial. It is recommended that you test both with and without setting the MAXDOP option, in order to ensure that no one query or set of queries will take a disproportionate amount of the machine resources.
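If capping parallelism instance-wide is too broad, the degree of parallelism can also be limited for an individual statement with a query hint. The following is a minimal sketch against hypothetical tables:

```sql
-- Hypothetical query: restrict this statement to four CPUs regardless of
-- the instance-wide 'max degree of parallelism' setting.
SELECT c.CustomerID, SUM(o.Amount) AS TotalAmount
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID
OPTION (MAXDOP 4);
```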

Heterogeneous Workload Performance Results

It is generally a good practice to separate different workload types (for example, DSS and OLTP) onto separate servers. However, this may not be realistic for all consolidation scenarios. To determine how DSS workloads impact other concurrent OLTP workloads, Microsoft used one consolidation workload in the test scenarios that contained an I/O intensive DSS workload. The DSS workload consisted of a set of complex select queries (with many complex joins and heavy use of the tempdb database). The consolidated workload is represented as Workload C in Figure 1.1 and Figure 1.2. Figure 1.3 and Figure 1.4 show the increased read I/O activity for the DSS and maintenance workloads in the test scenarios.

Figure 1.3   Disk reads per second

Figure 1.4   Disk writes per second

As these illustrations demonstrate, the workloads containing the DSS and maintenance queries performed a higher number of disk reads than the workloads consisting primarily of OLTP queries. The impact on total system performance of I/O intensive workloads such as DSS depends greatly on the performance of the disk subsystem. In the test scenarios, the DSS workloads were run concurrently, with minimal impact to the performance of the OLTP workloads. This performance was achieved without any special tuning to the I/O subsystem.

Because of the large number of database and database-related files in the test scenarios, the storage was not configured with any physical isolation between the data, log, and tempdb files. Likewise, the storage was not configured with any physical isolation between the instances of SQL Server that had DSS workloads and the instances that had OLTP workloads. Instead, the storage configuration spread each of the LUNs exposed by the SAN across as many physical disks as possible. For SQL Server, it is always a best practice to physically isolate data from log files at the disk level. However, this may not always be practical for consolidation environments, especially those in which storage area networks (SANs) are being used. In many cases, the storage may actually be shared with other systems, as was the case in the test scenarios presented in this paper. The ability to successfully consolidate heterogeneous workloads on a single system is highly dependent on the workload characteristics and on the performance of the I/O subsystem. If you are attempting to consolidate workloads that have very different I/O characteristics, you should perform careful testing to determine the effect on I/O performance of the entire system.

Using Multiple Instances of SQL Server

For the test scenarios presented in this paper, there were two primary reasons for using multiple instances of SQL Server:

  • The goal of the testing was to create an environment with a large server running many instances of SQL Server concurrently, in order to measure performance and to determine how best to tune the performance of that environment.

  • On the 32-bit platform, using multiple instances of SQL Server allows for full utilization of the large amount of physical memory that is installed on the Unisys ES7000 server (32 GB per clustered node, which results in approximately 2 GB of memory allocated to each instance of SQL Server).

Each group of databases from a different server was put into its own instance of SQL Server, simplifying the overall deployment and configuration of the test scenarios. Backups of databases from separate servers were restored on the consolidation server.

If you are consolidating databases from different servers onto the same instance of SQL Server, remember to consider issues such as login conflicts, collation and sort order settings, and other server-specific settings. Mapping the databases into separate instances of SQL Server simplifies this process. In addition, having multiple instances of SQL Server provides the ability to isolate workloads, with each instance having its own tempdb database. For more information about planning for and handling these deployment issues, see "Planning for Consolidation with Microsoft SQL Server 2000" in Appendix B.

On 32-bit platforms configured with 4 GB or more of physical memory, you can either use Address Windowing Extensions (AWE) or use multiple instances of SQL Server as a way to fully utilize the large amount of physical memory. AWE may work well for some scenarios; however, you should be aware that AWE memory can only be used for data cache. The memory for procedure cache, connections, locks, and other internal resources of SQL Server must come from the 2 GB (or 3 GB, depending on the settings used) portion of the virtual memory. On systems needing to support a large number of databases and user connections, multiple instances of SQL Server may be a better approach to fully alleviate the 2 GB or 3 GB memory constraint imposed by the 32-bit platform for these data structures.

Desktop Heap Considerations

In environments where many services run under distinct user accounts, be aware that desktop heap resources can become exhausted, and you may need to do some tuning to ensure that services are able to start successfully. Each service running under a specific user account runs in its own window station. Each window station can contain zero or more desktops, and each desktop object has a desktop heap associated with it. A finite amount of desktop heap is available to Windows; when the available desktop heap has been exhausted, failures may occur when starting services. You can configure the amount of desktop heap allocated for each desktop in the registry, and you may need to lower the amount allocated for each non-interactive desktop in order to allow all services to start without failure. Whether you need to do this depends on how many services are running and how many distinct user accounts there are. For more information about window stations, desktops, the types of errors that may occur, and how to control the amount of desktop heap allocated for each non-interactive window station, see Appendix B.

AWE Considerations for Clustered Environments

After measuring the workload performance of the consolidated environment in the test scenarios, Microsoft tested AWE enabled instances of SQL Server to determine how they would behave during failover.

AWE Memory and Startup Performance

When SQL Server starts instances that are configured to use AWE memory, it must allocate and commit the memory pages on process start. For security reasons, the memory pages that SQL Server uses must be initialized with zeros before the SQL Server process can use them. The length of time it takes SQL Server to start depends on whether Windows has enough zero-initialized memory pages to satisfy the allocation requests when the process starts. If not enough pages are available, memory pages must be initialized with zeros during process start. When the SQL Server process terminates, memory pages released back to Windows are put on the free list and zeroed by the Windows zero page thread (which runs at priority 0). On Windows Server 2003 (as well as Windows 2000 with Service Pack 3 or later with the hotfix mentioned in Appendix B), the zero initialization of pages occurs in parallel across all CPUs. On earlier versions of Windows, the zero initialization of memory pages occurred on a single thread, which can result in slow start times for AWE enabled instances of SQL Server running on large memory systems.

In the test scenarios, the startup time for AWE enabled instances of SQL Server configured to use approximately 31.5 GB of RAM ranged from under 30 seconds to approximately 4 minutes, depending on whether the memory pages needed to be initialized with zeros. For AWE enabled instances of SQL Server that take noticeable time to start, you may need to change some of the settings related to the Cluster service. The Cluster service has a Pending Timeout property on the SQL Server resource. If SQL Server does not start within the set timeout period, the Cluster service will issue a stop request to the Service Control Manager and will then try to restart the service. For large memory systems in which SQL Server is AWE enabled, the timeout may need to be adjusted from the default of 30 seconds to a higher value.

AWE Enabled SQL Instances and Failover

For active/active clustering scenarios on servers with greater than 4 GB of physical memory on each node, you can configure instances of SQL Server with AWE memory enabled. SQL Server will allow you to configure the memory so that the cumulative memory consumed by all of the instances running on multiple nodes is greater than the physical memory available on a single node. Because AWE memory is statically allocated and is not released until process termination, you should always set the SP_CONFIGURE max server memory option to limit the amount of memory consumed by any given instance, to ensure that there is always enough physical memory available for any instance that may fail over. It is generally considered a best practice to configure active/active clusters so that the cumulative resources consumed across the nodes are never greater than the resources available on a single node. This ensures that there will always be adequate system resources should a failover occur. The associated negative trade-off is that all of the memory is not being utilized all of the time.

The behavior of AWE enabled instances of SQL Server in failover situations is the same as the behavior of any AWE enabled instance of SQL Server when started. SQL Server only uses AWE memory if the available virtual address space is less than the lesser of the following two values:

  • Total available physical memory at the time of process start

  • Configured running value for 'max server memory' setting

Otherwise, SQL Server reverts to using a dynamic model. The amount of virtual address space depends on the SQL Server startup options used (specifically the –g option). For more information about startup options, see SQL Server Books Online.
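To see the running value that this comparison uses, sp_configure reports both the configured value and the run value. The following is a minimal sketch; show advanced options is enabled first because max server memory is an advanced option:

```sql
-- The run_value column shows what the instance is actually using
-- after the most recent RECONFIGURE.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';
```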

Table 1.2 contains examples of memory consumption before and after failover for different AWE failover scenarios. In each of the scenarios, the AWE enabled instances were configured with the AWE enabled option set to one and the min server memory equal to the max server memory.

Table 1.2   AWE memory consumption scenarios

Scenario 1: Two AWE enabled instances (one per node, active/active), each configured with max server memory = 20 GB. Total memory per node: 32 GB.

  • Before failover: The instance on Node 1 has 20 GB. The instance on Node 2 has 20 GB.

  • After failover: The original instance on Node 1 has 20 GB. The failed-over instance has approximately 11.7 GB.

Scenario 2: Four AWE enabled instances (two per node, active/active), each configured with max server memory = 10 GB. Total memory per node: 32 GB.

  • Before failover: Each instance on Node 1 has 10 GB. Each instance on Node 2 has 10 GB.

  • After failover: The original instances on Node 1 each have 10 GB. If the instances fail over sequentially, the first instance to fail over gets 10 GB and the second reverts to dynamic memory. If the instances fail over at the same time, the amount of memory consumed by each instance varies; however, the total cumulative memory consumed by the two instances will be close to 11.7 GB.
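As a point of reference, the instance configuration used in these scenarios (AWE enabled, with min server memory equal to max server memory) might be applied along the following lines. This is a sketch, not the exact script used in testing; the 20480 MB value matches the first scenario in Table 1.2, and the service account must hold the Lock Pages in Memory privilege for AWE to be used.

```sql
-- Sketch of the Table 1.2 instance configuration for the 20 GB scenario.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'awe enabled', 1;   -- takes effect after the instance restarts
RECONFIGURE;
GO
-- Pin min and max server memory to the same value (20 GB = 20480 MB).
EXEC sp_configure 'min server memory (MB)', 20480;
EXEC sp_configure 'max server memory (MB)', 20480;
RECONFIGURE;
GO
```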

If instances that are not AWE enabled are running on the machine, AWE enabled instances coming online will not reduce the memory consumed by the non-AWE instances; the non-AWE enabled instances retain all of the memory they have, regardless of whether they are busy or idle. This occurs because SQL Server determines the amount of AWE memory to allocate at startup, based on the max server memory setting and the amount of physical memory that is available.

Although SQL Server will always try to leave room for the operating system, when the system is very low on available physical memory and additional instances are brought online, the system may begin to use virtual memory in the page file (otherwise known as paging). This can have a negative impact on overall system performance. You should keep these considerations in mind when you configure memory for multiple instances of SQL Server.
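One way to observe per-instance memory consumption during scenarios like these is the sysperfinfo system view in SQL Server 2000, which exposes the instance's performance counters. The following is a minimal sketch; for a named instance, object_name carries an 'MSSQL$<instance>:' prefix.

```sql
-- Report the memory-related server counters for this instance, in KB.
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name LIKE '%Server Memory%';
```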

Conclusion

The performance of SQL Server in consolidated environments depends greatly on the hardware you are using and on the characteristics of the SQL Server workloads that are involved. The test scenarios presented in this paper provide information about how SQL Server will behave in consolidated environments. However, be aware that the results included here will not necessarily apply to every configuration. Before consolidating separate instances of SQL Server onto a single server, give careful thought to determining the level of resource use in your current system. For more information about how to plan for a consolidation of many instances onto a single server, see the resources in Appendix B.

Following is a list of the key points discussed in this paper:

  • SQL Server 2000 supports up to 16 concurrent instances on a single Windows instance. In reality, the number of supported instances is limited by the hardware resources available. The hardware in the test scenarios was able to support the workload of 12 concurrent instances before exhausting the CPU resources.

  • If you are clustering SQL Server in a SAN environment, there are special considerations to be aware of and it is recommended that you work closely with your storage vendor. Also, ensure that all hardware in your configuration is on the Windows Hardware Compatibility List (HCL).

  • For consolidated environments that contain many instances of SQL Server, the results of the test scenarios indicate that no CPU or I/O affinity tuning is necessary, and in fact it may actually reduce workload performance. However, tuning the MAXDOP option did improve workload performance in certain cases.

  • For clustered environments in which instances of SQL Server use AWE memory, the amount of memory acquired by an instance of SQL Server during failover depends on the amount of memory that is available. In certain cases, AWE enabled instances may revert back to the dynamic memory model.

Appendix A: Storage Configuration and Performance

The EMC Symmetrix 5.5 model 8530 storage array was used in the 32-bit platform consolidation testing presented in this paper. The requirements for clustering instances of SQL Server determined how the storage was configured for this testing. There are two main requirements for storage on SQL Server clusters:

  1. At the time of writing, SQL Server does not support mount point volumes for use with clustered instances of SQL Server. You must use drive letters for any shared disk resources.

  2. Each instance of SQL Server must own at least one of the shared disk resources.

In configuring the storage array, it was not practical to establish physical isolation between the data, log, and tempdb files at the spindle level, due to the total number of files. In addition, the limited number of drive letters available and the requirement to use these for clustering drove the decision to place the data, log, and tempdb files on the same LUN. This resulted in all files for a specific instance of SQL Server sharing common physical disks, and each instance owning one or two of the LUNs as shared disk resources. The storage array was configured so that each LUN spanned as many physical disks as possible, and RAID (redundant array of independent disks) 1+0 was used in order to optimize performance. There were approximately 191 total data and log files across all of the SQL Server virtual instances (not including tempdb and system database files). Table B.1 provides information about the use of each of the LUNs.

Table B.1   LUN configuration

LUN number    Size      Use                            Spindles
LD0 – LD12    54 GB     Data, log, and tempdb files    8 per LUN
LD13 – LD21   108 GB    Data, log, and tempdb files    16 per LUN

Figures B.1 and B.2 are included here as a reference to the I/O performance achieved during testing with this storage configuration. In the performance tests, the limiting hardware resource was CPU availability. I/O performance was good even without physical isolation in the design.

Figure B.1   Disk throughput with a default settings configuration

Figure B.2   Average disk latency with a default settings configuration

Workloads A, B, C, and D in Figures B.1 and B.2 are the workloads described in Table 1.1 in "Databases and Consolidation Workloads" earlier in this paper.

Appendix B: References

Planning for Consolidation with Microsoft SQL Server 2000

https://www.microsoft.com/technet/prodtechnol/sql/2000/plan/SQL2KCon.mspx

Clustering and SAN

Microsoft Windows Clustering: Storage Area Networks

https://www.microsoft.com/windowsserver2003/techinfo/overview/san.mspx

Microsoft KB Article 304415 Support for Multiple Clusters Attached to the Same SAN Device

https://support.microsoft.com/?id=304415

Microsoft KB Article 280297 How to Configure Mount point volumes on a Clustered Server

https://support.microsoft.com/?id=280297

Microsoft KB Article 819546 SQL Server 2000 support for mounted volumes

https://support.microsoft.com/?id=819546

Desktop Heap Considerations

Microsoft KB Article 184802 PRB: User32.dll or Kernel32.dll Fails to Initialize

https://support.microsoft.com/?id=184802

Window Stations

https://msdn2.microsoft.com/library/ms687096.aspx

AWE and Large Memory Support

Microsoft KB Article 816488 Rate of Page-Zeroing Process Is Unexpectedly Slow

https://support.microsoft.com/?id=816488

Microsoft KB Article 329914 AWE-Enabled SQL Server 2000 May Take a Long Time to Start

https://support.microsoft.com/?id=329914

Microsoft KB Article 283037 Large Memory Support Is Available in Windows 2000 and Windows Server 2003

https://support.microsoft.com/?id=283037

Microsoft KB Article 274750 HOW TO: Configure memory for more than 2 GB in SQL Server

https://support.microsoft.com/?id=274750

Appendix C: Hardware Configuration

The test scenarios presented in this paper were conducted using the following environment:

Server

Unisys ES7000

  • 32 Intel® Pentium® III 700 MHz Xeon processors

  • 64 GB of RAM

  • 8 Emulex LP9000 PCI Host Bus Adapters

Storage

EMC Symmetrix 5.5 (model 8530) with 16 GB of cache, ninety-six 10 K RPM 73 GB drives, and 12 fiber host connections. Configured with RAID 1+0.

Fabric Switch

EMC-branded McData director class, 1 Gb/s, 64-port fiber switch

Operating system

Microsoft Windows Server 2003, Datacenter Edition

Database server

Microsoft SQL Server 2000, Enterprise Edition, Build 2000.80.194.0, Service Pack 3