System Administration

Updated: October 26, 2001


This chapter explains best practices for system administration in a Microsoft® SQL Server™ 2000 environment, including regular daily, weekly, and monthly maintenance tasks. Index creation and maintenance is examined in thorough detail. Backup and restore is addressed, along with moving data in various ways. Advanced administration, presented at the end of the chapter, includes topics concerning memory management, log shipping, and clustering. After reading this chapter, database administrators will be able to assess the needs of their SQL Server 2000 database environment, select the appropriate tools and technologies, and build a maintenance calendar for the most common administrative activities.

On This Page

The DBA Team
Administrative Topics
Administrative Schedules
Advanced Administrative Topics


System administration of a Microsoft® SQL Server™ 2000 environment is a key responsibility of any database administrator (DBA). SQL Server 2000 offers many different functions to assist the DBA in maintaining and optimizing the environment. Although resources are available within SQL Server 2000 for the DBA to create an optimal server environment, many DBAs lack the necessary skills or knowledge to take advantage of these resources. This chapter focuses on educating the DBA in common areas of system administration.

A fully functional and optimal environment has several defining characteristics. It is:

  • Documented

  • Automated

  • Standardized

  • Reliable

  • Available

  • Optimized

Administration of database systems consists of a variety of administrative, optimization, and troubleshooting tasks. Since this guide has separate chapters on performance tuning (see Chapter 5) and troubleshooting (see Chapter 7), those topics are only briefly covered in this chapter. This chapter focuses on the following:

  • The team that is responsible for database administration

  • Tools, functionalities, and technologies available to the database administrator

  • The administrative maintenance tasks to perform on a daily, weekly, and monthly schedule

  • Advanced concepts dealing with large-memory management, log shipping, and clustering for highly available systems

The DBA Team

The role of the database administrator (DBA), also known as a database system engineer (DSE), has changed over time. As databases have become more sophisticated, self-tuning and automated administration features have become more common. As a result, the DBA is freed from working at a tactical level, and can focus on more strategic initiatives (for example, analyzing usage trends to cyclically improve the database, and managing the flow of information between points within a company or outside it).

Successfully managing a database system demands a strategic approach, and a high level of accountability in database administration practices. These concepts, which in this document are targeted at a data center with a team of database professionals, may be successfully tailored to apply to any database environment.

To be successful, you must clearly define how the team does business with other teams in the company. For this to work, it is vital for all the DBAs on the team to work as a unit. To encourage teamwork and cooperation, begin by assigning shared ownership for all the database servers to the entire group. Encourage and reward honesty and responsibility, to build confidence in the team members. The success of each team member depends on the success of the others.

To share the responsibility of supporting many systems, DBAs must be available to each other during working hours (using a cell phone or e-mail) and, to some extent, during their nonworking hours, to ensure the requirements of the service level agreement (SLA) are met or exceeded.

If you have a system that only one person can run, then you have introduced a single point of failure into the system. This is why it is so important for the DBA team to act as an integrated unit. Several things can help you accomplish this.


Assign a primary and a secondary DBA for each system, so that there are at least two people supporting each one. Also, rotate staff between projects, between classes of server, and between production and development environments. Rotation encourages cross-training, provides breadth of experience, promotes common practices, and leads to better documentation. The interval between rotations should be long enough that the DBA can become comfortable with the system he or she is now handling.

Effective Use of Spare Time

Make use of spare time to bring teammates up to speed on unfamiliar projects or systems, or new techniques that are being used by a member of the team. If there is no spare time, create time for DBA teambuilding. Finding this time can be difficult. Introduce the concept of the "brown bag," where a presentation is held over a long working lunch period (1.5 to 2 hours). Presentations at most major conferences range from 1 to 2 hours in length; quite a bit of material can be covered this way, especially if meetings take place weekly.

Another side benefit of a lunch presentation is that each person is required to organize their notes, document the system in question, and get some practice addressing a group.


Encourage the development of communication skills, both spoken and written. The DBA position is evolving, with less emphasis on server settings and more on managing corporate knowledge. The company benefits most from DBAs who can become leaders in the data center.

Also, by establishing a high level of communication and trust among team members, the team's overall response time will improve. As each DBA becomes familiar with other systems, the number of issues that can be handled by only one person will diminish. As a result, the DBAs will not need to call each other as frequently for answers, which can save valuable minutes when a server is unavailable.

It is crucial that the DBA group have respect within the Information Technology (IT) department. If a professional and thorough team is presented to the members of the operations and development groups, a certain respect will be generated that may be needed in the future to help the DBA group override any decisions that would pose a risk to the system. Standard processes help to increase the reputation of the DBA team's seriousness and reliability.

Alert Messages

For conditions that are either escalated to a high priority or remain unresolved after a set amount of time, consider sending alerts to the company's help desk in addition to the team members. The help desk should be able to locate the appropriate personnel (even if their cell phone is turned off). If no DBA is available, the issue can be escalated until the problem is resolved.

Arranging escalation through the help desk takes careful planning, especially in regard to educating the help desk team about the types of alert messages they may receive, the escalation path within the DBA group, and the people they can contact or the actions they can take (approved by the network and security administrators) if the entire DBA team is unavailable.

Team Web Site

Another mechanism for keeping the entire team in communication is establishing a team Web site that displays quick status information for each server. In this case, the server behind the team Web site would be included in the alert messages that the server broadcasts. Each DBA team member could post alerts as well, thus communicating upcoming events or important project or system notes to the group in a centralized manner. This team site can begin as a simple network share, private to the DBA team.

Creating a world-class database administration service is a cycle. Improvements are introduced over time, and this effort yields the greatest benefit: the improvement of the people who support the system, by increasing their knowledge and understanding of the product.

Continuous Learning

DBAs need time to learn; database technologies are constantly evolving, as is the theory upon which they are based. When leading a team, be sure to create time for the team members to learn. If part of a team, be persistent in seeking new training and providing justification for it.

Encourage everyone to learn continuously (one chapter or subsection at a time), rather than all at once. Very large goals are hard to achieve. Instead, encourage the team to take a cross-section of available materials on a single topic. For example, if the goal is to learn replication, read about it from different sources: books, trade journals, Web sites, Usenet groups, and so on. Then move to a new subject as the need or interest changes. Presenting material reinforces new skills, and also benefits the group.

The company will benefit from this educational investment. People who know the system best, who have new knowledge and deeper levels of understanding, can continue to raise the standard for creating and maintaining highly available systems. It all starts with the right mindset for high availability and due diligence in all efforts.

Service Level Agreement

The main purpose of a service level agreement for a database system is to establish the expectations that the management and the users have for the system. See Chapter 8 for a detailed discussion of service level agreements, and their effect on the DBA's role.

Administrative Topics

Building a healthy and optimal SQL Server 2000 environment requires not only technical knowledge of SQL Server 2000, its components, and the practical application of those components, but also an understanding of how SQL Server fits into the overall solution or task that responds to the business need. Keeping the data correct, secure, and available at all times is the primary focus of the DBA.

Documentation and Standardization

Documentation and standardization of the SQL Server 2000 environment is important to existing personnel and to newly hired DBAs. The DBA is completely responsible for deciding on the level and depth of the coverage. Documentation generally includes the following:

  • Components of the environment

  • Processes

  • Protocols

  • Contact information

  • Issue resolution history

Components of the environment include not only the server hardware and software, but also the network infrastructure, such as T1 lines, and the suppliers of goods and services, as well as the various staff members involved.

Processes describe the steps involved in completing an operation. The operation could be anything from adding a new login to a server to performing a disaster-recovery operation.

Protocols are the rules governing the operations. The rules ensure that all of the operations are performed in a consistent and established way.

Contact information should be maintained for those who are involved within the organization and outside of it. Those individuals and groups within the organization include DBAs, developers, network personnel, help-desk personnel and management. Those individuals and groups outside the organization include hardware representatives and application software representatives (both sales and support). Telephone numbers, fax numbers, cell numbers, schedules, and backup (secondary) contact information should be maintained.

Issue-resolution history includes all steps and contacts involved in resolving past production issues (see Chapter 7 on problem and incident management). History should be stored in a searchable database with an easy-to-use interface, preferably Web based, to expedite the resolution of future issues.

Standardization is the key to simplifying administration. Any operation that can be scripted should be scripted, and those scripts should be documented and maintained. Typically, scripts and other critical information are kept in a run book and should be reviewed and understood by all current and new DBAs (see Chapter 2 for additional information on creating a run book).

Standardizing configurations across servers is useful in any environment. To do this, create a list of all available settings on the server that are not the default value, and document them. Make this the standard configuration for every new machine. Obviously, many of the systems may immediately depart from the standard. To handle this, simply document the difference for that machine between the standard configuration and the configuration for the server in question. These documents, stored as read-only files in an easily accessible area (a Web repository would be good), can help you better support a large number of servers.
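As a sketch of this approach, the server's current settings can be dumped with sp_configure and saved as the baseline for the standards document; any run value that later differs from the documented baseline is a candidate for review:

```sql
-- Sketch: capturing the server's configuration for the standards document.
-- 'show advanced options' exposes every setting, not just the basic ones.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Lists each option with its minimum, maximum, config, and run values;
-- save this output (for example, to a file via osql -o) as the baseline.
EXEC sp_configure
GO
```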

For the storage subsystem, standardizing drive letters is very helpful. Table 4.1 provides examples of possible drive-letter designations.

Table 4.1 Drive-letter Designations

  Logical Drive Letter   Designation
  C                      Operating system, SQL executables
  D                      Usually a CD-ROM drive
  E                      Reserve for another system drive, if needed
  F through H
  I through P            Data files
  Q                      Quorum drive
  R, S                   SQL executables and system databases
  T through V            Transaction log files
  W                      A standard admin directory to store reports such as logs, trace results, sqldiag.exe output, and so on
  X, Y, Z                Backups, or imported data for bulk loading

For network drives, consider using universal naming conventions (UNC), to allow maximum portability of the code relying on these network drives.

No matter how meticulous the standardization is, there are times when exceptions must be made. Simply document them, and move on; the goal is to make 90 percent of it standard, and then the remaining 10 percent will be easier to remember.

Tools, Functionality, and Technologies

SQL Server 2000, the Windows 2000 platform, other Microsoft products, and various third-party tools, functionalities, and technologies can be integrated to create a complete SQL Server 2000 administration solution. The tools and utilities used and how they are used will depend on several factors, including the following:

  • Environment

  • Management

  • Workload

  • Cost

  • Preference

  • Understanding

A simple and effective example is using a custom Microsoft Management Console (MMC) that includes the Enterprise Manager snap-in (including external tools) and other relevant snap-ins. Links to useful Web sites (including the SQL Server Web site, TechNet, and MSDN) can also be integrated into the MMC, to easily build a dynamic management tool. (See Figure 4.1.)

Figure 4.1: Console Root

Note: This chapter will focus primarily on SQL Server 2000 functionality. For additional information on Windows 2000 and other Microsoft technologies see Chapter 5, Monitoring, and the For More Information section at the end of this chapter.



In general, it is better to use scripts to perform all repetitive operations; these scripts can then be turned into stored procedures for execution. Scripts reduce the possibility and occurrence of user errors. Scripts should be documented using inline and block comments within the script, and readme.txt or .doc files outside the script. Scripts should be maintained in a centrally secured directory structure for all SQL Servers or, if possible, in a version control tool, such as Microsoft Visual SourceSafe, so that the master copy will always be known, and there will be no ambiguity as to which is the current production version.

Scripts can always be used where the graphical user interface (GUI) tools and wizards may be limited to a certain set of operations with only some of the available options. Operations that take place through GUI tools and wizards can be captured, saved, and replayed using SQL Profiler. In addition, using the Save change script button in Enterprise Manager means that changes to objects such as tables can easily be scripted. This can be very beneficial to an inexperienced DBA who uses Enterprise Manager to perform administrative tasks and is not sure what is taking place behind the scenes. Operations that take place through custom or third-party applications can also be captured using SQL Profiler, which can assist all DBAs in troubleshooting application functionality and performance issues.

New scripts should be thoroughly tested in a quality assurance (QA) environment prior to their introduction into the production environment. Scripts for rollbacks should be created and maintained as well. A simple and effective way to create similar scripts is to use the script templates feature of the Object Browser in Query Analyzer. Templates allow DBAs to reuse code and logic simply by replacing template parameters with new values. The scripts can then be saved to the appropriate folder.

To ensure automation, consistency, security, and history, the DBA should encapsulate scripts into stored procedures. The stored procedures can contain additional logic to give DBAs finer control over the operations. In addition, they can contain additional code to record information in custom tables (that is, auditing). These tables can be used with triggers and e-mail to provide enhanced customized solutions. To make the process even better, DBAs can work with developers to create a custom front-end Web interface that can be used by DBAs on any server within the environment.
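As a minimal sketch of this pattern (the table and procedure names here are illustrative, not part of the product), a repetitive operation can be wrapped in a stored procedure that records each run in a custom audit table:

```sql
-- Hypothetical audit table recording who ran each administrative
-- operation, and when.
CREATE TABLE admin_audit (
    audit_id   int IDENTITY(1,1) PRIMARY KEY,
    operation  varchar(128) NOT NULL,
    run_by     varchar(128) NOT NULL DEFAULT SYSTEM_USER,
    run_at     datetime     NOT NULL DEFAULT GETDATE()
)
GO
CREATE PROCEDURE usp_CycleErrorLog
AS
BEGIN
    -- The repetitive operation itself
    EXEC sp_cycle_errorlog
    -- Record the run in the audit table
    INSERT admin_audit (operation) VALUES ('sp_cycle_errorlog')
END
GO
```

Triggers or e-mail notifications can then be layered on top of the audit table, as described above.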

Windows Script Host (WSH)

The Windows Script Host (WSH) enables ActiveX scripts written in languages such as VBScript or JScript to be run directly within a Windows 2000 environment. The user simply executes the .vbs file, much like a batch file. WSH has low memory requirements and is more robust than MS-DOS commands. Administrators can change scripts easily without having to recompile or contact a developer. Thus, WSH is ideal for certain administrative functions of a DBA.

For example, if a DBA needs to recreate a SQL Server 2000 security environment, the DBA can create one VBScript to create the domain user and group account and another Transact-SQL (T-SQL) script to create the database logins, roles, and users.
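The Transact-SQL half of such a script might look like the following sketch; the domain group, database, user, and role names are examples only:

```sql
-- Sketch: recreating the database side of a security environment.
EXEC sp_grantlogin 'DOMAIN\SalesGroup'          -- grant the Windows group a login
EXEC sp_defaultdb  'DOMAIN\SalesGroup', 'SalesDB'
GO
USE SalesDB
EXEC sp_grantdbaccess 'DOMAIN\SalesGroup', 'SalesUsers'  -- database user
EXEC sp_addrole 'SalesReaders'                           -- custom database role
EXEC sp_addrolemember 'SalesReaders', 'SalesUsers'       -- add user to role
GO
```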

Indexes and Statistics

Indexes are storage structures within SQL Server 2000 that allow developers to uniquely identify records in a table (this is called entity integrity) and allow DBAs to increase performance. The Index Tuning Wizard (ITW) allows administrators to easily script out the creation and drops of indexes based on captured real-world queries.

For more information, see "Index Tuning Wizard" in SQL Server Books Online.

Clustered Versus Non-clustered Indexes

There are two categories of indexes available—clustered and non-clustered.

Clustered indexes store the actual data pages at the leaf level of their b-tree structure. The records are physically sorted and stored based on the clustered index key chosen. For this reason, there can be only one clustered index on a table at any given time.

Non-clustered indexes store pointers to the data. The pointer chosen will depend on whether or not a clustered index exists for the table. The records are sorted logically, based on the non-clustered key chosen. There can be up to 249 non-clustered indexes for a single table.

In the scenario where a table has a clustered index, the non-clustered indexes will use the clustered index key as their pointer to the data. For this reason clustered-index key values should be kept at a minimum length. The larger the key, the larger the row, the fewer rows that fit on a page, the more pages required, the more I/O needed to read the pages, and the fewer plans and pages that will be found in cache the second time around because new pages will take their place. All of this contributes to a significant increase in the SQL Server and system resources used, which can result in a significant decrease in performance.

Since any given table can have only one clustered index, care should be taken in choosing which column(s) to index. Generally it is best to create a clustered index in cases where data is one of the following:

  • Retrieved in sorted order

  • Grouped together

  • Accessed in a range

A clustered index might also be considered on an auto-incrementing column, such as an identity column, to minimize page splitting and to keep the page that is receiving the inserts constantly in the buffer cache to improve performance.

Be careful when choosing multiple columns for a clustered index. Statistics will only be created for the first column of a composite (multi-column) index. To provide the Query Optimizer with more information about the distribution of the other column(s) values, consider creating statistics for the other column(s) by using the CREATE STATISTICS Transact-SQL statement.
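For example, assuming a hypothetical Customers table with a composite index on LastName and FirstName, statistics for the second column can be created as follows:

```sql
-- The composite index generates statistics on LastName only.
CREATE CLUSTERED INDEX ix_customers_name ON Customers (LastName, FirstName)
GO
-- Supply the Query Optimizer with distribution information for FirstName.
CREATE STATISTICS st_firstname ON Customers (FirstName)
    WITH FULLSCAN   -- read every row for the most accurate distribution
GO
```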

Limit the number of non-clustered indexes that are created in write-intensive production systems. The reason for this recommendation is that non-clustered indexes contain pointers to the data. For each insert or delete that is made on a table, SQL Server must not only perform and log that operation, it also has to change each non-clustered index that is affected. This can have an adverse effect on performance.

Composite and Covering Indexes

The DBA should create indexes to match the search criteria of the user's request. Indexes that are not used are not useful; in fact, they reduce performance by the load required to maintain them. Create composite (multi-column) indexes whenever possible to cover multiple requests. The DBA can also consider the performance advantages of using a covering index.

A covering index is a non-clustered index that contains all the information requested by the user in the index's leaf level (including the key and the pointer). DBAs should not "recreate" the table by adding too many columns to the index key. Covering indexes can result in a dramatic increase in performance. Non-clustered indexes are better suited to queries that are looking for a specific record. Columns that are frequently aggregated should be indexed, as should any foreign key columns, to increase performance when performing joins.

To speed the creation of an index in situations where the tempdb database resides on a separate disk with possibly a separate controller, use the SORT_IN_TEMPDB argument. This argument tells SQL Server to store the intermediate sort results in the tempdb database. The operation separates the reads and writes between the tempdb database and the destination filegroup, which increases performance, especially when tempdb is on a faster drive. It also improves the likelihood that the index will use contiguous extents.
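Assuming a hypothetical Orders table, the argument is specified as follows:

```sql
-- Store the intermediate sort results in tempdb rather than the
-- destination filegroup.
CREATE NONCLUSTERED INDEX ix_orders_custid
    ON Orders (CustomerID)
    WITH SORT_IN_TEMPDB
GO
```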

FILLFACTOR Settings for Indexes

When building an index, it is important to specify how full to make the pages of the index. The fullness can be controlled with the FILLFACTOR and PAD_INDEX arguments. The FILLFACTOR argument sets the fullness for the leaf level of the b-tree index by specifying the percentage of each page to fill. For clustered indexes, the leaf-level pages are the actual data pages. The default behavior is to fill the leaf-level pages completely. Although a higher FILLFACTOR is advantageous for read operations, it can be very detrimental to write operations. A high FILLFACTOR in a read-intensive environment will mean more records per page, and thus fewer pages and less I/O. Conversely, the same setting in a write-intensive environment will cause massive page splitting, thus fragmenting the data across the pages allocated to the table. In general, it is best to choose an appropriate FILLFACTOR setting and to specify this value when creating the index. The percentage chosen will vary for each table in each database, depending on the environment. Things to consider include the following:

  • Percentage of reads and writes for the table

  • Modification activity

  • Space used

  • Performance

The number of read and write operations for a given table will be used to determine the FILLFACTOR setting. A higher FILLFACTOR is appropriate in environments where data will primarily be analyzed. A lower FILLFACTOR should be used in heavily modified environments. Remember that modification activity on a table with a high FILLFACTOR can reduce performance. But when exactly do the bulk of the modifications take place? Is it once a week, once a day, throughout a 24-hour period, or perhaps within a certain range of hours? A lower FILLFACTOR requires more page I/O and disk space, which can impair performance. So what should the FILLFACTOR setting be? The answer to this and many other optimization questions is, unfortunately, "It depends." The general guidelines have been presented above, and it is up to the DBA to choose the appropriate setting when creating the index, depending on the various factors involved. The DBA can then use the DBCC SHOWCONTIG statement to monitor fragmentation over time. If the original setting was too high or too low, the index can easily be rebuilt and testing can continue.

Note: The FILLFACTOR setting can be set server-wide for all CREATE INDEX statements using sp_configure 'fill factor'.

For very heavily modified tables, the DBA may consider using the PAD_INDEX option to apply the FILLFACTOR percentage to the non-leaf or intermediate pages of the index.
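Both arguments are supplied at index creation time. The following sketch (example table and index names) leaves 20 percent free space on the leaf pages and applies the same setting to the intermediate pages:

```sql
CREATE NONCLUSTERED INDEX ix_orders_date
    ON Orders (OrderDate)
    WITH FILLFACTOR = 80,  -- fill each leaf page to 80 percent
         PAD_INDEX         -- apply the same fullness to intermediate pages
GO
```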

Index Fragmentation

The FILLFACTOR and PAD_INDEX settings are static, and over time indexes can become fragmented because of data modifications. Some pages will become fuller than others. Some pages will split, and new pages will have to be allocated and linked into the chain, with half the data moved to the new page. This process can be expensive, and many times these new pages are not contiguous, nor even in the same extent, but are allocated from another extent on disk. Fragmentation can be controlled in one of two ways: by rebuilding the index or by defragmenting the index.

Rebuilding the index should be done with the CREATE INDEX statement using the DROP_EXISTING argument. The advantage of this approach over the approach of dropping and recreating the index is that, in the case of a clustered index, if the clustered index keys remain the same, non-clustered indexes will not have to be rebuilt, nor will the data have to be resorted.

Note: All indexes can be rebuilt for a table in a single operation using the DBCC DBREINDEX statement.
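The two rebuild approaches can be sketched as follows (example table and index names):

```sql
-- Rebuild one index in place; as long as the clustered key is unchanged,
-- the nonclustered indexes are not rebuilt and the data is not resorted.
CREATE CLUSTERED INDEX ix_orders_id
    ON Orders (OrderID)
    WITH DROP_EXISTING, FILLFACTOR = 90
GO
-- Rebuild every index on the table in a single operation;
-- '' for the index name means all indexes, 90 is the fill factor.
DBCC DBREINDEX ('Orders', '', 90)
GO
```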

The DBCC INDEXDEFRAG statement can be used to defragment an index's leaf-level pages. It does this by performing an in-place reordering of the leaf pages so that the logical order of the pages matches the physical order of the pages. DBCC INDEXDEFRAG also attempts to apply the original FILLFACTOR setting to compact the records on the pages. The major advantage of using this command, as compared to the other processes, is that this is an on-line operation, which allows users to continue to use the index while the operation is taking place.
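A typical sequence, sketched with example names, is to measure fragmentation with DBCC SHOWCONTIG and then defragment online:

```sql
DBCC SHOWCONTIG ('Orders', 'ix_orders_date')              -- report fragmentation
DBCC INDEXDEFRAG ('SalesDB', 'Orders', 'ix_orders_date')  -- reorder leaf pages in place
GO
```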


Statistics

Statistics store distribution information about the key values in a column. The query processor uses this information when deciding how to process a given query. By default, statistics are created automatically for the first column of an index key when an index is created on a table that contains data. Statistics can also be created on other columns of the key and on columns not associated with an index. This process provides SQL Server with more information about the distribution of the data—thus resulting in a more efficient execution plan being chosen and generated. Statistics of this type can be created either automatically by setting the option with the ALTER DATABASE statement or sp_dboption, or manually by using the CREATE STATISTICS statement. Statistics can be found in the binary statblob column of the sysindexes table and can be viewed using the DBCC SHOW_STATISTICS statement. The default behavior is to have SQL Server create the statistics automatically.

Note: Statistics that are automatically generated by SQL Server will appear in the sysindexes table using a value ranging from 2 to 250. The name of the statistic will begin with WA_Sys_. The attributes Dpages, reserved, and used will all show the value 0 (zero).
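The following sketch (example database, table, and index names) shows the automatic-creation option being enabled and a statistic's distribution being inspected:

```sql
EXEC sp_dboption 'SalesDB', 'auto create statistics', 'true'
GO
-- Display the distribution steps, density, and sampling information
-- for the statistics on the named index.
DBCC SHOW_STATISTICS ('Customers', ix_customers_name)
GO
```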

The usefulness of statistics depends on their accuracy. Statistics that are out of date can lead to an inefficient plan being generated, which affects performance. To prevent this from happening, SQL Server 2000 updates statistics automatically by default.

Warning: DBAs should think carefully before deciding to disable the automatic updating of statistics.

Index and Statistics Recommendations

Indexes should be created to increase performance and to maintain the uniqueness of records. Over time, by analyzing the operations for a database, DBAs will be able to determine the best time to rebuild or defragment indexes. These operations should be scripted and placed in jobs that run on a recurring basis, depending on the environment. These jobs should be scheduled during non-peak hours so that the performance of the system is not affected. While allowing SQL Server to automatically update statistics works for most environments, it does not work for all. If statistics are not updated as frequently as the application requires, performance and potentially availability may suffer. In that case, a job should be created to update the statistics manually, and the auto update statistics option should be disabled for the database.
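A sketch of the manual approach (example database name) might disable automatic updates for the database and refresh statistics from a scheduled job:

```sql
EXEC sp_dboption 'SalesDB', 'auto update statistics', 'false'
GO
USE SalesDB
EXEC sp_updatestats   -- refreshes statistics on every user table in the database
GO
```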

Backup and Restore

Every database environment should have an established backup and restore plan for disaster recovery. This plan should be thoroughly tested and documented in a simulated environment using production backups. The plan should also address the entire system, including applications and the components of the operating system. All possible scenarios of failure should be considered and addressed. Tests should be carried out on a regular recurring basis. The specifications of this plan will depend on the environment. The following are several factors the DBA should consider when creating this plan:

  • Availability requirements

  • Cost (both resource cost and downtime/recovery cost)

  • Current and future resources (both hardware/software and personnel)

  • The environment

The availability requirements are dictated by the service level agreement (SLA) (see Chapter 8). Check the SLA to ensure compliance with its goals and directives.

Cost and resources play a key role. How long can the system be down? How much data can be lost? Does the company have a DBA? Is the DBA suitably trained? Is there a secondary DBA who has been trained to perform recovery operations? How much money does the DBA have to allocate for equipment, software and training?

The environment will dictate the people, the processes and protocols involved, including the type of backups performed, their schedules, and the recovery models used.

Think of the backup/restore plan as a component of a data insurance policy.

SQL Server 2000 provides the tools necessary to back up and restore both system and user-defined databases. These tools include Enterprise Manager, Database Maintenance Plan Wizards, and the BACKUP and RESTORE Transact-SQL commands.

To prevent accidental mistakes or to add an additional level of security to prevent unauthorized access, DBAs should consider using a backup-set password and/or a media-set password. The backup-set password prevents unauthorized restores of the backup set. The media-set password prevents unauthorized restores of any of the backup sets contained on the media. The password also prevents unauthorized backups to the media itself.

Note: Adding a media-set password prevents other product backups, such as Windows 2000 backups, from being appended to the media.
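A password-protected backup might be sketched as follows (example names and passwords; protect real passwords appropriately):

```sql
BACKUP DATABASE SalesDB
    TO DISK = 'X:\Backups\SalesDB_Full.bak'
    WITH FORMAT,                     -- initialize the media (required when setting a media password)
         MEDIAPASSWORD = 'MediaPwd', -- guards every backup set on the media
         PASSWORD = 'SetPwd'         -- guards this backup set only
GO
```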

Types of Backups

There are several types of backups available—full, differential, transaction log, and file/filegroup. In addition to the different types, there are several recovery models available—simple, bulk-logged, and full.

Full Backups

Full database backups are the default and the starting point for all other types of backups. A full database backup captures the entire database, including all entries in the transaction log—excluding any unallocated extents in the file(s). Pages are read directly from disk to increase the speed of the operation.

Full database backups should be performed on system and user-defined databases on a regular basis (see Administrative Schedules later in this chapter). The master and msdb databases should be backed up any time a change occurs that affects either database. When creating or altering databases, logins, or linked servers, or when making configuration changes, back up the master database. When creating or altering jobs, alerts, operators, or schedules, back up the msdb database. The distribution database should be backed up as well if the server performs the distribution role. The model database should also be backed up if significant changes have been made to it.

Note: To make recovery faster, easier, and more complete, always have an up-to-date backup of the master and msdb databases. Also, it is recommended that user-defined objects not be created in either of these system databases.

Full database backups to user-defined databases should take place following the creation of the database to give the recovery process a starting point. A full database backup should also take place on a regularly scheduled basis. The schedule will depend on the circumstances (again see Administrative Schedules later in this chapter).
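A full database backup is a single BACKUP DATABASE statement per database; the database and device names here are illustrative:

```sql
-- Full backups of a user database and the key system databases.
-- INIT overwrites any prior backup sets on the device.
BACKUP DATABASE Sales  TO DISK = 'E:\Backups\Sales_Full.bak'  WITH INIT
BACKUP DATABASE master TO DISK = 'E:\Backups\master_Full.bak' WITH INIT
BACKUP DATABASE msdb   TO DISK = 'E:\Backups\msdb_Full.bak'   WITH INIT
```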

Note: A full database backup does not clear the transaction log. A process should be initiated to clear the log periodically to prevent it from filling up. Any transactions that were in progress during the database backup, however, are included in the backup.

Differential Backups

Differential backups capture all the data that has changed since the last full database backup. Differential backups will increase the speed of the backup operation as well as the restore. Since only the changed or newly allocated extents (bitmap tracking) are captured, differential backups are faster and smaller than full database backups. Also, in a recovery process, the last differential backup can be restored without applying a series of the individual transaction-log backups or differential backups that took place between the last full database backup and the last differential backup (all changed data is captured in the last differential backup).

Note: Differential backups do not allow point-in-time or marked log recovery.
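A differential backup is the same BACKUP DATABASE statement with the DIFFERENTIAL option (names are illustrative):

```sql
-- Captures only the extents changed since the last full backup of Sales.
BACKUP DATABASE Sales
TO DISK = 'E:\Backups\Sales_Diff.bak'
WITH DIFFERENTIAL, INIT
```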

Transaction-Log Backups

Transaction-log backups capture modifications to the database. The transaction log serially captures these modifications. Backups of the transaction log provide a history of the transactions that have taken place within the database. The backups of the log are then used in the recovery process to restore the database fully, to a point in time (STOPAT) or to a mark in the log (STOPATMARK or STOPBEFOREMARK). Transaction log backups are applied to recover a database by rolling forward (redo) any committed changes not reflected in the database and rolling back (undo) any uncommitted transactions. Log backups are smaller and are taken more frequently than full or differential backups.
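A minimal transaction-log backup looks like the following; the database and device names are illustrative:

```sql
-- Backs up the active log and truncates the inactive portion.
BACKUP LOG Sales
TO DISK = 'E:\Backups\Sales_Log.bak'
WITH NOINIT          -- append, so a sequence of log backups accumulates on the device
```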

Database Recovery

There are several ways to bring the database back to a particular time to correct a user or application error. The database, differential, or transaction-log backup can be restored WITH RECOVERY to bring the database back to its state at the time of the backup. When working with transaction-log backups, the database can be restored to a point in time or to a marked transaction.

Point-In-Time Recovery

Point-in-time recovery works only with transaction-log backups. The DBA uses the STOPAT argument of the RESTORE statement to specify the date and time to which the database is recovered. With many production databases now operating across different international time zones, it is imperative that the DBA calculate the correct date and time from the user reporting the error. The STOPAT option cannot be combined with NORECOVERY to test for the incorrect data, so the correct time is needed up front. Any transactions that were not committed prior to the date and time specified in the RESTORE command are rolled back, resulting in a loss of work.
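A point-in-time restore might look like the following sketch; the database, device names, and timestamp are illustrative:

```sql
-- Restore the last full backup, leaving the database able to accept log restores.
RESTORE DATABASE Sales
FROM DISK = 'E:\Backups\Sales_Full.bak'
WITH NORECOVERY

-- Roll the log forward only to the specified moment, then recover.
RESTORE LOG Sales
FROM DISK = 'E:\Backups\Sales_Log.bak'
WITH RECOVERY, STOPAT = 'Oct 25, 2001 2:30 PM'
```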

Marked transactions give the DBA greater control in determining when an errant transaction took place and make the recovery process much easier. The transaction is given a name with the BEGIN TRAN statement, and the name is stored in the log using the WITH MARK argument. If the DBA needs to recover from an errant statement, the DBA can specify either STOPBEFOREMARK or STOPATMARK to control whether or not the statement is applied to the database. The STOPBEFOREMARK argument does not include the marked transaction in the restore; the STOPATMARK argument does include it. Any transactions that are not committed prior to or following the mark (depending on which option is chosen) will not be reflected in the database.

Note: When mark names are the same within the transaction log, use the AFTER datetime argument to differentiate between them.
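The marking and the later restore can be sketched as follows; the transaction name, mark description, and device names are illustrative:

```sql
-- Name the transaction and record the mark in the log.
BEGIN TRAN PriceUpdate WITH MARK 'Quarterly price update'
    UPDATE titles SET price = price * 1.10
COMMIT TRAN PriceUpdate

-- Later, stop the restore just before the errant marked transaction.
RESTORE LOG Sales
FROM DISK = 'E:\Backups\Sales_Log.bak'
WITH RECOVERY, STOPBEFOREMARK = 'PriceUpdate'
```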

Log space should be set to grow automatically, and the amount of log space should be constantly monitored, either through scripts and auditing tables or through a performance condition alert on Object: SQLServer:Databases, Counter: Percent Log Used. If the transaction log fills, all activity in the database stops until the log is cleared or increased in size. If the log fills, try to clear the log with the BACKUP LOG statement, or try to increase the size of the log file (or add an additional log file) with the ALTER DATABASE command; the file can be shrunk later if needed. If neither method works, clear the log with the BACKUP LOG statement using the NO_LOG or TRUNCATE_ONLY arguments. A full database backup should immediately follow clearing the log to minimize the chances of lost data, because the log no longer contains the committed transactions.
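The remedies above can be sketched in Transact-SQL; the database, file, and path names are illustrative:

```sql
-- First choice: clear the inactive log by backing it up.
BACKUP LOG Sales TO DISK = 'E:\Backups\Sales_Log.bak'

-- Or add a second log file on a drive with free space.
ALTER DATABASE Sales
ADD LOG FILE (NAME = Sales_Log2,
              FILENAME = 'F:\SQLData\Sales_Log2.ldf',
              SIZE = 100MB)

-- Last resort: discard the log contents, then immediately take a full backup.
BACKUP LOG Sales WITH TRUNCATE_ONLY
BACKUP DATABASE Sales TO DISK = 'E:\Backups\Sales_Full.bak' WITH INIT
```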

The transaction log should be placed on a fault-tolerant set of drives that are separate from the actual data. This will increase performance, since the writes to the log file(s) are written sequentially and the writes to the data file(s) are usually written randomly throughout the pages of the file(s). Furthermore, separating the log file(s) from the data file(s) also decreases data loss as the orphaned log can still be captured in the event that the data file(s) become damaged.

Frequent transaction log backups can reduce data loss. Differential backups should be implemented as well in this scenario to reduce recovery time and to reduce data loss.

Note: When the database is in the simple recovery model or the truncate log on checkpoint option is enabled, attempts to back up the transaction log will fail: the Enterprise Manager transaction-log option is grayed out, and Query Analyzer returns error 4208. To perform transaction-log backups, these options must not be set.

The first step in recovering a damaged database should be to attempt to capture the orphaned part of the transaction log. This is possible when the data file(s) are damaged but the master database and the damaged database's log file(s) are still intact. Although the database will show up as suspect, the DBA can still back up the transaction log from the time of the last transaction-log backup through the time the database file(s) became damaged. This is accomplished using the NO_TRUNCATE argument of the BACKUP LOG statement.
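Capturing the orphaned (tail) log is a single statement; the names are illustrative:

```sql
-- Works even when the data files are damaged, as long as the log files are intact.
BACKUP LOG Sales
TO DISK = 'E:\Backups\Sales_TailLog.bak'
WITH NO_TRUNCATE
```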

Note: An unbroken sequence of transaction log backups is required for a full recovery. If any transaction log backup is damaged or missing, no other transaction logs that followed that log backup can be applied.

The last backup to be restored in a recovery process should be the orphaned transaction log (if it could be captured) or the last available transaction-log backup. For all backups restored prior to the last one, the DBA should specify the NORECOVERY argument. For the last backup restored, the DBA should accept the default of RECOVERY. The NORECOVERY argument prevents the restored backup from being recovered, thus allowing additional backups to be restored. So if a transaction had its BEGIN TRAN captured in one transaction-log backup and its COMMIT TRAN captured in another, restoring the first with NORECOVERY followed by the last with RECOVERY allows the transaction to be rolled forward and reflected in the database following recovery.

Note: If a transaction log backup is restored with RECOVERY, no additional logs can be applied. If additional logs exist, the restore process must be started over, beginning with the last full database backup.

Microsoft works very closely with various third-party providers of transaction-log-based software. These software products make it easier to view and work with the transaction log. With these products, users can create undo statements that reverse the work of prior transactions in the log without restoring the transaction log. To find links to these vendors' Web sites, please see the For More Information section at the end of this chapter.

Note: Careful consideration and precautions should be taken prior to working with individual entries in the transaction log.

File and Filegroup Strategies

File and filegroup backup strategies are useful in environments with very large databases (VLDBs). A database comprising multiple files can be backed up one file at a time. The file backups can occur on a staggered schedule, a strategy that reduces the backup time significantly. To back up groups of files at the same time, filegroup backups can be initiated. When this is done, restore times are reduced because only part of the database is actually restored. File and filegroup backups also require transaction-log backups to bring the recovered file(s) up-to-date with the remaining database files at recovery time.
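File and filegroup backups use the FILE and FILEGROUP clauses of BACKUP DATABASE; the logical file, filegroup, and device names below are illustrative:

```sql
-- Back up one data file tonight...
BACKUP DATABASE Sales
FILE = 'Sales_Data2'
TO DISK = 'E:\Backups\Sales_Data2.bak'

-- ...and an entire filegroup on another night.
BACKUP DATABASE Sales
FILEGROUP = 'History'
TO DISK = 'E:\Backups\Sales_History.bak'
```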

Caution: Transaction-log backups need to occur and must be available to recover individual database files. This includes the tail end or orphaned part of the log. If these logs are not available and the file cannot be restored to a state consistent with the remaining database files, the database will have to be recovered from the last full database backup.

File differential backups can also be included as part of the file backup strategy. Similar to database-differential backups, file-differential backups capture only the modifications and allocations of extents that belong to the file that is being backed up. This results in smaller and faster backups and restores. A single differential file backup can be created for multiple files that are backed up independently. When a single file is restored, only the last differential-file backup needs to be restored—no prior differential-file backups need to be restored. Also, to increase efficiency, only the extents that apply to the damaged file are restored.

Note: Transaction-log backups must still be taken, and the tail end of the log must be available as well. Hence, the simple recovery model is not available for file/filegroup and file-differential backups.

A very common backup process includes the following:

  • Performing a full database backup following the creation of the database

  • Performing differential backups as needed

  • Performing frequent transaction-log backups as needed

A very common restore process includes the following:

  • Capturing the orphaned log using the NO_TRUNCATE argument

  • Fixing the media problem

  • Restoring the last full database backup (NORECOVERY)

  • Restoring the last differential backup (NORECOVERY)

  • Restoring all log backups in sequence following the last differential backup (all logs NORECOVERY)

  • Restoring the orphaned log (RECOVERY)
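The restore process above can be sketched as a Transact-SQL sequence; the device names are illustrative:

```sql
-- Capture the tail of the log before touching anything else.
BACKUP LOG Sales TO DISK = 'E:\Backups\Sales_TailLog.bak' WITH NO_TRUNCATE

-- After fixing the media problem, restore in order.
RESTORE DATABASE Sales FROM DISK = 'E:\Backups\Sales_Full.bak'    WITH NORECOVERY
RESTORE DATABASE Sales FROM DISK = 'E:\Backups\Sales_Diff.bak'    WITH NORECOVERY
RESTORE LOG Sales      FROM DISK = 'E:\Backups\Sales_Log1.bak'    WITH NORECOVERY
RESTORE LOG Sales      FROM DISK = 'E:\Backups\Sales_TailLog.bak' WITH RECOVERY
```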

Backup Strategies

Full database backups should always be performed, and transaction-log backups are generally performed as well. The exception would be environments in which tracking changes is not desirable, such as those with infrequent data changes or testing environments; these should use the simple recovery model (described later in this chapter). Differential backups should be used in environments that have many transactions and either many log backups or large log backups. File and filegroup backup strategies can be implemented in environments that contain extremely large amounts of data. To view information about backups, use Enterprise Manager, use the various informational RESTORE statements, or query the backupset system table. To view information about the restore history, use Enterprise Manager or query the restorehistory system table. For security purposes, backups should be kept offsite and, while onsite, stored in a safe or similar secure structure.
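The history tables live in msdb and can be queried directly; the database name is illustrative:

```sql
-- Backup history (type: D = full, I = differential, L = log, F/G = file/filegroup).
SELECT database_name, type, backup_start_date, backup_finish_date
FROM   msdb.dbo.backupset
WHERE  database_name = 'Sales'
ORDER BY backup_finish_date DESC

-- Restore history for the same database.
SELECT destination_database_name, restore_date, restore_type
FROM   msdb.dbo.restorehistory
WHERE  destination_database_name = 'Sales'
ORDER BY restore_date DESC
```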

Note: File based backup devices should never reside on the same physical disks as the database files.

Recovery Models

Recovery models dictate what is captured in the transaction log. The recovery models include the full, bulk-logged, and simple models. The default recovery model for the Standard and Enterprise editions of SQL Server is full. The default is determined by the setting in the model database and can be changed for the model database or any other database by using the RECOVERY argument of the ALTER DATABASE statement.
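For example, the recovery model can be viewed and changed as follows; the database name is illustrative:

```sql
-- Report the current recovery model (FULL, BULK_LOGGED, or SIMPLE).
SELECT DATABASEPROPERTYEX('Sales', 'Recovery')

-- Switch the database to the full recovery model.
ALTER DATABASE Sales SET RECOVERY FULL
```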

Full Recovery Model

The full recovery model logs all operations fully. Recovery can be to a specified point in time or to a mark in the log. Most production systems will use the full recovery model with an occasional switch to the bulk-logged recovery model to minimize the amount of logging for certain operations.

Note: The full recovery model is the recommended model to use.

Bulk-logged Recovery Model

The bulk-logged recovery model is similar to the full recovery model, except that certain bulk operations are minimally logged. For those operations, the model logs and maintains only the page allocations (tracked in a bitmap), as opposed to the individual rows. The operations include CREATE INDEX, SELECT INTO, BCP.EXE, BULK INSERT, and text/image operations. This results in fewer entries being logged and (a) smaller log file(s). If the data file is lost while bulk-logged recovery is in use, all transactions in the current log are lost to recovery because the log contains only pointers to the data pages that contained the data, not the actual data. This also prevents the use of point-in-time recovery under the bulk-logged recovery model.

There are several conditions that must be met for minimal logging to take place with bulk-copy operations under the bulk-logged recovery model. The target table must:

  • Not be part of a publication

  • Not have any enabled triggers

  • Not have any indexes

  • Be loaded using the TABLOCK optimizer hint

In situations where excessively large amounts of data are being bulk inserted into the target table with indexes, it may prove to be more efficient to drop the indexes prior to the insert. The indexes can then be recreated following the data load. This way the inserts into the indexes do not have to be logged at the time of the data load.

Simple Recovery Model

The simple recovery model relies on full database backups and optionally differential backups—transaction-log backups are not available. Hence, point-in-time recovery and log-mark recovery are not available. Databases can only be recovered to the time of the last full or differential backup—whichever is available. The simple recovery model, however, is the easiest to administer and enhances the performance of certain bulk and other operations by minimizing the amount of logging and automatically reusing log space. This model is highly appropriate for testing environments where the functionality of an application is being tested and the data being used is unimportant.

Note: The simple recovery model has the same functionality as the prior SQL Server versions' truncate log on checkpoint option. This option should not be set for production systems, except for read-only servers.

Switching Between Recovery Models

Enterprise Manager or the ALTER DATABASE statement can be used to switch between the different recovery models. Switching from full to bulk-logged and back to full is the most common scenario. This switch does not require any changes in existing scheduled backups. Prior to switching from full or bulk-logged to simple, it is recommended that the DBA back up the transaction log; this allows recovery to at least that point if there is a future problem. Because the simple recovery model prevents transaction-log backups, the DBA should disable all log backups until the recovery model has been changed back and a full database or differential backup has been taken. Following the switch from simple to full or bulk-logged, the DBA should take a full database or differential backup and then re-enable any disabled log backups. Refer to Table 4.2 for more information.

Table 4.2 Backup Strategies When Switching Between Recovery Models

From: Full Recovery
To: Bulk-Logged Recovery
Action: No action
Requires no change in backup strategy. Continue to perform periodic database, log, and (optionally) differential backups.

From: Full Recovery
To: Simple Recovery
Action: Optionally back up the transaction log prior to the change
Executing a log backup immediately before the change permits recovery to that point. After switching to the simple model, stop executing log backups.

From: Bulk-Logged Recovery
To: Full Recovery
Action: No action
Requires no change in backup strategy. Recovery to any point in time is enabled after the next log backup. If point-in-time recovery is important, execute a log backup immediately after switching.

From: Bulk-Logged Recovery
To: Simple Recovery
Action: Optionally back up the transaction log prior to the change
Executing a log backup immediately before the change permits recovery to that point. After switching to the simple model, stop executing log backups.

From: Simple Recovery
To: Full Recovery
Action: Back up the database after the change
Execute a database or differential backup after switching to the full recovery model. Begin executing periodic database, log, and (optionally) differential backups.

From: Simple Recovery
To: Bulk-Logged Recovery
Action: Back up the database after the change
Execute a database or differential backup after switching to the bulk-logged model. Begin executing periodic database, log, and (optionally) differential backups.

Backup Performance

There are certain ways to increase the performance of the backup and restore operation.

Spread the database files evenly across multiple disks to increase parallel I/O. Then use multiple backup devices spread across multiple disk drives or tape drives to increase the speed of the backup operation. When these striped backup sets are created, the same media type must be used for all backup devices. Disk drives are faster than tape, and tape backups require that the devices be physically attached to the server on which SQL Server resides. To increase speed, many DBAs create the backups directly on disk and then have a third-party tool copy the backup files to tape for offsite storage. Newer tape drives have built-in hardware data compression to minimize the size and duration of the backups.
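A striped backup simply lists multiple devices on one BACKUP statement; the paths are illustrative:

```sql
-- SQL Server writes to all three devices in parallel.
BACKUP DATABASE Sales
TO DISK = 'E:\Backups\Sales_Stripe1.bak',
   DISK = 'F:\Backups\Sales_Stripe2.bak',
   DISK = 'G:\Backups\Sales_Stripe3.bak'
WITH INIT
```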

Note: Backups to network drives are allowed but can severely degrade backup performance and increase network congestion.

DBAs should run throughput tests on various hardware configurations to determine the best configuration. Whenever possible, it is best not to share disk and tape drives on the same SCSI controller. Also, care should be taken not to exceed the capacity of the SCSI channel for the tape drive.

Note: DBAs should check the tape drive documentation as some high performance drives require a separate SCSI channel to operate at the highest level.

Split-mirror and snapshot backups are another way of increasing the speed of the backup and restore operations. For a more detailed discussion, see the Advanced Management section at the end of this chapter.

Microsoft works very closely with various third-party providers of backup and restore software. Third-party tools enable the DBA to create a centralized backup plan for multiple products and operating environments. In addition, the Virtual Device Interface (VDI) for Backup allows data to be transferred to and from third-party backup software products using shared memory architecture to decrease the copy overhead and increase the speed of the operation. To find links to some of these vendors' Web sites, please see the For More Information section at the end of this chapter.

Backup Validity

There are several ways to test the reliability of the backup process and the data in the backup itself.

A common approach is to use a standby server. Backups can be taken on the production server and then applied to the standby server. The standby server (in standby mode) allows for DBCC commands, such as DBCC CHECKDB, to be run against the data. In this scenario, the DBA would be assured that the backup operation took place correctly, the backup media was not damaged, and the data contained in the backup was not corrupt.

Note: If corruption occurs in the original database following the full database backup and restore, log backups will not detect the corruption on the production server because the log does not contain the actual data pages.

DBAs can run the RESTORE VERIFYONLY statement to test the backup itself. The statement examines the backup device(s) to ensure that all files are present and can be read.

Note: This statement does not check for data corruption.
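The check is a single statement against the backup device; the path is illustrative:

```sql
-- Confirms the backup set is complete and readable; does not validate page contents.
RESTORE VERIFYONLY
FROM DISK = 'E:\Backups\Sales_Full.bak'
```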

Automation

Automation of the SQL Server 2000 environment allows the DBA to focus time and energy on business needs and to integrate new technologies into the business model, instead of worrying about day-to-day maintenance activities.

The primary component of the automation process is the SQL Server Agent. The agent is the scheduling component of SQL Server 2000. Using the Agent, DBAs can create jobs, alerts, and operators to build an automated and proactive environment:

  • Jobs are defined operations consisting of tasks that can be used for various situations. Those situations may include maintenance activities, monitoring activities, and disaster recovery.

  • Alerts are predefined responses to events generated by SQL Server or threshold requirements as dictated by the DBA. Alerts may notify operators via e-mail, pager, or the net send command. Alerts may also execute jobs as part of their definition.

  • Operators are the users or groups that are notified following job execution or when an alert has fired.

When used in conjunction with the components of the SQL Server Agent, scripts, stored procedures, triggers, and tables provide the DBA with a powerful set of tools for managing the environment.

Jobs

Jobs should be created and scheduled for all operations. Notifications should be part of the job process. To prevent the sending of too much e-mail, notifications should be restricted to job failures. In addition to both e-mail and pager notifications, net send notifications should be implemented in anticipation of e-mail failures. When jobs will be run at various times, it is best to create one job and then create multiple schedules for that job.
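As a sketch, a job with a failure-only notification and a daily schedule can be built with the msdb stored procedures; the job name, operator, device path, and schedule are all illustrative:

```sql
EXEC msdb.dbo.sp_add_job
     @job_name = 'Nightly full backup - Sales',
     @notify_level_email = 2,                  -- 2 = notify on failure only
     @notify_email_operator_name = 'OnCallDBA'

EXEC msdb.dbo.sp_add_jobstep
     @job_name = 'Nightly full backup - Sales',
     @step_name = 'Back up database',
     @command = 'BACKUP DATABASE Sales TO DISK = ''E:\Backups\Sales_Full.bak'' WITH INIT'

EXEC msdb.dbo.sp_add_jobschedule
     @job_name = 'Nightly full backup - Sales',
     @name = 'Daily at 2 AM',
     @freq_type = 4, @freq_interval = 1,       -- daily
     @active_start_time = 020000               -- HHMMSS

EXEC msdb.dbo.sp_add_jobserver
     @job_name = 'Nightly full backup - Sales' -- target the local server
```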

Note: Jobs and schedules both have to be enabled to function properly. To troubleshoot execution of the jobs, use Enterprise Manager or sysjobhistory. To troubleshoot notifications, use Enterprise Manager or sysoperators.

In cases where a DBA is responsible for multiple SQL servers, the DBA should consider creating a multi-server administration model consisting of a master server and multiple target servers. The master server is used to create and store job definitions. Enlisted target servers periodically connect to the master server and download jobs that have been assigned to them. The target server carries out the job based on the schedule assigned and reports back the status to the master server. Jobs of this nature are termed multi-server jobs and can be modified only on the master server. Target servers can view only the job definition and steps. This prevents other administrators from modifying the job's steps and schedules or disabling the job.

Note: The DBA should consider placing target servers into server groups for the multi-server jobs so that a job assignment can be issued to just one group of servers.

Important: All single-server and multi-server jobs should be thoroughly tested prior to introduction into the production environment.

Alerts

Alerts can be created to respond to events, or they can be set for thresholds in SQL Server. These alerts are used to handle events as they occur and are detected. Alerts are usually designed to initiate a corrective action and to notify a DBA that the event took place and that the corrective action to handle the event has been triggered.

There are two types of alerts available—those that are event driven and those that are defined for thresholds. The event-driven alerts require that an event be written to the Windows 2000 Application event log. Threshold alerts allow the DBA to define a threshold for a particular System Monitor counter and instance.

Events that have a severity of 19 to 25 are considered severe events and in some cases can shut down the SQL Server. Microsoft provides sample alerts for each of these event levels for all databases. It is recommended that, at a minimum, the DBA supply the necessary operator-notification information as part of the alert definition. All alerts should be standardized in their formats and the messages that are returned. When notifying operators, it is best to have SQL Server send a notification to a group of operators instead of just a single operator. It is also recommended that the DBA create a fail-safe operator to notify in case the other operators cannot be contacted. The DBA should set the appropriate rules in the e-mail application to ensure proper notification of high-priority events. The DBA may also want to capture additional information into a table and then have a trigger send an xp_sendmail to provide additional information to the notified operator or group.
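A severity-19 alert with an e-mail notification can be sketched with the msdb stored procedures; the operator name and address are illustrative:

```sql
EXEC msdb.dbo.sp_add_operator
     @name = 'OnCallDBA',
     @email_address = 'dba-team@example.com'

-- Fire on any severity-19 error in any database.
EXEC msdb.dbo.sp_add_alert
     @name = 'Severity 19 - fatal resource error',
     @severity = 19

EXEC msdb.dbo.sp_add_notification
     @alert_name = 'Severity 19 - fatal resource error',
     @operator_name = 'OnCallDBA',
     @notification_method = 1                  -- 1 = e-mail
```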

Note: The alert must be enabled for it to fire. Also, double-check the settings such as error number, severity, database(s), and additional text when defining alerts to ensure that they function properly. To troubleshoot firing, check the history via Enterprise Manager or the sysalerts table.

In cases where a DBA is responsible for multiple servers running SQL Server, the DBA should strongly consider creating a centralized event-forwarding server to respond to all unhandled events from the other servers running SQL Server.

For additional information about setup and precautions, see "Managing Events" in SQL Server Books Online.

Data Movement

This section explains the operational best practices for the various methods of data movement.

Data Transformation Services (DTS)

DTS is a set of SQL Server 2000 components that allows DBAs to import, export, and transform both relational and non-relational sources of data. The two main components of DTS are the DTS Data Pump and OLE-DB. The DTS Data Pump is the engine that provides the interfaces and methods allowing connection to and transformation of the data. OLE-DB is Microsoft's Universal Data Access component, which allows DBAs to access and work with many different forms of data through the use of OLE-DB providers.

DTS provides a powerful extract, transform, and load (ETL) toolset for transferring data between systems. It can be used for many DBA-related functions from the simple transfer of data from a file into a table, to a more complex automated data-warehousing transformation, which might include importing new business records, massaging data, updating the fact table, and performing an incremental update of an Analysis Services cube.

The DTS wizards should be used to create simple transformations and the DTS Designer should be used for complex transformations. The end result of a transformation process is a DTS package. A DTS package is a self-contained collection of connections, tasks, transformations, and workflows.

The package itself can be saved in several formats. For transformations that require data lineage (an audit trail for the records) or that share meta data with other applications, packages should be saved in Microsoft Meta Data Services within SQL Server 2000. For transformations that are run on multiple servers, packages should be saved to a Component Object Model (COM) structured storage file (.dts). Packages should be saved to a Visual Basic file (.bas) for developers who want to integrate DTS code into their applications.

DTS Connections and Tasks

DTS provides DBAs with an array of possible connections and tasks. DBAs can move and transform data from any ODBC (Open Database Connectivity) or OLE-DB (Object Linking and Embedding Database) source. The more common DTS tasks used by DBAs include the following:

  • Transform Data Task

  • Execute SQL Task

  • Bulk Insert Task

  • Copy SQL Server Objects Task

  • Transfer Databases Task

  • Transfer Logins Task

The Transform Data Task is quite effective at moving and manipulating data. The following are some of the advantages:

  • Column mappings can be altered

  • Destination data types and null-ability can be altered

  • ActiveX scripts and COM components can be used for transformations

  • Look-up tables can be used to enhance the functionality

  • Exception files can be created with a max error count

The Execute SQL Task allows DBAs to include Transact-SQL scripts in their package logic. For example, a DBA could create and schedule a DTS package that removes records older than a set number of days from a business table, inserts those records into a history table, and logs the details of the process in an audit table. To ensure that all operations complete as a unit, DBAs should implement transaction logic for the SQL tasks within the package.
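The archiving example might reduce to a script like the following inside the task; the table and column names are illustrative:

```sql
DECLARE @cutoff datetime, @rows int
SET @cutoff = DATEADD(dd, -90, GETDATE())      -- keep 90 days of records

BEGIN TRAN
    INSERT INTO OrdersHistory
    SELECT * FROM Orders WHERE OrderDate < @cutoff

    DELETE FROM Orders WHERE OrderDate < @cutoff
    SET @rows = @@ROWCOUNT                     -- capture the count before it is reset

    INSERT INTO AuditLog (Task, RunDate, RowsMoved)
    VALUES ('Archive Orders', GETDATE(), @rows)
COMMIT TRAN
```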

The Bulk Insert Task allows DBAs to quickly import massive amounts of data from a file (.txt, .csv, and so on) into a SQL Server table. The task itself encapsulates the Transact-SQL BULK INSERT statement. To increase the speed of the operation, DBAs should copy the text file to a directory on the server running SQL Server prior to the operation.

Note: Transformations cannot be part of the Bulk Insert Task. Also, failed records are not captured. For this reason, make sure that all records are correct and complete prior to using this task.

The Copy SQL Server Objects Task allows DBAs to easily transfer objects from one SQL Server database to another. Scripts are written to disk for the entire operation. DBAs are encouraged to review the .log files for the source and destination servers to track down any errors.

The Transfer Database Task allows DBAs to easily transfer a database from one server to another. Databases can be moved or copied and destination database file locations can be changed as well.

The Transfer Logins Task allows DBAs to easily transfer logins from one SQL Server to another. Refer to SQL Server Books Online to verify source and destination version requirements for the transfer tasks.

Replication

Replication is another technique used to move data. Two types of replication frequently used to distribute data are snapshot replication and transactional replication. The choice depends on the requirements and restrictions of the environment.

Snapshot replication is a periodic bulk refresh of data. All data that is part of the publication is transferred to the subscribers—not just the changes, or the net-changed data. Snapshot replication is best used in the following situations:

  • A minimal amount of data is used

  • The data is infrequently changed

  • A higher degree of latency is acceptable

Transactional replication transfers the changes to the subscribers. A snapshot of the initial data is taken and applied to the subscribers. As the changes take place on the publisher, they are marked for replication, copied to a store and forward database (by default, named "distribution"), and applied to the subscribers.

Transactional replication is best used in environments with the following types of characteristics:

  • Larger amounts of data are used

  • Transactional consistency is required

  • Minimal latency is a must

  • Reliable network connectivity and high bandwidth are available

Latency as a Factor in Replication

For distributing an up-to-date copy of data to other servers on a regular basis, use snapshot replication. For situations that require the modifications to the data to be distributed to other servers as the changes take place (some latency allowed), use transactional replication. For situations that require that all of the source changes be reflected immediately on the other servers (no latency allowed), use distributed transactions in combination with the distributed transaction coordinator (DTC).


The bulk copy program (BCP) and the BULK INSERT T-SQL statement are used to transfer large amounts of data to and from files and tables. BCP is a command-line utility that can easily be integrated into .cmd and .bat files.

Note: BCP switches are case sensitive.
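For example, the following command-line sketch (the server, database, and path names are hypothetical) exports a table in character mode over a trusted connection:

```cmd
rem Export the Orders table to a tab-delimited text file.
rem -c = character mode, -T = trusted connection; switches are case sensitive.
bcp Sales.dbo.Orders out C:\Export\orders.txt -c -T -S MYSERVER
```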

The BULK INSERT statement can easily be integrated into scripts and stored procedures. The BULK INSERT statement is the faster of the two methods with one restriction—it allows only one-way data transfers (imports only). To speed the transfer of data and to minimize server resources, consider using non-logged operations with large batch sizes.

For more information, see "Logged and Nonlogged Bulk Copy Operations" and "Optimizing Bulk Copy Performance" in SQL Server Books Online.
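As a minimal sketch (the table, file path, and option values are hypothetical), a large import might look like this:

```sql
-- Load a tab-delimited file into an existing table. TABLOCK and a
-- large BATCHSIZE help the operation qualify as nonlogged when the
-- other requirements described in Books Online are met.
BULK INSERT Sales.dbo.Orders
FROM 'C:\Import\orders.txt'
WITH (FIELDTERMINATOR = '\t',
      ROWTERMINATOR = '\n',
      BATCHSIZE = 50000,
      TABLOCK)
```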

Linked Servers

Linked servers provide the DBA with the ability to query and manipulate data on disparate systems. Both relational and non-relational sources of data can be used. Using SELECT INTO, INSERT SELECT, and INSERT EXEC, the DBA can very easily join and move data from multiple sources into a destination SQL Server table.
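The following sketch (server, database, and table names are hypothetical) registers a linked SQL Server and copies remote rows into a new local table:

```sql
-- Register the remote server once.
EXEC sp_addlinkedserver @server = N'REMOTESQL', @srvproduct = N'SQL Server'

-- Use a four-part name to move remote data into a new local table.
SELECT CustomerID, CompanyName
INTO   dbo.CustomerCopy
FROM   REMOTESQL.Sales.dbo.Customers
```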

Note: For situations where ad-hoc access is infrequent, the DBA may use OPENROWSET or OPENDATASOURCE. In both cases, because the login and password will be visible in the statements, the DBA should be sure the .sql files are placed in a secured NTFS folder.
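For illustration, an ad-hoc query can carry its own connection information (all names and the password here are hypothetical), which is exactly why such script files must be secured:

```sql
-- Ad-hoc query against a remote server without a persistent linked
-- server definition; note the credentials are visible in the text.
SELECT r.CustomerID, r.CompanyName
FROM OPENROWSET('SQLOLEDB', 'REMOTESQL'; 'reportlogin'; 'reportpwd',
     'SELECT CustomerID, CompanyName FROM Sales.dbo.Customers') AS r
```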

Log Shipping

For a more detailed discussion, see the Advanced Management section at the end of this chapter.

Using Backup and Restore to Move Data

The SQL Server backup and restore process can be used to move data between servers. A backup of a database from one SQL Server can be restored to another SQL Server.

Detaching and Attaching Databases

Another method of copying or moving a database from one SQL Server to another is to use the detach/attach process. Using either Enterprise Manager's Copy Database Wizard, the DTS Transfer Database task, or the stored procedures sp_detach_db and sp_attach_db, DBAs can very easily and quickly transfer databases between servers.
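With the stored procedures, the process looks like the following sketch (the database name and file paths are hypothetical):

```sql
-- On the source server: detach the database, releasing its files.
EXEC sp_detach_db 'Sales'

-- Copy Sales.mdf and Sales_log.ldf to the destination server, then:
EXEC sp_attach_db @dbname    = 'Sales',
                  @filename1 = 'D:\Data\Sales.mdf',
                  @filename2 = 'E:\Logs\Sales_log.ldf'
```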

Comparison of Data Moving Strategies

When choosing among the many techniques of moving data in SQL Server 2000, the DBA should keep both the operational requirements and the environment in mind.

Moving Databases

For moving an entire database, it is recommended that the DBA do a full backup and restore, but in some cases it may be faster and more appropriate to do a detach/attach. Enterprise Manager provides an easy-to-use graphical interface for the detach/attach process, or you can use the Copy Database Wizard, which transfers the logins along with the data and structure.

Keeping Logins Synchronized

After the databases are moved, logins should be transferred from the source server to the destination server. Logins can be scripted out or the DTS Transfer Login task can be used. Use sp_change_users_login and/or sp_resolve_logins on the destination server to fix out-of-synch server logins and database user accounts.
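For example (the user and login names are hypothetical):

```sql
-- List database users whose SIDs no longer match a server login.
EXEC sp_change_users_login 'Report'

-- Relink one orphaned database user to the matching server login.
EXEC sp_change_users_login 'Update_One', 'webuser', 'webuser'
```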

For more information, please see the section on logins and users restored to another SQL Server in Chapter 3 of this guide.

Moving/Importing/Exporting Data—Large or Small Amounts

For importing large amounts of data from a text file, use the T-SQL BULK INSERT statement or the DTS Bulk Insert task. For frequent joining and importing of small amounts of data from multiple sources, use linked servers and the appropriate T-SQL statement. For exporting large amounts of data from a table, use the bcp.exe command-line utility (which can also be run as a CmdExec job step). In situations that require a small-to-moderate amount of data to be massaged, scrubbed, or transformed, use the DTS Transform Data task.

Starting and Stopping Services

Under Windows 2000, SQL Server and several other related components run as services. The various services include the following:

  • SQL Server (including named instances)

  • SQL Agent (including named instances)

  • SQL Full-Text


  • Microsoft Search

  • Microsoft Message Queuing

Depending on the service, these can be started and stopped using various tools, including the following:

  • Enterprise Manager

  • Services Manager

  • Services applet

  • Service Control Manager (SCM) utility

  • The net start and net stop commands, which can be run from a command line

  • Task Manager
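From a command line, for example, the default instance and its agent can be cycled as follows (stop the agent first, because it depends on the SQL Server service):

```cmd
net stop SQLSERVERAGENT
net stop MSSQLSERVER
net start MSSQLSERVER
net start SQLSERVERAGENT
rem Named instances use MSSQL$InstanceName and SQLAgent$InstanceName.
```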

Starting the SQL Server service allows users to submit queries to the database server. Pausing the SQL Server service prevents new connections to the server but it does allow existing connections to finish their existing statements and exit. Stopping the SQL Server service disables logins, lets currently running transactions finish, and performs a checkpoint in all databases to sync the cache with the hard drive (this minimizes the amount of activity and time required to recover a database at startup).

Starting, pausing, and stopping the SQL Server service should be done through Enterprise Manager whenever possible. In some cases, such as when Enterprise Manager is not responding or a domain password or account has been changed for the service account, the service information should be modified and started with the Services applet and then also changed in Enterprise Manager.

Note: Making the second change (in Enterprise Manager) is only needed where the server is running the Search service and using full-text indexes.

To determine which connections are preventing the server from shutting down, use sp_who or Current Activity in Enterprise Manager. For connections that cannot be contacted, or on a very busy system where blocking is taking place, the DBA may choose to disconnect the blocking connection using the KILL statement. All statements that have not been committed will be cancelled and rolled back. Prior to eliminating the server process ID (SPID) of the connection, the DBA may consider capturing the DBCC INPUTBUFFER output and related connection information to a table. This allows the DBA to learn more about the statement being cancelled in case the operation needs to be redone.
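A minimal sketch of that capture (SPID 53 is hypothetical):

```sql
-- Preserve the last statement sent by the connection before killing it.
CREATE TABLE #inputbuffer
    (EventType nvarchar(30), Parameters int, EventInfo nvarchar(255))

INSERT #inputbuffer EXEC ('DBCC INPUTBUFFER (53)')

-- Disconnect the blocking connection; its open transaction rolls back.
KILL 53
```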

Note: To stop SQL Server immediately, with no checkpoints, use the T-SQL SHUTDOWN statement with the WITH NOWAIT argument. This option should be used with caution, as active transactions will be rolled back at startup and changes will be lost. Recovery time will generally be longer as well.

Operations That Require a SQL Server Restart

The following administrative functions require a restart of the SQL Server 2000 service before they can take effect:

  • Applying a service pack

  • Changing a server name (which is not possible on a failover cluster)

  • Changing the service(s) account or password

  • Changing mail profiles

  • Resetting the suspect status

  • Changing the affinity mask option

  • Changing the awe enabled option

  • Changing the c2 audit mode option

  • Changing the default fill factor setting for the server

  • Changing the lightweight pooling option

  • Changing the locks option

  • Changing the open objects option

  • Changing the priority boost option

  • Changing the remote access option

  • Changing the scan for startup procs option

  • Changing the user connections option

  • Changing the set working set size option


Database Console Commands (DBCC) are statements that allow DBAs to perform various maintenance and related activities to ensure a smoothly running SQL Server 2000 operating environment. The output of the DBCC statements should be saved for analysis. One useful approach is to save the output to a text file, trim the output to remove unnecessary data, and then load the data into a custom table. Triggers, scripts, alerts, jobs, and e-mail can be combined to create a proactive environment.
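Some DBCC output can also be captured directly with INSERT...EXEC, as in this sketch (the table name is hypothetical):

```sql
-- Store transaction log space usage for trend analysis.
CREATE TABLE dbo.LogSpaceHistory
    (DatabaseName sysname, LogSizeMB real, LogSpaceUsedPct real, Status int)

INSERT dbo.LogSpaceHistory EXEC ('DBCC SQLPERF (LOGSPACE)')
```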

Common DBCC statements run by DBAs include the following:

  • DBCC CHECKDB

  • DBCC SHOWCONTIG

  • DBCC DBREINDEX

  • DBCC INDEXDEFRAG

  • DBCC SHRINKDATABASE
DBCC CHECKDB is used to check the allocation and structural integrity of all the objects contained within the database. It is used to detect, and in some cases correct, any corruption in the database.

Note: To increase performance, consider using the noindex, physical_only, or no_infomsgs options. For read intensive environments, consider using the tablock option to speed the operation.
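For example (the database name is hypothetical):

```sql
-- Full check, skipping nonclustered indexes to shorten the run.
DBCC CHECKDB ('Sales', NOINDEX)

-- Faster physical-consistency check, with informational messages suppressed.
DBCC CHECKDB ('Sales') WITH PHYSICAL_ONLY, NO_INFOMSGS
```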

DBCC SHOWCONTIG is used to detect fragmentation in indexes. Fragmentation can severely affect performance as pages become full and then split (causing inefficient I/O scans) and as modifications leave other pages less full (causing more I/O).
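For example (the database and table names are hypothetical):

```sql
USE Sales
-- Report fragmentation for every index on the table; watch
-- Scan Density and Logical Scan Fragmentation in the output.
DBCC SHOWCONTIG ('Orders') WITH ALL_INDEXES
```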

DBCC DBREINDEX is often used to rebuild all indexes for a table with a single statement. This statement also rebuilds indexes used by primary key and unique constraints.
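For example (the table name and fill factor are hypothetical):

```sql
-- Rebuild every index on the table; '' means all indexes, and
-- 90 sets the fill factor (0 would keep the existing setting).
DBCC DBREINDEX ('Sales.dbo.Orders', '', 90)
```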

DBCC INDEXDEFRAG is used to defragment the leaf level of an index. This statement is quite useful when an index becomes fragmented. It is an online operation and can be more efficient than re-creating an index.

Caution: De-fragmenting multiple large indexes at one time can fill the transaction log or disk while the operation logs the moving of the records. To minimize this, defragment fewer indexes at a time, or just a few smaller indexes at one time. Also, ensure that there is adequate log and disk space prior to the operation.
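For example (all names are hypothetical):

```sql
-- Defragment the leaf level of one index while the table stays online.
DBCC INDEXDEFRAG ('Sales', 'Orders', 'IX_Orders_OrderDate')
```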

DBCC SHRINKDATABASE is used to reduce the size of a database. This statement can be beneficial when a database has been allocated much more space than is required. An associated DBCC SHRINKFILE statement can be used to reduce the size of an individual file—including the transaction log.
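For example (the database and logical file names are hypothetical):

```sql
USE Sales
-- Shrink the database, leaving 10 percent free space in each file.
DBCC SHRINKDATABASE ('Sales', 10)

-- Shrink the transaction log file to a target size of 100 MB.
DBCC SHRINKFILE ('Sales_log', 100)
```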

Administrative Schedules

"It depends," is the frequent answer to questions about administering an enterprise data environment. Which tools should be used? Which administrative tasks should take place? How often should these tasks occur? These and other administrative questions cannot always be answered uniformly. Many factors must be considered. How many servers are involved? How many applications? How many databases? How many users? How many transactions take place in a given period of time? What is the ratio of reads to writes for a given system?

The DBA needs to fully understand the entire data center and IT environment, all maintenance activities, and the ramifications of any change on the performance and availability of the servers and the supported applications.

Whenever possible, maintenance activities should be standardized, scripted, automated, and documented. Maintenance activities and DBCC operations should be run with a minimal impact on the performance of the production system. Options and arguments for the various commands should be evaluated and considered. These tasks, whenever possible, should take place during non-peak low-load times.

Tasks can generally be grouped into daily, weekly, and monthly schedules. Depending on the circumstances, additional administrative tasks may need to be implemented, and some tasks may occur more or less often, or not at all.

Daily Tasks

The following tasks should all be carried out on a daily basis:

  • Ensure that all services are up and running.

  • Ensure network connectivity to the server.

  • Check the Windows NT Event Viewer for error messages.

  • Check for adequate disk space for growing database files and log files.

  • Check SQL Server jobs (look for failures).

  • Check the SQL Server logs for error messages.

  • Back up the SQL Server transaction log.

  • Perform a full database backup of the master database after making changes to the system catalog.

  • Perform a full database backup of the msdb database after making changes to the database catalog.

  • Monitor Windows 2000 and SQL Server counters in System Monitor to ensure optimal performance.

  • Back up the database with a differential backup after performing non-logged operations, if the recovery model is not set to full recovery.

  • Track locking, blocking, deadlocks, long-running queries, and high-resource queries.

  • Track, at a minimum, in System Monitor: processor, memory, disk (I/O), and network. Other system and application objects and counters can be introduced for performance trend analysis and for troubleshooting performance issues as they occur.

  • Resolve all trouble tickets and document the process.
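The backup items in the list above can be as simple as the following sketch (the database name and paths are hypothetical):

```sql
-- Daily transaction log backup of a production database.
BACKUP LOG Sales TO DISK = 'G:\Backup\Sales_log.trn'

-- Full backup of master after a change to the system catalog.
BACKUP DATABASE master TO DISK = 'G:\Backup\master.bak'
```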

Weekly Tasks

These tasks should be carried out every week:

  • Perform full/differential database backups of all system and production databases.

  • If auto-update statistics is disabled, execute the UPDATE STATISTICS statement.

  • Remove physical and logical fragmentation from indexes using CREATE INDEX WITH DROP_EXISTING, remove logical fragmentation only using DBCC INDEXDEFRAG, or rebuild all indexes using DBCC DBREINDEX.

  • Remove extra unused space from data and log files.

  • Evaluate possible approaches to increasing application and server performance, based on the knowledge obtained from daily operational activities.
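The statistics and fragmentation items above might be scripted as follows (the table and index names are hypothetical):

```sql
USE Sales
-- Refresh distribution statistics with a full scan of the table.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN

-- Rebuild an existing index in place, removing fragmentation.
CREATE CLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    WITH DROP_EXISTING
```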

Monthly Tasks

Take care of these tasks once a month:

  • Perform a complete backup of the entire operating system.

  • Perform full/differential database backups of all system and production databases.

  • Perform a restore of the operating system (optional).

  • Perform a complete restore of all monthly system and production database backups on a test machine.

  • Perform DBCC CHECKDB on all restored system and production databases.

  • Execute sqldiag.exe and store the output in a secure, appropriately labeled folder.

  • Compare gathered performance statistics with baseline statistics to improve performance and to forecast future software/hardware acquisitions.

Note: Maintain a running log of all maintenance activities. The Database Maintenance Plan Wizard can be set to log activities automatically.

Database Maintenance Plan Wizard

Regular maintenance activities can easily be accomplished using the SQL Server Database Maintenance Plan Wizard.

The Maintenance Plan Wizard allows DBAs to quickly create plans and associated scheduled jobs for performing the most common administrative tasks for a database. A plan can be created for multiple databases or a single database.

The list of tasks that can be accomplished includes the following:

  • Reorganizing the data and index pages—removing fragmentation

  • Updating statistics—creating the most optimized plans

  • Removing unused space—disk space management

  • Performing DBCC CHECKDB—ensuring allocation and structural integrity

  • Backing up the database—disaster recovery

  • Backing up the transaction log—point-in-time disaster recovery

  • E-mailing or writing a maintenance report—plan reporting

  • Writing history information locally or remotely—ad-hoc auditing capability

Advanced Administrative Topics

Availability and Performance

Availability often means different things to different people. Frequently, the definition depends on where the user sits in the business organization. To some it means 99.999 percent uptime with performance as dictated by the SLA, while for others it means that the server, although it may be slow, is available most of the time and users are able to perform their job functions without large disruptions.

Performance has various interpretations as well. All applications should be optimized to work with SQL Server prior to being deployed in a production environment; this includes SQL Server parameter settings, memory settings, server hardware configuration, and correct indexes. How well an application performs when it accesses SQL Server depends on the agreements with the user community, and largely stems from the application design. For example, allowing ad hoc queries in an application is not recommended, because consistent performance may never be achieved, and the queries may never use the indexes, however they are tuned. SQL Server is only as good as the queries that are sent to it, and how it is optimized to handle the specific workload of the application.

To achieve the desired availability and performance of SQL Server, the DBA needs to establish availability and performance requirements with quantified numbers and percentages for each database, server, application, and environment. This documentation will help provide the information necessary to create an achievable SLA from which the DBA can work to improve all maintenance operations and proactively foresee potential problems.

The DBA needs to make sure that all databases are available in accordance with the SLA for the application. Availability is a combination of people, processes, proper database maintenance, redundant hardware, and technology. The main technologies in SQL Server 2000 that provide high availability are log shipping and failover clustering.

High Availability Techniques

This section discusses methods for achieving high availability.

Log Shipping

Log shipping is a high-availability feature of SQL Server 2000 Enterprise Edition. It enables DBAs to automate the process of maintaining a standby server that is a small delta of time behind the primary server. The process is configured using the SQL Server Database Maintenance Plan Wizard. The result of creating a standby server is that transaction logs are backed up on the primary server, then automatically shipped (copied) to the standby server and restored. The DBA has control over how often the backups take place and the time delta between backups and restores. The logins must also be transferred to the secondary server; otherwise, users will not be able to access the database after a role change.

In the event of a production server failure, the standby server can be brought online manually to serve application requests. There should be no need to switch back to the primary after it is fixed if the secondary machine can handle the workload, as this switch will cause more downtime. The time required to change roles to the standby server will vary depending on the environment. To reduce the number of lost transactions, the DBA should attempt to back up, copy, and apply the orphaned log (using NO_TRUNCATE) if there is still connectivity to the primary server.
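Backing up the orphaned log can be sketched as follows (the database name and path are hypothetical):

```sql
-- Run against the failed primary, if it can still be reached; the
-- backup can then be copied to and restored on the standby server.
BACKUP LOG Sales
TO DISK = '\\StandbyServer\LogShip\Sales_tail.trn'
WITH NO_TRUNCATE
```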

Setting Up Log Shipping

The initial setup for the database/server combination is done through the Database Maintenance Plan Wizard.

Note: To enable log shipping, a single database must be chosen.

Table 4.3 shows the possible parameters and tradeoffs to consider when configuring log shipping.

Table 4.3 Log Shipping Configuration

  • Backup database as part of the maintenance plan: By default, this is selected on the Specify the Database Backup Plan dialog box. It creates a SQL Server Agent job named "DB Backup Job for DB Maintenance Plan 'Configured Plan Name'". Selecting this option may not be necessary if a backup plan already exists for the database.

  • Primary server name: Name of the primary server.

  • Secondary server name(s): Names of the secondary server(s) to log ship to.

  • Database to log ship (primary server): The database from which the transaction log files will be generated.

  • Directory to store the backup file (must be accessible by both primary and secondary servers): Should be a valid UNC name.

  • Create a subdirectory under the UNC for each database (UNC file path): Set this to Yes; it avoids confusion and makes it easier to find the files configured for log shipping when more than one database is involved.

  • Delete transaction log files that are older than a certain time period (UNC file path).

  • Backup file extension (default is TRN): Leave as TRN.

  • Network share name for backup directory (must be accessible by both primary and secondary servers): Must be the same as "Directory to store the backup file"; otherwise log shipping will not work properly.

  • Transaction log destination directory (secondary server): Should be a valid UNC name on the secondary server.

  • Create and initialize new database (secondary server): Choose "No," and do the initial backup, copy, and restore manually.

  • Database load state (secondary server): Set to "No recovery" if just applying transaction logs for high availability, or to "Standby" if making it a read-only reporting server.

  • Terminate user connections in database (secondary server): Set to "Yes." If "No" is selected, a job will need to be set up to terminate the connections; otherwise the logs will never be applied. If the log-shipped database is used for reporting, reset it after each transaction log is applied so that users regain access.

  • Allow database to assume primary role (secondary server): Set this to "Yes" to allow the secondary to become the primary. Set the "Transaction log backup directory" to the same directory as "Directory to store the backup file."

  • Perform a full backup (if not using an existing database).

  • Use most recent backup file (if not using an existing database): Do not select this. Manually restore the backup file.

  • Transaction log backup schedule (default is every 15 minutes): Set this to a lower number for higher-volume sites and smaller files. If this is set to a larger number, the files will be larger and the standby server will not be as closely in synch with the primary.

  • Copy/load frequency (default is 15 minutes; primary and secondary servers): The smaller the number, the closer the data on the log shipping pair will be. If the secondary is used as a reporting server, use a higher value.

  • Load delay (default is 0 minutes): The smaller the number, the closer the data on the log shipping pair will be. However, if the transaction log files are large, an adjustment may be needed to allow time for the file to copy. If the secondary server is used as a reporting server, consider setting this higher, to stack the log files and allow users to do reporting.

  • File retention period (default is 24 hours): Configure this to match the corporate archival scheme.

  • Backup alert threshold: If the backups are large, adjust this number to accommodate them so that false errors do not appear.

  • Out of sync alert threshold: Same guideline as above—if the file takes 15 minutes to copy (because it is 3 gigabytes [GB]) and 45 minutes to apply, set this number accordingly.

  • Log shipping monitor server: This should be on a completely separate server from either the primary or the secondary server, because if either of those servers is lost, the history of what has occurred will be lost with it.

  • Authentication mode for monitor server (monitor server): Set to "Windows" if possible, but note that if the password for the account that SQL Server runs under is changed, the change must be made on all servers defined in log shipping. If SQL Server authentication is chosen, the log_shipping_monitor_probe user is created, for which a password must be entered.

  • Generate a report: This is optional; the DBA would need to configure a directory to place the reports in, and configure how long to retain the reports or e-mail them using SQL Mail.

  • Limit number of history entries in the sysdbmaintplan_history table: Log shipping is verbose, and this table can grow quickly if frequent transaction log backups are generated; adjust accordingly. When allowing unlimited growth, ensure that msdb is set to autogrow as well. To clear out the entries, execute the stored procedure sp_delete_backuphistory. Log shipping also writes entries to the application event log, which may need to be backed up and cleared from time to time.

Note: The DBA can configure only one database per maintenance plan for log shipping.

For more information on role changing and synchronizing logins, see Books Online for "How to set up and perform a log shipping role change."

Failover Clustering

Failover clustering is the leading high-availability feature of SQL Server 2000 Enterprise Edition. It allows SQL Server to utilize an existing Windows cluster (which creates redundancy) to install up to 16 SQL Server virtual servers. A virtual server is a clustered instance of SQL Server 2000. In the event of a failure, the node hosting the virtual server will relinquish control, and another node will then run the virtual server. The failover is automatic, no manual intervention is required, and the server will be transactionally current. If clients are not cluster aware, they may need to reconnect to SQL Server as it goes through a normal startup procedure. The startup time on the other node will depend on how many transactions need to be rolled forward or rolled back.

From a DBA standpoint, a clustered SQL Server should, for the most part, be treated like a normal SQL Server instance. Keep in mind, however, that because this configuration exists for high availability, the servers must be treated with greater care so that availability is not affected. Except in rare cases, use the tools that ship with SQL Server to manage the server. Use Enterprise Manager when changing any passwords, especially the password for the account that stops and starts the virtual server. Cluster Administrator must be used for certain tasks, for example, moving the resources to another node or changing elements such as drive dependencies.

The most important aspect of a successful failover clustering implementation is the underlying Windows Clustering, the configuration of the hardware, and ensuring that all databases and applications take availability into account when planning for maintenance.

For more in-depth information on failover clustering, from configuration to implementation, administration, and troubleshooting, see the Building a Highly Available Database Cluster white paper.

Backing Up and Restoring

You should know about and understand some special backup considerations for a cluster.

Although backing up the databases in a clustered environment is similar to backing up a normal server, it is definitely more complex. Cluster systems are used for large and mission-critical databases, and backing up and restoring databases in the terabyte range cannot be handled the way a 10 MB database would be, although many try to treat them in the same way. The following general best practices apply:

  • Make frequent backups

  • Have offline storage rotation for backup files put onto tape or any other media

  • Test and time restores on all backups so that in the event of an emergency, the backup will be known to be good, and the time to restore will be known as well. Knowing the time to restore is crucial when a server is down

Important: Do not use the quorum drive to store backups.

Backing Up to Disk and Tape

More often than not, it is easiest to first back up to disk.

Create a cluster disk share so that all nodes will have access to the backup share in the event of a failover. Do not attempt to back up to any local drives. After the database is backed up, it should be copied to another location, backed up to another medium such as tape, and then archived in a secure offsite location after it is tested and verified. The goal of backups in a high-availability environment is to remove a single point of failure. If a backup is made and kept on a drive somewhere, even if RAID is used, what happens if the array fails? While this is unlikely, the worst-case scenario must always be considered.

Another method is to build two steps into the backup job, using two backup destinations (for example, a tape drive and a shared cluster disk). Set up the maintenance plan, and then alter the backup task: if the backup succeeds in step one, exit with success; if it fails for any reason, invoke the second method. This ensures that there is no single point of failure in the backup strategy.

Snapshot Backups

One way to back up and restore a clustered SQL Server is to use a snapshot backup. SQL Server 2000 supports snapshot backups, in which the disks are mirrored, a complete set of disks is broken off the mirror, and that set is then used as a backup. Snapshot backups require specialized hardware and are fully supported by SQL Server 2000.

For example, TerraServer is a Web site that delivers aerial photographs and maps provided by the United States Geological Survey. The database is currently nearing two terabytes of data, and it uses Windows 2000 Datacenter Server with SQL Server 2000 failover clustering in the N+1 scenario. Backing up this very large database (VLDB) had to be carefully planned.

TerraServer employs a snapshot backup. They have three disk mirrors in addition to RAID (think of them as three columns lined up next to each other); that is, three copies of the data that are kept in synchronization by hardware. Therefore, in the event of a disk failure, two sets of backups are available. At some point, however, one of the mirror sets is broken off, and it essentially becomes a live spinning backup of the database. At the time it is broken off, however, it will no longer be kept synchronized, nor will SQL Server see it. SQL Server 2000 recognizes this, and handles its memory buffers appropriately. A tape solution is then used to back this live volume up, and at some point, spin the disk set back in, so there are three mirrors once again. This process happens on a cyclical basis.

Backing Up an Entire Clustered System

It is not enough to back up the SQL Server 2000 databases. A backup of the complete system must be performed as well. It is also important to back up the system state of a Windows cluster, and if it needs to be restored, restore the system state after the operating system is put on the machine. This requires a cluster-aware backup program. Some third-party vendors can provide this service.

The following is a list of native tools that can be used to effectively manage backup/restore operations of a clustering environment.

  • Ntbackup.exe backs up and restores cluster configuration, which includes the quorum disk and system state. The tool does not work with remote servers. If the server is running the cluster service, the system-state data will also include any resource-registry checkpoints and the quorum resource recovery log, which contains the most recent cluster database information.

  • Clusrest.exe restores the contents of a backed-up quorum log to the live quorum disk.

  • Clustool.exe backs up and restores certain parts of the cluster configuration. It also includes a migration tool for porting stand-alone file and printer shares to a cluster. Core resources such as the cluster IP address, cluster names, and quorum disks are not restored. This tool is available from the Windows 2000 Server Resource Kit and replaces clusconb.exe.

  • Dumpcfg.exe backs up and restores disk signatures. It is available as part of the Windows 2000 Server Resource Kit.

  • Cluster Automation Server is a series of ActiveX controls for working with cluster service and is part of Windows 2000 (msclus.dll). If you are running Windows NT 4.0, you will find it on the Windows 2000 SDK CD (Redist\Cluster\NT4\i386).

The previous considerations for backing up to disk and tape still apply. Ensure that there are no single points of failure and all nodes have access to the same devices in the same way.

Log Shipping and Clustering

Combining log shipping with failover clustering provides a strong secondary availability solution when failover clustering is the primary method deployed, and it removes a single point of failure. Log shipping also compensates for the distance limitation most often associated with failover clustering. The log-shipped secondary should be placed in a different geographic location so that an event such as a power failure cannot eliminate both the primary and secondary servers.

Memory Management

SQL Server 2000 dynamically maintains the memory pool by default. If SQL Server needs more memory as the resource load increases, it will request additional memory from Windows 2000. If another application starts and needs some of the memory allocated to SQL Server 2000, the SQL Server memory pool will shrink and relinquish memory to the other application. It is strongly recommended, however, that other applications and services not be installed on the SQL Server production system. This approach will limit issues stemming from resource contention that can contribute to both a lack of performance and availability of the system.

The maximum memory setting of SQL Server 2000 specifies the maximum amount of memory that SQL Server 2000 will obtain. This memory is not fully allocated at the startup of SQL Server; rather, it grows over time, as needed, until the maximum is reached. The minimum memory setting specifies that SQL Server 2000 will not relinquish any memory below the minimum value once this amount of memory has been obtained. Like the maximum setting, SQL Server 2000 does not fully acquire this value at startup, but continuously receives additional memory as needed until this value is reached.
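As a sketch, the minimum and maximum memory settings described above are configured through sp_configure. Both are advanced options, so "show advanced options" must be enabled first; the 2048 and 6144 values here are illustrative only, not recommendations:

```sql
-- Illustrative values only; size these for your own workload.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'min server memory', 2048   -- in MB
EXEC sp_configure 'max server memory', 6144   -- in MB
RECONFIGURE
```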

To maximize SQL Server performance, set a sufficient minimum memory value, but do not make it so high that it starves the operating system or other processes, which would cause Windows 2000 to page to disk. To decrease the time necessary for SQL Server to free memory to the operating system, thus decreasing the startup time for other applications on the system, consider reducing the maximum memory value for SQL Server 2000.

The preceding paragraphs described the default dynamic memory behavior of SQL Server 2000. When the Windows 2000 Address Windowing Extensions (AWE) are enabled, however, memory is not dynamically managed: all memory set for SQL Server 2000 is acquired at startup and is not released back to the Windows 2000 environment until SQL Server 2000 is stopped. This is because SQL Server 2000 is allocated memory from the Windows 2000 nonpageable pool, so the pages cannot be swapped out. To prevent potential issues, the maximum memory setting should always be configured when AWE is enabled.

Advanced Memory Management

Memory Allocation with Address Windowing Extensions and Physical Addressing Extensions

SQL Server 2000 Enterprise Edition includes support for the Windows 2000 Address Windowing Extensions (AWE) API under Windows 2000 Advanced Server or Windows 2000 Datacenter Server. These extensions allow SQL Server 2000 to access up to 8 GB of physical memory under Windows 2000 Advanced Server and 64 GB of physical memory under Windows 2000 Datacenter Server.

The ability of SQL Server 2000 to access up to 64 GB of physical memory can, if configured properly, significantly increase performance. Allocation pages are more likely to be in the buffer cache and stay there, enabling SQL Server 2000 to quickly locate the data for reads and writes. Data and index pages are more likely to be in and stay in the buffer cache, reducing the amount of continuous physical I/O. Ad-hoc and procedure plans are more likely to be in and stay in the procedure cache, reducing the need for the optimizer to frequently generate new execution plans.

With AWE, a memory-intensive application such as SQL Server 2000 can run much more efficiently, increasing performance. Windows 2000 Advanced Server and Windows 2000 Datacenter Server introduced the enhanced AWE API, which allows applications to access large amounts of physical memory. Due to the limitations of 32-bit memory addressing, only up to 4 GB of physical memory can be used by Windows NT 4.0 and Windows 2000 without AWE enabled. By default, 2 GB of memory is dedicated to the operating system and 2 GB to the application. With the /3GB switch in the Boot.ini used by the operating system, an application such as SQL Server can access up to 3 GB of memory, and the operating system is reduced to 1 GB. As a result, even if a server were configured with 8 GB of memory, anything beyond 4 GB would be virtually unusable. AWE is the support built into the operating system as a way of exposing extended memory to Win32®-based applications.

AWE requires an application, such as SQL Server 2000, to be coded specifically for AWE. AWE support within SQL Server 2000 must be configured using the awe enabled option in sp_configure.

sp_configure 'awe enabled', 1
RECONFIGURE

This is set per instance. By default, awe enabled is set to 0, or off. Enabling AWE support in SQL Server 2000 also requires some additional operating-system configuration. For more information, see Books Online—"Managing AWE Memory."
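A fuller hedged sequence might look like the following; awe enabled is an advanced option, it requires a restart of the instance to take effect, and the SQL Server service account needs the Windows "Lock Pages in Memory" privilege (see "Managing AWE Memory" in Books Online). The 6144 MB cap is illustrative only:

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'awe enabled', 1
RECONFIGURE
-- Always cap memory when AWE is on; 6144 MB is an illustrative value.
EXEC sp_configure 'max server memory', 6144
RECONFIGURE
-- Restart the instance for 'awe enabled' to take effect.
```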

Another option that can be used to take advantage of larger amounts of memory is Physical Addressing Extension (PAE). PAE enables a 32-bit operating system to address memory above 4 GB. For more information about PAE and how to set it up, see Knowledge Base article 268363, "Intel Physical Addressing Extensions (PAE) in Windows 2000."

Note: If PAE is enabled, errors may be encountered with Windows 2000 or SQL Server 2000 backups. For more information, see Knowledge Base article 280793, "SQL Server 2000 or Windows 2000 Backup Not Viewable While Running in PAE Mode."

When choosing hardware for a solution (clustered or not) and planning to use large memory, make sure that the configuration includes hardware that supports large memory. To check, search all categories for the term "large memory" on the Hardware Compatibility List (HCL).

Table 4.4 summarizes how extended memory settings should be configured based on the amount of large memory being set up.

Table 4.4 Extended Memory Configuration

  4 GB or less    4 GB to 16 GB            More than 16 GB
  /3GB switch     /3GB enabled             /3GB disabled
                  AWE enabled              AWE enabled
                  PAE enabled (Boot.ini)   PAE enabled (Boot.ini)
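For reference, the Boot.ini switches in Table 4.4 are appended to the operating system's boot entry. This hypothetical entry (the ARC disk path will differ per machine) shows the 4 GB to 16 GB case, with both /3GB and /PAE enabled:

```ini
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows 2000 Advanced Server" /3GB /PAE
```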

Note: When enabling AWE or PAE memory, it is highly recommended that the configuration be tested prior to bringing the server(s) online in a production capacity.

Although AWE memory can significantly improve the performance of SQL Server 2000, DBAs should be aware of the ramifications of enabling this option:

  • The instance of SQL Server does not dynamically manage the size of the memory address-space used.

  • If the max server memory configuration option is not set when AWE is enabled, SQL Server 2000 acquires nearly all available memory (leaving approximately 128 MB so the base operating system can function), potentially depriving the operating system and any other processes running on the same server.

  • After it has been initialized, AWE memory holds all the memory acquired at startup until the server is shut down.

  • If AWE is enabled and is taking too much memory, SQL Server must be shut down to reconfigure it, causing downtime (which makes a high-availability option such as failover clustering less available). Because the memory pages used by the instance of SQL Server are taken from the nonpageable pool of Windows memory, none of this memory can be paged out. This means that if physical memory fills up, SQL Server cannot use the page file on disk to absorb the overflow.

  • Once the max server memory option is configured, set the working set size to 0.
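The last two bullets can be combined into one hedged configuration sketch: cap max server memory and then set the working set size option to 0 (the 3072 MB cap is illustrative only):

```sql
EXEC sp_configure 'max server memory', 3072   -- illustrative cap, in MB
RECONFIGURE
EXEC sp_configure 'set working set size', 0
RECONFIGURE
```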

Memory and Processor Management With Multiple Instances

SQL Server 2000 currently supports up to 16 instances (one default and 15 named, or 16 named) on one server or under Windows Clustering. A single instance is easier to manage, and if both server nodes are equal in a clustered environment, there will be no issues in the event of a failover. When more than one instance exists in a failover cluster or on a server, however, dynamic memory should not be configured, because the SQL Server instances will compete for the same memory pool. Configure only what is needed for each particular instance, and do not forget to take into account the needs of the operating system and any other processes or applications running on the server.

For example, think of a two-node failover cluster as glasses of water. Both glasses have a maximum capacity of 8 ounces (representing 8 GB of memory), and Glass A and Glass B each contain 3 ounces of water. If you pour the contents of Glass B into Glass A, Glass A can hold the entire amount with no overflow. From a SQL Server perspective, for this example to work, AWE memory must be enabled, and each instance must use the sp_configure max server memory option to cap its memory at 3 GB. In the event of a failover, 2 GB of memory is still left for the operating system and any other processes running.
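The capacity arithmetic in the glasses analogy can be sketched as a quick planning check. This is a hypothetical helper, not a SQL Server tool: given each instance's max server memory cap and the physical memory of one node, it verifies that all instances plus an operating-system reserve would still fit on a single node after failover.

```python
def failover_fits(node_memory_gb, instance_caps_gb, os_reserve_gb=2):
    """Return True if every instance's 'max server memory' cap, plus an
    operating-system reserve, fits on one node after failover."""
    return sum(instance_caps_gb) + os_reserve_gb <= node_memory_gb

# Two instances capped at 3 GB each on 8 GB nodes: 3 + 3 + 2 = 8, fits.
print(failover_fits(8, [3, 3]))   # True
# Capping each instance at 4 GB would starve the OS after failover.
print(failover_fits(8, [4, 4]))   # False
```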

Multiple SQL Server instances present the same problem with processor power: each additional instance can take power away from other processes and instances. Before placing multiple instances of SQL Server onto a server or Windows cluster, benchmark not only memory requirements but also processor requirements, because starving an instance, another process or application, or the operating system can adversely affect the performance of the solution.

Note: Either dynamic or static memory allocation can be set for each instance of SQL Server 2000. The DBA should keep this in mind when allocating memory, so that the instances combined are not allocated so much memory that other applications are restricted (SQL Server 2000 leaves approximately 128 MB free for Windows 2000 when AWE is enabled). When clustering, the DBA should ensure that the failover server has adequate physical memory to handle the memory requirements of the failover instances.


This chapter explained the best database administration practices, and the reasoning behind choosing one option over another. Areas covered include index management, backups and restores, data movement, and the advanced topics of large memory management, log shipping, and clustering. Daily, weekly, and monthly tasks were listed. DBAs should now be able to determine which tasks to perform, how often, and which tools and technologies are appropriate for their situation.

More Information

For more information, please see the following technical resources.

Web Sites

Microsoft SQL Server Home:

SQL Server Magazine:

Microsoft Web Solution Platform:


MSDN Online:


Supporting PAE Memory Under Windows 2000

Address Windowing Extensions and Windows 2000 Datacenter Server

Building a Highly Available Database Cluster


Delaney, Kalen. Inside Microsoft SQL Server 2000. Microsoft Press, 2000.

Garcia, Marcelina, ed.; James Reding, Edward Whalen, and Steve Adrien DeLuca. Microsoft SQL Server 2000 Administrator's Companion. Microsoft Press, 2000.

Microsoft Corporation. Microsoft SQL Server 2000 Resource Kit. Microsoft Press. 2001.


Microsoft Application Center

Microsoft Host Integration Server

Microsoft Operations Manager


Log Explorer for SQL Server

Note: Third-party products are listed to enhance enterprise solutions. Microsoft does not directly endorse or support the reliability of these products.