Chapter 9 - Server Architecture

The server is the Microsoft SQL Server component that receives SQL statements from clients and performs all the actions necessary to complete the statements. This section discusses:

  • An overview of the components that make up the server. 

  • How the server compiles each batch of SQL statements into an execution plan that tells the server how to process the statement. 

  • How the server manages Microsoft Windows resources such as memory, threads, and tasks. 

  • How the server determines what part of a distributed query references a linked server and what request to transmit to the server to obtain the needed data. 

  • How the server transmits remote stored procedure calls to remote servers. 

  • How the server manages concurrency and transaction issues. 

  • How the server implements server cursors. 

  • The features that allow SQL Server to scale from small laptop computers to large servers that provide the primary data storage for large enterprises. 

  • How the SQLMail component integrates SQL Server with e-mail servers to allow the server to send e-mail and pages when specified events occur. 

Server Architecture Overview

The server components of Microsoft SQL Server receive SQL statements from clients and process those SQL statements. This illustration shows the major components involved with processing an SQL statement received from a SQL Server client.

[Illustration: the major components involved in processing an SQL statement received from a client]

Tabular Data Stream 

SQL statements are sent from clients by using an application-level protocol specific to SQL Server called Tabular Data Stream (TDS). SQL Server version 7.0 accepts two versions of TDS:

  • TDS 7.0 sent by clients running the 7.0 versions of the SQL Server client components. TDS 7.0 clients support all the features of SQL Server 7.0. 

  • TDS 4.2 sent by clients running SQL Server client components from SQL Server 4.21a, 6.0, and 6.5. TDS 4.2 clients do not support features introduced in SQL Server 7.0, and the server sometimes has to adjust the data it sends back to clients using TDS 4.2. For example, TDS 4.2 clients do not support Unicode data types, so SQL Server 7.0 must convert any Unicode data to character data before sending it to the client, with possible loss of extended characters. TDS 4.2 clients also do not support char, varchar, binary, or varbinary values longer than 255 bytes, so SQL Server 7.0 must truncate any values longer than 255 bytes before sending them to the client. 
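The effect of the Unicode conversion can be sketched in Transact-SQL. This is an illustration only; the actual conversion happens in the server's TDS layer, not in a user query, and the sample string is hypothetical:

-- Converting Unicode data to character data can lose extended characters,
-- which is what happens to data returned to a TDS 4.2 client.
DECLARE @uni NVARCHAR(30)
SET @uni = N'Order ' + NCHAR(12459)   -- append Unicode 0x30AB, a Katakana character
SELECT CONVERT(VARCHAR(30), @uni)     -- characters with no mapping in the
                                      -- character code page typically become '?'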

Server Net-Libraries 

TDS packets are built by the Microsoft OLE DB Provider for SQL Server, the SQL Server ODBC driver, or the DB-Library DLL. The TDS packets are then passed to a SQL Server client Net-Library, which encapsulates the TDS packets into network protocol packets. On the server, the network protocol packets are received by a server Net-Library that extracts the TDS packet and passes it to Open Data Services.

This process is reversed when results are returned to the client.

Each server can listen on several network protocols simultaneously and runs one server Net-Library for each protocol on which it listens.

Open Data Services 

Open Data Services is the component that manages all the TDS packets coming from the server Net-Libraries. Open Data Services determines the type of each TDS packet (login packet, SQL statement execution, or remote stored procedure call) and then calls the appropriate function in SQL Server.

Database Server 

The database server processes all requests passed to it from Open Data Services. It compiles all the SQL statements into execution plans, then uses the plans to access the requested data and build the result set returned to the client.

See Also 

In This Volume 

Client/Server Architecture

Database Server

The database server of Microsoft SQL Server has two main parts: the relational engine (RE) and the storage engine (SE). One of the most important architectural changes made in SQL Server version 7.0 is to clearly separate the relational and storage engine components within the server and to have them use the OLE DB API to communicate with each other.

[Illustration: the relational engine and the storage engine communicating with each other through OLE DB]

The processing for a SELECT statement that references only tables in local databases can be briefly summarized as:

  1. The relational engine compiles the SELECT statement into an optimized execution plan. The execution plan defines a series of operations against simple rowsets from the individual tables or indexes referenced in the SELECT statement. 

    A rowset is the OLE DB term for a result set. The rowsets requested by the relational engine return the amount of data needed from a table or index to perform one of the operations used to build the SELECT result set. For example, this SELECT statement requires a table scan if it references a table with no indexes: 

    SELECT * FROM ScanTable
    
    The relational engine implements the table scan by requesting one rowset containing all the rows from ScanTable. 

    This SELECT statement only needs information available in an index: 

    SELECT DISTINCT LastName
    FROM Northwind.dbo.Employees

    The relational engine implements the index scan by requesting one rowset containing the leaf rows from the index built on the LastName column. 

    This SELECT statement needs information from two indexes: 

    SELECT CompanyName, OrderID, ShippedDate
    FROM Northwind.dbo.Customers AS Cst
    JOIN Northwind.dbo.Orders AS Ord
    ON (Cst.CustomerID = Ord.CustomerID)

    The relational engine requests two rowsets, one for the clustered index on Customers and the other on one of the nonclustered indexes in Orders. 
  2. The relational engine then uses the OLE DB API to request that the storage engine open the rowsets. 

  3. As the relational engine works through the steps of the execution plan and needs data, it uses OLE DB to fetch the individual rows from the rowsets it requested the storage engine to open. The storage engine transfers the data from the data buffers to the relational engine. 

  4. The relational engine combines the data from the storage engine rowsets into the final result set transmitted back to the user. 
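One way to observe the operations the relational engine chooses is the showplan output. A minimal sketch, assuming the Northwind sample database is installed:

SET SHOWPLAN_TEXT ON
GO
-- Displays the operators in the plan, such as the scan of the index on
-- LastName, without executing the statement.
SELECT DISTINCT LastName
FROM Northwind.dbo.Employees
GO
SET SHOWPLAN_TEXT OFF
GO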

Relational Engine

The main responsibilities of the relational engine are:

  • Parsing the SQL statements.

    The parser scans an SQL statement and breaks it down into the logical units, such as keywords, parameters, operators, and identifiers. The parser also breaks down the overall SQL statement into a series of smaller logical operations. 

  • Optimizing the execution plans.

    There are typically many different ways that the server could use data from the source tables to build the result set. The optimizer determines what these various series of steps are, estimates the cost of each series (primarily in terms of file I/O), and chooses the series of steps that has the lowest cost. It then combines the specific steps with the query tree to produce an optimized execution plan. 

  • Executing the series of logical operations defined in the execution plan. 

  • Processing Data Definition Language (DDL) and other statements.

    These statements are not the typical SELECT, INSERT, DELETE, or UPDATE statements, and have special processing needs. Examples are the SET statements to set connection options, and the CREATE statements to create objects in a database. 

Storage Engine

The main responsibilities of the storage engine are:

  • Managing the files on which the database is stored and the usage of space in the files. 

  • Building and reading the physical pages used to store data. 

  • Managing the data buffers and all I/O to the physical files. 

  • Controlling concurrency. Managing transactions and using locking to control concurrent user access to rows in the database. 

  • Logging and recovery. 

  • Implementing utility functions such as the BACKUP, RESTORE, and DBCC statements and bulk copy. 

SQL Server Language Support

Microsoft SQL Server version 7.0 is installed with 24 natural languages defined on the server. The definition for each language establishes how date data is interpreted, including:

  • Short and long names for each month. 

  • Names for each day. 

  • Which day is considered the first day of the week. 

These language definitions are stored in master.dbo.syslanguages, and each language is identified by a language ID.

Each SQL Server installation uses a default language for all connections to the server. For more information about configuring the setting, see "default language Option" in Microsoft SQL Server Administrator's Companion.

Most connections use the default language configured for the server, but each connection can individually set a SQL Server language to be used for the connection:

  • Microsoft ActiveX Data Objects (ADO) and OLE DB applications can include the Language keyword in a provider string specified when they connect. 

  • OLE DB applications can also set the provider-specific property SSPROP_INIT_CURRENTLANGUAGE before connecting. 

  • Open Database Connectivity (ODBC) applications can include the LANGUAGE keyword in a connection string specified on SQLDriverConnect. ODBC applications can also specify the language setting in a SQL Server ODBC data source definition. 

  • DB-Library applications can use dblogin to allocate a LOGINREC, then use the DBSETLNATLANG macro to specify a language setting before calling dbopen to connect. 

  • Any application can use the SET LANGUAGE statement to specify the SQL Server language. 
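For example, a connection can list the languages defined on the server and then switch the language used for date names. A brief sketch; the output depends on the languages installed:

-- List the languages defined on the server.
SELECT langid, name, alias
FROM master.dbo.syslanguages

-- Switch the connection to French; month and day names change accordingly.
SET LANGUAGE French
SELECT DATENAME(month, GETDATE())   -- returns, for example, 'juin'
SET LANGUAGE us_english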

SQL Server also supports having multiple language-specific copies of the error messages stored in master.dbo.sysmessages. All SQL Server installations get the set of English messages. SQL Server is localized, or translated, into French, German, Spanish, and Japanese. Installations of localized versions of SQL Server install the translated set of messages in addition to the English set. When SQL Server sends a message to a connection, it uses the localized message if the language ID of the connection matches one of the language IDs present in sysmessages. If there is no message in sysmessages with the same language ID, the English version of the message is sent.

Query Processor Architecture

SQL statements are the only commands sent from applications to Microsoft SQL Server. All of the work done by a copy of SQL Server is the result of accepting, interpreting, and executing SQL statements. These are the processes by which SQL statements are executed by SQL Server:

  • Single SQL statement processing 

  • Batch processing 

  • Stored procedure and trigger execution 

  • Execution plan caching and reuse 

  • Parallel query processing 

Single SQL Statement Processing

Processing a single SQL statement is the simplest case of how Microsoft SQL Server executes SQL statements. The steps used to process a single SELECT statement that references only local base tables (no views or remote tables) illustrate the basic process.

Optimizing SELECT Statements 

A SELECT statement does not state the exact steps the database server should use to retrieve the requested data. This means the database server must analyze the statement to determine the most efficient way to extract the data. This is called optimizing the SELECT statement, and the component that does this is called the query optimizer.

A SELECT statement only defines:

  • The format of the result set. This is specified mostly in the select list, although other clauses such as ORDER BY and GROUP BY also affect the final form of the result set. 

  • The tables containing the source data. This is specified in the FROM clause. 

  • How the tables are logically related for the purposes of the SELECT statement. This is defined in the join specifications. 

  • What conditions the rows in the source tables must satisfy to qualify for the SELECT statement. These are specified in the WHERE and HAVING clauses. 

A query execution plan is a definition of:

  • The sequence in which the source tables are accessed. 

    There are typically many different sequences in which the database server can access the base tables to build the result set. For example, if the SELECT statement references three tables, the database server could first access TableA, use the data from TableA to extract matching rows from TableB, and then use the data from TableB to extract data from TableC. The database server could also access the tables in the reverse sequence, or in the sequence TableB, TableA, TableC, or TableB, TableC, TableA, or TableC, TableA, TableB. 

  • The methods used to extract data from each table. 

    There are also typically different methods for accessing the data in each table. If only a few rows with specific key values are needed, the database server can use an index. If all the rows in the table are needed, the database server can ignore the indexes and do a table scan. If all the rows in a table are needed, but there is an index whose key columns are in an ORDER BY, doing an index scan instead of a table scan may save a separate sort of the result set. If a table is very small, table scans may be the most efficient method for almost all access to the table. 

The process of choosing one execution plan out of several possible plans is called optimization. The query optimizer is one of the most important components of an SQL database system. While some overhead is used by the optimizer to analyze the query and choose a plan, this overhead is saved back several-fold when the optimizer picks an efficient execution plan. For example, two construction companies can be given identical blueprints for a house. If one company spends a few days at the start to plan how they will build the house, and the other company just starts building without planning, the company that takes the time to plan their project will most likely finish first.

The SQL Server query optimizer is a cost-based optimizer. Each possible execution plan has an associated cost in terms of the amount of computing resources used. The optimizer must analyze the possible plans and choose the one with the lowest estimated cost. Some complex SELECT statements have thousands of possible execution plans. In these cases, the optimizer does not analyze all possible combinations. It instead uses sophisticated algorithms to quickly find an execution plan that has a cost reasonably close to the theoretical minimum.

The SQL Server query optimizer does not strictly choose the execution plan with the lowest resource cost; it chooses the plan that most quickly returns results to the user with a reasonable cost in resources. For example, processing a query in parallel typically uses more resources than processing it serially, but completes the query faster. The SQL Server optimizer will use a parallel execution plan to return results if the load on the server will not be adversely affected.

The optimizer relies heavily on distribution statistics when estimating the resource costs of different methods of extracting information from a table or index. Distribution statistics are kept for columns and indexes. They indicate the selectivity of the values in a particular index or column. For example, in a table representing cars, many cars have the same manufacturer, but each car has a unique vehicle identification number. An index on the vehicle identification number is more selective than an index on manufacturer. If the index statistics are not current, the optimizer may not make the best choice for the current state of the table. For more information about keeping index statistics current, see "Statistical Information" in Microsoft SQL Server Database Developer's Companion.
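For example, statistics can be regenerated and examined with Transact-SQL. A short sketch, assuming the Northwind sample database; the index name passed to DBCC SHOW_STATISTICS is an assumption:

USE Northwind
GO
-- Regenerate distribution statistics for all indexes on the Orders table.
UPDATE STATISTICS Orders
GO
-- Display the distribution statistics kept for one index.
DBCC SHOW_STATISTICS ('Orders', CustomerID)
GO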

The optimizer is important because it lets the database server adjust dynamically to changing conditions in the database without needing input from a programmer or database administrator. This frees programmers to focus on describing the final result of the query. They can trust the optimizer to always build an efficient execution plan for the state of the database each time the statement is run.

Processing a SELECT Statement 

The basic steps SQL Server uses to process a single SELECT statement are:

  1. The parser scans the SELECT statement and breaks it down into logical units such as keywords, expressions, operators, and identifiers. 

  2. A query tree, sometimes called a sequence tree, is built describing the logical steps needed to transform the source data into the format needed by the result set. 

  3. The optimizer then analyzes all the ways the source tables can be accessed and selects the series of steps that returns the results fastest while consuming the fewest resources. The query tree is updated to record this exact series of steps, and the final, optimized version of the query tree is called the execution plan. 

  4. The relational engine begins executing the execution plan. As steps that need data from the base tables are processed, the relational engine uses OLE DB to request that the storage engine pass up data from the rowsets the relational engine asked the storage engine to open. 

  5. The relational engine processes the data returned from the storage engine into the format defined for the result set, and returns the result set to the client. 

Processing Other Statements 

The basic steps described for processing a SELECT statement also apply to other SQL statements such as UPDATE, DELETE, and INSERT. UPDATE and DELETE statements both have to target the set of rows to be modified or deleted; the process of identifying these rows is the same as that used to identify the source rows that contribute to the result set of a SELECT statement. The UPDATE and INSERT statements may both contain embedded SELECT statements that provide the data values to be updated or inserted.

Even DDL statements such as CREATE PROCEDURE or ALTER TABLE are ultimately resolved to a series of relational operations on the system catalog tables and sometimes (such as ALTER TABLE ADD COLUMN) against the data tables.

View Resolution

An SQL statement is resolved to a single execution plan, even when it references a view. Execution plans are not stored for views; the source of the view is stored. When an SQL statement references a view, the parser and optimizer analyze the source of both the SQL statement and the view and resolve them into a single execution plan. There is not one plan for the SQL statement and a separate plan for the view.

For example, consider the following view:

USE Northwind
GO
CREATE VIEW EmployeeName AS
SELECT EmployeeID, LastName, FirstName
FROM Northwind.dbo.Employees
GO

Given this view, both of these SQL statements perform the same operations on the base tables and produce the same results:

/* SELECT referencing the EmployeeName view. */
SELECT LastName AS EmployeeLastName,
OrderID, OrderDate
FROM Northwind.dbo.Orders AS Ord
JOIN Northwind.dbo.EmployeeName as EmpN
ON (Ord.EmployeeID = EmpN.EmployeeID)
WHERE OrderDate > '31 May, 1996'

/* SELECT referencing the Employees table directly. */
SELECT LastName AS EmployeeLastName,
OrderID, OrderDate
FROM Northwind.dbo.Orders AS Ord
JOIN Northwind.dbo.Employees as Emp
ON (Ord.EmployeeID = Emp.EmployeeID)
WHERE OrderDate > '31 May, 1996'

The SQL Server Query Analyzer showplan feature shows that the relational engine builds the same execution plan for both of these SELECT statements.

Worktables

The relational engine may need to build a worktable to perform a logical operation specified in an SQL statement. Worktables are typically generated for certain GROUP BY, ORDER BY, or UNION queries. For example, if an ORDER BY clause references columns that are not covered by any indexes, the relational engine may need to generate a worktable to sort the result set into the order requested.

Worktables are built in tempdb and are dropped automatically at the end of the statement.
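For example, assuming the Northwind sample database and no index covering the ShipCity column of the Orders table, the ORDER BY in this statement is likely to be satisfied by sorting the result set in a worktable:

-- The showplan output for this statement typically includes a Sort step,
-- which is backed by a worktable in tempdb.
SELECT OrderID, ShipCity
FROM Northwind.dbo.Orders
ORDER BY ShipCity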

Batch Processing

A batch is a collection of one or more SQL statements sent in one unit by the client. Each batch is compiled into a single execution plan. If the batch contains multiple SQL statements, all of the optimized steps needed to perform all the statements are built into a single execution plan.

There are several ways to specify a batch:

  • All the SQL statements sent in a single execution unit from an application comprise a single batch and generate a single execution plan. For more information about how an application specifies a batch, see "Batches" in Microsoft SQL Server Database Developer's Companion. 

  • All the statements in a stored procedure or trigger comprise a single batch. Each stored procedure or trigger is compiled into a single execution plan. 

  • The string executed by an EXECUTE statement is a batch compiled into a single execution plan. 

  • The string executed by an sp_executesql system stored procedure is a batch compiled into a single execution plan. 

When a batch sent from an application contains an EXECUTE statement, the execution plan for the executed string or stored procedure is executed separately from the execution plan containing the EXECUTE statement. The execution plan generated for the string executed by an sp_executesql stored procedure also remains separate from the execution plan for the batch containing the sp_executesql call. If a trigger is fired by a statement in a batch, the trigger execution plan executes separately from the original batch.

For example, a batch that contains these four statements uses five execution plans:

  • An EXECUTE statement executing a stored procedure. 

  • An sp_executesql call executing a string. 

  • An EXECUTE statement executing a string. 

  • An UPDATE statement referencing a table that has an update trigger. 
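A sketch of such a batch follows. The procedure name is hypothetical, and an update trigger is assumed to exist on the Orders table; the batch itself compiles to one plan, and each of the four statements causes an additional plan to be executed:

-- Plan 1 is the batch itself.
EXEC GetCustomerOrders 'ALFKI'                       -- plan 2: the procedure
EXEC sp_executesql
    N'SELECT * FROM Northwind.dbo.Orders WHERE OrderID = @id',
    N'@id INT', 10248                                -- plan 3: the string
EXEC ('SELECT COUNT(*) FROM Northwind.dbo.Orders')   -- plan 4: the string
UPDATE Northwind.dbo.Orders
SET ShipCity = ShipCity
WHERE OrderID = 10248                                -- plan 5: the trigger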

[Illustration: the five execution plans used to process the batch]

Stored Procedure and Trigger Execution

Microsoft SQL Server version 7.0 changes the way stored procedures and triggers are executed. SQL Server 7.0 also introduces changes in the way all execution plans are managed that give many batches the same performance benefits that were available only to stored procedures in earlier versions of SQL Server.

SQL Server 7.0 stores only the source for stored procedures and triggers. When a stored procedure or trigger is first executed, the source is compiled into an execution plan. If the stored procedure or trigger is again executed before the execution plan is aged from memory, the relational engine detects the existing plan and reuses it. If the plan has aged out of memory, a new plan is built. This process is similar to the process SQL Server 7.0 follows for all SQL statements. The main performance advantage that stored procedures and triggers have in SQL Server 7.0 is that their SQL statements are always the same, so the relational engine easily matches them with any existing execution plans.

Stored procedures had a more pronounced performance advantage over other SQL statements in earlier versions of SQL Server. Earlier versions of SQL Server did not attempt to reuse execution plans for batches that were not stored procedures or triggers. The only way to reuse execution plans was to encode the SQL statements in stored procedures.

The execution plan for stored procedures and triggers is executed separately from the execution plan for the batch calling the stored procedure or firing the trigger. This allows for greater reuse of the stored procedure and trigger execution plans. For more information, see "Batch Processing" in this volume.

Execution Plan Caching and Reuse

Microsoft SQL Server version 7.0 introduces significant changes in the way execution plans are managed. There is a much higher chance that individual execution plans will be reused in a SQL Server 7.0 system than in earlier versions.

SQL Server 7.0 has a pool of memory that is used to store both execution plans and data buffers. The percentage of the pool allocated to either execution plans or data buffers fluctuates dynamically depending on the state of the system. The part of the memory pool used to store execution plans is called the procedure cache.

SQL Server 7.0 execution plans have two main components:

  • Query plan 

    The bulk of the execution plan is a reentrant, read-only data structure that can be used by any number of users. This is called the query plan. No user context is stored in the query plan. There are never more than one or two copies of the query plan in memory: one copy for all serial executions and another for all parallel executions. The parallel copy covers all parallel executions, regardless of their degree of parallelism. 

  • Execution context 

    Each user currently executing the query has a data structure that holds the data specific to their execution, such as parameter values. This data structure is called the execution context. The execution context data structures are reused. If a user executes a query and one of the structures is not in use, it is reinitialized with the context for the new user. 

[Illustration: one shared query plan with multiple per-user execution contexts]

When any SQL statement is executed in SQL Server 7.0, the relational engine first looks through the procedure cache to see if there is an existing execution plan for the same SQL statement. SQL Server 7.0 reuses any existing plan it finds, saving the overhead of recompiling the SQL statement. If there is no existing execution plan, SQL Server 7.0 generates a new execution plan for the query.

SQL Server 7.0 has an efficient algorithm to find any existing execution plans for any given SQL statement. In most systems, the minimal resources used by this scan are less than the resources saved by being able to reuse existing plans instead of compiling every SQL statement.
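In SQL Server 7.0, the contents of the procedure cache can be inspected through the syscacheobjects system table. A brief sketch; the column names follow the 7.0 system tables documentation:

-- 'Compiled Plan' rows are the shared query plans; 'Executable Plan' rows
-- are the per-user execution contexts.
SELECT cacheobjtype, objtype, usecounts, sql
FROM master.dbo.syscacheobjects
WHERE cacheobjtype IN ('Compiled Plan', 'Executable Plan')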

The algorithms to match new SQL statements to existing, unused execution plans in the cache require that all object references be fully qualified. For example, the first of these SELECT statements is not matched with an existing plan, while the second is:

SELECT * FROM Employees

SELECT * FROM Northwind.dbo.Employees

Aging Execution Plans 

After an execution plan is generated, it stays in the procedure cache. SQL Server 7.0 ages old, unused plans out of the cache only when space is needed. Each query plan and execution context has an associated cost factor that indicates how expensive the structure is to compile. These data structures also have an age field. Each time the object is referenced by a connection, the age field is incremented by the compilation cost factor. For example, if a query plan has a cost factor of 8 and is referenced twice, its age becomes 16. The lazywriter process periodically scans the list of objects in the procedure cache. On each scan, the lazywriter decrements the age field for each object by 1. The age of our sample query plan is decremented to 0 after 16 scans of the procedure cache, unless another user references the plan. The lazywriter process deallocates an object if three conditions are met:

  • The memory manager needs memory and all available memory is currently used. 

  • The age field for the object is 0. 

  • The object is not currently being referenced by a connection. 

Because the age field is incremented each time an object is referenced, frequently referenced objects do not have their age fields decremented to 0 and are not aged from the cache. Objects that are infrequently referenced are soon eligible for deallocation, but are not actually deallocated unless memory is needed for other objects.

Recompiling Execution Plans 

Certain changes in a database can cause an execution plan to be either inefficient or no longer valid, given the new state of the database. SQL Server detects these changes that invalidate an execution plan, and marks the plan as invalid. A new plan must then be recompiled for the next connection that executes the query. The conditions that cause a plan to be invalidated include:

  • Any structural changes made to a table or view referenced by the query (ALTER TABLE and ALTER VIEW). 

  • New distribution statistics being generated either explicitly from a statement such as UPDATE STATISTICS or automatically. 

  • Dropping an index used by the execution plan. 

  • An explicit call to sp_recompile. 

  • Large numbers of changes to keys (generated by INSERT or DELETE statements against a table referenced by the query). 

  • For tables with triggers, if the number of rows in the inserted or deleted tables grows significantly. 
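A recompile can also be requested explicitly. For example, assuming the Northwind sample database:

USE Northwind
GO
-- Marks every stored procedure and trigger that references the Orders table
-- for recompilation the next time it is executed.
EXEC sp_recompile 'Orders'
GO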

Parameters and Execution Plan Reuse

The use of parameters, including parameter markers in ADO, OLE DB, and ODBC applications, can increase the reuse of execution plans.

The only difference between the following two SELECT statements is the values compared in the WHERE clause:

SELECT * FROM Northwind.dbo.Products WHERE CategoryID = 1

SELECT * FROM Northwind.dbo.Products WHERE CategoryID = 4

The only difference between the execution plans for these two queries is the value stored for the comparison against the CategoryID column. Ideally, Microsoft SQL Server version 7.0 would recognize that the statements generate essentially the same plan and reuse the plans.

Separating constants from the SQL statement by using parameters helps the relational engine recognize duplicate plans. There are two ways to use parameters:

  • In Transact-SQL, use sp_executesql: 

    DECLARE @MyIntParm INT
    SET @MyIntParm = 1
    EXEC sp_executesql
        N'SELECT * FROM Northwind.dbo.Products WHERE CategoryID = @Parm',
        N'@Parm INT',
        @MyIntParm

    This particular method is best suited for Transact-SQL scripts, stored procedures, or triggers that generate SQL statements dynamically. For more information, see "Building Statements at Run Time" in Microsoft SQL Server Database Developer's Companion. 

  • In ADO, OLE DB, and ODBC, use parameter markers. Parameter markers are question marks (?) that replace a constant in an SQL statement and are bound to a program variable. For example, in an ODBC application:

    - Use SQLBindParameter to bind an integer variable to the first parameter marker in an SQL statement. 

    - Place the integer value in the variable. 

    - Execute the statement, specifying the parameter marker (?): 

      SQLExecDirect(hstmt,
          "SELECT * FROM Northwind.dbo.Products WHERE CategoryID = ?",
          SQL_NTS);

    The OLE DB Provider for SQL Server and the SQL Server ODBC driver that ship with SQL Server 7.0 both use sp_executesql to send statements to SQL Server 7.0 when parameter markers are used in applications. 

See Also 

In Other Volumes 

"sp_executesql" in Microsoft SQL Server Transact-SQL and Utilities Reference 

"Command Parameters" in Microsoft SQL Server Building Applications 

"Parameters Collection and Parameter Object" in Microsoft SQL Server Building Applications 

"Using Statement Parameters" in Microsoft SQL Server Building Applications 

Auto-parameterization

In Microsoft SQL Server version 7.0, using parameters or parameter markers in Transact-SQL statements increases the ability of the relational engine to match new SQL statements with existing, unused execution plans. If an SQL statement is executed without parameters, SQL Server 7.0 parameterizes the statement internally to increase the possibility of matching it against an existing execution plan.

Consider this statement:

SELECT * FROM Northwind.dbo.Products WHERE CategoryID = 1

The value 1 at the end of the statement can be specified as a parameter. The relational engine builds the execution plan for this batch as if a parameter had been specified in place of the value 1. Because of this auto-parameterization, SQL Server 7.0 recognizes that the following two statements generate essentially the same execution plan and reuses the first plan for the second statement:

SELECT * FROM Northwind.dbo.Products WHERE CategoryID = 1

SELECT * FROM Northwind.dbo.Products WHERE CategoryID = 4

When processing complex SQL statements, the relational engine may have difficulty determining which expressions can be auto-parameterized. To increase the ability of the relational engine to match complex SQL statements to existing, unused execution plans, explicitly specify the parameters using either sp_executesql or parameter markers. For more information, see "Parameters and Execution Plan Reuse" in this volume.

Preparing SQL Statements

The Microsoft SQL Server version 7.0 relational engine introduces full support for preparing SQL statements before they are executed. If an application needs to execute an SQL statement several times, it can use the database API to:

  • Prepare the statement once. 

    This compiles the SQL statement into an execution plan. 

  • Execute the precompiled execution plan each time it needs to execute the statement. 

    This saves recompiling the SQL statement on each execution after the first. 

Preparing and executing statements is controlled by API functions and methods. It is not a part of the Transact-SQL language. The prepare/execute model of executing SQL statements is supported by the OLE DB Provider for SQL Server and the SQL Server ODBC driver. On a prepare request, the provider or driver sends the statement to SQL Server with a request to prepare the statement. SQL Server compiles an execution plan and returns a handle to that plan to the provider or driver. On an execute request, the provider or driver sends the server a request to execute the plan associated with the handle.

Excessive use of the prepare/execute model can degrade performance. If a statement is executed only once, direct execution requires just one network round trip to the server. Preparing and then executing an SQL statement that is executed only once requires two network round trips: one to prepare the statement and one to execute it.

Preparing a statement is more effective if parameter markers are used. For example, assume an application may be asked occasionally to retrieve product information from the Northwind sample database. There are two ways the application can do this. First, the application could execute a separate query for each product requested:

SELECT * FROM Northwind.dbo.Products
WHERE ProductID = 63

An alternative would be for the application to:

  1. Prepare a statement containing a parameter marker (?): 

    SELECT * FROM Northwind.dbo.Products
    WHERE ProductID = ?

  2. Bind a program variable to the parameter marker. 

  3. Each time product information is needed, fill the bound variable with the key value and execute the statement. 

The second method is more efficient when the statement is executed more than three or four times.

In SQL Server 7.0, the relative performance advantage of the prepare/execute model over direct execution is reduced by the reuse of execution plans. SQL Server 7.0 has efficient algorithms for matching current SQL statements with execution plans generated for prior executions of the same SQL statement. If an application executes an SQL statement with parameter markers multiple times, SQL Server 7.0 will reuse the execution plan from the first execution for the second and subsequent executions (unless the plan ages from the procedure cache). The prepare/execute model still offers these benefits:

  • Finding an execution plan by an identifying handle is slightly more efficient than the algorithms used to match an SQL statement to existing execution plans. 

  • The application can control when the execution plan is created and when it is reused. 

  • The prepare/execute model is portable to other databases, including earlier versions of SQL Server. 

Prepare and Execute in Earlier Versions of SQL Server 

Versions of SQL Server earlier than 7.0 did not support the prepare/execute model directly. The SQL Server ODBC driver, however, supported the prepare/execute model through the use of stored procedures:

  • When an application requested that an SQL statement be prepared, the ODBC driver would wrap the SQL statement in a CREATE PROCEDURE statement and send that to SQL Server. 

  • On an execute request, the ODBC driver would request that SQL Server execute the generated stored procedure. 

In SQL Server 6.0 and 6.5, the generated stored procedures were temporary stored procedures stored in tempdb. SQL Server 4.21a and earlier versions did not support temporary stored procedures, so the driver generated regular stored procedures stored in the current database. The OLE DB Provider for SQL Server and the SQL Server ODBC driver released with SQL Server 7.0 follow these behaviors when connected to versions 4.21a, 6.0, or 6.5 of SQL Server.

See Also 

In This Volume 

Execution Plan Caching and Reuse

Parameters and Execution Plan Reuse

In Other Volumes 

"Command Object" in Microsoft SQL Server Building Applications 

"Preparing Commands" in Microsoft SQL Server Building Applications 

"Prepared Execution" in Microsoft SQL Server Building Applications 

Parallel Query Processing

Microsoft SQL Server provides parallel queries to optimize query execution for computers having more than one processor. By performing a query in parallel using several operating system threads, SQL Server completes complex queries with large amounts of data quickly and efficiently.

During query optimization, SQL Server looks for queries which might benefit from parallel execution. For these queries, SQL Server inserts exchange operators into the query execution plan to prepare the query for parallel execution. An exchange operator is an operator in a query execution plan that provides process management, data redistribution, and flow control. After exchange operators are inserted, the result is a parallel query execution plan. A parallel query execution plan can use more than one thread, whereas a serial execution plan, used by a nonparallel query, uses only a single thread for its execution. The actual number of threads used by a parallel query is determined at query plan execution initialization and is called the degree of parallelism.

Degree of Parallelism

Microsoft SQL Server detects the best degree of parallelism for each instance of a parallel query execution automatically by considering the following:

  1. Is SQL Server running on a computer with more than one processor (an SMP computer)? 

    Only computers with more than one processor can take advantage of parallel queries. 

  2. What is the number of concurrent users active on the SQL Server installation at this moment? 

    SQL Server monitors its CPU usage and adjusts the degree of parallelism at the query startup time. Lower degrees of parallelism are chosen if the CPUs are already busy. 

  3. Is there sufficient memory available for parallel query execution? 

    Each query requires a certain amount of memory to execute. Executing a parallel query requires more memory than a nonparallel query. The amount of memory required for executing a parallel query increases with the degree of parallelism. If the memory requirement of the parallel plan for a given degree of parallelism cannot be satisfied, SQL Server decreases the degree of parallelism automatically or completely abandons the parallel plan for the query in the given workload context and executes the serial plan. 

  4. What is the type of query being executed? 

    Queries that consume CPU cycles heavily are the best candidates for a parallel query. For example, joins of large tables, substantial aggregations, and sorting of large result sets are good candidates. For simple queries, often found in transaction processing applications, the additional coordination required to execute a query in parallel outweighs the potential performance boost. To distinguish between queries that benefit from parallelism and those that do not, SQL Server compares the estimated cost of executing the query with the cost threshold for parallelism value. Although not recommended, users can change the default value of 5 by using sp_configure, as shown in the example after this list. 

  5. Is there a sufficient number of rows processed in the given stream? 

    If the optimizer determines the number of rows in a stream is too low, it does not introduce exchange operators to distribute the stream. Consequently, the operators in this stream are executed serially. Executing the operators in a serial plan avoids scenarios in which the startup, distribution, and coordination costs exceed the gains achieved by parallel operator execution. 
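For example, the threshold can be raised with sp_configure. A sketch; cost threshold for parallelism is an advanced option, so show advanced options must be enabled first:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Only queries with an estimated serial cost above 10 are now considered
-- for parallel execution plans.
EXEC sp_configure 'cost threshold for parallelism', 10
RECONFIGURE
GO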

The INSERT, UPDATE, and DELETE operators are executed serially. However, the WHERE clause of an UPDATE or DELETE statement, or the SELECT portion of an INSERT statement, may be executed in parallel. The actual data changes are then serially applied to the database.

Static and keyset cursors can be populated by parallel execution plans. However, the behavior of dynamic cursors can be provided only by serial execution. The optimizer always generates a serial execution plan for a query that is part of a dynamic cursor.

At execution time, SQL Server determines whether the current system workload and configuration information allow for parallel query execution. If parallel query execution is warranted, SQL Server determines the optimal number of threads and spreads the parallel query's execution across those threads. When a query starts executing on multiple threads for parallel execution, the query uses the same number of threads until completion. SQL Server reexamines the optimal number of threads each time a query execution plan is retrieved from the procedure cache. For example, one execution of a query can result in the use of a serial plan, a later execution of the same query can result in a parallel plan using three threads, and a third execution can result in a parallel plan using four threads.

Use SQL Server Profiler to monitor the degree of parallelism for individual queries. The Event Sub Class column of the SQL Operators Event category indicates the degree of parallelism for each parallel query. For more information, see "SQL Operators Event Category" in Microsoft SQL Server Administrator's Companion.

The showplan output for every parallel query will have at least one of these three logical operators:

  • Distribute Streams 

  • Gather Streams 

  • Repartition Streams 

See Also 

In Other Volumes 

"Setting Configuration Options" in Microsoft SQL Server Administrator's Companion 

"System Stored Procedures" in Microsoft SQL Server Transact-SQL and Utilities Reference 

"sp_configure" in Microsoft SQL Server Transact-SQL and Utilities Reference 

Parallel Query Example

The following query counts the number of orders placed in a given quarter, starting on April 1, 1997, in which at least one line item of the order was received by the customer later than the committed date. This query lists the count of such orders grouped by each order priority and sorted in ascending priority order.

Note This example uses theoretical table and column names.

SELECT o_orderpriority, COUNT(*) AS Order_Count
FROM orders
WHERE o_orderdate >= '1997/04/01'
AND o_orderdate < DATEADD (mm, 3, '1997/04/01')
AND EXISTS
(
SELECT *
FROM lineitem
WHERE l_orderkey = o_orderkey
AND l_commitdate < l_receiptdate
)
GROUP BY o_orderpriority
ORDER BY o_orderpriority

Assume the following indexes are defined on the lineitem and orders tables:

CREATE INDEX l_order_dates_idx 
ON lineitem
(l_orderkey, l_receiptdate, l_commitdate, l_shipdate)

CREATE UNIQUE INDEX o_datkeyopr_idx
ON ORDERS
(o_orderdate, o_orderkey, o_custkey, o_orderpriority)

Here is one possible parallel plan generated for the query shown earlier:

|--Stream Aggregate(GROUP BY:([ORDERS].[o_orderpriority]) DEFINE:([Expr1005]=COUNT(*)))
     |--Parallelism(Gather Streams, ORDER BY:([ORDERS].[o_orderpriority] ASC))
          |--Stream Aggregate(GROUP BY:([ORDERS].[o_orderpriority]) DEFINE:([Expr1005]=Count(*)))
               |--Sort(ORDER BY:([ORDERS].[o_orderpriority] ASC))
                    |--Merge Join(Left Semi Join, MERGE:([ORDERS].[o_orderkey])=([LINEITEM].[l_orderkey]), RESIDUAL:([ORDERS].[o_orderkey]=[LINEITEM].[l_orderkey]))
                         |--Sort(ORDER BY:([ORDERS].[o_orderkey] ASC))
                         |    |--Parallelism(Repartition Streams, PARTITION COLUMNS:([ORDERS].[o_orderkey]))
                         |         |--Index Seek(OBJECT:([tpcd1G].[dbo].[ORDERS].[O_DATKEYOPR_IDX]), SEEK:([ORDERS].[o_orderdate] >= Apr 1 1997 12:00AM AND [ORDERS].[o_orderdate] < Jul 1 1997 12:00AM) ORDERED)
                         |--Parallelism(Repartition Streams, PARTITION COLUMNS:([LINEITEM].[l_orderkey]), ORDER BY:([LINEITEM].[l_orderkey] ASC))
                              |--Filter(WHERE:([LINEITEM].[l_commitdate]<[LINEITEM].[l_receiptdate]))
                                   |--Index Scan(OBJECT:([tpcd1G].[dbo].[LINEITEM].[L_ORDER_DATES_IDX]), ORDERED)

[Illustration: the parallel plan executed with a degree of parallelism of 4, showing the streams flowing between the operators]

The illustration shows an optimizer plan executed with degree of parallelism equal to 4 and involving a two-table join.

The parallel plan contains three Parallelism operators. Both the Index Seek operator on the o_datkeyopr_idx index and the Index Scan operator on the l_order_dates_idx index are performed in parallel, producing several exclusive streams. This can be determined from the nearest Parallelism operators above the Index Seek and Index Scan operators, respectively. Both are Repartition Streams exchanges; they merely reshuffle data among the streams, producing the same number of streams on their output as on their input. This number of streams is equal to the degree of parallelism.

The Parallelism operator above the l_order_dates_idx Index Scan operator is repartitioning its input streams using the value of L_ORDERKEY as a key so the same values of L_ORDERKEY end up in the same output stream. At the same time, output streams maintain the order on the L_ORDERKEY column to meet the input requirement of the Merge Join operator.

The Parallelism operator above the Index Seek operator is repartitioning its input streams using the value of O_ORDERKEY. Because its input is not sorted on the O_ORDERKEY column values and this is the join column in the Merge Join operator, the Sort operator between the Parallelism and Merge Join operators ensures the input is sorted for the Merge Join operator on the join columns. The Sort operator, like the Merge Join operator, is performed in parallel.

The topmost Parallelism operator gathers results from several streams into a single stream. Partial aggregations performed by the Stream Aggregate operator below the Parallelism operator are then accumulated into a single SUM value for each different value of the O_ORDERPRIORITY in the Stream Aggregate operator above the Parallelism operator.

See Also 

In Other Volumes 

"Logical and Physical Operators" in Microsoft SQL Server Diagnostics 

Memory Architecture

Microsoft SQL Server version 7.0 dynamically acquires and frees memory as needed. It is no longer necessary for an administrator to specify how much memory should be allocated to SQL Server, although the option still exists.

Modern operating systems such as Microsoft Windows NT and Microsoft Windows 95/98 support virtual memory. Virtual memory is a method of extending the available physical memory on a computer. In a virtual memory system, the operating system creates a pagefile, or swapfile, and divides memory into units called pages. Recently referenced pages are located in physical memory, or RAM. If a page of memory is not referenced for a while, it is written, or swapped out, to the pagefile. If that piece of memory is later referenced by an application, the operating system reads the memory page back from the pagefile into physical memory, also called swapping in memory. The total amount of memory available to applications is the amount of physical memory in the computer plus the size of the pagefile. If a computer has 256 MB of RAM and a 256 MB pagefile, the total memory available to applications is 512 MB.

One of the primary design goals of all database software is to minimize disk I/O because disk reads and writes are among the most resource-intensive operations that happen on a computer. SQL Server builds a buffer cache in memory to hold pages read from the database. Much of the code in SQL Server is dedicated to minimizing the number of physical reads and writes between the disk and the buffer cache. The larger the buffer cache is, the less I/O SQL Server has to do to the database files. However, if the buffer cache causes SQL Server's memory requirements to exceed the available physical memory on the server, then the operating system starts swapping memory to and from the pagefile. All that has happened is that the physical I/O to the database files has been traded for physical I/O to the swap file.
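The balance between buffer cache hits and physical reads can be observed per statement with SET STATISTICS IO. A minimal sketch, assuming the Northwind sample database:

SET STATISTICS IO ON
GO
SELECT COUNT(*) FROM Northwind.dbo.Orders
GO
-- The output reports logical reads (pages found in the buffer cache),
-- physical reads (pages read from the database files), and read-ahead reads.
SET STATISTICS IO OFF
GO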

A large amount of physical I/O to the database files is inherent in database software. By default, SQL Server tries to reach a balance between two goals:

  • Minimizing or eliminating pagefile I/O to concentrate I/O resources for reads and writes of the database files. 

  • Minimizing physical I/O to the database files by maximizing the size of the buffer cache. 

There are differences in the way Windows NT and Windows 95/98 report virtual memory usage to applications. Because of this, SQL Server 7.0 uses different algorithms to manage memory on the two operating systems.

Windows NT Virtual Memory 

Testing has shown that Windows NT has minimal memory swapping until the memory allocations equal the available physical memory minus 4 MB. When running on Windows NT, the default is for SQL Server to attempt to keep free physical memory on the computer at 5 MB, plus or minus 200 KB. This amount of free space keeps Windows NT from paging excessively, while at the same time allowing SQL Server to have the largest buffer cache possible that will not cause extra swapping.

As other applications are started on the computer running SQL Server, they consume memory and the amount of free physical memory drops below 5 MB. SQL Server then frees memory from its address space. If another application is stopped and more memory becomes available, SQL Server increases the size of its memory allocation.

If SQL Server is running on a computer where other applications are frequently stopped or started, the allocation and deallocation of memory by SQL Server may slow the startup times of other applications. Also, if SQL Server is one of several server applications running on a single computer, the system administrators may need to control the amount of memory allocated to SQL Server. In these cases, they can use the min server memory and max server memory options to control how much memory SQL Server can use. For more information, see "Server Memory Options" in Microsoft SQL Server Administrator's Companion.
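For example, this sketch constrains the size of the memory pool to between 64 MB and 128 MB. Both are advanced options, so show advanced options must be enabled first; sp_configure accepts any unique fragment of an option name:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'min server memory', 64    -- megabytes
EXEC sp_configure 'max server memory', 128   -- megabytes
RECONFIGURE
GO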

Windows 95/98 Virtual Memory 

When running on Windows 95/98, SQL Server 7.0 uses a demand-driven algorithm for allocating memory. As more Transact-SQL statements are processed and demand for cached database pages rises, SQL Server requests more virtual memory. When the demands on SQL Server go down, such as when fewer Transact-SQL statements are being processed, SQL Server frees memory back to the operating system.

SQL Server Memory Pool

In Microsoft Windows NT and Microsoft Windows 95/98, the total amount of virtual memory available to an application forms the set of valid memory addresses for the application. The total virtual memory allocation for an application is known as its address space.

The Microsoft SQL Server address space has two main components, each of which has several subcomponents:

  • Executable code 

    The number and size of the executable files and dynamic link libraries (DLLs) used by a SQL Server installation varies over time. In addition to the executable files and DLLs used by Open Data Services, the SQL Server engine, and server Net-Libraries, the following components load in their own DLLs, and these DLLs can allocate memory themselves:

    • Distributed queries can load an OLE DB Provider DLL on the server running SQL Server. 

    • Extended stored procedures are implemented as DLLs that are loaded into the SQL Server address space. 

    • The OLE Automation system stored procedures are used to create instances of OLE Automation objects. Each class of OLE Automation object loads its own code into the SQL Server address space. 

  • Memory pool 

    The memory pool is the main unit of memory for SQL Server. Almost all data structures that use memory in SQL Server are allocated in the memory pool. The main types of objects allocated in the memory pool are:

    • System-level data structures 

      These are data structures that hold data global to SQL Server, such as database descriptors and the lock table. 

    • Buffer cache 

      This is the pool of buffer pages into which data pages are read. 

    • Procedure cache 

      This is a pool of pages containing the execution plans for all currently executing Transact-SQL statements. 

    • Log caches 

      Each log has a cache of buffer pages used to read and write log pages. The log caches are managed separately from the buffer cache to reduce the synchronization between log and data buffers. This results in faster, more robust code in SQL Server version 7.0. 

    • Connection context 

      Each connection has a set of data structures that record the current state of the connection. These data structures hold items such as parameter values for queries and stored procedures, cursor positioning information, and tables currently being referenced. 

[Illustration: the components of the SQL Server address space, including the memory pool]

The size of the memory pool can be very dynamic, especially on computers running other applications. By default, SQL Server seeks to keep the amount of virtual memory allocations on the computer at 5 MB less than the physical memory. The only way SQL Server can do this is by varying the size of its address space. The only variable component in the SQL Server address space controlled by SQL Server is the memory pool. The other variable components in the SQL Server address space, such as the number and size of OLE DB providers, OLE Automation objects, and extended stored procedures, are all controlled by application requests. If an application executes a distributed query, SQL Server must load the associated OLE DB provider. This means that if a SQL Server component is loaded, or another application starts up, the only mechanism SQL Server can use to release the memory needed by the new component or application is to reduce the size of the memory pool. SQL Server administrators can set limits on how much the size of the buffer pool varies through the min server memory and max server memory configuration options.

The regions within the memory pool are also highly dynamic. The SQL Server code constantly adjusts the amounts of the memory pool assigned to the various areas to optimize performance. Within the memory pool, the areas used to store connection context and system data structures are controlled by user requests. As new connections are made, SQL Server has to allocate data structures to store their context. As new databases are defined, SQL Server has to allocate data structures to define the attributes of the database. As tables and views are referenced, SQL Server allocates data structures describing their structure and attributes. This leaves the buffer cache, procedure cache, and log caches as the memory units whose size is controlled by SQL Server. SQL Server adjusts the sizes of these areas dynamically as needed to optimize performance.

For more information about the sizes of the various system and connection context data structures, see "Memory Used by SQL Server Objects" in this volume. SQL Server 7.0 is very efficient in the way it stores the context and state information for each connection, typically using less than 24 KB for each connection.

Thread and Task Architecture

Complex applications may have many tasks that could be performed at the same time. Threads are an operating system feature that lets application logic be separated into several concurrent execution paths.

When an operating system executes an instance of an application, it creates a unit called a process to manage the instance. The process has a thread of execution, which is the series of programming instructions performed by the application code. In a simple application with a single set of instructions that can be performed serially, there is just one execution path, or thread, through the application. More complex applications may have several tasks that could be performed in tandem, not serially. The application could do this by starting separate processes for each task, but starting a process is a resource-intensive operation. Instead, an application can start separate threads, which are relatively resource-inexpensive. Each thread can be scheduled for execution independently from the other threads associated with a process. Each thread stores the data unique to it in an area of memory called a stack.

Threads allow complex applications to make more effective use of a CPU, even on computers with a single CPU. With one CPU, only one thread can execute at a time. If one thread executes a long-running operation that does not use the CPU, such as a disk read or write, another thread can execute until the first operation completes. Being able to execute threads while other threads wait for an operation to complete lets an application maximize its use of the CPU. This is especially true for multiuser, disk I/O-intensive applications such as a database server.

Computers with multiple CPUs can execute one thread per CPU at the same time. If a computer has eight CPUs, it can concurrently execute eight threads.

Windows NT Fibers 

The Microsoft Windows operating system code that manages threads resides in the kernel. Switching threads requires a transition between the user mode of the application code and the kernel mode of the thread manager, which is a moderately expensive operation. Microsoft Windows NT fibers are a subcomponent of threads managed by code running in user mode. Switching fibers does not require the user-mode to kernel-mode transition needed to switch threads. The application schedules its own fibers, while Windows schedules the threads. Each thread can have multiple fibers.

SQL Server Task Scheduling

Microsoft SQL Server uses Microsoft Windows threads, and sometimes fibers, to execute several concurrent tasks efficiently. SQL Server version 7.0 always runs several threads for system processes: one or more threads for each server Net-Library, a network thread to handle login requests, and a signal thread for communicating with the service control manager.

SQL Server has an internal layer that implements an environment similar to an operating system for scheduling and synchronizing concurrent tasks without having to call the Windows kernel. This internal layer can schedule fibers as effectively as it schedules threads. SQL Server maintains a pool of either threads or fibers for user connections. The maximum size of this pool is controlled by the max worker threads server configuration option.

The lightweight pooling server configuration option controls whether SQL Server uses threads or fibers. The default is for lightweight pooling to be set to 0, in which case SQL Server schedules a thread per concurrent user command, up to the value of max worker threads. If lightweight pooling is set to 1, SQL Server uses fibers instead of threads; this is called running in fiber mode. In fiber mode, SQL Server allocates one thread per CPU, and then allocates a fiber per concurrent user command, up to the max worker threads value. SQL Server uses the same algorithms to schedule and synchronize tasks whether it is using threads or fibers. SQL Server Desktop Edition does not support fibers.
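The following sketch shows how an administrator might switch a server into fiber mode; the values are illustrative, and lightweight pooling does not take effect until the MSSQLServer service is restarted.

    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'max worker threads', 255   -- the default value, shown for illustration
    EXEC sp_configure 'lightweight pooling', 1    -- 1 = fiber mode, 0 = thread mode
    RECONFIGURE
    -- lightweight pooling takes effect when the MSSQLServer service is restarted.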

A SQL Server batch is a set of one or more Transact-SQL statements sent from a client to SQL Server for execution as a unit. As SQL Server receives batches from clients, it associates each batch with an available free thread or fiber from the worker pool. If there are no free threads or fibers and the max worker threads value has not been reached, SQL Server allocates a new thread or fiber for the new batch. If there are no free threads or fibers available and the max worker threads value has already been reached, the new batch blocks until a thread is freed. After a thread or fiber is associated with a batch, it remains associated with the batch until the last of the result sets generated by the batch has been returned to the client. At that time, the thread or fiber is freed and can be scheduled to the next available batch.

While threads and fibers are lightweight in their use of resources, they still consume resources. In systems with hundreds or thousands of user connections, having one thread or fiber per connection could consume enough resources to reduce the efficiency of SQL Server. Allocating a thread or fiber for each user connection is also not necessary because most connections actually spend much of their time waiting for batches to be received from the client. The worker pool only needs to be large enough to service the number of user connections that are actively executing batches at the same time. Leaving max worker threads at its default value of 255 lets SQL Server effectively map user connections over a number of threads or fibers that do not consume too many resources.

Allocating Threads to a CPU

By default, Microsoft SQL Server simply starts each thread and Microsoft Windows NT assigns each thread to a specific CPU. Windows NT distributes SQL Server threads evenly among the CPUs on a computer. At times, Windows NT can also move a thread from one CPU with heavy usage to another CPU.

SQL Server administrators can use the affinity mask configuration option to exclude one or more CPUs from being eligible to run SQL Server threads. The affinity mask value specifies a bit pattern that indicates the CPUs that are used to run SQL Server threads. For example, the affinity mask value 13 represents the bit pattern 1101. On a computer with four CPUs, this indicates SQL Server threads can be scheduled on CPUs 0, 2, and 3, but not on CPU 1. If affinity mask is specified, SQL Server allocates threads evenly among the CPUs that have not been masked off. Another effect of affinity mask is that Windows NT does not move threads from one CPU to another. affinity mask is rarely used; most systems get optimal performance by letting Windows NT schedule the threads among the available CPUs.
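For example, to produce the bit pattern described in the preceding paragraph, an administrator might run the following; the value 13 is the illustrative mask from the example above.

    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    -- 13 decimal = 1101 binary: threads may run on CPUs 0, 2, and 3, but not CPU 1.
    EXEC sp_configure 'affinity mask', 13
    RECONFIGURE
    -- The new mask may not take effect until the MSSQLServer service is restarted.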

Using the lightweight pooling Option

The overhead of switching thread contexts is not great. Most systems will not see any performance difference between setting the lightweight pooling option to 0 or 1. The only servers likely to benefit from lightweight pooling are large multi-CPU servers on which:

  • All of the CPUs are running near maximum capacity. 

  • There is a high level of context switching. 

These systems may see a slight increase in performance by setting the lightweight pooling value to 1.

See Also 

In Other Volumes 

"lightweight pooling Option" in Microsoft SQL Server Administrator's Companion 

Thread and Fiber Execution

Microsoft Windows schedules threads for execution based on a numeric priority ranging from 1 through 31 (0 is reserved for operating system use). When several threads are waiting to execute, Windows dispatches the thread with the highest priority.

Microsoft SQL Server defaults to a priority of 7, which is called the normal priority. This gives SQL Server threads a high enough priority to obtain sufficient CPU resources without adversely affecting other applications. The priority boost configuration option can be used to increase the priority of SQL Server threads to 13, or high. This setting gives SQL Server threads a higher priority than most other applications, so SQL Server threads will tend to be dispatched whenever they are ready to run and will not be preempted by threads from other applications. This can improve performance when a server is running only SQL Server and no other applications. If a CPU-intensive operation occurs in SQL Server, however, other applications are not likely to have a high enough priority to preempt the SQL Server thread. The performance of other applications and components on the server can be degraded if priority boost is turned on, so it should be used only under tightly controlled conditions.
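As a sketch, priority boost can be enabled as shown below. It takes effect when the service is restarted and, as noted above, should be used only on a computer dedicated to SQL Server.

    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'priority boost', 1   -- run SQL Server threads at high priority (13)
    RECONFIGURE
    -- Takes effect after the MSSQLServer service is restarted.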

SQL Server does not preempt running threads; a thread that does not voluntarily yield simply keeps processing. The time slice configuration option is used to prevent a looping thread from processing forever. time slice defines how many milliseconds a thread can execute without yielding. If a thread reaches the time slice value, SQL Server assumes it is stuck in a loop and terminates the task. This is uncommon, and the default time slice setting rarely needs to be changed. Setting the value too low hurts performance by increasing the frequency of time checks, and it may also prematurely terminate tasks that are simply doing a lot of work.

Threads occasionally need to wait for another thread or threads to free a resource, but they should not be allowed to go into an indefinite wait. There are three server configuration options that control waits (see the example after this list):

  • spin counter 

    For some resources, SQL Server uses a spinlock to synchronize access. The spin counter option limits how many times a thread can spin on a lock before it temporarily sleeps, allowing other tasks to run. When the thread wakes up, it retries the spinlock. 

  • resource timeout 

    This limits how long a thread waits for an asynchronous I/O to complete on Microsoft Windows NT. 

  • query wait 

    Some Transact-SQL statements require large amounts of memory for operations, such as sorts. If there is not enough memory available, the thread waits for memory to be freed. The query wait option limits how long a thread can wait for memory. 
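Each of these options is set with sp_configure. The following is a minimal sketch with illustrative values only; the defaults are appropriate for almost all systems.

    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'spin counter', 10000    -- spins before a thread sleeps (illustrative)
    EXEC sp_configure 'resource timeout', 10   -- seconds to wait for an asynchronous I/O (illustrative)
    EXEC sp_configure 'query wait', 20         -- seconds a query waits for memory (illustrative)
    RECONFIGURE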

See Also 

In Other Volumes 

"spin counter Option" in Microsoft SQL Server Administrator's Companion 

"resource timeout Option" in Microsoft SQL Server Administrator's Companion 

"query wait Option" in Microsoft SQL Server Administrator's Companion 

I/O Architecture

The primary purpose of a database is to store and retrieve data, so performing a lot of disk reads and writes is one of the inherent attributes of a database server. Disk I/O operations consume many resources and take a relatively long time to complete. Much of the logic in relational database software concerns making the pattern of I/O usage highly efficient.

Microsoft SQL Server allocates much of its virtual memory to a buffer cache and uses the cache to reduce physical I/O. Data is read from the database disk files into the buffer cache. Multiple logical reads of the data can be satisfied without requiring that the data be physically read again. The data remains in the cache until it has not been referenced for some time and the database needs the buffer area to read in more data. Data is written back to disk only if it is modified. Data can be changed multiple times by logical writes before a physical write transfers the new data back to disk.

The data in a SQL Server version 7.0 database is stored in 8 KB pages. Each group of eight contiguous pages is a 64 KB extent. The buffer cache is also divided into 8 KB pages.

SQL Server I/O is divided into logical and physical I/O. A logical read occurs every time the database server requests a page from the buffer cache. If the page is not currently in the buffer cache, a physical read is then performed to read the page into the buffer cache. If the page is currently in the cache, no physical read is generated; the buffer cache simply uses the page already in memory. A logical write occurs when data is modified in a page in memory. A physical write occurs when the page is written to disk. It is possible for a page to remain in memory long enough to have more than one logical write made before it is physically written to disk.

One of the basic performance optimization tasks for a SQL Server installation involves sizing the SQL Server memory. The goal is to make the buffer cache large enough to maximize the ratio of logical reads to physical reads, but not so large that excessive memory swapping starts generating physical I/O to the pagefile. SQL Server 7.0 does this automatically when running with its default configuration settings.
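The effect of the buffer cache can be observed with the SET STATISTICS IO statement, which reports the logical and physical reads performed for each query. A minimal sketch, assuming the Northwind sample database:

    SET STATISTICS IO ON
    -- The first execution typically reports physical reads; repeating the query
    -- usually reports only logical reads, because the pages are now cached.
    SELECT COUNT(*) FROM Northwind.dbo.Orders
    SET STATISTICS IO OFF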

By maintaining a relatively large buffer cache in virtual memory, SQL Server can significantly reduce the number of physical disk reads it requires. After a frequently referenced page has been read into the buffer cache, it is likely to remain there, completely eliminating further reads.

SQL Server 7.0 uses two Microsoft Windows NT features to improve its disk I/O performance:

  • Scatter-gather I/O 

    Before scatter-gather I/O was introduced in Windows NT version 4.0 Service Pack 2, all of the data for a disk read or write on Windows NT had to be in a contiguous area of memory. If a read transferred 64 KB of data, the read request had to specify the address of a contiguous 64 KB area of memory. Scatter-gather I/O allows a read or write to transfer data into or out of discontiguous areas of memory. 

    If SQL Server 7.0 reads in a 64 KB extent, it does not have to allocate a single 64 KB area and then copy the individual pages to buffer cache pages. It can locate eight buffer pages, then do a single scatter-gather I/O specifying the address of the eight buffer pages. Windows NT places the eight pages directly into the buffer pages, eliminating the need for SQL Server to do a separate memory copy. 

  • Asynchronous I/O 

    In an asynchronous I/O, an application requests a read or write operation from Windows NT. Windows NT immediately returns control to the application. The application can then perform additional work, and later test to see if the read or write has completed. By contrast, in a synchronous I/O, the operating system does not return control to the application until the read or write completes. Using asynchronous I/O allows SQL Server to maximize the work done by individual threads while they are processing a batch. 

SQL Server supports multiple concurrent asynchronous I/O operations against each file. The maximum number of I/O operations for any file is controlled by the max async io configuration option. If max async io is left at its default of 32, then a maximum of 32 asynchronous I/O operations can be outstanding for each file at any point in time. The default setting should be sufficient for almost all environments.
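For example, on a server with a large striped disk set, an administrator might raise this limit. The value below is illustrative only, and the new setting may not take effect until the service is restarted.

    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'max async io', 64   -- outstanding asynchronous I/Os per file (illustrative)
    RECONFIGURE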

Reading Pages

The read requests generated by the system are controlled by the relational engine and further optimized by the storage engine. The access method used to read pages from a table determines the general pattern of reads that will be performed. The relational engine determines the most effective access method, such as a table scan, an index scan, or a keyed read. This request is then given to the storage engine, which optimizes the reads required to implement the access method. The reads are requested by the thread executing the batch.

Table scans benefit from the new data structures introduced in Microsoft SQL Server version 7.0. In earlier versions of SQL Server, data pages were in a doubly-linked chain that often had a somewhat random distribution through the database file. SQL Server had to read each page individually to get the pointer to the next page, resulting in a series of single, somewhat random reads. Read-ahead capabilities were limited. In a SQL Server 7.0 database, the IAM pages list the extents used by a table or index. The storage engine can read the IAM to build a serial list of the disk addresses that must be read. This allows SQL Server 7.0 to optimize its I/Os as large sequential reads in disk order. SQL Server 7.0 issues multiple serial read-ahead reads at once for each file involved in the scan. This takes advantage of striped disk sets.

SQL Server 7.0 also reads index pages serially in disk order, improving the performance of index scans. Index processing is further improved by the use of pre-fetch hints to allow serial read-ahead processing of a nonclustered index.

For example, this illustration shows a simplified representation of a set of leaf pages containing a set of keys and the intermediate index node mapping the leaf pages.

Cc917545.ioarch1(en-us,TechNet.10).gif

SQL Server 7.0 uses the information in the intermediate index page above the leaf level to schedule serial read-ahead I/Os for the pages containing the keys. If a request is made for all the keys from 'ABC' to 'DEF', SQL Server 7.0 reads the index page above the leaf pages, but it does not simply do a read for page 504, then a read for page 505, and so on until it finally reads page 556, the last one with keys in the desired range. Instead, the storage engine scans the intermediate index page and builds a list of the leaf pages that must be read. The storage engine then schedules all the I/Os in disk order. The storage engine also recognizes that pages 504/505 and 527/528 are contiguous and performs a single scatter-gather I/O to read each pair of adjacent pages in one operation. When there are many pages to be retrieved in a serial operation, SQL Server schedules a block of reads at a time. The number of reads scheduled is slightly larger than the value of the max async io configuration option. As a subset of these reads completes, SQL Server schedules an equal number of new reads until all the needed reads have been scheduled.

Freeing and Writing Buffer Pages

In Microsoft SQL Server version 7.0, one system is responsible for:

  • Writing modified buffer pages to disk. 

  • Marking as free those pages that have not been referenced for some time. 

SQL Server 7.0 has a singly-linked list containing the addresses of free buffer pages. Any thread needing a buffer page uses the first page in the free buffer list.

The buffer cache is an in-memory structure. Each buffer page has a header that contains a reference counter and an indicator of whether the page is dirty, which means the page contains modifications that have not yet been written to disk. The reference counter is incremented by 1 each time an SQL statement references the buffer page. The buffer cache is periodically scanned from the start to the end. Because the buffer cache is all in memory, these scans are very quick and require no I/O. During the scan, the reference counter in each buffer page header is divided by 4 and the remainder discarded. When the reference counter goes to 0, the dirty page indicator is checked. If the page is dirty, a write is scheduled to write the modifications to disk. SQL Server uses a write-ahead log, so the write of the dirty data page is blocked while the log page recording the modification is first written to disk. After the modified page has been flushed to disk, or if the page was not dirty to start with, the page is freed. The association between the buffer page and the data page it contains is removed and the buffer is placed on the free list.

Using this process, frequently referenced pages remain in memory while buffers holding pages that are not referenced eventually return to the free buffer list. The size of the free buffer list is determined internally by SQL Server, based on the size of the buffer cache. The size cannot be configured.

When SQL Server is running on Microsoft Windows NT, the work of scanning the buffer cache, writing dirty pages, and populating the free buffer list is mostly done by the individual worker threads. The worker threads perform their scans in the interval of time after they have scheduled an asynchronous read and before the read completes. A thread gets the address of the next section of the buffer pool that needs to be scanned from a central data structure, and then scans that section of the buffer pool while its read I/O processes asynchronously. If a write must be performed, it is also scheduled asynchronously and does not interfere with the thread's ability to process the completion of its own read.

There is a separate lazywriter thread that also scans through the buffer cache. The lazywriter process sleeps for an interval of time. When it is restarted, it checks the size of the free buffer list. If the free buffer list is below a certain point (dependent on the size of the cache) the lazywriter process scans the buffer cache to reclaim unused pages and write dirty pages that have a reference count of 0. On Windows NT, most of the work populating the free buffer list and writing dirty pages is done by the individual threads and the lazywriter thread typically finds little to do. Microsoft Windows 95/98 does not support asynchronous writes, so the lazywriter thread does the work of populating the free buffer list and writing dirty pages.

The checkpoint process also periodically scans the buffer cache for dirty pages and writes to disk any buffer page that remains marked as dirty through two checkpoints. The difference is that the checkpoint process does not place the buffer page back on the free list. The work of the checkpoint process is intended to minimize the number of dirty pages in memory to reduce the length of a recovery if the server fails, not to populate the free buffer list. Checkpoints typically find few dirty pages to write to disk because most dirty pages get written to disk by the worker threads or lazywriter thread in the period between two checkpoints.

Writes of log records are usually scheduled asynchronously by a logwriter thread. The exceptions are when:

  • A commit forces all pending log records for a transaction to disk. 

  • A checkpoint forces all pending log records for all transactions to disk. 

Distributed Query Architecture

Microsoft SQL Server version 7.0 supports two methods for referencing heterogeneous OLE DB data sources in Transact-SQL statements:

  • Linked server names 

    The system stored procedures sp_addlinkedserver and sp_addlinkedsrvlogin are used to give a server name to an OLE DB data source. Objects in these linked servers can be referenced in Transact-SQL statements using four-part names (a sketch of defining such a linked server appears after this list). For example, if a linked server name of DeptSQLSrvr is defined against another copy of SQL Server 7.0, the following statement references a table on that server: 

    SELECT * FROM DeptSQLSrvr.Northwind.dbo.Employees
    
    The linked server name can also be specified in an OPENQUERY function to open a rowset from the OLE DB data source. This rowset can then be referenced like a table in Transact-SQL statements. 
  • Ad hoc connector names

    For infrequent references to a data source, the OPENROWSET function is specified with the information needed to connect to the linked server. The rowset can then be referenced the same way a table is referenced in Transact-SQL statements: 

    SELECT *
    FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
        'c:\MSOffice\Access\Samples\Northwind.mdb';'Admin';'', Employees)
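The sketch below shows how the DeptSQLSrvr linked server used in the first example might be defined, and how the OPENQUERY function can pass a query through to it. The network name DeptServerName is a hypothetical placeholder.

    EXEC sp_addlinkedserver
        @server = 'DeptSQLSrvr',
        @srvproduct = '',
        @provider = 'SQLOLEDB',
        @datasrc = 'DeptServerName'   -- hypothetical network name of the remote server

    SELECT *
    FROM OPENQUERY(DeptSQLSrvr, 'SELECT LastName, FirstName FROM Northwind.dbo.Employees')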

SQL Server 7.0 uses OLE DB to communicate between the relational engine and the storage engine. The relational engine breaks down each Transact-SQL statement into a series of operations on simple OLE DB rowsets opened by the storage engine from the base tables. This means the relational engine can also open simple OLE DB rowsets on any OLE DB data source.

Cc917545.srvarch4(en-us,TechNet.10).gif 

The relational engine uses the OLE DB API to open the rowsets on linked servers, to fetch the rows, and to manage transactions.

For each OLE DB data source accessed as a linked server, an OLE DB provider must be present on the server running SQL Server. The set of Transact-SQL operations that can be used against a specific OLE DB data source depends on the capabilities of the OLE DB provider. For more information, see the "OLE DB Provider Reference for Distributed Queries" in Microsoft SQL Server Database Developer's Companion.

Whenever possible, SQL Server pushes relational operations such as joins, restrictions, projections, sorts, and group by operations to the OLE DB data source, rather than scanning the entire base table into SQL Server and performing the relational operations itself.

See Also 

In Other Volumes 

"Configuring Linked Servers" in Microsoft SQL Server Administrator's Companion 

"Distributed Queries" in Microsoft SQL Server Database Developer's Companion 

Full-text Query Architecture

The full-text query support introduced in Microsoft SQL Server version 7.0 supports sophisticated searches on character string columns.

This capability is implemented by the Microsoft Search service, which has two roles:

  • Indexing support 

    Implements the full-text catalogs and indexes defined for a database. Accepts definitions of full-text catalogs, and the tables and columns making up the indexes in each catalog. Implements requests to populate the full-text indexes. 

  • Querying support 

    Processes full-text search queries. Determines which entries in the index meet the full-text selection criteria. For each entry that meets the selection criteria, it returns the identity of the row plus a ranking value to the MSSQLServer service, where this information is used to construct the query result set. The types of queries supported include searching for:

    • Words or phrases. 

    • Words in close proximity to each other. 

    • Inflectional forms of verbs and nouns. 

The full-text engine runs as a service named Microsoft Search on Microsoft Windows NT Server. It is installed when the Full-Text Search feature is selected during custom installation. The Microsoft Search service itself is not installed during a Desktop SQL Server installation. While this means that the Microsoft Search service is not installed on either Microsoft Windows 95/98 or Windows NT Workstation clients, these clients can make use of the service when connected to a SQL Server Standard or Enterprise installation.

The full-text catalogs and indexes are not stored in a SQL Server database. They are stored in separate files managed by the Microsoft Search service. The full-text catalog files are not recovered during a SQL Server recovery. They also cannot be backed up and restored using the Transact-SQL BACKUP and RESTORE statements. The full-text catalogs must be resynchronized separately after a recovery or restore operation. The full-text catalog files are accessible only to the Microsoft Search service and the Windows NT system administrator.

Communications between SQL Server and the Microsoft Search service are made through a full-text provider.

The full-text catalogs, indexes, and searches supported by the Microsoft Search service apply only to tables in SQL Server databases. The Windows NT Indexing Service provides similar functionality against operating system files. Indexing Service includes an OLE DB Provider for Indexing Service for OLE DB consumers. SQL Server applications can access the OLE DB Provider for Indexing Service through distributed queries. Transact-SQL statements can combine full-text searches referencing SQL Server tables with textual searches of file data by using both the full-text SQL constructs with distributed query references to the OLE DB Provider for Indexing Service. For more information, see "Full-text Querying of File Data" in Microsoft SQL Server Database Developer's Companion.

See Also 

In This Volume 

Microsoft Search Service

Full-text Querying SQL Server Data

Full-text Catalogs and Indexes

In Other Volumes 

"Full-text Indexes" in Microsoft SQL Server Database Developer's Companion 

Full-text Indexing Support

This illustration shows the components that make up the full-text indexing support. These are the components involved in defining, creating, and populating full-text indexes.

Cc917545.ftart1(en-us,TechNet.10).gif

Enabling databases and tables for full-text indexing, and defining and populating the indexes, can be done using (see the example after this list):

  • SQL Server Enterprise Manager. 

    One of the nodes of a database tree in the SQL Server Enterprise Manager GUI is used to manage the full-text catalogs in the database. 

  • Applications using SQL Distributed Management Objects (SQL-DMO). 

    SQL-DMO has objects for managing full-text catalogs and indexes. 

  • Applications using Transact-SQL and a standard database API. 

    Transact-SQL has a set of system stored procedures for managing full-text catalogs and indexes. 
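For example, the Transact-SQL route uses the full-text system stored procedures. The following is a minimal sketch, assuming the Northwind sample database with the Microsoft Search service installed and running; the catalog name NorthwindCat is a hypothetical placeholder, and PK_Employees is assumed to be the name of the unique index on the Employees table.

    USE Northwind
    EXEC sp_fulltext_database 'enable'
    EXEC sp_fulltext_catalog 'NorthwindCat', 'create'
    -- The unique index PK_Employees identifies rows in the full-text index.
    EXEC sp_fulltext_table 'Employees', 'create', 'NorthwindCat', 'PK_Employees'
    EXEC sp_fulltext_column 'Employees', 'Notes', 'add'
    EXEC sp_fulltext_table 'Employees', 'activate'
    EXEC sp_fulltext_catalog 'NorthwindCat', 'start_full'   -- request a full population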

The other components are involved in defining and populating full-text indexes in this manner:

  1. A Microsoft SQL Server database is enabled for full-text indexing. 

  2. The full-text catalogs for the database are specified. 

  3. Individual tables are enabled for full-text indexing and associated with a catalog. 

  4. Individual columns in each table are added to the full-text index for the table. All the metadata information from steps 1 through 4 is stored in system tables in SQL Server databases. 

  5. The full-text indexes for each table are activated on a table-by-table basis. When a full-text table index is activated, a start seed value is sent from SQL Server to the indexing service within the Microsoft Search service. The start seed value identifies the table involved in the full-text index. 

  6. Population is requested on a catalog-by-catalog basis. Each catalog population request is sent to the indexing service.

    • The indexing service passes the appropriate start seed value to the SQL Server Handler. 

    • The SQL Server Handler is a driver containing logic to extract text data from the SQL Server columns involved in a full-text index. The Handler retrieves the data from SQL Server and passes it back to the index service. 

    • The indexing service then passes an index identifier and indexable strings to the index engine. The index engine eliminates noise-words such as a, and, or the. It also determines the word boundaries and builds a full-text index covering the words passed down from the indexing service. The full-text index is stored in the full-text catalog file. 

    • At the end of the population the indexing service calculates a new start seed value that records the point at which a subsequent incremental population should start. 

The population in Step 6 can take different forms.

  • Full population 

    Index entries are built for all the rows in all the tables covered by the full-text catalog. A full population typically occurs when a catalog is first populated. 

  • Incremental population 

    Only adjusts index entries for rows that have been added, deleted, or modified after the last population. This ability requires that the indexed table have a column of the timestamp data type. If the table does not have a timestamp column, only full populations can be performed. Requests for incremental populations on tables without timestamp columns result in a full population operation. 

    If a new full-text index is defined for a table that has not been associated with the catalog before, the next incremental population request for the catalog builds all the entries for the table. 

    Incremental population requests are implemented as full populations if any of the metadata for the table has changed since the last population. This includes altering any column, index, or full-text index definitions. 

Typically, a full-text catalog will be defined and initially have a full population executed. Incremental population operations are then run at intervals to pull across new data that has been added to the indexed columns in SQL Server. These periodic incremental population operations are usually defined as jobs and are dispatched automatically by the SQLServerAgent service.

See Also 

In This Volume 

Microsoft Search Service

Full-text Catalogs and Indexes

In Other Volumes 

"Full-text Indexes" in Microsoft SQL Server Database Developer's Companion 

Full-text Querying Support

When Microsoft SQL Server receives a Transact-SQL statement with a full-text construct, it retrieves the needed information from the Microsoft Search service using the full-text provider. Full-text constructs are the CONTAINS or FREETEXT predicates, or the CONTAINSTABLE or FREETEXTTABLE rowset functions. The full-text constructs can reference multiple columns in the full-text index if it is not known which column may contain the search conditions.
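For example, the following queries are minimal sketches, assuming the Notes column of the Employees table in the Northwind sample database has been full-text indexed as shown earlier:

    -- Rows whose Notes column contains the phrase "sales management".
    SELECT EmployeeID, LastName
    FROM Northwind.dbo.Employees
    WHERE CONTAINS(Notes, '"sales management"')

    -- Rows whose Notes column matches the meaning, not the exact wording, of the string.
    SELECT EmployeeID, LastName
    FROM Northwind.dbo.Employees
    WHERE FREETEXT(Notes, 'selling experience')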

The components involved in this process are:

Cc917545.ftart2(en-us,TechNet.10).gif

  1. An application sends SQL Server a Transact-SQL statement with a full-text construct. 

  2. The SQL Server relational engine validates the full-text construct by querying the system tables to determine if the column reference is covered by a full-text index. The relational engine reduces each SQL statement to a series of rowset operations, and uses OLE DB to pass the operations to underlying components, usually the storage engine. The relational engine transforms any full-text construct into a request for a rowset from the full-text provider instead of the storage engine. The rowset requested is the set of keys satisfying the search condition and a ranking indicating how well the data for each key met the search condition criteria. The command sent with the rowset request to the full-text provider includes the full-text search condition. 

  3. The full-text provider validates the request and changes the search conditions to a form used by the querying support component of the Microsoft Search service. The request is sent to the search service. 

  4. The querying support component uses the search engine component to extract the requested data from the full-text index. This data is then passed back to the full-text provider in the form of a rowset. 

  5. The full-text provider returns the rowset to the relational engine. 

  6. The relational engine combines all the rowsets it receives from the storage engine and the full-text provider to build the final result set it sends back to the client. 

See Also 

In This Volume 

Full-text Catalogs and Indexes

Microsoft Search Service

In Other Volumes 

"Full-text Indexes" in Microsoft SQL Server Database Developer's Companion 

"Full-text Querying SQL Server Data" in Microsoft SQL Server Database Developer's Companion 

Remote Stored Procedure Architecture

Remote stored procedures are a legacy feature of Microsoft SQL Server. Their functionality in Transact-SQL is limited to executing a stored procedure on a remote SQL Server installation. The distributed queries introduced in SQL Server version 7.0 support this ability along with the ability to access tables on linked, heterogeneous OLE DB data sources directly from local Transact-SQL statements. Instead of using a remote stored procedure call on SQL Server 7.0, use distributed queries and an EXECUTE statement to execute a stored procedure on a remote server.
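For example, instead of a remote stored procedure call, a stored procedure on the remote server can be executed through a linked server with a four-part name. This minimal sketch reuses the DeptSQLSrvr linked server defined earlier and assumes a stored procedure named CustOrderHist exists in the remote Northwind database:

    EXECUTE DeptSQLSrvr.Northwind.dbo.CustOrderHist @CustomerID = 'ALFKI'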

In addition to supporting remote stored procedure calls from existing applications, the OLE DB Provider for SQL Server and the SQL Server ODBC driver make use of a performance optimization based on remote stored procedures, and this shows up in SQL Server Profiler traces.

SQL Server Profiler traces events in an instance of SQL Server, such as receipt and return of Tabular Data Stream (TDS) packets sent between applications and the server. TDS is the application-level protocol defined for SQL Server client/server communications.

When an application sends a Transact-SQL batch for execution, a generic packet for executing SQL is used; this shows up in a SQL Server Profiler trace as SQL:BatchStarting and SQL:BatchCompleted events. When one instance of SQL Server sends a request for another instance of SQL Server to execute a remote stored procedure, a specialized RPC TDS packet is used. The RPC packet is tailored to the needs of transmitting requests to execute a stored procedure. The relational engine recognizes this specialized packet and implements a number of optimizations that speed the execution of the stored procedure. These show up in a SQL Server Profiler trace as RPC:Starting and RPC:Completed events.

The specialized RPC packet is not limited to use in server-to-server communications. The OLE DB Provider for SQL Server and the SQL Server ODBC driver take advantage of this specialized RPC packet to increase performance in two cases:

  • If an application uses the ODBC CALL syntax to execute a stored procedure. 

  • When the provider or driver themselves internally generate calls to system stored procedures. 

Users analyzing SQL Server Profiler traces can see these RPC entries from the provider and driver.

A server running SQL Server 7.0 can send and receive remote stored procedure calls to other version 7.0 servers. A server running SQL Server 7.0 can also send and receive remote stored procedure calls to servers running either version 6.0 or 6.5. A server running SQL Server 7.0 can receive remote stored procedure calls from a server running version 4.21a, but the server running version 7.0 cannot make remote stored procedure calls to the server running version 4.21a. The 4.21a server cannot recognize the version of the Tabular Data Stream (TDS) used by SQL Server 7.0.

See Also 

In Other Volumes 

"Configuring Remote Servers" in Microsoft SQL Server Administrator's Companion 

"Calling a Stored Procedure (OLE DB)" in Microsoft SQL Server Building Applications 

"TSQL Event Category" in Microsoft SQL Server Administrator's Companion 

"Calling a Stored Procedure" in Microsoft SQL Server Building Applications 

Transactions Architecture

Microsoft SQL Server maintains the consistency and integrity of each database despite errors that occur in the system. Every application that updates data in a SQL Server database does so using transactions. A transaction is a logical unit of work made up of a series of statements (selects, inserts, updates, or deletes). If no errors are encountered during a transaction, all of the modifications in the transaction become a permanent part of the database. If errors are encountered, none of the modifications are made to the database.

A transaction goes through several phases:

  • Before the transaction starts, the database is in a consistent state. 

  • The application signals the start of a transaction. This can be done explicitly with the BEGIN TRANSACTION statement. Alternatively, the application can set options so that a new transaction is started automatically by the first Transact-SQL statement executed after the completion of a prior transaction. No record is written to the log at this point; the first record is written to the log when the application generates the first log record for a data modification. 

  • The application starts modifying data. These modifications are made one table at a time. As a series of modifications are made, they may leave the database in a temporarily inconsistent intermediate state. 

  • When the application reaches a point where all the modifications have completed successfully and the database is once again consistent, the application commits the transaction. This makes all the modifications a permanent part of the database. 

  • If the application encounters some error that prevents it from completing the transaction, it undoes, or rolls back, all the data modifications. This returns the database to the point of consistency it was at before the transaction started. 

SQL Server applications can also run in autocommit mode. In autocommit mode each individual Transact-SQL statement is committed automatically if it is successful and rolled back automatically if it generates an error. There is no need for an application running in autocommit mode to issue statements that specifically start or end a transaction.

All Transact-SQL statements run in a transaction; either an explicit transaction or an autocommit transaction. All SQL Server transactions that include data modifications either reach a new point of consistency and are committed, or are rolled back to the original point of consistency. Transactions are not left in an intermediate state where the database is not consistent.
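A minimal sketch of an explicit transaction, assuming the Northwind sample database; the error check decides whether the modifications become a permanent part of the database:

    BEGIN TRANSACTION
    UPDATE Northwind.dbo.Products
        SET UnitPrice = UnitPrice * 1.10   -- illustrative modification
        WHERE CategoryID = 1
    IF @@ERROR <> 0
        ROLLBACK TRANSACTION   -- return the database to its prior consistent state
    ELSE
        COMMIT TRANSACTION     -- make the modifications permanent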

See Also 

In Other Volumes 

"Transactions" in Microsoft SQL Server Database Developer's Companion 

Transaction Recovery

The row-level locking introduced in Microsoft SQL Server version 7.0 required significant changes in the mechanisms used to roll back transactions.

Every SQL Server database has a transaction log that records data modifications made in the database. The log records the start and end of every transaction and associates each modification with a transaction. SQL Server stores enough information in the log to either redo (roll forward) or undo (roll back) the data modifications that make up a transaction. Each log record is identified by a unique log sequence number (LSN). All of the log records for a transaction are chained together.

SQL Server records many different types of information in the transaction log. SQL Server 7.0 primarily logs the logical operations performed. The operation is reapplied to roll forward a modification, and the opposite of the logical operation is performed to roll back a modification.

SQL Server controls when modifications are written from its data buffers to disk, and may cache modifications in buffers for a period of time to optimize disk writes. A buffer page that contains modifications that have not yet been written to disk is known as a dirty page, and writing a dirty buffer page to disk is called flushing the page. When modifications are cached, care must be taken to ensure that no data modification is flushed before the corresponding log image is written to the log file; otherwise, a modification could be created that could not be rolled back if necessary. To ensure that it can recover all modifications, SQL Server uses a write-ahead log, which means that all log images are written to disk before the corresponding data modifications.

A commit operation forces all log records for a transaction to the log file so that the transaction is fully recoverable even if the server is shut down. A commit operation does not have to force all the modified data pages to disk so long as all the log records are flushed to disk. A system recovery can roll the transaction forward or backward using only the log records.

Periodically, SQL Server ensures that all dirty log and data pages are flushed. This is called a checkpoint. Checkpoints reduce the time and resources needed to recover when a copy of SQL Server is restarted. In SQL Server 7.0, a checkpoint does not write out all dirty pages. Instead, a checkpoint:

  • Writes to disk all dirty data pages that were recorded as dirty at the last checkpoint and have not been flushed after that time. 

  • Writes to disk a list of all outstanding transactions. This list contains the LSN of the first log record for the transaction. 

  • Writes to disk all dirty log pages. 

  • Records a list of all remaining dirty data pages. The LSN of the first log image for any of these dirty pages is recorded in the database boot page. This represents the point in the log at which a system-wide recovery must start processing the log and it is called the Minimum Recovery LSN. 

  • Writes to the database boot page the LSN of the chain of log records that record the checkpoint. 

This logic reduces the overhead of a checkpoint in SQL Server 7.0 compared to earlier versions of SQL Server. Because most dirty data pages will be written to disk in the interval from one checkpoint to the next, most checkpoints will have few modified data pages to write. Because all the log images are written to the log, all the information needed to recover the modifications is recorded on disk at a checkpoint.

Rolling Back an Individual Transaction 

If any errors occur during a transaction, SQL Server uses the information in the log file to roll back the transaction. This rollback does not affect the work of any other users working in the database at the same time. Usually, the error is returned to the application, and if the error indicates a possible problem with the transaction, the application issues a ROLLBACK statement. Some errors, such as a 1205 deadlock error, roll back a transaction automatically. If anything stops the communication between the client and the server while a transaction is active, SQL Server rolls back the transaction automatically when notified of the stoppage by the network or operating system. This could happen if the client application terminates, if the client computer is shut down or restarted, or if the client network connection is broken. In all of these error conditions, any outstanding transaction is rolled back to protect the integrity of the database.

Recovery of All Outstanding Transactions at Start-up 

It is possible for a SQL Server installation to sometimes stop processing; for example, if an operator restarts the server while users are connected and working in databases. This can create two problems:

  • There may be an unknown number of SQL Server transactions partially completed at the time the server stopped. These incomplete transactions need to be rolled back. 

  • There may be an unknown number of data modifications recorded in the SQL Server log file, but the corresponding modified data pages were not flushed to the data files before the server stopped. Any committed modifications must be rolled forward. 

Whenever a SQL Server installation is started, it must find out if either of these conditions exist and address them. The following steps are taken in each SQL Server database:

  • The LSN of the last checkpoint is read from the database boot page, along with the Minimum Recovery LSN. 

  • The transaction log is scanned from the Minimum Recovery LSN to the end of the log. All committed dirty pages are rolled forward by redoing the logical operation recorded in the log record. 

  • SQL Server then scans backward through the log file rolling back all uncompleted transactions by applying the opposite of the logical operation recorded in the log records. 

This type of recovery is also done by a RESTORE statement. After restoring the copy of the database or log, the RESTORE statement also has to ensure that all log images recording dirty modifications are rolled forward and all uncompleted transactions are rolled back.

Concurrency Architecture

When many people attempt to modify data in a database at the same time, a system of controls must be implemented so that modifications made by one person do not adversely affect another. This is called concurrency control.

Concurrency control theory has two classifications for the methods of instituting concurrency control:

  • Pessimistic concurrency control 

    A system of locks prevents users from modifying data in a way that affects other users. After a user performs an action that causes a lock to be applied, other users cannot perform actions that would conflict with the lock until the owner releases it. This is called pessimistic control because it is mainly used in environments where there is high contention for data, and where the cost of protecting data with locks is less than the cost of rolling back transactions if concurrency conflicts occur. 

  • Optimistic concurrency control 

    In optimistic concurrency control, users do not lock data when they read it. When an update is performed, the system checks to see if the data was changed by another user after it was read. If the data was updated by another user, an error is raised. Typically, the user receiving the error rolls back the transaction and starts over. This is called optimistic because it is mainly used in environments where there is low contention for data, and where the cost of occasionally rolling back a transaction outweighs the costs of locking data when it is read. 

Microsoft SQL Server supports a wide range of optimistic and pessimistic concurrency control mechanisms. Users specify the type of concurrency control by specifying:

  • A transaction isolation level for a connection. 

  • Concurrency options on cursors. 

These attributes can be defined using either Transact-SQL statements or the properties and attributes of database APIs such as ADO, OLE DB, and ODBC.
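For example, a connection can request pessimistic, serializable behavior with a Transact-SQL statement. A minimal sketch, assuming the Northwind sample database:

    -- Share locks acquired by reads in this transaction are held until it ends.
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    BEGIN TRANSACTION
    SELECT COUNT(*) FROM Northwind.dbo.Orders
    COMMIT TRANSACTION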

See Also 

In Other Volumes 

"Four Concurrency Problems" in Microsoft SQL Server Database Developer's Companion 

"Cursor Concurrency" in Microsoft SQL Server Database Developer's Companion 

Locking Architecture

A lock is an object used by software to indicate that a user has some dependency on a resource. The software does not allow other users to perform operations on the resource that would adversely affect the dependencies of the user owning the lock. Locks are managed internally by system software and are acquired and released based on actions taken by the user.

Microsoft SQL Server uses locks to implement pessimistic concurrency control among multiple users performing modifications in a database at the same time. By default, SQL Server manages both transactions and locks on a per connection basis. For example, if an application opens two SQL Server connections, locks acquired by one connection cannot be shared with the other connection. Neither connection can acquire locks that would conflict with locks held by the other connection. Only bound connections are not affected by this rule. For more information, see "Using Bound Connections" in Microsoft SQL Server Database Developer's Companion.

SQL Server locks are applied at various levels of granularity in the database. Locks can be acquired on rows, pages, keys, ranges of keys, indexes, tables, or databases. SQL Server dynamically determines the appropriate level at which to place locks for each Transact-SQL statement. The level at which locks are acquired can vary for different objects referenced by the same query; for example one table may be very small and have a table lock applied, while another, larger table may have row locks applied. The level at which locks are applied does not have to be specified by users and needs no configuration by administrators. SQL Server ensures that locks granted at one level of granularity respect locks granted at another level. For example, if UserA attempts to acquire a share lock on a row, SQL Server also attempts to acquire intent share locks on the page and the table. If UserB has an exclusive lock at the page or table level, UserA is blocked from acquiring locks until the lock held by UserB is freed.

There are several lock modes: shared, update, exclusive, intent, and schema. The lock mode indicates the level of dependency the connection has on the locked object. SQL Server controls how the lock modes interact. For example, an exclusive lock cannot be obtained if other connections hold shared locks on the resource.

Locks are held for the length of time needed to protect the resource at the level requested:

  • The duration of share locks used to protect reads depends on the transaction isolation level. At the default transaction isolation level of READ COMMITTED, a share lock is held only as long as it takes to read a page. During a scan, the lock is held until a lock is acquired on the next page in the scan. If the HOLDLOCK hint is specified, or if the transaction isolation level is set to either REPEATABLE READ or SERIALIZABLE, the locks are held to the end of the transaction. 

  • Depending on the concurrency options set for a cursor, the cursor may acquire shared-mode scroll locks to protect fetches. When scroll locks are needed, they are held until the next fetch or the closing of the cursor, whichever happens first. If HOLDLOCK is specified, however, the scroll locks are held until the end of the transaction. 

  • Exclusive locks used to protect updates are held until the end of the transaction. 

If a connection attempts to acquire a lock that conflicts with a lock held by another connection, the connection attempting to acquire the lock is blocked until:

  • The conflicting lock is freed and the connection acquires the lock it requested. 

  • The time-out interval for the connection expires. By default, there is no time-out interval, but some applications set a time-out interval to prevent an indefinite wait. 

If several connections become blocked waiting for conflicting locks on a single resource, the locks are granted on a first-come, first-served basis as the preceding connections free their locks.

SQL Server has an algorithm to detect deadlocks, a condition in which two connections have blocked each other. If SQL Server detects a deadlock, it terminates one transaction, allowing the other to continue. For more information, see "Deadlocking" in Microsoft SQL Server Database Developer's Companion.

SQL Server may dynamically escalate or deescalate the granularity or type of locks. For example, if an update acquires a large number of row locks and has locked a significant percentage of a table, the row locks are escalated to a table lock. If a table lock is acquired, the row locks are released. SQL Server 7.0 rarely needs to escalate locks; the query optimizer usually chooses the correct lock granularity at the time the execution plan is compiled. For more information, see "Lock Escalation" and "Dynamic Locking" in Microsoft SQL Server Database Developer's Companion.

See Also 

In Other Volumes 

"Locking" in Microsoft SQL Server Database Developer's Companion 

"Cursor Locking" in Microsoft SQL Server Database Developer's Companion 

Latching

Latches are very lightweight, short-term synchronization objects protecting actions that need not be locked for the life of a transaction. They are primarily used to protect a row while it is being read for a connection.

When the relational engine is processing a query, each time a row is needed from a base table or index, the relational engine uses the OLE DB API to request that the storage engine return the row. While the storage engine is actively transferring the row to the relational engine, the storage engine must ensure that no other task modifies either the contents of the row or certain page structures such as the page offset table entry locating the row being read. The storage engine does this by acquiring a latch, transferring the row in memory to the relational engine, and then releasing the latch.

SQL Server Performance Monitor has a Latches object that indicates how many times latches could not be granted immediately and the amount of time threads spent waiting for latches to be granted.

See Also 

In Other Volumes 

"SQL Server: Latches Object" in Microsoft SQL Server Administrator's Companion 

Distributed Transactions Architecture

Distributed transactions are transactions that involve resources from two or more sources. Microsoft SQL Server supports distributed transactions, allowing users to create transactions that update multiple SQL Server databases and other sources of data.

A distributed transaction involves:

  • Resource managers 

    The software controlling each resource involved in a distributed transaction is known as a resource manager. A distributed transaction is made up of local transactions in each individual resource manager. Each resource manager must be able to commit or roll back its local transaction in coordination with all the other resource managers in the distributed transaction. SQL Server can operate as a resource manager in a distributed transaction that complies with the X/Open XA specification for Distributed Transaction Processing. 

  • Transaction manager 

    Committing or rolling back a distributed transaction is controlled by a software component called a transaction manager. The transaction manager coordinates with each resource manager to ensure that all the local transactions making up the distributed transaction are committed or rolled back together. The Microsoft Distributed Transaction Coordinator (MS DTC) service operates as a transaction manager. MS DTC complies with the X/Open XA specification for Distributed Transaction Processing. 

  • Two-phase commit (2PC) 

    Special commit processing is required to prevent problems in managing transactions that span multiple resource managers. A commit of a large transaction can take a relatively long time as log buffers are flushed and freed. The commit process itself can also encounter errors that would force a rollback. If a transaction manager simply asked each resource manager to commit, it could get a success status back from some resource managers and then an error from another. This creates a conflict because the entire distributed transaction should be rolled back, but parts of it are already committed. Two-phase commits address this problem by dividing a commit into two phases:

    • Prepare 

      The transaction manager sends a prepare to commit request to each resource manager. Each resource manager then performs all resource-intensive actions needed to complete the commit process, such as flushing all log buffers. The resource manager only retains the minimum locks needed to maintain the integrity of the transaction, and then returns success to the transaction manager. 

    • Commit 

      If all the resource managers return success to their prepare requests, the transaction manager then sends commit commands to each resource manager. Each resource manager then quickly records the transaction as completed and frees the last held resources. If any resource manager returns an error to the prepare request, the transaction manager then sends rollback commands to each resource manager. 

There are several ways applications can include SQL Server in a distributed transaction:

  • If an application has a local transaction and issues a distributed query, the local transaction is escalated to a distributed transaction. 

  • An application can explicitly start a distributed transaction by issuing a BEGIN DISTRIBUTED TRANSACTION statement (see the sketch after this list). 

  • If an application has a local transaction and the option REMOTE_PROC_TRANSACTIONS is set ON, calling a remote stored procedure escalates the local transaction to a distributed transaction. 

  • Applications using the OLE DB Provider for SQL Server or the SQL Server ODBC driver can use OLE DB methods or ODBC functions to have a SQL Server connection join a distributed transaction started by the application. 
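
For example, the explicit approach can be taken entirely in Transact-SQL. The following is a minimal sketch, assuming a hypothetical linked server named accounting and hypothetical database, table, and column names:

  -- Update a local table and a table on a linked server inside one
  -- distributed transaction. MS DTC coordinates the two-phase commit.
  BEGIN DISTRIBUTED TRANSACTION
  UPDATE sales.dbo.receivables
      SET balance = balance - 100 WHERE cust_id = 1
  UPDATE accounting.finance.dbo.ledger
      SET balance = balance + 100 WHERE cust_id = 1
  COMMIT TRANSACTION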

See Also 

In This Volume 

MS DTC Service

In Other Volumes 

"Distributed Transactions" in Microsoft SQL Server Database Developer's Companion 

Cursor Architecture

All SQL statements operate on a set of rows. A SELECT statement returns a complete result set containing all the rows that meet the qualifications in the SELECT statement. Applications, however, often need to process the result set one row or one block of rows at a time. Cursors are a logical extension to result sets that let applications work with the result set row by row.

Microsoft SQL Server supports several mechanisms for specifying cursors:

  • The Microsoft OLE DB Provider for SQL Server supports the cursor functionality of the ADO and OLE DB APIs. 

  • The Microsoft SQL Server ODBC driver supports the cursor functionality of the ODBC, RDO, DAO, and Microsoft Foundation Classes Database Classes APIs. 

  • Microsoft Embedded SQL for C supports the cursor functionality of the Embedded SQL standard. 

  • The DB-Library API supports the same level of cursor functionality as the OLE DB and ODBC APIs. 
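
In addition to these APIs, Transact-SQL itself supports server cursors through the DECLARE CURSOR, OPEN, FETCH, CLOSE, and DEALLOCATE statements. The following minimal sketch assumes a hypothetical customers table:

  -- Walk a result set one row at a time with a Transact-SQL server
  -- cursor. The table and column names are hypothetical.
  DECLARE customer_cursor CURSOR FOR
      SELECT cust_name FROM customers
  OPEN customer_cursor
  FETCH NEXT FROM customer_cursor
  WHILE @@FETCH_STATUS = 0
  BEGIN
      -- Process the current row here, then fetch the next one.
      FETCH NEXT FROM customer_cursor
  END
  CLOSE customer_cursor
  DEALLOCATE customer_cursor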

See Also 

In Other Volumes 

"Cursors" in Microsoft SQL Server Database Developer's Companion 

Server Scalability

Microsoft SQL Server version 7.0 extends the scalability of SQL Server at both ends of the performance spectrum. SQL Server 7.0 adds enhanced support for desktop users by running on Microsoft Windows 95/98. Many of the architectural changes made to the database engine and the data structures in SQL Server 7.0 databases increase the speed with which very large databases (VLDBs) are processed. These enhancements make the same SQL Server database engine a good choice for managing databases ranging from a small, personal database on a Windows 95/98 laptop to terabyte databases accessed by thousands of Internet users.

SQL Server 7.0 Databases on the Desktop

The same Microsoft SQL Server version 7.0 database engine that supports thousands of concurrent users can also be installed on desktop computers running either Microsoft Windows 95/98 or Microsoft Windows NT Workstation. Any user who has a per-seat Client Access License for the Small Business Server, Standard, or Enterprise editions of SQL Server can install a Desktop edition server.

The SQL Server Desktop edition provides desktop users with essentially the same functionality and features as the large servers, except that full-text indexes, the publication of transactional replications, and Microsoft SQL Server OLAP Services are not supported. For more information about the capabilities of the Desktop edition, see "SQL Server Editions" in this volume.

At the same time, the ease-of-use features of the database engine allow it to run in a laptop or desktop environment with minimal configuration tuning from the user.

The programming model is the same for the Desktop edition as it is for SQL Server Small Business Server, Standard, or Enterprise editions. Applications written for any edition of SQL Server run unchanged against the others, except for the features listed above as not supported in the Desktop edition.

The SQL Server Desktop edition offers database services to individual users. The database engine includes optimizations that maximize performance on small systems with small amounts of memory:

  • The internal data structures of the database, such as mixed extents, significantly reduce the size of small databases, or databases with many small tables. 

  • When running at its default configuration settings, SQL Server configures itself dynamically to the current resource usage on the computer without the need for tuning commands from the user. 

  • Many configuration options from SQL Server 6.5 have been replaced with internal logic in the database engine that configures these options automatically based on load. 

  • It is no longer necessary to update distribution statistics manually; these are updated automatically. 

  • Database files grow or shrink automatically depending on the amount of data, as the sketch after this list shows. 
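
For example, automatic file growth can be specified (or simply left at its defaults) when a database is created. This is a minimal sketch; the database name and file paths are hypothetical:

  -- Create a database whose data file grows 10 percent at a time and
  -- whose log file grows 1 MB at a time, with no later tuning needed.
  CREATE DATABASE sales
  ON ( NAME = sales_dat,
       FILENAME = 'c:\mssql7\data\sales.mdf',
       SIZE = 10MB,
       FILEGROWTH = 10% )
  LOG ON ( NAME = sales_log,
       FILENAME = 'c:\mssql7\data\sales.ldf',
       SIZE = 3MB,
       FILEGROWTH = 1MB )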

SQL Server 7.0 replication and the ability of the database engine to attach and detach databases offer a good solution for mobile and disconnected users with laptops. These users can periodically connect to a regional or departmental server to resynchronize their database information with the main database through replication. Alternatively, a database can be placed on a compact disc and sent to remote users, who can simply attach it to their server to get the latest information.
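
As a minimal sketch of the attach and detach mechanism (the database name and file paths are hypothetical):

  -- On the source server, detach the database so its files can be
  -- copied to a compact disc or a laptop.
  EXEC sp_detach_db @dbname = 'field_sales'

  -- On the destination server, attach the copied files.
  EXEC sp_attach_db @dbname = 'field_sales',
      @filename1 = 'c:\mssql7\data\field_sales.mdf',
      @filename2 = 'c:\mssql7\data\field_sales.ldf'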

SQL Server 7.0 on Windows 95/98

Microsoft SQL Server version 7.0 includes two main types of software: client software and server software. All users covered by either a per-seat or per-server SQL Server Client Access License can install the SQL Server client software on a Microsoft Windows 95/98 computer. Any user covered by a per-seat SQL Server Client Access License can also install the SQL Server Desktop edition server components on Windows 95/98.

SQL Server Client Software on Windows 95/98 

SQL Server 7.0 client software consists of:

  • Utilities for managing SQL Server and performing ad hoc queries of SQL Server databases. 

  • Connectivity components such as the OLE DB Provider for SQL Server, the SQL Server ODBC driver, and the client Net-Libraries. These are used by any application that connects to a server running SQL Server. 

The SQL Server 7.0 client software runs the same on a Windows 95/98 computer as it does on a Microsoft Windows NT computer, with the following exceptions:

  • The Windows 95/98 network redirector does not provide computer browser support. SQL Server dialog boxes that depend on this feature to get a list of servers do not display a server list on Windows 95/98. This includes the Register Server dialog box, the Register Server Wizard, and the Query Analyzer Login dialog box. 

  • The SQL Server tools that poll for the state of a server (SQL Server Enterprise Manager, SQL Server Agent) must do so actively, using a service state polling interval defined by the user. 

SQL Server Desktop Edition on Windows 95/98 

The SQL Server Desktop edition is the only edition whose server components can be installed on Windows 95/98.

When the SQL Server Desktop edition is running on Windows NT Workstation, Windows NT Server, or Windows NT Server, Enterprise Edition, it supports its full set of features. For more information about the list of features, see "SQL Server Editions" in this volume. When the Desktop edition is running on Windows 95/98 computers, the following features are also not available:

  • The Named Pipes and Banyan Vines server Net-Libraries cannot be installed on Windows 95/98. A Desktop edition database on a Windows 95/98 computer cannot accept connections using these protocols. While Windows 95/98 does not support these server Net-Libraries, it does support the client Net-Libraries. SQL Server clients running on Windows 95/98 computers can connect to SQL Server databases on Windows NT computers using these protocols. 

  • Neither the client nor server AppleTalk Net-Libraries are supported on Windows 95/98. 

  • Windows 95/98 does not support the server functions of the API used for Windows NT Authentication. Clients cannot connect to a Desktop edition database on a Windows 95/98 computer using Windows NT Authentication. Windows 95/98 does support the client functions of the API for Windows NT Authentication, so SQL Server clients running on Windows 95/98 computers can connect to SQL Server databases on Windows NT computers using Windows NT Authentication. 

  • The server side of using encryption with the Multiprotocol Net-Library is not supported on Windows 95/98. Clients cannot connect to a Desktop edition database on a Windows 95/98 computer using encryption. Windows 95/98 does support the client functions for encryption, so SQL Server clients running on Windows 95/98 computers can connect to SQL Server databases on Windows NT computers using encryption. 

  • Windows 95/98 does not support asynchronous I/O or scatter-gather I/O. Because of this, the database engine cannot use some of the I/O optimizations it uses on Windows NT to maximize throughput with many concurrent users. 

  • On Windows 95/98, SQL Server manages its memory requests based on the amount of database work being done instead of maintaining virtual memory at a point that minimizes swapping as it does on Windows NT. For more information, see "Memory Architecture" in this volume.

  • Windows 95/98 does not have a component that corresponds to Windows NT services. The SQL Server database engine and SQL Server Agent run as executable programs on Windows 95/98. These SQL Server components cannot be started as services automatically. They can be started by placing a command prompt command in the Windows 95/98 startup group, but they then run in a separate MS-DOS window. 

    SQL Server Service Manager is installed in the Windows 95/98 startup group and operates with the same user interface as it does on Windows NT. 

  • Windows 95/98 does not have event logs. SQL Server uses a SQL Server Profiler-based mechanism to launch alerts on Windows 95/98. 

  • SQL Server Performance Monitor is not available on Windows 95/98 computers. Performance Monitor counters cannot be implemented for Desktop edition installations running on Windows 95/98, and Windows 95/98 clients cannot monitor the performance counters of a SQL Server installation running on Windows NT. 

  • The SQL Server Version Upgrade utility does not run on Windows 95/98 and only supports export servers running on Windows NT. 

SQL Server 7.0 on Large Servers

One of the primary design goals for Microsoft SQL Server version 7.0 was to increase its ability to support large databases with many concurrent users that have to run in production 24 hours a day, 7 days a week. While earlier versions of SQL Server did well at supporting large numbers of concurrent users, the length of time it took to run utility, backup, and restore operations limited the size of a manageable SQL Server database to around 200 to 300 MB.

SQL Server 7.0 Enterprise Edition is capable of handling terabyte-sized databases with thousands of concurrent users. Some of the features that allow this are:

  • The on-disk data structures in the database have been redesigned. The new data structures and data organization present more opportunities for parallel processing and serial, read-ahead scans. Table scans and index scans can now be performed as serial, read-ahead scans, which is especially useful in the online analytical processing (OLAP) workloads that characterize data warehouses. For more information, see "I/O Architecture" in this volume. 

  • The database page size increased from 2 KB to 8 KB, and the size of database extents increased from 16 KB to 64 KB. This results in larger, more efficient I/O transfers. 

  • SQL Server 7.0 now natively supports the prepare/execute model of executing SQL statements. It also has logic to share query execution plans between connections without requiring an application to prepare the statement. These features reduce the overhead associated with compiling and executing statements (a sketch using sp_executesql follows this list). For more information, see "Execution Plan Caching and Reuse" in this volume.

  • New hash and merge join types offer improved join performance. For more information, see "Advanced Query Tuning Concepts" in Microsoft SQL Server Diagnostics.

  • SQL Server 7.0 supports intra-query parallelism on servers that have more than one CPU. Individual SQL statements can be split into two or more tasks that operate concurrently to return the results faster. For more information, see "Parallel Query Processing" in this volume.

  • SQL Server 7.0 evaluates an SQL statement and dynamically chooses the locking granularity (row, page, table) that will maximize concurrent throughput. For more information, see "Locking Architecture" in this volume.

  • SQL Server 7.0 uses Microsoft Windows NT asynchronous I/O and new scatter-gather I/O along with new buffer cache management algorithms to maximize OLTP performance. For more information, see "I/O Architecture" in this volume.

  • The speed of the BACKUP and RESTORE statements has been significantly improved. BACKUP now performs a serial scan without the random reads required in earlier versions of SQL Server. BACKUP and RESTORE can be run during production work because they do not interfere with database activity. BACKUP and RESTORE use parallel I/O when a backup is stored on multiple backup devices. New BACKUP options, such as differential backups and backing up only files or filegroups, reduce the size of backups and their effect on the system (a differential backup is sketched after this list). For more information, see "Backup/Restore Architecture" in this volume.

  • The SQL Server 7.0 on-disk data structures are much simpler than in earlier versions, which makes them more robust and less prone to problems. The database engine is also coded to detect errors at relatively early points in processing and to terminate a task before it causes problems in the database itself (fail-fast logic). Together, these changes result in fewer problems with on-disk structures and reduce or eliminate the need to run database integrity checks. 

  • The algorithms in the database integrity check statements are much faster in SQL Server 7.0 than in earlier versions. The integrity check statements now make a single serial scan of the database and check objects in parallel during the scan of the database. For more information, see "Data Integrity Validation" in this volume.

  • The SQL Server 7.0 bulk copy components now transfer data at increased speeds. The bcp bulk copy utility can now copy data from multiple clients into the same table in parallel. For more information, see "Parallel Data Load Using bcp" in Microsoft SQL Server Administrator's Companion.

  • SQL Server 7.0 now supports bulk loads performed directly on the server, without transferring the data through a client. This is done using the new BULK INSERT statement and is the fastest way to get large amounts of data into a table (sketched after this list). For more information, see "BULK INSERT" in Microsoft SQL Server Transact-SQL and Utilities Reference.

  • Distribution statistics indicate the selectivity of index keys and are used by the query optimizer to choose the most efficient index when compiling a query. If the statistics are out of date, the optimizer may not generate an optimal execution plan. SQL Server 7.0 can be set up to generate distribution statistics automatically, which improves the effectiveness of the query optimizer. The sampling processes that generate the statistics have also been improved; they can now generate reliable statistics after scanning less data than earlier versions of SQL Server. For more information, see "Statistical Information" in Microsoft SQL Server Database Developer's Companion.

  • SQL Server includes failover cluster support. Two Windows NT servers can have copies of SQL Server and both access a set of shared disks holding SQL Server databases. One operates as the primary server and the other as the secondary. If the primary server fails, the secondary server comes online, recovers any uncompleted transactions recorded in the database logs, and begins operating in place of the lost primary server. For more information, see "Configuring SQL Server Failover Support" in Microsoft SQL Server Administrator's Companion.

  • SQL Server Enterprise Edition running on Windows NT Server, Enterprise Edition offers two significant performance benefits:

    • It can scale effectively on up to 32 processors. 

    • When running on a Digital Alpha server, it can use 64-bit memory addressing. 
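
As an illustration of the plan-reuse point made earlier in this list, sp_executesql runs a parameterized statement so that executions differing only in parameter values can share one cached plan. The table and column names below are hypothetical:

  -- Repeated calls that change only @level can reuse the same
  -- execution plan instead of compiling the statement each time.
  EXECUTE sp_executesql
      N'SELECT ord_id, amount FROM sales.dbo.orders WHERE priority = @level',
      N'@level int',
      @level = 1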
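
The bulk load and backup improvements can likewise be sketched in a few statements; the table name, file paths, and backup location here are hypothetical:

  -- Load a data file directly on the server, without passing the
  -- data through a client.
  BULK INSERT sales.dbo.orders
      FROM 'c:\data\orders.txt'
      WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

  -- Back up only the data changed since the last full database
  -- backup, reducing the size of the backup.
  BACKUP DATABASE sales
      TO DISK = 'd:\backups\sales_diff.bak'
      WITH DIFFERENTIAL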

SQL Server and Mail Integration

Microsoft SQL Server provides a set of extended stored procedures that allow SQL Server to send and receive e-mail messages through a MAPI-enabled e-mail system.

Cc917545.mailarc1(en-us,TechNet.10).gif 

The computer running SQL Server must be set up as an e-mail client. SQL Server Enterprise Manager is used to assign an e-mail account and password to the SQL Server installation. The mail component of SQL Server can then be enabled to start automatically when the SQLServerAgent service is started. Alternatively, the mail component can be started and stopped at will using either SQL Server Enterprise Manager or the xp_startmail and xp_stopmail extended stored procedures.
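
For example, once the mail client is configured, a session can be managed and a message sent entirely from Transact-SQL. This is a minimal sketch; the recipient name and the query are hypothetical:

  -- Start a mail session, mail out a query result set, and stop the
  -- session. The recipient and query are hypothetical.
  EXEC master.dbo.xp_startmail
  EXEC master.dbo.xp_sendmail
      @recipients = 'dba_group',
      @query = 'SELECT COUNT(*) FROM sales.dbo.orders',
      @subject = 'Nightly order count'
  EXEC master.dbo.xp_stopmail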

When the mail component of SQL Server is running, it can be used to:

  • Send e-mail messages, including query result sets, using xp_sendmail. 

  • Read e-mail using sp_processmail, or a combination of xp_findnextmessage, xp_readmail, and xp_deletemail. The messages sent to SQL Server typically contain a Transact-SQL statement or batch to be executed. The statement is executed and the result set is returned as a reply e-mail with an optional CC: list (see the sketch after this list). 
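
As a minimal sketch of the second capability, sp_processmail can be run (or scheduled) to answer query messages waiting in the inbox; the subject filter and database name are hypothetical:

  -- Execute each waiting message whose subject is 'SQL Query' in the
  -- pubs database and mail the result set back to the sender.
  EXEC sp_processmail @subject = 'SQL Query', @dbuse = 'pubs'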

SQL Server events and alerts can be combined with SQL Mail functionality to build a system in which a server running SQL Server can e-mail or page the relevant administrators automatically if serious conditions arise.