Chapter 8 - Monitoring Server Performance and Activity

Microsoft SQL Server version 7.0 provides a variety of tools that can be used to monitor the performance of a computer running SQL Server and the user activity that occurs in its databases. Monitoring is an important task that allows you to determine whether your database application is working efficiently and as expected. As your application, database, and environment change, regular monitoring allows you to check that these elements continue to work efficiently and to identify any areas that need modification. For example, as more concurrent users use a database application, the load on SQL Server can increase. By monitoring, you can determine whether the current SQL Server or system configuration must be changed to handle the increased workload, or whether the increased load has no significant effect on performance and therefore requires no configuration change.

Monitoring SQL Server or the system environment requires:

  • Determining the monitoring goals you want to achieve. 

  • Choosing the most appropriate tool for the type of monitoring you will perform. The most flexible tools for monitoring SQL Server are:

    • SQL Server Profiler. 

    • Windows NT Performance Monitor. 

  • Using the tool to monitor SQL Server or the system environment and analyze the data captured. 

See Also 

In Other Volumes 

"Optimizing Database Performance" in Microsoft SQL Server Diagnostics 

Goals of Monitoring

After Microsoft SQL Server version 7.0 is installed and running, you can monitor the server to:

  • Determine whether it is possible to improve performance. For example, by monitoring the response times for frequently used queries, you can determine if changes to the query or indexes on the tables are needed. 

  • Determine user activity. For example, by monitoring users attempting to connect to SQL Server, you can determine if security is adequately set up. 

  • Troubleshoot any problems or debug application components, such as stored procedures. 

  • Test applications and development systems. For example, by monitoring SQL queries as they are executed, you can determine if they are written correctly and producing the expected results. 

See Also 

In Other Volumes 

"Optimizing Database Performance" in Microsoft SQL Server Diagnostics 

Monitoring to Improve Performance

Optimal performance means minimal response time and maximal throughput, achieved by minimizing network traffic, disk I/O, and CPU time. This goal is reached by thoroughly analyzing the application requirements, understanding the logical and physical structure of the data, and assessing and negotiating tradeoffs between conflicting uses of the database, such as online transaction processing (OLTP) versus decision support.

Response Time vs. Throughput 

Response time measures the length of time required for the first row of the result set to be returned; that is, the time until the user receives visual affirmation that a query is being processed.

Throughput measures the total number of queries that can be handled by the server during a given time.

As the number of users increases, contention between users increases, which in turn causes response time to increase and overall throughput to decrease.

Factors That Affect Performance 

Periodically monitor Microsoft SQL Server to ensure that it is performing at acceptable levels, or to determine if performance can be further improved. These areas affect the performance of SQL Server:

  • System resources (hardware) 

  • Microsoft Windows NT operating system 

  • Database applications 

  • Client applications 

  • Network 

However, before these areas can be monitored, you must know what level of performance is reasonable given normal working conditions. This is done by establishing a server performance baseline.

Establishing a Performance Baseline

To determine whether your Microsoft SQL Server system is performing optimally, you should establish a server performance baseline by taking performance measurements over time. Each set of measurements should be compared against the same measurements taken earlier.

After establishing a server performance baseline, compare the baseline statistics to current server performance. Numbers far above or far below your baseline are candidates for further investigation. They may indicate areas where the server needs to be tuned or reconfigured to improve performance.

For example, if the amount of time to execute a set of queries increases, you may want to examine the queries to determine if they can be rewritten or if column statistics or new indexes need to be added.
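The query-time measurement described above can be sketched in Transact-SQL using SET STATISTICS TIME; the query and table here are illustrative examples only, drawn from the pubs sample database.

```sql
-- Sketch of measuring a query's response time for baseline comparison.
-- SET STATISTICS TIME reports parse, compile, and execution times.
SET STATISTICS TIME ON

SELECT au_lname, au_fname
FROM pubs..authors          -- example query against the pubs sample database
WHERE state = 'CA'

SET STATISTICS TIME OFF
```

Recording the reported CPU and elapsed times at regular intervals, under comparable load, yields per-query numbers that can be compared against the baseline.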

At a minimum, measurements should be taken to determine:

  • Peak and off-peak hours of operation. 

  • Production query or batch command response times. 

  • Database backup and restore completion times. 

See Also 

In Other Volumes 

"sp_configure" in Microsoft SQL Server Transact-SQL and Utilities Reference 

Performance Monitoring Example: Identifying Bottlenecks

A bottleneck is a component of the system that restricts the performance of the entire system, for example, due to excessive demand on a system resource. Every system will have bottlenecks. By monitoring the Microsoft SQL Server system for bottlenecks, you can determine if changes can be made to the limiting component to make it perform at an optimum level.

Demand on a resource can become extreme, causing a bottleneck for the following reasons:

  • Insufficient resources requiring additional or upgraded components. 

  • Resources of the same type are not sharing workloads evenly and need to be balanced. For example, one disk is being monopolized compared to others. 

  • Malfunctioning resource requiring replacement. 

  • Incorrectly configured resource requiring reconfiguration. 

Decide What to Examine 

Low numbers can be just as meaningful as high numbers. If a number is lower than expected, it may be a clue to a problem in another area. For example:

  • Some other component is preventing the load from reaching this component. 

  • Network congestion is preventing client requests from reaching the server. 

  • A bottleneck is preventing client computers from accessing the server as frequently as expected. 

  • Windows NT Performance Monitor is being used incorrectly (for example, user is neglecting to turn on disk counters, looking at the wrong instance or the wrong counters, or even looking at the wrong computer). 

A low number could also mean that the system is performing better than expected in meeting user expectations.

These are the five key areas you should monitor to track server performance and identify bottlenecks.

Bottleneck candidate

Effects on the server

Memory usage

An inadequate amount of memory allocated or available to SQL Server will degrade performance. Data must be continually read from the disk rather than residing in the data cache. Microsoft Windows NT performs excessive paging by swapping data to and from the disk as the pages are needed.

CPU processor utilization

A constantly high CPU rate may indicate the need for a CPU upgrade (or the addition of multiple processors).

Disk I/O performance

A slow disk I/O (disk reads and writes) will cause transaction throughput to degrade.

User connections

An improperly configured number of users can cause your system to run too slowly or restrict the amount of memory that could otherwise be made available to SQL Server.

Blocking locks

A process may be forcing another process to wait, thereby slowing down or stopping the blocked process.

See Also 

In This Volume 

Monitoring Memory Usage

Monitoring CPU Use

Monitoring Disk Activity

SQL Server: Locks Object

SQL Server: General Statistics Object

Monitoring to Determine User Activity

You can monitor individual user activity to pinpoint transactions that may be blocking other transactions or causing the performance of Microsoft SQL Server to be slower than expected.

Monitoring user activity helps identify trends, such as the types of transactions run by certain users, whether any users are executing inefficient ad hoc queries, and which types of transactions require the most resources.

To collect statistical information about users, use either SQL Server Profiler or Windows NT Performance Monitor. Use the SQL Server Enterprise Manager Current Activity window to perform ad hoc monitoring of SQL Server, which allows you to determine user activity on the system.
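For example, a quick ad hoc check of user activity from any query tool might look like this (a sketch using the system stored procedures mentioned above):

```sql
-- Snapshot of current users and processes; the blk column shows the
-- SPID of any process blocking another.
EXEC sp_who

-- Snapshot of locks currently held, by SPID, database, and object.
EXEC sp_lock
```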

See Also 

In This Volume 

Monitoring with SQL Server Enterprise Manager

Sessions Event Category

SQL Server: General Statistics Object

Monitoring to Troubleshoot Problems

You can monitor the following areas to troubleshoot problems:

  • Microsoft SQL Server stored procedures or batches of SQL statements submitted by user applications 

  • User activity, such as blocking locks or deadlocks 

  • Hardware activity, such as disk usage 

Problems can include:

  • Application development errors involving incorrectly written Transact-SQL. 

  • Hardware errors, such as disk or network-related errors. 

  • Excessive blocking due to an incorrectly designed database. 

SQL Server Profiler can be used to monitor and troubleshoot Transact-SQL and application-related problems. Windows NT Performance Monitor can be used to monitor hardware and other system-related problems.

Monitoring to Test Applications

During the development phase of an application, any SQL scripts or stored procedures written will need to be tested to ensure they work as expected. By single-stepping through each line of SQL as it is executed by Microsoft SQL Server, Transact-SQL statements and stored procedures can be tested and debugged as necessary. This ensures more robust systems when the application is delivered and made available to users. SQL Server Profiler allows SQL statements or stored procedures executed by an application to be traced, line by line, as they are executed by SQL Server.

See Also 

In This Volume 

Single-Stepping Traces

Replaying Traces

Components of Monitoring

Monitoring an application, Microsoft SQL Server, or the operating system environment (hardware and software), involves:

  • Identifying the events that must be monitored. 

    The events determine the activities that are monitored and captured. These events depend on what is being monitored and why. For example, when monitoring disk activity, it is not necessary to monitor SQL Server locks. 

  • Determining the event data to capture. 

    The event data describes each instance of an event as it occurred. For example, when monitoring lock events, it is useful to capture data that describes the tables, users, and connections affected by the lock event.

  • Applying filters to limit the event data collected.

    Limiting the event data allows the system to focus on the specific types of events pertinent to the monitoring scenario. For example, when the slow queries for a database are monitored, a filter can be applied to monitor only the queries issued by the application against a particular database that take more than 30 seconds to execute. 

  • Monitoring (capturing) events. 

    This is the process of actively monitoring the application, SQL Server, and so on, to see what is occurring. For example, when disk activity is monitored using Windows NT Performance Monitor, the tool captures and displays event data, such as disk reads and writes, on the screen. 

  • Saving captured event data. 

    This allows the data to be analyzed at a later time or even replayed (when using SQL Server Profiler). Captured event data is saved to a file that can be loaded back into the tool that originally created it for analysis. SQL Server Profiler also allows event data to be saved to a SQL Server table. Saving captured event data is vital when creating a performance baseline: the baseline data is saved and compared against recently captured event data to determine whether performance is optimal. 

  • Creating definition files that contain the settings specified to capture the events. 

    This includes the events themselves, event data, and filters used to capture data. These files can be used to monitor a specific set of events at a later time without redefining the events, event data, and filters. For example, if you frequently want to monitor the number of deadlocks and the users involved in those deadlocks, you can create a file defining those events, event data, and event filters; save the definition; and reapply it the next time you want to monitor deadlocks. SQL Server Profiler uses trace definition files for this purpose. 

  • Analyzing captured event data. 

    The saved, captured event data is loaded into the application that captured it. For example, a captured trace from SQL Server Profiler can be reloaded into SQL Server Profiler for viewing and analysis. Analyzing event data involves determining what is happening and why. This information allows you to make changes that can improve performance, such as adding more memory, correcting coding problems in Transact-SQL statements or stored procedures, or changing indexes, depending on the type of analysis performed. For example, using the Index Tuning Wizard, a captured trace from SQL Server Profiler can be analyzed automatically and index recommendations suggested and created. 

  • Replaying captured event data. 

    This allows you to establish a test copy of the database environment from which the data was captured and replay the captured events as they originally occurred on the real system. You can replay them at the same speed as they originally occurred, as fast as possible (to stress the system), or, more likely, one step at a time, analyzing the system after each event occurs to determine its effects. Replaying allows you to analyze in a test environment the exact events that occurred on a production system, without affecting the production system. Because the captured events can be replayed, testing and analysis can be repeated until the problem is fully understood. Only SQL Server Profiler allows you to replay captured events. 

See Also 

In Other Volumes 

"Index Tuning Wizard" in Microsoft SQL Server Database Developer's Companion 

Choosing a Tool to Monitor Server Performance and Activity

Microsoft SQL Server provides a comprehensive set of tools for monitoring events in SQL Server. The choice of tool depends on the events to be monitored and the type of monitoring. For example, ad hoc monitoring to determine the number of users who are currently connected to a computer running SQL Server can be accomplished by using the sp_who system stored procedure, rather than creating a trace and using SQL Server Profiler.

Tool

Description

SQL Server Profiler

Provides the ability to monitor server and database activity (for example, number of deadlocks, fatal errors, tracing stored procedures and Transact-SQL statements, or login activity). You can capture SQL Server Profiler data to a SQL Server table or a file for later analysis, and also replay the events captured on SQL Server, step by step, to see exactly what happened. SQL Server Profiler tracks engine process events, such as the start of a batch or a transaction.

Windows NT Performance Monitor

Provides the ability to monitor server performance and activity using predefined objects and counters or user-defined counters to monitor events. Windows NT Performance Monitor collects counts of the events rather than data about the events (for example, memory usage, number of active transactions, number of blocked locks, or CPU activity). Thresholds can be set on specific counters to generate alerts that notify operators. Windows NT Performance Monitor mainly tracks resource usage, such as the number of buffer manager page requests that are in use.
Windows NT Performance Monitor only works on Microsoft Windows NT and can monitor (remotely or locally) SQL Server running only on Windows NT.

Current activity window (SQL Server Enterprise Manager)

Provides the ability to graphically display information about currently running processes in SQL Server, blocked processes, locks, and user activity. This is useful for ad hoc views of current activity.

Error logs

Contains more complete information about events in SQL Server. You can use the information in the error log to troubleshoot problems that are known to be SQL Server-related. The Microsoft Windows NT application log provides an overall picture of events occurring on the Windows NT system as a whole, as well as events in SQL Server and SQL Server Agent.

sp_who

Reports snapshot information about current SQL Server users and processes, including the currently executing statement and if the statement is blocked. This is a Transact-SQL alternative to viewing user activity in the current activity window in SQL Server Enterprise Manager.

sp_lock

Reports snapshot information about locks, including the object ID, index ID, type of lock, and type of resource to which the lock applies. This is a Transact-SQL alternative to viewing lock activity in the current activity window in SQL Server Enterprise Manager.

sp_spaceused

Displays an estimate of the current amount of disk space used by a table (or a whole database). This is a Transact-SQL alternative to viewing database usage in SQL Server Enterprise Manager.

sp_monitor

Displays snapshot statistics, including CPU usage, I/O usage, and amount of time idle, which indicate how busy SQL Server has been since sp_monitor was last executed.

DBCC statements

Checks performance statistics and the logical and physical consistency of a database. For more information, see "DBCC" in Microsoft SQL Server Transact-SQL and Utilities Reference. 

Built-in functions

Displays snapshot statistics about SQL Server activity since the server was started, stored in predefined SQL Server counters. For example, @@CPU_BUSY contains the amount of time the CPU has spent executing SQL Server code; @@CONNECTIONS contains the number of SQL Server connections or attempted connections; and @@PACKET_ERRORS contains the number of network packet errors that have occurred on SQL Server connections since SQL Server was last started. For more information, see "Functions" in Microsoft SQL Server Transact-SQL and Utilities Reference.

SQL Server Profiler extended stored procedures

Gathers SQL Server Profiler statistics by executing Transact-SQL extended stored procedures. For more information, see "System Stored Procedures" in Microsoft SQL Server Transact-SQL and Utilities Reference.

Trace flags

Displays information about a specific activity within the server that is used to diagnose detailed problems or performance issues (for example, deadlock chains). For more information, see "Trace Flags" in Microsoft SQL Server Transact-SQL and Utilities Reference.

Simple Network Management Protocol (SNMP)

Monitors SQL Server from applications that run on operating systems that support SNMP (for example, UNIX) by generating alerts when specific events occur within the server. For more information, see "SNMP" in this volume.
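As a sketch of the built-in functions listed above, the cumulative counters can be sampled directly; taking two samples over an interval shows the activity that occurred during that interval.

```sql
-- All values are cumulative since SQL Server was last started.
SELECT @@CPU_BUSY      AS cpu_busy_ticks,    -- time spent executing SQL Server code
       @@IDLE          AS idle_ticks,        -- time SQL Server has been idle
       @@CONNECTIONS   AS connections,       -- connections or attempted connections
       @@PACKET_ERRORS AS packet_errors      -- network packet errors
```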

Comparing Tool Features and Functions

The choice of a monitoring tool depends on the type of events and activity to be monitored.

Event or activity                SQL Server   Windows NT     Current     Transact-   Error
                                 Profiler     Performance    activity    SQL         logs
                                              Monitor        window
Trend analysis                   Yes          Yes
Replaying captured events        Yes
Ad hoc monitoring                Yes                         Yes         Yes         Yes
Generating alerts                             Yes
Graphical interface              Yes          Yes            Yes                     Yes
Using within custom application  Yes 1                                   Yes

1 Using SQL Server Profiler extended stored procedures.

When deciding between SQL Server Profiler and Windows NT Performance Monitor (the two main monitoring tools), the key difference is that SQL Server Profiler monitors engine events, while Windows NT Performance Monitor monitors the resource usage associated with server processes. For example, SQL Server Profiler could be used to monitor deadlock events, including the users and objects involved in the deadlock, while Windows NT Performance Monitor could be used to monitor the total number of deadlocks occurring in a database or on a specific object.

Microsoft Windows NT also provides tools that generate an accurate understanding of what is happening in the system:

  • Task Manager 

    Shows a synopsis of the processes and applications that are running on the system. 

  • Network Monitor Agent 

    Assists in monitoring network traffic. 

For more information about Windows NT tools, see your Windows NT documentation.

Monitoring with SQL Server Profiler

SQL Server Profiler is a graphical tool that allows system administrators to monitor engine events in computers running Microsoft SQL Server. Examples of engine events include:

  • Login connections, failures, and disconnections. 

  • The Transact-SQL SELECT, INSERT, UPDATE, and DELETE statements. 

  • The remote procedure call (RPC) batch status. 

  • The start or end of a stored procedure. 

  • The start or end of statements within stored procedures. 

  • The start or end of an SQL batch. 

  • An error written to the SQL Server error log. 

  • A lock acquired or released on a database object. 

  • An opened cursor. 

Data about each event can be captured and saved to a file or SQL Server table for later analysis. Data about the engine events is collected by creating traces. Examples of the event data captured within a trace include:

  • The type (class) of event, such as SQL:BatchCompleted, which indicates the completion of an SQL batch. 

  • The name of the computer on which the client is running. 

  • The ID of the object affected by the event, such as a table name. 

  • The SQL Server name of the user issuing the statement. 

  • The text of the Transact-SQL statement or stored procedure being executed. 

  • The time the event started and ended. 

Event data can be filtered so that only a subset of the event data is collected. This allows you to collect only the event data in which you are interested. For example, only the events that affect a specific database, or those for a particular user, can be collected, and all others ignored. Alternatively, data could be collected about only those queries that take longer than five seconds to execute.

Additionally, SQL Server Profiler allows captured event data to be replayed against SQL Server, thereby effectively reexecuting the saved events as they originally occurred.

SQL Server Profiler can be used to:

  • Monitor the performance of SQL Server. 

  • Debug Transact-SQL statements and stored procedures. 

  • Identify slow-executing queries. 

  • Test SQL statements and stored procedures in the development phase of a project by single-stepping through statements to confirm that the code works as expected. 

  • Troubleshoot problems in SQL Server by capturing events on a production system and replaying them on a test system, thereby re-creating what happened in the production environment for testing or debugging purposes. Replaying captured events on a separate system allows users to continue working on the production system without interference. 

SQL Server Profiler provides a graphical user interface to a set of extended stored procedures. You can also use these extended stored procedures directly. For example, it is possible to create your own application that uses SQL Server Profiler extended stored procedures to monitor SQL Server.

See Also 

In Other Volumes 

"System Stored Procedures" in Microsoft SQL Server Transact-SQL and Utilities Reference 

Starting SQL Server Profiler

SQL Server Profiler is started from either the Microsoft Windows NT or Windows 95/98 Start menu or SQL Server Enterprise Manager. When SQL Server Profiler is started, it does not automatically monitor any events until a trace definition is created (or imported from a trace definition file) and started.

Permissions Needed to Connect to SQL Server 

SQL Server Profiler can connect to Microsoft SQL Server using either Windows NT Authentication Mode or SQL Server Authentication. When Windows NT Authentication Mode is used to connect to SQL Server, the user account that is used to run SQL Server Profiler must be granted permission to connect to SQL Server. The login account must also be granted permissions to execute SQL Server Profiler extended stored procedures. For more information, see "System Stored Procedures" in Microsoft SQL Server Transact-SQL and Utilities Reference. 

Important When using a shared memory network library, SQL Server does not support impersonation unless SQL Server Profiler is connected using Windows NT Authentication.

Events Monitored by SQL Server Profiler

Microsoft SQL Server provides a set of events (event classes) that can be used by SQL Server Profiler to monitor activity in SQL Server. An event class is an event generated within the SQL Server engine, such as the start of the execution of a stored procedure, a successful or failed connection to SQL Server, a transaction, or a lock time out. Within SQL Server Profiler, event classes are grouped into event categories (collections) that describe the type of event class. For example, all lock event classes are grouped within the Locks event category.

Note Grouping event classes into categories aids usability and understanding within the SQL Server Profiler interface. There is no corresponding engine event that maps to the event category.

These are the SQL Server event categories.

Event category

Description

Cursors

Collection of event classes that are produced by cursor operations.

Error and Warning

Collection of event classes that are produced when a SQL Server error or warning occurs. For example, an error during the compilation of a stored procedure or an exception in SQL Server.

Locks

Collection of database object locking event classes.

Misc.

Collection of miscellaneous event classes that do not fit into any of the other event categories.

Objects

Collection of event classes that are produced when database objects are created, opened, closed, dropped, or deleted.

Scans

Collection of database object scan event classes. Database objects that can be scanned include tables and indexes.

Sessions

Collection of event classes that are produced by clients connecting to and disconnecting from SQL Server.

SQL Operators

Collection of event classes that are produced from the execution of SQL data manipulation language (DML) operators.

Stored Procedures

Collection of event classes that are produced by the execution of stored procedures.

Transactions

Collection of event classes that are produced by the execution of Microsoft Distributed Transaction Coordinator (MS DTC) or SQL transactions, or by writing to the transaction log.

TSQL

Collection of event classes that are produced by the execution of Transact-SQL passed to SQL Server from the client.

User Configurable

Collection of user-configurable event classes.

Data Columns and Defaults 

The data columns describe the data that is collected for each of the event classes captured in the trace. Because the event class determines the type of data that is collected in the trace, not all data columns are applicable to all event classes. For example, the Binary Data data column, when captured for the Lock:Acquired event class, contains the value of the locked page ID or row but has no value for the Disconnect event class. Default data columns are automatically populated for all event classes.

The data displayed in SQL Server Profiler can either be displayed in the order the events occur or grouped based on one or a combination of data columns. This is similar to the GROUP BY clause in Transact-SQL. Grouping events allows you to view events by the type of data displayed. For example, grouping events by SQL User Name and Duration allows you to view all monitored events grouped by user, and then subgrouped by duration. This allows you to easily view which user events are taking the longest amount of time to execute.
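If a trace has been saved to a SQL Server table, a similar grouping can be expressed directly in Transact-SQL. This is a sketch: the table name trace_data and its columns are hypothetical stand-ins for the data columns captured in the trace.

```sql
-- Group captured events by user and rank by total duration, mirroring
-- grouping by SQL User Name and Duration in SQL Server Profiler.
SELECT   [SQL User Name],
         COUNT(*)      AS event_count,
         SUM(Duration) AS total_duration_ms
FROM     trace_data               -- hypothetical table holding a saved trace
GROUP BY [SQL User Name]
ORDER BY total_duration_ms DESC
```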

Data column

Description

Application Name 1

Name of the client application that created the connection to SQL Server. This column is populated with the values passed by the application rather than the displayed name of the program.

Binary Data

Binary value dependent on the event class captured in the trace.

Connection ID 1

ID assigned by SQL Server to the connection that is established by the client application. Events produced by system processes may not have a connection ID.

CPU 1

Amount of CPU time (in milliseconds) that is used by the event.

Database ID 1

ID of the database specified by the USE database statement, or the default database if no USE database statement has been issued for a given connection. SQL Server Profiler displays the name of the database if the Server Name data column is captured in the trace and the server is available. The ID of a database can be determined by using the DB_ID function.

Duration 1

Amount of elapsed time (in milliseconds) taken by the event.

End Time 1

Time at which the event ended. This column is not populated for starting event classes, such as SQL:BatchStarting or SP:Starting.

Event Class 1

Type of event class that is captured.

Event Sub Class

Type of event subclass. This data column is not populated for all event classes.

Host Name

Name of the computer on which the client is running. This data column is populated if the host name is provided by the client. To determine the host name, use the HOST_NAME function.

Host Process ID

ID assigned by the host computer to the process in which the client application is running. This data column is populated if the host process ID is provided by the client. To determine the host ID, use the HOST_ID function.

Index ID

ID for the index on the object that is affected by the event. To determine the index ID for an object, use the indid column of the sysindexes system table.

Integer Data

Integer value dependent on the event class that is captured in the trace.

NT Domain Name 1

Microsoft Windows NT domain to which the user belongs.

NT User Name 1

Windows NT username.

Object ID

System-assigned ID of the object.

Reads

Number of logical disk reads that are performed by the server on behalf of the event.

Server Name 1

Name of the SQL Server that is traced.

Severity

Severity level of an exception.

SPID 1

Server Process ID assigned by SQL Server to the process associated with the client.

SQL User Name 1

SQL Server username of the client.

Start Time 1

Time at which the event started, when available.

Text

Text value dependent on the event class that is captured in the trace.

Transaction ID

System-assigned ID of the transaction.

Writes

Number of physical disk writes performed by the server on behalf of the event.

1 Default data column, which is automatically populated for all event classes.

When viewing the captured data in a trace, if SQL Server Profiler can connect to the computer running SQL Server from which the trace data was captured, it attempts to populate the Database ID, Object ID, and Index ID data columns with the names of the database, object, and index, respectively. Otherwise, identification numbers (IDs) are displayed.
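The same name resolution can be performed manually with built-in functions; the literal ID values below are placeholders, not values from a real trace.

```sql
SELECT DB_ID('pubs')           -- database name -> database ID
SELECT DB_NAME(5)              -- database ID from a trace -> database name
SELECT OBJECT_NAME(123456789)  -- object ID from a trace -> object name
                               -- (resolved in the current database)
```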

Cursors Event Category

The Cursors event classes can be used to monitor cursor operations.

Event class

Description

CursorClose

A cursor previously opened on a Transact-SQL statement by ODBC, OLE DB, or DB-Library is closed.

CursorExecute

A cursor previously prepared on a Transact-SQL statement by ODBC, OLE DB, or DB-Library is executed. For more information, see "How to prepare and execute a statement (ODBC)" in SQL Server Books Online.

CursorImplicitConversion

A cursor on a Transact-SQL statement is converted by SQL Server from one type to another.
Triggered for ANSI and non-ANSI cursors.

CursorOpen

A cursor is opened on a Transact-SQL statement by ODBC, OLE DB, or DB-Library.

CursorPrepare

A cursor on a Transact-SQL statement is prepared for use by ODBC, OLE DB, or DB-Library. For more information, see "How to prepare and execute a statement (ODBC)" in SQL Server Books Online.

CursorRecompile

A cursor that is opened on a Transact-SQL statement by ODBC or DB-Library has been recompiled either directly or due to a schema change. Triggered for ANSI and non-ANSI cursors.

CursorUnprepare

A prepared cursor on a Transact-SQL statement is unprepared (deleted) by ODBC, OLE DB, or DB-Library.

By monitoring the CursorOpen, CursorExecute, and CursorImplicitConversion event classes, you can determine when a cursor is executed and what type of cursor is used. These event classes are useful to determine the actual cursor type used for an operation by SQL Server, rather than the cursor type specified by the application.

These are the event-specific data columns for the Cursors event category.

Event class

Data column

Description

CursorClose

Event Sub Class

Handle of the cursor.

CursorExecute

Event Sub Class

Handle of the cursor.

 

Integer Data

Cursor type. Values are:
1 = Keyset
2 = Dynamic
4 = Forward only
8 = Static
16 = Fast forward

 

Binary Data

Handle of the prepared cursor.

CursorImplicitConversion

Integer Data

Requested cursor type. Values are:
1 = Keyset
2 = Dynamic
4 = Forward only
8 = Static
16 = Fast forward

 

Binary Data

Resulting cursor type. Values are:
1 = Keyset
2 = Dynamic
4 = Forward only
8 = Static
16 = Fast forward

CursorOpen

Event Sub Class

Handle of the cursor.

 

Integer Data

Cursor type. Values are:
1 = Keyset
2 = Dynamic
4 = Forward only
8 = Static
16 = Fast forward

CursorPrepare

Event Sub Class

Handle of the prepared cursor.

CursorUnprepare

Event Sub Class

Handle of the cursor created by CursorPrepare.

See Also 

In Other Volumes 

"Cursors" in Microsoft SQL Server Database Developer's Companion 

Error and Warning Event Category

The Error and Warning event classes can be used to monitor many of the errors and warnings that are raised by Microsoft SQL Server and components such as OLE DB.

Event class

Description

ErrorLog

Error events have been logged in the SQL Server error log.

EventLog

Events have been logged in the Microsoft Windows NT application log.

Exception

Exception has occurred in SQL Server.

Execution Warnings

Warnings have occurred during the execution of a SQL Server statement or stored procedure.

Hash Warning

A hashing operation may have encountered a problem.

Missing Column Statistics

Column statistics that could have been useful for the optimizer are not available.

Missing Join Predicate

Query is being executed that has no join predicate. This could result in a long-running query.

OLEDB Errors

OLE DB error has occurred.

Sort Warnings

Sort operations do not fit into memory. This does not include sort operations involving the creation of indexes, only sort operations within a query (such as an ORDER BY clause used in a SELECT statement).

By monitoring the Missing Column Statistics event class, you can determine if there are statistics missing for a column used by a query. This can cause the optimizer to choose a less-efficient query plan than otherwise expected. For more information about creating column statistics, see "Statistical Information" in Microsoft SQL Server Database Developer's Companion. 
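When the Missing Column Statistics event class identifies a column, statistics can be created explicitly so the optimizer can estimate selectivity. A minimal sketch; the statistics name, table, and column are illustrative:

```sql
-- Create single-column statistics on a column reported by the trace
CREATE STATISTICS stats_au_lname
ON authors (au_lname)
WITH SAMPLE 50 PERCENT
```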

The Execution Warnings event class can be monitored to determine if and how long queries had to wait for resources before proceeding. This is important for determining if there are any contention issues in the system that can affect performance and therefore need investigating. Use the Locks event classes to determine the objects affected.

The Sort Warnings event class can be used to monitor query performance. If a query involving a sort operation generates a Sort Warnings event class with an Event Sub Class data column value of 2, the performance of the query can be affected because multiple passes over the data are required to sort the data. Investigate the query further to determine if the sort operation can be eliminated.

The Hash Warning event class can be used to monitor when a hash recursion or hash bail has occurred during a hashing operation. Hash recursion occurs when the build input does not fit into memory, resulting in the input being split into multiple partitions, which are processed separately. If any of these partitions still do not fit into memory, they are further split into sub-partitions, which are then processed separately. This process continues until each partition fits into memory or the maximum recursion level is reached (displayed in the Integer Data data column), thus causing hash bail to occur.

Hash bail occurs when a hashing operation reaches its maximum recursion depth and reverts to an alternate plan to process its remaining partitioned data. Hash bail usually occurs due to skewed data, trace flags, or bit counting. To eliminate or reduce the chance of hash bail occurring, verify that statistics exist on the columns being joined or grouped. For more information, see "Statistical Information" in Microsoft SQL Server Database Developer's Companion. 

If hash bail continues to occur each time the query is executed, consider using an optimizer hint to force a different algorithm to be used by the query optimizer and then compare the performance of the query. For more information about Join Hints, see "FROM" in Microsoft SQL Server Transact-SQL and Utilities Reference.
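A join hint can be supplied either in the FROM clause or as a query-level OPTION clause; both forms below ask the optimizer to use a merge join instead of the hash join that caused hash bail (table and column names are illustrative, taken from the pubs sample database):

```sql
-- FROM clause join hint
SELECT ta.au_id, t.title
FROM titleauthor ta
INNER MERGE JOIN titles t ON ta.title_id = t.title_id

-- Equivalent query-level hint
SELECT ta.au_id, t.title
FROM titleauthor ta
JOIN titles t ON ta.title_id = t.title_id
OPTION (MERGE JOIN)
```

Compare the performance of the hinted query against the original before committing to the hint.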

These are the event-specific data columns for the Error and Warning event category.

Event class

Data column

Description

ErrorLog

Severity

Error severity.

 

Event Sub Class

Error number.

 

Text

Text of the error message.

EventLog

Binary Data

Supplied binary data, if available.

 

Severity

Error severity.

 

Event Sub Class

Error number, if available.

 

Text

Text of the error message, if available.

Exception

Integer Data

Error number.

 

Severity

Error severity.

 

Event Sub Class

Server state.

Execution Warnings

Event Sub Class

Can have these values:
1 = Query wait. The query must wait for resources before it can execute.
2 = Query time-out. The query timed out while waiting for resources it required to execute.

 

Integer Data

The time (in seconds) the query had to wait before continuing or timing out.

Hash Warning

Event Sub Class

The type of hash operation. Can have these values:
0 = Hash recursion.
1 = Hash bail.

 

Integer Data

Recursion level (Hash recursion only).

 

Object ID

Hash partition node ID.

Missing Column Statistics

Text

List of the columns with missing statistics.

OLEDB Errors

Text

Error message.

Sort Warnings

Event Sub Class

Can have these values:
1 = Single pass. When the sort table was written to disk, only a single additional pass over the data to be sorted was required to obtain sorted output.
2 = Multiple pass. When the sort table was written to disk, multiple passes over the data to be sorted were required to obtain sorted output.

See Also 

In This Volume 

Monitoring the Error Logs

In Other Volumes 

"Error Messages" in Microsoft SQL Server Diagnostics 

Locks Event Category

The Locks event classes can be used to monitor Microsoft SQL Server lock activity.

Event class

Description

Lock:Acquired

Acquisition of a lock on a resource, such as a data page, has been achieved. For more information about resources that can be locked, see "Understanding Locking in SQL Server" in Microsoft SQL Server Database Developer's Companion.

Lock:Cancel

Acquisition of a lock on a resource has been canceled (for example, due to a deadlock).

Lock:Deadlock

Two concurrent transactions have deadlocked each other by trying to obtain incompatible locks on resources that the other transaction owns. For more information, see "Deadlocking" in Microsoft SQL Server Database Developer's Companion. 

Lock:Deadlock Chain

Produced for each of the events leading up to the deadlock.

Lock:Escalation

A finer-grained lock has been converted to a coarser-grained lock (for example, a row lock that is converted to a page lock).

Lock:Released

A lock on a resource, such as a page, has been released.

Lock:Timeout

A request for a lock on a resource, such as a page, has timed out due to another transaction holding a blocking lock on the required resource. Time-out is determined by the @@LOCK_TIMEOUT system function and can be set with the SET LOCK_TIMEOUT statement. For more information, see "Customizing the Lock Time-out" in Microsoft SQL Server Database Developer's Companion.

By monitoring the Locks event classes, you can investigate contention issues caused by concurrent users and applications using a database. The Lock:Acquired and Lock:Released event classes can be used to monitor when objects are being locked, the type of locks taken, and for how long the locks were retained. Locks retained for long periods of time may cause contention issues and should be investigated. For example, an application can be acquiring locks on rows in a table, and then waiting for user input. Because the user input can take a long time to occur, the locks can block other users. In this instance, the application should be redesigned to make lock requests only when needed and not require user input when locks have been acquired.

The Lock:Deadlock, Lock:Deadlock Chain, and Lock:Timeout event classes can be used to monitor when deadlocks and time-out conditions occur, and which objects are involved. This information is useful to determine if deadlocks and time-outs are significantly affecting the performance of your application, and which objects are commonly involved. The application code that modifies these objects can then be examined to determine if changes to minimize deadlocks and time-outs can be made. For more information about reducing deadlocks, see "Avoiding Deadlocks" in Microsoft SQL Server Database Developer's Companion.
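The lock time-out behavior that produces Lock:Timeout events is controlled per connection with SET LOCK_TIMEOUT, as noted above. A sketch:

```sql
-- Wait at most 2 seconds (2000 ms) for any lock before the request times out
SET LOCK_TIMEOUT 2000

-- Verify the current setting in milliseconds (-1 means wait indefinitely)
SELECT @@LOCK_TIMEOUT
```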

Because lock events are so prolific, capturing the lock event classes can incur significant overhead on the server being traced and result in very large trace files or trace tables.

These are the event-specific data columns for the Locks event category.

Event class

Data column

Description

Lock:Acquired

Binary Data

Resource ID.

 

Duration

Wait between the time the lock request was issued and the time the lock was acquired.

 

Object ID

ID of the object on which the lock was acquired.

 

Event Sub Class

Lock mode, such as intent exclusive.

Lock:Cancel

Binary Data

Resource ID.

 

Duration

Wait between the time the lock request was issued and the time the lock was canceled.

 

Object ID

ID of the object on which the lock was canceled.

 

Event Sub Class

Lock mode, such as intent exclusive.

Lock:Deadlock

Binary Data

Resource ID.

 

Duration

Wait between the time the lock request was issued and the time the deadlock occurred.

 

Integer Data

Deadlock number. Numbers are assigned beginning with 0 when the server is started and are incremented for each deadlock.

 

Object ID

ID of the object in contention.

 

Event Sub Class

Lock mode, such as intent exclusive.

Lock:Deadlock Chain

Binary Data

Resource ID.

 

Integer Data

Deadlock number. Numbers are assigned beginning with 0 when the server is started, and incremented for each deadlock.

 

Object ID

ID of the object that was locked.

 

Event Sub Class

Lock mode, such as intent exclusive.

Lock:Escalation

Object ID

ID of the object on which the lock was escalated.

Lock:Released

Binary Data

Resource ID.

 

Duration

Wait between the time the lock request was issued and the time the lock was released.

 

Object ID

ID of the object on which the lock was released.

 

Event Sub Class

Lock mode, such as intent exclusive.

Lock:Timeout

Binary Data

Resource ID.

 

Duration

Wait time between the time the lock request was issued and the lock was timed out.

 

Object ID

ID of the object on which the lock was timed out.

 

Event Sub Class

Lock mode, such as intent exclusive.

See Also 

In Other Volumes 

"Locking" in Microsoft SQL Server Database Developer's Companion 

Misc. Event Category

The Misc. event classes can be used to monitor a variety of event classes not found in the other event categories, such as failed logins and query plans.

Event class

Description

Attention

Attention event, such as client-interrupt requests or broken client connections, has occurred.

Auto-UpdateStats

Event associated with the automatic updating of index statistics has occurred.

Exec Prepared SQL

ODBC, OLE DB, or DB-Library has executed a prepared Transact-SQL statement or statements.

Execution Plan

Plan tree of the Transact-SQL statement being executed is displayed.

LoginFailed

Login attempt to SQL Server from a client has failed.

Prepare SQL

ODBC, OLE DB, or DB-Library has prepared a Transact-SQL statement or statements for use.

Server Memory Change

Microsoft SQL Server memory usage has increased or decreased by either 1 megabyte (MB) or 5 percent of the maximum server memory, whichever is greater.

ServiceControl

Server control event, such as server paused or restart events, has occurred.

Unprepare SQL

ODBC, OLE DB, or DB-Library has unprepared (deleted) a prepared Transact-SQL statement or statements.

By monitoring the Attention event class, you can determine if attention events are occurring. If a high number of attention events is generated, this could indicate a network communication problem between the clients and SQL Server that needs further investigation.

Monitoring the LoginFailed event class is useful for monitoring security and performing logon auditing. By monitoring the Event Sub Class and the SQL User Name or NT User Name default data columns, you can determine which users are failing to connect to SQL Server and why.

These are the event-specific data columns for the Misc. event category.

Event class

Data column

Description

Auto-UpdateStats

Event Sub Class

Can have these values:
0 = error
1 = success

Exec Prepared SQL

Event Sub Class

Handle of the prepared Transact-SQL statement.

Execution Plan

Binary Data

Estimated cost.

 

Integer Data

Estimated rows returned.

 

Text

Execution plan tree. Only SQL statement trees are expressed. Transact-SQL constructs are not represented.

LoginFailed

Text

Login error message.

Prepare SQL

Event Sub Class

Handle of the prepared Transact-SQL statement.

Server Memory Change

Event Sub Class

Can have these values:
1 = Memory increase
2 = Memory decrease

 

Integer Data

The new memory size.

ServiceControl

Event Sub Class

Can have these values:
0 = Exit
1 = Start
2 = Pause
3 = Continue
4 = Stop

Unprepare SQL

Event Sub Class

Handle of the prepared Transact-SQL statement.

Objects Event Category

The Objects event classes can be used to monitor when an object such as a database, table, index, view, or stored procedure is opened, created, deleted, or used.

Event class

Description

Object:Closed

Open object has been closed, such as at the end of a SELECT, INSERT, or DELETE statement.

Object:Created

Object has been created, such as for CREATE INDEX, CREATE TABLE, and CREATE DATABASE statements.

Object:Deleted

Object has been deleted, such as for DROP INDEX and DROP TABLE statements.

Object:Opened

Object has been accessed, such as for SELECT, INSERT, or DELETE statements.

By monitoring the Object:Opened event class, you can determine which objects are most commonly used. The Object:Created and Object:Deleted event classes can be used to determine if many ad hoc objects are being created or deleted, for example, by ODBC applications that often create temporary stored procedures. By monitoring the SQL User Name and NT User Name default data columns in addition to the Objects event classes, you can determine the name of the user who is creating, deleting, or accessing objects. This can be useful when monitoring to determine if your security policies are correctly implemented, for example, to confirm that users who are not allowed to create or delete objects are not doing so.
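A simple batch such as the following would generate Object:Created, Object:Opened, and Object:Deleted events in a trace on this category (the table name is illustrative):

```sql
CREATE TABLE trace_demo (id int NOT NULL, descr varchar(30))  -- Object:Created
INSERT trace_demo VALUES (1, 'monitored insert')              -- Object:Opened
SELECT * FROM trace_demo                                      -- Object:Opened
DROP TABLE trace_demo                                         -- Object:Deleted
```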

Because object events are so prolific, capturing the object event classes can incur significant overhead on the server being traced and result in large trace files or trace tables.

These are the event-specific data columns for the Objects event category.

Event class

Data column

Description

Object:Closed

Event Sub Class

Object type

 

Object ID

Object that was closed

Object:Created

Event Sub Class

Object type

 

Object ID

Object that was created

Object:Deleted

Event Sub Class

Object type

 

Object ID

Object that was deleted

Object:Opened

Event Sub Class

Object type

 

Object ID

Object that was opened

See Also 

In Other Volumes 

"Overview of Creating and Maintaining Databases" in Microsoft SQL Server Database Developer's Companion 

Scans Event Category

The Scans event classes can be used to monitor when a table or index is being scanned during the execution of a query.

Event class

Description

Scan:Started

Table or index scan has started.

Scan:Stopped

Table or index scan has stopped.

These are the event-specific data columns for the Scans event category.

Event class

Data column

Description

Scan:Started

Event Sub Class

Scan mode represented as a bitmap

 

Index ID

Index that is being scanned

 

Object ID

Object that is being scanned

Scan:Stopped

Event Sub Class

Scan mode represented as a bitmap

 

Index ID

Index that is being scanned

 

Object ID

Object that is being scanned

Using the Scan:Started and Scan:Stopped event classes, it is possible to monitor the type of scans being performed by a query on a specific object.

These are the scan modes for the Event Sub Class data column.

Value

Scan mode

1

Normal

2

First

4

Back

8

Unordered

16

No data

32

Reserved

64

Exlatch

128

Index supplied

256

Marker

By monitoring the Index ID default data column, you can determine the identification number of the index being used by a specific query. The Index ID data column contains either:

  • The value 1 when the clustered index of the table is being scanned. 

  • The value 2 through 255 when a nonclustered index is being scanned. 
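To relate an Index ID value captured for a scan back to an index name, query sysindexes for the traced object; an index hint can also force a scan of a specific index for testing. Both statements are sketches using the pubs sample titles table:

```sql
-- Resolve Index ID values captured for scans of the titles table
SELECT name, indid
FROM sysindexes
WHERE id = OBJECT_ID('titles') AND indid BETWEEN 1 AND 255

-- Force a scan of the clustered index (indid 1) to observe Scan:Started/Scan:Stopped
SELECT * FROM titles (INDEX = 1)
```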

Sessions Event Category

The Sessions event classes can be used to monitor Microsoft SQL Server user connections.

Event class

Description

Connect

New connection event has occurred since the trace was started, such as a client requesting a connection to a server running SQL Server.

Disconnect

New disconnect event has occurred since the trace was started, such as a client issuing a disconnect command.

ExistingConnection

Activity by users connected to SQL Server before the trace was started has been detected. Without this event class, connections that existed before the trace started would go undetected, because the Connect event class reports only new connections.

Using the Disconnect and ExistingConnection event classes, it is possible to monitor the length of time each user connection was connected to SQL Server, and the amount of SQL Server processor time the queries submitted on the connection took to execute. This information can be useful for determining:

  • The amount of time and the volume of activity used by each SQL Server user. This can be useful for tracking database activity for different users and charging each user for the time and SQL Server CPU time (CPU data column) they used. 

  • The security of the system, by checking the users who are connecting to and using SQL Server. 

These are the event-specific data columns for the Sessions event category.

Event class

Data column

Description

Connect

Binary Data

Session level settings, including ANSI nulls, ANSI padding, cursor close on commit, null concatenation, and quoted identifiers. For more information, see "SET" in Microsoft SQL Server Transact-SQL and Utilities Reference.

Disconnect

CPU

CPU time used by the connection.

 

Duration

Duration of the connection (time elapsed since the connection was opened).

ExistingConnection

Binary Data

Session level settings, including ANSI nulls, ANSI padding, cursor close on commit, null concatenation, and quoted identifiers. For more information, see "SET" in Microsoft SQL Server Transact-SQL and Utilities Reference. 

 

CPU

CPU time used by the connection until the time of the event.

 

Duration

Duration of the connection until the time of the event.

By monitoring the SQL User Name and NT User Name default data columns, you can map the name of the user to each connection.

To determine failed logins, use the LoginFailed event class in the Misc. Event Category.

See Also 

In This Volume 

Events Monitored by SQL Server Profiler

SQL Operators Event Category

The SQL Operators event classes can be used to monitor when a DELETE, INSERT, SELECT, or UPDATE query occurs.

Event class

Description

Delete

Occurs before a DELETE statement is executed.

Insert

Occurs before an INSERT statement is executed.

Select

Occurs before a SELECT statement is executed.

Update

Occurs before an UPDATE statement is executed.

The Event Sub Class data column for each operator can be used to monitor whether the query optimizer generates a parallel query execution plan for the specific SQL operator.

Event class

Data column

Description

Delete

Event Sub Class

Degree of parallelism (number of CPUs used to perform the DELETE).

Insert

Event Sub Class

Degree of parallelism (number of CPUs used to perform the INSERT).

Select

Event Sub Class

Degree of parallelism (number of CPUs used to perform the SELECT).

Update

Event Sub Class

Degree of parallelism (number of CPUs used to perform the UPDATE).

The degree of parallelism chosen for each query plan execution is defined as follows.

Event Sub Class value

Description

0

No parallelism considered because Microsoft SQL Server could execute the query only on a single CPU. This occurs when:
The computer has only one processor.
The max degree of parallelism server configuration option is set to 1. For more information, see "max degree of parallelism Option" in this volume.
The MAXDOP 1 query hint is specified. For more information, see "SELECT" in Microsoft SQL Server Transact-SQL and Utilities Reference. 
The Desktop edition of SQL Server is being used.

1

Parallel execution considered, but the query is executed using a serial plan because either the query is trivial, the cost of executing the query is less than the cost threshold for parallelism value, or there are not enough resources available to execute a parallel plan. For more information, see "cost threshold for parallelism Option" in this volume.

>1

For a DELETE, INSERT, SELECT, or UPDATE statement, the whole or a portion of the query is executed using a parallel execution plan with the shown degree of parallelism.

No configuration settings need to be changed to achieve proper and efficient parallel query execution.

Every parallel plan contains either the Distribute Streams, Gather Streams, or Repartition Streams logical operators. For more information, see "Logical and Physical Operators" in Microsoft SQL Server Diagnostics.

Note Although system administrators can influence the query optimizer's generation of parallel query execution plans by changing the values for the cost threshold for parallelism and max degree of parallelism server configuration options using the sp_configure system stored procedure, it is not recommended. For more information, see "Setting Configuration Options" in this volume.
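Should these options nevertheless need adjusting, the change is made with sp_configure. Both are advanced options, so show advanced options must be enabled first; the values shown are illustrative, and as the note cautions, changing them is not generally recommended:

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'max degree of parallelism', 1        -- restrict plans to one CPU
EXEC sp_configure 'cost threshold for parallelism', 10  -- raise the parallelism threshold
RECONFIGURE
```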

Stored Procedures Event Category

The Stored Procedures event classes can be used to monitor the execution of stored procedures.

Event class

Description

SP:CacheHit

Procedure is found in the cache.

SP:CacheInsert

Item is inserted into the procedure cache.

SP:CacheMiss

Stored procedure is not found in the procedure cache.

SP:CacheRemove

Item has been removed from the procedure cache.

SP:Completed

Stored procedure has completed.

SP:ExecContextHit

Execution version of a stored procedure has been found in the cache.

SP:Recompile

Stored procedure has been recompiled.

SP:Starting

Stored procedure has started.

SP:StmtCompleted

Statement within a stored procedure has completed.

SP:StmtStarting

Statement within a stored procedure has started.

By monitoring the SP:CacheHit and SP:CacheMiss event classes, you can determine how often stored procedures are found in the cache when executed. If, for example, the SP:CacheMiss event class occurs frequently, it can indicate that more memory should be made available to Microsoft SQL Server, thereby increasing the size of the procedure cache. By monitoring the Object ID of the SP:CacheHit event class, you can determine which stored procedures reside in the cache.

The SP:CacheInsert, SP:CacheRemove, and SP:Recompile event classes can be used to determine which stored procedures are brought into cache (first executed), then later removed from the cache (aged out of the cache), and also when they get recompiled. For more information about recompiling stored procedures, see "Recompiling a Stored Procedure" in Microsoft SQL Server Database Developer's Companion. This information is useful to determine how stored procedures are being used by applications.
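A stored procedure can be flagged for recompilation so that its next execution produces an SP:Recompile event. Two sketches, using object names from the pubs sample database:

```sql
-- Mark every stored procedure that references the titles table for recompilation
EXEC sp_recompile 'titles'

-- Or force recompilation for a single execution of one procedure
EXEC byroyalty @percentage = 40 WITH RECOMPILE
```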

By monitoring the SP:Starting, SP:StmtStarting, SP:StmtCompleted, and SP:Completed event classes and all the TSQL event classes, the execution of a stored procedure can be monitored.

A stored procedure has a compiled version, which holds shared data, and an execution context version, which holds session-specific data. When a stored procedure is looked up in the cache, execution contexts are searched first; if none are found, the cache is searched for compiled plans. Use the SP:ExecContextHit event class to monitor execution contexts. If the SP:ExecContextHit event class is not generated for a stored procedure, the stored procedure has no queries whose execution contexts can be cached.

These are the event-specific data columns for the Stored Procedures event category.

Event class

Data column

Description

SP:CacheHit

Event Sub Class

Nesting level of the stored procedure.

 

Object ID

System-assigned ID of the stored procedure.

SP:CacheInsert

Event Sub Class

Nesting level of the stored procedure.

 

Object ID

System-assigned ID of the stored procedure.

SP:CacheMiss

Event Sub Class

Nesting level of the stored procedure.

 

Object ID

System-assigned ID of the stored procedure.

SP:CacheRemove

Event Sub Class

Nesting level of the stored procedure. A value of 0 indicates that the stored procedure was explicitly removed from the buffer pool.

 

Object ID

System-assigned ID of the stored procedure.

SP:Completed

Event Sub Class

Nesting level of the stored procedure.

 

Object ID

System-assigned ID of the stored procedure.

 

Text

Text of the stored procedure.

SP:ExecContextHit

Event Sub Class

Nesting level of the stored procedure.

 

Object ID

System-assigned ID of the stored procedure.

SP:Recompile

Event Sub Class

Nesting level of the stored procedure.

 

Object ID

System-assigned ID of the stored procedure.

SP:Starting

Event Sub Class

Nesting level of the stored procedure.

 

Object ID

System-assigned ID of the stored procedure.

 

Text

Text of the stored procedure.

SP:StmtCompleted

Event Sub Class

Nesting level of the stored procedure.

 

Integer Data

Actual rows returned by the statement.

 

Object ID

System-assigned ID of the stored procedure.

 

Text

Text of the statement in the stored procedure.

SP:StmtStarting

Event Sub Class

Nesting level of the stored procedure.

 

Object ID

System-assigned ID of the stored procedure.

 

Text

Text of the statement in the stored procedure.

See Also 

In Other Volumes 

"Stored Procedures" in Microsoft SQL Server Database Developer's Companion 

Transactions Event Category

The Transactions event classes can be used to monitor the status of transactions.

Event class

Description

DTCTransaction

Tracks Microsoft Distributed Transaction Coordinator (MS DTC) coordinated transactions between two or more databases.

SQLTransaction

Tracks Transact-SQL BEGIN, COMMIT, SAVE, and ROLLBACK TRANSACTION statements.

TransactionLog

Tracks when transactions are written to the transaction log.

Use the DTCTransaction event class to monitor the state of MS DTC transactions as they occur. This can be useful when testing an application that makes use of distributed transactions.

Monitor the SQLTransaction event class when testing your application stored procedures or triggers to determine, for example, when transactions are committed or rolled back.

Use the TransactionLog event class when you want to monitor activity in the Microsoft SQL Server transaction log, for example, when you want to find out the types of logging activity that are happening when testing your application.
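Naming transactions makes SQLTransaction events easier to read in a trace, because the name appears in the Text data column. A sketch (table and transaction names are illustrative):

```sql
BEGIN TRANSACTION UpdatePrices      -- traced as a SQLTransaction begin event
UPDATE titles SET price = price * 1.1 WHERE type = 'business'
SAVE TRANSACTION BeforeAudit        -- traced as a SQLTransaction save event
COMMIT TRANSACTION UpdatePrices     -- traced as a SQLTransaction commit event
```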

These are the event-specific data columns for the Transactions event category.

Event class

Data column

Description

DTCTransaction

Binary Data

Globally unique identifier (GUID) (in hexadecimal), if available.

 

Event Sub Class

MS DTC state. For more information, see your MS DTC documentation.

 

Text

GUID (in text), if available.

SQLTransaction

Event Sub Class

Type of SQL transaction event (COMMIT or ROLLBACK).

 

Text

Name of the transaction, or user transaction if the transaction is unnamed.

TransactionLog

Event Sub Class

Type of transaction log event, such as BEGINXACT.

TSQL Event Category

The TSQL event classes can be used to monitor the execution and completion of a remote procedure call (RPC), a batch, and a Transact-SQL statement.

Event class

Description

RPC:Completed

RPC has completed.

RPC:Starting

RPC has started.

SQL:BatchCompleted

Transact-SQL batch has completed.

SQL:BatchStarting

Transact-SQL batch has started.

SQL:StmtCompleted

Transact-SQL statement has completed.

SQL:StmtStarting

Transact-SQL statement has started.

By monitoring the TSQL event classes and stepping through the captured events one at a time, you can follow your application queries. The SQL:BatchStarting event class shows the Transact-SQL submitted in a batch, while the SQL:StmtStarting event class shows each individual statement within a batch. By replaying the SQL:BatchCompleted and RPC:Completed event classes, any results returned by the batch or RPC are displayed and can be checked to ensure they match the results you expect.

Monitoring the Start Time, End Time, and Duration default data columns shows when the events start and complete, and how long each RPC, batch, or statement takes to complete. By grouping events based on the Duration default data column, you can easily determine the longest running queries. Monitoring the NT User Name and SQL User Name default data columns can also identify users who submit these queries.

These are the event-specific data columns for TSQL event category.

Event class

Data column

Description

RPC:Completed

Text

Text of the RPC

RPC:Starting

Text

Text of the RPC

SQL:BatchCompleted

Text

Text of the batch

SQL:BatchStarting

Text

Text of the batch

SQL:StmtCompleted

Integer Data

Actual rows returned by the statement

 

Text

Text of the statement that was executed

SQL:StmtStarting

Text

Text of the statement that is about to be executed

User Configurable Event Category

The User Configurable event classes can be used to monitor user-defined events. User-defined events can be created to monitor events that cannot already be monitored by the system-supplied events in other event categories. For example, a user-defined event can be created to monitor the progress of the application you are testing. As the application runs, it can generate events at predefined points, allowing you to determine the current execution point in your application.

User-defined events are generated by your application using the xp_trace_generate_event extended stored procedure; the event_class parameter you specify determines which of the following five event classes records the event.

Event class

Description

UserConfigurable:1

Event data defined by the user

UserConfigurable:2

Event data defined by the user

UserConfigurable:3

Event data defined by the user

UserConfigurable:4

Event data defined by the user

UserConfigurable:5

Event data defined by the user

The values available for the event_class parameter can be determined by using the xp_trace_geteventnames extended stored procedure.
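As a hypothetical sketch, an application might raise one of these events from Transact-SQL as follows. The parameter list and event class number shown for xp_trace_generate_event are illustrative only; see the reference topic for the exact signature.

```sql
-- List the valid event_class values (the five UserConfigurable
-- classes are among them).
EXEC master..xp_trace_geteventnames

-- Raise a user-defined event at a known point in the application.
-- The event class number and parameter list below are illustrative,
-- not the documented signature.
EXEC master..xp_trace_generate_event
    82,                                -- assumed: UserConfigurable:1
    'MyApp: order validation reached'  -- text shown in the trace
```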

See Also 

In Other Volumes 

"xp_trace_geteventnames" in Microsoft SQL Server Transact-SQL and Utilities Reference 

Default Event Classes

When a new trace is created, it is defined with a set of default event classes. You can remove these event classes or add others when you create a new trace. Each time a new trace is created, the default event classes are present unless explicitly removed.

These are the default event classes to trace.

Default event class

Description

Connect

Connection event has occurred, such as a client requesting a connection to a server running Microsoft SQL Server. These are grouped within the Session event category.

Disconnect

Disconnect event has occurred, such as a client issuing a disconnect command. These are grouped within the Session event category.

ExistingConnection

Activity by all users that were logged on before the trace started has been detected. These are grouped within the Session event category.

RPC:Completed

RPC has completed. This is grouped within the TSQL event category.

SQL:BatchCompleted

Transact-SQL batch has completed. This is grouped within the TSQL event category.

See Also 

In This Volume 

Creating Traces

Creating Traces

Traces can be created using SQL Server Profiler, Microsoft SQL Server extended stored procedures, or the Create Trace Wizard. You can create one or more traces using any combination of SQL Server Profiler event classes and event class criteria.

Before creating a trace using SQL Server Profiler, you can specify default trace values in the Options dialog box. The default trace values for SQL Server Profiler are:

  • The name of the default computer running SQL Server to be monitored. The default is the local computer. 

  • The share type, which indicates whether the new trace can be used by other users of the computer on which SQL Server Profiler is installed, or only by the user who creates the trace. The options are Shared or Private (default). 

  • The event classes available to be traced. The options are Commonly traced event classes (default), which is a subset of the event classes available, or All event classes. 

  • The data columns (event data) to be monitored. The options are Commonly captured data columns (default), which is a subset of the data columns available, or All data columns. 

Additionally, you can specify default display values that determine how SQL Server Profiler displays the running traces, such as starting the trace as soon as it is created, the number of text lines displayed when an event is expanded, the size of the trace buffer, and the font to use.

Note If the trace buffer size is exceeded, the oldest events will be deleted from the trace to allow new events to be captured. The trace buffer size does not apply to traces saved to a file or table.

When creating a trace using SQL Server Profiler, specify:

  • A name for the trace. No other trace on the computer can have the same name and share type. 

  • The share type (shared or private). 

  • The name of the computer running SQL Server to be monitored. 

  • The event classes to capture. For more information about the event classes available, see "Events Monitored by SQL Server Profiler" in this volume.

  • The data columns to capture. For more information about the data columns available, see "Events Monitored by SQL Server Profiler" in this volume.

  • Any filters you want to use to restrict the event data captured. For more information, see "Filtering Events" in this volume.

  • The destination for the captured event data, such as a file or table. For more information, see "Saving Traces" in this volume.

The Create Trace Wizard can create predefined traces that can be used to monitor common problems, such as finding poorly performing queries quickly, identifying large table scans, tracking application, user, and stored procedure usage and performance, and detecting and analyzing deadlocks.

Using Extended Stored Procedures 

SQL Server Profiler uses extended stored procedures to create traces from the selections you enter in the user interface. The extended stored procedures are used to create the queues to store events and send the trace output to the appropriate destination. These extended stored procedures can be used from within your own applications to create traces manually, instead of using SQL Server Profiler. This allows you to write custom applications specific to the needs of your enterprise that monitor SQL Server. For example, when using extended stored procedures to create traces, you can:

  • Send trace data to the Microsoft Windows NT application log. 

  • Autostart a trace when SQL Server starts. 

  • Forward trace events from one or more servers to another server, file, or table. 

  • Configure queues and consumers. 

Extended stored procedures expose the underlying architecture used to create traces. The architecture components are:

Producer

Generates the events to be monitored. An example of a producer is the SQL Server lock manager, which generates lock events. For more information, see "Locks Event Category" in this volume.

Filter

Restricts the data monitored by the trace. For more information, see "Filtering Events" in this volume.

Queue

Collects event data and queues the events for asynchronous processing by consumers. The consumer extracts the events from the queue. For example, the file consumer extracts the events from the queue and writes them to a destination file.

Destination

The place to send the event data when extracted from the queue. Destinations include files, SQL Server tables, other computers running SQL Server, and the Windows NT application log.


The general steps required to define your own trace using extended stored procedures are:

  1. Create a new queue using xp_trace_addnewqueue. 

  2. Specify the events to capture using xp_trace_seteventclassrequired. 

  3. Specify any event filters. 

  4. Specify the destination for the captured event data using xp_trace_setqueuedestination. 
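The steps above can be sketched in Transact-SQL. This is a hypothetical outline only: the parameter names, values, and argument counts below are illustrative, not the exact signatures of these extended stored procedures, which are documented in the Transact-SQL reference.

```sql
-- Hypothetical outline of defining a trace with extended stored
-- procedures; all parameters shown are illustrative.
DECLARE @queue_handle int

-- 1. Create a new queue and obtain a handle to it
--    (sizing and column arguments omitted here).
EXEC xp_trace_addnewqueue @queue_handle OUTPUT

-- 2. Specify the event classes to capture on that queue.
EXEC xp_trace_seteventclassrequired @queue_handle, 12  -- e.g. SQL:BatchCompleted

-- 3. Specify any event filters (for example, restricting by
--    application name), using the corresponding filter procedures.

-- 4. Send the captured events to a destination such as a file
--    (destination type and file name are illustrative).
EXEC xp_trace_setqueuedestination @queue_handle, 2, 1, NULL, 'C:\Trace\MyTrace.trc'
```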

Filtering Events

If a filter is not set, all events of the selected event classes are returned in the trace output. Filters limit the events collected in the trace. For example, limiting the Microsoft Windows NT usernames in the trace to specific users reduces the output data to only those users in which you are interested.

There are three types of filters:

  • Name filters

    Used to include or exclude Microsoft SQL Server or Windows NT users and specific applications. 

  • Range filters

    Used to specify a range of values, such as the minimum and maximum duration of an SQL statement. 

  • Identification numbers (IDs)

    Used to include a specific ID number, such as a server process ID (SPID) or connection ID. 

Trace event criteria are parameters used to restrict (filter) the event data captured within the trace. For example, you can monitor the activity of a specific application or exclude an application from monitoring (the default trace event criteria exclude SQL Server Profiler from monitoring itself). Similarly, when monitoring queries to determine which batches take the longest time to execute, you can set the trace event criteria to capture only those batches that consume more than 30 seconds of CPU time (a CPU minimum value of 30,000 milliseconds).

Each trace event criteria parameter has a set of values that determine whether the event data is included in the trace when the event class is monitored by SQL Server Profiler. The values applicable depend on the event criteria chosen.

Event criteria option

Description

Include and Exclude

Specifies that the trace event data must include or exclude the specified value or values listed. For example, by default, SQL Server Profiler is excluded from the list of applications to be traced. A percent (%) wildcard character can be used within the value to specify any series of characters. For example, the character string SQL Server Profiler% specifies any value beginning with the character string 'SQL Server Profiler'. A value of % indicates any valid value.
Multiple values can be included or excluded by separating each value with a semicolon (;), for example, SQL%; ISQL%; MS%.

Minimum and Maximum

Specifies a range of values for applicable trace event data, such as CPU use in milliseconds, execution duration for remote procedure call (RPC) and language statements, and range of severity levels. Any event data value less than the minimum or greater than the maximum is not captured in the trace. This must be a numeric value.

Value

Indicates a single value for the trace event data, such as a specific SQL Server connection ID, to include in the trace. Any event data not matching the value specified is not captured in the trace. Only one value can be specified for the trace event criteria. This must be a numeric value.

These are the trace event criteria; the event criteria option that applies to each (Include and Exclude, Minimum and Maximum, or Value) is shown in parentheses.

Application Name (Include and Exclude)
Application that generates the event, for example, SQL Server Query Analyzer.

Connection ID (Value)
Unique ID for each connection.

CPU (Minimum and Maximum)
Amount of CPU time (in milliseconds).

Database ID (Value)
ID assigned to the database. The value for a database can be determined by using the DB_ID function.

Duration (Minimum and Maximum)
Amount of elapsed time for RPC and language statements, locks, sessions, and other events that have a defined elapsed time.

Host Name (Include and Exclude)
Name of the computer that generates the event. To determine the host name, use the HOST_NAME function.

Host Process ID (Value)
Host process ID for the computer that generates the event. To determine the host process ID, use the HOST_ID function.

Index ID (Value)
Index ID for the object. To determine the index ID for an object, use the indid column of the sysindexes system table.

NT Domain Name (Include and Exclude)
Windows NT domain of the client that generates the event.

NT User Name (Include and Exclude)
Windows NT user responsible for generating the event.

Object ID (Value)
Unique ID for the monitored object. Alternatively, by selecting the Exclude system objects check box, all objects are monitored except system objects, such as the sysobjects table. (This criterion is not applicable when Exclude system objects is selected.)

Reads (Minimum and Maximum)
Number of logical reads performed by the server that is executing the statement.

Server Name (Include and Exclude)
Name of the server used in the trace.

Severity (Minimum and Maximum)
Range of error severity levels.

SPID (Value)
Server process ID (SPID); each connection has a unique SPID.

SQL User Name (Include and Exclude)
SQL Server user who generates the event.

Text (Include and Exclude)
Text contained within the event data.

Writes (Minimum and Maximum)
Number of disk writes performed by the server that executes the statement.
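Several of the criteria above are IDs or names that can be looked up with the built-in functions mentioned in their descriptions. For example, this illustrative query (using the pubs sample database) returns the values for the current connection:

```sql
SELECT DB_ID('pubs') AS database_id,     -- Database ID criterion
       HOST_NAME()   AS host_name,       -- Host Name criterion
       HOST_ID()     AS host_process_id  -- Host Process ID criterion
```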

As a security mechanism, SQL Server Profiler automatically omits from the trace any of the security-related stored procedures that affect passwords. This security mechanism is nonconfigurable and always in effect; it prevents users who otherwise have permission to trace all activity on SQL Server from capturing passwords.

These security-related stored procedures are not monitored:

sp_addapprole

sp_adddistpublisher

sp_adddistributiondb

sp_adddistributor

sp_addlinkedserver

sp_addlinkedsrvlogin

sp_addlogin

sp_addmergepullsubscription_agent

sp_addpullsubscription_agent

sp_addremotelogin

sp_addsubscriber

sp_approlepassword

sp_changedistpublisher

sp_changesubscriber

sp_dsninfo

sp_helpsubscription_properties

sp_link_publication

sp_password

sp_setapprole

xp_startmail

Saving Traces

Saving a trace involves one of these actions:

  • Saving the captured event data to a specified place 

  • Saving the definition of the trace 

Saving Event Data 

The event data captured in a trace can be saved to a file, a Microsoft SQL Server table, or the Microsoft Windows NT application log, or forwarded to another computer running SQL Server.

Note Only the extended stored procedures can be used to save event data to the Windows NT application log or to forward event data to another computer running SQL Server. Additionally, when SQL Server is running on Microsoft Windows 95/98, the Windows NT application log cannot be specified as a destination for the trace queue data because Windows 95/98 does not have an event log.

Saving the captured event data to a file or SQL Server table is useful when you need to analyze or replay the captured data at a later time, such as for trend forecasting or troubleshooting and debugging application problems. For example:

  • Using a trace file or trace table allows you to create a workload that is used as input for the Index Tuning Wizard. 

  • When trying to resolve a problem on your computer that requires analysis by a third party such as a support provider, using a trace file allows you to capture events and send the trace file to the support provider for analysis. 

  • Saving the captured data to a trace table allows you to use the query processing tools in SQL Server to access the data, or to view the data in SQL Server Profiler. However, only members of the sysadmin fixed server role or the table creator can access the trace table directly. 

Important To capture event data to a SQL Server table, the Application Name data column must be captured within the trace.

Each time the trace is run, captured data is appended to the trace file or trace table.

Important Capturing trace data to a table is slower than capturing to a file. An alternative is to capture a trace to a file, open the trace file, and then save the trace as a trace table.

When using a trace file, SQL Server Profiler saves captured event data (not trace definitions) to a SQL Server Profiler (*.trc) file. The extension is added to the end of the file automatically when the trace file is saved, regardless of any other specified extension. For example, if you specify a trace file called Trace.dat, the file created is called Trace.dat.trc.

Saving the captured event data to the Windows NT application log allows the event data to be viewed using the Windows NT Event Viewer. The Windows NT application log is likely to be used to store captured event data when you are specifically monitoring errors and warnings. For example, when monitoring the OLE DB Errors or Execution Warnings event classes, it can be desirable to record these errors in the same location as other Windows NT application errors.

Forwarding the events captured in a trace to a forwarding server is useful when you want to centrally monitor event data generated by many computers running SQL Server. To forward events, a queue is created on each computer running SQL Server that you want to monitor. Each queue is configured using xp_trace_setqueuedestination to send the events to the forwarding server. By monitoring only the forwarding server using SQL Server Profiler or another consumer, you can effectively monitor all the events sent from the computers running SQL Server. The name of the computer that generated the event is attached as a prefix to the event.

Saving Trace Definitions 

The definition of a trace includes the event classes, data columns, event criteria (filters), and all other properties (except the captured event data) used to create a trace. Trace definitions created using SQL Server Profiler are automatically saved in the registry on the computer running SQL Server Profiler. Trace (queue) definitions created using extended stored procedures are stored in the registry of the computer running SQL Server where the queue was created.

Saved trace definitions can be used as templates for traces. For example, if you frequently monitor SQL Server to analyze performance by capturing the same event data each time, using the same trace definition allows you to monitor the same events without having to define the event classes, data columns, and so on every time you create a trace. Additionally, a trace definition created using SQL Server Profiler can be given to another user to monitor specific SQL Server events. For example, a support provider can supply a customer with a trace definition that can be used to monitor specific events relating to a problem the customer is experiencing. The trace definition is used by the customer to capture the required event data, which is then sent to the support provider for analysis.

To save a trace definition so that it can be used on another computer, export the trace definition to a file using SQL Server Profiler. Trace definitions are stored in trace definition (*.tdf) files. The trace definition can then be imported from the file to another computer using SQL Server Profiler.

Note In order to start an imported trace, the server name specified may need to be changed.

Modifying Traces

You can modify trace definitions saved in the registry on the local computer running SQL Server Profiler, trace definitions imported from files, and trace definitions created using extended stored procedures stored on the computer running Microsoft SQL Server. Trace definition properties, such as event classes and data columns to be monitored, are modified in the same way in which the properties were originally set. Event classes and data columns can be added or removed, and filters can be similarly changed. Once the trace definition is modified, saving the trace definition with the same trace or queue name overwrites the original trace definition. For more information, see "Creating Traces" in this volume.

When working with existing trace files and trace tables containing captured event data, only certain properties of the trace can be modified:

  • Additional event classes and data columns cannot be added. Only existing event classes and data columns captured in the trace can be removed. 

  • Only existing trace event filter criteria can be modified. 

Warning Saving a trace file with the same name overwrites the original file, causing any of the originally captured events or data columns that were removed or filtered to be lost.

Starting, Pausing, and Stopping Traces

After you have created a trace definition or imported a trace definition using SQL Server Profiler, you can start, pause, or stop capturing data using the new trace.

When using SQL Server Profiler, starting a trace opens a new trace window (if one is not already open), and data is immediately captured. When using Microsoft SQL Server extended stored procedures, you can start a trace either manually or automatically every time SQL Server starts. As soon as the trace is started, data is captured in the event queue.

Pausing a trace prevents further event data from being captured until the trace is restarted (continued). Restarting a trace resumes trace operations. Any previously captured data is not lost. When the trace is restarted, data capturing is resumed from that point onward.

Stopping a trace stops data from being captured. Once a trace is stopped it cannot be restarted without losing any previously captured data, unless the data has been captured to a trace file or trace table, in which case the newly captured data is appended to the trace file or trace table.

Advanced Options

When you start a trace and the server is the defined source, Microsoft SQL Server creates a queue that provides a temporary holding place for captured server events.

Each trace can have multiple producers. A producer collects events in a specific event category and sends the data to the queue. Events are read off the queue in the order in which they were placed. This reading method is called first-in/first-out (FIFO).

When a client reads from the trace queue, buffer space is freed so that producers can place more events on the queue. The size of the queue determines how many events can be held in the queue at any one time. This can be modified by updating the number of rows to buffer on the source server.

As the number of events in the queue increases, the priority of the thread that is reading the events to pass to the client increases. This allows the client to read events faster to meet the demand. Because the client is reading events faster, either the number of events in the queue decreases or the queue continues to fill, but at a slower rate. As the number of events in the queue decreases, the priority of the thread that is reading the events decreases, thereby reducing the effect on overall server performance.

Sometimes producers generate more events than the queue can hold, and the queue fills up. When the queue is full, producers cannot write any more events to it, and they enter a time-out period. When an event is removed from the queue, at least one producer moves out of the time-out period. When enough events are removed from the queue, all paused producers can write events to the queue again and the queue resumes normal processing.

If the time-out period does not expire, backed-up events are written to the queue and no events are lost. However, if the time-out period expires before a producer can write an event to the queue, the trace enters an autopause state, which instructs producers to stop submitting events to the queue. As a producer enters the autopause state, an event is written to the queue indicating that the producer has been paused. Producers continue to collect event data but cannot write to the queue, so any backed-up events, or events collected during the autopause state, are lost. When enough events have been read by the client to partially empty the queue, producers leave the autopause state and resume writing events to the queue.

If the queue is full but there are no new events, a time-out does not begin because there are no producers waiting to write events to the queue.

Configuring the Queue Settings 

When you create a trace, you can modify the default queue settings for the server that is used as the source of the trace. Use these settings when trying to improve the performance of a trace and to prevent producers from entering the autopause state.

Queue setting

Description

Server timeout

Duration of the time-out period (in seconds) that the producer enters when the queue is full. If the time-out period is exceeded before events are removed from the queue, the producer enters the autopause state.

Number of rows to buffer

Maximum number of rows in the trace queue. A larger queue uses more memory, but it helps prevent the producers from entering the autopause state if there are too many events or producers.

Boost Priority

Point at which the priority of the thread reading the events from the queue is boosted to increase the rate at which events are read. The higher the percentage, the more the queue is allowed to fill before the thread priority is boosted.

Reduce Priority

Point at which the priority of the thread reading the events from the queue is reduced to decrease the rate at which events are read. The lower the percentage, the emptier the queue must be before the thread priority is reduced.

Considerations for setting the Boost Priority and Reduce Priority settings include:

  • The Boost Priority and Reduce Priority settings have no effect on SQL Server Profiler threads because they affect the server thread that reads the events in the queue for SQL Server Profiler. The server thread is part of the SQL Server engine. 

  • By setting the Boost Priority and Reduce Priority settings to 0, the priority of the server thread reading the events from the queue is always boosted, thereby reducing the chances of the queue becoming full. This helps prevent the producers from entering the autopause state, which causes events to be lost. However, using a permanently boosted server thread can impair server performance. 

  • By setting the Boost Priority and Reduce Priority settings to 100, the priority of the server thread reading the events from the queue is always reduced, thereby reducing the effect on the server. However, the queue is more likely to fill, thereby causing producers to enter the autopause state. 

  • By setting the Boost Priority and Reduce Priority settings to low values, such as 2 and 1 respectively, the priority of the server thread reading the events from the queue is boosted during times of heavy traffic but reduced at other times, which reduces the effect on the server. 

Viewing and Analyzing Traces

Use SQL Server Profiler to view captured event data in a trace. SQL Server Profiler displays data based on defined trace properties. Each line in the trace capture data describes an event in Microsoft SQL Server. One way to analyze SQL Server data is to copy the data to another program, such as SQL Server Query Analyzer or the Index Tuning Wizard. The Index Tuning Wizard can use a trace file that contains SQL batch and remote procedure call (RPC) events (and Text data columns). By specifying a server and/or database name when using the wizard, the captured data can be analyzed against a different server and/or database. For more information, see "Index Tuning Wizard" in Microsoft SQL Server Database Developer's Companion.

When a trace is opened using SQL Server Profiler, it is not necessary for the trace file to have the .trc file extension if the file was created by either SQL Server Profiler or the Profiler extended stored procedures.

Note SQL Server Profiler can also read SQL Trace .log files and generic SQL script files. When opening a SQL Trace .log file that does not have a .log file extension, for example trace.txt, specify SQLTrace_Log as the file format.

The SQL Server Profiler display can be configured with customized font, font size, preview lines, and client buffer size to assist in trace analysis.

Analyzing Data to Find Hotspots 

Using SQL Server Profiler, hotspots, such as queries that perform poorly or have exceptionally high numbers of logical reads, can be found by grouping traces or trace files by the Duration, CPU, Reads, or Writes data columns. For example, to find out which objects are frequently opened, create a trace that captures Object:Opened events. Group the event data by Event Class, Server Name, Database ID, and Object ID to display counts of the Object:Opened events for each database object.

Additional information can be found by saving traces to tables and using Transact-SQL to query the event data. For example, to determine which SQL:BatchCompleted events had excessive wait time, execute:

SELECT TextData, Duration, CPU
FROM trace_table_name
WHERE EventClass = 12 -- SQL:BatchCompleted events
AND CPU < (.4 * Duration)
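A similar query against the same (hypothetical) trace_table_name lists the longest-running batches first:

```sql
SELECT TOP 10 TextData, Duration, CPU
FROM trace_table_name
WHERE EventClass = 12   -- SQL:BatchCompleted events
ORDER BY Duration DESC  -- longest-running batches at the top
```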

Displaying Object Names When Viewing Traces 

If you capture the Server Name and Database ID data columns in your trace, SQL Server Profiler displays the object name instead of the object ID (for example, Orders instead of the number 165575628). Similarly, if you capture the Server Name, Database ID, and Object ID, SQL Server Profiler displays the index name instead of the index ID.

If you choose to group by the Object ID data column, group by the Server Name and Database ID data columns first, and then Object ID. Similarly, if you choose to group by the Index ID data column, group by the Server Name, Database ID, and Object ID data columns first, and then Index ID. You need to group in this way because object and index IDs are not unique between servers and databases (and objects for index IDs).

Finding Specific Events Within a Trace 

The following example can be used to find and group events in a trace.

The same technique can be used to find events grouped by Server Name, Database ID, and Object ID. Once you have found the events for which you are looking, group by Connection ID, Application Name, or another event class to view related activity in chronological order.

Note If a trace file is located on a computer running Microsoft Windows 95/98, the trace file cannot be opened by SQL Server Profiler while the file is also being used to capture events. Additionally:

  • SQL Server Profiler, running on Windows NT, cannot open trace or script files located on a Windows 95/98 shared directory.

  • SQL Server Profiler can incur problems when accessing trace or script files located on remote computers if those files subsequently become unavailable. 

See Also 

In This Volume 

Tips for Using SQL Server Profiler

Replaying Traces

When you create or edit a trace, you can save the trace to replay it later. SQL Server Profiler features a multithreaded playback engine that can simulate user connections and SQL Server Authentication, allowing the user to reproduce the activity captured in the trace. Therefore, replay is useful when troubleshooting an application or process problem. When you have identified the problem and implemented corrections, run the trace that found the potential problem against the corrected application or process, then replay the original trace to compare results.

Trace replay supports debugging using break points and run-to-cursor, which especially improves the analysis of long scripts. For more information, see "Single-Stepping Traces" in this volume.

Replay Requirements 

In addition to any other event classes you want to monitor, the following event classes must be captured in a trace to allow the trace to be replayed:

  • Connect 

  • CursorExecute (only required when replaying server-side cursors) 

  • CursorOpen (only required when replaying server-side cursors) 

  • CursorPrepare (only required when replaying server-side cursors) 

  • Disconnect 

  • Exec Prepared SQL (only required when replaying server-side prepared SQL statements) 

  • ExistingConnection 

  • Prepare SQL (only required when replaying server-side prepared SQL statements) 

  • RPC:Starting 

  • SQL:BatchStarting 

In addition to any other data columns you want to capture, the following data columns must be captured in a trace to allow the trace to be replayed:

  • Application Name 

  • Binary Data 

  • Connection ID or SPID 

  • Database ID 

  • Event Class 

  • Event Sub Class 

  • Host Name 

  • Integer Data 

  • Server Name 

  • SQL User Name 

  • Start Time 

  • Text 

Note Use the sample trace definition Sample 6-TSQL for Replay as a template for traces to capture data for replay.

In order to replay a trace against a computer running Microsoft SQL Server (the target), other than the computer originally traced (the source):

  • All logins and users contained in the trace must already be created on the target and in the same database as the source. 

  • All logins and users in the target must have the same permissions they had in the source. 

  • All login passwords must be the same as the password of the user executing the replay. 

Replaying events associated with missing or incorrect logins will result in replay errors, but the replay operation will continue.

In order to replay a trace against a computer running SQL Server (the target), other than the computer originally traced (the source), either:

  • Database IDs on the target must be the same as those on the source. This can be accomplished by creating from the source a backup of the master database, and any user databases referenced in the trace, and restoring them on the target. 

  • The default database for each login contained in the trace must be set (on the target) to the respective target database of the login. For example, the trace to be replayed contains activity for the login Fred in the database Fred_Db on the source. Therefore, on the target, the default database for the login Fred must be set to the database that matches Fred_Db (even if the database name is different). To set the default database of the login, use the sp_defaultdb system stored procedure. 
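For the Fred example above, the login's default database could be set on the target as follows (the database name here is illustrative; substitute the target database that matches Fred_Db):

```sql
-- Point the login Fred at the target database matching Fred_Db.
EXEC sp_defaultdb 'Fred', 'Fred_Db'
```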

Replay Options 

Before replaying a captured trace, you can specify:

  • Server 

    The server is the name of the computer running SQL Server against which you want to replay the trace. The server must adhere to the replay requirements previously mentioned. 

  • Sync. level 

    The synchronization level is the degree of accuracy that SQL Server Profiler enforces when starting events based on the start time captured in the trace.

    • Full Synchronization ensures that all events across all connections start in the order in which they originally started. If an event takes longer to execute than it originally did (for example, because the target server is slower than the source), causing later events on that connection to be delayed, then events on all other connections that should already have started are held back until the delayed events that precede them have started. This is the default synchronization level. 

    • Partial Synchronization allows events across all connections to start at their expected start time captured in the trace, even if events that have earlier start times on other connections have not yet started because they have been delayed. However, events will not start if events on other connections with earlier start times are delayed too much. 

    • No Synchronization allows all events on each connection to be started as soon as the previous event on the same connection has completed. There is no synchronization between events on different connections. 

  • Replay rate 

    The replay rate specifies the timing SQL Server Profiler uses to replay the events.

    • As fast as possible replays the events in the trace as quickly as the computer running SQL Server can process them. There is no timing maintained between events: as soon as one event completes, the next event is started. This is the default replay rate. 

    • Maintain interval between events replays the events in the trace with the same duration of time between events as originally occurred. 

      For example, Event 1 occurs three seconds after the trace started, Event 2 five seconds after the trace started, and Event 3 ten seconds after the trace started. SQL Server Profiler will replay Event 1 three seconds after the start of the trace, Event 2 two seconds after Event 1, and Event 3 five seconds after Event 2. 

      If you replay the trace against a different computer than the source, the overall execution time can vary because the duration of each event can vary. 

    • Maintain relationship to start time replays the events in the trace at the same time they originally occurred. If an event takes longer to execute than it did originally, SQL Server Profiler will reduce the time interval between events to ensure the events occur at the same time relative to the start of the trace. Using the previous example, if Event 2 takes three seconds longer to execute than it originally did, then the time interval between Event 2 and Event 3 is reduced by three seconds to ensure that Event 3 starts ten seconds after the start of the trace. 

  • Output file 

    The output file contains the result of replaying the trace for later viewing. If Progress is selected, the output file can also be replayed at a later time. By default, SQL Server Profiler displays the results of replaying the trace on the screen only. 

  • Display 

    The display options determine what the user sees in the replay pane of the trace window and, if specified, what is saved to the replay output file:

    • Progress causes the replayed events to be traced and displayed. 

    • ODBC errors causes any ODBC errors incurred by replay events to be displayed. 

    • Internal errors causes replay errors incurred by replay events to be displayed. 

    • Results causes the result set(s) returned from replay events to be displayed. 

Replay Considerations 

SQL Server Profiler cannot replay traces:

  • Captured from connections that connected to SQL Server using Windows NT Authentication Mode. For information about Windows NT Authentication Mode, see "Authentication" in this volume.

  • Containing replication and other transaction log activity. 

  • Containing operations that involve globally unique identifiers. For information about globally unique identifiers, see "Auto-Numbering and Identifier Columns" in Microsoft SQL Server Database Developer's Companion.

  • Containing operations on text, ntext, and image columns involving the bcp utility, BULK INSERT, READTEXT, WRITETEXT, and UPDATETEXT statements, and full-text operations. 

  • Containing session binding: sp_getbindtoken and sp_bindsession system stored procedures. 

Additionally, SQL Server Profiler cannot replay SQL Trace .log files that contain SQL Server 6.5 server-side cursor statements (sp_cursor).

Unexpected results or replay errors can occur when replaying a trace containing the Sessions event classes (Connect, Disconnect, and ExistingConnection) if the Binary Data data column is not also captured in the trace. The Binary Data data column, for the Sessions event classes, contains information required to set the ANSI nulls, ANSI padding, cursor close on commit, concat null yields null, and quoted identifier session settings. For more information, see "SET" in Microsoft SQL Server Transact-SQL and Utilities Reference.
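For reference, the five session settings carried in the Binary Data column correspond to the following SET options; the ON/OFF values shown are arbitrary examples, not recommendations:

```sql
-- The session settings the Binary Data column records for the
-- Sessions event classes; example values only.
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET CURSOR_CLOSE_ON_COMMIT OFF
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
```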

When replaying a trace containing concurrent connections, SQL Server Profiler creates a thread for each connection. Therefore, the system performance of the computer replaying the trace can be affected if the trace contains many concurrent connections. To reduce the effect on system performance, filter the trace by specifying one or more values for the Application Name, SQL User Name, or another data column captured in the trace, to focus the trace on only those events you need to monitor.

See Also 

In Other Volumes 

"bcp Utility" in Microsoft SQL Server Transact-SQL and Utilities Reference 

"BULK INSERT" in Microsoft SQL Server Transact-SQL and Utilities Reference 

"Full-text Indexes" in Microsoft SQL Server Database Developer's Companion 

"READTEXT" in Microsoft SQL Server Transact-SQL and Utilities Reference 

"sp_defaultdb" in Microsoft SQL Server Transact-SQL and Utilities Reference 

"UPDATETEXT" in Microsoft SQL Server Transact-SQL and Utilities Reference 

"WRITETEXT" in Microsoft SQL Server Transact-SQL and Utilities Reference 

Single-Stepping Traces

Rather than replay all events in a trace to completion, SQL Server Profiler allows you to replay a trace in the following ways:

  • A single event at a time 

    By replaying a trace a single event at a time, you can examine the effects of each event after it has occurred. When trace replay is continued using single stepping, the next event is replayed, and then the trace is paused again. 

  • To a breakpoint 

    By specifying one or more breakpoints in the trace, all events up to the event marked with the breakpoint are replayed, as specified by the replay options, without any user intervention; trace replay is then paused. Trace replay can continue one event at a time, to the next breakpoint (if one exists), to a cursor, or to the end of the trace. Replaying a trace to a breakpoint is useful if you want to replay a trace without examining each event up to the breakpoint. For example, you have debugged your code and determined that all events up to a breakpoint execute as expected and do not need to be examined further. 

  • To a cursor 

    By replaying a trace to a cursor (a highlighted event in the trace), all events up to the highlighted event are replayed without any user intervention. However, if a breakpoint is marked in the trace between the cursor and the point where replay will next begin, replay stops at the breakpoint rather than continuing to the cursor. Remove all breakpoints in the trace to replay the trace to the cursor. As with a breakpoint, replaying a trace to a cursor is useful if you want to replay a trace without examining each event up to the cursor. 

Single stepping is useful for debugging the events captured in a trace. For example, you can create a trace monitoring the execution of all batches submitted. By replaying the events in the trace one at a time (single stepping), you can determine the effects of each batch as they occur, allowing you to debug your code. This is much more effective than placing large amounts of debug code between batches. Debug code generally creates additional output that must be separated from the actual results and then removed correctly when debugging is complete.

Creating and Replaying SQL Scripts

An SQL script is simply a list of Transact-SQL statements. Any trace, trace file, or trace table can be saved as an SQL script and edited within SQL Server Profiler. The SQL script will contain all the SQL statements captured in the trace. New SQL scripts can also be created within SQL Server Profiler, allowing you to build a set of Transact-SQL statements that can be tested against Microsoft SQL Server. The SQL script can then be saved to a file.

SQL scripts are replayed on a single connection (single thread). To execute the statements in an SQL script against the correct database, either the script must specify the database with the USE <database> Transact-SQL statement, or the target database must be the default database of the user replaying the SQL script. To set a default database, use the sp_defaultdb system stored procedure.
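For example, a minimal script that selects its own database (here the pubs sample database) might look like this:

```sql
-- Select the target database explicitly so the script does not depend
-- on the default database of the user replaying it.
USE pubs
GO
SELECT au_lname, au_fname
FROM authors
GO
```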

SQL scripts can be replayed one statement at a time by using single stepping, which allows you to execute each statement and then test or confirm the behavior before proceeding to the next statement. SQL Server Profiler also allows the script to be modified while you are using it, although not while the script is executing. To modify the script, simply stop execution, modify the SQL in the script, and restart execution.

Note SQL Server Query Analyzer can save SQL scripts in three formats: Unicode, ANSI, and CP 437. However, because SQL Server Profiler does not support CP 437 format, some extended characters for some fonts may not appear correctly when a file saved in CP 437 format is opened in SQL Server Profiler.

See Also 

In Other Volumes 

"sp_defaultdb" in Microsoft SQL Server Transact-SQL and Utilities Reference 

Deleting Traces

Deleting a trace permanently removes it and deletes all trace queue characteristics. Delete only traces you no longer need. You must stop the trace before deleting it.

You can also choose to pause or stop the trace instead of deleting it. For more information, see "Starting, Pausing, and Stopping Traces" in this volume.

Common SQL Server Profiler Scenarios

The following scenarios describe monitoring tasks that are commonly performed using SQL Server Profiler:

  • Monitoring Transact-SQL activity per user 

    You can create a trace that captures events relating to the Sessions event classes (Connect, Disconnect, and ExistingConnection) and TSQL event classes. Include all data columns in the trace, do not specify any event criteria, and group the captured events by SQL User Name. 

    Use the Trace Transact-SQL activity by user option in the Create Trace Wizard to automatically create this trace definition. 

  • Finding the worst-performing queries 

    You can create a trace that captures events relating to TSQL event classes, specifically RPC:Completed and SQL:BatchCompleted. Include all data columns in the trace, group by Duration, and specify event criteria, for example, that the Duration of the event must be at least 1,000 milliseconds. This criterion eliminates short-running events from the trace. The Duration minimum value can be increased as required. If you want to monitor only one database at a time, specify a value for the Database ID event criteria. 

    Use the Find the worst performing queries option in the Create Trace Wizard to automatically create this trace definition. 

  • Identifying the cause of a deadlock 

    You can create a trace that captures events relating to TSQL event classes (RPC:Starting and SQL:BatchStarting) and Locks event classes (Lock:Deadlock and Lock:Deadlock Chain). Include all data columns in the trace and group by Event Class. If you want to monitor only one database at a time, specify a value for the Database ID event criteria. 

    Use the Identify the cause of a deadlock option in the Create Trace Wizard to automatically create this trace definition. 

    To view the connections involved in a deadlock, either:

    • Open the trace containing the captured data, group the data by Connection ID, and expand both connections involved in the deadlock. 

    • Save the captured data to a trace file, and open the trace file twice using SQL Server Profiler to make the file visible in two separate windows. Group the captured data by Connection ID and then expand the connection ID involved in the deadlock; each deadlocked connection is in a separate window. Tile the windows to view the events that cause the deadlock. 

  • Monitoring stored procedure performance 

    You can create a trace that captures events relating to Stored Procedures event classes (SP:Completed, SP:Starting, SP:StmtCompleted, and SP:StmtStarting), and TSQL event classes (SQL:BatchStarting and SQL:BatchCompleted). Include all data columns in the trace, and group by Connection ID. If you want to monitor only one database at a time, specify a value for the Database ID event criteria. Similarly, if you want to monitor only one stored procedure at a time, specify a value for the Object ID event criteria. 

    Use the Profile the performance of a stored procedure option in the Create Trace Wizard to automatically create this trace definition. 

Generally, when monitoring Microsoft SQL Server using SQL Server Profiler, monitor only the event classes in which you are interested. Monitoring too many event classes adds overhead to the server and to the monitoring process, and can cause the trace file or trace table to grow very large, especially when monitoring over a long period of time.
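As a follow-on to the worst-performing-queries scenario, if the captured trace has been saved to a trace table, the same Duration filter can be applied after the fact with an ordinary query. The table name trace_results and the column names are hypothetical; actual names depend on how the trace was saved and which data columns were captured:

```sql
-- trace_results is a hypothetical trace table saved by SQL Server Profiler.
SELECT [Text], Duration, [SQL User Name]
FROM trace_results
WHERE Duration >= 1000   -- milliseconds; raise the threshold as required
ORDER BY Duration DESC
```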

See Also 

In This Volume 

Locks Event Category

Sessions Event Category

Stored Procedures Event Category

TSQL Event Category

Tips for Using SQL Server Profiler

Here are some hints and tips that can help you use SQL Server Profiler more effectively.

Boosting Thread Priority 

If a queue is pausing too often, this may mean that the priority of the thread is too low. To boost thread priority, assign a lower percentage to the Boost Priority field in the From Server dialog box. You should also increase the Reduce Priority field in the From Server dialog box. For more information, see "Advanced Options" in this volume.

Running Too Many Traces 

If a computer running Microsoft SQL Server is running too slowly, SQL Server Profiler may have too many traces or a complex trace may be running. Stop any running traces to see if performance improves. If stopping traces improves performance, then examine your traces carefully to make sure they are not tracing more information than necessary. Make sure you are not running too many complex traces simultaneously.

Managing Large Trace Files 

Large trace files can use significant amounts of disk space and can be slow and expensive to send across networks. Reduce the size of a saved trace file by removing unwanted event types and data columns and by applying filters to limit the trace to specific trace event criteria, such as Connection ID, SPID, or a set of values for Application Name. Save the trace file with the same name or a new name.

Warning Saving a trace file with the same name overwrites the original file, causing any of the originally captured events or data columns that were removed or filtered to be lost.

Monitoring with Windows NT Performance Monitor

Windows NT Performance Monitor is a graphical tool in Microsoft Windows NT for measuring the performance of your own computer or other computers on a network. For each computer, you can view the behavior of objects, such as processors, memory, cache, threads, and processes. Each of these objects has an associated set of counters that measure device usage, queue lengths, delays, and other indicators of throughput and internal congestion.

Windows NT Performance Monitor provides charting, alerting, and reporting capabilities that reflect both current activity and ongoing logging. You can open, browse, and chart log files later as if they reflected current activity.

SQL Server provides its own objects and performance counters that can be viewed using Windows NT Performance Monitor. When Microsoft SQL Server is installed, a Performance Monitor icon is installed in the SQL Server program group.

Windows NT Performance Monitor makes it possible to obtain up-to-the-second SQL Server activity and performance statistics. With this graphical tool, you can:

  • View data simultaneously from any number of computers. 

  • View and change charts to reflect current activity, and show counter values that are updated at a user-defined frequency. 

  • Export data from charts, logs, alert logs, and reports to spreadsheet or database applications for further manipulation and printing. 

  • Add system alerts that list an event in the alert log and can notify you by reverting to the Alert view or issuing a network alert. 

  • Run a predefined application the first time or every time a counter value goes over or under a user-defined value. 

  • Create log files that contain data about various objects from different computers. 

  • Append to one file selected sections from other existing log files to form a long-term archive. 

  • View current-activity reports, or create reports from existing log files. 

  • Save individual chart, alert, log, or report settings, or the entire workspace setup for reuse when needed. 

For information about Windows NT objects and counters, see your Windows NT documentation.

Running Windows NT Performance Monitor

Windows NT Performance Monitor collects information from Microsoft SQL Server using remote procedure calls (RPC). Any user who has Microsoft Windows NT permissions to run Performance Monitor can use it to monitor SQL Server.

Note When using Windows NT Performance Monitor, you cannot connect to SQL Server running on Microsoft Windows 95/98.

As with all performance monitoring tools, expect some performance overhead when monitoring SQL Server with Windows NT Performance Monitor. The actual overhead in any specific instance will depend on the hardware platform, the number of counters, and the selected update interval. However, the integration of Windows NT Performance Monitor with SQL Server is designed to minimize the impact.

For more information about using Windows NT Performance Monitor, see Windows NT Performance Monitor Help or your Windows NT documentation.

Charts, Alerts, Logs, and Reports

Windows NT Performance Monitor allows you to create charts, alerts, logs, and reports to monitor a computer running Microsoft SQL Server.

Charts 

Charts can monitor the current performance of selected objects and counters (for example, CPU usage or disk I/O). You can add to a chart various combinations of SQL Server objects and counters, as well as Microsoft Windows NT objects and counters.

Each chart represents a subset of information you want to monitor. For example, one chart can track memory usage statistics and a second chart can track disk I/O statistics.

Using a chart can be useful for:

  • Investigating why a computer or application is slow or inefficient. 

  • Continually monitoring systems to find intermittent performance problems. 

  • Discovering why you need to increase capacity. 

  • Displaying a trend as a line chart. 

  • Displaying a comparison as a histogram chart. 

Charts are useful for short-term, real-time monitoring of a local or remote computer, for example, when you want to monitor an event as it occurs.

Alerts 

Using alerts, Windows NT Performance Monitor can track specific events and notify you of these events as requested. An alert log can monitor the current performance of selected counters and instances for objects in SQL Server. When a counter exceeds a given value, the log records the date and time of the event. An event can also generate a network alert. You can have a specified program run the first time or every time an event occurs. For example, an alert can send a network message to all system administrators that the computer running SQL Server is getting low on disk space.

Logs 

Logs allow you to record information on the current activity of selected objects and computers for later viewing and analysis. You can collect data from multiple systems into a single log file. For example, you can create various logs to accumulate information on the performance of selected objects on various computers for future analysis. You can save these selections under a file name and reuse them when you want to create another log of similar information for comparison.

Log files provide a wealth of information for troubleshooting or planning. Whereas charts, alerts, and reports on current activity provide instant feedback, log files enable you to track counters over a long period of time, thereby allowing you to examine information more thoroughly and to document system performance.

Reports 

Reports allow you to display constantly changing counter and instance values for selected objects. Values appear in columns for each instance. You can adjust report intervals, print snapshots, and export data. Use reports when you need to display the raw numbers.

For more information about charts, alerts, logs, and reports, or about Windows NT objects and counters, see your Windows NT documentation.

SQL Server Objects and Counters

Microsoft SQL Server provides objects and counters that can be used by Windows NT Performance Monitor to monitor activity in computers running SQL Server. An object is any Microsoft Windows NT or SQL Server resource, such as a SQL Server lock or Windows NT process. Each object contains one or more counters that measure various aspects of the object. For example, the SQL Server Locks object contains counters called Number of Deadlocks/sec and Lock Timeouts/sec.

Some objects have several instances if multiple resources of a given type exist on the computer. For example, the Processor object type will have multiple instances if a system has multiple processors. The Databases object type has one instance for each database on SQL Server. Some object types (for example, the Memory Manager object) have only one instance. If an object type has multiple instances, you can add counters to track statistics for each instance, or in many cases, all instances at once.

When Windows NT Performance Monitor is initially started, it starts monitoring the SQL Server objects and counters found in the Sqlctrs.pmc file. By default, the Sqlctrs.pmc file contains a subset of the available SQL Server counters. SQL Server (and Windows NT) counters can be added to or removed from the file as necessary. By adding counters to or removing counters from the chart and saving the chart settings to the Sqlctrs.pmc file, you can change the default objects and counters monitored when Windows NT Performance Monitor is initially started. The new settings take effect the next time Windows NT Performance Monitor is started. These are the default objects and counters monitored.

SQL Server object                   Counter
SQL Server: Buffer Manager          Buffer Cache Hit Ratio
SQL Server: General Statistics      User Connections
SQL Server: Memory Manager          Total Server Memory (KB)
SQL Server: SQL Statistics          SQL Compilations/sec
SQL Server: Buffer Manager          Page Reads/sec
SQL Server: Buffer Manager          Page Writes/sec

You can configure Windows NT Performance Monitor to display statistics from any SQL Server counter. In addition, you can set a threshold value for any SQL Server counter and generate an alert when the counter exceeds that threshold. For more information about setting an alert, see "Setting Up an SQL Server Database Alert" in this volume.

Note SQL Server statistics are displayed only when SQL Server is running. If you stop and restart SQL Server, the display of SQL Server statistics is interrupted and then automatically resumed.

These are the SQL Server objects.

SQL Server: Access Methods
    Searches through and measures allocation of SQL Server database objects (for example, the number of index searches or number of pages that are allocated to indexes and data).

SQL Server: Backup Device
    Provides information about backup devices used by backup and restore operations, such as the throughput of the backup device.

SQL Server: Buffer Manager
    Provides information about the memory buffers used by SQL Server, such as free memory and buffer cache hit ratio.

SQL Server: Cache Manager
    Provides information about the SQL Server cache used to store objects such as stored procedures, triggers, and query plans.

SQL Server: Databases
    Provides information about a SQL Server database, such as the amount of free log space available or the number of active transactions in the database. There can be multiple instances of this object.

SQL Server: General Statistics
    Provides information about general server-wide activity, such as the number of users that are currently connected to SQL Server.

SQL Server: Latches
    Provides information about the latches on internal resources (such as database pages) that are used by SQL Server.

SQL Server: Locks
    Provides information about the individual lock requests made by SQL Server, such as lock time-outs and deadlocks. There can be multiple instances of this object.

SQL Server: Memory Manager
    Provides information about the SQL Server memory usage, such as the total number of lock structures currently allocated.

SQL Server: Replication Agents
    Provides information about the SQL Server replication agents currently running.

SQL Server: Replication Dist.
    Measures the number of commands and transactions read from the distribution database and delivered to the Subscriber databases by the Distribution Agent.

SQL Server: Replication Logreader
    Measures the number of commands and transactions read from the published databases and delivered to the distribution database by the Log Reader Agent.

SQL Server: Replication Merge
    Provides information about SQL Server merge replication, such as errors generated or the number of replicated rows that are merged from the Subscriber to the Publisher.

SQL Server: Replication Snapshot
    Provides information about SQL Server snapshot replication, such as the number of rows that are bulk copied from the publishing database.

SQL Server: SQL Statistics
    Provides information about aspects of SQL queries, such as the number of batches of Transact-SQL statements received by SQL Server.

SQL Server: User Settable Object
    Performs custom monitoring. Each counter can be a custom stored procedure or any Transact-SQL statement that returns a value to be monitored.

SQL Server: Access Methods Object

The Access Methods object in Microsoft SQL Server provides counters to monitor how the logical pages within the database are accessed. Physical access to the database pages on disk is monitored using the Buffer Manager counters. Monitoring the methods used to access database pages can help you to determine if query performance can be improved by adding or modifying indexes or by rewriting queries. The Access Methods counters can also be used to monitor the amount of data, indexes, and free space within the database, thereby indicating data volume and fragmentation (excessive fragmentation can impair performance).

These are the SQL Server Access Methods counters.

Extent Deallocations/sec
    Number of extents deallocated per second from database objects used for storing index or data records.

Extents Allocated/sec
    Number of extents allocated per second to database objects used for storing index or data records.

Forwarded Records/sec
    Number of records per second fetched through forwarded record pointers.

FreeSpace Page Fetches/sec
    Number of pages returned per second by free space scans used to satisfy requests to insert record fragments.

FreeSpace Scans/sec
    Number of scans per second that were initiated to search for free space in which to insert a new record fragment.

Full Scans/sec
    Number of unrestricted full scans per second. These can be either base-table or full-index scans.

Index Searches/sec
    Number of index searches per second. These are used to start range scans and single index record fetches and to reposition an index.

Mixed Page Allocations/sec
    Number of pages allocated per second from mixed extents. These are used for storing the first eight pages that are allocated to an index or table.

Page Deallocations/sec
    Number of pages deallocated per second from database objects used for storing index or data records.

Page Splits/sec
    Number of page splits per second that occur as the result of overflowing index pages.

Pages Allocated/sec
    Number of pages allocated per second to database objects used for storing index or data records.

Probe Scans/sec
    Number of probe scans per second. These are used to find rows in an index or base table directly.

Range Scans/sec
    Number of qualified range scans through indexes per second.

Scan Point Revalidations/sec
    Number of times per second that the scan point had to be revalidated to continue the scan.

Skipped Ghosted Records/sec
    Number of ghosted records per second skipped during scans.

Table Lock Escalations/sec
    Number of times locks on a table were escalated.

Workfiles Created/sec
    Number of work files created per second.

Worktables Created/sec
    Number of work tables created per second.

Worktables From Cache Ratio
    Percentage of work tables created where the initial pages were immediately available in the work table cache.

See Also 

In Other Volumes 

"Pages and Extents" in Microsoft SQL Server Introduction 

"Indexes" in Microsoft SQL Server Database Developer's Companion 

SQL Server: Backup Device Object

The Backup Device object provides counters to monitor Microsoft SQL Server backup devices used for backup and restore operations. Monitoring backup devices is useful when you want to determine the throughput of specific backup devices, rather than the entire database backup or restore operation, which may be using several backup devices in parallel. By monitoring the throughput of a backup device, you can determine the progress and performance of your backup and restore operations on a per device basis. To monitor the throughput of the entire database backup or restore operation, use the Backup/Restore Throughput/sec counter of the SQL Server Databases object. For more information, see "SQL Server: Databases Object" in this volume.

These are the SQL Server Backup Device counters.

Device Throughput Bytes/sec
    Throughput of read and write operations (in bytes per second) for a backup device used when backing up or restoring databases. This counter exists only while the backup or restore operation is executing.

See Also 

In This Volume 

Disk, Tape, and Named Pipe Devices

SQL Server: Buffer Manager Object

The Buffer Manager object provides counters to monitor how Microsoft SQL Server uses memory to store data pages, internal data structures, and the procedure cache, as well as counters to monitor the physical I/O as SQL Server reads database pages from, and writes database pages to, disk. Monitoring the memory used by SQL Server can help determine, for example, whether bottlenecks exist due to a lack of physical memory for caching frequently accessed data, in which case SQL Server must retrieve the data from disk. By monitoring memory, you can determine whether query performance can be improved by adding more memory or by making more memory available to the data cache or to SQL Server internal structures. Monitoring physical I/O is especially important in determining how often SQL Server must read data from disk; compared to other operations, such as memory access, physical I/O is expensive. Minimizing physical I/O can improve query performance.
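If monitoring suggests that SQL Server is short of memory, the memory available to it can be adjusted with the sp_configure system stored procedure. This is a sketch only; the 256 MB ceiling is an illustrative value:

```sql
-- View and raise the memory ceiling. max server memory is an advanced
-- option, so advanced options must be shown first.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)'        -- view the current setting
EXEC sp_configure 'max server memory (MB)', 256   -- illustrative value
RECONFIGURE
```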

These are the SQL Server Buffer Manager counters.

SQL Server Buffer Manager counters

Description

Buffer Cache Hit Ratio

Percentage of pages found in the buffer cache without having to read from disk. The ratio is the total number of cache hits divided by the total number of cache lookups since SQL Server was started; because it is cumulative, the ratio moves very little after the server has been running for a long time. Because reading from the cache is much less expensive than reading from disk, you want this ratio to be high. Generally, you can increase the buffer cache hit ratio by increasing the amount of memory available to SQL Server.

Cache Size (pages)

Size of the procedure cache (in pages). Includes all instances of cache sizes monitored for different types of plans under the Cache Manager object in addition to pages currently assigned to the procedure cache but not used by any plan.

Checkpoint Writes/sec

Number of pages flushed to disk per second by a checkpoint or other operations that cause all dirty pages to be flushed to disk.

Committed Pages

Number of buffer pages committed.

ExtendedMem Cache Hit Ratio

Percentage of page requests satisfied from the extended memory cache.

ExtendedMem Cache Migrations/sec

Number of pages migrated into the extended memory cache region per second.

ExtendedMem Requests/sec

Number of requests for pages from the large memory region per second.

Free Buffers

Number of free buffers available.

Lazy Writer Buffers/sec

Number of buffers examined by the lazy writer per second. As buffers are needed to read new pages into the cache, the lazy writer buffer pointer is advanced through the cache. Each buffer is examined to determine whether it has been touched since it was last checked and whether it is a dirty page that can be written to disk.

Lazy Writes/sec

Number of buffers written per second by the buffer manager's lazy writer. The lazy writer is a system process whose main task is to flush out batches of dirty, aged buffers (buffers that contain changes that must be written back to disk before the buffer can be reused for a different page) and make them available to user processes. The lazy writer eliminates the need to perform checkpoints frequently for the purpose of creating available buffers.

Page Reads/sec

Number of physical database page reads issued per second. This statistic displays the total number of physical page reads across all databases. Because physical I/O is expensive, you may be able to minimize the cost by using a larger data cache, intelligent indexes, and more efficient queries, or by changing the database design.

Page Requests/sec

Number of requests for buffer pages per second. A page request is a logical read. If the page is found in the cache, there will be no physical I/O associated with the request.

Page Writes/sec

Number of database page writes that are issued per second. Page writes are generally expensive. Reducing page-write activity is important for optimal tuning. One way to do this is to ensure that you do not run out of free buffers in the free buffer pool. If you do, page writes will occur while waiting for an unused cache buffer to flush.

Readahead Pages/sec

Number of requests per second to asynchronously prefetch pages before they are actually encountered. Provides a general idea of how busy read-ahead is in terms of actual physical I/O.

Reserved Page Count

Number of buffer cache reserved pages.

Stolen Page Count

Number of buffer cache pages stolen to satisfy other server memory requests.
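These counters are also exposed inside the server through the master..sysperfinfo system table, so they can be sampled from Transact-SQL as well as from Windows NT Performance Monitor. A sketch follows; note that the per-second counters are stored here as cumulative totals, and Performance Monitor derives the rate by taking the difference between two samples. The counter name strings may be padded, so LIKE is used for matching:

```sql
-- Sample two Buffer Manager counters from Transact-SQL.
-- Per-second counters appear as cumulative totals; Performance
-- Monitor computes the rate between two successive samples.
SELECT counter_name, cntr_value
FROM master..sysperfinfo
WHERE object_name LIKE 'SQLServer:Buffer Manager%'
  AND (counter_name LIKE 'Page Reads/sec%'
       OR counter_name LIKE 'Page Writes/sec%')
```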

See Also 

In This Volume 

Server Memory Options

SQL Server: Cache Manager Object

In Other Volumes 

"Pages and Extents" in Microsoft SQL Server Introduction 

SQL Server: Cache Manager Object

The Cache Manager object provides counters to monitor how Microsoft SQL Server uses memory to store objects such as stored procedures, ad hoc and prepared Transact-SQL statements, and triggers. Multiple instances of the Cache Manager object can be monitored at the same time, with each instance representing a different type of plan to monitor.

Instance

Description

Ad hoc SQL Plans

Query plans produced from an ad hoc Transact-SQL query, including auto-parameterized queries. SQL Server caches the plans for ad hoc SQL statements so that they can be reused if the identical Transact-SQL statement is executed again.

Misc. Normalized Trees

Normalized trees for views, rules, computed columns, and check constraints.

Prepared SQL Plans

Query plans that correspond to Transact-SQL statements prepared using sp_prepare, sp_cursorprepare, or auto-parameterization. User parameterized queries (even if not explicitly prepared) are also monitored as Prepared SQL Plans.

Procedure Plans

Query plans generated by creating a stored procedure.

Replication Procedure Plans

Query plans of a replication system stored procedure.

Trigger Plans

Query plans generated by creating a trigger.

These are the SQL Server Cache Manager counters.

SQL Server Cache Manager counters

Description

Cache Hit Ratio

Percentage of pages found in the cache without having to read from disk. The ratio is the total number of cache hits divided by the total number of cache lookups since SQL Server was started. After a long period of time, the ratio moves very little. Because reading from the cache is much less expensive than reading from disk, you want this ratio to be high. Generally, you can increase the cache hit ratio by increasing the amount of memory available to SQL Server.

Cache Object Counts

Number of objects found in the cache. After a long period of time, the count moves very little.

Cache Pages

Number of pages used by objects in the cache. After a long period of time, the value moves very little.

Cache Use Counts/sec

Number of times per second that each type of object in the cache has been used. The higher this value is, the better. After a long period of time, the count moves very little.

For more information about caching query plans, see "Execution Plan Caching and Reuse" in Microsoft SQL Server Introduction. 

See Also 

In This Volume 

Server Memory Options

SQL Server: Buffer Manager Object

SQL Server: Databases Object

The Databases object in Microsoft SQL Server provides counters to monitor bulk copy operations, backup and restore throughput, and transaction log activities. Monitoring transactions and the transaction log is important for determining how much user activity is occurring in the database and how full the transaction log is becoming. The amount of user activity can determine the performance of the database and affect log size, locking, and replication. Monitoring low-level log activity to gauge user activity and resource usage can help you to identify performance bottlenecks.

Multiple instances of the Databases object, each representing a single database, can be monitored at the same time.

These are the SQL Server Databases counters.

SQL Server Databases counters

Description

Active Transactions

Number of active transactions for the database.

Backup/Restore Throughput/sec

Read/write throughput per second for backup and restore operations of a database. For example, you can measure how the performance of a database backup operation changes when more backup devices are used in parallel or when faster devices are used. The throughput of a database backup or restore operation allows you to determine its progress and performance. This counter exists only while the backup or restore operation is executing.

Bulk Copy Rows/sec

Number of rows bulk copied per second.

Bulk Copy Throughput/sec

Amount of data bulk copied (in kilobytes) per second.

Data File(s) Size (KB)

Cumulative size (in kilobytes) of all the data files in the database including any automatic growth. Monitoring this counter is useful, for example, for determining the correct size of tempdb.

DBCC Logical Scan Bytes/sec

Number of logical read scan bytes per second for database consistency checker (DBCC) statements.

Log Bytes Per Flush

Number of bytes in the log buffer when the buffer is flushed.

Log Cache Hit Ratio

Percentage of log cache reads satisfied from the log cache.

Log Cache Reads/sec

Reads performed per second through the log manager cache.

Log File(s) Size (KB)

Cumulative size (in kilobytes) of all the transaction log files in the database.

Log Flush Wait Time

Total wait time (in milliseconds) to flush the log.

Log Flush Waits/sec

Number of commits per second waiting on the log flush.

Log Flushes/sec

Number of log flushes per second.

Log Growths

Total number of times the transaction log for the database has been expanded.

Log Shrinks

Total number of times the transaction log for the database has been shrunk.

Log Truncations

Total number of times the transaction log for the database has been truncated.

Percent Log Used

Percentage of space in the log that is in use.

Repl. Pending Xacts

Number of transactions in the transaction log of the publication database marked for replication, but not yet delivered to the distribution database.

Repl. Trans. Rate

Number of transactions per second read out of the transaction log of the publication database and delivered to the distribution database.

Shrink Data Movement Bytes/sec

Amount of data being moved per second by autoshrink operations, or DBCC SHRINKDATABASE or DBCC SHRINKFILE statements.

Transactions/sec

Number of transactions started for the database per second.
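The log-oriented counters, such as Percent Log Used, can be cross-checked from Transact-SQL with DBCC SQLPERF, which reports the log size and the percentage of log space used for every database on the server:

```sql
-- Returns one row per database with Database Name,
-- Log Size (MB), Log Space Used (%), and Status.
DBCC SQLPERF(LOGSPACE)
```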

See Also 

In Other Volumes 

"Transaction Logs" in Microsoft SQL Server Database Developer's Companion 

"Transactions" in Microsoft SQL Server Database Developer's Companion 

SQL Server: General Statistics Object

The General Statistics object in Microsoft SQL Server provides counters to monitor general server-wide activity such as the number of current connections and the number of users connecting and disconnecting from computers running SQL Server per second. Monitoring the number of logins and logouts per second is useful when you are working on large online transaction processing (OLTP) type systems where there are many clients connecting and disconnecting from computers running SQL Server.

These are the SQL Server General Statistics counters.

SQL Server General Statistics counters

Description

Logins/sec

Total number of logins started per second.

Logouts/sec

Total number of logout operations started per second.

User Connections

Number of user connections. Because each user connection consumes some memory, configuring overly high numbers of user connections could affect throughput. User connections should be set to the maximum expected number of concurrent users.
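If you do cap the number of connections, the limit is set with the sp_configure system stored procedure. This is a sketch only: in SQL Server 7.0 the default value of 0 lets the server adjust the number of user connections dynamically, and the value 300 below is purely illustrative.

```sql
-- 'user connections' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Illustrative cap; the default of 0 lets SQL Server 7.0
-- manage the number of user connections dynamically.
EXEC sp_configure 'user connections', 300
RECONFIGURE
```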

SQL Server: Latches Object

The Latches object in Microsoft SQL Server provides counters to monitor internal SQL Server resource locks called latches. Monitoring the latches to determine user activity and resource usage can help you to identify performance bottlenecks.

These are the SQL Server Latches counters.

SQL Server Latches counters

Description

Average Latch Wait Time (ms)

Average latch wait time (in milliseconds) for latch requests that had to wait.

Latch Waits/sec

Number of latch requests that could not be granted immediately and had to wait before being granted.

Total Latch Wait Time (ms)

Total latch wait time (in milliseconds) for latch requests that had to wait in the last second.

See Also 

In Other Volumes 

"Latching" in Microsoft SQL Server Introduction 

SQL Server: Locks Object

The Locks object in Microsoft SQL Server provides information about SQL Server locks on individual resource types. Locks are held on SQL Server resources, such as rows read or modified during a transaction, to prevent concurrent use of resources by multiple transactions. For example, if an exclusive (X) lock is held on a row within a table by a transaction, no other transaction can modify that row until the lock is released. Minimizing locks increases concurrency, which can improve performance. Multiple instances of the Locks object can be monitored at the same time, with each instance representing a lock on a resource type.

SQL Server can lock these resources.

Item

Description

RID

Row ID. Used to lock a single row within a table.

Key

Row lock within an index.

Page

8 kilobyte (KB) data page or index page.

Extent

Contiguous group of eight data pages or index pages.

Table

Entire table, including all data and indexes.

Database

Database.

These are the SQL Server Locks counters.

SQL Server Locks counters

Description

Average Wait Time (ms)

Average amount of wait time (in milliseconds) for each lock request that resulted in a wait.

Lock Requests/sec

Number of new locks and lock conversions per second requested from the lock manager.

Lock Timeouts/sec

Number of lock requests per second that timed out, including internal requests for NOWAIT locks.

Lock Wait Time (ms)

Total wait time (in milliseconds) for locks in the last second.

Lock Waits/sec

Number of lock requests per second that could not be satisfied immediately and required the caller to wait.

Number of Deadlocks/sec

Number of lock requests per second that resulted in a deadlock.
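The Locks counters summarize lock activity across the server. To inspect the individual locks currently held, and which sessions hold them, you can use the sp_lock system stored procedure:

```sql
-- Lists current locks: spid, database, object,
-- index, lock type, resource, mode, and status.
EXEC sp_lock
```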

See Also 

In Other Volumes 

"Understanding Locking in SQL Server" in Microsoft SQL Server Database Developer's Companion 

SQL Server: Memory Manager Object

The Memory Manager object in Microsoft SQL Server provides counters to monitor overall server memory usage. Monitoring overall server memory usage to gauge user activity and resource usage can help you to identify performance bottlenecks. Monitoring the memory used by SQL Server can help determine, for example, if bottlenecks exist due to a lack of available physical memory for storing frequently accessed data in cache, in which case SQL Server must retrieve the data from disk. By monitoring memory, you can determine if query performance can be improved by adding more memory or by making more memory available to the data cache or SQL Server internal structures.

These are the SQL Server Memory Manager counters.

SQL Server Memory Manager counters

Description

Connection Memory (KB)

Total amount of dynamic memory the server is using for maintaining connections.

Granted Workspace Memory (KB)

Total amount of memory currently granted to executing processes such as hash, sort, bulk copy, and index creation operations.

Lock Blocks

Current number of lock blocks in use on the server (refreshed periodically). A lock block represents an individual locked resource, such as a table, page, or row.

Lock Blocks Allocated

Current number of allocated lock blocks. At server startup, the number of allocated lock blocks plus the number of allocated lock owner blocks depends on the SQL Server Locks configuration option. If more lock blocks are needed, the value increases.

Lock Memory (KB)

Total amount of dynamic memory the server is using for locks.

Lock Owner Blocks

Number of lock owner blocks currently in use on the server (refreshed periodically). A lock owner block represents the ownership of a lock on an object by an individual thread. Therefore, if three threads each have a shared (S) lock on a page, there will be three lock owner blocks.

Lock Owner Blocks Allocated

Current number of allocated lock owner blocks. At server startup, the number of allocated lock owner blocks plus the number of allocated lock blocks depends on the SQL Server Locks configuration option. If more lock owner blocks are needed, the value increases dynamically.

Maximum Workspace Memory (KB)

Maximum amount of memory available for executing processes such as hash, sort, bulk copy, and index creation operations.

Memory Grants Outstanding

Current number of processes that have successfully acquired a workspace memory grant.

Memory Grants Pending

Current number of processes waiting for a workspace memory grant.

Optimizer Memory (KB)

Total amount of dynamic memory the server is using for query optimization.

SQL Cache Memory (KB)

Total amount of dynamic memory the server is using for the dynamic SQL cache.

Target Server Memory (KB)

Total amount of dynamic memory the server is willing to consume.

Total Server Memory (KB)

Total amount of dynamic memory (in kilobytes) that the server is currently using.
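If Total Server Memory (KB) consistently reaches Target Server Memory (KB), SQL Server is consuming all the memory it is permitted to use. The ceiling can be raised or lowered with sp_configure; the value below is illustrative only, not a recommendation:

```sql
-- 'max server memory' is an advanced option; the value is in MB
-- and 256 here is purely illustrative.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory', 256
RECONFIGURE
```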

See Also 

In Other Volumes 

"Understanding Locking in SQL Server" in Microsoft SQL Server Database Developer's Companion 

SQL Server: Replication Agents Object

The Replication Agents object in Microsoft SQL Server provides counters to monitor the SQL Server replication agents that are currently running. Monitoring the number of running Distribution and Merge Agents is useful for determining the number of Subscribers to which published databases are replicating. Multiple instances of the Replication Agents object can be monitored at the same time, with each instance representing a single type of replication agent: Log Reader, Snapshot, Distribution, or Merge.

These are the SQL Server Replication Agents counters.

SQL Server Replication Agents counters

Description

Running

Number of instances of a given replication agent currently running.

See Also 

In Other Volumes 

"Monitoring Replication Agents" in Microsoft SQL Server Distributed Data Operations and Replication 

SQL Server: Replication Dist. Object

The Replication Dist. object in Microsoft SQL Server provides counters to monitor how many commands and transactions are read from the distribution database and delivered to the Subscriber databases by the SQL Server Distribution Agent.

These are the SQL Server Replication Dist. counters.

SQL Server Replication Dist. counters

Description

Dist:Delivered Cmds/sec

Number of distribution commands delivered per second to the Subscriber.

Dist:Delivered Trans/sec

Number of distribution transactions delivered per second to the Subscriber.

Dist:Delivery Latency

Distribution latency (in milliseconds). The current amount of time elapsed from when transactions are delivered to the Distributor to when they are applied at the Subscriber.

See Also 

In Other Volumes 

"Replication Distribution Agent Utility" in Microsoft SQL Server Transact-SQL and Utilities Reference 

SQL Server: Replication Logreader Object

The Replication Logreader object in Microsoft SQL Server provides counters to monitor the SQL Server Log Reader Agent.

These are the SQL Server Replication Logreader counters.

SQL Server Replication Logreader counters

Description

Logreader:Delivered Cmds/sec

Number of Log Reader Agent commands delivered per second to the Distributor.

Logreader:Delivered Trans/sec

Number of Log Reader Agent transactions delivered per second to the Distributor.

Logreader:Delivery Latency

Current amount of time, in milliseconds, elapsed from when transactions are applied at the Publisher to when they are delivered to the Distributor.

See Also 

In Other Volumes 

"Replication Log Reader Agent Utility" in Microsoft SQL Server Transact-SQL and Utilities Reference 

SQL Server: Replication Merge Object

The Replication Merge object in Microsoft SQL Server provides counters to monitor each SQL Server merge execution that moves data changes up from a merge replication Subscriber to the Publisher, and down from the Publisher to the Subscriber.

These are the SQL Server Replication Merge counters.

SQL Server Replication Merge counters

Description

Conflicts/sec

Number of conflicts per second that occurred during the Publisher/Subscriber upload and download. Ideally, this value is zero; a nonzero value may require action such as notifying the losing side or overriding the conflict.

Downloaded Changes/sec

Number of rows per second merged (inserted, updated, and deleted) from the Publisher to the Subscriber.

Uploaded Changes/sec

Number of rows per second merged (inserted, updated, and deleted) from the Subscriber to the Publisher.

See Also 

In Other Volumes 

"Monitoring Replication Agents" in Microsoft SQL Server Distributed Data Operations and Replication 

"Replication Merge Agent Utility" in Microsoft SQL Server Transact-SQL and Utilities Reference 

SQL Server: Replication Snapshot Object

The Replication Snapshot object in Microsoft SQL Server provides counters to monitor SQL Server snapshot replication.

These are the SQL Server Replication Snapshot counters.

SQL Server Replication Snapshot counters

Description

Snapshot:Delivered Cmds/sec

Number of commands delivered per second to the Distributor.

Snapshot:Delivered Trans/sec

Number of transactions delivered per second to the Distributor.

See Also 

In Other Volumes 

"Monitoring Replication Agents" in Microsoft SQL Server Distributed Data Operations and Replication 

"Replication Snapshot Agent Utility" in Microsoft SQL Server Transact-SQL and Utilities Reference 

SQL Server: SQL Statistics Object

The SQL Statistics object in Microsoft SQL Server provides counters to monitor compilation and the type of requests sent to SQL Server. Monitoring the number of query compilations and recompilations, together with the number of batches received by SQL Server, gives you an indication of how quickly SQL Server is processing user queries and how effectively the query optimizer is processing them.

Compilation is a significant part of a query's turnaround time. The objective of the cache is to reduce compilation by storing compiled queries for reuse, eliminating the need to recompile them when they are executed again. However, each unique query must be compiled at least once. Query recompilations can be caused by the following factors:

  • Schema changes, including base schema changes such as adding columns or indexes to a table, or statistics schema changes such as inserting or deleting a significant number of rows from a table. 

  • Environment (SET statement) changes. Changes in session settings such as ANSI_PADDING or ANSI_NULLS can cause a query to be recompiled. 
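You can also trigger recompilation deliberately. The sp_recompile system stored procedure marks the stored procedures and triggers that reference a table so that they are recompiled the next time they run; the table name below assumes the pubs sample database:

```sql
-- Mark everything that references the authors table
-- for recompilation on its next execution.
EXEC sp_recompile 'authors'
```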

These are the SQL Statistics counters.

SQL Server SQL Statistics counters

Description

Auto-Param Attempts/sec

Number of auto-parameterization attempts per second. Total should be the sum of the failed, safe, and unsafe auto-parameterizations. Auto-parameterization occurs when SQL Server attempts to reuse a cached plan for a previously executed query that is similar, but not exactly the same, as the current query. For more information, see "Auto-parameterization" in Microsoft SQL Server Introduction. 

Batch Requests/sec

Number of Transact-SQL command batches received per second. This statistic is affected by all constraints (such as I/O, number of users, cache size, complexity of requests, and so on). High batch requests mean good throughput. For more information, see "Batch Processing" in Microsoft SQL Server Introduction. 

Failed Auto-Params/sec

Number of failed auto-parameterization attempts per second. This value should be small.

Safe Auto-Params/sec

Number of safe auto-parameterization attempts per second.

SQL Compilations/sec

Number of SQL compilations per second. Indicates the number of times the compile code path is entered. Includes compiles due to recompiles. Once SQL Server user activity is stable, this value should reach a steady state.

SQL Re-Compilations/sec

Number of SQL recompiles per second. Counts the number of times recompiles are triggered. In general, you want the number of recompiles to be low.

Unsafe Auto-Params/sec

Number of unsafe auto-parameterization attempts per second. An attempt is designated unsafe when the query has characteristics that prevent the cached plan from being shared. The fewer of these that occur, the better.

See Also 

In This Volume 

SQL Server: Cache Manager Object

SQL Server: User Settable Object

The User Settable object in Microsoft SQL Server allows you to create custom counter instances. Creating your own user-defined counter instances is useful in monitoring aspects of the server not monitored by existing counters, such as components unique to your SQL Server database (for example, determining the number of customer orders logged or the product inventory).

The SQL Server User Settable object contains 10 instances of the query counter: User counter 1 through User counter 10. User counters 1 through 10 map to the SQL Server stored procedures sp_user_counter1 through sp_user_counter10. As these stored procedures are executed by user applications, the values set by the stored procedures are displayed in Windows NT Performance Monitor. A counter can monitor any single integer value, for example, a stored procedure that counts how many orders for a particular product have occurred in one day.

Note The user counter stored procedures are not polled automatically by Windows NT Performance Monitor. They must be explicitly executed by a user application for the counter values to be updated. This is different from earlier versions of SQL Server, in which the user-defined counter query was executed whenever Windows NT Performance Monitor needed to update the counter. Similar functionality can now be achieved by using a trigger to update the value of the counter automatically. For example, to create a counter that monitors the number of rows in a table, create an INSERT and DELETE trigger on the table that passes the current row count to the counter stored procedure:

DECLARE @rows int
SELECT @rows = COUNT(*) FROM table
EXECUTE sp_user_counter1 @rows

Whenever the trigger is fired because of an INSERT or DELETE operation occurring on the table, the Windows NT Performance Monitor counter is automatically updated.
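Put together, such a trigger might look like the following sketch; the table name inventory and the trigger name are assumptions for illustration only:

```sql
-- Hypothetical table name; FOR INSERT, DELETE fires the trigger
-- on both kinds of row changes.
CREATE TRIGGER inventory_rowcount
ON inventory
FOR INSERT, DELETE
AS
DECLARE @rows int
SELECT @rows = COUNT(*) FROM inventory
-- Publish the current row count through User counter 1.
EXECUTE sp_user_counter1 @rows
```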

These are the SQL Server User Settable counters.

SQL Server User Settable counters

Description

Query

Defined by the user.

To make use of the user counter stored procedures, simply execute them from your own application with a single integer parameter representing the new value for the counter. For example, to set User counter 1 to the value 10, execute this Transact-SQL statement:

EXECUTE sp_user_counter1 10

The user counter stored procedures can be called from anywhere that other stored procedures can be called from, such as your own stored procedures. For example, you can create the following stored procedure to count the number of connections and attempted connections made since SQL Server was started:

-- Drop the procedure only if it already exists.
IF EXISTS (SELECT * FROM sysobjects WHERE name = 'My_Proc' AND type = 'P')
    DROP PROCEDURE My_Proc
GO
CREATE PROCEDURE My_Proc
AS
EXECUTE sp_user_counter1 @@CONNECTIONS
GO

The @@CONNECTIONS function returns the number of connections or attempted connections since SQL Server was started. This value is passed to the sp_user_counter1 stored procedure as the parameter.

Important The queries defined in the user counter stored procedures should be as simple as possible. Queries that are expensive to execute, for example, queries that perform large amounts of I/O or memory-intensive queries that perform substantial sort or hash operations, can impact performance.

Common Windows NT Performance Monitor Scenarios

When monitoring Microsoft SQL Server and the operating system to investigate performance-related issues, there are three main areas on which to concentrate your initial efforts:

  • Disk activity 

  • Processor utilization 

  • Memory usage 

It can be useful to monitor Microsoft Windows NT and SQL Server counters at the same time to determine any correlation between the performance of SQL Server and Windows NT. For example, monitoring the Windows NT disk I/O counters and the SQL Server Buffer Manager counters at the same time can show how the whole system is behaving.

Monitoring a computer using Windows NT Performance Monitor can slightly impact the performance of the computer being monitored. Therefore, either log the generated Performance Monitor data to another disk or computer so that it reduces the effect on the computer being monitored, or run Windows NT Performance Monitor remotely. Additionally, only monitor the counters in which you are interested.

Monitoring too many counters adds overhead to the monitoring process and affects the computer being monitored, which can distort the results.

Monitoring Disk Activity

Microsoft SQL Server uses Microsoft Windows NT I/O calls to perform disk reads and writes. SQL Server manages when and how disk I/O is performed, but relies on Windows NT to perform the underlying I/O operations. The I/O subsystem includes the system bus, disk controller cards, disks, tape drives, CD-ROM drives, and many other I/O devices. The disks are frequently the biggest bottleneck in a system.

Monitoring Disk I/O and Detecting Excess Paging 

Two of the counters that can be monitored to determine disk activity include:

  • PhysicalDisk: % Disk Time 

  • PhysicalDisk: Avg. Disk Queue Length 

The PhysicalDisk: % Disk Time counter in Windows NT Performance Monitor monitors the percentage of time that the disk is busy with read/write activity. If the PhysicalDisk: % Disk Time counter is high (more than 90 percent), check the PhysicalDisk: Current Disk Queue Length counter to see how many system requests are waiting for disk access. The number of waiting I/O requests should be sustained at no more than 1.5 to 2 times the number of spindles making up the physical disk. Most disks have one spindle, although redundant array of independent disks (RAID) devices usually have more. A hardware RAID device appears as one physical disk in Windows NT Performance Monitor; RAID devices created through software appear as multiple instances.

Use the values of the Current Disk Queue Length and % Disk Time counters to detect bottlenecks within the disk subsystem. If Current Disk Queue Length and % Disk Time counter values are consistently high, consider using a faster disk drive, moving some files to an additional disk or server, or adding additional disks to a RAID array if one is being used.

If you are using a RAID device, the % Disk Time counter can indicate a value greater than 100 percent. If it does, use the PhysicalDisk: Avg. Disk Queue Length counter to determine how many system requests on average are waiting for disk access.

Applications and systems that are I/O-bound may keep the disk constantly active. This is called disk thrashing.

Monitor the Memory: Page Faults/sec counter to make sure that the disk activity is not caused by paging. In Windows NT, paging is caused by:

  • Processes configured to use too much memory. 

  • File system activity. 

If you have more than one logical partition on the same hard disk, you should use the Logical Disk counters instead of the Physical Disk counters. Looking at the logical disk counters will help you determine which files are heavily accessed. Once you have found the disks with high levels of read/write activity, look at the read-specific and write-specific counters (for example, Logical Disk: Disk Write Bytes/sec) for the type of disk activity that is causing the load on each logical volume.

The PhysicalDisk: Avg. Disk Queue Length counter monitors the number of outstanding requests on the disk. Numbers that are consistently higher on one disk than other disks might indicate the need to redistribute data. This counter's value is affected by the max async IO configuration option. If the counter value is consistently less than the configuration option value, it is an indication that the disk subsystem has enough capacity to handle the batched I/O being generated by SQL Server. In this situation, increasing the max async IO configuration value can improve performance. Conversely, if the counter is consistently greater than the configuration option value, then the disk subsystem is not keeping up with the I/O requests. If the overload is being caused by SQL Server I/O, then reducing the configuration value may be beneficial.
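The max async IO option is changed with sp_configure. It is an advanced option in SQL Server 7.0, the value 64 below is illustrative rather than a recommendation, and the change typically requires a server restart to take effect:

```sql
-- Expose advanced options, then adjust max async IO.
-- Tune the value against Avg. Disk Queue Length as described above.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max async IO', 64
RECONFIGURE
```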

Note Because disk counters can increase disk access time on some older computers, Windows NT does not automatically activate the counters at system startup. To use these disk I/O counters, you must run diskperf -y (or diskperf -ye for RAID devices) from the command prompt. Then, you must shut down and restart the computer. For more information, see your Windows NT documentation.

Isolating Disk Activity Created by SQL Server 

To determine the amount of I/O generated by SQL Server components, examine the following performance areas:

  • Writing pages to disk 

  • Reading pages from disk 

The number of page reads and writes that SQL Server performs can be monitored using the SQL Server: Buffer Manager Page Reads/sec and Page Writes/sec counters. If these values start to approach the capacity of the hardware I/O subsystem, try to reduce them either by tuning your application or database to require fewer I/O operations (for example, through index coverage, better indexes, or normalization) or by increasing the I/O capacity of the hardware.

Monitoring CPU Use

You should periodically monitor the computer running Microsoft SQL Server to determine if CPU utilization rates are within normal ranges. A continually high CPU rate may indicate the need for a CPU upgrade or the addition of multiple processors. Additionally, a continually high CPU utilization may indicate a poorly tuned or designed application. Optimizing the application can lower CPU utilization. For more information, see "Application Design" in Microsoft SQL Server Diagnostics.

A good way to determine CPU utilization is to use the Processor: % Processor Time counter in Windows NT Performance Monitor. This counter monitors the amount of time the CPU spends processing nonidle threads. A consistent rate of 80 to 90 percent may indicate the need for a CPU upgrade or the addition of more processors. For multiprocessor systems, monitor a separate instance of this counter for each processor; each instance represents processor time on that specific processor only. To determine the average for all processors, use the System: % Total Processor Time counter instead.

Optionally, you can also monitor:

  • Processor: % Privileged Time 

    This counter corresponds to the percentage of time the processor is spending executing Microsoft Windows NT kernel commands such as processing SQL Server I/O requests. If this counter is consistently high in conjunction with the Physical Disk counters, then SQL Server could be I/O-bound. Consider a faster or more efficient disk subsystem.

    Note Different disk controllers and drivers use different amounts of kernel processing time. Efficient controllers and drivers use less privileged time, leaving more processing time available for user applications, increasing overall throughput. 

  • Processor: % User Time 

    This counter corresponds to the percentage of time the processor is spending executing user processes such as SQL Server. 

  • System: Processor Queue Length 

    This counter corresponds to the number of threads waiting for processor time. A processor bottleneck develops when threads of a process require more processor cycles than are available. If more than a few processes are contending for most of the processor's time, you might need to install a faster processor or, on a multiprocessor system, an additional processor. 

When you examine processor usage, consider the type of work the computer running SQL Server is performing. If SQL Server is performing a lot of calculations, such as queries involving aggregates or memory-bound queries that require no disk I/O, 100 percent of the processor's time can be used. If this causes the performance of other applications to suffer, try changing the workload, for example, by dedicating the computer to running SQL Server.

Values around 100 percent, where many client requests are executing, may indicate that processes are queuing up, waiting for processor time, and causing a bottleneck. Resolve the problem by adding more powerful processors.

Monitoring Memory Usage

You should periodically monitor the computer running Microsoft SQL Server to confirm that memory usage is within normal ranges and that no processes, including SQL Server, are being starved of or consuming too much memory.

To monitor for a low-memory condition, start with the following object counters:

  • Memory: Available Bytes

  • Memory: Pages/sec 

The Available Bytes counter indicates how many bytes of memory are currently available for use by processes. The Pages/sec counter indicates the number of pages that either were retrieved from disk to resolve hard page faults or were written to disk to free space in the working set.

Low values for the Available Bytes counter can indicate that there is an overall shortage of memory on the computer or that an application is not releasing memory. A high rate for the Pages/sec counter could indicate excessive paging. Monitor the Memory: Page Faults/sec counter to make sure that the disk activity is not caused by paging.

A low rate of paging (and hence page faults) is normal, even if the computer has plenty of available memory. The Microsoft Windows NT Virtual Memory Manager (VMM) steals pages from SQL Server and other processes as it trims the working-set sizes of those processes, causing page faults. To determine if SQL Server rather than another process is causing excessive paging, monitor the Process: Page Faults/sec counter for the SQL Server process instance.

For more information about resolving excessive paging, see your Windows NT documentation.

Isolating Memory Used by SQL Server 

By default, SQL Server changes its memory requirements dynamically, based on available system resources. If SQL Server needs more memory, it queries the operating system to determine if free physical memory is available and uses the available memory. If SQL Server does not need the memory currently allocated to it, it releases the memory to the operating system. However, dynamically using memory can be overridden using the min server memory, max server memory, and set working set size server configuration options. For more information, see "Server Memory Options" in this volume.
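The server memory options mentioned above are set through sp_configure. A sketch, assuming sysadmin permissions; the values shown (in megabytes) are illustrative and should be chosen for the memory actually installed in the computer:

```sql
-- min server memory and max server memory are advanced options.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO

-- Keep at least 32 MB allocated, and never take more than 128 MB,
-- instead of managing memory fully dynamically (illustrative values).
EXEC sp_configure 'min server memory', 32
EXEC sp_configure 'max server memory', 128
RECONFIGURE
GO
```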

To monitor the amount of memory being used by SQL Server, examine the following performance counters:

  • Process: Working Set 

  • SQL Server: Buffer Manager: Buffer Cache Hit Ratio 

  • SQL Server: Buffer Manager: Free Buffers 

  • SQL Server: Memory Manager: Total Server Memory (KB) 

The Working Set counter shows the amount of physical memory used by a process. If this number is consistently below the amount of memory SQL Server is configured to use (set by the min server memory and max server memory server options), SQL Server is configured for more memory than it needs. You can also fix the size of the working set by using the set working set size server option. For more information, see "set working set size Option" in this volume.

The Buffer Cache Hit Ratio counter is application specific; however, a rate of 90 percent or higher is desirable. Add more memory until the value is consistently greater than 90 percent, indicating that more than 90 percent of all requests for data were satisfied from the data cache.

If the value for the Free Buffers counter is consistently low, it may indicate that more memory is required.

If the Total Server Memory (KB) counter is consistently high compared to the amount of physical memory in the computer, it may indicate that more memory is required.
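The SQL Server counters above can also be read from Transact-SQL through the master.dbo.sysperfinfo system table, which exposes the same performance objects that Windows NT Performance Monitor displays. An illustrative query; the object name is matched with LIKE because the exact prefix stored in the table differs from the name shown in Performance Monitor:

```sql
-- Read buffer-manager counters directly from the system table
-- that backs the Performance Monitor counters.
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Buffer Manager%'
```

Note that ratio counters such as Buffer Cache Hit Ratio are stored as raw values in this table; Performance Monitor computes the displayed percentage from them.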

Setting Up an SQL Server Database Alert

Using Windows NT Performance Monitor, you can create an alert that is raised when a Windows NT Performance Monitor counter reaches a threshold value. In response to the alert, Windows NT Performance Monitor can launch an application, such as a custom application written to handle the alert condition. For example, you could create an alert that is raised when the number of deadlocks exceeds a specific value.

Alerts also can be defined using SQL Server Enterprise Manager and SQL Server Agent. For more information, see "Defining Alerts" in this volume.

For more information about creating alerts using Windows NT Performance Monitor, see your Windows NT documentation.

Monitoring with SQL Server Enterprise Manager

Using SQL Server Enterprise Manager, you can view the following information about current Microsoft SQL Server activity:

  • Current user connections and locks. 

  • Process number, status, locks, and commands that the active users are running. 

  • Objects that are locked, and the kinds of locks that are present. 

If you are a system administrator, you can view additional information about a selected process, send a message to a user that is currently connected to a computer running SQL Server, or terminate a selected process.

Use the current activity window in SQL Server Enterprise Manager to perform ad hoc monitoring of a computer running SQL Server. This allows you to determine, at a glance, the volume and general types of activity on the system, for example:

  • Current blocked and blocking transactions. 

  • Currently connected users on SQL Server and the last statement executed. 

  • Locks that are in effect. 

SQL Server activity can also be monitored using the sp_who and sp_lock system stored procedures.
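For example, the same current-activity information can be retrieved from any query tool; the server process ID 52 passed to sp_who is illustrative:

```sql
-- List all current connections, their status, and the commands
-- they are running.
EXEC sp_who
GO

-- Report on a single process by its server process ID (spid).
EXEC sp_who 52
GO

-- List all locks currently held, by spid, database, and object.
EXEC sp_lock
GO
```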

Monitoring the Error Logs

Microsoft SQL Server logs events to the SQL Server error log and the Microsoft Windows NT application log (although only certain system events and user-defined events are written to the Windows NT application log).

The SQL Server error log contains more complete information about events in SQL Server. You can use the information in the error log to troubleshoot problems related to SQL Server.

The Windows NT application log provides an overall picture of events that occur on the Windows NT system, as well as events in SQL Server and SQL Server Agent. The Windows NT Event Viewer, used to display the Windows NT application log, allows information that is displayed in this log to be filtered. For example, you can filter event types, such as Information, Warning, Error, Success Audit, and Failure Audit.

Both logs automatically timestamp all recorded events.

Comparing Error and Application Log Output 

You can use both the SQL Server error log and the Windows NT application log to track down the root cause of problems. For example, while monitoring the SQL Server error log, you may detect a certain set of messages for which you do not know the cause. By comparing the dates and times for events between the SQL Server error log and the Windows NT application log, you can narrow the list of probable causes.

Viewing the SQL Server Error Log

The Microsoft SQL Server error log can be viewed to ensure that processes have completed successfully (such as backup and restore operations, batch commands, or other scripts and processes) and to detect any current or potential problem areas, including automatic recovery messages (particularly if SQL Server has been stopped and restarted), kernel messages, and so on.

The SQL Server error log can be viewed using SQL Server Enterprise Manager or any text editor. By default, the error log is located at \Mssql7\Log\Errorlog.

A new error log is created each time the computer running SQL Server is started, although the sp_cycle_errorlog system stored procedure can be used to cycle the error log files without having to restart the computer running SQL Server. Typically, SQL Server retains backups of the previous six logs and gives the most recent log backup the extension .1, the second most recent the extension .2, and so on. The current error log has no extension.
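For example, to start a new error log without restarting the server:

```sql
-- Close the current error log and start a new one; the previous
-- log becomes Errorlog.1, Errorlog.1 becomes Errorlog.2, and so on.
EXEC sp_cycle_errorlog
```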

Viewing the Windows NT Application Log

When Microsoft SQL Server is configured to use the Microsoft Windows NT application log, each SQL Server session writes new events to that log. The Windows NT application log is a special file that can be viewed and managed only by using Windows NT Event Viewer. Unlike the SQL Server error log, a new application log is not created each time you start the computer running SQL Server.

There are three logs used by Windows NT that can be viewed with the Windows NT Event Viewer:

  • System log. Records events logged by the Windows NT system components. For example, the failure of a driver or other system component to load during startup is recorded in the system log. 

  • Security log. Records security events, such as failed login attempts. This helps track changes to the security system and identify possible breaches to security. For example, attempts to log on to the system may be recorded in the security log, depending on the audit settings in the User Manager. Only users with Windows NT administrator rights can view the security log. 

  • Application log. Records events that are logged by applications. For example, a database application might record a file error in the application log. 

For more information about using Windows NT Event Viewer, managing the application log, and understanding the information it presents, see the online Help for Windows NT Event Viewer, or see your Windows NT documentation.

Monitoring with Transact-SQL Statements

Microsoft SQL Server provides several Transact-SQL statements and system stored procedures that allow ad hoc monitoring of a computer running SQL Server. Use these statements when you want to determine, at a glance, information about server performance and activity. The most common information monitored using Transact-SQL statements includes:

  • Current locks. 

  • Current user activity. 

  • Last command batch submitted by a user. 

  • Data space used by a table or database. 

  • Space used by a transaction log. 

  • Oldest active transaction (including replicated transactions) in the database. 

  • Performance information relating to I/O, memory, and network throughput. 

  • Procedure cache usage. 

  • General statistics about SQL Server activity and usage, such as the amount of time the CPU has been performing SQL Server operations or the amount of time SQL Server has spent performing I/O operations. 
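Some of the statements and system stored procedures that report this information are sketched below. All are documented SQL Server features; the table name authors and database name pubs refer to the pubs sample database and are placeholders for your own objects:

```sql
-- Data space used by a table, or by the current database
-- when no object name is given.
EXEC sp_spaceused 'authors'
EXEC sp_spaceused
GO

-- Space used by the transaction log of every database.
DBCC SQLPERF(LOGSPACE)
GO

-- Oldest active transaction in a database.
DBCC OPENTRAN('pubs')
GO

-- General statistics: CPU time, I/O, and network activity
-- since the last time sp_monitor was run.
EXEC sp_monitor
GO
```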

Most of this information can also be monitored using either SQL Server Enterprise Manager, SQL-DMO, or Windows NT Performance Monitor.

See Also 

In This Volume 

Monitoring with SQL Server Enterprise Manager

Monitoring with Windows NT Performance Monitor
