Using SQL Profiler

Updated: June 9, 2001

The new utility lets you monitor, analyze, and tune SQL Server

Paul Burke

Article from SQL Server Magazine https://www.sqlmag.com/

The SQL Profiler in SQL Server 7.0 is powerful and easy to use. SQL Profiler is based on the SQL Server 6.x SQL Trace utility (for details on this utility, see the sidebar "An Overview of SQL Trace"). This article explains how you can use SQL Profiler to monitor, analyze, and tune SQL Server. I cover the differences in functionality between SQL Trace and SQL Profiler and how to use the Create Trace Wizard.

The revised architecture of SQL Server 7.0 features an integrated event model, in which various subcomponents of SQL Server (the query processor, Open Data Services—ODS, log manager, lock manager, the error log, etc.) act as event producers, initiating events when specific activities occur. An event controller centrally manages these events. Event consumers handle these events in various ways. SQL Profiler can act as an event consumer, reporting on these events and giving detailed server activity information. (For more information about unique terminology, see "SQL Profiler Terms".) SQL Profiler can trace about 60 engine events, and you can define more. Here are some basic exercises you can do to become familiar with SQL Profiler's functionality.


Getting Started with SQL Profiler

After you start SQL Profiler, you need to register the SQL Server you want to profile. You must connect to SQL Server as a member of the sysadmin role. You can either use the credentials of the NT account you're logged in as (Windows NT authentication) or supply a valid login name and password for the server you're connecting to (SQL Server authentication). NT authentication means one less password to remember and lets NT service-based applications connect without hard-coding a password or leaving it lying around somewhere. If you choose SQL Server authentication, enable Always prompt for username and password. Screen 1 shows the dialog box for connecting to the server.

Screen 1: Registering SQL Server properties


After you register the SQL Server, you can run a sample trace supplied with SQL Profiler, use the Create Trace Wizard, or create a new trace. You can study these options to choose the best one for your purpose. SQL Profiler comes with several sample trace files that you can inspect, edit or use as-is to generate meaningful performance and activity information.

Each trace that you execute contains information on what you're trying to profile, how you want the results presented, and what data you want the trace to include. This trace information—events, data columns, and filters—describes what a SQL Profiler trace captures. Events are incidents that cause SQL Profiler to display an entry. Data columns determine what the trace stores in an entry. Filters determine the criteria (other than event occurrence) that an event must meet before SQL Profiler makes an entry. (Refer to SQL Server Books Online—BOL—for an extended description of event groups, events, etc.). In the samples in this article, I've used only the filter that prevents the display of SQL Profiler activity. Usually, the only activity you won't be interested in when you profile a SQL Server is the information that the act of profiling generates. (However, you might want to know whether other people are profiling a server.) You can execute the following sample traces for a better understanding of SQL Profiler.

Sample 1: Transact SQL (T-SQL)

This sample trace will generate information that's familiar to SQL Trace users. First, open the trace definition for Sample 1 – T-SQL. (On the SQL Profiler menu, select File, Open, Trace Definition, or click the Edit Trace Properties button; then select Sample 1 – T-SQL in the Trace Name drop-down list.) This basic trace doesn't show any information that a SQL Server 6.5 DBA couldn't get with SQL Trace, but its minimal output makes it easy to experiment with.

This sample trace's definition includes two event classes, Session (Connect, Disconnect, and ExistingConnection) and T-SQL (RPC:Starting and SQL:BatchStarting), and four data columns: Application Name, Text, Event Class, and Start Time. The three Session events capture all SQL Server connections and disconnections, including current connections. The two T-SQL events capture SQL calls sent to the server as a set of SQL statements (an SQL batch) or a remote procedure call (RPC).

Sample 2: T-SQL (Grouped)

To display data in a useful format, you can use a GROUP BY function with SQL Profiler to separate data into groups by, for example, application name, NT user name, SQL Server user name, or ConnectionID. In contrast, SQL Trace lets you trace only by individual connection, which SQL Trace displays in a separate window. This SQL Trace function is useful, but it doesn't let you split a user's activity among several windows when an application has more than one connection. And SQL Trace won't accommodate other ways of separating activity, such as by application or by user.

Sample 3: Stored Procedure Counts

This simple trace captures only one event, SP:Starting. However, it lets you generate a simple profile of how often users call a specific stored procedure so you can apply relevant performance improvements.

Sample 4: T-SQL + Stored Procedure Steps

This sample demonstrates one of the most useful and exciting features of profiling with SQL Server 7.0. You can use this sample to debug any large stored procedure that contains multiple paths. You can examine which SQL statement within a stored procedure is executed to trap logic flaws or unanticipated conditions. In addition, you can use this trace to investigate performance variation in a given stored procedure. Using SQL Profiler to debug stored procedures is a big improvement over previous options. For example, if an execution-path problem occurs one time in 10,000 in a large stored procedure with several possible execution paths, you could use SQL Server integrated debugging to correct the problem. However, this method is time-consuming and intrusive to other system users. Or you could wade through the stored procedure to determine the problem. This option, too, is time-consuming. But with SQL Profiler, you can run the stored procedure with a trace active to analyze the problem. After you verify the incorrect behavior, you can examine the trace for a specific time, user, and workstation to obtain the set of SQL statements running within this stored procedure. The definition of this trace is similar to that of Sample 3, with the addition of SP:StmtStarting, which indicates that a statement within a stored procedure is starting.
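
To make this concrete, here's a minimal, self-contained sketch; the OrderAudit table, the usp_ProcessOrder procedure, and its branching rules are invented purely for illustration. Run the procedure while a trace that includes SP:StmtStarting is active, and SQL Profiler records exactly which INSERT each call executed, so a rarely taken path is easy to spot.

     -- Illustrative objects only: a procedure with three possible execution paths.
     -- With SP:StmtStarting in the trace, SQL Profiler shows which statement ran.
     CREATE TABLE OrderAudit (OrderId int, Note varchar(50))
     GO
     CREATE PROCEDURE usp_ProcessOrder @OrderId int, @Rush bit = 0
     AS
     IF @Rush = 1
         INSERT OrderAudit VALUES (@OrderId, 'rush path')       -- path A
     ELSE IF @OrderId % 1000 = 0
         INSERT OrderAudit VALUES (@OrderId, 'rare path')       -- path B (1 call in 1,000)
     ELSE
         INSERT OrderAudit VALUES (@OrderId, 'normal path')     -- path C
     GO
     -- Exercise the procedure while the trace is running:
     EXEC usp_ProcessOrder @OrderId = 2000                      -- takes the rare path B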

Sample 5: T-SQL by Duration

Useful for performance purposes, this sample provides valuable timing information for T-SQL statements. This sample is the basis for the wizard-generated trace for slow-running queries outlined later in this article.

Sample 6: T-SQL for Replay

This interesting sample lets you capture T-SQL events for replay. This feature in SQL 7.0 is powerful for performance tuning and optimization, debugging, or tinkering.

The six samples that SQL Profiler provides demonstrate only about one-tenth of its capabilities, but even these six sample traces give you greater power and flexibility in troubleshooting and performance tuning than SQL Trace does. You can copy and modify these traces (I recommend that you preserve the sample traces as templates) to handle commonly asked questions such as what a stored procedure is doing, why a query is taking so long, and what is being called.

One function that these samples don't cover is lock monitoring. Although you can do lock monitoring in SQL 6.5 with trace flags, you can do it more easily in SQL 7.0 with SQL Profiler.
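
For comparison, a sketch of the 6.5-style approach follows; it relies on the standard deadlock trace flags (1204 writes deadlock information, 3605 redirects trace output to the error log), which you enable from a sysadmin connection and then read the error log by hand. SQL Profiler replaces this with the Lock:Deadlock and Lock:Deadlock Chain events shown in Table 1.

     -- The 6.5-era trace-flag approach to deadlock monitoring (a sketch):
     DBCC TRACEON (1204)    -- report deadlock participants and resources
     DBCC TRACEON (3605)    -- send trace output to the error log
     -- ...reproduce the deadlock, then inspect the SQL Server error log...
     DBCC TRACEOFF (1204)
     DBCC TRACEOFF (3605)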

Using the Create Trace Wizard

The Create Trace Wizard features six common scenarios: finding the worst-performing queries, identifying scans of large tables, identifying the cause of a deadlock, profiling stored procedure performance, tracing T-SQL activity by application, and tracing T-SQL activity by user. You can modify one of the sample filters to easily create the last two scenarios.

To create a trace, select Tools, Create Trace Wizard. An information screen will tell you that the steps involved in using the Create Trace Wizard are identifying the problem, specifying the filter conditions, and completing the trace definition. The next screen will ask you to select a server and one of the predefined scenarios. The next screen will ask you to complete a scenario-specific filter definition section.

Table 1 shows three wizard-generated sample traces. You can use these traces as they are or use the Trace Properties dialog box to modify them. To modify an existing trace or to create a new trace, you need to become familiar with the Trace Properties box. You can choose File, then Properties or use the Trace Properties button to open the dialog box.

Table 1 Three Wizard-Generated Sample Traces

Worst-performing queries
Parameters: database to cover, applications to trap, and minimum duration of query to capture
Events: RPC:Completed; SQL:BatchCompleted
Data columns: grouped by Duration; columns are Event Class, Text, CPU, Application Name, SQL User Name, and NT User Name
Filters: Duration, minimum 1000

Identifying the cause of a deadlock
Parameter: all databases or a specific database to check for deadlocks
Events: RPC:Starting; SQL:BatchStarting; Lock:Deadlock; Lock:Deadlock Chain
Data columns: grouped by Event Class; columns are Object ID, Event Sub Class, Connection ID, SPID, Text, Server Name, Database ID, Index ID, Application Name, SQL User Name, Start Time, Integer Data, and Binary Data
Filters: none

Profiling stored procedure performance for a database
Parameters: all databases or a specific database, and all stored procedures or a specific stored procedure
Events: SQL:BatchCompleted; SQL:BatchStarting; SP:Starting; SP:Completed; SP:StmtStarting; SP:StmtCompleted
Data columns: grouped by Connection ID; columns are Event Class, Object ID, Text, Transaction ID, Duration, CPU, Reads, Writes, Start Time, End Time, Application Name, Server Name, Database ID, and SPID
Filters: Database ID, value 6

Screen 2: Viewing the options on the General Tab of the Trace Properties dialog box


Screen 2 shows the General tab of this dialog box. SQL Trace users will recognize most of the options it presents. The Trace Type (Shared or Private) determines who can use the specified trace and where its definition is stored: in the traced SQL Server's Registry under \LocalMachine for Shared traces, or on the local machine under \CurrentUser for Private traces. Choosing the trace type is therefore how you choose the location. Shared traces require access to modify the server's Registry; Private traces have no such requirement. You can use the Import/Export trace options to transfer these traces as trace-definition files. The Capture to Table option puts trace output in a SQL Server table for later analysis or replay. You can specify the table's server, database, owner, and name.

Screen 3: Selecting SQL Server events to trace


Screen 3 shows the SQL Profiler Events tab, which has many options not available with SQL Trace. The tab lists events by event group, and you can add events individually or by class. BOL documents event classes and events. You won't need all events in all situations; both the sample traces and the wizard-generated traces use a limited subset, and I recommend you do the same in any traces you define. You can double-click an event group to add or remove it. In addition, you can choose Tools, Options to display all event classes and data columns or only the commonly used ones.

Screen 4: Choosing the data columns


Screen 4 shows the Data Columns tab. If you select columns carefully, SQL Server will generate meaningful summary data. The Groups facility lets you choose data analysis by application, type of operation, etc. The available data columns depend on the events you selected on the previous tab. Refer to BOL for a full list of data columns and where they are relevant.

Screen 5: Specifying the criteria for determining which events to capture


Screen 5 shows the Filters tab. If you don't set a filter, then SQL Server will return all requested events. If you set a filter, SQL Server will return only events that also meet the criteria you specified. You can choose from three types of filters. Name filters include and exclude specified textual values. Range filters can accept a maximum and minimum value for reporting. ID filters screen for specified IDs. Note that you can specify only one value for ID filters. For specific examples of creating a trace, see the sidebar "Creating a Trace."

More Power, More Options

SQL Profiler in SQL Server 7.0 is powerful and easy to use. Its wizards and samples give you a jump start for gathering useful data about server performance.

SQL Profiler Terms

Events are occurrences on a server running SQL Server.

Event consumers request event information from the event controller.

Event controller manages events raised by producers and handles requests for event information from event consumers.

Event groups are collections of events that relate to the same area of SQL Server (objects, deadlocks, etc.).

Event producers are components of SQL Server that raise events with the event controller.

An Overview of SQL Trace

Microsoft supplied the SQL Trace utility with SQL Server 6.x and replaced it with SQL Profiler in SQL Server 7.0. You can use SQL Trace, an Open Data Services (ODS) sniffer program, to monitor and record SQL Server 6.x database activity and troubleshoot 6.x systems. For example, you can capture five activity types (connections, SQL statements, remote procedure calls, attentions, and disconnections) within SQL Server 6.x. You can save generated traces as a trace file or an SQL script and apply five filters (login name, application, hostname, remote procedure call filter, and SQL statement filter).

Because SQL Trace is external to the SQL Server architecture, it has limited reporting capability. SQL Trace is an intrusive monitoring program that can report on only high-level events. For example, SQL Trace can report that a user is calling a certain stored procedure. If stored procedures contain many conditional statements, the same stored procedure (depending on the system state at that time and parameters passed to it) can perform wildly different actions. You cannot use SQL Trace to determine what the stored procedures will do. In addition, if you run SQL Trace on a severely stressed server, you might bring the server down.

SQL Server stores trace definitions in the Registry. If the user has appropriate permissions, the definition is stored in the Registry of the SQL Server being traced; if not, it is stored in the Registry of the machine performing the trace. The location is not negotiable. If many developers or database administrators (DBAs) define traces, one server (typically a development server) could hold many trace definitions.

Although you can still use the old stored procedure xp_trace (a stored procedure for controlling trace activity on a server), SQL Server Books Online (BOL) states that this stored procedure is "for backward compatibility only and may not be supported in future versions." In a future article, I'll tell you about a richer mechanism you can use for profiling that involves about 65 stored procedures, but this method of controlling profiles is beyond the scope of this article.

Creating a Trace

The options for creating a trace seem endless. However, if you start with the provided samples and wizard-generated traces, you can quickly create custom traces to investigate database problems.

Creating a table that contains an SQL script for replay. If you examine the first sample (Sample 1: T-SQL), you'll see that it includes no filters. It's grouped on application name and event class, with Text and StartTime as data columns. If you modify the trace to write to a table within a database and then execute sp_help <tablename>, you'll see the results shown in Table A. The table has no indexes.

The large size of the TextData field (1,073,741,823 characters) lets you capture all the information you'll need. The length given in Table A is the size of the pointer to this data. If you plan to use this table to log large amounts of information, you can generate summary data and index the table after you finish writing to it.
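
As a sketch of that follow-up work (the table name Sample1Trace is an assumption; substitute whatever name you chose in the Capture to Table option, with the columns shown in Table A):

     -- Summarize captured activity per application.
     SELECT Application, COUNT(*) AS Entries
     FROM Sample1Trace
     GROUP BY Application
     ORDER BY Entries DESC

     -- After the trace has finished writing, index the table to speed later analysis.
     CREATE INDEX ix_Sample1Trace_App ON Sample1Trace (Application, StartTime)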

Also, you can load the table into SQL Profiler to replay it. Select File, Open, Trace Table. Then complete the dialog box information for Server, Database, Owner, and Table. After you load a valid SQL Profiler trace table into SQL Profiler, you can send output to a file for later examination. You can also replay the table against a server other than the one it was originally created for.

Determining why a certain stored procedure is slow. Let's say that as you're testing a new version of an application, you realize one part of the system is dramatically slower than you expected. The problem is that several developers made improvements to this part of the system. Although all programmers assured you that they had tested their changes in isolation with no noticeable effect on performance, the tester reported regular timeouts.

First, you need to determine which stored procedure is running slowly. SQL Profiler's wizard-generated stored procedure performance trace is ideal for this task. Set a minimum value for duration, similar to the filter defined for slow-running queries, and identify the slow-running stored procedure. Then you can run the stored procedure in the SQL Query Analyzer to determine the execution plan, or run the sample trace provided for T-SQL and stored procedures. Let's say that in this example, you determine that developers inserted many additional, independently written conditional statements throughout the stored procedure, which caused updates to a table with several large triggers attached. You can then consolidate the SQL statements to return to the previous performance level.
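
For the Query Analyzer step, a minimal sketch follows; usp_SaveOrder and its parameter are placeholder names for whatever procedure the trace identified. SET SHOWPLAN_TEXT ON returns the estimated execution plan instead of running the statement, so you can inspect the plan without touching the data.

     -- Display the execution plan for the suspect procedure without executing it.
     SET SHOWPLAN_TEXT ON
     GO
     EXEC usp_SaveOrder @OrderId = 1234
     GO
     SET SHOWPLAN_TEXT OFF
     GO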

Profiling stored-procedure performance, variation, and frequency of execution. You can create a trace that features only the SP:Completed event to generate a profile of how often each stored procedure is called within the system. In addition, you can save this frequency information to a table. Also, you can designate ObjectID as a featured data column, which will be quicker than using the text field for analysis purposes. Then you can determine, with simple queries, the answers to such questions as:

  1. "What stored procedures are never called?"

    • select name from sysobjects where type = 'p'

    and id not in (select objectid from spinfo)

    order by name

  2. "What stored procedures are called the most?"

    • select name from sysobjects join (select top

    30 count(*) freq,objectid from spinfo group by

    objectid order by freq Desc) as tblmax on

    tblmax.objectid = id

  3. "What stored procedure takes longest across all executions to run?"

    • select name from sysobjects join (select top 1

    sum(duration) dur,objectid from spinfo group

    by objectid order by dur Desc) as tblmax on

    tblmax.objectid = id

Don't query system tables directly because Microsoft doesn't guarantee that the table format will remain constant. The queries above are for informational purposes.
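
If you prefer not to reference sysobjects at all, the built-in OBJECT_NAME() function answers the same questions; for example, the call-frequency query could be written as the sketch below. Run it in the database the trace was captured against, because OBJECT_NAME() resolves IDs in the current database only.

     -- Call frequency per stored procedure, without joining sysobjects.
     -- spinfo is the capture table used in the queries above.
     SELECT OBJECT_NAME(objectid) AS ProcedureName, COUNT(*) AS Calls
     FROM spinfo
     GROUP BY objectid
     ORDER BY Calls DESC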

Table A

Column_name    Type       Length
EventClass     decimal
TextData       ntext      16
Application    nvarchar   256
StartTime      datetime   8