Designing Efficient Applications for Microsoft SQL Server 

 

Peter Hussey
Microsoft Corporation

August 1997

Introduction

Designing a Microsoft® SQL Server™ application for optimal performance can seem a daunting challenge. There are so many choices to make—development tools, database design, application structure, query design, choice of interface—and the "right" choices in each of these areas depend on your unique application requirements and on the skills your development team brings to the project. But you don't have time and resources to try out several different approaches to see which one works best for your situation. You don't even have time to read up on all of the different options to make a fully informed decision. So how do you get started?

The good news is that there are many paths to great performing SQL Server applications. You can take any one of a number of different approaches and end up with an application that performs as well as you need it to. Even so, knowing a few basic principles and trade-offs of SQL Server development will help you tremendously as you begin. This paper conveys these basic principles and trade-offs. If you are developing a new SQL Server application or redesigning an existing one, this paper is for you.

One of the first decisions you need to make is where to locate the running code. Should most of the work be done on the client? In the database? On a separate server? Wrapped up in this question is a choice of development tools. Tools such as language compilers or interpreters, debuggers, and run-time deployment platforms often restrict the development model choices. Therefore, you should address decisions about tools and the development model at the same time. This paper introduces the logical three-tier development model and then describes four physical implementations of that model. This is followed by a discussion of database access styles, choosing and optimizing your interface, and tips for leveraging built-in SQL Server features.

Three-Tier Application Development

Logical Three-Tier Model

The logical three-tier model divides an application into three logical components.

Figure 1. The three-tier model

  • Data services

    These services join records and maintain database integrity—for example, constraints on valid values for a customer number and an enforced foreign-key relationship between the customer table and the orders table.

  • Business services

    These services apply business rules and logic—for example, adding a customer order and checking a customer's credit availability.

  • Presentation services

    These services establish the user interface and handle user input—for example, code to display available part numbers and orders for a selected customer.

When deploying an application, there are many ways you can arrange these three logical layers on physical machines. The following sections describe four physical implementations of the logical three-tier model:

  • Physical two-tier implementation with fat clients
  • Physical two-tier implementation with a fat server
  • Physical three-tier implementation
  • Internet implementation

Physical two-tier implementation with fat clients

A common method for deploying an application is a physical two-tier implementation with fat clients, where the business logic and presentation services all run on the client. In this implementation, the server acts only as a SQL Server database. Most applications written today using the Microsoft Visual Basic® or PowerBuilder programming systems are examples of this model.

Figure 2. Two-tier implementation (fat client)

A new option in this implementation is the ability to do OLE packaging of business rules for improved reuse. For example, using Visual Basic version 4.0 or later you can code business rules into an OLE object that you can call from another Visual Basic application. This allows you to physically separate business rules from your presentation logic in the code base. If both the user interface application and the business object run at the client, it is still a physical two-tier implementation. Separating the code, however, makes it easy to move to the physical three-tier implementation described later in this paper.

A primary advantage of this fat client implementation is that the tools that support it are powerful and well established. As of this writing, PowerBuilder is in its fifth major version, and Visual Basic is about to be released in its fifth major version. A disadvantage of this implementation is that deploying the business services at the client generally means more network traffic because the data has to be moved to the client to make the decisions coded in the business logic. On the other hand, the client computer is a good place to store "state" information associated with the user, such as the primary key of the record the user is currently viewing.

Physical two-tier implementation with a fat server

In a physical two-tier implementation with a fat server, business services are moved into the server database, while presentation services remain on the client. In this implementation, business logic is generally written as stored procedures and triggers within the database. For example, in the TPC-C benchmarks published for Microsoft SQL Server, the core transaction logic is coded as Transact–SQL stored procedures in the server. Many internally developed corporate applications also make extensive use of stored procedure logic. Microsoft uses this implementation to handle internal business functions, such as customer information tracking.

Figure 3. Two-tier implementation (fat server)

The major new development in this implementation is the availability of a Transact–SQL debugger. This debugger is integrated into the Enterprise Editions of both Microsoft Visual C++® version 4.2 and later and Visual Basic version 5.0. This debugger makes it possible to step through Transact–SQL code, set breakpoints, and view local variables.

The major advantage of this fat server implementation is performance. The business logic runs in the same process space as the data access code and is tightly integrated into the data searching engine of SQL Server. This means data does not have to be moved or copied before it is operated on, which results in minimal network traffic and the fewest possible network roundtrips between client and server. The published TPC-C benchmarks from Microsoft Corporation and other major database vendors all use this implementation. In the SQL Server TPC-C implementation, each of the five measured transactions is performed in a single roundtrip from client to server because all of the logic of the transaction takes place in a Transact–SQL stored procedure.

The main disadvantage of this implementation is that it limits your choice of development tools. Stored procedures are written in the language supported by the database. SQL Server supports calls from the server to code written in languages other than Transact–SQL, but this option adds complexity and is generally not as efficient as the same functionality written in Transact–SQL.

Physical three-tier implementation

The physical three-tier implementation is one of many implementations of the logical three-tier model. Commonly referred to as the "three-tier model," it is often incorrectly thought of as the only physical implementation of the logical three-tier model. In this implementation, business logic runs in a separate process that can be configured to run on the same server or a different server from the server the database is running on. The key distinction of the physical three-tier implementation is that there is a cross-process boundary, if not a cross-computer boundary, between data services and business services, and another cross-process or cross-computer boundary between business services and presentation services. SAP's R/3 application suite is a physical three-tier implementation, as are many of the large financial and line-of-business packages from other vendors. Transaction processing monitor products such as Encina or Tuxedo also use this implementation.

Figure 4. Three-tier implementation

A major new option for using this implementation is Microsoft Transaction Server. Transaction Server can host business services written in any language that can produce OLE objects. Transaction Server manages the middle layer and provides many of the run-time services that would otherwise have to be built for a physical three-tier implementation. For example, Transaction Server provides a mechanism for reusing object instances among multiple users.

The physical three-tier implementation offers the advantage of database independence. Most physical three-tier implementations access several databases. These applications generally treat databases as standardized SQL engines and make limited use of database-specific features.

Some variations of the physical three-tier implementations also offer language independence. Microsoft Transaction Server, for example, supports any language that can produce OLE/COM in-process objects, including Visual C++, Visual Basic, and Micro Focus COBOL. Any of these languages can be used to write business logic that is then hosted at run time by the Transaction Server. SAP's application, on the other hand, does not offer language independence—all application code developed in R/3 is written in their language called Advanced Business Application Programming (ABAP).

In some cases, the physical three-tier implementation is more scalable than other physical implementations. If the business logic code consumes a great deal of processor time or physical memory, it can be advantageous to locate those business processes on one or more servers separate from the database to avoid contention for resources. This potential scalability gain is offset by the additional cost of moving data across the network to the middle-tier servers, so it is not a win in all cases. Physical three-tier applications can also potentially access partitioned databases on multiple computers, giving an additional dimension of scalability. Partitioning the database, however, introduces enormous complexities into the application and is not a widespread practice in the industry today.

A disadvantage of the physical three-tier implementation is that it tends to require more management. Also, while the physical three-tier implementation can offer the capability to employ more physical computers on an application, it generally does not offer as compelling a price/performance ratio as an application whose logic is implemented in stored procedures.

Internet implementation

The Internet has introduced a new twist on the logical three-tier model: the ability to split the presentation services onto a browser client and a Web server. The Web server is actually responsible for formatting the pages that the user sees. The browser is responsible for displaying these pages and for downloading any additional code the pages may need. Between the Web server and the database, the choices remain the same for locating the business services logic.

A common Internet implementation is to run both business and presentation services at the Web server. In some products, the business logic can run in the Web server's process space, thus avoiding the overhead of crossing an additional process boundary. An example of a product that uses this implementation for database applications is Microsoft Internet Database Connector (IDC), which is part of the Microsoft Internet Information Server (IIS) in the Microsoft Windows NT® operating system. IDC connects to any ODBC data source, including SQL Server, retrieves data, and formats the data into an HTML page that is sent immediately to a browser client.

Figure 5. Internet implementation

There are many newly released products that support Internet implementations of database applications. For example, IIS version 3.0 allows developers to write business and presentation services in Visual Basic Script and includes the ability to load and invoke an OLE Automation object. Also, Microsoft ActiveX™ controls offer a way to run more of the presentation services and possibly the business services from the browser client. These extensions to Internet technologies give more flexibility for where you can deploy the logical three tiers of a database application written for browser clients.

One key advantage of Internet implementations is that anybody who has a browser client can access these applications. With little or no additional development effort, an application can be accessed simultaneously from the Microsoft Windows® operating system version 3.1, Windows 95, Windows NT, Apple® Macintosh®, OS/2, and UNIX clients. All of the client functionality required is provided by standard Web browsers. Ease of management is another key advantage of an Internet implementation. In an Internet application, an update to the Web server automatically updates all clients. Managing Web page code at a few servers is easier than managing application versions at many clients.

The basic Internet implementation today (for example, using IIS and IDC and putting business services at the Web server layer) is not a high-volume online transaction processing (OLTP) solution. But it is important to note that the application implementations discussed in this paper can be mixed to combine their advantages. For example, an implementation that keeps an application's business services in stored procedures and handles presentation services at the Web server can be very efficient. In fact, Microsoft's latest TPC-C benchmarks use IIS to handle browser clients, as opposed to using the alternatives (see note below). So an Internet-style application can be used for high-volume OLTP if business services are executed as stored procedures in the database.

Note The TPC-C benchmark requires fixed-function workstations (terminals) for end users. Most TPC-C benchmarks from other database vendors use Telnet-like servers running on UNIX to support these terminals. IIS running on Windows NT has proven to be more cost-effective and more efficient at managing clients for SQL Server benchmarks.

Choosing an Implementation

Your application requirements will determine which physical implementation of the logical three-tier model you choose. The key requirements to consider are:

  • Performance and scalability

    If your throughput requirements are high and optimum price/performance is the goal, an implementation that uses business logic in stored procedures may be called for.

    If your business services are resource intensive and the ability to apply many servers to the application is the goal, a physical three-tier implementation may be best.

    On the other hand, PC hardware has become so powerful and cost effective that your application performance requirements can be satisfied easily by any one of these implementations.

  • Client platform and access

    If a variety of client platforms must have access to your application, an Internet implementation is compelling.

  • Developer skills, especially skills in a particular language

    If you have developer skills or existing code in a particular language, the cost of choosing an implementation supported by that language is significantly lower.

  • Administration

    Different implementations require different administrative overhead.

  • Database and/or tool independence

    Some implementations require an application to be oriented to a specific database or language.

All of these considerations affect the decision of how to physically implement a three-tier application. There is no one correct answer—the best course of action is to thoroughly understand the alternatives and the trade-offs before choosing an implementation.

When you choose a development tool and decide how to implement the logical three-tier model, you must also choose the interface by which your client application will communicate with SQL Server. The best interface to use depends on the development language and the type of application under development. The choices fall into three categories:

  • Call-level interfaces
  • Object interfaces
  • Embedded SQL

Choosing an Appropriate Interface

This section examines the interface choices and offers recommendations for using each interface.

Call-level Interfaces

A call-level interface offers a set of function calls or APIs that enable client applications to interact with a server database. Call-level interfaces usually use parameters specified as pointers to data input and output buffers owned by the application. Because of this reliance on pointers, call-level interfaces are almost always used from the C/C++ language. With some mapping code, these interfaces can be called from languages that lack pointer support, such as Visual Basic, but usually developers in these languages are more comfortable and productive using an object interface.

SQL Server offers two call-level interfaces:

  • Open Database Connectivity (ODBC)—an industry-standard, call-level interface
  • DB-Library—the original call-level interface that is specific to SQL Server

At a functional level, ODBC and DB-Library are similar interfaces. They both offer function calls to perform tasks such as opening a connection to SQL Server, executing an SQL statement, and retrieving data and metadata from SQL Server. They also have similar performance characteristics. While many database vendors support ODBC by using a mapping layer on top of their proprietary interface, this is not true for Microsoft SQL Server. For SQL Server, these two APIs are implemented at the same logical layer in the software architecture; both are "native" interfaces for SQL Server. (In fact, all current SQL Server TPC-C benchmarks are published using the ODBC interface, which is strong evidence of the performance of SQL Server's ODBC implementation.) Both APIs offer full access to the same feature sets, with minor exceptions. In particular, ODBC does not impose a generic, lowest-common-denominator approach to building applications. An ODBC application can be written to be compatible with all databases, tuned specifically for SQL Server, or anywhere in between.

ODBC is the recommended interface and offers the following advantages over DB-Library:

  • ODBC is easier to learn.

    DB-Library uses different API sets for similar functions that are implemented differently, such as retrieving data using a default result set (dbsqlexec, dbresults, dbnextrow) versus a server cursor (dbcursoropen, dbcursorfetch). ODBC implements these similar functions using the same APIs (SQLExecute, SQLExtendedFetch) and a simple statement option to distinguish a default result set from a server cursor. Because of these special-purpose function calls, DB-Library has many more APIs to learn than ODBC (150 versus 50).

  • The ODBC driver uses the performance features of SQL Server automatically.

    For example, SQL Server stored procedures can be executed using an efficient procedure call network format. DB-Library uses a separate set of APIs to send requests in the network format. ODBC uses the same APIs used for sending nonstored procedure requests and looks for the standard ODBC "call" syntax to trigger the use of this efficient network format.

  • ODBC is an industry-standard interface.

    The code and skills used in building a SQL Server application on ODBC can be leveraged to build applications for almost any other SQL database. Of course, code that uses SQL Server features not implemented in other ODBC drivers may have to be isolated from the common code base, but this is usually a small portion of the code.

These advantages present a strong case for developing new call-level applications using ODBC. If a company has existing DB-Library applications, there is no need to rewrite them to ODBC unless they are being revised to take advantage of ODBC features. DB-Library applications have excellent performance and will continue to be supported by Microsoft SQL Server for some time. DB-Library, however, will not generally receive feature enhancements in future releases of SQL Server.

Object Interfaces

Object interfaces offer a model of database programming "objects" (such as connections, SQL statements, and result sets) that can be created by your application and used to send and retrieve data from the database. You can use the objects by calling methods defined for the object (such as Execute) and by setting or getting properties on the object (such as CursorType).

Object interfaces vary widely in their level of abstraction, exposure of database features, and performance characteristics. They are also usually restricted to specific programming languages. Microsoft offers several object interfaces with overlapping functionality, including OLE DB, ActiveX Data Objects (ADO), Remote Data Objects (RDO), and Data Access Objects (DAO). Other vendors of database programming tools such as PowerBuilder or SQL Windows offer their own object interfaces as part of their tools.

With so many choices, it is difficult to sort out the features and decide which is best for your application. This paper does not compare Microsoft's development tools and object interfaces with third-party tools and their associated object models. Such a comparison is complex and highly dependent on factors such as programmer skills, application requirements, and vendor support. This paper does, however, present the major object interfaces provided by Microsoft Corporation and give some recommendations based on development language and application requirements.

OLE DB and ADO

In 1996, Microsoft released the OLE DB Software Development Kit (SDK), establishing a new Component Object Model (COM) interface to tabular (row and column) data providers. Like other parts of OLE, OLE DB defines an object hierarchy and a set of interfaces, each of which has defined methods for manipulating data. An implementation of an interface must support all of the methods defined for the interface, so that an application that uses ("consumes") the interface can rely on those methods being fully supported in that implementation. OLE DB abstracts the concept of tabular data so that SQL and non-SQL data sources alike can expose the same OLE DB interfaces for data retrieval and manipulation.

To access SQL data, you can view OLE DB as an object version of the ODBC API: a standard, high performance interface to a wide variety of data sources. In fact, an important implementation of OLE DB that ships with the OLE DB SDK supports ODBC data sources. This implementation, known as the ODBC Provider, makes any ODBC data source accessible to OLE DB consumers. The ODBC Provider provides performance and feature support comparable to using the ODBC call-level interface to these same data sources. The SQL Server ODBC driver has been thoroughly tuned and tested to work with the ODBC Provider. Thus, the ODBC Provider can be an excellent alternative for applications that need an OLE COM interface to general SQL Server data.

OLE DB offers some powerful capabilities, such as notifications, interface sharing, and OLE objects as column types, that make it possible to build data-aware OLE components that are difficult to build using the ODBC interface. So if you are building data-aware OLE components to be used by other applications, OLE DB is likely to be your best interface choice. Likewise, if you have your own data source to expose programmatic access to, you should look at OLE DB. For more details about the capabilities of OLE DB, see the white paper, "OLE DB for the ODBC Programmer."

Unlike most object interfaces to SQL Server, however, OLE DB does not make programming any easier than using a call-level interface. OLE DB uses pointer data types extensively, which makes it directly accessible only from C/C++. Writing an application to OLE DB also requires a great deal of interface creation and release code that is unrelated to the task of working with application data. Fortunately, the OLE DB SDK also includes a higher-level object interface called ActiveX Data Objects (ADO) that addresses these concerns.

ADO offers a higher-level object interface on top of OLE DB that avoids much of the programming work without sacrificing functionality or performance. ADO is a pointerless object interface that supports OLE Automation, which makes it callable both from C/C++ and from scripting languages like Microsoft JScript™ development software or Visual Basic Script. For these reasons, ADO is a more appropriate interface for most business applications, including SQL Server applications, that need access to OLE DB data sources.

ADO is the recommended and supported data interface for Internet applications written with the Microsoft Visual InterDev™ Web development system. These applications call ADO from Active Server Pages (ASPs), incorporating program logic written in Visual Basic Script or JScript. The OLE DB SDK also includes instructions for how to call ADO from Java or C/C++ programs that are authored outside of Visual InterDev. The ADO and OLE DB components shipped in the OLE DB SDK can be freely redistributed with applications that use them.

RDO and DAO

Microsoft Visual Basic ships with two object interfaces to data:

  • Remote Data Objects (RDO)
  • Data Access Objects (DAO)

RDO is an object interface that is closely tied to ODBC and optimized for accessing server databases. It exposes nearly all the functionality available in the SQL Server ODBC driver and makes this functionality easily accessible to Visual Basic programs. RDO supports binding visual controls, such as list boxes and grids, directly to SQL Server data, greatly reducing the amount of code that must be written to display data on the screen. RDO also makes the ODBC call-level interface "handles" available so that a Visual Basic program can use any features in an ODBC driver that were not incorporated in the RDO object model. Even with these many features, RDO imposes little performance overhead on top of ODBC and has a relatively small memory footprint. For these reasons, RDO makes an excellent choice for SQL Server access from Visual Basic programs.

The DAO interface is closely tied to and optimized for the Jet local database, which ships in Visual Basic and Microsoft Access. It offers similar syntax and many of the same features as RDO for getting directly to SQL Server data. For applications that only access SQL Server data, however, DAO offers less functionality and has a larger memory footprint, so it is not the recommended interface. But DAO offers features that RDO does not, such as efficient access to local Jet data and the ability to join data from multiple data sources in a single query. If your application requires these unique features, DAO is the right choice. Conversely, if your application does not involve local Jet data or heterogeneous queries, stick with RDO.

Microsoft plans to develop ADO into the successor of both RDO and DAO. Over time, ADO will become a superset of most of the unique functionality and supported data sources of both of these interfaces. At that point, ADO will be incorporated into Visual Basic and become the primary object interface to data for Visual Basic programs. All three interfaces have a similar object hierarchy and syntax, so migrating applications in the future will not be difficult. In the meantime, ADO is callable from the Visual Basic development environment, but this is not an officially supported or documented configuration as of this writing.

SQL-DMO

SQL Distributed Management Objects (SQL-DMO) is a specialized object interface for SQL Server administration tasks. It has object properties and methods to support operations such as creating devices and databases, establishing replication relationships, scheduling housekeeping tasks, and defining alert actions on errors reported by SQL Server. SQL Enterprise Manager is layered on top of SQL-DMO, which means that almost anything you can do in the user interface can also be done programmatically. SQL-DMO supports OLE Automation interfaces, which makes it accessible from interpreted languages such as Visual Basic, as well as C/C++.

SQL-DMO is not intended to be a general-purpose interface for application data. Most applications, however, need to accomplish some database administration tasks as part of their setup and maintenance modules. These portions of your application can be programmed using standard Transact–SQL statements and stored procedures or through calls to SQL-DMO. In most cases the SQL-DMO method is easier to write because the object syntax is more consistent and handles tasks like repeating an operation for each object in a database much more concisely. The scheduled tasks and alerts objects of SQL-DMO can be an effective tool for making your application self-managing through automatic maintenance tasks and predefined responses to error conditions. A self-managing server will keep your users happy.

Embedded SQL

Embedded SQL is an ANSI-standard programming interface in which SQL statements, delineated by EXEC SQL tags, are incorporated into the source code of an application. The source code is input to a precompiler, which identifies the SQL blocks and replaces them with the appropriate low-level function calls for communicating with the database. An Embedded SQL precompiler for SQL Server is currently available for programs written in C in the form of a toolkit that ships with the MSDN™ Library, Professional level subscription. This precompiler technology has also been licensed to Micro Focus, which offers it as a toolkit for COBOL programmers. Both precompilers have passed the NIST tests for ANSI-standard compliance with SQL Server version 6.5.

Embedded SQL offers a familiar programming model for developers of applications for other databases such as Oracle or DB2. For COBOL programmers, Embedded SQL is the most commonly supported database interface and is an excellent solution for accessing SQL Server. For applications written in C, however, Embedded SQL is somewhat slower than the call-level interfaces and doesn't allow you to take advantage of specific SQL Server features and performance optimizations. The primary design goal for Embedded SQL for C is to follow the strict ANSI standard for maximum portability of applications. Embedded SQL for C is useful if you are porting an application from another database and have a large code base that would be difficult to adapt to ODBC. For these applications, Embedded SQL for C will perform adequately but may not offer optimal performance and control.

Optimizing Your Interface

Regardless of which interface you choose, you need to learn how to use it effectively. Any interface can be used in ways that create unnecessary overhead and degrade performance. To understand how to get the best performance out of your chosen interface, first look for any technical documentation available on optimizing client/server performance for that interface. Programmers using ODBC or RDO, for example, should read the white paper, "Using ODBC with Microsoft SQL Server." Much of the advice in this paper, such as when and how to use prepared statements and stored procedures, applies to OLE DB and ADO applications as well.

Use SQL Trace, which ships with SQL Server 6.5, to examine the client/server traffic generated by your application. What you find may surprise you. Many commands being sent to the server may not be generated by your application but instead are sent by the interface you are using. Often these extra commands can be avoided by changing a property or an option in the interface and without affecting the functionality of the application.

For example, the ODBC driver sends a SET TEXTSIZE statement to the server every time a statement handle has a nondefault value for the SQL_MAX_LENGTH statement option. If your application does not need to limit the maximum length of returned column data, you can avoid unnecessary overhead by leaving that statement option at its default value. SQL Trace can also point to other opportunities for optimization, such as statements that can be batched or transactions that can be committed sooner. If you can optimize the client/server traffic identified by SQL Trace, you are a long way toward optimizing your overall application.

Database Access Styles

Another key aspect of database application design is how the application code interacts with the database. Some applications treat the database simply as a storage place for records. The application itself performs most of the operations on the data, such as filtering, aggregating, or matching records. Other applications treat the database as a data management engine, performing all of these data operations at the server. The first style of database access is common in programs written to use an indexed sequential access method (ISAM) database. The second style of database access is more appropriate for a program interacting with a relational database like SQL Server. This section compares these two styles of database access.

ISAM-Style Database Access

Many applications are originally developed on an ISAM database and are later moved to a relational database. Usually the port is made to support better data sharing with other applications or to comply with customer database standards. The first port of such an ISAM-style application often substitutes a relational database table for the equivalent ISAM file and leaves the bulk of the application code unchanged. SQL Server will work for these ISAM-style applications, but the application performs much better if it is adapted to take full advantage of relational-style database access and asks the server to do the jobs it is capable of doing.

An ISAM-style application example

An example illustrates why an ISAM-style application does not perform optimally on SQL Server. This application example involves three tables: a master customer table, a detail orders table, and a summary invoice table. The program task is to extract the appropriate record for customer "Viki Parrott," find the customer's current orders, and then aggregate the order totals for that customer into the invoice summary table. An ISAM-style application to accomplish this task is coded as follows:

  1. The client sends a BEGIN TRANSACTION request to the server.
  2. The client requests that the server open a cursor on the customer table (A) for records with the last name >= 'Parrott'.
  3. The client fetches one or more customer records until the correct record is found.
  4. Having found the customer ID, the client program asks the server to open a cursor on the orders table (B) for orders matching the customer ID.
  5. The client fetches all matching order records for the customer.
  6. The client calculates the aggregate of those detailed records.
  7. The client sends a command to update the invoices table (C) with the calculated aggregate.
  8. The client sends a COMMIT TRANSACTION statement.

Figure 6. An ISAM-style application

Performance problems of ISAM-style database access

Why doesn't the ISAM-style application perform well? The problems with this style of coding are:

  • Excessive network traffic.

    The ISAM style moves data unnecessarily across the network. This example application moved the master and detail records to the client to calculate an aggregate, a task that could have been done at the server. More importantly, this example required at least seven roundtrips between client and server to accomplish a simple task. Network roundtrips are usually the number one factor for poor performance in a client/server application, an even greater factor than the amount of data transferred between server and client.

  • Excessive use of cursors.

    Cursors are a useful tool in relational databases; however, it is almost always more expensive to use a cursor than to use set-oriented SQL to accomplish a task. In set-oriented SQL, the client application tells the server to update the set of records that meet specified criteria. The server figures out how to accomplish the update as a single unit of work. When updating through a cursor, the client application requires the server to maintain row locks or version information for every row, just in case the client asks to update the row after it has been fetched. Also, using a cursor implies that the server is maintaining client "state" information, such as the user's current rowset, at the server, usually in temporary storage. Maintaining this state for a large number of clients is an expensive use of server resources. A better strategy with a relational database is for the client application to get in and out quickly, maintaining no client state at the server between calls. Set-oriented SQL supports this strategy. (A brief sketch contrasting the two styles follows this list.)

  • Unnecessarily long transactions.

    The ISAM-style application example uses a "conversational" style of communicating with the server that involves many network roundtrips within a single transaction. The effect is that transactions become longer in duration. Transactions handled this way can span seconds, minutes, or even tens of minutes between the BEGIN TRANSACTION and the COMMIT TRANSACTION statements. These long-running transactions can be fine for a single user, but they scale poorly to multiple users. To support transactional consistency, the database must hold locks on shared resources from the time they are first acquired within the transaction until the transaction commits. If other users need access to the same resources, they must wait. As individual transactions get longer, the queue of other users waiting for locks gets longer and system throughput decreases. Long transactions also increase the chances of a deadlock, which occurs when two or more users are simultaneously waiting on locks held by each other.
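
To make the cursor trade-off concrete, consider a task that marks all of a customer's open orders as shipped. A cursor-style application fetches each order row to the client and updates it one row at a time; the set-oriented style asks the server to do the whole job in one statement. The table and column names below are hypothetical and used only for illustration:

UPDATE orders
SET status = 'shipped'
WHERE cust_id = 1234
AND status = 'open'

The server applies the change to every qualifying row as a single unit of work, with no per-row roundtrips and no client state held at the server.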

Relational-Style Database Access

The problems that result from running an ISAM-style application on SQL Server stem from its failure to take advantage of the capabilities of SQL Server. In contrast, an application that is well designed for SQL Server:

  • Uses SQL effectively.
  • Minimizes network roundtrips to the server during a transaction.
  • Uses stored procedures.
  • Incorporates the essential indexing and configuration steps for SQL Server.

These characteristics don't have to be applied with an all-or-nothing approach. They can be incorporated into an application over time.

Structured Query Language

Structured query language (SQL) is the language of Microsoft SQL Server. It makes sense that application developers learn to "speak" this language fluently if they want their applications to communicate effectively with the server. By using SQL effectively, an application can ask the server to perform tasks rather than requiring application code and processing cycles to achieve the same result. More importantly, effective use of SQL can minimize the amount of data that must be read from and written to disk devices at the server. Finally, effective use of SQL can minimize the amount of data shipped to and from SQL Server across the network. Saving disk I/O and network I/O are the most important factors for improving application performance.

Database design

Using SQL effectively begins with a normalized database design. Normalization is the process of removing redundancies from the data. When converting from an ISAM-style application, normalization often involves breaking data in a single file into two or more logical tables in a relational database. SQL queries then recombine the table data by using relational join operations. By avoiding the need to update the same data in multiple places, normalization improves the efficiency of an application and reduces the opportunities for introducing errors due to inconsistent data.

There are sometimes trade-offs to normalization: A database that is used primarily for decision support (as opposed to update-intensive transaction processing) may not have redundant updates and may be more understandable and efficient for queries if the design is not fully normalized. Nevertheless, unnormalized data is a more common design problem in SQL Server applications than over-normalized data. Starting with a normalized design and then selectively denormalizing tables for specific reasons is a good strategy.

Whatever the database design, you should take advantage of the features in SQL Server to automatically maintain the integrity of the data. The CREATE TABLE statement supports data integrity:

  • CHECK constraints ensure that column values are valid.
  • DEFAULT and NOT NULL constraints avoid the complexities (and opportunities for hidden application bugs) caused by missing column values.
  • PRIMARY KEY and UNIQUE constraints enforce the uniqueness of rows (and implicitly create an index to do so).
  • FOREIGN KEY constraints ensure that rows in dependent tables always have a matching master record.
  • IDENTITY columns efficiently generate unique row identifiers.
  • TIMESTAMP columns ensure efficient concurrency checking between multiple user updates.
  • User-defined data types ensure consistency of column definitions across the database.

By taking advantage of these CREATE TABLE options, you can make the data rules visible to all users of the database, rather than hide them in application logic. These server-enforced rules help avoid errors in the data that can arise from incomplete enforcement of integrity rules by the application itself. Using these facilities also ensures that data integrity is enforced as efficiently as possible.
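
As a brief sketch of how these constraints look in practice (the table and column names here are hypothetical and do not come from the example application), a pair of tables might be declared as follows:

CREATE TABLE customers
(
   cust_id      int IDENTITY PRIMARY KEY,
   cust_name    varchar(40) NOT NULL,
   credit_limit money NOT NULL DEFAULT 0 CHECK (credit_limit >= 0),
   last_change  timestamp
)

CREATE TABLE orders
(
   order_id int IDENTITY PRIMARY KEY,
   cust_id  int NOT NULL REFERENCES customers (cust_id),
   qty      int NOT NULL CHECK (qty > 0)
)

Declared this way, the server rejects invalid quantities, supplies a default credit limit, generates unique identifiers, and refuses orders that do not match an existing customer, all without any application code.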

Efficient data retrieval

One of the capabilities of SQL is its ability to filter data at the server so that only the minimum data required is returned to the client. Using these facilities minimizes expensive network traffic between the server and client. This means that WHERE clauses must be restrictive enough to get only the data that is required by the application.

It is always more efficient to filter data at the server than to send it to the client and filter it in the application. This also applies to columns requested from the server. An application that issues a SELECT * FROM... statement requires the server to return all column data to the client, whether or not the client application has bound these columns for use in program variables. Selecting only the necessary columns by name will avoid unnecessary network traffic. It will also make your application more robust in the event of table definition changes, because newly added columns won't be returned to the client application.
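
For example (using hypothetical names), rather than issuing

SELECT * FROM customers

an application that needs only a pick list of customers in one region should ask for exactly that:

SELECT cust_id, cust_name
FROM customers
WHERE region = 'Northwest'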

Beyond the SQL syntax itself, performance also depends on how your application requests a result set from the server. In an application using ODBC, the "how" is determined by the statement options set prior to executing a SELECT. When you leave the statement options at default values, SQL Server sends the result set the most efficient way. SQL Server assumes that your application will fetch all the rows from a default result set immediately. Therefore, your application must buffer any rows that are not used immediately but may be needed later. This buffering requirement makes it especially important for you to specify (by using SQL) only the data you need.

It may seem economical to request a default result set and fetch rows only as your application logic or your application user needs them, but this is false economy. Unfetched rows from a default result set can tie up your connection to the server, blocking other work in the same transaction. Still worse, unfetched rows from a default result set can cause SQL Server to hold locks at the server, possibly preventing other users from updating. This concurrency problem may not show up in small-scale testing, but it can appear later when the application is deployed. The lesson here is simple—immediately fetch all rows from a default result set.

Some applications cannot buffer all the data they request of the server. For example, an application that queries a large table and allows the user to specify the selection criteria may return no rows or millions of rows. The user is unlikely to want to see millions of rows. Instead, the user is more likely to re-execute the query with narrower selection criteria. In this case, fetching and buffering millions of rows only to have them thrown away by the user would be a waste of time and resources.

For applications like these, SQL Server offers server cursors that allow an application to fetch a small subset or block of rows from an arbitrarily large result set. If the user wants to see other records from the same result set, a server cursor allows the application to fetch any other block of rows from the result set, including the next n rows, the previous n rows, or n rows starting at a certain row number in the result set. SQL Server does the work to fulfill each block fetch request only as needed, and SQL Server does not normally hold locks between block fetches on server cursors. Server cursors also allow an application to do a positioned update or delete of a fetched row without having to figure out the source table and primary key of the row. If the row data changes between the time it is fetched and the time the update is requested, SQL Server detects the problem and prevents a lost update.

All of these features of server cursors come at a cost. If all the results from a given SELECT statement are going to be used in your application, a server cursor is always going to be more expensive than a default result set. A default result set always requires just one roundtrip between client and server, whereas each call to fetch a block of rows from a server cursor results in a roundtrip. Moreover, some types of server cursors (those declared as dynamic) are restricted to using unique indexes only, while other types (keyset and static cursors) make heavy use of temporary storage at the server. For these reasons, only use server cursors where your application needs their features. If a particular task requests a single row by primary key, use a default result set. If another task requires an unpredictably large or updatable result set, use a server cursor and fetch rows in reasonably sized blocks (for example, one screen of rows at a time).

Advanced features

Structured query language (SQL) goes far beyond the basics of creating tables, inserting records, and retrieving data from tables. Your application will perform better if it requests the server to perform a task with a more sophisticated SQL statement than if it performs the same task using simple SQL statements combined with application logic. Therefore, learning the full capabilities of SQL is a wise investment of your time. For SQL Server, these capabilities include:

  • An UPDATE statement that can modify rows in one table based on related data in other tables.
  • Rich JOIN capabilities, including INNER, CROSS, LEFT OUTER, RIGHT OUTER, and FULL OUTER joins, as well as combinations of these join types.
  • Subqueries that can be used in any expression context.
  • A CASE expression that provides a powerful conditional logic capability in an ANSI-standard syntax.
  • Sophisticated aggregation functions, including the CUBE and ROLLUP operators to produce multidimensional aggregates in decision support applications.

An ISAM-style application will use few or none of these advanced SQL capabilities and, as a result, will be less efficient than an application designed to take advantage of them when performing equivalent tasks.
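
The following sketch, built on hypothetical customer and order tables, illustrates several of these features in a single request: an inner join, a CASE expression that totals only open orders, and a ROLLUP that adds regional and grand totals to the result:

SELECT c.region, c.cust_name,
   SUM(CASE WHEN o.status = 'open' THEN o.qty ELSE 0 END) AS open_qty
FROM customers c INNER JOIN orders o ON c.cust_id = o.cust_id
GROUP BY c.region, c.cust_name WITH ROLLUP

Producing the same report in application code would require fetching every order row to the client and coding the conditional totals and subtotals by hand.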

Short transactions

A primary goal of using SQL appropriately is to reduce the amount of data transferred between server and client. Reducing the amount of data transferred will usually reduce the time it takes to accomplish a logical task or transaction. As discussed earlier, long transactions may work fine for a single user, but they can cause concurrency problems and scale poorly for multiple users. Techniques you can use to reduce transaction duration include:

  • Commit transactional changes as soon as possible within the requirements of the application.

    Applications often perform large batch jobs, such as month-end summary calculations, as a single unit of work (and thus one transaction). With many of these applications, individual steps of the job can be committed without compromising database consistency. Committing changes as quickly as possible means that locks are released as quickly as possible. Because SQL Server uses page locks for most operations, releasing locks quickly minimizes their impact on other users who may need access to rows on the same data pages.

  • Take advantage of SQL Server statement batches.

    Statement batches are a way of sending multiple SQL statements to the server at one time, thereby reducing the number of roundtrips to the server. If the statement batch contains multiple SELECT statements, the server will return multiple result sets to the client in a single data stream. (A short batch sketch follows this list.)

  • Use parameter arrays for repeated operations.

    The ODBC SQLParamOptions function allows multiple parameter sets for a single SQL statement to be sent to the server in a batch, again reducing the number of roundtrips.

  • Use SQL Trace to diagnose application traffic.

    SQL Trace can monitor, filter, and record all of the calls sent from client applications to the server. It will often reveal unexpected application overhead due to unnecessary calls to the server. SQL Trace can also reveal opportunities for batching statements that are currently being sent separately to the server.
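
As an illustration of the statement-batch technique (the tables and values are placeholders only), an application can send an entire short transaction to the server in one batch, so the work is done and the locks are released after a single roundtrip:

BEGIN TRANSACTION
UPDATE accounts SET balance = balance - 100 WHERE acct_id = 1
UPDATE accounts SET balance = balance + 100 WHERE acct_id = 2
INSERT audit_log (acct_from, acct_to, amount) VALUES (1, 2, 100)
COMMIT TRANSACTION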

Stored procedures

All well-designed SQL Server applications should use stored procedures. This is true whether or not the application's business logic is written into stored procedures. Even standard SQL statements with no business logic component gain a performance advantage when "packaged" as stored procedures with parameters.

When they are created, stored procedures go through the parsing and normalization steps that SQL Server performs on any SQL statement. The stored procedure is then saved to disk in the parsed, normalized form of a query tree. At first execution, SQL Server retrieves the saved query tree and optimizes the execution of the SQL based on the values of parameters passed in and the available statistics of the referenced tables and their indexes. On subsequent executions of the same stored procedure by any user, SQL Server looks to the procedure cache first to see if a matching optimized execution plan is available for use with a new set of parameters. SQL statements compiled into stored procedures can thus save a significant amount of processing at execution time.

Figure 7. Stored procedures tree

Another advantage of stored procedures is that client execution requests use the network more efficiently than equivalent SQL statements sent to the server. For example, suppose an application needs to insert a large binary value into an image data field. To send the data in an INSERT statement, the application must convert the binary value to a character string (doubling its size) and then send it to the server. The server then converts the value back into a binary format for storage in the image column. In contrast, the application can create a stored procedure of the form:

CREATE PROCEDURE P(@p1 image) AS INSERT T VALUES (@p1)

When the client application requests an execution of procedure P, the image parameter value will stay in its native binary format all the way to the server, thereby saving processing time and network traffic.

SQL Server stored procedures can provide even greater performance gains when they include business services logic. As described earlier, this "fat server" implementation yields outstanding performance because it moves the processing to the data, rather than moving the data to the processing. Nevertheless, using stored procedures is not restricted to the fat server development model. Stored procedures that encapsulate predefined SQL statements provide performance benefits in any implementation. Furthermore, you can easily configure the SQL Server ODBC driver to create and use temporary stored procedures automatically, any time your application calls the SQLPrepare API. These temporary stored procedures are not (currently) as efficient as permanent stored procedures created by the application itself, but they can yield good performance gains when an application executes the same prepared SQL statement multiple times. If you write an application to use the ODBC prepare/execute combination, you enable it to selectively take advantage of the current automatic stored procedure option as well as future enhanced versions of this option.

Indexes and configuration

Indexing and configuring a SQL Server database for optimal performance is usually considered a database administrator's job, not a development task. However, as a developer, you should know the fundamentals of SQL Server indexes and configuration so that:

  • Database designs incorporate clustered indexes in the original creation scripts.
  • Development computers execute test scenarios as efficiently as possible.
  • Your application can automatically configure the database as part of the installation process, reducing the complexity for the end user.

There are books, self-study courses, and classroom courses that teach SQL Server configuration and tuning in great depth. The following sections, however, cover basic configuration steps that are often overlooked. If an application is well written and these configuration guidelines are followed, SQL Server will perform optimally. Additional tuning can yield small performance gains, but there probably won't be configuration changes that will result in dramatic performance boosts.

Indexes

SQL Server has two types of indexes: clustered and nonclustered. A clustered index keeps the data rows of a table physically stored in index order. Because there is only one physical order to the rows in a table, there can be a maximum of one clustered index per table. All other indexes on a table are nonclustered. A clustered index is usually the most efficient method of finding a row or set of rows in a table, so choosing the column(s) by which to define the clustered index is an important part of application tuning. But if you don't know what columns to use, don't leave off a clustered index entirely. In SQL Server version 6.5, it is rarely a benefit to have no clustered index on a table. For an application in which the query patterns are not well known, you can define the clustered index on the primary key columns. Subsequent analysis of some tables may suggest that the most common queries perform better with the clustered index on a nonkey column such as a foreign key column, in which case the default could be overridden for those tables only.
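
For example, if analysis shows that the most common queries retrieve all orders for one customer, the clustered index for a hypothetical orders table could be placed on the customer foreign key (this sketch assumes the table's primary key constraint was declared NONCLUSTERED):

CREATE CLUSTERED INDEX orders_by_customer ON orders (cust_id)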

An index includes distribution statistics about the data values in the index. The SQL Server cost-based query optimizer uses these statistics to determine if a particular index will be useful to solve a query. When a large number of rows are added or changed after an index is defined on a table, these statistics will be inaccurate until you run an UPDATE STATISTICS statement. This can mislead SQL Server into using a poor index or a table scan to resolve a query, which can cause poor query performance and reduced concurrency. An update operation that uses a table scan instead of an index, for example, will lock the entire table for the duration of the transaction.

It is therefore important to update statistics regularly on the active tables in your application. You should also learn how to use the SHOWPLAN feature of SQL Server. It reports information such as the optimizer's index selections and join orderings, and shows you whether the indexes are being used for your application's queries. Every unused nonclustered index slows down inserts, updates, and deletes on a table, so make sure that you are using only the indexes you need.
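
For example, a maintenance script might refresh statistics on the busiest tables, and a developer can turn on the plan display before running the application's most common queries. The statements below are a sketch using a hypothetical orders table; in SQL Server 6.5 the plan display is controlled with the SET SHOWPLAN option:

UPDATE STATISTICS orders

SET SHOWPLAN ON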

Data distribution

The axiom of SQL Server disk management is that spreading the database out over multiple physical disk drives improves performance. In almost all cases, the log for a database should be placed on a separate physical disk from the data portion of the database. The data portion of the database should be "striped" across multiple physical devices to achieve maximum throughput. Striping is easy to configure with the disk management facilities built into the Microsoft Windows NT operating system. Letting the operating system spread the data across a stripe set usually yields performance equivalent to placing database tables and indexes on specific devices, at a fraction of the effort. Windows NT also offers disk configurations (RAID levels) that can protect a database against the failure of a hard disk. All SQL Server databases should take advantage of these capabilities.

Memory configuration

SQL Server uses memory for data cache and procedure cache, among other things. Using data or a procedure from cache is an order of magnitude faster than reading it from disk. Therefore, allowing SQL Server to use as much of the physical memory of the server computer as practical will yield better performance. On the other hand, assigning too much memory to SQL Server can conflict with other applications on the server. This can cause the Windows NT operating system to start paging real memory out to disk, resulting in worse performance for SQL Server. SQL Server memory usage is not automatically configured at installation. You must manually configure SQL Server memory usage based on published guidelines, and check it with performance monitoring tools. The procedure is simple, and it is essential to achieving optimal throughput on a computer.
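For example, the memory option is specified to sp_configure in 2K units. The value below (about 32 MB) is illustrative only; the appropriate number depends on the computer's total RAM and on the other applications it runs:

/* Allocate approximately 32 MB (16384 2K pages) to SQL Server */
EXEC sp_configure 'memory', 16384
RECONFIGURE
/* The new setting takes effect when SQL Server is restarted */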

A relational-style application example

The following SQL application performs the same task as the previous ISAM-style example. This time, however, the application task has been rewritten to use the principles of relational-style database access. The application task now uses a stored procedure, P, that takes one customer ID parameter. Procedure P is created as follows:

CREATE PROCEDURE P (@id int) AS
UPDATE C SET c1 =
      (SELECT SUM(B.qty)
       FROM A, B
       WHERE A.pk = C.id
       AND A.fk = B.pk)
   WHERE C.id = @id

Figure 8. A relational-style application

This procedure updates the summary table C for a passed-in customer ID, using the detailed customer and order records for that customer ID from tables A and B. This stored procedure uses only ANSI-standard SQL; it does not involve writing business logic in Transact–SQL. Yet it reduces the transaction to a single roundtrip to the server; data stays at the server, rather than moving unnecessarily across the network. Because the action is an atomic SQL statement, the procedure doesn't even need to wrap it in a BEGIN TRANSACTION/COMMIT TRANSACTION pair; the statement is automatically its own transaction. This transaction will be significantly shorter in duration than the equivalent ISAM-style transaction, and because locks are held for a much shorter time, the user of this SQL application has less impact on other users.
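Invoking the procedure from the client application is then a single statement per customer. The customer ID value here is illustrative:

/* Update the summary row for customer 1001 in one roundtrip */
EXECUTE P @id = 1001

From ODBC, the same call can be made with the standard call escape sequence, {call P(?)}, binding the customer ID as a parameter.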

Tips for Using Other SQL Server Features

As you develop your SQL Server application, look for ways to take advantage of features and code that already exist rather than reinventing them yourself. Consider the following possibilities:

  • If your application involves multiple SQL Servers, there is a good chance that some data movement between servers will be required. Use the built-in replication facilities of SQL Server to accomplish this data movement. Replication can be set up and monitored through SQL-DMO programming. Replication can play an especially important role in partitioned applications that are designed to scale by adding more SQL Servers.
  • If your application needs to notify users when a particular change occurs in the database, check out the SQL Mail feature. SQL Mail provides a mechanism to send and receive e-mail directly from SQL Server. For example, when inventory falls below a certain threshold, a trigger can automatically use SQL Mail to send an e-mail message to the purchasing department; a sketch of such a trigger appears after this list. Conversely, you can use e-mail as the input vehicle for data that is destined for a SQL Server database. E-mail messages can be sent to a mailbox set up for SQL Server, and a SQL Mail task can check the mailbox regularly, automatically inserting the data from the messages it finds and replying with a confirmation message.
  • If your application must publish existing SQL Server data to the Internet, look at the SQL Server Web Assistant. The Web Assistant and its associated stored procedures can create and maintain Web pages containing results from SQL Server queries and links to other SQL Server data. These pages can be refreshed on a scheduled or triggered basis as needed, giving you complete control over the database load generated by Web access to your server.
  • If your application needs to take advantage of functions that are available in Windows NT but not available in Transact–SQL, it makes sense to call these functions directly from SQL Server by using extended stored procedures or OLE Automation stored procedures. With extended stored procedures, you can create custom functions in a C DLL that can be executed from SQL Server just like any other stored procedure. These custom functions can in turn call any function available in Windows NT, pass parameters, and return data to SQL Server.
  • OLE Automation stored procedures provide a mechanism for loading and calling methods on any OLE Automation object that exists on the server computer. Since so many tools are capable of creating OLE Automation objects, this capability opens up a wide range of possibilities for extending SQL Server's functionality. For example, Visual Basic includes many financial functions, such as Net Present Value, that are not offered in Transact–SQL. Visual Basic Enterprise Edition makes it easy to package these functions as methods on your own OLE Automation object. You can then call these methods from SQL Server stored procedures and triggers by using the OLE Automation stored procedure feature, as in the sketch that follows this list.
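As an illustration of the OLE Automation approach, the following Transact–SQL sketch assumes a hypothetical automation object with the ProgID Finance.Calc, built in Visual Basic and registered on the server computer, that exposes a FutureValue method taking a periodic interest rate, a number of periods, and a payment amount. The ProgID, method name, and argument values are all illustrative:

DECLARE @object int, @hr int, @result float

/* Create an instance of the hypothetical Finance.Calc automation object */
EXEC @hr = sp_OACreate 'Finance.Calc', @object OUT

/* Call its FutureValue method and capture the return value */
IF @hr = 0
   EXEC @hr = sp_OAMethod @object, 'FutureValue', @result OUT, 0.0075, 120, -100

SELECT 'Future value' = @result

/* Release the object */
EXEC sp_OADestroy @object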

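Returning to the SQL Mail tip above, the inventory notification could be handled by a trigger like the following sketch. The inventory table, its qty_on_hand and reorder_point columns, and the purchasing mail alias are all illustrative:

CREATE TRIGGER inventory_reorder ON inventory FOR UPDATE AS
/* Notify purchasing if any updated item has fallen below its reorder point */
IF EXISTS (SELECT * FROM inserted
           WHERE qty_on_hand < reorder_point)
   EXEC master..xp_sendmail
        @recipients = 'purchasing',
        @subject = 'Inventory reorder required',
        @message = 'One or more items have fallen below their reorder point.'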
Conclusion

This paper has pointed out some of the most important principles and trade-offs of SQL Server development. For many decisions, there is no one correct choice. In these cases, know the trade-offs and make an informed decision. For other decisions, consider the themes that crop up again and again in this discussion:

  • Keep your transactions short.
  • Minimize the number of roundtrips to the server.
  • Take advantage of SQL features.
  • Use stored procedures.

Follow these simple guidelines, and you will achieve most of your application's performance potential without a lot of work.

For More Information

This section describes some other sources for information about designing for SQL Server.

SQL Server Web Site

Every SQL Server developer should get to know and use what's available on the Microsoft SQL Server Web site (https://www.microsoft.com/sql/default.asp). This Web site is updated regularly and includes product and service pack announcements, benchmark results, white papers, partner information, and much more. The SQL Server Web site also includes links to the following areas of specific interest to developers:

  • Support (https://www.microsoft.com/sql/support)

    Includes a Support Newsgroup and the Knowledge Base, a searchable catalog of articles on SQL Server development tips and pitfalls. When you are stuck on a SQL Server problem, the Knowledge Base is usually the best place to look for an answer.

  • SQL Server Developer's Conference

    Developer's Conferences are the SQL Server product team's forum for communicating the latest information on how to develop SQL Server applications efficiently and for gathering feedback about what you would like to see in the product.

Articles

Articles published by Microsoft Corporation give detailed guidance to SQL Server developers. Several of these articles are relevant to the issues discussed in this paper:

  • "Using ODBC with Microsoft SQL Server"

    Discusses how to get optimal performance from the SQL Server ODBC driver. The advice in this white paper applies to developers using any interface that runs on top of the ODBC driver, including RDO, the OLE DB provider for ODBC, and ADO.

  • "OLE DB for the ODBC Programmer"

    Describes the features of OLE DB and how to build applications using this interface in terms that are familiar to ODBC programmers.

  • "Migrating Btrieve Applications to Microsoft SQL Server"

    Focuses on the specific issues involved with migrating an ISAM-style application to SQL Server. It uses an example Btrieve application to illustrate the process, but many of the principles it discusses apply to any ISAM-style code that is rewritten for SQL Server.

Microsoft Press Books

Two books published by Microsoft Press® are of particular interest to SQL Server developers: Inside SQL Server by Ron Soukup (available in 1997) and Inside ODBC by Kyle Geiger. Both of these books give in-depth advice based on the authors' years of experience.

Other Microsoft Products

The best place to get information about the many Microsoft products that work with SQL Server is the Microsoft Web site (https://www.microsoft.com/). In particular, check out the following products and technologies:

  • Microsoft Visual InterDev is an environment for building SQL Server applications for the Internet and intranets.
  • Microsoft Transaction Server provides a powerful framework for building physical three-tier applications using any programming tool that can create OLE objects.
  • Microsoft Visual C++ Enterprise Edition includes a Transact–SQL editor and debugger that can greatly simplify the process of creating stored procedures and triggers.
  • Microsoft Visual Basic Enterprise Edition is a rapid-development environment for building Windows NT and Windows 95–based applications, including high-performance SQL Server applications developed with RDO.

© Microsoft Corporation. All rights reserved.