Online Transaction Processing (OLTP): A Technical Reference Guide for Designing Mission-Critical OLTP Solutions
Operational, or online transaction processing (OLTP), workloads are characterized by small, interactive transactions that generally require sub-second response times. OLTP systems commonly have high concurrency requirements, with read/write ratios ranging from 60/40 up to 98/2. Modifications are predominantly singleton statements, and most queries are constrained to simple joins. While limiting joins to as few tables as possible is desirable, a significant number of application systems do join many tables. Standard practice calls for indexing strategies in OLTP systems to favor concurrency over query support; in practice, however, more indexes usually have to be created than is desirable in order to reach acceptable query performance. The lower the proportion of write operations in the system, the higher the level of indexing that can be tolerated, unless the timing of specific write operations is critical. Database designs generally start with third normal form (3NF) enforced with referential integrity (RI) constraints, and then selectively denormalize toward second normal form (2NF) where necessary to enhance performance.
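A minimal sketch of the starting point described above: a 3NF design enforced with RI constraints, plus one narrow supporting index. All table, column, and index names here are hypothetical illustrations, not part of any referenced system.

```sql
-- Hypothetical 3NF schema with RI enforced by a FOREIGN KEY constraint.
CREATE TABLE dbo.Customer
(
    CustomerId INT           NOT NULL PRIMARY KEY,
    Name       NVARCHAR(100) NOT NULL
);

CREATE TABLE dbo.Policy
(
    PolicyId    INT  NOT NULL PRIMARY KEY,
    CustomerId  INT  NOT NULL
        CONSTRAINT FK_Policy_Customer REFERENCES dbo.Customer (CustomerId),
    EffectiveOn DATE NOT NULL
);

-- A narrow nonclustered index supports a common lookup while keeping
-- the write overhead per modification low.
CREATE NONCLUSTERED INDEX IX_Policy_CustomerId
    ON dbo.Policy (CustomerId);
```

Each additional index speeds specific reads but adds maintenance cost to every write, which is why a high write proportion limits how far indexing can go.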
For OLTP systems, the middle tier plays an important role in defining the overall success of the system; while the database is important, it is not the only system component. When a new OLTP system is developed, the initial scalability and performance challenges are often encountered in the middle tier. Only after the middle-tier challenges are addressed are the scalability and performance issues of the database tier revealed. We recommend that you refer to the Technical Reference Guides included in the Data Warehouse and AppFabric topics when gathering data points for operational considerations.
The following resources provide general operational OLTP reference material. (Note that the full URLs for the hyperlinked text are provided in the Appendix at the end of this document.)
The article Top SQL Server 2005 Performance Issues for OLTP Applications1 examines the significance of database design, resource utilization, and system performance.
Managing I/O performance is crucial for OLTP workloads, as explained in Designing High Performance I/O Systems for SQL Server.2
Applications that are migrated from Oracle to Microsoft SQL Server often suffer from blocking issues because they were written with the assumption that transaction isolation is implemented with row versioning rather than locks. To alleviate the issue, consider the row versioning-based isolation levels introduced in SQL Server 2005. Microsoft Dynamics applications (for example, Microsoft Dynamics CRM and Microsoft Dynamics AX) also benefit from this feature, as explained in the following articles:
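Enabling row versioning is a database-level setting. The following sketch (with a hypothetical database name) shows the two options involved; READ_COMMITTED_SNAPSHOT changes the behavior of the default READ COMMITTED level, while ALLOW_SNAPSHOT_ISOLATION additionally permits sessions to request the SNAPSHOT level explicitly.

```sql
-- Enable row versioning so readers do not block writers (and vice versa).
-- MyOltpDb, dbo.Account, and @AccountId are illustrative names only.
ALTER DATABASE MyOltpDb SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE MyOltpDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A session can then opt into snapshot isolation explicitly:
DECLARE @AccountId INT = 1;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT Balance FROM dbo.Account WHERE AccountId = @AccountId;
COMMIT;
```

Note that row versioning stores row versions in tempdb, so tempdb sizing and I/O capacity should be reviewed before turning these options on.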
For ISV applications deployed on previous versions of SQL Server, the article Top 10 SQL Server 2008 Features for ISV Applications5 provides reasons to upgrade to SQL Server 2008. Most of these benefits apply to custom applications as well.
For complex OLTP queries, indexed views might be appropriate. The query optimizer considers indexed views only for queries with nontrivial cost; this eliminates the possibility that finding an appropriate plan costs more than the potential savings from using an indexed view. Indexed views are most useful for data that is queried regularly but updated much less frequently. For a number of reasons, however, indexed views have found only limited acceptance in mission-critical OLTP deployments. For more information, read Improving Performance with SQL Server 2008 Indexed Views.6
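A minimal indexed-view sketch, using hypothetical table and column names. An indexed view must be created WITH SCHEMABINDING, an aggregate view must include COUNT_BIG(*), and the first index on the view must be a unique clustered index.

```sql
-- Hypothetical pre-aggregation of order totals per customer.
-- Assumes dbo.Orders exists and TotalDue is declared NOT NULL
-- (SUM over a nullable expression is not allowed in an indexed view).
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT  CustomerId,
        COUNT_BIG(*)  AS OrderCount,
        SUM(TotalDue) AS TotalDue
FROM    dbo.Orders
GROUP BY CustomerId;
GO

-- Materialize the view; it is now maintained on every write to dbo.Orders.
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals
    ON dbo.vOrderTotals (CustomerId);
```

The maintenance cost on every INSERT/UPDATE/DELETE against the base table is precisely why indexed views suit read-heavy data and see limited use in write-intensive OLTP deployments.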
Large tables will often benefit from partitioning, especially if a substantial number of rows needs to be deleted, as is common in "sliding window" configurations where an organization keeps data for a specific period of time. For more information, see the following articles:
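The sliding-window pattern can be sketched as follows, with hypothetical names and monthly boundaries. The key point is that the oldest partition is removed with a metadata-only SWITCH operation rather than a costly, heavily logged DELETE.

```sql
-- Hypothetical monthly partitioning of an order-history table.
CREATE PARTITION FUNCTION pfOrderDate (DATE)
    AS RANGE RIGHT FOR VALUES ('2011-01-01', '2011-02-01', '2011-03-01');

CREATE PARTITION SCHEME psOrderDate
    AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.OrderHistory
(
    OrderId   BIGINT NOT NULL,
    OrderDate DATE   NOT NULL,
    CONSTRAINT PK_OrderHistory PRIMARY KEY (OrderDate, OrderId)
) ON psOrderDate (OrderDate);

-- Retire the oldest partition by switching it into an empty staging table
-- (dbo.OrderHistory_Stage: identical structure, same filegroup), then
-- truncating the staging table. Both operations are metadata-only/minimal.
ALTER TABLE dbo.OrderHistory SWITCH PARTITION 1 TO dbo.OrderHistory_Stage;
TRUNCATE TABLE dbo.OrderHistory_Stage;
```

New boundary values are added with ALTER PARTITION FUNCTION ... SPLIT RANGE as the window advances.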
Case Studies and References
SQL Server has been deployed for many tier-1 OLTP and data warehousing customers. Examples are described in the following case studies:
Questions and Considerations
This section provides questions and issues to consider when working with your customers.
Fully understand user requirements and usage scenarios, including the number of users and transactions, and performance and scalability requirements. You should also understand the high availability (HA) and disaster recovery (DR) requirements, including the windows available for maintenance.
Be careful when customers specify expected transaction volumes and response times. "Transaction" can be an ambiguous term, and should be clarified with the customer. Consider translating business transactions to SQL statements issued or batches per second.
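When translating business transactions into measurable units, the server's cumulative batch counter is one concrete reference point. This query is a sketch against a standard dynamic management view; because the counter is cumulative, it must be sampled twice and the difference divided by the elapsed time to yield a rate.

```sql
-- Read the cumulative "Batch Requests/sec" counter.
-- Sample twice over a known interval to compute the actual batches/sec.
SELECT  cntr_value AS batch_requests_total
FROM    sys.dm_os_performance_counters
WHERE   object_name LIKE '%SQL Statistics%'
  AND   counter_name = 'Batch Requests/sec';
```

Agreeing with the customer on how many such batches one "business transaction" produces removes the ambiguity before any sizing work begins.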
Denormalization is sometimes adopted based on expected usage patterns and on transaction, data, and user volumes. For example, call center personnel may need to display an entire current auto policy to respond to customer inquiries; storing the policy in three or four XML columns helps meet performance service level objectives (SLOs) by reducing joins and I/O. XML indexes can be sufficient in some cases if the storage space and performance are adequate. However, consider promoting common search attributes to relational columns with appropriate indexes to expedite the retrieval of relevant rows. Almost all systems, including large, mission-critical systems, depend on promotion to relational columns rather than on XML-based indexing.
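A sketch of that promotion pattern, with hypothetical names: the full policy document lives in an XML column and is returned in a single read, while the common search attributes are stored (and indexed) as ordinary relational columns maintained by the application.

```sql
-- Hypothetical auto-policy table combining an XML document with
-- promoted relational search columns.
CREATE TABLE dbo.AutoPolicy
(
    PolicyId     INT         NOT NULL PRIMARY KEY,
    -- Promoted search attributes, kept in sync by the application:
    PolicyNumber VARCHAR(20) NOT NULL,
    CustomerId   INT         NOT NULL,
    -- The full policy document, fetched in one read for display:
    PolicyDoc    XML         NOT NULL
);

-- Relational indexes on the promoted columns drive row retrieval.
CREATE NONCLUSTERED INDEX IX_AutoPolicy_PolicyNumber
    ON dbo.AutoPolicy (PolicyNumber);

-- An XML index can supplement them when queries probe inside the document.
CREATE PRIMARY XML INDEX PXI_AutoPolicy_PolicyDoc
    ON dbo.AutoPolicy (PolicyDoc);
```

The relational indexes locate the row; the XML column then supplies the whole policy without further joins.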
Similarly, DBMS RI may be disabled if the development process is sufficiently mature. Database administrators (DBAs), who are typically accountable for data integrity, and the application development team might disagree about this; the design strategy in such situations can be determined by management decision rather than technical merit. Some customers enforce RI during testing to ensure data integrity, but turn it off in production once they have confidence in the application logic.
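In SQL Server, that pattern does not require dropping the constraints; checking can be toggled, as in this sketch with hypothetical table and constraint names.

```sql
-- Disable enforcement of an existing foreign key (metadata is retained):
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT FK_Orders_Customer;

-- Re-enable it and revalidate existing rows; WITH CHECK marks the
-- constraint as trusted again, which the optimizer can exploit:
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customer;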
Understand how the application code calls the DBMS. Will it use static or dynamic SQL? To what extent will it use stored procedures? Will large result sets be returned over slow transaction processing (TP) links? Refer to the Technical Reference Guide on Data Access for more information. Parameterization of SQL calls is important for OLTP, and in a well-designed, high-throughput OLTP system dynamic SQL should be used only when essential.
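The difference parameterization makes can be sketched as follows (hypothetical table and column names): an ad hoc statement with embedded literals compiles a new plan for each distinct value, whereas a parameterized call through sp_executesql reuses one cached plan.

```sql
-- Ad hoc SQL with an embedded literal: each distinct value such as
-- 'WHERE CustomerId = 42' tends to compile and cache a separate plan.

-- Parameterized form: one plan, reused across all customer IDs.
DECLARE @CustomerId INT = 42;
EXEC sp_executesql
    N'SELECT OrderId, OrderDate FROM dbo.Orders WHERE CustomerId = @cid',
    N'@cid INT',
    @cid = @CustomerId;
```

Stored procedures achieve the same plan reuse and are the conventional choice; sp_executesql covers the cases where the statement genuinely must be built at run time.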
Understand HA/DR requirements. For more information, see the Technical Reference Guide on High Availability and Disaster Recovery.
Understand whether projected volumes can be met simply by scaling up, or if the intent is to scale out. If possible, design the system for scale-out.
Determine how the current deployment compares with other deployments or earlier versions. Understand hardware requirements and operational considerations. Examining other deployments or earlier versions is particularly useful for ISV-supplied applications because these applications likely exist in other installations.
Consider the network topology and bandwidth between the client (typically an application server) and the database. Network delays can have a significant impact on OLTP systems because many small transactions each require a network round trip.
Consider that management of the transaction log is critical for a high-performing OLTP system. For more details on log I/O, see Designing High Performance IO Systems for SQL Server.2
Typically, performance bottlenecks are first resolved through query tuning and disk I/O optimization, then by examining CPU processing power, and finally by considering the system's memory. (This is a generalization; the details of addressing performance bottlenecks are the subject of a series of white papers.)
Appendix
Following are the full URLs for the hyperlinked text.
1 Top SQL Server 2005 Performance Issues for OLTP Applications: http://sqlcat.com/top10lists/archive/2007/11/21/top-sql-server-2005-performance-issues-for-oltp-applications.aspx
2 Designing High Performance IO Systems for SQL Server: www.sqlbits/Downloads%2F86%2FDesigning%2520I%2520O%2520systems%2520for%2520SQL%2520Server%2520-%2520Thomas%2520Kejser.pptx
3 Understanding Row Versioning-Based Isolation Levels: http://msdn.microsoft.com/en-us/library/ms189050.aspx
4 SQL Server 2005 Row Versioning-Based Transaction Isolation: http://msdn.microsoft.com/en-us/library/ms345124.aspx
5 Top 10 SQL Server 2008 Features for ISV Applications: http://sqlcat.com/top10lists/archive/2008/11/24/top-10-sql-server-2008-features-for-isv-applications-burzin.aspx
6 Improving Performance with SQL Server 2008 Indexed Views: http://msdn.microsoft.com/en-us/library/dd171921(SQL.100).aspx
7 Partitioned Tables and Indexes in SQL Server 2005: http://msdn.microsoft.com/en-us/library/ms345146(SQL.90).aspx
8 Partitioned Table and Index Strategies Using SQL Server 2008: http://msdn.microsoft.com/en-us/library/dd578580(v=SQL.100).aspx
9 Global Online Gaming Company Deploying SQL Server 2008 to support 100 Terabytes: http://www.microsoft.com/casestudies/Microsoft-SQL-Server-2008/bwin/Global-Online-Gaming-Company-Deploying-SQL-Server-2008-to-support-100-Terabytes/4000001470
10 Online Event Software Gets Disaster Recovery with SQL Server 2005 Database Mirroring: http://www.microsoft.com/casestudies/Case_Study_Search_Results.aspx?Type=1&Keywords=serviceU&LangID=46
11 Progressive Prepares for Future Growth, Gains Agility with SQL Server 2005: http://www.microsoft.com/casestudies/Microsoft-SQL-Server-2005-Enterprise-X64-Edition/Progressive-Group-The/Progressive-Prepares-for-Future-Growth-Gains-Agility-with-SQL-Server-2005/4000002133