SQL Q&A: Temp Tables, 64-bit Processing, Mirroring, and More

Edited By Nancy Michell

Temporary Tables

Q I've heard that to avoid contention problems it's not always a good idea to use the temporary database, tempdb. Is that true? I've used it because I need to get data that's spread among many tables and even modify it along the way. I can't do this easily with views.

A It's true that a heavily used tempdb may experience contention. But contention is usually a problem only in rare situations, such as when hundreds of SPIDs (server process identifiers) are creating and deleting large numbers of temp tables all at the same time, as explained in the Knowledge Base article at support.microsoft.com/kb/328551.

Using a temporary table (which may or may not be in tempdb, by the way) can actually be quite an acceptable solution. The key is whether the increased performance you gain when pulling data from the temporary table outweighs the overhead in building the table in the first place.
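As a sketch of the scenario you describe, a temp table lets you materialize data drawn from several tables and then modify it, which a view cannot do. The table and column names below are placeholders, not from your schema:

```sql
-- Hypothetical example: gather rows from several tables into a temp table,
-- then modify the intermediate results before returning them.
SELECT o.OrderID, o.CustomerID, d.ProductID, d.Quantity
INTO   #OrderDetail                      -- materialized as a temp table
FROM   Orders AS o
JOIN   [Order Details] AS d ON d.OrderID = o.OrderID;

-- Intermediate results can be updated in place:
UPDATE #OrderDetail SET Quantity = Quantity * 2 WHERE ProductID = 42;

SELECT * FROM #OrderDetail;
DROP TABLE #OrderDetail;  -- or let it drop when the session ends
```

The cost of the SELECT INTO is the "overhead in building the table" referred to above; it must be repaid by faster access to the staged rows.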

You need to consider the size, lifespan, usage frequency, and especially the number of active instances of the temp table(s) before rejecting the solution. Server dimensioning matters as well, since tempdb contention can be hardware related.

If you need a high number of small, short-lived tables, you can try table variables, which have some advantages over temporary tables. (See the answer to our next question for more detail on the differences between table variables and temp tables, and take a look at "Frequently Asked Questions - SQL Server 2000 - Table Variables" for more information.)

In your case, if all queries to the database must go through temp table creation procedures, it might also be wise to question the original data model rather than trying to just live with it.

Q What are table variables and what is their performance like? Can I really use them instead of temporary tables?

A Temporary tables and table variables provide the same basic functionality; indeed, both are materialized as temporary tables in the tempdb database. However, when only a small number of rows are inserted, table variables can provide superior performance. This is because table variables do not maintain either statistics or indexes, resulting in lower overhead. If the table contents will not exceed a few 8KB-sized database pages that the database engine will keep resident in the data cache, then table variables are preferred.

As per SQL Server™ Books Online:

  • A table variable behaves like a local variable. It has a well-defined scope, which is the function, stored procedure, or batch it is declared in. Within its scope, a table variable can be used like a regular table.
  • Table variables are automatically cleaned up at the end of the function, stored procedure, or batch in which they are defined.
  • Table variables used in stored procedures cause fewer recompilations of the stored procedures than when temporary tables are used.
  • Table variables require fewer locking and logging resources.
  • In some cases, performance may improve; however, because no statistics are kept on table variables, the optimizer has less information to work with when choosing query plans.
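The scoping and cleanup behavior in the list above can be sketched as follows (table and column names are illustrative only):

```sql
-- Hypothetical example: a table variable scoped to the batch that declares it.
DECLARE @Recent TABLE
(
    OrderID   int      PRIMARY KEY,  -- constraints are allowed; separate
    OrderDate datetime NOT NULL      -- CREATE INDEX statements are not
);

INSERT INTO @Recent (OrderID, OrderDate)
SELECT OrderID, OrderDate
FROM   Orders
WHERE  OrderDate >= DATEADD(day, -7, GETDATE());

SELECT * FROM @Recent;  -- usable like a regular table within this batch
-- No DROP needed: the variable goes out of scope when the batch ends.
```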

Mirroring

Q I'm planning to use SQL Server 2005 mirroring to support high availability. The system will be running in a high-availability mode to support automatic failover. But I also want to run some other applications on the mirror to fully utilize the resource. The mirror server will support two databases: Production DB as the mirror to the primary and Staging DB as a standalone database. It will also support two types of clients: one is configured with the mirroring failover to access the Production DB, and the other connects to the Staging DB directly.

In addition, some SQL Server Integration Services (SSIS) packages will be running on the mirror server. They will access both databases, transferring data from the local Staging DB to the Production DB running on the primary server. After a failover, the SSIS packages will transfer data from the local Staging DB to the local (now principal) Production DB. Are there any issues with these configurations I need to be aware of?

A In the very general sense, this kind of configuration is not recommended because after failover the new principal could be overloaded.

Functionally, it doesn't seem like there will be issues from the mirroring perspective, but you should test thoroughly if you're going to go ahead with this solution. Testing should be done at peak load, in standard configuration, and post-failover. And your peak load should be an estimated future, not current, peak load; if you don't take future business needs into account, the solution will stop working if the load increases past capacity.

Testing might very well uncover network, disk, and even CPU bottlenecks that would manifest as poor application response times, unacceptable throughput, timeout errors, or extremely high redo queues on the mirror server (which might lead to unexpectedly long failover times depending on the actual redo rate).

In addition, during failover, running packages will lose connectivity to the failed principal. Once failover completes and the database is available on the new principal, the packages should be able to reconnect to it. This assumes, of course, that the packages are built with connectivity retry mechanisms and that they will gracefully handle connectivity errors, such as recovering from a broken connection.

SQL Server and 64-Bit OSs

Q Is it true that if I'm running a 64-bit version of a Windows Server® operating system, I must use SQL Server 2000 Enterprise Edition (64-bit) and I can't use 32-bit SQL Server 2000?

A From the SQL Server 2000 point of view, the only native 64-bit architecture that is supported is Intel's ia64. Thus when SQL Server 2000 talks about 64-bit support, it means ia64. The only version of SQL Server 2000 that is supported on native 64-bit ia64 is Enterprise Edition. There is no Standard Edition SQL Server 2000 that runs on ia64, either natively as a 64-bit application or under ia64 Windows® on Windows (WOW, an OS subsystem that can run 32-bit apps).

For an AMD x64 machine, you have two options with SQL Server 2000. You can run a 32-bit operating system, in which case you can run any edition of SQL Server 2000 and any service pack (SP). Or you can run a 64-bit operating system and SQL Server 2000 (any edition) SP4 under WOW. In that environment, SQL Server 2000 is not a native 64-bit application; it is running under WOW in 32-bit mode, so it doesn't even know that a 64-bit world is out there.

Just to make things more complicated, ia64 WOW and x64 WOW are two totally different subsystems. On ia64, WOW has to emulate both a restricted virtual address space and the actual machine instructions; the ia64 instruction set is completely different from x86's. SQL Server does not support any of its components (SQL Server 2000 or SQL Server 2005) running under ia64 WOW.

Under x64 WOW, the story is different. The x86 and x64 machine architectures are very close. The instruction set is the same (or almost the same), so only the 32-bit virtual address space has to be emulated, and there is hardware assistance for that. Thus, many SQL Server components support x64 WOW, such as SQL Server 2000 SP4 and any edition of SQL Server 2005.
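If you're unsure which build a given instance is running, @@VERSION reports the platform the server binaries were compiled for. This is a sketch; the exact output string varies by release:

```sql
-- @@VERSION includes the build architecture. A 32-bit SQL Server 2000
-- instance, even when running under x64 WOW, still reports "(Intel X86)";
-- a native ia64 build reports the Itanium platform instead.
SELECT @@VERSION;
```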

Upgrade to SQL Server 2005

Q I want to upgrade from SQL Server 2000 to SQL Server 2005. Besides the SQL code remediation, what application changes do I have to make? For example, if I'm connecting to SQL Server 2005 using a Microsoft® .NET Framework client, do I have to upgrade my desktop clients to Windows XP SP2?

I am running Windows XP SP1 and Visual Basic® 6.0, as well as some apps that use .NET. Are SQL Server Native Client, the .NET Framework 2.0, and Windows XP SP2 required? In other words, what are the overall minimum client requirements for an upgrade? And if I decide to adopt SQL Server 2005 mirroring, do the client requirements change?

A To take full advantage of mirroring, you need to use SQL Native Client (for OLE DB or ODBC) or the ADO.NET 2.0 SqlClient. And the connection string should be changed to reference both the principal and the mirror.

But you only need SQL Native Client or the .NET Framework 2.0 to make the client mirror-aware. Older clients can still connect; they just won't automatically try the mirror if the principal is down.
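With the ADO.NET 2.0 SqlClient, referencing both partners is done with the Failover Partner connection-string keyword. The server and database names below are placeholders for your own:

```
Data Source=PrimaryServer;Failover Partner=MirrorServer;
Initial Catalog=ProductionDB;Integrated Security=True;
```

On the initial connection the client learns the actual partner names from the server, so the Failover Partner value serves mainly as a fallback when the principal cannot be reached at connect time.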

Without touching the clients, you can add a BIG-IP switch between the clients and the server to redirect them from one box to the other manually on failover. Or you can change the application code to attempt a connection to the mirror server when the connection to the principal fails.

The system requirements include Windows Installer 3.0 and one of the following operating systems: Microsoft Windows XP SP1 or later, Microsoft Windows 2000 SP4 or later, or Microsoft Windows Server 2003. For more information on requirements, see "Using ADO with SQL Native Client," "Updating an Application to SQL Native Client from MDAC," and "System Requirements for SQL Native Client."

SQL Server 2000 Out of Memory

Q I am experiencing strange out-of-memory exceptions on my production SQL Server 2000 Enterprise Edition SP4 on Windows Server 2003 SP1. Figure 1 shows the error I get from the logs.

Figure 1 Out-of-Memory Error

2006-06-23 14:41:40.72 spid77    WARNING:  Failed to reserve contiguous memory of Size= 24641536.
2006-06-23 14:41:40.85 spid77    Buffer Distribution:  Stolen=4800 Free=1744 Procedures=39391
                                Inram=0 Dirty=90621 Kept=0
                                I/O=0, Latched=99, Other=3063345
2006-06-23 14:41:40.85 spid77    Buffer Counts:  Commited=3200000 Target=3200000 Hashed=3154065
                                InternalReservation=587 ExternalReservation=0 Min Free=1024 Visible= 173320
2006-06-23 14:41:40.85 spid77    Procedure Cache:  TotalProcs=9555 TotalPages=39391 InUsePages=1031
2006-06-23 14:41:40.85 spid77    Dynamic Memory Manager:  Stolen=44191 OS Reserved=7648 
                                OS Committed=7619
                                OS In Use=7603
                                Query Plan=39088 Optimizer=0
                                General=4183
                                Utilities=140 Connection=7651
2006-06-23 14:41:40.85 spid77    Global Memory Objects:  Resource=1617 Locks=313 
                                SQLCache=1836 Replication=204
                                LockBytes=2 ServerGlobal=42
                                Xact=63
2006-06-23 14:41:40.85 spid77    Query Memory Manager:  Grants=0 Waiting=0 Maximum=94533 Available=94533
2006-06-23 14:41:40.88 spid77    Error: 17803, Severity: 20, State: 12
2006-06-23 14:41:40.88 spid77    Insufficient memory available..

My server has plenty of RAM (32GB). Strangely, this problem seems to occur randomly; the stored procedure that causes this error does so perhaps once or twice out of 20 executions.

The server's performance counters indicate plenty of available memory. Is the lazy writer not paging to disk fast enough? Why does the process request such a large amount of memory when executing my stored proc? Is it the number of select statements or use of temp tables?

A The amount of physical memory on the machine is not relevant here. Depending on how your boot.ini is set up, applications will have between 2 and 3GB of virtual address space. Virtual address space is a precious commodity on 32-bit systems. By default, the buffer pool will consume all but 384MB of the virtual address space. That 384MB is set aside for thread stacks and allocations from components that are either too large to be serviced by the buffer pool, or are unaware of how to allocate from the buffer pool (third-party xprocs, linked servers, COM components).

The warning indicates that an allocation of approximately 23MB failed. Whether or not allocations of this size succeed will depend on the number and placement of other allocations within the 384MB block.

You might consider removing some of the third-party xprocs that are loaded on the system, and determining whether there is any use of COM components (via sp_OACreate) or linked servers.
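As a starting point for that audit, extended stored procedures registered on a SQL Server 2000 instance can be listed from the system catalog; any non-Microsoft entries are candidates to investigate:

```sql
-- List extended stored procedures (xtype 'X' in sysobjects) on SQL Server 2000.
-- Third-party entries here allocate from the non-buffer-pool (MemToLeave)
-- region discussed above.
SELECT name
FROM   master..sysobjects
WHERE  xtype = 'X'
ORDER BY name;
```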

It's quite possible that the process is requesting such a large amount of memory when executing your stored procedure due to the use of the FOR XML clause in your sproc (which we did not reprint here). If you want more information, the Process\sqlservr\Virtual Bytes performance counter will give you an indication of how much of the virtual address space is in use. And there is a tool called VMStat (included on the CD for the book Programming Applications for Microsoft Windows by Jeffrey Richter) that will determine the largest free virtual address block size. The lazy writer is not involved in this allocation; the area of memory being used for the allocation is not in the buffer pool.

Thank you to the following Microsoft IT Pros for Their Technical Expertise: Ramon Arjona, Stephen Borg, Sandu Chirica, Robert Djabarov, Guillaume Fourrat, Osamu Hirayama, Alejandro Mihanovich, Maxwell Myrick, Uttam Parui, Shashi Ramaka, Gavin Sharpe, Vijay Sirohi, Jimmie Thompson, Madhusudhanan Vadlamaani, Jian Wang, and Dave Wickert.

© 2008 Microsoft Corporation and CMP Media, LLC. All rights reserved; reproduction in part or in whole without permission is prohibited.