SQL Q&A: Speed and Size Matter

The relative speed of SQL Server queries and the size of data files and transaction logs seem to be primary performance factors.

Paul S. Randal

Plan Your Query

Q. We recently had an issue with a query that was taking a while to run. The query ran faster after we rebuilt the clustered index on the table it involved. I couldn’t see any scans in the query plan, so why would fragmentation have affected the query runtime?

A. In this case, it seems index fragmentation had nothing to do with the query runtime. A non-optimal query plan was likely the source of the problem.

When you rebuild an index, the statistics for that index are automatically rebuilt with the equivalent of a full scan. All the column values are considered when the statistics are created, so the result is a complete picture of the distribution of values in the index.
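
If you only want the statistics refresh, you don't necessarily need the full index rebuild to get it. Here's a minimal sketch, with hypothetical table and index names, of refreshing the statistics directly and checking when they were last updated:

-- Refresh the index's statistics with a full scan, without rebuilding the index
UPDATE STATISTICS dbo.MyTable IX_MyTable_Col1 WITH FULLSCAN;
-- See when each set of statistics on the table was last updated
SELECT [name] AS [stats_name], STATS_DATE ([object_id], [stats_id]) AS [last_updated]
FROM sys.stats
WHERE [object_id] = OBJECT_ID (N'dbo.MyTable');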

All query plans created using those statistics are invalidated and will be recompiled. Recompiling means that the next time you execute the query, the query optimizer goes through the process of choosing a new, hopefully more efficient way of producing the query results; that chosen strategy is the query plan.

In this case, your index rebuild triggered a query plan recompile, and the new plan was more efficient than the previous one. There are a number of possible reasons for this. The poorly performing query plan might have been a perfectly good choice, allowing fast queries, when it was first compiled. As the distribution of data values within the table changed over time, that plan could have become less and less suitable.

The old query plan could have been using a nonclustered index because a particular column (part of the nonclustered index) was highly selective. It therefore made sense to use the nonclustered index to find the matching rows and then retrieve the remaining columns from the table itself. That retrieval is called a key lookup operation.

If the data distribution changed drastically such that the column was no longer highly selective, this could have caused a large number of costly key lookups. Considering the new data distribution, a better plan would have been to use a different nonclustered index.
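
To make that concrete, consider a hypothetical query like the one below, where IX_Orders_CustomerID is a nonclustered index on CustomerID alone:

SELECT [OrderID], [OrderDate], [TotalDue]
FROM dbo.Orders
WHERE [CustomerID] = 42;
-- If CustomerID is highly selective, a seek on IX_Orders_CustomerID plus one key lookup per
-- matching row (to fetch OrderDate and TotalDue) is cheap. If the data changes so that many
-- rows match, those key lookups add up and a different index, or a scan, becomes the better plan.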

When the clustered index was rebuilt, the statistics were updated. This caused a plan recompile, which chose the more selective nonclustered index. This in turn produced a more efficient plan.

Although I'm only hypothesizing about the cause of the speedup, you can see what I mean about the index rebuild simply being the trigger for plan recompilation. It might not have directly addressed the root cause of the performance problem at all.
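
In fact, if a stale plan is the real culprit, you can trigger recompilation without touching the index at all. A minimal sketch, using a hypothetical table name:

-- Mark all cached plans that reference the table for recompilation on next use
EXEC sp_recompile N'dbo.MyTable';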

More Files, More Space

Q. I have a filegroup with two files and both are very full. I want to add some more space to the filegroup, so I’m going to add two more files and then have SQL Server rebalance the data across all four files. Is that possible?

A. Unfortunately, there isn't a good way to rebalance data across the files in a filegroup after you add new files for extra space. It's become fairly common knowledge, and I've blogged about it in the past, that having more than one file per filegroup can lead to performance gains for some workloads.

That's a big generalization, though. How much gain you'll achieve depends on the I/O subsystem, the data file layout and the workload, and at some point adding more data files actually becomes a performance detriment. Check out these blog posts on benchmarking multiple data files and multiple data files on solid-state drives (SSDs).

SQL Server simply doesn’t have a rebalancing mechanism for data in a filegroup. The data file where the next allocation will come from is determined by round-robin allocation and proportional fill. Round robin is where allocations happen from each data file in turn. There will be an allocation from file one, then an allocation from file two, then back to file one again. However, the allocations are done in a proportional manner. More allocations are done from data files that have proportionally more free space than other data files in the filegroup.

The basic premise of proportional fill is that each file has a weighting, where files with less free space will have a higher weighting. Files with lots of free space will have a lower weighting. The files with lower weightings will be allocated from more frequently, meaning those files with more free space will be allocated from the most.
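
You can see the free-space imbalance that drives these weightings by comparing the size and used space of each file in the current database, along these lines (the division by 128 converts 8KB pages to megabytes):

SELECT [name] AS [file_name],
  [size] / 128 AS [size_mb],
  FILEPROPERTY ([name], 'SpaceUsed') / 128 AS [used_mb],
  ([size] - FILEPROPERTY ([name], 'SpaceUsed')) / 128 AS [free_mb]
FROM sys.database_files;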

What all this means is that when you add new files to a full filegroup, subsequent allocations will come mainly from the new files. They will have much lower proportional-fill weightings than the older files that have inherently more data. The new files become allocation hot spots, leading to potentially lower overall performance with some workloads.

You can’t circumvent the proportional fill algorithm. Nor can you change the weightings. Even trying something like rebuilding the indexes in the filegroup won’t work, as the allocations for the new indexes will come from the new data files.

If you want to add more files for extra space, the best approach is to create a new filegroup containing those files. Then move the table and index data to the new filegroup using the CREATE INDEX … WITH (DROP_EXISTING = ON) command, specifying the new filegroup as the target location. After you've moved all the data, you can drop the old, empty filegroup. You can even move LOB (large object) data to the new filegroup, using a trick from Kimberly Tripp.
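
As a minimal sketch of that process, with hypothetical database, filegroup, file, table and index names (match the existing clustered index's key columns and uniqueness, only changing its location):

ALTER DATABASE MyDBName ADD FILEGROUP [NewFG];
ALTER DATABASE MyDBName ADD FILE (NAME = N'MyDBName_NewFG_1', FILENAME = N'J:\Data\MyDBName_NewFG_1.ndf', SIZE = 10GB) TO FILEGROUP [NewFG];
ALTER DATABASE MyDBName ADD FILE (NAME = N'MyDBName_NewFG_2', FILENAME = N'K:\Data\MyDBName_NewFG_2.ndf', SIZE = 10GB) TO FILEGROUP [NewFG];
-- Re-creating the clustered index on the new filegroup moves the table's data there
CREATE CLUSTERED INDEX [CIX_MyTable] ON dbo.MyTable ([MyKeyColumn])
  WITH (DROP_EXISTING = ON) ON [NewFG];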

Clear the Log

Q. I recently had a problem with a transaction log file that grew very large. I’ve been unable to shrink it. Can you suggest some things for me to check?

A. There are two common causes of a DBCC SHRINKFILE operation not working correctly on the transaction log. As a side note, shrinking a log file doesn't introduce performance-damaging index fragmentation in the way that a data file shrink does. However, it should still be a rare operation.

A log file shrink simply removes any inactive, currently unused portions at the end of the transaction log file. These portions of the transaction log are known as virtual log files (VLFs). Two problems can prevent the shrink from removing them: the operation that allows VLFs to become inactive hasn't been performed, or the VLFs at the end of the transaction log aren't inactive.

VLFs become inactive through a process known as “clearing the log.” You can do this with a checkpoint if using the SIMPLE recovery model. You can also do this with a transaction log backup if using the FULL or BULK_LOGGED recovery models. As long as the transaction log records in the VLFs aren’t required by SQL Server in any way, you can make the VLFs inactive.
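
For reference, the log-clearing operations themselves are straightforward (the database name and backup path here are placeholders):

-- SIMPLE recovery model: a checkpoint clears the log
CHECKPOINT;
-- FULL or BULK_LOGGED recovery models: a log backup clears the log
BACKUP LOG MyDBName TO DISK = N'J:\Backups\MyDBName_log.trn';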

SQL Server may still require the log records for certain situations, such as if they’re part of a long-running transaction, if they haven’t been scanned by the replication Log Reader Agent job, or if they’re in the process of being sent to a database mirror or Availability Group replica. You can ask SQL Server why a particular transaction log won’t “clear” using the following command:

SELECT [log_reuse_wait_desc] FROM sys.databases WHERE [name] = N'MyDBName';

Use the output from that command as an indicator of what to do next. Once the transaction log has cleared, if DBCC SHRINKFILE still can't shrink the file as far as you'd like, that means it could only shrink down to the currently active VLF (or VLFs), which might happen to be in the middle of the transaction log file. In that case, perform the log-clearing operation again and then shrink again.
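
To see which VLFs are still active and then attempt the shrink, something like the following can help. The undocumented DBCC LOGINFO command lists the VLFs, and the file ID and target size passed to DBCC SHRINKFILE are just examples:

-- A Status value of 2 marks a VLF that is still active and can't be removed by a shrink
DBCC LOGINFO (N'MyDBName');
-- Shrink the log file (often file ID 2) down toward a 1GB target
DBCC SHRINKFILE (2, 1024);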

You might have to do this a few times and, ultimately, it might be difficult or impossible to shrink the transaction log to its minimum size on a busy production database. However, addressing these common problems should help you shrink the transaction log file enough to be satisfied. You can read more about these topics in my February 2009 TechNet Magazine article, “Understanding Logging and Recovery in SQL Server.”

I/O Integrity

Q. I keep seeing messages in the error log of one of my SQL Server instances that I/Os have to be tried several times before they succeed. This looks ominous to me. Can you explain what the messages mean?

A. These messages are instances of message 825. This message was introduced in SQL Server 2005. It’s an early warning that your I/O subsystem has integrity issues.

If SQL Server issues a read I/O and the I/O fails (either the OS tells SQL Server the I/O failed, or data returned by the OS is judged by SQL Server as corrupt), SQL Server will retry the read operation four more times to see if one of them will succeed. The premise for this is that sometimes I/O subsystems have transient faults, so retrying a failed I/O might work on a subsequent attempt. This avoids the immediate possibility of downtime.

If none of the retry attempts succeed, SQL Server raises an 823 or 824 error, and the connection is broken (as these errors are severity 24). If one of the retry attempts succeeds, the workload continues as normal, and SQL Server writes the 825 message to the error log.

The 825 message has the following format:

Msg 825, Level 10, State 2, Line 1.

A read of the file 'J:\SQLskills\MyDatabase_DF1.ndf' at offset 0x000004AA188000 succeeded after failing 1 time(s) with error: incorrect checksum (expected: 0x33d1d136; actual: 0x0a844ffd). Additional messages in the SQL Server error log and system event log may provide more detail. This error condition threatens database integrity and must be corrected. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

What the message is really saying is that the I/O subsystem is starting to fail. A similar facility exists in Exchange Server, which is where the idea for this mechanism originated.

Although this feature is useful, the 825 message is only severity 10, which means informational. Unless you're looking through the error logs or have an Agent alert set up for message 825, these critical messages can easily go unnoticed. You should have an alert set up for 825 messages and take action as soon as a read-retry message occurs. You can read more about this message, and about setting up an alert to catch it, in this blog post.
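
As a rough sketch of such an alert (the alert and operator names are placeholders, and the operator itself must already exist):

EXEC msdb.dbo.sp_add_alert
  @name = N'Error 825: read-retry required',
  @message_id = 825,
  @severity = 0,
  @include_event_description_in = 1;
EXEC msdb.dbo.sp_add_notification
  @alert_name = N'Error 825: read-retry required',
  @operator_name = N'DBA Team',
  @notification_method = 1;  -- 1 = e-mail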

Paul S. Randal

Paul S. Randal is the managing director of SQLskills.com, a Microsoft regional director and a SQL Server MVP. He worked on the SQL Server Storage Engine team at Microsoft from 1999 to 2007. He wrote DBCC CHECKDB/repair for SQL Server 2005 and was responsible for the Core Storage Engine during SQL Server 2008 development. Randal is an expert on disaster recovery, high availability and database maintenance, and is a regular presenter at conferences worldwide. He blogs at SQLskills.com/blogs/paul, and you can find him on Twitter at twitter.com/PaulRandal.