SQL Q & A: Tales of Transactions

Transaction logs are critical to backup efforts, but you must ensure they’re properly configured and clearing when they’re supposed to clear.

Paul S. Randal

Clear Those Logs

Q. I’ve read that in the full recovery model, transaction log clearing only occurs when there’s a transaction log backup. I’m seeing strange behavior, though. Sometimes the log backups don’t clear the log. It seems to be the data backups that are doing it. What’s going on?

A. You’re correct: In the full and bulk-logged recovery models, the transaction log can only be cleared (portions of the log are marked as reusable) when there’s a transaction log backup. In the simple recovery model, it’s a checkpoint that clears the transaction log. For more background on logging and recovery in general, see my TechNet article, “Understanding Logging and Recovery in SQL Server.”
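
As a quick refresher, taking a routine log backup in the full recovery model looks like the following minimal sketch (the database name and backup path are made up for this example); this is the operation that gives the transaction log the chance to clear:

  -- Confirm the recovery model, then back up the log (hypothetical names and path)
  SELECT name, recovery_model_desc
  FROM sys.databases
  WHERE name = N'Sales';

  BACKUP LOG Sales
  TO DISK = N'E:\Backups\Sales_log.trn';
  -- In practice, use a unique file name per log backup so the log backup chain is preserved.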

However, there is a twist. Although the log-clearing operation occurs at the conclusion of a transaction log backup, there’s no guarantee that any portion of the transaction log will actually be cleared. The fact that a portion of the transaction log has been backed up is not, by itself, enough to allow it to be cleared: SQL Server must also no longer need that portion of the transaction log for any other purpose.

SQL Server may still require access to a portion of the transaction log because a data backup (such as a full database backup) is running. A data backup must include part of the transaction log: at a minimum, the transaction log generated while the data was being copied from the database, and possibly more.

This means that while a data backup is running, log clearing can’t occur. This is true even if a concurrent transaction log backup takes place (concurrent data and log backups have been possible since SQL Server 2005).

In this special case, the backup portion of SQL Server “remembers” there has been a transaction log backup, and remembers which portions of the transaction log were backed up. When the concurrent data backup completes, SQL Server can clear the transaction log portions that were backed up by the transaction log backup (as long as something else doesn’t need that transaction log, like an uncommitted, long-running transaction).

This behavior is called deferred log truncation. This is the behavior you’re experiencing. It looks like the data backup is clearing the transaction log, but it’s really an artifact of the concurrent transaction log backups.
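
If you want to see what’s preventing log clearing at any given moment, sys.databases records the reason. Here’s a minimal sketch (the database name is hypothetical):

  SELECT name, log_reuse_wait_desc
  FROM sys.databases
  WHERE name = N'Sales';
  -- LOG_BACKUP means the log is waiting for the next log backup;
  -- ACTIVE_BACKUP_OR_RESTORE means a data backup (or restore) is currently preventing clearing.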

Mirror, Mirror

Q. We’re in the process of implementing database mirroring to protect against data loss for one of our databases. Can you tell me what we should monitor to ensure our database mirroring is working properly?

A. When implementing database mirroring, you absolutely must monitor the sizes of the SEND queue and the REDO queue. These directly relate to data loss and downtime, respectively.

The SEND queue tracks how much transaction log from the principal database has not yet been sent to the mirror server. This portion of the transaction log describes changes to the principal database that would be lost if a disaster rendered it unavailable.

Many people think that if you configure database mirroring for high safety or high availability (also known as synchronous mirroring), the SEND queue will always be zero, because a transaction on the principal database can’t commit until all the transaction log for that transaction has been sent to the mirror server. However, this isn’t true. It’s possible, under certain circumstances, for the principal and mirror servers to lose contact with each other while the principal database remains online. In that case, the SEND queue will start to grow, increasing the data-loss exposure. Zero data loss is only guaranteed when the mirroring state is synchronized.
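
You can check the mirroring role, safety level and state from either partner with a query along these lines (it simply filters sys.database_mirroring to the databases that are actually mirrored):

  SELECT DB_NAME(database_id) AS database_name,
         mirroring_role_desc,
         mirroring_safety_level_desc,
         mirroring_state_desc
  FROM sys.database_mirroring
  WHERE mirroring_guid IS NOT NULL;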

The REDO queue tracks how much transaction log on the mirror server has not yet been replayed on the mirror database. It’s a common misconception that when the principal server sends the transaction log to the mirror server, it’s also immediately replayed on the mirror database. This isn’t true either. All that’s required is that the transaction log is written to durable storage on the mirror server.

This means that depending on the mirror server hardware (including its I/O subsystem, any concurrent workload and other factors that impact performance), there can be a sizable transaction log queue that hasn’t yet been replayed on the mirror database.

When there’s a mirroring failover, the mirror database won’t come online until the outstanding transaction log has been replayed. The time this takes is roughly proportional to the amount of un-replayed transaction log, and it represents the downtime the application and its users will experience when a mirroring failover occurs.

You should also be careful about the size of the REDO queue if you’re going to use database snapshots from the mirror database for reporting purposes. When you create a database snapshot, database recovery must process the outstanding transaction log so the database snapshot is a transactionally consistent view of the underlying database. The larger the REDO queue, the longer it will take to create the database snapshot. It will also affect the performance of the mirror server, which can make the REDO queue grow.
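
For context, creating such a reporting snapshot on the mirror server looks roughly like this (the database name, logical file name and path are hypothetical; NAME must be the logical name of the source database’s data file):

  CREATE DATABASE Sales_Reporting_Snapshot
  ON (NAME = Sales_Data, FILENAME = N'D:\Snapshots\Sales_Data.ss')
  AS SNAPSHOT OF Sales;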

There are several other metrics you should monitor, such as the network latency between the principal and mirror servers, which translates into the per-transaction overhead of implementing synchronous mirroring. You can track these metrics using Performance Monitor counters (or the sys.dm_os_performance_counters Dynamic Management View [DMV]). You could also use the Database Mirroring Monitor tool built into Management Studio and described in SQL Server Books Online; the tool lets you easily create alerts based on queue-size thresholds.
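
As a starting point, both queue sizes are exposed as performance counters, so a sketch like this will return them for each mirrored database (verify the object name on your instance, as it varies with instance naming):

  SELECT instance_name AS database_name,
         counter_name,
         cntr_value AS size_kb
  FROM sys.dm_os_performance_counters
  WHERE object_name LIKE N'%Database Mirroring%'
    AND counter_name IN (N'Log Send Queue KB', N'Redo Queue KB');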

Performance Gains

Q. I’ve been advised that I should have multiple data files to help with performance, and I understand why. Now I’m being told I should also have multiple transaction log files so SQL Server can do more efficient I/O operations to the transaction log. Is this correct?

A. No, this is not correct. Unfortunately, I hear of this recommendation every so often.

Multiple data files can help relieve I/O subsystem contention. In some cases (most commonly with tempdb), they can also relieve contention on the database allocation structures that are held in memory. There are all kinds of guidelines on how many data files to create.

The recommendation to have multiple transaction log files is extrapolated from the same recommendation for data files, but it’s incorrect. Multiple transaction log files don’t provide any gain in performance, availability, scalability or any other measurable metric.

The transaction log is, and must be, sequential in nature, so SQL Server won’t perform parallel I/Os to the transaction log even if there are multiple transaction log files present. The first file will be used in its entirety, then the second file, then the third and so on, until the transaction log wraps around and begins again at the first transaction log file. So those extra log files provide no gain.

There is one situation where a second transaction log file may be needed: the existing transaction log file fills up, the transaction log can’t clear (see the answer to the first question for additional explanation), and the log file can’t grow to accommodate more transaction log records (for example, because the volume is out of space). In that case, adding a second transaction log file on a volume with available space will temporarily allow database modifications to continue until the first transaction log file can clear.
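
In that emergency scenario, the commands look roughly like this (the database name, file name, path and size are hypothetical, and the extra log file must be empty before it can be removed again):

  ALTER DATABASE Sales
  ADD LOG FILE
  (
      NAME = Sales_log2,
      FILENAME = N'F:\SQLLogs\Sales_log2.ldf',
      SIZE = 1GB
  );

  -- Later, once the original log file has cleared and the extra file no longer holds
  -- any active log, remove it so the database is back to a single transaction log file:
  ALTER DATABASE Sales REMOVE FILE Sales_log2;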

Reduce I/O, Increase Performance

Q. We’ve been investigating the use of solid-state disks (SSDs) to try to reduce some of our I/O performance problems, but we’re confused about which databases to put on them. Can you give us some guidance on how best to use SSDs?

A. This is an interesting question. There are a variety of “definitive” answers you’ll see on Internet help forums, such as, “always put tempdb on your SSD” or “always put the transaction logs on your SSD.” Neither of these is appropriate.

There are some factors to bear in mind when considering how best to use SSDs with SQL Server:

  • SSDs are expensive, so you want to make sure you’re getting the best ROI from them.
  • SSDs provide the most performance gain for random I/O workloads, not sequential I/O workloads.
  • For any portion of an overloaded I/O subsystem, an SSD will provide a performance boost regardless of the I/O pattern, because it vastly reduces read and write latency.
  • Direct-attached SSDs should provide a more profound performance boost than those accessed over any kind of communications fabric.
  • Transaction logs are written sequentially, and read mostly sequentially (with some random reads if there’s a lot of potential for transactions rolling back).
  • Tempdb may be very lightly used on your SQL Server instance. Even if it’s moderately used, it may not experience a high amount of data-file write activity.

When you consider these factors, you realize that putting tempdb or the transaction logs on your SSDs may not be the best way to use them. Identify the portions of the I/O subsystem that are the biggest performance bottleneck for your workload. These may be the data files of a volatile online transaction processing (OLTP) database, the transaction log of a database with a heavy insert workload, or indeed tempdb. Those are the candidates for placing on your SSDs, rather than just picking some portion of the SQL Server storage without performing any investigation.
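
One way to do that investigation is to look at the read and write latencies SQL Server has recorded per database file. Here’s a rough sketch using sys.dm_io_virtual_file_stats (it assumes nothing beyond VIEW SERVER STATE permission):

  SELECT DB_NAME(vfs.database_id) AS database_name,
         mf.physical_name,
         vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_latency_ms,
         vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
  FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
  INNER JOIN sys.master_files AS mf
      ON vfs.database_id = mf.database_id
     AND vfs.file_id = mf.file_id
  ORDER BY avg_read_latency_ms DESC;

The files with the worst latencies, relative to what the workload can tolerate, are the ones worth considering for SSD placement.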

Another piece of bad advice regarding SSDs is that you can stop being concerned about index fragmentation when using SSDs to store data files. This is absolutely not the case. It’s true that the effects of less-efficient read ahead from fragmentation will be mitigated somewhat by the vastly reduced read latency, but there’s still the cost of more I/Os than necessary. You can reduce that by addressing fragmentation.

What most people don’t consider is that fragmentation is not just about read ahead. One of the side effects of the operations that cause fragmentation (called “page splits”) is that the amount of unused space on data file pages can increase dramatically, to 40 percent or 50 percent of each page, which means the “page density” (how full each page is) drops correspondingly.

If a large proportion of database indexes are fragmented, then without regular index maintenance the data file pages are going to contain a lot of empty space. Apart from reducing the efficiency of I/Os and memory usage, that also means you’re using expensive SSDs to store empty space. That’s not a good ROI in anyone’s book.
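
If you want to see both measures, sys.dm_db_index_physical_stats reports fragmentation and page density for the current database. Here’s a rough sketch (run it in the database of interest; the SAMPLED mode is used because the faster LIMITED mode doesn’t return page density):

  SELECT OBJECT_NAME(ips.object_id) AS table_name,
         i.name AS index_name,
         ips.avg_fragmentation_in_percent,
         ips.avg_page_space_used_in_percent AS page_density_percent
  FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
  INNER JOIN sys.indexes AS i
      ON ips.object_id = i.object_id
     AND ips.index_id = i.index_id
  WHERE ips.index_id > 0  -- ignore heaps
  ORDER BY ips.avg_fragmentation_in_percent DESC;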


Paul S. Randal is the managing director of SQLskills.com, a Microsoft regional director and a SQL Server MVP. He worked on the SQL Server Storage Engine team at Microsoft from 1999 to 2007. He wrote DBCC CHECKDB/repair for SQL Server 2005 and was responsible for the Core Storage Engine during SQL Server 2008 development. Randal is an expert on disaster recovery, high availability and database maintenance, and is a regular presenter at conferences worldwide. He blogs at SQLskills.com/blogs/paul, and you can find him on Twitter at Twitter.com/PaulRandal.