5 Tips for Friday: Transaction Log

For this week’s 5 Tips for Friday, I want to talk about some best practices for the transaction log. The transaction log is a critical component of a SQL Server database, affecting both its performance and its recoverability, so it warrants special care and maintenance.

  1. Back it up often: The goal of transaction log management is to maintain the log at a reasonable size. There is no single size that is reasonable for every database; it depends heavily on the size of the transactions you perform. If you perform operations on tables that are terabytes in size, no amount of log backups will prevent you from needing a log file big enough to handle those transactions. With regular, frequent backups, you should be able to keep the log at whatever size your workload has shown to be reasonable.

    My general recommendation is to back up the log at least every half hour. The busier a database is, the more frequently its log needs to be backed up.
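
    As a sketch of what that looks like (the database name and backup path here are placeholders, not anything from this post), a log backup is a single statement you can schedule with a SQL Server Agent job:

    ```sql
    -- Back up the transaction log; schedule this at whatever interval
    -- your workload and RPO require (see tip 2).
    BACKUP LOG YourDatabase
    TO DISK = N'E:\Backups\YourDatabase_log.trn';
    ```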

  2. RPO is your maximum log backup interval: Recovery Point Objective (RPO) is the amount of data you and the business have decided it is acceptable to lose in the event of a disaster requiring a restore from backup. In a disaster, you stand to lose everything since the last backup, so your potential data loss is the amount of time that has passed since that backup.

    So if you have an RPO of 10 minutes, 30-minute log backups will not cut it; the log would have to be backed up at least every 10 minutes to meet that RPO. You can back it up more frequently than the RPO requires, but never less frequently.
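
    One way to sanity-check your log backup cadence against your RPO (a sketch that uses the backup history SQL Server keeps in msdb) is:

    ```sql
    -- Minutes since the last log backup for each database, to compare
    -- against your RPO. Databases in SIMPLE recovery are excluded
    -- because they don't take log backups.
    SELECT d.name,
           DATEDIFF(MINUTE, MAX(b.backup_finish_date), SYSDATETIME())
               AS minutes_since_last_log_backup
    FROM sys.databases AS d
    LEFT JOIN msdb.dbo.backupset AS b
           ON b.database_name = d.name
          AND b.type = 'L'              -- 'L' = log backup
    WHERE d.recovery_model_desc <> 'SIMPLE'
    GROUP BY d.name;
    ```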

  3. Repeatedly shrinking the log file will make you go blind: Well, okay, you won’t go blind, but you will hurt the performance of operations that scan the log file, due to a high number of virtual log files (VLFs). VLFs are the logical units of a transaction log, and operations that scan the log file process it one VLF at a time. When you hear about a database taking a really long time to come online after a failover or restart, a high VLF count is usually the cause.

    It also hurts performance because if you are shrinking the log regularly, then clearly it is growing back out to a larger size. An auto-growth requires stopping all activity that hits the log file for the duration of the growth, and since instant file initialization (IFI) doesn’t apply to the log file, that means all queries in the database have artificial pauses injected into them.

    You can check how many VLFs a database has by running DBCC LOGINFO(); in the database. The command returns one row for each VLF, so the number of rows returned is the number of VLFs you have.
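
    On newer versions of SQL Server, the sys.dm_db_log_info DMV returns the same information as a regular result set, so you can let the server do the counting; a quick sketch:

    ```sql
    -- sys.dm_db_log_info returns one row per VLF, so COUNT(*) is the
    -- VLF count for the current database.
    SELECT COUNT(*) AS vlf_count
    FROM sys.dm_db_log_info(DB_ID());
    ```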

  4. Shrinking the log should be for emergencies only: People will say that you should never shrink the log file. What they mean is that you should not shrink it repeatedly. Shrinking the log once to right-size the file is perfectly acceptable, but it should be viewed as a one-time, emergency operation.

    As an example, let’s say you inherit an existing SQL Server that has not had proper log backups performed and the log file is 10 times the size of the data file. This is a situation where you should perform a one-time shrink and then put proper log backups in place to maintain the log at that size.

    If you do need to shrink it, don’t shrink it to a really small size. Shrink it to a size you think is reasonable for that database.
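
    As a sketch of what that one-time operation might look like (the database name, logical file name, and sizes are placeholders; pick values that fit your workload):

    ```sql
    USE YourDatabase;

    -- Emergency, one-time shrink of an oversized log...
    DBCC SHRINKFILE (YourDatabase_log, 1024);   -- target size in MB

    -- ...then grow it back to a right-sized value in a single step,
    -- which keeps the resulting VLF count low (see tip 3).
    ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase_log, SIZE = 8192MB);
    ```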

  5. 1 is the only number that you’ll ever need: I still come across databases with more than 1 log file, but I think most people realize that SQL Server writes to the log file sequentially and will never use more than 1 log file at any given point in time. I also occasionally run into people who insist that they “witnessed” the database writing to 2 log files in parallel.

    To this latter group, I usually reply that lots of people say they witnessed Elvis Presley coming out of a 7-11 in Phoenix, Arizona, but that doesn’t make it true. They didn’t see what they thought they saw. A couple have even tried to reproduce it for me and were surprised when they couldn’t get it to work, because it worked before… yeah, sure it did.

    No, it didn’t.

    There are reasons why you might need to temporarily add another log file. For example, a massive operation, like partitioning a humongous table, might require more log space than the drive can accommodate. You may not be able to expand the drive, or may not want to because once you’ve grown out the drive in Windows, you can’t shrink it back down. In that case, it may make more sense to add a 2nd drive for the extra log space and put a 2nd log file there. You can then drop the 2nd log file and the 2nd drive when you have completed your work.
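
    A sketch of that temporary arrangement (the database name, logical file name, path, and size are all placeholders):

    ```sql
    -- Add a temporary 2nd log file on the 2nd drive for the duration
    -- of the massive operation.
    ALTER DATABASE YourDatabase
    ADD LOG FILE (
        NAME = YourDatabase_log2,
        FILENAME = N'F:\SQLLogs\YourDatabase_log2.ldf',
        SIZE = 50GB
    );

    -- After the operation completes, back up the log so the extra
    -- file holds no active log records, then drop it. REMOVE FILE
    -- will fail if the file is still in use.
    ALTER DATABASE YourDatabase
    REMOVE FILE YourDatabase_log2;
    ```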
