
How To Manage The SQL Server Transaction Log File

“Help! My SQL Server transaction log file has run out of space!”, “Why is my SQL Server log file so big?”, “How do I shrink my SQL LDF?” – I hear these questions all the time. I’ve been blogging since 2009, and way back then I wrote some posts about shrinking; to this day, one of those posts is consistently my number one hit in Google searches. We do SQL Server health assessments for clients all the time, and “Poor SQL Server Transaction Log File Management” is consistently among my top three findings. It doesn’t need to be this way. In this post in my Straight Up SQL Server tips series, I want to focus on a few key tips to make your life easier here.

What the SQL Server Transaction Log File Does

SQL Server uses “Write-Ahead Logging,” like many RDBMSs. This means a transaction’s log records are written to the log file before the transaction is considered committed – it’s part of the commit. That design serves the two main purposes of the transaction log file:

Crash/Restart Recovery

One purpose of the transaction log file is crash/restart recovery – rolling forward work that was committed before a crash or restart (redo) and rolling back work that was started but never finished (undo). If a transaction started but never committed (it was rolled back, or the crash/restart happened first), it’s the log’s job to say, “Hey, this never really finished – let’s roll it back” during recovery. If a transaction did commit and your client application was told it was finished (even if the change hadn’t yet been hardened to your data file), it’s the log’s job to say, “Hey, this really happened – let’s roll it forward and make it like the applications think it is” after a restart. There is more to it, but that is the main purpose.

Point In Time Recovery

The other purpose of the SQL Server transaction log file is to give us the ability to recover to a point in time after an “oops” in a database, or to guarantee a recovery point in the event of a hardware failure involving the data and/or log files of a database. Since the transaction log contains the records of transactions that have been started and finished, SQL Server can (and does) use that information to get a database back to where it was before an issue happened. But that isn’t always an available option for us: for it to work, the database has to be in the right recovery model, and we have to take log backups.

Myth #1 – There is a myth involved in these facts. Many people think that if you don’t need point in time recovery, the transaction log file is useless. Obviously, for crash recovery, it isn’t. Imagine you started a debit/credit transaction: the debit happened, but the server crashed before the credit could be written. Without the log, crash recovery couldn’t roll that half-finished work back, and your data would be left inconsistent. The transaction log is a necessary and required component. Point in time recovery is optional; consistent crash recovery isn’t.
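
To make that concrete, here’s a minimal sketch of that debit/credit written as one transaction (dbo.Accounts and its columns are made-up names, just for illustration):

    -- Illustrative only: dbo.Accounts, Balance, and AccountId are hypothetical names.
    BEGIN TRANSACTION;

        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1; -- debit
        UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2; -- credit

    COMMIT TRANSACTION;

    -- If SQL Server crashes after the debit but before the COMMIT hardens to the
    -- log, crash recovery uses the transaction log to undo the debit.
    -- No log, no consistent recovery - regardless of recovery model.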


The Recovery Model’s Impact on the Transaction Log File

So you understand basically what the transaction log does: every transaction goes there first, but point in time recovery is optional. That option is what we’ll talk about here. SQL Server has three recovery models, but I’m going to talk about the “main” two – FULL and SIMPLE. Bulk Logged is a variation of the two and not typically a problem here; if you are in bulk logged, you probably have a good idea of what you are doing.

Simple Recovery Model

In this model, you are telling SQL Server: “I am fine with you using your transaction log file for crash and restart recovery…” (You really have no choice there. Look up the ACID properties and that should make sense quickly.) “…but once you no longer need it for that crash/restart recovery purpose, go ahead and reuse the log file.” SQL Server listens to this request in Simple Recovery and keeps ONLY the information it needs for crash/restart recovery. Once SQL Server is sure it can recover because the data is hardened to the data file (more or less), that portion of the log is no longer necessary and is marked for truncation – which means it gets reused.
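
If you want to see which model each of your databases is in – or change one – here’s a quick sketch; YourDatabase is a placeholder to replace with a real database name:

    -- Check the recovery model of every database on the instance
    SELECT name, recovery_model_desc
    FROM sys.databases;

    -- Switch to SIMPLE only if you truly do not need point in time recovery
    ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;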

Full Recovery Model

With FULL mode, you are telling SQL Server that you want to be able to recover to a specific point in time, as long as your log file is available, or to a specific point in time that is covered by a log backup. In this case, when SQL Server reaches the point where it would be safe to truncate the log file in Simple Recovery Model, it will not do that. Instead, it lets the log file continue to grow, and under normal circumstances it will keep growing until you take a log backup (or run out of space on your log file drive). FULL is the default recovery model for new databases: it’s the default on a new installation for the model database, and every database that gets created is effectively a copy of model.

Problem #1 – This is where the first problem most people bump into comes from. Your transaction log is probably growing out of control because you are in FULL RECOVERY MODEL and not taking regular log backups. There are other causes too, for sure, but when I bump into folks running nightly shrink jobs, I’d say this is why about 75% of the time.
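
The fix isn’t shrinking – it’s taking regular log backups (or switching to SIMPLE if you genuinely don’t need point in time recovery). A bare-bones log backup looks like the sketch below; the database name and path are placeholders, and in practice you’d schedule this through your backup tooling at a frequency driven by how much data you can afford to lose:

    -- Placeholder names/paths - point this at your database and backup target
    BACKUP LOG YourDatabase
        TO DISK = N'X:\SQLBackups\YourDatabase_log.trn';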

SQL Server Transaction Log Management Gotchas

So with those facts in mind, I want to talk about a few gotchas. I cover a lot more in a rather lengthy answer I wrote over at DBA.stackexchange.com, but here are some of the more common ones:

Transaction Log Too Big/Ran Out of Space

Again, this is most likely a recovery model concern.

But it can also be a few other things – most commonly, long-running transactions. If you think about how the log file works and what it is for, even in Simple mode the log can’t truncate past an open, long-running transaction. If you have a transaction that takes a long time or makes a lot of changes, the log cannot truncate on checkpoint any of the changes that are still part of open transactions, or any that started after that transaction started. It has to keep them to protect you for rollback or roll forward.

Very large updates, inserts, or deletes count here too – even if you don’t write a BEGIN TRAN, each statement runs as its own auto-commit transaction – and so do your index rebuilds. So you need to size your log appropriately.
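
To get a feel for how much log those big operations actually use, you can watch log size and percent used while they run:

    -- Log file size and percent used for every database on the instance
    DBCC SQLPERF (LOGSPACE);

    -- Or, for the current database only (SQL Server 2012 and later)
    SELECT total_log_size_in_bytes, used_log_space_in_bytes, used_log_space_in_percent
    FROM sys.dm_db_log_space_usage;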

Transaction Log Performance – WRITELOG Waits

Like I said, the T-Log is performance sensitive – everything needs to be written there first. Before a transaction is considered committed (before the app can move on after you click that button, and before the locks it held are released), its log records need to be written to the transaction log file. This means your log file needs fast storage that can handle writes quickly. The log is written sequentially, so it likes mirrors or RAID 10 as a result, but you don’t have to change a modern storage system that presents whatever RAID it presents – just make sure you optimize for log file writes when you can.
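
You can see how much your instance has been waiting on log writes since the last restart; a consistently high average wait here points at log-write latency, often slow storage under the LDF:

    -- Cumulative WRITELOG waits since the instance last restarted
    SELECT wait_type,
           waiting_tasks_count,
           wait_time_ms,
           wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = N'WRITELOG';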

Virtual Log File Fragmentation

Too many (or too few) VLFs can cause other problems. It’s not often a performance problem within your applications, but it can make restores slower and log reuse less efficient. A VLF is sort of a file within the log file – a “Virtual Log File” that SQL Server uses to track where it is in its writes and to manage log reuse. Every time the log file grows, you get a batch of new VLFs – historically at least 4 per growth, with larger growths creating more (the exact formula varies by version and growth size) – so the more often the file grows, and the larger each growth, the more VLFs you accumulate.

It’s hard to say exactly what the right number is, but when I see more than a few hundred VLFs I start paying attention, and when I see more than 1,000 or so I definitely say something. The best approach is to work out your log file needs (there are many methods, but I think it’s best to watch for a week or two, see where the log gets to, then add 10% or so) and grow the file in chunks. So: shrink the log file down this one time, then grow it in chunks large enough to give you a reasonable number of VLFs – 4,000–8,000 MB chunks depending on how large your log file needs to be; sometimes 16,000 MB chunks work. For most log files, I want to be somewhere between 50 and 150 VLFs, depending on how large the LDF is.
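
Here’s a sketch of both halves of that: counting VLFs (sys.dm_db_log_info needs SQL Server 2016 SP2 or later; DBCC LOGINFO works on older versions), then the one-time shrink-and-regrow. YourDatabase, the logical file name, and the chunk sizes are placeholders to adapt:

    -- VLF count per database (SQL Server 2016 SP2+)
    SELECT d.name, COUNT(*) AS vlf_count
    FROM sys.databases AS d
    CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
    GROUP BY d.name
    ORDER BY vlf_count DESC;

    -- One-time fix sketch: shrink the log, then grow it back in chunks.
    -- Database name, logical file name, and sizes are examples only.
    USE YourDatabase;
    DBCC SHRINKFILE (YourDatabase_log, 1);

    ALTER DATABASE YourDatabase
        MODIFY FILE (NAME = YourDatabase_log, SIZE = 8000MB);
    ALTER DATABASE YourDatabase
        MODIFY FILE (NAME = YourDatabase_log, SIZE = 16000MB);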

Article by Mike Walsh
Mike loves mentoring clients on the right systems or High Availability architectures because he enjoys those lightbulb moments and loves watching the right design and setup come together for a client. He started Straight Path in 2010 when he decided that, after over a decade working with SQL Server in various roles, it was time to take his experience, passion, and knowledge and help clients of all shapes and sizes. Mike is a husband, father to four great children, and a Christian. He’s a volunteer firefighter and EMT in his small town in New Hampshire, and when he isn’t playing with his family, solving SQL Server issues, or talking shop, it seems like he has plenty to do helping his family run a small farm in NH raising beef cattle, chickens, pigs, sheep, goats, honeybees, and who knows what other animals have been added!

