SQL Server Blog

Don’t Touch that SQL Server Shrink Database Button!

Updated [2016] This post continues to be a popular destination for searches like “How do I shrink a database?”  I made some copy edits. I also wanted to reiterate a default position: It isn’t usually a good idea to shrink your SQL Server database. There are exceptions, but they are exceptions.

This post is part 1 in a series on shrinking databases in SQL Server and transaction log management. You can see them all in the Shrinking & Transactions category. As the image above indicates, this isn’t a “DON’T!”; it’s more of a “Stop and think! Slow down and understand the road ahead.”

This topic has indeed been done to death. But as SQL Server consultants, unnecessary shrinking and recovery-model confusion are still common findings in our SQL Server Health Assessments, even in 2016. And in our experience, if you have misses here, you’ll have them elsewhere.

What Happens when you Shrink a Database?

When you shrink a database, you are asking SQL Server to remove the unused space from your database’s files. The process SQL Server uses to do this can be ugly and result in index fragmentation, which hurts performance in the long run. You have freed that space for the operating system to use, so you got what you asked for, at least. But if you have a growing database, that database will simply grow again. Depending on your autogrowth settings, the growth will probably be larger than necessary, and you will end up shrinking again. At best this is just extra work (shrink/grow, shrink/grow) and the resulting file fragmentation is handled reasonably well. At worst it causes index fragmentation, physical file fragmentation, and performance problems while the shrink is running.
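If you want to see the index fragmentation a shrink leaves behind, you can measure it before and after with the standard fragmentation DMV. A sketch (run it in the database in question; `'LIMITED'` is the cheapest scan mode):

```sql
-- Measure logical fragmentation per index in the current database.
-- Run before and after a shrink to see what the shrink did.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;
```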

What should we do then, leave free space in a SQL Server data file?!

Yes! In fact, I find it best to plan for where your data size needs will be, not where they are at the time of the initial go-live. Set your initial database size to what you expect it to be in the future; if you can, look a year or more ahead. That’s your initial size.

Create the database at this future size and set the autogrowth to a reasonable fixed amount in MB rather than a percentage. (I hate the default of 10%: if you have a 1TB database, that means you grow by 100GB any time a file needs to grow. Instant File Initialization in SQL Server 2005 and up makes this faster, but it still isn’t “responsible.”) Monitor your free space and look at size trends over time so you can plan a large, deliberate allocation of more space if your planning was off. Your SAN team may say that free space in the file is “just sitting there doing nothing,” but I’d rather have the space ready for the future already.
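As a sketch, pre-sizing a file and switching from percentage growth to a fixed increment looks like this (the database and logical file names, `SalesDB` and `SalesDB_Data`, are hypothetical, as are the sizes; pick values based on your own trending):

```sql
-- Pre-size the data file for expected future growth and use a
-- fixed growth increment instead of the default 10%.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, SIZE = 200GB, FILEGROWTH = 1GB);
```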

Yeah, but my transaction log is always running out of space!

Yet another pet peeve. I talk about this more in other posts in this category and in a question and answer I did on log growth at DBA.StackExchange. For here, though, suffice it to say:

If a database is in the Full recovery model, that means you intend to be able to recover it to a point in time in the event of a failure, using a combination of full backups and transaction log backups (and possibly differentials). SQL Server understands your intent, and it will not truncate the log file(s) of your database (the .LDF files). Note the distinction: truncation frees space within the file, so the file stays the same size; a shrink removes that “free” space to make the physical file smaller.

Instead, the log will continue to grow until you take a transaction log backup. Ninety percent of the time, when I am helping someone with an out-of-control transaction log growth problem, it is because they are incurring the “cost” of the Full recovery model (a growing log file, full logging of qualifying operations, etc.) with none of the benefit (being able to restore to a point in time from your transaction log backups, provided your log chain is unbroken and your log files survive)…
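A quick way to check for exactly this situation is to look at each database’s recovery model alongside the reason SQL Server can’t currently reuse its log space. A sketch:

```sql
-- Why is the log growing? A log_reuse_wait_desc of LOG_BACKUP
-- on a FULL-recovery database means SQL Server is waiting for a
-- transaction log backup you may not be taking.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;
```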

The solution here is simple: look at your backup/recovery plan! Why aren’t you taking log backups? Is it because you don’t understand them? There are plenty of resources to help, it is relatively simple to begin working on a proper backup/recovery strategy, and the face you save may be your own 🙂 Is it because you don’t need point-in-time recovery? Then consider the Simple recovery model, which truncates the log at checkpoints. Don’t jump straight to Simple, though; analyze your backup and recovery needs first.
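If log backups are new to you, the statement itself is simple. A minimal sketch (the database name and backup path are hypothetical; in practice you would timestamp the file name and run this on a schedule):

```sql
-- A minimal transaction log backup. Taking these regularly is
-- what allows SQL Server to reuse log space in Full recovery.
BACKUP LOG SalesDB
TO DISK = N'D:\Backups\SalesDB_log.trn';
```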

Yeah, that means you shouldn’t be doing BACKUP LOG WITH TRUNCATE_ONLY all the time… 🙂

If this is all new to you, check out a couple of resources on these topics:

  • Paul Randal has plenty to say about shrinking files (and says it much better than I do). Check out this blog entry and the rest of his blog.
  • Recovery Models overview – based on SQL Server 2005, but the concepts apply to later versions of SQL Server up through 2012, and you can select the version you need.

As always, feel free to e-mail me with any questions specific to your environment. I am happy to exchange quick e-mails with tips/tricks especially when it comes to my “pet peeve” topics.

Article by Mike Walsh


28 thoughts on “Don’t Touch that SQL Server Shrink Database Button!”

  1. I know this comment comes late in the game, but I have a question that you may be able to answer:

    I was under the impression that one of the reasons to shrink a database was to defragment the database files, something that can have a positive effect on transaction-heavy databases (or so Oracle would like me to believe). You seem to be saying this is true:

    “Deallocate that space and let the O/S do what it needs with it. If you have a growing database (as the majority of non-static databases tend to be), this means that that database will grow again. Depending on your autogrowth settings (another pet peeve for another post) this growth will probably be more than necessary and you will end up shrinking again… At best this is just extra work (shrink grow/shrink grow) and the resulting file fragmentation is handled alright by your I/O subsystem. At worst this is causing file fragmentation, interrupting what would have otherwise been contiguous files and potentially causing I/O related performance problems.”

    I’m not clear what the difference is. Do you mean that the OS (or a commercial disk defragmenter, for that matter) will handle defragmentation adequately without worrying about the internal shrink function? Another way of saying this: if file-level fragmentation is an issue for an organization, simply running Windows defragmentation tools will address the issue regardless of what’s done in SQL Server, and you don’t need to run a separate routine for this purpose, unlike, say, Oracle?


  2. Hey Barry

    I apologize for any confusion I may have caused with my stream of thought style and wording. I can’t speak towards Oracle as I am not as familiar with the physical file organization or the indexing structures in Oracle databases.

    For SQL Server, the major point I was trying to make was that physical file fragmentation is bad. The best case is that this fragmentation is handled somewhat alright by the O/S and its defragmentation processes and doesn’t end up a big deal; that was slightly tongue-in-cheek. The more likely case is closer to the worst case: the physical fragmentation will cause pain, the cost of the growths will cause pain, and it is all for naught when you could have right-sized the database from the beginning at a value you can grow into over time.

    Your first thought about reorganizing the database may be induced by some of the wording in SQL Server 2000 maintenance plans which sort of lump an index reorganization and shrink into what seemed like one operation (IIRC).

    There are really two main kinds of fragmentation to think about here: physical file fragmentation (which you prevent by minimizing growths and by not shrinking your production database) and index fragmentation. Index fragmentation itself comes in two forms: one where your index pages are no longer physically in the logical order of the doubly linked list, and one where your 8KB data pages, due to page splits from inserts/updates and due to delete activity, are no longer as full as they should be. The best way to handle this type of fragmentation is by rebuilding or reorganizing indexes, not by shrinking a database.
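The two index maintenance options look like this in T-SQL (a sketch; the table and index names are hypothetical, and the common guidance is to reorganize at moderate fragmentation and rebuild at heavy fragmentation):

```sql
-- REORGANIZE is an online, lighter-weight compaction of pages;
-- REBUILD drops and recreates the index structure entirely.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;
```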

    To be honest, I can’t find any positive benefit from shrinking a production database. All a shrink is designed to do is reduce the free space within your database files. Yes, there is an option during a shrink that rearranges data as well as it can while deciding where that free space comes from, but that doesn’t deal with index fragmentation.

  3. I think Brent’s answer on his blog to this same question is the right one: properly size your database in the first place, with the amount you need plus room for growth, and monitor it closely over time. You shouldn’t have a lot of growing and shrinking if you set it up right from the start.

  4. Thanks for the article Mike.

    I have a client who’s asked a question about shrinking a database. I’d be interested in your thoughts, Mike, or other people’s. Here’s the background:

    The client has moved a large table to a new database and the result is that the original database file is now 75% free space (presumably they were intending to free the space up and possibly going for performance perks).

    They asked my opinion of shrinking. I started writing, basically, that while automatic or scheduled manual shrinking is typically a bad idea, a one-off would be OK. But before sending it I Googled “don’t shrink sql server database” to find arguments against doing it, and came across this article.

    Let’s assume the growth is predictable and the auto-growth settings are sensible. Do you think there would be any problem with performing a one-off shrink in this case? How about a shrink where the shrunk file size is manually set a little greater than one auto-growth unit, so that an auto-grow will not immediately occur?

    Index defrag is regularly performed and I’m also recommending performing a full backup first and doing this in a change window in case a restore is required.

    Perth, Australia

    • Hey Chris –

      Thanks for the comment. I’ll give a quick response now before hitting bed. Feel free to shoot me an e-mail (mike at the domain of this blog works); the e-mail address you left looks like a fake one to prevent spam.

      I would say in this case a shrink sounds like an alright idea. Set the final size to the size you expect the file to be in a couple/few years, if possible. This will prevent the file from having to autogrow in small chunks. While at it, I would also suggest changing the auto-growth from the default 1MB to something more in line with the size of the DB. Paul and Kimberly blog about this over at their site, and that link from Paul at the end of the above article is a good place to look for more.

      The goal is to avoid physical file fragmentation from lots of growths. Growths were more expensive before Instant File Initialization arrived in 2005, but the effect on physical files (on-disk file fragmentation, not index fragmentation) still exists.

      So I would say a shrink of the file to an appropriate size is not a horrible solution. I would caution you not to shrink the whole database but just the data files, and to consider the impact of VLF fragmentation.
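One way to gauge VLF fragmentation is to count the virtual log files in the log. A sketch (DBCC LOGINFO is undocumented but long-standing; run it in the database in question):

```sql
-- Returns one row per virtual log file (VLF) in the current
-- database's transaction log. A very high row count suggests
-- the log grew in many small increments.
DBCC LOGINFO;
```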

  5. Pingback: Brent Ozar
  6. I would like your opinion please.

    My database was about 300MB a few days ago. Due to an error, it grew to 17GB overnight. I have fixed the problem, to an extent, by deleting the swollen records, and database backups are in the region of 300MB again. However, the .mdf file still remains 17GB. Is it a good idea to shrink the database?

    My concern is that this database is very small, and so is the server where it lives; in relative terms, 17GB (x2, production and test) is a lot of space to tie up that could be used more effectively otherwise.

    Thanks in advance

    • Hi Natasha –

      That is some error to make the DB grow to 17GB from 300! 🙂 This was your .mdf not your .ldf (log file) right?

      While it is not good to be in a repeated shrink-grow-shrink-grow cycle, or to shrink databases frequently on a production system, a one-off isn’t bad in a situation like this. What I suggest you do is shrink the specific database FILE that needs it, leave a healthy amount of free space in the file (up to a few years’ growth would be ideal, even as a guess), and then rebuild your indexes afterward.
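That operation can be sketched as follows (the logical file name, table name, and target size are hypothetical; DBCC SHRINKFILE takes a target size in MB, so pick one that leaves free space for future growth):

```sql
-- Shrink a single data file to a target size in MB, leaving
-- room to grow, then rebuild indexes to undo the fragmentation
-- the shrink introduces.
DBCC SHRINKFILE (N'MyDb_Data', 2048);   -- target size: 2048 MB
ALTER INDEX ALL ON dbo.MyBigTable REBUILD;
```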

        • Are you saying that if you’re forced to perform a shrink (i.e. running out of HD space), you should rebuild and defragment your indexes?

        • Hey Sean –

          Yeah, that is what I am saying. You should perform your index maintenance operations after a shrink, when you have the allocated time on the system to do it, since shrink operations can cause fragmentation. I also would not shrink the data files down to no free space, since they will likely grow again.

  7. Pingback: Argenis Fernandez
  8. Pingback: Aaron Bertrand
  9. Pingback: Aaron Bertrand
  10. Sorry for being late to the party; I’ve got a question. I think having about 20% extra space in the data file is reasonable (74GB data file). Recently we’ve had a table using the text data type that is involved in processing a lot of data. Every once in a while, when we restart the SQL service or reboot the server, SQL Server bugs out and does not release the space used by the text column. Next thing you know, the .mdf grows to 100GB. Once we fix the issue I’m left with 45GB of free space in the data file (45%). So I’m conflicted: should I just leave it for the next time this happens, to avoid autogrowth, or is it reasonable to shrink it back down to 20% free space?

    Thank you.

    • Hey Nick,

      I’m not 100% sure I understand the question. I am assuming that the text data type column is a temporary column used in processing, and the data is then deleted/removed? There are a few variables here, but I’ll send you an e-mail with a couple of questions.

  11. OK, I’ve understood what you explain, but in SQL Server 2008 the transaction log backup doesn’t reduce the size of the LDF file. So is it necessary to do it manually? My .ldf is 74GB and my .mdf 25GB. The LDF grows by about 1GB per day.

    This is my script:

    USE mydatabase;
    ALTER DATABASE mydatabase SET RECOVERY SIMPLE;
    DBCC SHRINKFILE (mydatabase_Log, 1);
    ALTER DATABASE mydatabase SET RECOVERY FULL;
    But it doesn’t work. I get the message: “Cannot shrink log file %d (%s) because of minimum log space required.”

    The database is in Full recovery. I have scheduled a complete backup every day at 22:00 and a transaction log backup every hour from 08:00 to 21:00.
    Any idea???

    • You’ll end up with the free space that the table was holding left in your database. This normally isn’t a problem, as other growth in the DB will just end up using that free space. Also, the truncate command (if you literally mean the TRUNCATE TABLE command) will be a little kinder to your transaction log and won’t cause it to grow the way a DELETE would.

