SQL Server In Place Upgrades: Too Expensive?

About eight years ago, I wrote a (kind of long) answer to a dba.stackexchange question, “Are SQL Server In-Place upgrades as ill-advised as they used to be?” My summary answer at the top of my tome was:

Short Answer – In-place is okay. You can review your configuration afterward and implement the best practices for SQL Server 2012 (edit: or SQL Server 2019/SQL Server 2016 or whatever version you are on. Remember, I wrote this in 2012 😉 )
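To make “review your configuration afterward” concrete, here is a minimal T-SQL sketch of the kind of post-upgrade check I mean. The database name and the compatibility level 150 (SQL Server 2019) are placeholders; substitute your own database and your target version.

    -- Databases keep their old compatibility level after an upgrade; list the user databases.
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE database_id > 4;

    -- Once you have tested, bump a database to the new version's level
    -- (150 = SQL Server 2019; use the level for your target version).
    ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 150;

    -- A common post-upgrade sanity check, especially for databases that
    -- originated on much older versions.
    DBCC CHECKDB ('YourDatabase') WITH DATA_PURITY;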

But then I spent the rest of the answer counting all the reasons why I, frankly, just hated doing a SQL Server upgrade in place. Surely, though, eight years later, I’m here writing a post recanting my awful advice, right? I mean, I was only 12 years into SQL then, a relatively new MVP, just trolling Stack Exchange, bored, on the Monday after Thanksgiving, right? No… I’m here to tell you that “back then Mike” (the one with hair on his head) was right. In fact, he was right.

Let me walk through my reasons with you. I’m going to steal some highlights from myself from eight years ago, but I’m giving attribution. Go check out my answer if you want even more words…

Why I Hate In-Place SQL Server Upgrades

And why you should hate upgrading a SQL Server in place, too.

Why your CFO should hate them: first, a story. A few years ago, a client asked us our thoughts. I gave my standard answer (NO! NO! NO!) and explained that 95%, maybe even 99%, of the time they go fine with no issues. They were upgrading SQL Server 2014 to SQL Server 2016, I believe, so an in-place upgrade didn’t sound that scary to them. I talked about how ugly it is when things go bad, and touted the benefits of a migration upgrade. Anyway, “expediency won.” Things went bad. Then when those things went bad, other things went bad. The VM snapshot they thought they had taken beforehand wasn’t right. There were MANY databases on that server, and a lot of (billable) time was spent first trying to fix the situation, then recovering the environment and all the databases.

That’s just one anecdote. And, sadly, we made some money on that one. But there are others just like it. Listen. If you want to try and perform an in-place upgrade of your production database server, it may work. In fact, it probably will work. But what if it doesn’t?

Just don’t.

Some Reasons In-Place SQL Server Upgrades Stink

I’ll list some of the biggest reasons here. My original answer covers all the questions you never knew you wanted answered, so visit it if you wish. But here are the big ones we use at Straight Path when planning an upgrade project with a client.

In-Place SQL Server Upgrades Stink! Change my mind…
  • Testing/Rollback – Going to a new server means you can test the process a couple of times before you do it for real. It means you can build up success criteria ahead of go-live night. You can verify those criteria, smoke-test the apps (without writing new data), and still have a failback option of aborting and going back to the old server. That’s huge!!!
  • New New New!! – When you choose to get the install media for the next version of SQL Server and just do an in-place upgrade, you’re saying you like all the hardware you’re on, the OS, the setup, the mistakes made three years ago. A migration install means you start fresh!
  • Downtime Reduced – A migration upgrade can dramatically reduce downtime compared to an in-place upgrade. Think about it. In-place means you start, and once you start, you are committed. Your downtime is a bit unknown, and we already said you can’t roll back. At some point you’ll be done, and (hopefully) your users can get back in and work. A migration-based SQL Server upgrade means you can use a log shipping method to copy data over ahead of time, dramatically minimize your downtime, and have a planned window for the cutover (see the sketch after this list). How cool is that?
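
To show what that log shipping cutover can look like, here is a minimal sketch of the final step. The database name and backup path are placeholders, and it assumes the full backup and earlier log backups were already restored WITH NORECOVERY on the new server ahead of the downtime window.

    -- On the OLD server, during the downtime window: take the tail-of-the-log backup.
    -- WITH NORECOVERY leaves the old copy in a restoring state, so no one can keep writing to it.
    BACKUP LOG [YourDatabase]
        TO DISK = N'\\backupshare\YourDatabase_tail.trn'
        WITH NORECOVERY;

    -- On the NEW server: only the tail remains to restore.
    -- WITH RECOVERY brings the database online on the new server.
    RESTORE LOG [YourDatabase]
        FROM DISK = N'\\backupshare\YourDatabase_tail.trn'
        WITH RECOVERY;

Point the applications (or a DNS alias) at the new server, run your smoke tests, and the old server is still intact as a fallback if anything looks wrong.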

There are many more reasons, but those are my favorites and, frankly, the ones that should be show-stoppers for an in-place upgrade. Do a migration upgrade and save yourself time and money!

Check out that answer from “Younger Mike With all the hair” if you want.

I also did a webinar series on SQL Server upgrades last year. You can see more about why I don’t like in-place upgrades and how to plan an upgrade by checking that series out. It’s all there for you to watch, and we have an upgrade checklist we use here, too.

7 thoughts on “SQL Server In Place Upgrades: Too Expensive?”

  1. I simply can’t stand the number of “relevant folders” that have to be maintained under the Microsoft SQL Server directory after an in place upgrade.

  2. We recently upgraded from SQL 2012 to SQL 2016, and one of the reasons it went smoothly is that we could take a snapshot of our entire production environment. We’d run the upgrade on the snapshot, see what broke, and fix it; put everything back as it was; repeat. Finally we had the whole thing down: scripts for all the required fixes, etc.
    The day we went live was seamless. Ran the upgrade, ran the scripts, and everything just worked. Just a couple of hours of downtime – less than we planned for – and frankly the only reason the business even noticed was that we told them we’d be doing it.
    In my experience (been working in SQL Server since 2001) it all comes down to planning and testing. When folks run into trouble, it’s usually because they made assumptions about what would work, and what they’d do when it didn’t work.
    One last point… planning and testing apply to everything. Not just the upgrade itself, but to the way you’re taking snapshots, backups; what you do if those fail, etc. Assume that any part of the process or underlying architecture could go wrong, and know what you’re going to do if it does go wrong.
    For example, one of the things we did in addition to the snapshots and backups was set up a log-shipped copy of the database server; that way we knew for certain that even if the snapshot critically failed, and even if the backups critically failed, we had a working copy of the server that we could fall back on – even if it meant a few extra hours of configuration.

    • Hey Craig,

      Sounds like a well-executed plan. I have an old blog post about planning. Actually a series… Wow. Ten years old now. The first post is called Plan to Fail or Don’t Expect to Succeed 😉

      I’m glad your process went smoothly. Snapshots can certainly help mitigate failure, though it sounds a bit scary to just snapshot, try, roll back, and try again – especially against a production server. I suppose it depends on your size and your downtime tolerance. You sort of did an in-place upgrade almost like it was a migration upgrade – but the final step was still an in-place upgrade anyway 😉 Most folks don’t put that sort of extra time into their in-place upgrades, so I commend you. We still support them here at Straight Path, but I really counsel folks away from them. Especially on a VM – I love the benefit of starting fresh.

      Thanks for the comment and the experience!

  3. An in-place upgrade can go smoothly if you have a test environment identical to production where you can test the process, ideally scripted.

    • For sure! If you have the budget for that, a truly identical environment with identical installs, and no one did a quick tweak to one but not the other along the way – you’d have a better chance. I still will almost always suggest a migration 🙂

