
Hyper-V and SQL Server Best Practices: What We Wish You Knew

If you’ve ever asked yourself, “Why does my SQL Server seem slower on Hyper-V than it should be?!”, this post might help. And if you’d asked me about Hyper-V ten years ago, I’d probably have laughed. Maybe even more recently than that. But here’s the thing: it scales, it works, and with the Broadcom/VMware “fun” squeezing the world for profit, we’re seeing more and more folks making the move to Hyper-V. You just have to pay attention to some key configurations, the same as you do with VMware.

I want to thank Jeff Iannucci for working with me on this post and sharing his insights. For far too long now, he’s been reminding me to get a post out about this, along with a checklist-style document to share with clients. So he decided to help me write it.

We’re mostly a bunch of DBAs at Straight Path, but clients often ask us to help set up SQL Server on Hyper-V virtual machines, and the question that always comes with it is, “What are the best practices?” Many (most?) system administrators will already know this stuff. Still, if they don’t, or if you have to wear all the hats in your organization and figure this out yourself, we’ve put together this guide on the key considerations for keeping SQL Server highly available and performing consistently on Hyper-V.

These aren’t deep and mysterious secrets. But they are things we’re still seeing folks miss in the wild when we work with a new client. And just like with our VMware and SQL Server best practices post from a few years back, I’d rather you find these issues and fix them yourself first, then bring us in for the more interesting and challenging problems!

Use Fixed-Size VHDs for Storage

SQL Server I/O demands can be high, especially for OLTP systems. Dynamically expanding VHDs may be convenient during setup, but they introduce latency when the virtual disk needs to grow during runtime. This can cause unexpected pauses and significantly degrade SQL Server performance.

Think of it like this: a dynamically expanding VHD is like renting a storage unit that keeps getting bigger as you add more stuff. Every time you need more space, someone has to come add another section. That takes time, and your database operations are stuck waiting for the “construction team” – and you don’t get to pick when they need to show up. A fixed-size VHD, on the other hand, is like having all your space allocated up front – you know exactly what you have, and there’s no waiting around for expansions.

Recommendation: Use fixed-size VHDs (VHDX preferred for modern systems) for all SQL Server storage volumes. This eliminates overhead from disk expansion and provides predictable, consistent performance.
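
If you’re scripting your VM builds, here’s a minimal sketch of creating and attaching a fixed-size VHDX with the Hyper-V PowerShell module. The VM name, path, and size are just placeholders for illustration, so adjust them for your environment:

    # Create a fixed-size VHDX for SQL Server data files.
    # Path, size, and VM name are examples - change them for your environment.
    New-VHD -Path 'D:\VMs\SQL01\SQL01_Data.vhdx' -SizeBytes 500GB -Fixed

    # Attach the new disk to the VM's virtual SCSI controller.
    Add-VMHardDiskDrive -VMName 'SQL01' -ControllerType SCSI -Path 'D:\VMs\SQL01\SQL01_Data.vhdx'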

Leverage Multiple Virtual SCSI Controllers

Hyper-V allows up to 4 virtual SCSI controllers per VM, and each controller can support up to 64 virtual disks. Here’s what many folks don’t realize: disk I/O traffic on the same controller is serialized to some extent. But by separating high-I/O paths like tempdb, data files, and log files across different controllers, you can reduce contention and improve throughput significantly. The goal when designing these systems is to “make the SAN suffer” – and then buy a storage subsystem that won’t suffer. The more choke points we can remove between your queries and the storage, the more threads we can use to send requests to the disks – and the more likely you are to actually see the throughput you’ve paid for.

Recommendation: Assign virtual disks for SQL Server data, logs, tempdb, and backups to separate SCSI controllers from the OS where possible. Even just splitting your most critical I/O across two controllers can make a noticeable difference.
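
Here’s a rough sketch of what that looks like with the Hyper-V PowerShell cmdlets. The VM name and disk paths are placeholders, and you’ll need the VM powered off to add controllers:

    # Add extra virtual SCSI controllers (the VM must be off to add controllers;
    # hot-adding disks to an existing controller is fine).
    Add-VMScsiController -VMName 'SQL01'
    Add-VMScsiController -VMName 'SQL01'

    # Controller 0 keeps the OS disk; spread data, log, and tempdb across the new ones.
    Add-VMHardDiskDrive -VMName 'SQL01' -ControllerType SCSI -ControllerNumber 1 -Path 'D:\VMs\SQL01\SQL01_Data.vhdx'
    Add-VMHardDiskDrive -VMName 'SQL01' -ControllerType SCSI -ControllerNumber 2 -Path 'D:\VMs\SQL01\SQL01_Log.vhdx'
    Add-VMHardDiskDrive -VMName 'SQL01' -ControllerType SCSI -ControllerNumber 2 -Path 'D:\VMs\SQL01\SQL01_TempDB.vhdx'

    # Verify the layout.
    Get-VMHardDiskDrive -VMName 'SQL01' | Select-Object ControllerNumber, ControllerLocation, Path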

Disable Dynamic Memory for SQL Server VMs

This one is critical, and we see it violated all the time. Dynamic memory may work fine for general-purpose workloads, but it’s absolutely not appropriate for SQL Server. Why? SQL Server has its own sophisticated memory management system built in. When Hyper-V tries to reclaim memory dynamically, SQL Server may be forced to shrink its buffer pool, causing performance hits. Very bad performance hits.

It’s like trying to have two drivers controlling the same car – SQL Server thinks it has control of the memory, makes plans based on that, and then suddenly Hyper-V yanks the steering wheel. Nothing good comes from that.

Recommendation: Disable dynamic memory and assign a static memory allocation based on SQL Server’s actual workload and anticipated growth patterns. Let SQL Server manage its own memory – it’s really good at it when you give it consistent resources to work with.
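
If you want to script it, something like this does the trick. The VM name and the 64GB figure are examples only – size the memory for your actual workload, and note that changing this setting typically requires the VM to be off:

    # Disable Dynamic Memory and assign a fixed allocation (64GB is just an example).
    Set-VMMemory -VMName 'SQL01' -DynamicMemoryEnabled $false -StartupBytes 64GB

    # Confirm the setting.
    Get-VMMemory -VMName 'SQL01' | Select-Object DynamicMemoryEnabled, Startup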

CPU Configuration: Sockets, Cores, and NUMA Awareness

SQL Server is NUMA-aware and performs best when VMs are configured in alignment with physical NUMA boundaries. Incorrect CPU configurations can lead to remote memory access and degraded cache performance. Moreover, if you’re using SQL Server Standard Edition (which we often prefer if you can do it), incorrect configurations can literally prevent you from accessing all your CPU cores and memory due to licensing restrictions.

I’ve lost track of the number of VMs running SQL Server Standard with 8 cores but configured as 1 core per socket – meaning they’re using 8 sockets, which is 4 more than SQL Server Standard Edition even supports. We’ve helped many clients get significant performance improvements simply by fixing this socket-to-core ratio. Almost like we suddenly gave them access to the CPUs they thought they had…

Recommendation: Configure your VMs with fewer sockets and more cores per socket when possible, depending on your licensing and physical architecture. Avoid configurations that span multiple NUMA nodes unnecessarily. Monitor the dynamic management views sys.dm_os_sys_info and sys.dm_os_memory_nodes regularly to understand NUMA behavior inside SQL Server and catch any misconfigurations early.
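
Here’s a quick way to compare what SQL Server actually sees with what you think you gave it. This sketch assumes the SqlServer PowerShell module is installed and uses a placeholder instance name; the socket_count and cores_per_socket columns are available in recent SQL Server versions:

    # What does SQL Server think it has? ('SQL01' is a placeholder instance name.)
    Invoke-Sqlcmd -ServerInstance 'SQL01' -Query 'SELECT cpu_count, socket_count, cores_per_socket, numa_node_count FROM sys.dm_os_sys_info;'

    # On the Hyper-V host: the vCPU count assigned to the VM and the host's physical NUMA layout.
    Get-VMProcessor -VMName 'SQL01' | Select-Object VMName, Count
    Get-VMHostNumaNode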

Avoid Overprovisioning Memory and CPU

Virtualizing resources too aggressively can lead to contention, high latency, and instability. These are three things you definitely do not need more of in your life. We get it – the temptation with virtualization is to pack everything in tight and maximize your hardware investment. But SQL Server really does need dedicated, predictable resources to perform well.

Recommendation: Ensure SQL Server memory is adequately provisioned and not competing with other memory-hungry VMs on the same host, especially if you’re enabling Lock Pages in Memory. Avoid assigning more vCPUs than the physical host can support efficiently – keep your CPU overcommit ratios low. Monitor CPU Wait Time Per Dispatch (Hyper-V’s closest thing to VMware’s Ready Time) and Processor Queue Length religiously to detect CPU contention before it becomes a crisis. Memory for SQL Servers should be dedicated and not over-committed. CPUs – your mileage may vary – just don’t go crazy.
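
Here’s a quick way to sample those counters from PowerShell. The counter paths below assume the standard English counter names, and the sample interval and counts are just examples:

    # On the Hyper-V host: how long vCPUs wait for a physical CPU
    # (Hyper-V's rough equivalent of VMware's Ready Time).
    Get-Counter -Counter '\Hyper-V Hypervisor Virtual Processor(*)\CPU Wait Time Per Dispatch' -SampleInterval 5 -MaxSamples 12

    # Inside the SQL Server guest: threads waiting for CPU time.
    Get-Counter -Counter '\System\Processor Queue Length' -SampleInterval 5 -MaxSamples 12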

Set Hyper-V VM to High Performance Power Plan

Hyper-V VMs inherit the power settings of the host OS unless explicitly configured otherwise. The “Balanced” power plan can cause CPU throttling, and unfortunately, this is still the default for many installations. We’ve seen 5-10% performance improvements from this simple change alone. (And we see it all.the.time.)

Recommendation: Set both the host and VM power plans to “High Performance”. While you’re at it, ensure CPU C-state and P-state settings in your BIOS are also configured to minimize latency rather than maximize power savings. Yes, your electric bill might go up slightly, but your SQL Server will thank you.
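
Checking and changing the plan only takes a second from an elevated prompt, and you’ll want to do it on both the host and inside the guest:

    # SCHEME_MIN is the built-in alias for the High Performance plan.
    powercfg /setactive SCHEME_MIN

    # Verify the active plan.
    powercfg /getactivescheme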

Use 64KB Allocation Unit Size for NTFS Volumes

SQL Server prefers to read and write data in 64KB chunks – that’s just how it’s optimized to work. However, the default NTFS allocation unit size is only 4KB, which is suboptimal for SQL Server’s I/O patterns. It’s like trying to move a pile of bricks by carrying one at a time instead of loading up a wheelbarrow.

Recommendation: Format all SQL Server volumes (data, logs, tempdb) with a 64KB allocation unit size to optimize nearly all I/O operations. You’ll need to do this during the initial formatting – you can’t change it after the fact without reformatting. Plan ahead.
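
Here’s a quick sketch for checking an existing volume and formatting a new one. The drive letter and label are placeholders, and Format-Volume wipes whatever is on that volume, so only run it against a new, empty disk:

    # Check the current allocation unit size ("Bytes Per Cluster" in the output).
    fsutil fsinfo ntfsinfo D:

    # Format a new SQL Server volume with a 64KB allocation unit size.
    # WARNING: this destroys any data on the volume - new/empty disks only.
    Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel 'SQLData'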

Separate Drives for SQL Server Components

SQL Server performs best when I/O paths are isolated. Separation reduces I/O contention for the same storage resources. Even in a virtualized world, this separation still matters because you’re ultimately dealing with physical spindles or SSD channels underneath.

Recommendation: Here’s a simple example of a typical storage layout that works well:

  • C: OS and system databases (as needed)
  • D: SQL Server data files (.mdf, .ndf)
  • L: SQL Server log files (.ldf)
  • T: tempdb files
  • Z: Backups

You might adjust this based on your specific needs, but the principle remains: keep different I/O workload types separated. Your transaction logs have very different I/O patterns than your data files, and they shouldn’t be competing for the same underlying resources.
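
If you’ve inherited an instance and aren’t sure where everything actually landed, a quick query against sys.master_files will tell you. This sketch assumes the SqlServer PowerShell module and uses a placeholder instance name:

    # Where does every database file actually live? ('SQL01' is a placeholder instance name.)
    Invoke-Sqlcmd -ServerInstance 'SQL01' -Query 'SELECT DB_NAME(database_id) AS database_name, type_desc, physical_name FROM sys.master_files ORDER BY database_id, type_desc;'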

Mitigate Noisy Neighbor VM Effects

In a shared virtualization environment, SQL Server can suffer from “noisy neighbors” – those other VMs consuming excessive CPU, I/O, or network resources on the same host. We’ve seen this countless times: someone spins up a file server or a development environment on the same host as production SQL Server, and suddenly your carefully tuned database starts having mysterious performance issues.

Recommendation: For mission-critical SQL Server workloads, seriously consider dedicated hosts. If you must share, use Hyper-V resource controls to set CPU reserves and limits for VMs. At minimum, make sure you know who your neighbors are and what they’re doing. A good monitoring solution (and we’re big fans of SentryOne for this) can help you identify when other VMs on your host are causing problems.
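
Here’s a rough sketch of what those resource controls look like in PowerShell. The reserve, maximum, and weight values are examples only, not recommendations – test what makes sense for your environment:

    # Reserve a chunk of CPU capacity for the SQL Server VM and give it a higher
    # scheduling weight than its neighbors. These numbers are examples only.
    Set-VMProcessor -VMName 'SQL01' -Reserve 50 -Maximum 100 -RelativeWeight 200

    # See how the neighbors on this host are configured.
    Get-VM | Get-VMProcessor | Select-Object VMName, Count, Reserve, Maximum, RelativeWeight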

Some Closing Thoughts

Proper Hyper-V configuration for SQL Server is a balance between performance, scalability, and resource efficiency. While virtualization adds tremendous flexibility and can reduce costs, SQL Server requires dedicated and predictable resources for optimal performance. You can’t just throw SQL Server on a VM with default settings and expect it to perform like it would on bare metal – but with the right configurations in place, you can get really close.

Following the practices outlined here will help your virtualized SQL Server instances remain robust, fast, and ready to scale with your business needs. And look, these aren’t just theoretical recommendations – we’ve worked with clients to implement every single one of these changes and seen real, measurable performance improvements. Sometimes the improvements are modest, sometimes they’re dramatic, but they’re always worth the effort.

The virtualization landscape has changed a lot over the years. With the Broadcom acquisition of VMware and the subsequent price increases and licensing changes squeezing many organizations, Hyper-V is looking more attractive than it has in a long time. It’s a solid platform, and when configured correctly, it can absolutely handle demanding SQL Server workloads.

If you’re making the move from VMware to Hyper-V, or if you’re already on Hyper-V and experiencing performance issues, take the time to walk through this checklist. You might be surprised at how much performance you can unlock with some relatively simple configuration changes.


Looking for more virtualization and SQL Server content? Check out our post on VMware and SQL Server Best Practices for guidance on that platform as well.

Article by Mike Walsh
Mike loves mentoring clients on the right Systems or High Availability architectures because he enjoys those lightbulb moments and loves watching the right design and setup come together for a client. He loves the architecture talks about the cloud - and he's enjoying building a Managed SQL Server DBA practice that is growing while maintaining values and culture. He started Straight Path in 2010 when he decided that after over a decade working with SQL Server in various roles, it was time to try and take his experience, passion, and knowledge to help clients of all shapes and sizes. Mike is a husband, and father to four great children and lives in the middle of nowhere NH.
