In this Straight Up SQL Server Tips series, we’re going back to basics, focusing on SQL Server memory – specifically on setting SQL Server max memory. Many of these are findings from our SQL Server Health Checks or things I bump into on the forums and Q&A sites when I’m out answering questions. Today we’re talking about the SQL Server memory configuration option – SQL Server Max Memory. We won’t talk about SQL Server min memory – there are varying schools of thought about it, and we care less about that being set than the max. If you find yourself wondering about your configurations, settings, performance, etc., you can always reach out for a free SQL Server consultation with our founder, SQL Server MVP Mike Walsh. We’ll have you run a quick data collection tool before the chat if you like, and we’ll help point you in the right direction and see if our team of SQL Server experts can help guide you to the best settings. Anyway! Enough with the intro! Let’s talk about your SQL Server memory and get your SQL Server max memory configured right!
SQL Server Max Memory (TL;DR Version)
The short story here? The default setting for SQL Server’s max memory isn’t great. It’s less harmful in SQL Server 2016 and higher, but still not great. You want to leave some memory behind for the OS and other apps (if you have installed other apps on your SQL Server’s server, which we don’t suggest!). How much? It depends on a few things, but I say at least 10% and usually not more than 20% – so take your total memory, subtract 10% to 20%, and that’s what you should choose. Then monitor and watch to see what else you are missing or whether you should tweak the setting. If you have other services on the SQL Server’s server (you really shouldn’t – that’s an audit finding when we do a SQL Server health check), you may need to leave behind more, especially if those other uses are memory hogs.
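A quick way to see where you stand is to compare the box’s physical RAM to the configured max. This is a sketch using sys.dm_os_sys_memory and sys.configurations – adjust the comparison to your own 10-20% target:

```sql
-- Physical RAM on the box vs. the configured max server memory
SELECT (SELECT total_physical_memory_kb / 1024
        FROM sys.dm_os_sys_memory)    AS total_physical_mb,
       CONVERT(int, value_in_use)     AS max_server_memory_mb
FROM sys.configurations
WHERE name = 'max server memory (MB)';
```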
You can change this via sp_configure or via the GUI. Changing SQL Server’s max server memory is an online operation – you don’t need to restart SQL Server. When you make the change, though, you can and likely will cause data or plans to leave their caches, so things could be a bit slower for a short while after you run it. Nothing terrible most of the time, but something to keep in mind.
Read on for more background and more about the how and why. Also, if this stuff feels new to you or answers a question you’ve long wanted answered, I suggest you check out our free SQL Server Health Checklist – it covers the top things we find when doing SQL Server Health Checks, things I’d love you to see and solve on your own instead. Most of the findings have blog posts in this Straight Up series explaining the how and why behind them, too. Feel free to subscribe to the blog or our newsletter for more tips.
A Primer
This setting in SQL Server’s configuration options (I blogged about sp_configure a few years ago) is what it sounds like – the maximum memory SQL Server can consume. Now, the one problem with talking about this setting is that there are probably readers of this blog here in 2017 still using SQL Server 2005, 2008, 2012, 2014, and 2016 – and the setting works slightly differently in the earlier versions. It used to be that this only affected the buffer pool – that is to say, the memory allocated by SQL Server to keep data pages in memory.
(What? Data pages in memory? OK, this isn’t the textbook answer, but a quick explanation to keep it in line with the series: SQL Server has data. We agree there. That data lives on disk, in the data file. The data in the data file is organized – ultimately – into pages (8KB blocks of data) and extents (blocks of 8 pages). Any time you or I want to interact with this data, it has to get from disk (slow – even SSDs are still slower than RAM) to memory (fast). SQL Server has to put that data into memory first. And once it puts that data in memory, it wants to try and keep those pages in memory. In fact, an important memory perfmon counter (for another post) is called “page life expectancy” – how long a page lives in memory. So that’s the buffer pool. The data cache. This is where most of the memory consumed by SQL Server goes.)
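If you’d like to peek at page life expectancy now rather than wait for that future post, it’s also exposed inside SQL Server through sys.dm_os_performance_counters:

```sql
-- Page life expectancy: seconds a data page is expected to stay in the buffer pool
SELECT object_name, counter_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```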
So in SQL Server 2005 and 2008, max server memory, for the most part, was just this memory. Memory for things like CLR (yes, it needs its own memory), linked servers, connections, the lock manager, etc. was typically managed outside of this allocation pool. Starting in SQL Server 2012 this changed – max memory controls more memory allocation areas. We could very quickly complicate this post by getting into what is and isn’t included; instead, I’ll link to this post from Microsoft describing the change that started in SQL Server 2012. You can click through to see some more differences and read about single-page vs. multi-page allocations and what max server memory controls specifically.
Checking Max Server Memory
For this post, and really for your environment, we can keep the advice a bit simpler. First, check and see what your max memory is. Through SQL Server Management Studio, you would right-click on your instance in Object Explorer and look at the properties. Through a SQL query, you would look at the sys.configurations catalog view:
SELECT * FROM sys.configurations
WHERE name = 'max server memory (MB)'
The results of either looking in the GUI or this script should tell you if you are in one of a few categories:
- Best Practice Environment – If the value was calculated taking the few variables described below into account, it’s probably set well: some number other than the default that leaves some RAM for other instances, the OS, or other applications. You want to leave some memory, but not too little or too much.
- Sort of in the Middle – The max is set, but maybe way too much memory is left over: when you look at perfmon or even the Task Manager performance tab, you see that memory is never used, yet you see signs of memory pressure in SQL Server. Or maybe it’s set away from the default, but only 2% is left over, which means there is potential for memory pressure on the server.
- Worst Practice Environment – One could argue this is less of a concern in SQL Server 2012 and higher, but it is still a setting you should pay attention to. If you see a huge number – 2,147,483,647 MB, roughly 2PB – then you know you are running at the default setting and no one has ever changed it. That’s probably not a great place to be.
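One quick way to spot that worst-practice default – a sketch against sys.configurations that simply flags the never-configured value:

```sql
-- Flag an instance still running at the never-configured default (2147483647 MB)
SELECT name,
       CONVERT(int, value_in_use) AS value_in_use_mb,
       CASE WHEN CONVERT(int, value_in_use) = 2147483647
            THEN 'Default - never configured'
            ELSE 'Explicitly configured'
       END AS max_memory_status
FROM sys.configurations
WHERE name = 'max server memory (MB)';
```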
Fixing Max Server Memory
So if you see that you aren’t in the best-practices setup, you should analyze your situation. What else is running? Do you need to leave memory for other applications? Do you have multiple instances to leave memory for? Figure this all out, try to leave 10-20% of your memory available for the OS and occasional other uses, and set your max memory appropriately. I always like to keep the number evenly divisible by 1,024 and keep the NUMA nodes in mind. I haven’t tested not being so precise with the settings, so your mileage may vary with that approach, but it’s always worked out for me.
So, for example, if a server was running SQL Server only, one instance only, and it had 128GB of RAM with two NUMA nodes, I’d want to leave at least around 10% free.
A more accurate calculation as memory increases could be something like 1-2GB for the OS, plus 1GB for every 4GB up to 16GB, then 1GB or so for every 8GB beyond that. For 128GB that would end up being 2 (base) + 4 (1 for every 4GB up to 16GB) + 14 (1 for every 8GB between 16GB and 128GB), or 20GB. For comparison, 10% would be about 13GB free and 20% about 26GB free. So you can use a calculation like that, but I find the 10-20% range works; then watch and tweak if and as needed with data from your environment. If we go with that 20GB free, that means we’d want to give SQL Server a max of 108GB. That works – it’s evenly divisible by the NUMA nodes. So I’d want to set that number to 108GB, but the setting is in MB, so I multiply 108 * 1,024 (MB per GB) and get 110,592MB. I could change this in the GUI, or I could use sp_configure:
EXEC sp_configure 'max server memory (MB)', 110592
GO
RECONFIGURE
GO
(Note: if you have never looked at the “advanced options,” you’ll get an error saying this setting doesn’t exist or may be an advanced option. In that case, you need to enable the configuration option “show advanced options” – run sp_configure 'show advanced options', 1 and then run RECONFIGURE.)
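As a quick sketch, enabling advanced options before setting the max looks like this:

```sql
-- Enable advanced options so 'max server memory (MB)' is visible to sp_configure
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
```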
Finally, this is an online setting. You can change it without restarting SQL Server, but please note that when you run it, if you are lowering it, you could flush some data from the data cache and/or procedure cache. This is normally fine, but it could cause a few more reads from disk as that memory is “refilled” when data is queried. This means SQL Server could perform like it does after a restart for a little while. So don’t change this during the busiest part of your day.
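For reference, the back-of-the-napkin reservation rule from the calculation above can be sketched in T-SQL – @total_gb is a value you’d substitute with your own server’s RAM:

```sql
-- Sketch: OS reservation = 2GB base + 1GB per 4GB up to 16GB + 1GB per 8GB beyond 16GB
DECLARE @total_gb int = 128;  -- your server's physical RAM in GB
DECLARE @reserve_gb int =
    2
    + CASE WHEN @total_gb >= 16 THEN 4 ELSE @total_gb / 4 END
    + CASE WHEN @total_gb > 16 THEN (@total_gb - 16) / 8 ELSE 0 END;

SELECT @total_gb - @reserve_gb          AS suggested_max_gb,  -- 108 for 128GB
       (@total_gb - @reserve_gb) * 1024 AS suggested_max_mb;  -- 110592 for 128GB
```

Treat the result as a starting point, not gospel – as noted above, watch actual free/available memory and tweak from there.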
What happens on a big server where you have 3TB of RAM? Would you still use the 10-20% idea?
Hey Doug!
Thanks for the question. As with many things, mileage varies based on specifics. I think reserving 300GB here could be excessive. It depends, though, on what else you are doing. If you are on a server with, say, 3TB of RAM, a few of the blanket rules may or may not apply. When you have time and expertise on your side, I’d aim a bit lower and watch free/available RAM and see how it looks. See what SQL Server even uses. Maybe 30GB or 40GB is a better number here – closer to 1%. I am surprised how often I still bump into this just not being set at all.
Nice post!
I have a question.
Suppose a server has 8GB of RAM and SQL Server max memory is set to the default value. When we check the memory utilization, SQL Server is consuming a lot of memory, even though no process is running on SQL Server.
In this case, since max memory is set to the default value, is SQL Server not releasing the memory even though no processes are running on it? Please explain.
Thanks in advance.
Regards,
Saisampath
Hey Saisampath!
Thanks for your question. SQL Server will often take your memory and hold onto it. This is why it is so important to set a max. SQL Server will generally grab the memory it is allowed to grab and the memory it needs, and not release it until the OS signals significant pressure and forces SQL Server to release it.
You should not leave it set to the default. Even when no process is running, we’ll see it continue to hold onto that memory. 8GB of RAM is not a lot of memory, either. So I would consider setting the max and watching. Depending on other apps, etc., I’d consider a 6GB max or maybe even 4GB. Again – not a lot of memory, but don’t expect SQL Server to release the memory if there is no pressure/call for it to do so.
How do you allocate memory if the server has multiple instances?