Welcome to part 3 in this series on a thought process around SQL Server Installation. In Part 1, we started talking about some basic steps and a thought process for a repeatable success story with your SQL Server installations. Part 2 discussed security, playing around before doing a first install and smoke testing afterwards.
Today we’ll close the discussion with some talk about maintenance, defaults worth revisiting and some post-installation tips.
“The install is done, here you go”
Wait! You clicked finish, you reviewed the logs and tested connectivity like we talked about in Part 2. You aren’t ready to deliver yet though. Let’s cover a few things first. This, again, is not meant to be exhaustive but to help you think about the full process. Check out the documentation and blogs and explore in a dev environment.
Even if you changed a few things during the installation, like some of the directories and options, you are still left with plenty of defaults. Some of them should be changed, or at least understood and considered, after the installation. Let’s hit a few of them here.
As the model database goes, so goes each new database you create (unless you manually change it). The model database is just that – a model. Add a table to model and every new database you create will contain that table. The same goes for the recovery model. On a new installation of SQL Server, the recovery model of the model database is FULL out of the box. This means every new database you create starts out in the FULL recovery model, and its log file will continue to grow once you take your first full backup (until that first full backup, the database is not truly operating in full recovery, whether it is brand new or freshly switched from simple). This is important to consider because if your recovery plan only calls for, say, restoring to a daily full backup, you will see the log file grow to a point that can get out of control. (Then you have to start playing with – ugh – shrinking.)
If that isn’t what you want, change your recovery model. If you aren’t sure what it all means, start with Books Online.
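If your recovery plans call for the SIMPLE recovery model by default, a quick change to model takes care of every database created afterward. A minimal sketch (pick the recovery model that matches your own recovery strategy):

```sql
-- Change model's recovery model so new databases inherit it.
USE [master];
ALTER DATABASE [model] SET RECOVERY SIMPLE;

-- Verify the setting.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'model';
```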
By default, new database files start small (2MB for data, 1MB for log). Don’t accept the defaults here. How big will your database grow? How much transaction log space do you expect to need? Plan for that growth, add an overage percentage and “right-size” your files at the beginning. This avoids the file fragmentation caused by frequent small autogrowths, and it gives you peace of mind knowing your database is sized properly from the start.
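As a sketch of right-sizing at creation time (the database name, paths and sizes here are all illustrative assumptions, not recommendations):

```sql
-- Hypothetical database sized up front rather than inheriting model's defaults.
CREATE DATABASE [SalesDB]
ON PRIMARY (
    NAME = N'SalesDB_Data',
    FILENAME = N'D:\SQLData\SalesDB.mdf',
    SIZE = 25GB,            -- expected size plus an overage percentage
    FILEGROWTH = 500MB      -- autogrowth as insurance, not a sizing strategy
)
LOG ON (
    NAME = N'SalesDB_Log',
    FILENAME = N'L:\SQLLogs\SalesDB_log.ldf',
    SIZE = 5GB,
    FILEGROWTH = 250MB
);
```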
For the Log File, you may want to check out this post by Kimberly L. Tripp on some tips for right sizing the transaction log.
Hopefully you set up a separate drive each for data files, log files and tempdb. That is generally a best practice, though with lighter workloads it may not be absolutely necessary to split out tempdb. Separating log files from data files, however, is always good from both a performance and a recovery standpoint.
Where is your tempdb file? Do you need to move it to the drive you requested and setup for tempdb? (Check out this books online article on Alter Database. There is an example there for moving tempdb, in fact)
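Moving tempdb is a metadata change followed by a service restart; the drive letter below is an assumption for illustration:

```sql
-- Point tempdb's files at the dedicated drive. The files are created
-- at the new location the next time the SQL Server service restarts.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = N'T:\TempDB\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = N'T:\TempDB\templog.ldf');
```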
Are the default directories for new database data and log files set up correctly? (Right click on the server in SQL Server Management Studio –> Database Settings –> check what’s specified for the default directories.)
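On recent versions of SQL Server (2012 and later), you can also check the default directories with a query instead of the GUI:

```sql
-- Instance-level default locations for new data and log files.
SELECT
    SERVERPROPERTY('InstanceDefaultDataPath') AS DefaultDataPath,
    SERVERPROPERTY('InstanceDefaultLogPath')  AS DefaultLogPath;
```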
Data and Log File Growth
The default growth options for data and log files are likely not what you want. While it’s important to “right-size”, I am also of the opinion that you should leave autogrowth on as an insurance policy. I just don’t like the default growth increments (1MB for data files and 10% for log files). Instead, I specify an amount that allows “healthy” growth for that particular database and workload while minimizing the file fragmentation that comes from growing in small chunks. This varies from database to database. In an environment I just configured, for a database that started at a 25GB data file and a 5GB log file, I chose 500MB and 250MB respectively.
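Changing the growth increments on an existing database is one statement per file; the database and logical file names below are assumptions:

```sql
-- Fixed-size growth increments instead of the 1MB / 10% defaults.
ALTER DATABASE [SalesDB]
MODIFY FILE (NAME = N'SalesDB_Data', FILEGROWTH = 500MB);

ALTER DATABASE [SalesDB]
MODIFY FILE (NAME = N'SalesDB_Log', FILEGROWTH = 250MB);
```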
I like to set up alerts for autogrowth operations with a monitoring tool watching my SQL environments. I like the insurance policy, but in my mind, using the insurance means there was a problem with the right-sizing.
Backups (or lack thereof)
We just talked above about understanding your recovery model. That is the first step in the game. Step two is setting up a backup strategy that matches your recovery strategy. My philosophy is that people focus on their backups too much and on recovery too little. Remember why we take backups – to have a way of recovering our databases in a crisis. If you keep that in mind, your backup plan should come together, and you should have backups set up and running on your new instance before most other tasks.
Maintenance
No maintenance is being done yet. You can change the defaults above and deliver, but you’ll be getting to know this newly created instance in production at some point. Will it be a need to recover from failure? Performance that was fine and suddenly got worse? SQL Server databases, like most other tools and applications, require regular maintenance. Let’s briefly cover some of the quick hits I apply to most instances as a basic rule.
Backups and history maintenance
Can’t have too many reminders about backups. Make sure you are taking backups and have them running in an automated fashion.
Also make sure you are cleaning up your backup history. Jonathan Kehayias has a great blog post on this here. Rather than regurgitate it or attempt my own version, I’ll just point you to it.
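As a sketch, a full backup plus history cleanup might look like the following (the database name, path and 30-day retention are illustrative; in practice both pieces would run as scheduled SQL Server Agent jobs):

```sql
-- Full backup with a checksum so page corruption is caught at backup time.
BACKUP DATABASE [SalesDB]
TO DISK = N'B:\Backups\SalesDB_Full.bak'
WITH CHECKSUM, COMPRESSION;

-- Prune backup history in msdb older than 30 days.
DECLARE @cutoff datetime = DATEADD(DAY, -30, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
```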
Index and Statistics Maintenance
Your databases should have indexes for performance-minded querying of data. Indexes also help the performance of updates (which essentially change data based on a query of that data), and having a clustered index can sometimes improve inserts and deletes as well.
Indexes grow less useful when they are heavily fragmented – more work is required to use them than would otherwise be necessary. Look at rebuilding or reorganizing indexes as time allows. A great script by Michelle Ufford for this is here.
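A simplified sketch of the decision is below; a production script like the one linked iterates every index, and the 5%/30% thresholds are common community guidance, not a rule from this article:

```sql
-- Find fragmented indexes in the current database.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND i.name IS NOT NULL;

-- Then, per index (names here are placeholders):
--   5-30% fragmentation -> ALTER INDEX [IX_Example] ON [dbo].[SomeTable] REORGANIZE;
--   over 30%            -> ALTER INDEX [IX_Example] ON [dbo].[SomeTable] REBUILD;
```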
Statistics also help your database performance. Without them, the optimizer has much less to work with. Statistics show the optimizer how unique your data is, how many qualifying rows a query might find, and so on. They can get out of date even with auto update statistics turned on (the default behavior), so updating statistics is a good maintenance practice as well. Paul Randal blogs about the debate over what to do first (statistics update or index rebuild).
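Two common ways to refresh statistics, sketched below (the table name is an assumption):

```sql
-- Update statistics across the database; only touches stats that need it.
EXEC sp_updatestats;

-- Or rebuild statistics for one table with a full scan (heavier, more accurate).
UPDATE STATISTICS [dbo].[Orders] WITH FULLSCAN;
```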
Error Log Recycles
Look up sp_cycle_errorlog and sp_cycle_agent_errorlog in Books Online. They are system stored procedures that do what they sound like: they cycle the error logs for SQL Server and SQL Server Agent. If you don’t do this from time to time, and you don’t restart your instance often, the error logs can get too large to scroll through easily (a new error log is started each time your instance restarts).
I typically recycle them weekly, especially in environments where I don’t have a monitoring tool and/or I do daily log checks on production environments ensuring no issues.
I also increase the number of error logs retained beyond the default. If someone were to compromise a SQL environment, they could theoretically run sp_cycle_errorlog the right number of times and erase some of their tracks. Increasing the count helps alleviate that. It’s not ironclad, but it is one more way to harden SQL Server, or at least preserve more forensic ability.
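Both the recycle and the larger retention count can be scripted. The count of 30 below is just an example, and the registry write mirrors what the SSMS “Configure SQL Server Error Logs” dialog does:

```sql
-- Cycle both error logs (schedule in an Agent job, e.g. weekly).
EXEC sp_cycle_errorlog;
EXEC msdb.dbo.sp_cycle_agent_errorlog;

-- Keep 30 error logs instead of the default 6.
EXEC master.dbo.xp_instance_regwrite
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs',
     REG_DWORD,
     30;
```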
DBCC CHECKDB
I don’t like database corruption. It doesn’t happen much, and I take frequent backups to help limit potential data loss. Corruption seems to become rarer with each new release of SQL Server. But… it happens.
I like the comfort of knowing I check my databases for corruption as frequently as I can. I will typically set up a job to perform a “DBCC CHECKDB WITH NO_INFOMSGS” on all of my databases. While DBCC CHECKDB can run concurrently with users in the system, I like to pick low-use or no-use windows, especially for larger databases. I try to run it at least weekly – the more frequently, the better.
This will help my chances of detecting corruption before a user does.
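A bare-bones sketch of such a job, looping over all online databases (real maintenance scripts add error handling, logging and scheduling):

```sql
-- Run DBCC CHECKDB against every online database.
DECLARE @name sysname;
DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE state_desc = N'ONLINE';
OPEN dbs;
FETCH NEXT FROM dbs INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC (N'DBCC CHECKDB ([' + @name + N']) WITH NO_INFOMSGS;');
    FETCH NEXT FROM dbs INTO @name;
END
CLOSE dbs;
DEALLOCATE dbs;
```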
Wake me up, as you go-go
(Is it wrong to paraphrase a Wham! song in public? I grew up partly in the ’80s, but I digress…)
What kind of monitoring do you use at your company? There are a lot of great tools out there to use. There are even homegrown approaches you can find on SQLServerPedia or in articles at SQL Server Central.
There are pros and cons to many of the tools and some quick web searches will show you a lot of these. The point is – use something!
Don’t rely on the users to tell you when you have a problem or when a job fails. Before I hand an environment over to a project team or users, I like to know that I will find out if a job runs too long, a job finishes too quickly, the CPU is pegged for a substantial amount of time, the SQL Services become unavailable, etc.
It bears restating: Discover your problems before your users report them to you. (Or at least have a chance to do so!)
Those are the basics I like to focus on when installing SQL Server. I am sure there are items that others look at as well. Please feel free to share them in the comments below. Thanks for reading the series and happy installations!
If you have any problems, feel free to leave a comment – I check regularly and have no problem helping you out with an issue during installation with some advice, clarification or a point in the right direction.
Some Related Thoughts
- While talking about planning, do you ask potential or new vendors any questions about their database needs?
- Do you know about all of the SQL installations in your network? As a DBA, you should.
- Are you a new DBA? Maybe a sysadmin wearing a DBA’s hat? Here are some tips on where to start.
- Doctors, chefs and pilots can teach us DBAs a thing or two about checklists, recipes and algorithms. They can be very useful in an installation or setup process.
- I used to hate documentation. I now see it as something to help me get help and have someone else do the next installation. Embrace documentation.
- Part one of this series – preparing for SQL Server installation by planning.
- Part two of this series – securing and testing your SQL Server instance after installation.
Want more content like this? Subscribe to this blog’s feed in your favorite feed reader and stay up to date with tips and tricks. I like to share techniques and free tools I use to help me with my day job.