Or how I learned to stop asking so much and start trying…
Empirical, as defined by Webster: 1) originating in or based on observation or experience; 2) relying on experience or observation alone, often without due regard for system and theory; 3) capable of being verified or disproved by observation or experiment.
I think empirical thought and evidence are far too often ignored or bypassed in technology. With quick answers available through newsgroups, forums, points-based answer exchanges, and blogs running rampant (present company included 😉 ), folks are happy to take a quick answer and are often quick to apply it… even in a production environment.
I wrote about my pet peeve of rushed troubleshooting in my first blog post. Ignoring empirical evidence is not a great troubleshooting strategy, unless all options have been exhausted and time is of the essence. Even then it’s a tough argument, but I can see the “it’s broken no matter what, the problem looks like this, and it is our only hope” approach and have even taken it a couple of times in the past. This can be minimized by having good test environments (ones that mimic production as closely as possible) and rigid controls on releases.
Empirical Evidence in Development
I am sure you have also found code that was compliments of an internet search. That’s not all bad if it gives you a pattern to solve a problem. It is entirely bad if it was thrown in without understanding why you used it or how it helps you, just because the results seemed right. You do yourself and your users a disservice if you just grab and apply.
Instead, try to understand the issue. Treat each opportunity where you are working with something new as a chance to gain a deeper understanding of the product you work with; experiment with a couple of approaches and see the effects yourself. If most developers tried this approach, tested their methods, and looked at query plans for different syntax, there would be less performance tuning work needed in the SQL world.
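As a quick sketch of what that experiment might look like (the table and column names here are made up, and the reads and plans you see will depend entirely on your own schema and indexes):

```sql
-- Compare two equivalent ways of writing the same query.
-- In SSMS, also turn on "Include Actual Execution Plan" (Ctrl+M) first.
SET STATISTICS IO ON;

-- Version 1: EXISTS with a correlated subquery
SELECT c.CustomerID, c.Name
FROM dbo.Customers AS c
WHERE EXISTS (SELECT 1 FROM dbo.Orders AS o
              WHERE o.CustomerID = c.CustomerID);

-- Version 2: JOIN with DISTINCT
SELECT DISTINCT c.CustomerID, c.Name
FROM dbo.Customers AS c
JOIN dbo.Orders AS o
  ON o.CustomerID = c.CustomerID;

SET STATISTICS IO OFF;
-- Now compare the logical reads and the plan operators for each version
-- yourself instead of taking someone's word for which is "faster."
```

The point isn’t which version wins here; it’s that you measured it yourself on your data.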
Empirical Evidence vs. Tips From The Internet
I often see questions on the newsgroups or forums where people just want an answer. They don’t want to know why or how something works, and they don’t want to know how to arrive at an answer; they want the answer. I can see the rationale when something is down, or when it is a question without a lot of grey area. Even so, how do they know it is the best advice if they don’t test it? How do they know the person answering knew what they were talking about? How can they explain it to their manager and users? How can they understand why they had the problem in the first place?
For example, a lot of the search engine traffic to my blog (it’s still new to me, so I still enjoy looking at those stats) comes from questions like “How do I shrink my database” or “SQL Server Shrink Transaction Log”. That’s because I posted about not using shrink. Had I posted instructions on shrinking a database instead, they would have grabbed the info, taken it back, and done it. I have seen answerers on newsgroups suggest tactics that break a log chain and ruin recoverability. I am sure they are well-intentioned, but the folks taking those answers and applying them without thought, understanding, and testing are losing in the end.
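To make that concrete, here is the kind of well-meaning “fix” you’ll see suggested (SQL Server 2005-era syntax; the database name is hypothetical, and please don’t run this anywhere you care about recoverability):

```sql
-- A DANGEROUS pattern often suggested online for a "full" transaction log.
-- Truncating the log without backing it up breaks the log backup chain:
BACKUP LOG MyDatabase WITH TRUNCATE_ONLY;      -- point-in-time recovery is now gone
DBCC SHRINKFILE (MyDatabase_Log, 1);

-- Until a new full (or differential) backup is taken, you cannot restore
-- past the point where the chain was broken:
BACKUP DATABASE MyDatabase TO DISK = 'C:\Backups\MyDatabase_full.bak';
```

Someone who tested this in a sandbox and then tried a point-in-time restore would discover the problem themselves; someone who pasted it into production finds out during a disaster.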
So what are you saying we do?
- Try it yourself – Want to know which SELECT statement approach performs better? Try them and see. Look at the query plans of each. If you don’t understand the operators, learn about them from Books Online or the Inside SQL Server 2005 series (Itzik’s two books really go in depth here).
- Seek to understand a solution even just one level deeper than you need – if you get an answer, ask a follow-up question of yourself and of your resources; try to understand the how, the why, and the mechanics.
- Learn a bit more about the internals – For me, seeking to break everything down to the basic level meant I wanted to learn about the internals of SQL Server. I have learned a lot but still have TONS to learn. The point is, I rarely make a wild stab at something; I seek to understand the problem and the parts of the solution, and I end up learning a lot of extra information in the process.
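One low-risk way to study those plan operators is to ask SQL Server for the estimated plan without executing anything at all (the object names below are hypothetical):

```sql
-- Return the estimated execution plan as XML without running the query.
SET SHOWPLAN_XML ON;
GO

SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.OrderDate >= '20080101';
GO

SET SHOWPLAN_XML OFF;
GO
-- The XML names every operator in the plan (Index Seek, Nested Loops, etc.).
-- Look each one up in Books Online to understand what it actually does.
```

Since nothing executes, this is safe even against larger databases while you’re learning.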
It really comes down to what you want to be and where you want to go. If you want to be growing and learning constantly, then experimenting and digging deeper is a great approach.
Just balance it 🙂 For me, I can often go off the deep end learning the How’s and Why’s to exponential levels. You know you have the same problem if a visit to Wikipedia with a simple question ends up with 30 browser tabs open and 3 hours of your life erased.
I would add to this: "just try it in development." Make sure to have a development environment – it doesn’t need to perfectly reproduce production, but you need the same schema and the same quantity of data. Don’t go willy-nilly trying things in production.
Indeed, Brent –
I was actually suggesting trying things in development, with testing. I realize now that I missed pointing out that important caveat. I don’t like anything being tried in production unless there is no other alternative. To which I add another caveat: the cases where there truly is no other alternative should be rare. So rare that you can remember each instance of having to do it, and that procedures and environments were changed as a result.
One thing I maintain as a practice is a "ProofOfConcept" database, on a dev box that I can rebuild if I need to. I test stuff in there. Build solutions for online questions, etc., but also test things to make sure I understand them myself. It’s a practice I highly recommend.
Very good idea. I call it a sandbox, but it’s the same concept: an environment you can tear down and rebuild often and easily. Test to your heart’s content and learn. Great point, and thanks!
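A minimal sketch of such a tear-down-and-rebuild sandbox (the database name and everything else here is made up; run it only on a dev instance you can afford to wipe):

```sql
-- Drop and recreate a throwaway sandbox database from scratch.
IF DB_ID('Sandbox') IS NOT NULL
BEGIN
    ALTER DATABASE Sandbox SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE Sandbox;
END
GO

CREATE DATABASE Sandbox;
GO

USE Sandbox;
-- ...create test tables, load sample data, and experiment freely here...
```

Because the whole thing is one script, there is never a reason to hesitate before breaking it.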