Art of the DBA

January, 2012:

A Temporary Configuration

A couple weeks ago, I participated in T-SQL Tuesday #26 with a trick to dynamically build a script for doing your restores. In the spirit of that laziness, I wanted to share with you another script I have for configuring tempdb. There’s not much special here and you could use any other scripting language to get the same result, but the idea is to automate the bulk of the task so that you can save yourself some work as well as create consistent processes for your database configurations.

--Configure TempDB
--Turn off nasty rowcount
SET NOCOUNT ON

--Declare some variables
declare @files int,
  @path varchar(max),
  @totalsize_gb float,
  @v_sql varchar(max),
  @filesize_mb int,
  @logsize_mb int,
  @currfile int

--Set these to configure tempdb
set @files = 8
set @path = 'T:\TempDB\'
set @totalsize_gb = 40

--script calculates individual file sizes
select @filesize_mb=FLOOR((@totalsize_gb*1024)/(@files)),@logsize_mb = FLOOR((@totalsize_gb*1024)*.25)

--Build your config script
set @v_sql ='use tempdb
dbcc shrinkfile (tempdev,100)

alter database tempdb
modify file
(NAME=tempdev,
FILENAME='''+@path+'tempdb.mdf'',
SIZE='+CONVERT(nvarchar(10),@filesize_mb)+'MB,
FILEGROWTH=512MB,
MAXSIZE=UNLIMITED)

alter database tempdb
modify file
(NAME=templog,
FILENAME='''+@path+'templog.ldf'',
SIZE='+CONVERT(nvarchar(10),@logsize_mb)+'MB,
FILEGROWTH=512MB,
MAXSIZE=UNLIMITED)

'
set @currfile = 1

while @currfile < @files
begin
set @v_sql = @v_sql + 'alter database tempdb
add file
(NAME=tempdb'+CONVERT(nvarchar(10),@currfile)+',
FILENAME='''+@path+'tempdb'+CONVERT(nvarchar(10),@currfile)+'.ndf'',
SIZE='+CONVERT(nvarchar(10),@filesize_mb)+'MB,
FILEGROWTH=512MB,
MAXSIZE=UNLIMITED)

'

set @currfile = @currfile+1

end

select @v_sql = @v_sql + char(10)+'--Verify your configuration!'+char(10) + 'select name,size/128.0 [size_mb],physical_name from sys.database_files'

print(@v_sql)

PRINT '/***RESTART SERVER***/'

There are three things we want to declare up near the top: the number of files, the location of the files, and the total size of the data files. Without getting into the larger discussion of how many files you should have (for those wondering, start with Paul Randal’s(b|t) blog post on it), this script will help you make all your files consistent.

Some other notes about things I have set:

  • The total file size is the total data file size. The log file size is in addition to this, so plan accordingly.
  • I’ve got the log file sized at 25% of the total data file size (see the worked numbers after this list).
  • I’ve got autogrowth on all files set to half a GB (512 MB).
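
To make the math concrete with the sample values in the script (8 files, 40 GB total), each data file works out to FLOOR((40*1024)/8) = 5120 MB (tempdev plus seven .ndf files), and the log file to 40960 * 0.25 = 10240 MB. In other words, plan on roughly 50 GB of space on the drive before any autogrowth kicks in.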

These settings are completely personal preference, so your mileage may vary. If you have different practices, by all means implement those. The point here, as with my RESTORE script, is to pre-build most of your configuration script so that you only have to tweak a couple things.

 

P.S. It should be noted that I call out a DBCC SHRINKFILE on tempdb.  The idea of this script is that you’re running it either while configuring a server or during a maintenance window.  Shrinking a live tempdb can cause corruption, so only execute this script when you’re planning on restarting the server and rebuilding tempdb as part of your configuration.  (Thanks to Vicky Harp (b) for reminding me about that.)

Monitoring – Where to start?

It’s real easy in the tech industry to jump in and start slinging code or writing configs.  Most of us learn this way, throwing technological spaghetti up against the wall and seeing what sticks.  It’s not what we want to do, though, when we’re looking to put something long-lasting in place.  We need a plan for understanding and measuring the health of our SQL Servers, something we can take to the check writers to show them how we’re doing and give them measurable indicators of success.  This is where Service Level Agreements (SLAs) come in.

Ah, you were afraid I was going to say that?  I know for some people, “SLA” is as bad as any other four-letter word.  I hear it all the time: “I’ve been doing this for a long time, I know what I need to do.”  And it’s true, much of our work comes naturally: how much space do we have on disk, what’s our index fragmentation, is SQL Server up or down, and so on.  The SLA, however, is what we can use to not only communicate to our users what we’re watching, but also help control things so we’re not waking up in the middle of the night because a developer dropped a table.

SQL Server high (level)

We need to keep in mind that most of our management doesn’t care about DBCC checks and backup strategies, they just want to know that the database is up and working.  We’ll take care of the details.  Our SLAs provide a framework that helps our users define operational standards without getting lost in the weeds.  My approach is to outline this by breaking my SLA structure into 3 general areas:

Availability: This is the easy one, are your databases or servers up and available?  When we define this SLA, we’re telling our users that their data will be available when they need it.  Some of the elements that fall under this are:

  • Is a database available 24/7 or only during business hours?
  • Do we need high availability?  What sort of service interruption can the database support?
  • In the event of a disaster, how much data will we lose (Recovery Point Objective) and how long will we be down (Recovery Time Objective)?
  • What sort of resource overhead, such as free space on disk, is necessary to keep the database up and running?
  • When will the database be down for maintenance and how long can that maintenance be?

Performance: Now we get into a little grey area, because we’re telling our users how well the system will perform.  It’s tough to put any parameters around “the database is slow”, but we can put some hard numbers around this, primarily by looking at the server health metrics (a quick DMV spot check is sketched just after the list below).  Some things to consider are:

  • Acceptable query response times for standardized processes.
  • Thresholds for CPU, memory, and disk I/O.
  • Blocking process monitoring and resolution.
  • Index and Statistics maintenance.
  • Batch requests per second.
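
As a rough illustration (a sketch, not a formal part of the SLA), here’s a bit of T-SQL for spot-checking two of these from the DMVs: sessions that are currently blocked, and the raw Batch Requests/sec counter.  Whatever thresholds you commit to in your SLA are what you’d compare these numbers against.

--Sessions currently blocked by another session
select session_id,
  blocking_session_id,
  wait_type,
  wait_time
from sys.dm_exec_requests
where blocking_session_id <> 0

--Raw value of the Batch Requests/sec counter (cumulative, so sample it
--twice and divide the difference by the elapsed seconds to get a rate)
select counter_name,
  cntr_value
from sys.dm_os_performance_counters
where counter_name = 'Batch Requests/sec'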

Support: And now it gets even fuzzier.  There are always going to be drive-bys and break/fix requests.  Whether or not you have a service ticket system or some other method for tracking database team requests, you’ll want to define your response and resolution times as well as work with your customers to determine what’s acceptable from a customer service perspective.  Also, if you have custom code or application procedures, you’ll probably want to document what level of response items like failed SQL Agent jobs, ETL loads, and similar processes will get.

Not being all things

The next thing to define is your tiers of service.  Just like you shouldn’t be all things to all users, you won’t be able to support all your applications in the same way.  By defining your different application types, you can set reasonable expectations for your support.  This area is fairly open, because you might have only two types of applications, or you might have four or five.  A sample breakdown might look like this:

  • Critical 24X7 applications – Applications in this category must be up 24 hours a day, 7 days a week and be able to handle continuous transaction load.  Interruption or degradation of service is considered a priority and requires immediate response.
  • High demand business applications – Applications in this category need to be available 24 hours a day during the business week, midnight Monday to midnight Friday.  Interruption of service requires quick response.
  • Standard business applications – Applications in this category should be available during standard business hours, Monday through Friday.  Interruption of service should be limited during business hours.
  • Lower environments – All non-production environments should be maintained and supported to allow development resources to complete projects.  Downtime should be kept to a reasonable minimum during working hours.

Your tiers will likely be driven by availability needs first.  The key is being able to categorize your application needs so that you can respond appropriately.  This is also to protect you, because if you’re burned out trying to get development boxes taken care of, you won’t be able to handle it when the web server handling your company’s sales goes down.

Agreeing on something

The key with creating these frameworks is to remember it’s an agreement between you and your users or customers.  You want to work with them to create reasonable expectations so that if something does go haywire, both sides will know how the issue will be handled and what sort of time frames are in place for resolution.

The other piece of this being an agreement is your users or customers will need to meet you part way.  If you have SLAs defined for a highly available 24X7 application, you want your developers to make sure that the application can meet those needs.  It’s key that your users understand their stake in providing good service, such as calling to report errors or ensuring that an application can handle a database failing underneath it.

It’s a start

I’ll be honest, I’m still working on fleshing much of the above out as I roll out an SLA framework at my company.  What you see here will undergo some tweaking and fine tuning over the next few months as I get some more definition around it, but the intent is to follow the basic structure I’ve laid out here.  There will be more specifics under each of the three areas that will be shared as they develop.  If you have anything to contribute or are curious about, please feel free to leave your input or questions in the comments and I will respond as I can.

SQL Saturday #104 – Colorado Springs (#sqlsat104)

If you’ve been reading my blog, you probably saw the posts I made about SQL Rally, the PASS Summit, and a couple SQL Saturdays.  It is the SQL Saturdays, in particular, that really show the strength of the SQL community.  For those unaware, SQL Saturday is a series of regional events, put on by local user group chapters and supported by the Professional Association for SQL Server, that provide a day of free training and networking.  It’s a great chance to connect with others who do what you do, along with learning how to do your job better.

SQL Saturday #104 in Colorado Springs was a stellar example of what these events offer and ended up being a great way to start off a new year of career development.  Here’s a brief overview of some of what went on:

  • Scheduled networking activities included as part of the presentation tracks:  Many events will only have a dinner or some after party, but I thought it was a nice way to break up the sessions by including games and other opportunities to network with other data professionals.
  • Regional and national speakers on a variety of topics:  I thought the speaker mix was fantastic, with many “big name” speakers such as Karen Lopez(b|t), Grant Fritchey(b|t), and Tom LaRock(b|t) (amongst others), but also the local talent, including: Marc Beacom(b|t), Doug Lane(b|t), Jason Horner(b|t), and…..me!
  • Professional resume reviews: Face it, job hunting sucks.  We all have to do it sometime and it’s rare that we get a chance for someone who deals with resumes all day to help us with ours.  I think the organizers of #104 scored a coup getting professionals to come in and go over resumes with people.
  • Free precons: Thanks to the sponsors of #104 for helping out with this one.  It was great to spend an entire day learning concentrated SQL info from Glenn Berry(b|t).  I know a lot of other folks benefited from this.  Not many SQL Saturdays can squeeze these in, but I’m glad the Springs folks made it happen.

As for my experience, I had a couple great highlights.  First off, I got to present again, and with a whole new presentation.  It went very well and I got great feedback.  There were also some great sessions that I learned from (have I mentioned how awesome Grant Fritchey is?).  Catching up with those of my SQL family who flew in for the event is always great, because sometimes having friends in other states sucks (and going skiing with these folks was AWESOME!).  Finally, being immersed in the SQL community gives me such a great feeling, both from being able to contribute and all the stuff I learn from it.

If you haven’t ever been to a SQL Saturday, go.  Keep an eye on the website and if there’s an event within easy travel distance, I can’t recommend enough that you get there.  It’s more than learning about SQL Server, it is about getting connected to SQL Server and the people (just like you) who work with it day in and day out.  It is finding out the gotchas and hidden gems within the application that will make your life easier.  It’s about boosting your career, knocking yourself out of that rut, and becoming “the DBA” instead of just a DBA.

I want to thank everyone who made this event possible (such a long list!), but especially Chris Shaw(b|t) and Jeremy Lowell(b|t), the engines that made this awesome event happen.  Keep up the great work!

T-SQL Tuesday(#tsql2sday) #26 – Lazy Restore Commands

I missed out on T-SQL Tuesday #25 (Tips ‘n Tricks) on the first go around, so I’m glad that with #26 I get another crack at it. Mine is pretty simple, but has saved me a lot of time when doing database restores. Whether I’m getting a database back online or (more likely) moving a database to another instance, I don’t want to be bothered with digging around to get my restore going.

As a DBA, I am two things:

  1. I hate using the GUI. Clicking through countless windows is for the birds and I would much rather type three commands than click ‘Next’ three times.
  2. I’m lazy. So those three commands? I write them once and save ‘em to a file so I can run them over and over and over….etc. Automation FTW!

So, restores. We all do them and they can get irritating. I know one of the most annoying things is relocating the files with MOVE when you’re bringing the database up on a different server. This is why a lot of folks use the GUI, because SQL Server can handle that behind the scenes and, if you really want, you can script it out. Well I’ve got another way.

You know you can use RESTORE FILELISTONLY to get the list of database files within a backup. Why not take that a step further and capture the output in a table? Then, once it’s in a table, we can use a little bit of SQL coding to give us a template to generate our restore command. Take a look:

--Turn off annoying rowcount
SET NOCOUNT ON

--Some variables
declare @v_restore varchar(1000)
declare @v_backup varchar(1000)
declare @v_sql varchar(max)
declare @datadir varchar(1000)
declare @logdir varchar(1000)

--Set backup file location, database name
set @v_backup = 'C:\demo\test.bak'
set @v_restore='Test_demo'
set @datadir = 'C:\Restore\Data'
set @logdir = 'C:\Restore\Log'

--Storage table

declare @restorelist table
(LogicalName nvarchar(128)
,PhysicalName nvarchar(260)
,Type char(1)
,FileGroupName nvarchar(128)
,Size numeric(20,0)
,MaxSize numeric(20,0)
,Fileid tinyint
,CreateLSN numeric(25,0)
,DropLSN numeric(25, 0)
,UniqueID uniqueidentifier
,ReadOnlyLSN numeric(25,0)
,ReadWriteLSN numeric(25,0)
,BackupSizeInBytes bigint
,SourceBlockSize int
,FileGroupId int
,LogGroupGUID uniqueidentifier
,DifferentialBaseLSN numeric(25,0)
,DifferentialBaseGUID uniqueidentifier
,IsReadOnly bit
,IsPresent bit
,TDEThumbprint varchar(100)) --Be careful, this last field (TDEThumbprint) isn't in 2k5

--Capture the file list
insert into @restorelist
exec('RESTORE FILELISTONLY FROM DISK='''+@v_backup+'''')

--Build your restore command
select @v_sql = 'RESTORE DATABASE '+@v_restore+' '+char(10)+'FROM DISK=''' +@v_backup+ ''''+ CHAR(10)+'WITH '
select @v_sql = coalesce(@v_sql,'')+'MOVE '''+logicalname +
''' TO '''+CASE when type='L' then @logdir else @datadir end +'\'+ right(physicalname,charindex('\',reverse(physicalname))-1)+''',' + char(10)
from @restorelist

--display the restore command, trim trailing comma and char(10)
print substring(@v_sql,1,LEN(@v_sql)-2)

So that’s a lot of stuff! Well, not really. If you look at it, most of the script is taken up defining the file list table. Change your variables at the top, run the script, and copy the output into another window…BAM! There’s your restore command. And if it’s not perfect, you only need to make one or two changes to the output to clean it up.
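
For reference, with the variables set above the printed command would look something like this (the logical and physical file names here are hypothetical, since they come from whatever is actually inside the backup):

RESTORE DATABASE Test_demo
FROM DISK='C:\demo\test.bak'
WITH MOVE 'Test_demo' TO 'C:\Restore\Data\Test_demo.mdf',
MOVE 'Test_demo_log' TO 'C:\Restore\Log\Test_demo_log.ldf'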

This is not the only sproc output you can capture. The biggest lesson from this trick is you should think about other system stored procedures and look for opportunities where you can grab outputs to make your life easier. Help yourself look like a hero and get more sleep by taking advantage of what’s in SQL.
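
As one quick illustration of the same insert/exec trick against a different command (a sketch I’m adding here, assuming you just want to see which transaction logs are fullest), you can dump DBCC SQLPERF(LOGSPACE) into a table variable and query it:

--Capture the output of DBCC SQLPERF(LOGSPACE)
declare @logspace table
(DatabaseName sysname
,LogSizeMB float
,LogSpaceUsedPct float
,Status int)

insert into @logspace
exec('DBCC SQLPERF(LOGSPACE)')

--Show the logs closest to full first
select DatabaseName,LogSizeMB,LogSpaceUsedPct
from @logspace
order by LogSpaceUsedPct desc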

Thanks to Dave Howard(b|t) for offering folks a second chance to share for T-SQL #26!

DBA Survivor: Learning how to rock out with databases

When we get a “SQL book” handed to us, it’s usually to solve a specific technical problem. Maybe you need to understand how to write a PIVOT statement, read a query plan, set up a failover cluster, or automate something with PowerShell. It’s a rare occasion when we come across a book that steps back from the daily tactical struggles and gives us a strategic view of what it takes to be a DBA and where to start when thrown into the shark-infested waters of corporate database administration.

Arriving with only a little bit of fanfare (and maybe a 2-3 angel choir) is DBA Survivor by notable #sqlfamily member Tom LaRock(b|t). It’s advertised by Tom as the book he wishes someone had handed to him when he first became “the DBA”. A fresh approach to career development, the book finds a nice middle ground between purely technical guides and generic career success compilations. Sure, there are some technical sections that talk about Dynamic Management Views, performance metrics, and backups, but the real meat is the “fuzzy” bits, such as instruction on defining daily checklists, writing Service Level Agreements, handling the work/life balance, and getting involved in the community.

While I enjoyed the entire book (all 171 pages!), several key parts stuck out for me. First is the discussion about Mr. Right versus Mr. Right Now. We’ve all seen the superheroes on our teams, the guys who drop everything to fix a problem (but don’t always fix it the right way). A former boss of mine called these folks “White Knights” or, as Tom labels them, “Mr. Right Now”. Contrast this with “Mr. Right”, who is rarely seen because when he’s on the case, things don’t break. He’s the DBA who’s proactive about attacking problems before they become crises, and when things do break he fixes them so they don’t break again. Too often we get caught up in the moment, trying to put out the immediate fires, and we lose sight of the long term. I’m glad that Tom takes the time to delineate between these two roles and emphasizes that, while we need to be Mr. Right Now sometimes, our goal needs to be Mr. Right.

Secondly, it’s nice to see a database book that instructs you on the importance of disconnecting from work. We’ve all been there: 60+ hour weeks, all-nighters, the on-call shifts from hell. The IT industry can run people ragged and burn the tech love right out of them if they’re not careful. I’m glad that Tom includes a chapter on maintaining that balance between your job and your life, because this is another area that so often gets lost in the moment. Steve Jones(b|t) often talks about life being too short to work a job you don’t enjoy, and this is very much an extension of that philosophy. You can be a great DBA and not work yourself to the bone; it’s just a matter of understanding when you should put the BlackBerry down.

Finally, it should be no surprise that Tom has a chapter on community. I’ll wager most of you reading this already understand the benefits of the PASS organization and local user groups, but many folks picking this book up may not. By including a discussion on connecting through user groups and professional organizations, Tom offers the new DBA an avenue towards excellence. I’ve seen the benefits of PASS and keep catching myself saying “If I had only gotten into this years ago….” If I had read Tom’s book then, I probably would have.

Now many folks probably think this is just a book for the junior DBA, for someone who’s just getting started, but I know this book has value for data professionals of all levels. This book is not a detailed guide or roadmap for solving specific problems, but a series of highway signposts to get people headed in the right direction. Maybe you are a fresh DBA, looking to get into the industry or just survive your first week on the job. Or maybe you’re like me, a career DBA who is looking to refocus and find that “big picture view”. No matter how you got here, DBA Survivor is an excellent starting point for the rest of your database career.