Art of the DBA

T-SQL Tuesday

T-SQL Tuesday #39 – A PoSH Post #tsql2sday

This month’s T-SQL Tuesday is graciously hosted by Wayne Sheffield(@DBAWayne) and is a topic near to my heart: Powershell.  I’ve blogged about Powershell before, occasionally sharing some code.  I’m excited to share some more and fit it into the blog party.

I hate clicking buttons in Windows.  It’s inefficient for me, especially considering I grew up typing commands at a prompt.  Also, I’m one of those lazy DBAs, so when I get a chance to kick off a script to do work while I’m drinking some Earl Grey, I’ll jump at it.  This is not only why I’m a fan of Powershell, it’s also why I’m a huge proponent of unattended SQL installs.

In my environment, I have a six node failover cluster which hosts all of the company’s SQL instances.  This means that whenever I install a new instance on the cluster, I need to perform a cluster node install 5 times.  Suffice it to say, this is tedious.  So I said to myself:

“Self, isn’t there a better way to do this?”

“Quite right!  Unattended installs are easy, we just have to run the node install once, take that .ini to the other machines, and we can do 4 command line installs!”

“I don’t know, self.  It’s not like much is changing between instance installs, so running setup twice (once for the full install and once for the first cluster node) still seems like too much work.  To the PoSH mobile!”

See, a lot of my scripting experience comes from the *nix world.  Search and replace logic is pretty easy using sed and awk.  The beauty of Powershell is that, since everything is an object, I don’t even have to mess with that funky syntax.  I can just leverage Powershell’s -replace operator against a regular ol’ string variable.  This led me to the following solution:

  • Create a template .ini file for adding a node.  This template contains all my basic node options and stubs for the replaceable values.
  • Write a Powershell script that takes that template and some inputs from me, does some basic search and replace, then spits out a custom .ini for my cluster node install.
  • Use that .ini with the /CONFIGURATIONFILE parameter in a command line install (a sample command is sketched below) and *bam*.
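For reference, the add-node install itself then boils down to a single command line along these lines. This is a sketch, not pulled from my actual runbook: the flags shown (/QS, /ACTION, /IACCEPTSQLSERVERLICENSETERMS, /CONFIGURATIONFILE) are standard SQL Server setup parameters, the .ini name is the one the script below generates, and it assumes the service account passwords are included in that generated .ini.

#Run from the SQL Server installation media on the node being added
.\setup.exe /QS /ACTION=AddNode /IACCEPTSQLSERVERLICENSETERMS /CONFIGURATIONFILE="FOO_BAR01_addnode.ini"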

I won’t bore you with the whole template file; you can download it and look at it here.  The real “magic” (such as it is) is in the following script (also downloadable):


<#
.SYNOPSIS
   Creates an .ini for adding a cluster node based off of the add_node_template.ini
.DESCRIPTION
   Takes the add_node_template.ini and performs a text replace to insert passed values
   for server, instance, and service account information.
.PARAMETER server
   Server name of the virtual network name
.PARAMETER instance
   Named instance name
.PARAMETER sqlsvc
   SQL Server service account
.PARAMETER sqlpwd
   SQL Server service password
.PARAMETER agtsvc
   SQL Server Agent account
.PARAMETER agtpwd
   SQL Server Agent password
.EXAMPLE
   .\Create-NodeIni.ps1 -server "FOO" -instance "BAR01" -sqlsvc "EMINENT\sqlsvc" -sqlpwd "password" -agtsvc "EMINENT\sqlasvc" -agtpwd "password"
#>

param([parameter(Mandatory=$true)][string] $server,
      [parameter(Mandatory=$true)][string] $instance,
      [parameter(Mandatory=$true)][string] $sqlsvc,
      [parameter(Mandatory=$true)][string] $sqlpwd,
      [parameter(Mandatory=$true)][string] $agtsvc,
      [parameter(Mandatory=$true)][string] $agtpwd)

#Build the output file name from the server and instance names
$output="$server`_$instance`_addnode.ini"

#Read the template, swap in the passed values, and write out the new .ini
(Get-Content "add_node_template.ini") | Foreach-Object{
	$_ -replace "<SRV>",$server `
		-replace "<INST>",$instance `
		-replace "<AGTACCT>",$agtsvc `
		-replace "<AGTPWD>",$agtpwd `
		-replace "<SQLACCT>",$sqlsvc `
		-replace "<SQLPWD>",$sqlpwd } | Set-Content $output

While there’s a lot going on here, it’s the last seven lines where the work gets done. The Get-Content cmdlet reads in the template file, and the result gets piped to the Foreach-Object cmdlet to process each line of the file. This works because Get-Content returns the file as an array of strings, one element per line. The script then executes several different replaces (wherever it finds matches for my stubs) on each member. The result is piped to the Set-Content cmdlet, which writes the new, post-replace array out to a new file.
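If the chained -replace calls look odd, here’s the same idea against a single throwaway string (a hypothetical template line, not one copied from my actual .ini):

#One line with two stubs
$line = 'FAILOVERCLUSTERNETWORKNAME="<SRV>" SQLSVCACCOUNT="<SQLACCT>"'

#Each -replace hands its result to the next one
$line -replace "<SRV>","FOO" -replace "<SQLACCT>","EMINENT\sqlsvc"

#Output: FAILOVERCLUSTERNETWORKNAME="FOO" SQLSVCACCOUNT="EMINENT\sqlsvc"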

It’s a simple trick, but with this I not only save myself a ton of time by generating files for SQL installs, I also make my installations consistent. Every node install is run with the same options and selections. Human error is reduced and increased tea time is provided. A future enhancement would be to update my templates so I can generate the initial SQL install .ini as well, but since I do so few instance installs, it’s not really worth the effort at this point. Plus, it looks like someone is already working on that.

Enjoy the rest of T-SQL Tuesday. Plenty of great people are contributing, and the beauty of Powershell is that there are so many ways it can be used. I hope that our scripts not only get you excited about learning Powershell yourself, but also give you some great ideas on how you can leverage it for your own success.

T-SQL Tuesday #36 (#tsql2sday) – Coolness factor

“Come in, sit down, I want to show you something cool.”

This (paraphrased) is what Kevin Kline(@kekline) shared as part of the Summit 2012 Women In Technology panel.  While the session was intended to talk about how we can bring more women in to technology careers, it really struck me as a fantastic descriptor of the SQL Server community in general.  Many fields and groups invite people in, but there’s some “thing” about the SQL Server community that makes it the most open and supportive technology community I’ve ever had the pleasure to participate in, and I think it’s this desire to share that sits at the core of our success.

The Summit is built on this sharing mindset.  From the speakers to the board members to the participants, the people who put this on are volunteers.  Hundreds of professionals hope to be selected to speak at this event, even if just for a lightning talk, because they have something cool to share.  People hang out in the halls just to talk about what can help the next guy out.  What’s especially noticeable is how the community reaches out to the first timers.  I know some folks were a little averse to wearing those ribbons, but if you had a first timer ribbon, alumni of the convention would reach out and make sure you felt welcome.

In many fields, people hoard knowledge as a means of job security and a way to emphasize their superiority over others.  In the SQL Server community, it’s more that we want others to have the fun we’re having.  We work in tech because it’s fun and cool, and we share our knowledge so others can see the cool stuff we do.  Because when all is said and done, if we’re all having fun, we’re all going further.

So come in, sit down, and let me show you something cool.

(This month’s T-SQL Tuesday is brought to you by Chris Yates(@YatesSQL).)

T-SQL Tuesday #33 Wrap-up (#tsql2sday)

Last week I had the opportunity to host this month’s SQL blog party, T-SQL Tuesday.  I hope folks enjoyed the topic (it seemed like they did), because experimenting is very much a core part of the learning process.  SQL Server is deep, with plenty of nooks and crannies.  To become proficient at it (let alone an “expert”), we have to keep pushing our boundaries and asking “What happens when….?”

I want to again thank everyone who participated.  Here’s a quick round up of the posts:

Rob Farley(b|t) peels back the cover on recursive Common Table Expressions (CTEs).  If you haven’t worked with them yet, CTEs are an incredibly useful tool in the SQL arsenal.  Using them recursively is just one example of how powerful they can be to a developer.

Jeffrey Verheul (b|t) chimes in with a big picture discussion on using cursors versus set based operations.  While we database folks prefer set based operations, sometimes it’s to our advantage to use a cursor.

Jim McLeod (b|t) jumps in with a great little way to use SQL Profiler to gain some insight into the workings of SQL Server by tracing Management Studio.  Not a lot of people are aware that almost everything that happens in Management Studio is a query that can be scripted out, and profiler lets us see that.

Oliver Asmus (b|t) shares some of his knowledge gained from putting together preventative maintenance tasks (a vital yet often overlooked part of being a DBA).

Aaron Bertrand (b|t) provides a cool trick to swap tables around using schemas.  A lot of SQL DBAs overlook the power of schemas in databases, but Aaron’s trick really gives us an efficient way to handle offline updates and other resource intensive tasks.

Nigel Sammy (b|t) pokes at the Transaction log with DBCC SQLPERF.  It’s important for DBAs to really understand their transaction log usage, and Nigel’s trick can give a DBA some real insight.

Tracy McKibben (b|t) gives us a trick using one of my favorite system views, sys.partitions.  There are times I lament not being able to see how long a query or process will take (I’d LOVE to have something like Oracle’s V$SESSION_LONGOPS view), but Tracy shows us how to use sys.partitions for insight into index rebuilds.

Chris Shaw (b|t) gives us some information on cluster failovers. While we want our operations to appear seamless to our users, as DBAs it’s important that we’re always aware of where our instances are living within our environment.

Rob Volk (b|t) really goes down the rabbit hole, using one of the Sysinternals tools to crack open SQL Server.  I’ve seen some folks do some really cool and powerful stuff with Sysinternals, so Rob’s post is a great way to introduce people to this awesome tool suite.  (Plus, how many guys even get to use the word “hekaton”?!?)

Jennifer Salvo (b|t) adds a little business intelligence to our topic, giving us a cool SSAS trick to manage currency conversions.  Businesses will ALWAYS care about their money, so having a good way to manage multiple currencies is a must.

Finally, first timer Jose Rivera (b|t) brings us full circle with his own post on using recursive CTEs to solve a tricky problem for drug costs.

(Oh, and look at this post if you want to see my contribution.  :D  )

I really appreciate everyone’s participation.  This is such a great part of the SQL Family and I hope that if you’re reading this you’ll consider participating and, perhaps, hosting a T-SQL Tuesday in the future.

T-SQL #33 – What happens if….? (#tsql2sday)

Sometimes a trick shot is the result of asking “What happens?” What happens if I draw off this ball, can I sink this other ball? What happens if I hit these two balls that are touching each other? The premise is that you look at a situation and explore the result.

For my SQL “trick”, I asked “what happens if you update a value to the same value?” It’s a question that gets asked by a lot of folks, but I wanted to explore the tools in SQL Server to answer that question for me. We can read about someone else figuring this out, but I think my best learning is done when I puzzle out a problem on my own.

I created two simple scenarios: an update of a column to itself and an update of a column to itself plus one. Fairly simple stuff:

create table test
(cola int not null identity (1,1),
colb int not null default 0,
colc varchar(100),
constraint pk_test primary key clustered (cola))

insert into test(colc)
select top 1000 a.name
from sys.objects a,sys.objects b

--Reset, write all pages out from buffers
checkpoint
dbcc dropcleanbuffers

--These tests need to be run separately!

--Test 1
update test
set colb = colb

--Test 2
update test
set colb = colb+1

(Note: the checkpoint and dbcc dropcleanbuffers are there to make sure all dirty pages have been written out and the buffer pool is clean before I run either test.)

My first stop was DBCC PAGE, to see whether looking at the raw page would show if it had been modified. Unfortunately, this information wasn’t there, though I did get a lot of exposure to reading the output of the command. Not sure where to go next, I reached out to the Twitterverse on #sqlhelp. I was directed towards two items:
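For the curious, that poking around looked roughly like this. The file and page IDs here are made up for illustration; DBCC IND lists the real ones, and trace flag 3604 routes DBCC PAGE output to the query window:

--List the pages allocated to the test table (clustered index) in the demo database
DBCC IND ('junk', 'test', 1)

--Send DBCC PAGE output to the client instead of the error log
DBCC TRACEON (3604)

--Dump one of those pages: database, file id, page id, print option 3
DBCC PAGE ('junk', 1, 288, 3)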

  • sys.fn_dblog – Undocumented function that shows all the individual actions in your transaction log.
  • sys.dm_os_buffer_descriptors – DMV that lists information on all the pages in your buffer pool

Querying both of these tools gave me the answer to “What happens?”, but in two different ways. When I looked at sys.fn_dblog, I could see no actual transactions when I updated the column to itself, but a transaction for each update when the value was actually modified:

--check for log activity
select [Current LSN],Operation,Context,[Transaction ID]
from sys.fn_dblog(null,null)

Sys.dm_os_buffer_descriptors has a field labeled ‘is_modified’ which is set to 1 if the page has been changed. When I queried for pages with is_modified equal to 1, I saw further evidence of what fn_dblog showed me:

--List pages in the pool that are modified
select * from sys.dm_os_buffer_descriptors
where is_modified = 1 and database_id = db_id('junk')

In both cases, I was able to see clearly what was going on under the covers and that SQL Server is actually intelligent enough to save some work when no work is actually being done.

Of course, the funny thing about trick shots is someone usually can do it better. Shortly after I figured this all out on my own, I found this fantastic blog post by Paul White (b|t) showing all this with a little more detail. However, what’s important is what you can learn by doing things yourself. It may take you time to reinvent the wheel (something you might not be able to afford), but when you do it yourself you learn a lot about how a wheel works.

Thanks for joining me this month, where I not only get to contribute but also host T-SQL Tuesday. I’m excited to see what other things our peers can teach us about the things they’ve learned by playing around with SQL Server, trying to see “What happens?”

Invitation to T-SQL Tuesday #33 – Trick Shots (#tsql2sday)

For a while, I was in an amateur pool league.  No, not the one involving water and swimming, but where you try to sink balls into pockets.  It was a lot of fun and is a challenge both for your motor skills as well as your strategy.  I still shoot from time to time, as well as hang out with my old pool buddies.

One thing guys would get into is trick shots.  Two and three rail bank shots, masse shots, or jumping the cue ball to hit the target.  Most of these shots weren’t tournament legal, but they were fun to try and a nice way to impress the ladies.  More than that, they were a tool to teach you the physics of your pool game.  You could see how throw and English could affect your shot, or how balls would behave after impact.

Just like so many other things I do in my life, the trick shot lessons translate over to SQL Server.  How many times have we built something neat or puzzled out a particular bit of logic that, while it may not have been particularly useful, taught us about how SQL Server behaves?  This month’s T-SQL Tuesday is all about this and the assignment is two-fold:

  1. Show us a cool trick or process you developed, maybe a DMV you used or some reporting logic you created.  It doesn’t have to be useful, just something that you thought was pretty neat.
  2. Tell us what you learned from this trick.  Is it something about an oddity in SSRS?  Maybe with the query processor?  Whatever you did, tell us how it gave you insight in to how SQL Server works.

Now with that out of the way, keep in mind the ground rules for T-SQL Tuesday:

  • Posts must be made between August 14, 2012 00:00:00 GMT and August 15, 2012 00:00:00 GMT.
  • Your post should contain the T-SQL Tuesday logo and a link back to this post.
  • If you don’t see your post in trackbacks (give me ~30 minutes to approve), post a link to your post in the comments of this post.

If you’re on Twitter, you can follow T-SQL Tuesday at #tsql2sday.  Follow it, it’s a great way to see what everyone is writing about!  And if you’re interested, Adam Machanic(b|t) handles the T-SQL Tuesday hosting list, contact him if you have an idea for one.

T-SQL Tuesday #32 – A Day in the Life (#tsql2sday)

This month’s T-SQL Tuesday is brought to us by one of the more statistically important data professionals out there, Erin Stellato(b|t).  The assignment is simple enough: Record your day on either Wednesday 7-11 or Thursday 7-12.  Easy enough, but leave it to me to screw it up, right?  Anyway, I was travelling on Thursday (heading down to Albuquerque to present at the local 505 user group), so I cheated and recorded my activities for Tuesday, 7-10.  It was an average enough day, so a good cross section for the series.  So, without further ado:

  • 6:55 AM – 7:10 AM Check on I/O trace – I can work remotely, so commonly when I get up I’ll check in on things just to make sure everything is ok.   This time, I had set a profiler trace to run overnight to give me some info on I/O issues we were having.  All I did here was log in and make sure it ran, I was going to drill in to the detail later.
  • 8:15 AM – 8:30 AM Review alerts from previous night – Still at home, but I did a quick glance over the alerts from last night just to make sure there weren’t any fires.  Everything was cool, so I hit the road to get in to the office.
  • 9:00 AM – 9:20 AM Arrive in the office, get my day started – This is administrative time, responding to emails and getting my tea (I hate coffee.  There.), settling in for the day.  This bleeds into the next part….
  • 9:20 AM – 9:40 AM General maintenance work – This was basically some file clean up and responding to some of the alerts I saw from over night.  Nothing major.
  • 9:40 AM – 10:40 AM I/O research – So we’re having an I/O issue in our lower environments.  We’ve got a LUN on one of our instances that is getting slammed.  This was what I was using my trace to research and discovered that a whole lot of work was going through tempdb.  I spent this time reviewing my data and then talking with the relevant developers and QA engineers.  Once I had my info collected, I reported out to the systems team, DBA team, and the dev guys.  Unfortunately, this is a situation where not much can be done.  There really weren’t any alternatives for spreading out the I/O load (at least none worth pursuing for a lower environment system), and the proper way to fix it was to have the dev team file things away for code fixes.  Still, with the info I collected we could come back to that with a better strategy.
  • 10:40 AM – 11:00 AM TempDB cleanup – Got some additional space for one of our dev instances to allow us more tempdb space, so I cleaned that up and arranged the files.
  • 11:00 AM – 12:00 PM CLR Research – So I’ve never really done much CLR work.  We had a legacy sproc that was reporting incorrectly, so I was doing some research as to why.  I really didn’t have much luck, but since I was used to working with WMI in Powershell, I figured I’d try to rewrite the CLR logic using that.
  • 12:00 PM – 1:00 PM Lunch – Every Tuesday we go to this awesome thai place down the road.  Basil chicken for the win!
  • 1:00 PM – 5:00 PM CLR Research – I basically spent the rest of my day fighting with CLR.  Keep in mind, I’m a DBA with a sys admin background.  I’ve dabbled in .Net code, but I’m very rusty and my code is less than elegant.  However, it was a good learning experience, and taught me several things:
    1. CLR only supports a limited set of libraries.
    2. The System.Management libraries apparently have a lot of dependencies on forms and drawing (I have no idea why).
    3. CLR is a real pain to debug, depending on local security policies.

Honestly, this was one of my lighter days.  Probably because we had just come out of a holiday week where we had locked systems down and allowed minimal change, meaning we also didn’t have much breaking.  This is what makes the job enjoyable: not every day is a fire drill, and the ones that aren’t afford me an opportunity to experiment and learn.  Because of this day, I’m a whole lot more comfortable with the concepts of CLR (even though I still haven’t built a successful CLR function) and it’s made me a stronger DBA.

Thanks to Erin for hosting T-SQL Tuesday #32!   Make sure you visit the master post for other great blogs!

T-SQL Tuesday #31: Logtime (#tsql2sday)

This month’s T-SQL Tuesday is hosted by Aaron Nelson(b|t), asking participants to talk a little about logging. Now there are plenty of technical operations and other logging tricks we could talk about, but Aaron’s request is a broad one and we have the opportunity to cover other forms of logging. So while I could talk about writing out text logs of Powershell actions or parsing and maintaining the various logs within SQL Server, I want to talk about a different kind of logging: time tracking my day.

Time tracking is one of those annoying administrative aspects of our job, much like documentation and project planning. While these tasks can be overdone, they’re all necessary for making our job more efficient and manageable (when done right, of course). The thing about time tracking is it’s hard to really understand where we spend our day when we’re in the trenches fighting fires. When we look back on our week, can we say that we mostly did client work? Administrative tasks? Maintenance jobs? And when we go to our boss and say “Hey, I can’t get this project done because I’m swamped doing ______”, how do we prove our case?

Now the problem with time tracking is it can be really tedious. We’re very much limited by the tools we have. This is why I was thrilled when I was reading a post by Kendra Little(b|t) about a great tool called Toggl. This handy application is a blessing. It has a simple interface (web, mobile application, or desktop client) with a timer, where a single click can either start a new task or continue something else you were working on. Because I can create time entries with a click or two (maybe some typing if I’m creating a new entry), tracking my time becomes less tedious and doesn’t intrude much on my day to day work.

Toggl Desktop

The interface also makes it easy to group tasks into projects. Because the interface is so simple, I can create a project on the fly. Mostly, it’s just a description in the application, but it helps me group the individual tasks together. Personally, I have four projects: Client Work, Foundation Projects (internal work), Maintenance (break/fix activities), and Administrative. What you might have will vary, but this break down works well for me. Assigning a new task to one of these projects, or creating a new project, takes only a click or two.

Of course, we all know that while this may be easy to create at entry time, there’s always going to be some cleanup. Whether a misspelled project, an erroneous time entry, or just forgetting to enter a record, there will be times we’ll need to go back and correct things. Again, the application allows for simple lookups of our records as well as easy modification. I can use this to either update or add new records when I’ve missed something.

The beauty of this tool, however, is in the reporting. By including some simple bar graphs, pie charts, and date ranges, I can quickly see how my time breaks down. To me, this is the most important part, because then I can see if I really am spending too much time on meetings and other administrivia (I try to keep it to about 25% of my total time). Also, if my maintenance time is high, it probably means that we have some systemic issues that should be addressed. The nice thing is I can show interested parties how my time is being utilized at a glance with minimal interaction on my part.

The biggest benefit of this tool is that it tremendously reduces the effort I need for time tracking. I would easily spend one to two hours a week trying to track my time and log it properly. While this may not seem like much, it adds up quickly once you include time spent in meetings and similar administrative work. Also, let’s be honest, that’s time I could spend writing a new script or solving a cool problem. With Toggl, I now only need about 15 minutes a week for time tracking.

Toggl has been a boon to me, but I wouldn’t have it without reading some of the great SQL bloggers out there. Thanks again to Kendra for cluing me in to this application, and thanks to Aaron for giving me the opportunity to write about it. I’m definitely going to check out some of the other T-SQL Tuesday posts in this series to see what other time-saving tips, tools, and tricks I can leverage to make my life easier.

Because, in the end, isn’t it all about making my life (and yours) easier?

T-SQL Tuesday #29 2012 Party: Dancing on the FileTable (#tsql2sday)

For this month’s edition of T-SQL Tuesday, Nigel Sammy(b|t) asks us to talk a bit about a feature of SQL 2012 that we’re excited about. Now, while my most favoritist SQL 2012 feature is the ability to install on Server Core, I’ve already written about that and I want to talk about something that hasn’t gotten quite so much publicity. Enter the File Table, new functionality that exposes more of the Windows file system to SQL Server. With it, we get some additional tools for file handling tasks that previously required other applications or workarounds like xp_cmdshell.

A File Table is a special type of table in SQL 2012. It’s got a fixed set of columns and points at a directory managed by SQL’s FILESTREAM technology. If you do a SELECT from the File Table, you’ll get a listing of all the files within that directory. What’s cool is that “inserting” rows into this table is as simple as copying files into the directory. Once a file is in there, we can use SQL to do other operations, like copying files between two different File Tables, or cleaning up old files based on the file dates or archive flags.

Setting up a File Table doesn’t take much, but since it uses FILESTREAM it can be a little tricky. The first thing you need to do is actually enable FILESTREAM for your SQL Server instance. To do so, just follow the steps from MSDN. For the purposes of this, I’m going to set up my share name as my instance name, SQL2012. This means that to get to my FILESTREAM objects and my File Tables, I’m going to be using the UNC path of \\SHION\SQL2012 for my laptop.
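The instance-level piece of that configuration can also be done from T-SQL (the Windows-level share itself still has to be enabled through SQL Server Configuration Manager). A minimal sketch:

--Enable FILESTREAM for both T-SQL and Win32 streaming access (2 = full access)
EXEC sp_configure 'filestream access level', 2
RECONFIGURE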

Now that we’ve got FILESTREAM going, we can go ahead and create our File Table. The first step is to set our database options to allow non-transactional access and a default virtual directory for the FILESTREAM.

--Set database specific options for FILESTREAM
ALTER DATABASE demoFileStream
SET FILESTREAM (non_transacted_access=FULL,directory_name=N'FSDemo')

Next, we need to add a FILESTREAM filegroup to the database, which defines the physical directory where our file tables will exist. Note, this directory will be a physical directory, but SQL Server must create it so it cannot already exist.

--Filegroup for the filestream
ALTER DATABASE demoFileStream
ADD FILEGROUP fgFileStream CONTAINS FILESTREAM

--Directory for the filestream filegroup
ALTER DATABASE demoFileStream
ADD FILE (NAME=flFileStream,FILENAME=N'C:\FileStream')
TO FILEGROUP fgFileStream

Finally, we can now create our file table with a simple CREATE TABLE statement. Note, we won’t declare any columns as those are pre-defined for the file table structure.

CREATE TABLE FT_Test
AS FILETABLE

And we’re done! We can now browse to the following UNC filepath: \\SHION\sql2012\FSDemo\FT_Test. This directory is the physical location of our file table. For now, I’ll put a couple test files in there.

At this point, we’ll just run a simple select and, bam! There’s a listing of our files.

SELECT
  name,
  file_type,
  creation_time,
  last_write_time
FROM FT_TEST


Voila! A happy little file table, ready for our use. I’ll be honest, there’s some limited use here and having file tables doesn’t solve a lot of problems. It’s very useful, though, for certain use cases and can be a very powerful tool in your tool kit. If you’re trying to think of situations where you need it, consider times where you need to work with files on the file system:

  • Managing backup files: If you’re like me, you write your backups to a unique filename for each backup. This means I usually need some other tool to clean up my old files. With File Tables, I can manage that clean up completely within SQL (a quick sketch follows this list).
  • Restores: If you’ve got a lot of transaction logs to restore and you want an easy way to parse through files to find the ones you need, you can now do that using File Tables.
  • Basic ETL: If you need a simple ETL solution, File Tables can give you some additional leverage so you can keep your entire solution within SQL.
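As a rough sketch of that first scenario (using the FT_Test table from above and assuming the backups land in its directory), deleting rows from a File Table deletes the underlying files:

--Clean up backup files more than a week old; removing the row removes the file
DELETE FROM FT_Test
WHERE file_type = 'bak'
AND creation_time < DATEADD(DAY, -7, SYSDATETIMEOFFSET())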

Thanks again to Nigel Sammy for hosting this month’s T-SQL Tuesday. There’s so much new with SQL 2012, it’s good for Nigel to give the community a chance to share some of the new features that are now out. Enjoy the rest of T-SQL Tuesday #29!

T-SQL Tuesday #28 (#tsql2sday): Jacks and Aces

DBA. I’ll be honest, there are times I *hate* that title. People toss it around without really understanding what it means. Heck, how many of us specialize in backups, monitoring, and high availability solutions, only to get a call from a recruiter who has an “immediate need” for an expert in SSRS and cube design? Unfortunately, this is a direct result of non-database folks not really understanding what we do. They simply see “database” on our resume and figure that if a database is involved, we know how to handle it.

We’re often considered a Jack-of-all-trades, the IT handyman. This has evolved from the roles we have to play in our careers and how so many of us “fell” into a database career. The rule of thumb is that the smaller the shop, the greater our breadth of knowledge needs to be. While I’ve been a SQL Server administrator for over ten years now, my job responsibilities have required me to learn something about networking, Windows server administration, .Net development, report writing, and Oracle administration (amongst many, many other topics). Because so many things touch a database, we’re expected to understand all of these different pieces as well as the database itself.

The problem with all this is that it’s more than any one person can really know. With SQL Server becoming more and more complex (a good thing, by the way), it’s hard enough for us to keep up with that platform alone. Since it’s rarely an option to tell our boss “No, I can’t do that”, there are a couple of things I’ve found that help me stay ahead of the game:

  • Building your personal network. This is more than just the SQL Server community (fantastic as it is). Sure, attending user group meetings has helped me find experts in areas of SQL Server I’m not as knowledgeable in. But you’ll need more than that. I have Oracle DBA friends, hardware geek friends, sysadmin friends…..you get the idea. By building out this network, I always have someone I can go to if I’m out of my depth.
  • Stay educated. Sure, we spend a lot of time learning about SQL Server, but remember why we get into this business in the first place. Technology is cool, so keep learning about it. Pay attention to trends and tech so that when your boss comes to you about something, you won’t be caught by surprise. And if you *are* surprised, don’t sweat it, but make some time that day to read up on whatever they were talking about.
  • Write it down. When you do something, document it! So many people bemoan documentation, but the cold hard truth is you’re going to forget something when your focus gets completely shifted the next day. Being a small shop DBA means you’re going to get bounced around on a lot of different things, so you need to record what you do so that you don’t have to relearn it later.

Being a small shop DBA can be a tough gig, but it’s where most of us cut our teeth. Hindsight being 20/20, there’s a lot I’d do differently. Fortunately, Argenis Fernandez (b|t) has given us this great T-SQL Tuesday topic so we can share our experiences with our SQL family and help others learn from them.

The biggest lesson I would take out of all of this is that, while our job requires us to generalize most of the time, we can really only advance our careers when we specialize. If you’re simply treading water at your current job, pick some part of the SQL Server platform that interests you and focus on it. Once you start becoming an expert at something, opportunities will open up for you along with more interesting work and learning. That will start you along the path of moving beyond being a jack of all trades and becoming an ace in the database deck.

T-SQL Tuesday(#tsql2sday) #26 – Lazy Restore Commands

I missed out on T-SQL Tuesday #25 (Tips ‘n Tricks) on the first go around, so I’m glad that with #26 I get another crack at it. Mine is pretty simple, but has saved me a lot of time when doing database restores. Whether I’m getting a database back online or (more likely) moving a database to another instance, I don’t want to be bothered with digging around to get my restore going.

As a DBA, I am two things:

  1. I hate using the GUI. Clicking through countless windows is for the birds and I would much rather type three commands than click ‘Next’ three times.
  2. I’m lazy. So those three commands? I write them once and save ‘em to a file so I can run them over and over and over….etc. Automation FTW!

So, restores. We all do them and they can get irritating. I know one of the most annoying things is relocating the files with MOVE when you’re bringing the database up on a different server. This is why a lot of folks use the GUI, because SQL Server can handle that behind the scenes and, if you really want, you can script it out. Well I’ve got another way.

You know you can use RESTORE FILELISTONLY to get the list of database files within a backup. Why not take that a step further and capture it in a table? Then, once it’s in a table, we can use a little bit of SQL coding to give us a template to generate our restore command. Take a look:

--Turn off annoying rowcount
SET NOCOUNT ON

--Some variables
declare @v_restore varchar(1000)
declare @v_backup varchar(1000)
declare @v_sql varchar(max)
declare @datadir varchar(1000)
declare @logdir varchar(1000)

--Set backup file location, database name
set @v_backup = 'C:\demo\test.bak'
set @v_restore='Test_demo'
set @datadir = 'C:\Restore\Data'
set @logdir = 'C:\Restore\Log'

--Storage table

declare @restorelist table
(LogicalName nvarchar(128)
,PhysicalName nvarchar(260)
,Type char(1)
,FileGroupName nvarchar(128)
,Size numeric(20,0)
,MaxSize numeric(20,0)
,Fileid tinyint
,CreateLSN numeric(25,0)
,DropLSN numeric(25, 0)
,UniqueID uniqueidentifier
,ReadOnlyLSN numeric(25,0)
,ReadWriteLSN numeric(25,0)
,BackupSizeInBytes bigint
,SourceBlockSize int
,FileGroupId int
,LogGroupGUID uniqueidentifier
,DifferentialBaseLSN numeric(25,0)
,DifferentialBaseGUID uniqueidentifier
,IsReadOnly bit
,IsPresent bit
,TDEThumbprint varchar(100)) --Be careful, this last field (TDEThumbprint) isn't in 2k5

--Capture the file list
insert into @restorelist
exec('RESTORE FILELISTONLY FROM DISK='''+@v_backup+'''')

--Build your restore command
select @v_sql = 'RESTORE DATABASE '+@v_restore+' '+char(10)+'FROM DISK=''' +@v_backup+ ''''+ CHAR(10)+'WITH '
select @v_sql = coalesce(@v_sql,'')+'MOVE '''+logicalname +
''' TO '''+CASE when type='L' then @logdir else @datadir end +'\'+ right(physicalname,charindex('\',reverse(physicalname))-1)+''',' + char(10)
from @restorelist

--display the restore command, trim trailing comma and char(10)
print substring(@v_sql,1,LEN(@v_sql)-2)

So that’s a lot of stuff! Well, not really. If you look at it, most of the script is taken up defining the file list table. Change your variables at the top, run the script, and copy the output into another window…BAM! There’s your restore command. And if it’s not perfect, you only need to make one or two changes to the output to clean it up.
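For example, with the variables above and a backup containing two hypothetical logical files named Test and Test_log, the printed command comes out looking something like this:

RESTORE DATABASE Test_demo 
FROM DISK='C:\demo\test.bak'
WITH MOVE 'Test' TO 'C:\Restore\Data\Test.mdf',
MOVE 'Test_log' TO 'C:\Restore\Log\Test_log.ldf'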

This isn’t the only command output you can capture, either. The biggest lesson from this trick is to think about other system stored procedures and commands and look for opportunities to grab their outputs to make your life easier. Help yourself look like a hero and get more sleep by taking advantage of what’s already in SQL.
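As one more sketch of that pattern (not from the restore script above), DBCC SQLPERF(LOGSPACE) output can be captured the same way and then queried like any other table:

--Capture log space usage for every database on the instance
declare @logspace table
([Database Name] sysname
,[Log Size (MB)] float
,[Log Space Used (%)] float
,[Status] int)

insert into @logspace
exec('DBCC SQLPERF(LOGSPACE)')

--Flag any database whose log is more than half full
select [Database Name],[Log Size (MB)],[Log Space Used (%)]
from @logspace
where [Log Space Used (%)] > 50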

Thanks to Dave Howard(b|t) for offering folks a second chance to share for T-SQL #26!