Art of the DBA

Scripts

Slipstreamin’ Away

It’s nothing new: I love anything that makes less work for me.  One thing that has been a great contributor to this has been slipstream installs.  While that’s a bit of a fancy name, basically it’s an install of SQL Server where the service packs and cumulative updates are rolled up into one install package.  This saves the tedious chore of constant install after install to get a new SQL Server installation up to a patch level consistent with the rest of your enterprise.

Now, the problem with slipstream installs is that they’re a pain in the butt to create, at least prior to SQL 2012.  Not difficult, but tedious.  You have to copy some files, merge some directories, and edit some config files.  It’s nothing that can’t be done, but it’s very repetitive.  To boot, if something isn’t copied or configured right, the whole package won’t work and you won’t know until you attempt the install (and it fails).

Enter Powershell.

Building a slipstream is no different than any other task that we should automate.  After all, we’re doing the same actions every time; the only things that might change are the files we use to build our slipstream.  What I’ve done is put together a script that does all this for me.  As with any script, there are a few rules to how you use it:

  • The script was designed for building SQL 2008 and 2008 R2 slipstreams.  I tried it with 2005, but that process is different enough that this script won’t work.
  • RTMSource and output are the two mandatory parameters.  The SPSource and CUSource parameters will be skipped if left empty.  Basically, if you are making just a SQL 2008 R2 SP2 slipstream, you would only need to supply the service pack files.
  • The script expects the base install (the RTM media) to be un-extracted and the service packs or cumulative updates in their .exe form.  The script will extract the additional files from the .exe into the slipstream package.
  • I built the script to work along my organizational thought process, which means I put each set of files in their own subdirectory.  This shouldn’t be a big deal, but be aware that if you point the script at a location, it won’t just pull out the files it needs, it will try to use every .exe file in the subdirectory.

Other than that, here’s the script:

<#
.SYNOPSIS
   Builds a SQL Server slipstream install.
.DESCRIPTION
   This script will take the various components of a SQL Server install (RTM binaries, SPs, CUs) and assemble
	 them to create a slipstream install.  To use this script, you will need the unextracted RTM binaries (.iso
	 or DVD media) and the executables for the SP and CUs you want to use.
	 Mike Fal (http://www.mikefal.net) 2-28-2013
.PARAMETER <paramName>
   RTMSource - File path for the RTM source files
	 SPSource - File path for the Service Pack executables (if no value is passed, no SP will be added)
	 CUSource - File path for the Cumulative update executables (if no value is passed, no CU will be added)
	 output - filepath where slipstream will be written, directory will be created if it doesn't exist.
.EXAMPLE
   .\Build-Slipstream.ps1 -RTMSource "C:\Users\mfal\Downloads\en_sql_server_2008_r2_developer_x86_x64_ia64_dvd_522665"
	 												-SPSource "C:\Users\mfal\Downloads\2008R2SP1"
													-CUSource "C:\Users\mfal\Downloads\2008R2SP1CU2"
													-output "C:\SQL2008SP1CU2_Slipstream"
#>

param([parameter(Mandatory=$true)][string] $RTMSource,
			[parameter(Mandatory=$false)][string] $SPSource,
      [parameter(Mandatory=$false)][string] $CUSource,
      [parameter(Mandatory=$true)][string] $output)

#function to extract and copy SPs and CUs
function Modify-Source{
	param($TYPE="PCU",$WORKDIR,$PATCHDIR)
	$PATCHFILES=get-childitem "$PATCHDIR\*" -include *.exe
	if (!(test-path("$WORKDIR\$TYPE"))) {mkdir $WORKDIR\$TYPE }
	if (!(test-path("$WORKDIR\$TYPE"))) {Throw "Error: Patch file destination not available"}

	set-location $PATCHDIR

	foreach($FILE in $PATCHFILES)
		{

			iex "./$($FILE.basename)$($FILE.extension) /x:""$WORKDIR\$TYPE"" /q"
			"Waiting for $FILE to extract..."
			while (@(Get-Process $FILE.Name.Replace(".exe","") -ErrorAction SilentlyContinue).Count -ne 0)
			{
			Start-Sleep 1
			}
		}
  "Copying $TYPE files..."
	robocopy "$WORKDIR\$TYPE" "$WORKDIR" setup.exe /ndl /nfl /njh /njs
	robocopy "$WORKDIR\$TYPE" "$WORKDIR" setup.rll /ndl /nfl /njh /njs

if (test-path("$WORKDIR\$TYPE\x86")) {robocopy "$WORKDIR\$TYPE\x86" "$WORKDIR\x86" /XF Microsoft.SQL.Chainer.PackageData.dll /ndl /nfl}
if (test-path("$WORKDIR\$TYPE\x64")) {robocopy "$WORKDIR\$TYPE\x64" "$WORKDIR\x64" /XF Microsoft.SQL.Chainer.PackageData.dll /ndl /nfl}
if (test-path("$WORKDIR\$TYPE\ia64")) {robocopy "$WORKDIR\$TYPE\ia64" "$WORKDIR\ia64" /XF Microsoft.SQL.Chainer.PackageData.dll /ndl /nfl}

set-location $WORKDIR
}

#Test path locations for validity

if (!(test-path($RTMSource + "\setup.exe"))) {Throw "SQL RTM Source does not exist!"}
if ($SPSource -and !(test-path($SPSource))) {Throw "Invalid Service Pack file location!"}
if ($CUSource -and !(test-path($CUSource))) {Throw "Invalid Cumulative Update file location!"}
if (!(test-path($output))) {mkdir $output}
if (!(test-path($output))) {Throw "Unable to create build directory!"}

#begin merge process, RTM
"Copying RTM..."
robocopy "$RTMSource" "$output" /s /ndl /nfl
$startloc=pwd
set-location $output

Copy-Item ".\x86\DefaultSetup.ini" ".\"

#merge Service Pack
if($SPSource)
{
	Modify-Source "PCU" $output $SPSource
	"`n`nPCUSOURCE=`".\PCU`" `n" |Out-File "DefaultSetup.ini" -Append
}

#merge Cumulative Update
if($CUSource)
{
	Modify-Source "CU" $output $CUSource
	"`n`nCUSOURCE=`".\CU`" `n" |Out-File "DefaultSetup.ini" -Append
}

#cleanup
copy-item "defaultsetup.ini" .\ia64 -force
copy-item "defaultsetup.ini" .\x64 -force
copy-item "defaultsetup.ini" .\x86 -force

Set-Location $startloc
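
If you need a starting point, a call for the SP-only scenario described above might look something like this (the paths are placeholders for wherever you keep your install media):

.\Build-Slipstream.ps1 -RTMSource "D:\Install\SQL2008R2_RTM" `
                       -SPSource "D:\Install\SQL2008R2_SP2" `
                       -output "D:\Install\SQL2008R2_SP2_Slipstream"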

Powershell Shredding

I’ve been playing around a bit recently with Powershell and XML. It’s one of those experiments where I’m not sure what the immediate benefit is, but it certainly is interesting seeing what kind of functionality we have available to us as data folks. I’m going to see what more I can coax out of it, but I wanted to share with you what I’ve learned so far.

First off, understand that I’m not that strong when it comes to XML. I get what it is, I understand the basic structure, but wrangling it isn’t something I’ve had to do a lot of. As a result, I’m still very much a newbie with XPath and XQuery. I understand nodes and properties, but then it starts to get muddy. Just a disclaimer before we get too far into this.

.NET, and by extension Powershell, has an XML data type.  This is useful because query plans are XML documents, whether we save them off to a file or pull them out of the plan cache. So it’s a fairly simple matter to suck a query plan into an XML variable:

[xml]$plan=(gc SomeSQLQuery.sqlplan)

From here, we can start browsing through our plan using the dot notation to parse the plan. The query plan itself is going to be found under the ShowPlanXML node. Under that, there’s a fairly complex layout that you can really dig into by looking at the full schema documentation. Suffice to say, if we want to see the SQL text from the query, we’d need to look at:

$plan.ShowPlanXML.BatchSequence.batch.Statements.StmtSimple.StatementText

That’s a lot of drilldown! What’s worse is, if we wanted to start finding specific operators, we would quickly get lost in a recursive arrangement of RelOp nodes and actual operators. So if we want to extract something useful out of the XML, we need to leverage XPath using the .SelectNodes() method of the XML data. The only problem here is that, for some reason, the namespace that is contained within the plan confuses any calls using .SelectNodes(). To get around this, I basically ripped out the namespace so that the defaults can be used:

[xml]$plan=(gc SomeSQLQuery.sqlplan) -replace 'xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan"'

At this point, I can now start using XPath to analyze my query plan. So if I wanted to pull up all my table scans:

$plan.SelectNodes('//TableScan/Object') | ft

Or, if I wanted to get all my table or index scans:

$plan.SelectNodes('//*[contains(name(),"Scan")]/Object') | ft

And so on and so forth.
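
Putting those pieces together, here’s a minimal sketch that lists every scan operator along with the object being scanned and the optimizer’s row estimate. It assumes a plan saved as SomeSQLQuery.sqlplan, and the PhysicalOp, EstimateRows, and Table names come straight from the showplan schema:

#Load the plan and strip the namespace so the default XPath context works
[xml]$plan=(gc SomeSQLQuery.sqlplan) -replace 'xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan"'

#Find every RelOp whose operator is some flavor of scan and report on it
$plan.SelectNodes('//RelOp[*[contains(name(),"Scan")]]') |
	Select-Object PhysicalOp, EstimateRows, @{Name='Object';Expression={$_.SelectSingleNode('./*/Object').Table}} |
	Format-Table -AutoSize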

Now, what does this get me? At this point, I’m not sure. I started down this road after seeing Jason Strate’s(@StrateSQL) presentation on shredding the plan cache with T-SQL. My thought process was that this might be an easier way to dissect the plan cache for useful information. In a way, I was right, because it was a little easier to grasp, but it also seems like it’s the long way around the horn to get at that information. I’ll continue to poke at it and see what I can coax out of it.

Finding Orphan Logins

Cleaning up old security info is always something that’s hard to stay on top of.  Employees who leave or change responsibilities can make it difficult to track which logins need to remain in our instances.  The best way to manage this is by using Active Directory groups and managing access from the group level instead of by explicit login.  However, because of the communication and organization that’s required to set up these groups in the first place, it’s not uncommon to find ourselves managing explicit logins in our servers.

The unfortunate side effect is this typically results in orphaned logins within our SQL Servers.  A question on how to manage this came up recently on dba.stackexchange.com and, while I couldn’t find a way to answer the specific question (because it was focused around finding orphaned logins in SQL 2000), I was able to come up with a handy utility script for 2005+ that makes use of xp_logininfo.  It’s not a complex script and provides a list of the actual drop login statements you can execute as part of your cleanup.

declare @user sysname
declare @domain varchar(100)

--store total output
declare @return table
(login_name sysname
,sql_statement varchar(max)
,error_msg varchar(100))

--Store output of xp_logininfo
declare @junk table
(a_name sysname NULL
,a_type char(8) NULL
,priv char(9) NULL
,m_login sysname NULL
,p_path sysname NULL)

--Declare domain you are checking
set @domain = 'FOO'

declare recscan cursor for
select name from sys.server_principals
where type = 'U' and name like @domain+'%'

open recscan
fetch next from recscan into @user

while @@fetch_status = 0
begin
	begin try
		insert into @junk
		exec xp_logininfo @user
	end try
	begin catch
		--Error on sproc because login does not exist
		insert into @return
		values(@user,'drop login ['+convert(varchar(128),@user)+']',ERROR_MESSAGE())
	end catch

	fetch next from recscan into @user
end

close recscan
deallocate recscan

select * from @return

A couple notes on this script:

  • It currently doesn’t handle removing the logins from individual databases.  I have another script I use to do that and will be working on combining the two.
  • I don’t automatically execute the drop login because I’m paranoid.  :)  By providing a list, I can review the logins, along with the error message output, and verify that I want to remove them (one way to handle that review from Powershell is sketched below).
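
Here’s a hedged example of that review step.  The instance name and script file name are placeholders, and it assumes the T-SQL above is saved off (with @domain set for your environment) and that Invoke-Sqlcmd is available:

#Run the detection script and review the results before dropping anything
$orphans = Invoke-Sqlcmd -ServerInstance "MYSERVER" -InputFile ".\Find-OrphanLogins.sql"
$orphans | Format-Table login_name, error_msg -AutoSize

#Once the list has been verified, the generated statements could be run one at a time
#$orphans | ForEach-Object { Invoke-Sqlcmd -ServerInstance "MYSERVER" -Query $_.sql_statement }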

Hopefully this little utility script can help you with login management in your environments.

Some Backup Info

So I’ve been pretty bad about blogging lately.  I don’t buy in much to excuses and so I have none to offer.  Just got away from me (something I’ll talk about shortly when I review my 2012 goals).  Anyway, let’s talk about something a little more useful to you, the reader.

A couple weeks ago, I gave a short presentation on MSDB’s BackupSet.  It was a fun little presentation and gave me a chance to share with the Boulder SQL user group one of my favorite tables.  Why one of my favorites?  Simply because there’s so much useful information packed in there, particularly when it comes to the critical job of monitoring your backups.  If you haven’t looked at it, I highly recommend that you check it out.

While developing the presentation, I put together a handy query that I wanted to share.  It’s very useful for showing you not only when your latest backups were taken, but also the size and location of those backups:

select
  bs.database_name
  ,bs.backup_finish_date
  ,bs.backup_size/1024.0/1024.0 [backup_size_mb]
  ,bs.compressed_backup_size/1024.0/1024.0 [compressed_size_mb]
  ,datediff(ss,backup_start_date,backup_finish_date) [backup_time]
  ,((bs.backup_size/1024.0/1024.0)/(datediff(ss,backup_start_date,backup_finish_date)+1)) [mb_per_second]
  ,1.0-(bs.compressed_backup_size*1.0/bs.backup_size) [compression_ratio]
  ,bm.physical_device_name
from
  msdb.dbo.backupset bs
  join msdb.dbo.backupmediafamily bm on (bs.media_set_id = bm.media_set_id)
  join (select database_name
          ,max(backup_finish_date) last_backup
        from msdb.dbo.backupset
        where type = 'D'
        group by database_name) lb on (bs.database_name = lb.database_name and bs.backup_finish_date = lb.last_backup)
where type = 'D'
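
Since this is just a query against msdb, it’s also easy to point it at more than one instance from Powershell.  Here’s a hedged example; the instance names are placeholders, and it assumes the query above is saved as LastBackups.sql with the SQLPS module loaded for Invoke-Sqlcmd:

$instances = "SQLPROD01","SQLPROD02"

foreach($inst in $instances)
{
	#Run the backup report against each instance and tag the rows with where they came from
	Invoke-Sqlcmd -ServerInstance $inst -InputFile ".\LastBackups.sql" |
		Select-Object @{Name="instance";Expression={$inst}}, database_name, backup_finish_date, backup_size_mb, physical_device_name
}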

Improved Laziness

I’m really digging Powershell the more that I work with it.  It’s nice to have so much rich functionality within the operating system and not have to fuss with compilers or syntax.  One of my more recent projects was automated restore testing for our servers.  Now, while I won’t be showing that to you (yet), working with that script got me more time with the SMO.

The SMO is incredibly powerful.  You have so much functionality that it can be a little overwhelming.  There’s also a lot of stuff you can do in the SMO that you can also do in standard T-SQL.  As with anything, it’s a matter of using the right tool.  I’m finding that what makes Powershell the right tool is its ability to work outside of the database, avoiding some of the restrictions placed (rightly so) on the engine and how it interacts with the OS.  It makes Powershell ideal for things like file manipulation and talking with the OS.  It also gave me a slightly cleaner way to generate restore scripts.

If you have seen my previous restore building script, it was handy that it worked in T-SQL, but because it had no visibility into the file system, I had no real way to logically combine fulls, diffs, and transaction logs to build a single script.  Sure, I could probably assemble something looking at msdb.dbo.backupset, but that only works if I’ve got access to the server the backups were done on.  By leveraging Powershell and the SMO, I can now read the file system, interrogate each backup file to see that it fits in the LSN sequence, and build out a SQL script.

The steps are simple:

  1. Point the script at a target directory where all the backup files live and a SQL instance that has access to the file directory (typically, this is the instance you’ll run the restore on).  You can also declare a database name (if you want something different) and an output directory (which will otherwise default to your My Documents folder).
  2. The script will then use the SMO to read in the backup files, coordinating the LSNs to order them properly for the restore.
  3. The entire restore script will be written to the output directory.

Some caveats:

  • The script identifies files based on extension (.bak for Full, .dff for Differential, and .trn for Transaction Log).  This is based on how I write out my backup files, so this is the key piece of logic that will need to be altered to make this more generic.
  • The full restore will “move” the files to their original location.  If restoring to a different server, you’ll want to alter these file locations.

Now, I’m by no means a Powershell guru, but I do like making my life easier through scripting.  There’s probably more elegant ways to do this, but it’s clean enough for my purposes.  Hopefully it makes your life easier too!

(Feel free to download the script here.)

<#
.SYNOPSIS
Builds Database Restore script, coordinating Full, Diff, and Log backups.
.DESCRIPTION
Generates a database restore .sql script for restoring a database.
Mike Fal (http://www.mikefal.net) 2012-07-09
.PARAMETER
String
    dir(REQUIRED) - Target directory where backup files reside
    server(REQUIRED) - Target server used by SMO to build the restore script.  Should be the server you want to restore to.
    database - Database name to restore. If blank, database name from the backup will be used.
    outputdir - Output directory for script.  If empty, user's My Documents will be used.
.EXAMPLE
    .\Build-Restore.ps1 -dir "C:\database_backups" -server "localhost"
.NOTES
    Script acquires files based on extension:
        .bak = Full
        .dff = Differential
        .trn = Transaction log
#>
param([parameter(Mandatory=$true)][string] $dir,
      [parameter(Mandatory=$true)][string] $server,
      [string] $database,
      [string] $outputdir=([Environment]::GetFolderPath("MyDocuments")))

#load assemblies
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null

#processing functions
function Get-Restore{
    Param($db,
          $backupfile)

    $rs=new-object("Microsoft.SqlServer.Management.Smo.Restore")
    $rs.Devices.AddDevice($backupfile.FullName, "File")
    $rs.Database=$db
    $rs.NoRecovery=$true
    $rs.Action="Database"

    return $rs
}#Get-Restore

function Get-Header{
    Param($rs,$srv)

    $dt=$rs.ReadBackupHeader($srv)
    return $dt.Rows[0]
}#Get-Header

#START SCRIPT
$sqlout = @()
$smosrv = new-object ('Microsoft.SqlServer.Management.Smo.Server') $server

$full = gci $dir | where {$_.name -like "*.bak"} | Sort-Object LastWriteTime -desc | Select-Object -first 1
$diff = gci $dir | where {$_.name -like "*.dff"} | sort-object LastWriteTime -desc | select-object -first 1
$trns = gci $dir | where {$_.name -like "*.trn"} | sort-object LastWriteTime

#initialize and process full backup
$restore=Get-Restore $database $full
$hfull=Get-Header $restore $smosrv
if($database.Length -eq 0)
{
    $database=$hfull.DatabaseName
    $restore.Database=$database
}

$LSNCheck=$hfull.FirstLSN
$files=$restore.ReadFileList($smosrv)
foreach($file in $files){
        $newfile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile") ($file.LogicalName,$file.PhysicalName)
        $restore.RelocateFiles.Add($newfile) | out-null
}

$sqlout+="/****************************************************"
$sqlout+="Restore Database Script Generated $(Get-Date)"
$sqlout+="Database: "+$database
$sqlout+="****************************************************/"
$sqlout+="--FULL RESTORE"
$sqlout+=$restore.Script($smosrv)

#process differential backups
if($diff -ne $null){
    $restore=Get-Restore $database $diff
    $hdiff=Get-Header $restore $smosrv

    if($hdiff.DifferentialBaseLSN -eq $LSNCheck){
        $sqlout+="--DIFF RESTORE"
        $sqlout+=$restore.Script($smosrv)
        $LSNCheck = $hdiff.LastLSN
    }
    else{
        $LSNCheck = $hfull.LastLSN
    }
}

#process transaction log backups
if($trns -ne $null){
    $sqlout+="--TRN LOG RESTORE"

    foreach ($trn in $trns){
        $restore=Get-Restore $database $trn
        $htrn=Get-Header $restore $smosrv
        if($htrn.FirstLSN -le $LSNCheck -and $htrn.LastLSN -ge $LSNCheck){
            $sqlout+=$restore.Script($smosrv)
            $LSNCheck = $htrn.LastLSN
        }
    }
}

#Write final recovery line
$sqlout+="`r`n"
$sqlout+="--COMPLETE RESTORE/ONLINE DB"
$sqlout+="RESTORE DATABASE "+$database+" WITH RECOVERY"

#output script file
$sqlout | Out-File "$outputdir\restore_$database.sql"
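
Once the script file is written out, you’ll still want to review it before you run it.  A rough end-to-end example, where the paths and database name are placeholders and Invoke-Sqlcmd is assumed to be available:

#Generate the restore script, eyeball it, then run it against the target instance
.\Build-Restore.ps1 -dir "C:\database_backups" -server "localhost" -database "RestoreTest" -outputdir "C:\Temp"
notepad "C:\Temp\restore_RestoreTest.sql"
Invoke-Sqlcmd -ServerInstance "localhost" -InputFile "C:\Temp\restore_RestoreTest.sql"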

Power(shell)ing your CMS

If you’re looking at Powershell scripts for SQL Server management, you’ll find a lot of them consume a list of servers in order to perform an action. For example, I put a script out a couple of weeks ago that can monitor the service state for your SQL Servers, emailing out a notice if services are down. This is handy, but maintaining a server list in a file is hardly elegant. For something more dynamic, many shops (including my own) will maintain a list of their servers in their own tracking database, giving you a resource you can query. However, there’s an easier way to handle this if you’re using Central Management Server.

The nice thing about Central Management Server is that, just like SQL Server instances, it can be accessed through the SMO with Powershell. This gives us a handy way to browse our server listings as a folder structure. Just open up your Powershell window and (assuming you’ve loaded your SQLPS module/snapin), change your directory to:

CD SQLSERVER:\SQLRegistration\Central Management Server Group

Once you’re there, get a listing of the contents with your favorite command (dir/ls/gci) and you’ll see your registered Central Management Server repositories. You can browse from there into the appropriate repository and see a listing of all your registered instances. Now, keep in mind, you’ll still have to browse the directory structure if you’ve created folders to organize your servers into, but now we have access to them as if they were files in a folder.

Let’s take it a step further now and build out a server listing using some pretty basic commands within Powershell. We only have to keep in mind two gotchas:

  • The SMO recognizes both the folders and the instances as “containers”, so we can’t filter by PSIsContainer being false (the object property that indicates whether an item is a container).
  • The instance object has both Name and ServerName properties. The Name is the descriptive name, while ServerName is the specific name of the instance we can connect to.

Keeping this in mind, we can build a list by running the following command in the repository:

ls -recurse | where {$_.ServerName -ne $NULL}

Breaking this down, it’s a standard looking command for Powershell. By using “-recurse”, we tell Powershell to recursively browse through all the folders and subfolders in our directory for the lookup. Filtering on “$_.ServerName -ne $NULL” means we exclude the directories (since they don’t have a ServerName). Now we have an array of all of our servers that we can use for processing loops:

$srvs= ls -recurse | where {$_.ServerName -ne $NULL}

ForEach ($srv in $srvs)
{
  $inst=$srv.ServerName
  #do some stuff
}
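
To make that skeleton concrete, here’s an illustrative fill-in (not part of the original example) that asks each registered instance for its version, assuming Invoke-Sqlcmd is available from the SQLPS module:

$srvs= ls -recurse | where {$_.ServerName -ne $NULL}

ForEach ($srv in $srvs)
{
  $inst=$srv.ServerName
  #The "stuff": confirm each instance responds and report its version
  Invoke-Sqlcmd -ServerInstance $inst -Query "SELECT @@SERVERNAME AS server_name, SERVERPROPERTY('ProductVersion') AS version"
}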

Another handy element in that object is the ConnectionString property. This provides you with a ready made connection string to use if you need to connect to SQL Server using OLEDB or ODBC calls. While I haven’t had a need to use this yet, I can definitely see the benefits. There are also other properties that can be explored and used, depending on your scenario.

All of this makes for a great extension of Central Management Server, allowing DBAs to leverage the repository for more than just centralizing lookups in Management Studio. This can be a handy tool if you need something with a little less administrative overhead than a server inventory database or some sort of file listing. This means that the Central Management Server, which most of us see as something we only really use within Management Studio, can be leveraged to be so much more.

The Tools to Survive

Last week, I blogged about monitoring backups, service status, and disk space usage. These are the keys to survival for you and your databases. There are several ways that these can be monitored and there are many third party tools that you can use for these tasks. However, third party tools are often not an option.  Fortunately, you can also make use of SQL and Powershell to handle basic monitoring. Here are a couple of scripts that you can use in your environments.

Overdue Backups

The backupset table makes it easy to know when your backups are being taken. By making use of a couple queries, you can find out if your databases are actually being backed up and take appropriate action. This particular script is designed to check for any databases that haven’t had a full backup (type=’D’) in the last three days and email out an alert if any are found.

with db_fulls(dbname,last_backup)
as
(select
	database_name,
	max(backup_start_date)
from
	msdb.dbo.backupset
where
	type='D'
group by
	database_name)
select
	d.name,
	d.recovery_model_desc,
	df.last_backup
into #results
from
	sys.databases d
	left join db_fulls df on (d.name = df.dbname)
where
	isnull(df.last_backup,'1/1/1900') < dateadd(dd,-3,getdate()) 

if (select count(*) from #results) > 1
begin
	DECLARE @tableHTML  VARCHAR(MAX)
	,@emailrecip varchar(200)

	--SET YOUR EMAIL ADDRESS HERE
	set @emailrecip = 'you@yourcompany.com'

	SET @tableHTML =
		N'<h1>Overdue Backup Report</h1>' +
		N'<h2>No full backups in the last 3 days</h2>' +
		N'<table width="1250" border="1">' +
		N'<tbody>
		<tr>
		<th align="left" width="40%">Database</th>
		<th align="left" width="30%">Recovery Model</th>
		<th align="left" width="30%">Last Backup</th>
		</tr>' +
		CAST ( ( SELECT td = rtrim(name), '', td = rtrim(recovery_model_desc), '', td = isnull(convert(varchar(100),rtrim(last_backup),100),'NO BACKUP'), ''
		         from #results order by name FOR XML PATH('tr'), TYPE ) AS VARCHAR(MAX) ) +
		N'</tbody></table>' ;

	exec msdb.dbo.sp_send_dbmail
		@recipients = @emailrecip,
		@subject = 'Overdue Backup Report',
		@body = @tableHTML,
		@body_format = 'HTML' ;
end

drop table #results

The basics here can also be modified for your particular needs. In my own environment, I use this and a second script that monitors for log backups (type=’L’).  I schedule the full backup report to run every morning.  For the log backups, I run the report every 4 hours and have it alert me if a database hasn’t had a log backup in the last 12 hours.

Stopped Services

It’s hard to monitor SQL Services from SQL Server itself, so for this task I turned to Powershell. By making use of the Get-Service cmdlet, you can collect all service related information from a computer. Add some functionality for email, and you can build a handy little service monitoring script.

$srvlist = Get-Content $args[0]
$report=@()

foreach ($server in $srvlist)
{
	try
	{
          $svcs=Get-Service -ComputerName $server | where {$_.name -like "*SQL*"}
		foreach ($svc in $svcs)
		{
			$output = New-Object System.Object
			$output | Add-Member -type NoteProperty -name Instance -value $server
			$output | Add-Member -type NoteProperty -name SvcName -value $svc.Name
			$output | Add-Member -type NoteProperty -name DisplayName -value $svc.DisplayName
			$output | Add-Member -type NoteProperty -name Status -value $svc.Status
			$report+=$output
		}
	}
	catch
	{
		$output = New-Object System.Object
		$output | Add-Member -type NoteProperty -name Instance -value $server
		$output | Add-Member -type NoteProperty -name SvcName -value "No_Service_Collected"
		$output | Add-Member -type NoteProperty -name DisplayName -value "No Service Collected - COLLECTION ERROR"
		$output | Add-Member -type NoteProperty -name Status -value "ERROR"
		$report+=$output
	}
}

#Set these for your environment
$smtp="yourmail.server.com"
$from="SvcAlert@yourserver.com"
$to="You@yourcompany.com"

if(@($report | where {$_.Status -ne "Running"}).Count -gt 0)
{
	[string]$body=$report|where{$_.Status -ne "Running"}| ConvertTo-HTML
	Send-MailMessage -To $to -from $from -subject "Service Monitor Alert!" -smtpserver $smtp -body $body -BodyAsHtml
}

To call this script, you’ll want a text file that contains a list of your servers (passed through as an argument to the script).  Then, run the script as a Windows Scheduled Task.  I recommend setting it up to check the status every few minutes, running from a different computer than your SQL Server. This guards against your monitoring process failing if your SQL Server shuts down. You will also need to ensure that whatever account the task runs under has remote access to your SQL Server.
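
As an illustration, the scheduled task action might look something like this (the script and server list file names are hypothetical):

powershell.exe -ExecutionPolicy Bypass -File "C:\Scripts\ServiceMonitor.ps1" "C:\Scripts\serverlist.txt"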

Free Space

There is actually quite a bit written about this particular topic. There are two ways you can go about this, depending on whether you want to monitor your space from within SQL Server or externally. One option is to use xp_fixeddrives for querying data from directly within SQL Server. If you’ve upgraded to SQL 2008 R2 SP1, you can also make use of sys.dm_os_volume_stats. Personally, I lean more towards making use of Powershell (again) because of the flexibility it gives me. For this, there are several options, but a good place to start is this script off of Simple Talk. Any of these choices will give you the information you need.
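
If you want a quick Powershell starting point of your own, here’s a minimal sketch (the server names are placeholders) that pulls free space for local disks over WMI:

$servers = "SQLPROD01","SQLPROD02"

#DriveType=3 limits the results to local disks
Get-WmiObject Win32_LogicalDisk -ComputerName $servers -Filter "DriveType=3" |
	Select-Object SystemName, DeviceID,
		@{Name="SizeGB";Expression={[math]::Round($_.Size/1GB,2)}},
		@{Name="FreeGB";Expression={[math]::Round($_.FreeSpace/1GB,2)}},
		@{Name="PctFree";Expression={[math]::Round(100*$_.FreeSpace/$_.Size,1)}} |
	Format-Table -AutoSize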

With these basic tools, you can start monitoring your systems for continued operation as well as being alerted when critical issues arise.  From here, you can then move to monitor other aspects of your server and database health.

A Temporary Configuration

A couple weeks ago, I participated in T-SQL Tuesday #26 with a trick to dynamically build a script for doing your restores. In the spirit of that laziness, I wanted to share with you another script I have for configuring Tempdb. There’s not much special here and you could use any other scripting language to get the same result, but the idea is to automate the bulk of the task so that you can save yourself some work as well as create consistent processes for your database configurations.

--Configure TempDB
--Turn off nasty rowcount
SET NOCOUNT ON

--Declare some variables
declare @files int,
  @path varchar(max),
  @totalsize_gb float,
  @v_sql varchar(max),
  @filesize_mb int,
  @logsize_mb int,
  @currfile int

--Set these to configure tempdb
set @files = 8
set @path = 'T:\TempDB\'
set @totalsize_gb = 40

--script calculates individual file sizes
select @filesize_mb=FLOOR((@totalsize_gb*1024)/(@files)),@logsize_mb = FLOOR(@totalsize_gb*1024)* .25

--Build your config script
set @v_sql ='use tempdb
dbcc shrinkfile (tempdev,100)

alter database tempdb
modify file
(NAME=tempdev,
FILENAME='''+@path+'tempdb.mdf'',
SIZE='+CONVERT(nvarchar(10),@filesize_mb)+'MB,
FILEGROWTH=512MB,
MAXSIZE=UNLIMITED)

alter database tempdb
modify file
(NAME=templog,
FILENAME='''+@path+'templog.ldf'',
SIZE='+CONVERT(nvarchar(10),@logsize_mb)+'MB,
FILEGROWTH=512MB,
MAXSIZE=UNLIMITED)

'
set @currfile = 1

while @currfile < @files
begin
set @v_sql = @v_sql + 'alter database tempdb
add file
(NAME=tempdb'+CONVERT(nvarchar(10),@currfile)+',
FILENAME='''+@path+'tempdb'+CONVERT(nvarchar(10),@currfile)+'.ndf'',
SIZE='+CONVERT(nvarchar(10),@filesize_mb)+'MB,
FILEGROWTH=512MB,
MAXSIZE=UNLIMITED)

'

set @currfile = @currfile+1

end

select @v_sql = @v_sql + char(10)+'--Verify your configuration!'+char(10) + 'select name,size/128.0 [size_mb],physical_name from sys.database_files'

print(@v_sql)

PRINT '/***RESTART SERVER***/'

There are three things we want to declare up near the top, and that is number of files, location of the files, and total size of the files. Without getting into the larger discussion of how many files you should have (for those wondering, start at Paul Randal’s(b|t) blog post on it), this script will help you make all your files consistent.
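
To make that concrete: with the defaults above (8 files and 40 GB total), each data file works out to FLOOR((40*1024)/8) = 5120 MB and the log file to 40*1024*0.25 = 10240 MB.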

Some other notes about things I have set:

  • The total file size is total data file size. Log file size is in addition to this, so plan accordingly.
  • I’ve got the log file calculation to size to 25% of the total data files.
  • I’ve got autogrowth on all files set to half a GB (512 MB).

These settings are completely personal preference, so your mileage may vary. If you have different practices, by all means implement those. The point here, as with my RESTORE script, is to pre-build most of your configuration script so that you only have to tweak a couple things.

 

P.S. It should be noted that I call out a DBCC SHRINKFILE on tempdb.  The idea of this script is that you’re running it either while configuring a server or during a maintenance period.  Shrinking a live tempdb can cause corruption, so only execute this script when you’re planning on restarting the server and rebuilding tempdb as part of your configuration.  (Thanks to Vicky Harp (b) for reminding me about that.)

T-SQL Tuesday(#tsql2sday) #26 – Lazy Restore Commands

I missed out on T-SQL Tuesday #25 (Tips ‘n Tricks) on the first go around, so I’m glad that with #26 I get another crack at it. Mine is pretty simple, but has saved me a lot of time when doing database restores. Whether I’m getting a database back online or (more likely) moving a database to another instance, I don’t want to be bothered with digging around to get my restore going.

As a DBA, I am two things:

  1. I hate using the GUI. Clicking through countless windows is for the birds and I would much rather type three commands than click ‘Next’ three times.
  2. I’m lazy. So those three commands? I write them once and save ‘em to a file so I can run them over and over and over….etc. Automation FTW!

So, restores. We all do them and they can get irritating. I know one of the most annoying things is relocating the files with MOVE when you’re bringing the database up on a different server. This is why a lot of folks use the GUI, because SQL Server can handle that behind the scenes and, if you really want, you can script it out. Well I’ve got another way.

You know you can use RESTORE FILELISTONLY to get the database files within a backup. Why not take that a step further and capture it in a table? Then, once it’s in a table, we can use a little bit of SQL coding to give us a template to generate our restore command. Take a look:

--Turn off annoying rowcount
SET NOCOUNT ON

--Some variables
declare @v_restore varchar(1000)
declare @v_backup varchar(1000)
declare @v_sql varchar(max)
declare @datadir varchar(1000)
declare @logdir varchar(1000)

--Set backup file location, database name
set @v_backup = 'C:\demo\test.bak'
set @v_restore='Test_demo'
set @datadir = 'C:\Restore\Data'
set @logdir = 'C:\Restore\Log'

--Storage table

declare @restorelist table
(LogicalName nvarchar(128)
,PhysicalName nvarchar(260)
,Type char(1)
,FileGroupName nvarchar(128)
,Size numeric(20,0)
,MaxSize numeric(20,0)
,Fileid tinyint
,CreateLSN numeric(25,0)
,DropLSN numeric(25, 0)
,UniqueID uniqueidentifier
,ReadOnlyLSN numeric(25,0)
,ReadWriteLSN numeric(25,0)
,BackupSizeInBytes bigint
,SourceBlocSize int
,FileGroupId int
,LogGroupGUID uniqueidentifier
,DifferentialBaseLSN numeric(25,0)
,DifferentialBaseGUID uniqueidentifier
,IsReadOnly bit
,IsPresent bit
,TDEThumbprint varchar(100)) --Be careful, this last field (TDEThumbprint) isn’t in 2k5

--Capture the file list
insert into @restorelist
exec('RESTORE FILELISTONLY FROM DISK='''+@v_backup+'''')

--Build your restore command
select @v_sql = 'RESTORE DATABASE '+@v_restore+' '+char(10)+'FROM DISK=''' +@v_backup+ ''''+ CHAR(10)+'WITH '
select @v_sql = coalesce(@v_sql,'')+'MOVE '''+logicalname +
''' TO '''+CASE when type='L' then @logdir else @datadir end +'\'+ right(physicalname,charindex('\',reverse(physicalname))-1)+''',' + char(10)
from @restorelist

--display the restore command, trim trailing comma and char(10)
print substring(@v_sql,1,LEN(@v_sql)-2)
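
To give you an idea of the result, the printed output for a simple two-file database would look something like this (the logical and physical file names here are made up):

RESTORE DATABASE Test_demo
FROM DISK='C:\demo\test.bak'
WITH MOVE 'Test_demo_data' TO 'C:\Restore\Data\test_demo.mdf',
MOVE 'Test_demo_log' TO 'C:\Restore\Log\test_demo_log.ldf'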

So that’s a lot of stuff! Well, not really. If you look at it, most of the script is taken up defining the file list table. Change your variables at the top, run the script, and copy the output into another window…BAM! There’s your restore command. And if it’s not perfect, you only need to make one or two changes to the output to clean it up.

This is not the only sproc output you can capture. The biggest lesson from this trick is you should think about other system stored procedures and look for opportunities where you can grab outputs to make your life easier. Help yourself look like a hero and get more sleep by taking advantage of what’s in SQL.

Thanks to Dave Howard(b|t) for offering folks a second chance to share for T-SQL #26!

Monday Scripts – Getting Partition Information

I’m going to follow up my post on partitioning with a query I wrote while implementing the partitions and cleaning up the empty ones. One of the issues I ran into was clearly seeing each partition’s range and the number of rows within it. These two questions are answered in different places in the catalog views, sys.partition_range_values and sys.partitions. To build this out, I started with a nice little query from Derek Dieter, adding in a couple joins to get me to sys.partitions. Here’s the result!

select 
	t.name as TableName
	, ps.name as PartitionScheme
	, pf.name as PartitionFunction
	, p.partition_number
	, p.rows
	, case 
		when pf.boundary_value_on_right=1 then 'RIGHT' 
		else 'LEFT' 
	  end [range_type]
	, prv.value [boundary]
from sys.tables t
    join sys.indexes i on t.object_id = i.object_id
    join sys.partition_schemes ps on i.data_space_id = ps.data_space_id
    join sys.partition_functions pf on ps.function_id = pf.function_id
    join sys.partitions p on i.object_id = p.object_id and i.index_id = p.index_id
    join sys.partition_range_values prv on pf.function_id = prv.function_id and p.partition_number = prv.boundary_id
where i.index_id < 2  --So we're only looking at a clustered index or heap, which the table is partitioned on
order by p.partition_number
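
If you want to capture this for documentation, here’s a hedged way to run it from Powershell; the server, database, and file names are placeholders, and it assumes the query is saved as Get-PartitionInfo.sql with Invoke-Sqlcmd available:

Invoke-Sqlcmd -ServerInstance "MYSERVER" -Database "MyPartitionedDB" -InputFile ".\Get-PartitionInfo.sql" |
	Export-Csv ".\partition_info.csv" -NoTypeInformation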