Art of the DBA

Scripts

A PoSH way to skin the cat

Today my friend John Morehouse (@SqlrUs) posted a handy T-SQL script to configure your SQL Server memory based on some rules provided by Jonathan Kehayias (@SQLPoolBoy).  This is a great script and helps automate a lot of the work a DBA needs to do when setting up a new instance. Hopefully you know by now about my passion for automation and standardization.  However, when reading John’s post, I had the following internal conversation:

“Wow, self, John’s got a neat little script here.”

“Yeah, but T-SQL is so passé.  Everyone does it.  Plus, not sure how easily we could add that into a server deployment script if we wanted total hands-off-ness.”

“Self, I know what you mean.  Nothing wrong with it, it just doesn’t fit my own particular….oh…particular….”

Idiom?

“Yes!  Idiom!  Hrmmm…how about a Powershell script?”

“A Powershell script?  Brilliant!”

And so I sat down and cranked out the following script, which essentially does what John’s script does, just using Powershell.

<#
.SYNOPSIS
 Configures a SQL Server instance per Jonathan Kehayias' guidelines.
.DESCRIPTION
 This script will configure your SQL Server instance per the guidelines
 found in Jonathan Kehayias' blog post: http://www.sqlskills.com/blogs/jonathan/how-much-memory-does-my-sql-server-actually-need/
 The rules are:
 - 1GB for initial OS reserve
 - +1GB per 4GB server RAM up to 16GB
 - +1GB per 8GB server RAM above 16
.PARAMETER
 -instance SQL instance name, i.e. localhost\SQL2012, DBASERVER01
 -apply Switch parameter, call if you want to actually apply the changes. Otherwise, a report will be produced.
.EXAMPLE
 Configure-SQLMemory -instance DBASERVER01 -apply
#>

param([parameter(Mandatory=$true)][string] $instance
 , [Switch] $apply
 )

#load SMO
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

if($instance.Contains("\")){
 $sqlhost = $instance.Split("\") | Select -First 1
 }
else{
 $sqlhost = $instance
 }

#set memory variables
$totalmem = (gwmi Win32_ComputerSystem -computername $sqlhost).TotalPhysicalMemory/1GB
$sqlmem = [math]::floor($totalmem)

#calculate memory
while($totalmem -gt 0){
 if($totalmem -gt 16){
 $sqlmem -= [math]::floor(($totalmem-16)/8)
 $totalmem=16
 }
 elseif($totalmem -gt 4){
 $sqlmem -= [math]::floor(($totalmem)/4)
 $totalmem = 4
 }
 else{
 $sqlmem -= 1
 $totalmem = 0
 }
}

#report current and proposed values; if -apply was passed, alter the configuration
$srv = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server $instance
 "Instance:" + $instance
 "Max Memory:" + $srv.Configuration.MaxServerMemory.ConfigValue/1024 + " -> " + $sqlmem
 "Min Memory:" + $srv.Configuration.MinServerMemory.ConfigValue/1024 + " -> " + $sqlmem/2
if($apply){
 $srv.Configuration.MaxServerMemory.ConfigValue = $sqlmem * 1024
 $srv.Configuration.MinServerMemory.ConfigValue = $sqlmem/2 * 1024
 $srv.Configuration.Alter()
 "Configuration Complete!"
 }

Now it should be noted that I’m not going to tell you which of these scripts is “better”. Neither is, really, they’re just different ways of approaching the problem. And that’s the fun of it. When working in technology, there’s usually more than one way to skin a cat; we just have to pick one and use it. Whether it’s T-SQL, Powershell, C#, or some other thing, the point is that you want scripts to automate your work.

What was also cool about this was it gave me another exercise to practice Powershell.  After all, practice makes perfect and, when learning something, any excuse to use it is a good one.  John and I actually had a conversation shortly after I wrote the script about these sorts of opportunities, and we’ll continue to look for ways to challenge each other and keep learning.

Edit:  Well, that was embarrassing.  Amazing how sometimes you can get the math a little screwy.  I’ve updated this script as of 9:44 PM MST to correct having the math backwards.  If you grabbed or reviewed this script beforehand, please make sure to grab the updated version.

The Moving Van

A problem DBAs commonly face is moving database files around, especially when we’re trying to manage space on a drive.  I know that I’ve had to do it and, when you have a database instance with hundreds of databases, you need an automated way to manage this.  Recently, someone on DBA Stackexchange had this specific problem and I was able to provide a Powershell script that I’ve used in the past to manage this problem.  And when I haven’t blogged for a while, a good way back into it is to share a Powershell script.  Mostly because it’s easy.  :)

First, let me talk about a pattern I commonly use.  A lot of times, I could write a single script to take an action, but instead I build a script to write a script.  The main reason I do this is that it’s very hard, especially with many of our administrative tasks, to write a one-size-fits-all process.  So instead, I have a tool that gets me 90% of the way, which I can then tweak or alter to address the final 10% and fit the specific need.

That’s what this script is doing.  It doesn’t perform the actual file moves itself, but writes out a script that will execute those file moves.  It’s fairly generic, creating a script that moves all the database files from their current location (wherever that may be) to two common directories: data and log. It’s rare that you would move all your database files in this fashion, but you can edit the generated script to suit your specific situation.

This script makes use of the techniques detailed here by Ben Miller (@dbaduck), with one significant exception. I’ve found that the SMO .SetOffline() method for databases is very unreliable because it essentially executes a plain “ALTER DATABASE [foo] SET OFFLINE;”. The main problem with this command is that it has to wait for all connections to close gracefully before it can proceed, something that doesn’t happen too frequently. I prefer to use “ALTER DATABASE [foo] SET OFFLINE WITH ROLLBACK IMMEDIATE;” because it forces all connections to close and open transactions to roll back, no waiting involved. Since the SMO methods don’t have an option for that, I call the SQL using the Invoke-Sqlcmd cmdlet. I find this a much cleaner option for my script.
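
To make the difference concrete, here’s a minimal sketch of the two approaches side by side (this assumes the SMO is loaded and the sqlps module/snap-in provides Invoke-Sqlcmd; the instance and database names are placeholders):

#SMO approach: issues a plain SET OFFLINE and waits on existing connections
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"
$server.Databases["MyDB"].SetOffline()

#T-SQL approach: kicks out connections and rolls back open transactions immediately
Invoke-Sqlcmd -Query "ALTER DATABASE [MyDB] SET OFFLINE WITH ROLLBACK IMMEDIATE;" -ServerInstance "localhost" -Database "master"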

Specifically, the script accepts 4 parameters:

  • $newdata – New data file path

  • $newlog – New log file path

  • $instance – SQL instance you want to move files for (defaults to local)

  • $outputfile – Output file name (defaults to Documents, FileMover.ps1)

When you run it, it will spit out a FileMover.ps1 script to your Documents folder (though, using that last parameter, you can change the output location and name).  This FileMover.ps1 will be the script you can then edit to suit your specific needs.  The script is below, but you can also download it here.

<#
.SYNOPSIS
   Builds a Powershell script for moving many database files from one
	 set of directories to another.
.DESCRIPTION
   Used to generate a Powershell script for moving many database files
	 from one location to another.  This will typically be used with a
	 large number of databases that need to be relocated.  Caveats for
	 this script include:
	 		-Destination for all files will be the same.
			-User that runs the script must have access to source and destination locations
			-This uses the 2008 R2 version of the SMO.

	The script generates a FileMover.ps1 script (by default in My
	Documents).  The reason for generating a separate script is so
	specific configurations can be edited/updated before actually
	executing the move.

.PARAMETER <paramName>
   instance - Instance owning the databases to be moved
	 newdata - New data file location, no trailing '\'.  example: "C:\DBFiles\Data"
	 newlog - New log file location, no trailing '\'.  example: "C:\DBFiles\Log"
	 $outputfile - Full path and name of output file.  By default, FileMover.ps1 in My Documents.

.EXAMPLE
   .\Build-FileMover.ps1 -newdata "C:\DBFiles\Data" -newlog "C:\DBFiles\Log"
#>

param([parameter(Mandatory=$true)][string] $newdata,
			[parameter(Mandatory=$true)][string] $newlog,
			[string] $instance="localhost",
      [string] $outputfile=([Environment]::GetFolderPath("MyDocuments"))+"`\FileMover.ps1")

#load SMO
Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

#Create server object and output filename
$server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server $instance

#get your databases
$db_list=$server.Databases

#build initial script components
"Add-PSSnapin SqlServerCmdletSnapin100" > $outputfile
"Add-PSSnapin SqlServerProviderSnapin100" >> $outputfile
"[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') `"$instance`" | out-null" >> $outputfile
"`$server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server " >> $outputfile

foreach($db_build in $db_list)
{
	#only process user databases
	if(!($db_build.IsSystemObject))
	{
		#script out all the file moves
		"#----------------------------------------------------------------------" >> $outputfile
		"`$db=`$server.Databases[`""+$db_build.Name+"`"]" >> $outputfile

		$dbchange = @()
		$robocpy =@()
		foreach ($fg in $db_build.Filegroups)
		{
			foreach($file in $fg.Files)
			{
				$shortfile=$file.Filename.Substring($file.Filename.LastIndexOf('\')+1)
				$oldloc=$file.Filename.Substring(0,$file.Filename.LastIndexOf('\'))
				$dbchange+="`$db.FileGroups[`""+$fg.Name+"`"].Files[`""+$file.Name+"`"].Filename=`"$newdata`\"+$shortfile+"`""
				$robocpy+="ROBOCOPY `"$oldloc`" `"$newdata`" $shortfile /copyall /mov"

			}
		}

		foreach($logfile in $db_build.LogFiles)
		{
			$shortfile=$logfile.Filename.Substring($logfile.Filename.LastIndexOf('\')+1)
			$oldloc=$logfile.Filename.Substring(0,$logfile.Filename.LastIndexOf('\'))
			$dbchange+="`$db.LogFiles[`""+$logfile.Name+"`"].Filename=`"$newlog`\"+$shortfile+"`""
			$robocpy+="ROBOCOPY `"$oldloc`" `"$newlog`" $shortfile"
		}

		$dbchange+="`$db.Alter()"
		$dbchange+="Invoke-Sqlcmd -Query `"ALTER DATABASE ["+$db_build.Name+"] SET OFFLINE WITH ROLLBACK IMMEDIATE;`" -ServerInstance `"$instance`" -Database `"master`""

		$dbchange >> $outputfile
		$robocpy >> $outputfile

		"Invoke-Sqlcmd -Query `"ALTER DATABASE ["+$db_build.Name+"] SET ONLINE;`" -ServerInstance `"$instance`" -Database `"master`""  >> $outputfile
	}
}
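
For illustration, here’s roughly what the generated FileMover.ps1 looks like for a single hypothetical database (the instance name, database name, and paths are all made up; your generated file will reflect whatever the generator found on your instance):

#generated header: load the 2008 R2 snap-ins/SMO and connect to the instance
Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
$server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server "DBASERVER01"

#----------------------------------------------------------------------
$db=$server.Databases["SalesDB"]
#repoint the catalog at the new file locations (takes effect when the database comes back online)
$db.FileGroups["PRIMARY"].Files["SalesDB"].Filename="C:\DBFiles\Data\SalesDB.mdf"
$db.LogFiles["SalesDB_log"].Filename="C:\DBFiles\Log\SalesDB_log.ldf"
$db.Alter()
#force the database offline, rolling back any open transactions
Invoke-Sqlcmd -Query "ALTER DATABASE [SalesDB] SET OFFLINE WITH ROLLBACK IMMEDIATE;" -ServerInstance "DBASERVER01" -Database "master"
#physically relocate the files
ROBOCOPY "D:\OldData" "C:\DBFiles\Data" SalesDB.mdf /copyall /mov
ROBOCOPY "D:\OldLog" "C:\DBFiles\Log" SalesDB_log.ldf
#bring the database back online from its new home
Invoke-Sqlcmd -Query "ALTER DATABASE [SalesDB] SET ONLINE;" -ServerInstance "DBASERVER01" -Database "master"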

Server Inventories

Ok, I’ve been promising this to people for a while, time to get at it.  Brace yourself for a long one.

As enterprise DBAs, we usually have many servers to manage.  Whether it’s around ten, a hundred, or a thousand (or more!), tracking them manually is a bear.  However, there are still many shops that maintain convoluted spreadsheets and other manual documentation to track their inventory.  Manual sucks.  As John Sansom (@SQLBrit) says, the best DBAs automate everything, including their server inventories.

There are a lot of approaches and third-party tools you can use for your own environment.  In my mind, I’m totally cool with spending the money on a third-party tool because of the depth it can provide in my tracking and monitoring.  A lot of times, though, we might not have access to those tools, or we may need more information than they provide.  It’s at this point we sit down at our keyboards, crack our fingers, and start banging out some code.  Of course, the code I started banging out was Powershell (shocking, I know!).

 I didn’t start from scratch, though.  When putting my own script together (which we’ll get to later), I found two great resources that got me about 80% of the way:

 Allen White(@SQLRunr) – Let PowerShell do an Inventory of your Servers
Colleen Morrow(@ClevelandDBA) - Building A SQL Server Inventory

These are both great resources that will get you started, but I needed a little more.  The biggest challenge I had was a need to poll cluster information and collect SQL instance information separately from my machine information.  I also wanted to do some consolidation of my data sets.  So I borrowed heavily from both Allen and Colleen, then molded the result to my own purposes.

Process

Before we dig into the code, I first want to touch on the collection process and how I handle it.  It’s not complex, but code doesn’t make sense until you understand the “why” of the different components.

What I’m collecting

  • SQL Instance information – Whether it’s an instance living on a cluster node, a physical stand alone, or a VM host, I want the SQL Instance without really caring about the machine it lives on.  At least not meshed in directly with the machine, though I DO collect the physical host name so I can identify which machine information I need.

  • Physical Machine information – Since I’m not tying the machine info directly with the SQL Instance collection, I want the physical information for the servers my instances live on.  This is where I get everything that isn’t tied directly to the SQL Instance.

How I’m collecting it

  • I’ve got 4 tables, 2 live tables and 2 stage tables, that I use for the collection.  Nothing out of the ordinary, but there’s an instance table with a stage table and a machine table with a stage table.

  • I use a Powershell script to perform the collection itself.  It uses the instance table as its primary driver: the idea is that I provide the initial population of instance names to track and the inventory process fully populates the rest of the information.

  • When the Powershell script completes, it executes a stored procedure to load data from the stage tables into the live tables.  The load is simple: the instance table is updated with data from the stage and the machine information is deleted and replaced.

  • The Powershell script is run by a SQL Agent job running under a specific monitoring Active Directory account created as a credential in SQL Server.  The gotcha here is that the monitoring account needs domain access to the clusters and machines it’s going to be querying.

The Code

I won’t list the full scripts here, but you can download and review them here:

SQL Objects
Powershell script

Let’s first talk about the dependencies (a quick loading sketch follows the list):

  • The SMO, so you’re going to need to have either your script or your profile load the 2008 snap-ins or import the 2012 sqlps module.

  • Chad Miller’s Out-DataTable and Write-DataTable (I combined them into one file, DataTables.ps1).

  • The FailOverClusters module, part of the Remote Server Administration Tools if you’re not running the code on a machine with a Windows Server OS, or part of the Failover Cluster components if you are.
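
For reference, here’s a rough sketch of how those dependencies get loaded at the top of the script (the path to DataTables.ps1 is a placeholder; swap the snap-ins for the sqlps module if you’re on SQL 2012):

#2008-style snap-ins and the SMO (use Import-Module sqlps on 2012 instead)
Add-PSSnapin SqlServerCmdletSnapin100 -ErrorAction SilentlyContinue
Add-PSSnapin SqlServerProviderSnapin100 -ErrorAction SilentlyContinue
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

#Chad Miller's Out-DataTable/Write-DataTable, combined into one file
. "C:\Scripts\DataTables.ps1"

#cluster cmdlets (from RSAT on a workstation, or the clustering features on a server)
Import-Module FailoverClusters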

There are two key functions I use, Get-Instance to gather the SQL instance information and Get-Machine to gather the machine information:

#Collects and returns SQL Instance information
function Get-Instance([string]$instcoll,[int]$id,[string]$name)
{
try
{
	$smo = new-object ('Microsoft.SqlServer.Management.Smo.Server') $name
	$sname = $smo.NetName
	$iname = $smo.InstanceName
	if($iname.Length -eq 0 -or $iname -eq $null) { $iname = "MSSQLSERVER" }

	$managedcomp = new-object ('Microsoft.SqlServer.Management.Smo.WMI.ManagedComputer') $sname
	$output = New-Object System.Object

	$port = $managedcomp.ServerInstances[$iname].ServerProtocols["Tcp"].IPAddresses["IPAll"].IPAddressProperties["TcpPort"].Value
	$ip = (Test-Connection $sname -count 1).IPV4Address.ToString()

	$output | Add-Member -type NoteProperty -name InstanceId -value $id
	$output | Add-Member -type NoteProperty -name SQLVersion -value $smo.VersionString
	$output | Add-Member -type NoteProperty -name SQLVersionDesc -value $smo.ProductLevel
	$output | Add-Member -type NoteProperty -name SQLEdition -value $smo.Edition
	$output | Add-Member -type NoteProperty -name MemoryMinMB -value $smo.Configuration.MinServerMemory.RunValue
	$output | Add-Member -type NoteProperty -name MemoryMaxMB -value $smo.Configuration.MaxServerMemory.RunValue
	$output | Add-Member -type NoteProperty -name MAXDOPVal -value $smo.Configuration.MaxDegreeOfParallelism.RunValue
	$output | Add-Member -type NoteProperty -name IP -value $ip
	$output | Add-Member -type NoteProperty -name Port -value $port
	$output | Add-Member -type NoteProperty -name PhysicalHost -value $smo.ComputerNamePhysicalNetBIOS
	return $output
}
catch
{
	write-host "Error collecting $name"
	return $null
}
}#Get-Instance

#Get host machine information via WMI
function Get-Machine([string]$name,[string]$clst)
{
 try
{
	$comp = gwmi Win32_ComputerSystem -Computer $name | select Model,Manufacturer,TotalPhysicalMemory
	$proc = gwmi Win32_Processor -Computer $name | select NumberOfLogicalProcessors,MaxClockSpeed
	$os = gwmi Win32_OperatingSystem -Computer $name | select OSArchitecture,Name,Version,ServicePackMajorVersion,ServicePackMinorVersion

	$output = New-Object System.Object

	$output | Add-Member -type NoteProperty -name MachineName -value $name
	$output | Add-Member -type NoteProperty -name Model -value $comp.Model
	$output | Add-Member -type NoteProperty -name Manufacturer -value $comp.Manufacturer
	$output | Add-Member -type NoteProperty -name Architechture -value $os.OSArchitecture
	$output | Add-Member -type NoteProperty -name PhysicalCPUs -value $(if(!$proc.Length){"1"}else{$proc.Length})
	$output | Add-Member -type NoteProperty -name LogicalCPUs -value ($proc | Measure-Object NumberOfLogicalProcessors -sum).Sum
	$output | Add-Member -type NoteProperty -name CPUSpeed -value ($proc | Measure-Object MaxClockSpeed -max).Maximum
	$output | Add-Member -type NoteProperty -name MaxMemory -value ($comp.TotalPhysicalMemory/1MB)
	$output | Add-Member -type NoteProperty -name OSName -value $os.name.split("|")[0]
	$output | Add-Member -type NoteProperty -name OsVersion -value $os.Version
	$SPMaj = $os.ServicePackMajorVersion
	$SPMin = $os.ServicePackMinorVersion
	$output | Add-Member -type NoteProperty -name SPVersion -value "$SPMaj.$SPMin"
	$output | Add-Member -type NoteProperty -name Cluster -value $clst

	return $output
}
catch
{
	write-host "Error collecting $name"
	return $null
}
}#Get-Machine

Both leverage the SMO and WMI to get the relevant info.  I’ll let you sort through the individual elements I’m pulling; it’s fairly standard config info.  Because I think in terms of tables, data gathering is a collection of objects that are essentially rows.  I append each new object, with all the appropriate properties, to a larger collection, then use Write-DataTable to push that data directly into the related stage table.
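
As a rough sketch of that pattern (the inventory server, database, and stage table names below are placeholders, and Out-DataTable/Write-DataTable are Chad Miller’s functions mentioned above):

#walk the list of instance names pulled from the live instance table
$instrows = @()
$id = 1
foreach($name in $instance_names)
{
	#first parameter of Get-Instance isn't used in the excerpt above, so pass an empty string
	$row = Get-Instance "" $id $name
	if($row -ne $null){ $instrows += $row }
	$id++
}

#convert the object collection to a DataTable and bulk load it into the stage table
$dt = $instrows | Out-DataTable
Write-DataTable -ServerInstance "INVENTORYSRV" -Database "DBAInventory" -TableName "dbo.InstanceStage" -Data $dt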

Look at the entire script for my full process.  I’ve been pretty happy with it and it’s been chugging away in my environments now for about 8-9 months without a whole lot of interaction from me.  If we stand up a new server, I just pop that name in the instance table, run the job, and everything gets updated.  What made me most proud was that we had a recent meeting with a new VP and pulled up the SSRS report built off of this information to give him an idea of what our environments look like.  His comment was “This is the most documentation I’ve seen since I got here.”  The beauty is that it was provided without a whole lot of manual maintenance on my part.

This is very much an evolving work on my part, but hopefully gives you some insight into how I manage my environments.  I’ve still got lots of ideas on improvements, including automatic population (or other use) of Central Management Server.  If you have any ideas on how this can be improved, I’d love to hear them.

Slipstreamin’ Away

It’s nothing new: I love anything that makes less work for me.  One thing that has been a great contributor to this has been slipstream installs.  While that’s a bit of a fancy name, basically it’s an install of SQL Server where the service packs and cumulative updates are rolled up into one install package.  This saves the tedious chore of constant install after install to get a new SQL Server installation up to a patch level consistent with the rest of your enterprise.

Now, the problem with slipstream installs is that they’re a pain in the butt to create, at least previous to SQL 2012.  Not difficult, but tedious.  You have to copy some files, merge some directories, and edit some config files.  It’s nothing that can’t be done, but very repetitive.  To boot, if something isn’t copied or configured right, the whole package won’t work and you won’t know until you attempt the install (and it fails).

Enter Powershell.

Building a slipstream is no different than any other task that we should automate.  After all, we’re doing the same actions every time, the only thing that might change are the files we use to build our slipstream.  What I’ve done is put together a script that does all this for me.  As with any script, there are a few rules to how you use it:

  • The script was designed for building SQL 2008 and 2008 R2 slipstreams.  I tried it with 2005, but that process is different enough that this script won’t work.
  • RTMSource and output are the two mandatory parameters.  The SPSource and CUSource parameters will be skipped if left empty.  Basically, if you are making just a SQL 2008 R2 SP2 slipstream, you would only need to supply the service pack files.
  • The script expects the base install (the RTM media) to be un-extracted and the service packs or cumulative updates in their .exe form.  The script will extract the additional files from the .exe into the slipstream package.
  • I built the script to work along my organizational thought process, which means I put each set of files in their own subdirectory.  This shouldn’t be a big deal, but be aware that if you point the script at a location, it won’t just pull out the files it needs, it will try and use every .exe file in the subdirectory.

Other than that, here’s the script:

<#
.SYNOPSIS
   Builds a SQL Server slipstream install.
.DESCRIPTION
   This script will take the various components of a SQL Server install (RTM binaries, SPs, CUs) and assemble
	 them to create a slipstream install.  To use this script, you will need the unextracted RTM binaries (.iso
	 or DVD media) and the executables for the SP and CUs you want to use.
	 Mike Fal (http://www.mikefal.net) 2-28-2013
.PARAMETER <paramName>
   RTMSource - File path for the RTM source files
	 SPSource - File path for the Service Pack executables (if no value is passed, no SP will be added)
	 CUSource - File path for the Cumulative update executables (if no value is passed, no CU will be added)
	 output - filepath where slipstream will be written, directory will be created if it doesn't exist.
.EXAMPLE
   .\Build-Slipstream.ps1 -RTMSource "C:\Users\mfal\Downloads\en_sql_server_2008_r2_developer_x86_x64_ia64_dvd_522665"
	 												-SPSource "C:\Users\mfal\Downloads\2008R2SP1"
													-CUSource "C:\Users\mfal\Downloads\2008R2SP1CU2"
													-output "C:\SQL2008SP1CU2_Slipstream"
#>

param([parameter(Mandatory=$true)][string] $RTMSource,
			[parameter(Mandatory=$false)][string] $SPSource,
      [parameter(Mandatory=$false)][string] $CUSource,
      [parameter(Mandatory=$true)][string] $output)

#function to extract and copy SPs and CUs
function Modify-Source{
	param($TYPE="PCU",$WORKDIR,$PATCHDIR)
	$PATCHFILES=get-childitem "$PATCHDIR\*" -include *.exe
	if (!(test-path("$WORKDIR\$TYPE"))) {mkdir $WORKDIR\$TYPE }
	if (!(test-path("$WORKDIR\$TYPE"))) {Throw "Error: Patch file destination not available"}

	set-location $PATCHDIR

	foreach($FILE in $PATCHFILES)
		{

			iex "./$($FILE.basename)$($FILE.extension) /x:""$WORKDIR\$TYPE"" /q"
			"Waiting for $FILE to extract..."
			while (@(Get-Process $FILE.Name.Replace(".exe","") -ErrorAction SilentlyContinue).Count -ne 0)
			{
			Start-Sleep 1
			}
		}
  "Copying $TYPE files..."
	robocopy "$WORKDIR\$TYPE" "$WORKDIR" setup.exe /ndl /nfl /njh /njs
	robocopy "$WORKDIR\$TYPE" "$WORKDIR" setup.rll /ndl /nfl /njh /njs

if (test-path("$WORKDIR\$TYPE\x86")) {robocopy "$WORKDIR\$TYPE\x86" "$WORKDIR\x86" /XF Microsoft.SQL.Chainer.PackageData.dll /ndl /nfl}
if (test-path("$WORKDIR\$TYPE\x64")) {robocopy "$WORKDIR\$TYPE\x64" "$WORKDIR\x64" /XF Microsoft.SQL.Chainer.PackageData.dll /ndl /nfl}
if (test-path("$WORKDIR\$TYPE\ia64")) {robocopy "$WORKDIR\$TYPE\ia64" "$WORKDIR\ia64" /XF Microsoft.SQL.Chainer.PackageData.dll /ndl /nfl}

set-location $WORKDIR
}

#Test path locations for validity

if (!(test-path($RTMSource + "\setup.exe"))) {Throw "SQL RTM Source does not exist!"}
if ($SPSource -and !(test-path($SPSource))) {Throw "Invalid Service Pack file location!"}
if ($CUSource -and !(test-path($CUSource))) {Throw "Invalid Cumulative Update file location!"}
if (!(test-path($output))) {mkdir $output}
if (!(test-path($output))) {Throw "Unable to create build directory!"}

#begin merge process, RTM
"Copying RTM..."
robocopy "$RTMSource" "$output" /s /ndl /nfl
$startloc=pwd
set-location $output

Copy-Item ".\x86\DefaultSetup.ini" ".\"

#merge Service Pack
if($SPSource)
{
	Modify-Source "PCU" $output $SPSource
	"`n`nPCUSOURCE=`".\PCU`" `n" |Out-File "DefaultSetup.ini" -Append
}

#merge Cumulative Update
if($CUSource)
{
	Modify-Source "CU" $output $CUSource
	"`n`nCUSOURCE=`".\CU`" `n" |Out-File "DefaultSetup.ini" -Append
}

#cleanup
copy-item "defaultsetup.ini" .\ia64 -force
copy-item "defaultsetup.ini" .\x64 -force
copy-item "defaultsetup.ini" .\x86 -force

Set-Location $startloc

Powershell Shredding

I’ve been playing around a bit recently with Powershell and XML. It’s one of those experiments where I’m not sure what the immediate benefit is, but it certainly is interesting seeing what kind of functionality we have available to us as data folks. I’m going to see what more I can coax out of it, but I wanted to share with you what I’ve learned so far.

First off, understand that I’m not that strong when it comes to XML. I get what it is, I understand the basic structure, but wrangling it isn’t something I’ve had to do a lot of. As a result, I’m still very much a newbie with XPath and XQuery. I understand nodes and properties, but then it starts to get muddy. Just a disclaimer before we get too far into this.

.NET, and by extension Powershell, has an XML data type.  This is useful because query plans are XML documents, whether we save them off to a file or pull them from the plan cache. So it’s a fairly simple matter to suck a query plan into an XML variable:

[xml]$plan=(gc SomeSQLQuery.sqlplan)

From here, we can start browsing through our plan using the dot notation to parse the plan. The query plan itself is going to be found under the ShowPlanXML node. Under that, there’s a fairly complex layout that you can really dig into by looking at the full schema documentation. Suffice to say, if we want to see the SQL text from the query, we’d need to look at:

$plan.ShowPlanXML.BatchSequence.batch.Statements.StmtSimple.StatementText

That’s a lot of drilldown! What’s worse is, if we wanted to start finding specific operators, we would quickly get lost in a recursive arrangement of RelOp nodes and actual operators. So if we want to extract something useful out of the XML, we need to leverage XPath using the .SelectNodes() method of the XML data. The only problem here is that, for some reason, the namespace contained within the plan confuses any calls using .SelectNodes(). To get around this, I basically ripped out the namespace so that the defaults can be used:

[xml]$plan=(gc SomeSQLQuery.sqlplan) -replace 'xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan"'

At this point, I can now start using XPath to analyze my query plan. So if I wanted to pull up all my table scans:

$plan.SelectNodes('//TableScan/Object') | ft

Or, if I wanted to get all my table or index scans:

$plan.SelectNodes('//*[contains(name(),"Scan")]/Object') | ft

And so on and so forth.
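
Putting those pieces together, here’s a small sketch of a function that reads a plan file, strips the namespace, and lists the objects hit by any kind of scan (the file name is just an example):

function Get-PlanScans([string]$planfile)
{
	#load the plan, stripping the showplan namespace so default XPath works
	[xml]$plan = (gc $planfile) -replace 'xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan"'

	#return the target object of every table or index scan
	$plan.SelectNodes('//*[contains(name(),"Scan")]/Object') |
		Select-Object Database, Schema, Table, Index
}

Get-PlanScans "SomeSQLQuery.sqlplan" | ft -AutoSize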

Now, what does this get me? At this point, I’m not sure. I started down this road after seeing Jason Strate’s(@StrateSQL) presentation on shredding the plan cache with T-SQL. My thought process was that this might be an easier way to dissect the plan cache for useful information. In a way, I was right, because it was a little easier to grasp, but it also seems like it’s the long way around the horn to get at that information. I’ll continue to poke at it and see what I can coax out of it.

Finding Orphan Logins

Cleaning up old security info is always something that’s hard to stay on top of.  Employees who leave or change responsibilities can make it difficult to track which logins need to remain in our instances.  The best way to manage this is by using Active Directory groups and managing access from the group level instead of by explicit login.  However, because of the communication and organization that’s required to set up these groups in the first place, it’s not uncommon to find ourselves managing explicit logins in our servers.

The unfortunate side effect is this typically results in orphaned logins within our SQL Servers.  A question on how to manage this came up recently on dba.stackexchange.com and, while I couldn’t find a way to answer the specific question (because it was focused around finding orphaned logins in SQL 2000), I was able to come up with a handy utility script for 2005+ that makes use of xp_logininfo.  It’s not a complex script and provides a list of the actual drop login statements you can execute as part of your cleanup.

declare @user sysname
declare @domain varchar(100)

--store total output
declare @return table
(login_name sysname
,sql_statement varchar(max)
,error_msg varchar(100))

--Store output of xp_logininfo
declare @junk table
(a_name sysname NULL
,a_type char(8) NULL
,priv char(9) NULL
,m_login sysname NULL
,p_path sysname NULL)

--Declare domain you are checking
set @domain = 'FOO'

declare recscan cursor for
select name from sys.server_principals
where type = 'U' and name like @domain+'%'

open recscan
fetch next from recscan into @user

while @@fetch_status = 0
begin
	begin try
		insert into @junk
		exec xp_logininfo @user
	end try
	begin catch
		--Error on sproc because login does not exist
		insert into @return
		values(@user,'drop login '+convert(varchar,@user),ERROR_MESSAGE())
	end catch

	fetch next from recscan into @user
end

close recscan
deallocate recscan

select * from @return

A couple notes on this script:

  • It currently doesn’t handle removing the logins from individual databases.  I have another script I use to do that and will be working on combining the two.
  • I don’t automatically execute the drop login because I’m paranoid.  :)  By providing a list, I can review the logins, along with the error message output, and verify that I want to remove them.

Hopefully this little utility script can help you with login management in your environments.

Some Backup Info

So I’ve been pretty bad about blogging lately.  I don’t buy in much to excuses and so I have none to offer.  Just got away from me (something I’ll talk about shortly when I review my 2012 goals).  Anyway, let’s talk about something a little more useful to you, the reader.

A couple weeks ago, I gave a short presentation on MSDB’s BackupSet.  It was a fun little presentation and gave me a chance to share with the Boulder SQL user group one of my favorite tables.  Why one of my favorites?  Simply because there’s so much useful information packed in there, particularly when it comes to the critical job of monitoring your backups.  If you haven’t looked at it, I highly recommend that you check it out.

While developing the presentation, I put together a handy query that I wanted to share.  It’s very useful for not only showing you when your latest backups were, but also the size and location of those backups:

select
  bs.database_name
  ,bs.backup_finish_date
  ,bs.backup_size/1024.0/1024.0 [backup_size_mb]
  ,bs.compressed_backup_size/1024.0/1024.0 [compressed_size_mb]
  ,datediff(ss,backup_start_date,backup_finish_date) [backup_time]
  ,((bs.backup_size/1024.0/1024.0)/(datediff(ss,backup_start_date,backup_finish_date)+1)) [mb_per_second]
  ,1.0-(bs.compressed_backup_size*1.0/bs.backup_size) [compression_ratio]
  ,bm.physical_device_name
from
  msdb.dbo.backupset bs
  join msdb.dbo.backupmediafamily bm on (bs.media_set_id = bm.media_set_id)
  join (select database_name
          ,max(backup_finish_date) last_backup
        from msdb.dbo.backupset
        where type = 'D'
        group by database_name) lb on (bs.database_name = lb.database_name and bs.backup_finish_date = lb.last_backup)
where type = 'D'

Improved Laziness

I’m really digging Powershell the more that I work with it.  It’s nice to have so much rich functionality within the operating system and not having to fuss with compilers or syntax.  One of my more recent projects was automated restore testing for our servers.  Now, while I won’t be showing that to you (yet), working with that script got me more time with the SMO.

The SMO is incredibly powerful.  You have so much functionality that it can be a little overwhelming.  There’s also a lot of stuff you can do in the SMO that you can also do in standard T-SQL.  As with anything, it’s a matter of using the right tool.  I’m finding that what makes Powershell the right tool is its ability to work outside of the database, avoiding some of the restrictions placed (rightly so) on the engine and how it interacts with the OS.  It makes Powershell ideal for things like file manipulation and talking with the OS.  It also gave me a slightly cleaner way to generate restore scripts.

If you have seen my previous restore building script, it was handy that it worked in T-SQL, but because it had no visibility into the file system, I had no real way to logically combine fulls, diffs, and transaction logs to build a single script.  Sure, I could probably assemble something looking at msdb.dbo.backupset, but that only works if I’ve got access to the server the backups were done on.  By leveraging Powershell and the SMO, I can now read the file system, interrogate each backup file to see that it fits in the LSN sequence, and build out a SQL script.

The steps are simple:

  1. Point the script at a target directory where all the backup files live and a SQL instance that has access to the file directory (typically, this is the instance you’ll run the restore on).  You can also declare a database name (if you want something different) and an output directory (which will otherwise default to your My Documents folder).
  2. The script will then use the SMO to read in the backup files, coordinating the LSNs to order them properly for the restore.
  3. The entire restore script will be written to the output directory.

Some caveats:

  • The script identifies files based on extension (.bak for Full, .dff for Differential, and .trn for Transaction Log).  This is based on how I write out my backup files, so this is the key piece of logic that will need to be altered to make this more generic.
  • The full restore will “move” the files to their original location.  If restoring to a different server, you’ll want to alter these file locations.

Now, I’m by no means a Powershell guru, but I do like making my life easier through scripting.  There are probably more elegant ways to do this, but it’s clean enough for my purposes.  Hopefully it makes your life easier too!

(Feel free to download the script here.)

<#
.SYNOPSIS
Builds Database Restore script, coordinating Full, Diff, and Log backups.
.DESCRIPTION
Generates a database restore .sql script for restoring a database.
Mike Fal (http://www.mikefal.net) 2012-07-09
.PARAMETER
String
    dir(REQUIRED) - Target directory where backup files reside
    server(REQUIRED) - Target server used by the SMO to build the restore script.  Should be the server you want to restore to.
    database - Database name to restore. If blank, database name from the backup will be used.
    outputdir - Output directory for script.  If empty, user's My Documents will be used.
.EXAMPLE
    .\Build-Restore.ps1 -dir "C:\database_backups" -server "localhost"
.NOTES
    Script acquires files based on extension:
        .bak = Full
        .dff = Differential
        .trn = Transaction log
#>
param([parameter(Mandatory=$true)][string] $dir,
      [parameter(Mandatory=$true)][string] $server,
      [string] $database,
      [string] $outputdir=([Environment]::GetFolderPath("MyDocuments")))

#load assemblies
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null

#processing functions
function Get-Restore{
    Param($db,
          $backupfile)

    $rs=new-object("Microsoft.SqlServer.Management.Smo.Restore")
    $rs.Devices.AddDevice($backupfile.FullName, "File")
    $rs.Database=$db
    $rs.NoRecovery=$true
    $rs.Action="Database"

    return $rs
}#Get-Restore

function Get-Header{
    Param($rs,$srv)

    $dt=$rs.ReadBackupHeader($srv)
    return $dt.Rows[0]
}#Get-Header

#START SCRIPT
$sqlout = @()
$smosrv = new-object ('Microsoft.SqlServer.Management.Smo.Server') $server

$full = gci $dir | where {$_.name -like "*.bak"} | Sort-Object LastWriteTime -desc | Select-Object -first 1
$diff = gci $dir | where {$_.name -like "*.dff"} | sort-object LastWriteTime -desc | select-object -first 1
$trns = gci $dir | where {$_.name -like "*.trn"} | sort-object LastWriteTime

#initialize and process full backup
$restore=Get-Restore $database $full
$hfull=Get-Header $restore $smosrv
if($database.Length -eq 0)
{
    $database=$hfull.DatabaseName
    $restore.Database=$database
}

$LSNCheck=$hfull.FirstLSN
$files=$restore.ReadFileList($smosrv)
foreach($file in $files){
        $newfile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile") ($file.LogicalName,$file.PhysicalName)
        $restore.RelocateFiles.Add($newfile) | out-null
}

$sqlout+="/****************************************************"
$sqlout+="Restore Database Script Generated $(Get-Date)"
$sqlout+="Database: "+$database
$sqlout+="****************************************************/"
$sqlout+="--FULL RESTORE"
$sqlout+=$restore.Script($smosrv)

#process differential backups
if($diff -ne $null){
    $restore=Get-Restore $database $diff
    $hdiff=Get-Header $restore $smosrv

    if($hdiff.DifferentialBaseLSN -eq $LSNCheck){
        $sqlout+="--DIFF RESTORE"
        $sqlout+=$restore.Script($smosrv)
        $LSNCheck = $hdiff.LastLSN
    }
    else{
        $LSNCheck = $hfull.LastLSN
    }
}

#process transaction log backups
if($trns -ne $null){
    $sqlout+="--TRN LOG RESTORE"

    foreach ($trn in $trns){
        $restore=Get-Restore $database $trn
        $htrn=Get-Header $restore $smosrv
        if($htrn.FirstLSN -le $LSNCheck -and $htrn.LastLSN -ge $LSNCheck){
            $sqlout+=$restore.Script($smosrv)
            $LSNCheck = $htrn.LastLSN
        }
    }
}

#Write final recovery line
$sqlout+="`r`n"
$sqlout+="--COMPLETE RESTORE/ONLINE DB"
$sqlout+="RESTORE DATABASE "+$database+" WITH RECOVERY"

#output script file
$sqlout | Out-File "$outputdir\restore_$database.sql"

Power(shell)ing your CMS

If you’re looking at Powershell scripts for SQL Server management, you’ll find a lot of them consume a list of servers in order to perform an action. For example, I put a script out a couple of weeks ago that can monitor the service state for your SQL Servers, emailing out a notice if services are down. This is handy, but maintaining a server list in a file is hardly elegant. For something more dynamic, many shops(including my own) will maintain a list of their servers in their own tracking database, giving you a resource you can query. However, there’s an easier way to handle this if you’re using Central Management Server.

The nice thing about Central Management Server is that, just like SQL Server instances, it can be accessed through the SMO with Powershell. This gives us a handy way to browse our server listings as a folder structure. Just open up your Powershell window and (assuming you’ve loaded your SQLPS module/snapin), change your directory to:

CD SQLSERVER:\SQLRegistration\Central Management Server Group

Once you’re there, get a listing of the contents with your favorite command (dir/ls/gci) and you’ll see your registered Central Management Server repositories. You can browse from there into the appropriate repository and see a listing of all your registered instances. Now, keep in mind, you’ll still have to browse the directory structure if you’ve created folders to organize your servers into, but now we have access to them as if they were files in a folder.

Let’s take it a step further now and build out a server listing using some pretty basic commands within Powershell. We only have to keep in mind two gotchas:

  • The SMO recognizes both the folders and the instances as “containers”, so we can’t filter by PSIsContainer being false (the object property indicating container-ship).
  • The instance object has both Name and ServerName properties. The Name is the descriptive name, while ServerName is the specific name of the instance we can connect to.

Keeping this in mind, we can build a list by running the following command in the repository:

ls –recurse | where {$_.ServerName –ne $NULL}

Breaking this down, it’s a standard looking command for Powershell. By using “-recurse”, we tell Powershell to recursively browse through all the folders and subfolders in our directory for the lookup. Filtering on “$_.ServerName –ne $NULL” means we exclude the directories (since they don’t have a ServerName). Now we have an array of all of our servers that we can use for processing loops:

$srvs= ls –recurse | where {$_.ServerName –ne $NULL}

ForEach ($srv in $srvs)
{
  $inst=$srv.ServerName
  #do some stuff
}

Another handy element in that object is the ConnectionString property. This provides you with a ready made connection string to use if you need to connect to SQL Server using OLEDB or ODBC calls. While I haven’t had a need to use this yet, I can definitely see the benefits. There are also other properties that can be explored and used, depending on your scenario.
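
As a quick sketch (run from within the repository path like the commands above; the query is just an illustration), the connection string drops straight into an ADO.NET connection:

#grab a registered server from the CMS listing and reuse its connection string
$reg = ls -recurse | where {$_.ServerName -ne $NULL} | select -first 1

$conn = New-Object System.Data.SqlClient.SqlConnection $reg.ConnectionString
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT @@SERVERNAME;"
$cmd.ExecuteScalar()
$conn.Close()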

All of this makes for a great extension of Central Management Server, allowing DBAs to leverage the repository for more than just centralizing lookups in Management Studio. This can be a handy tool if you need something with a little less administrative overhead than a server inventory database or some sort of file listing. This means that the Central Management Server, which most of us see as something we only really use within Management Studio, can be leveraged to be so much more.

The Tools to Survive

Last week, I blogged about monitoring backups, service status, and disk space usage. These are the keys to survival for you and your databases. There are several ways that these can be monitored and there are many third party tools that you can use for these tasks. However, third party tools are often not an option.  Fortunately, you can also make use of SQL and Powershell to handle basic monitoring. Here’s a couple scripts that you can use in your environments.

Overdue Backups

The backupset table makes it easy to know when your backups are being taken. By making use of a couple queries, you can find out if your databases are actually being backed up and take appropriate action. This particular script is designed to check for any databases that haven’t had a full backup (type=’D’) in the last three days and email out an alert if any are found.

with db_fulls(dbname,last_backup)
as
(select
	database_name,
	max(backup_start_date)
from
	msdb.dbo.backupset
where
	type='D'
group by
	database_name)
select
	d.name,
	d.recovery_model_desc,
	df.last_backup
into #results
from
	sys.databases d
	left join db_fulls df on (d.name = df.dbname)
where
	isnull(df.last_backup,'1/1/1900') < dateadd(dd,-3,getdate()) 

if (select count(*) from #results) > 0
begin
	DECLARE @tableHTML  VARCHAR(MAX)
	,@emailrecip varchar(200)

	--SET YOUR EMAIL ADDRESS HERE
	set @emailrecip = 'you@yourcompany.com'

	SET @tableHTML =
		N'<H1>Overdue Backup Report</H1>' +
		N'<H2>No full backups in the last 3 days</H2>' +
		N'<table width="1250" border="1">' +
		N'<tr>
		<th align="left" width="40%">Database</th>
		<th align="left" width="30%">Recovery Model</th>
		<th align="left" width="30%">Last Backup</th>
		</tr>' +
		CAST ( ( SELECT td = rtrim(name), ''
					,td = rtrim(recovery_model_desc), ''
					,td = isnull(convert(varchar(100),rtrim(last_backup),100),'NO BACKUP'), ''
				from #results
				order by name
				FOR XML PATH('tr'), TYPE ) AS VARCHAR(MAX) ) +
		N'</table>' ;

	exec msdb.dbo.sp_send_dbmail
		@recipients = @emailrecip,
		@subject = 'Overdue Backup Report',
		@body = @tableHTML,
		@body_format = 'HTML' ;
end

drop table #results

The basics here can also be modified for your particular needs. In my own environment, I use this and a second script that monitors for log backups (type='L'). I schedule the full backups to run every morning. For the log backups, I run the report every 4 hours and it alerts me if a database hasn’t had a log backup in the last 12 hours.

Stopped Services

It’s hard to monitor SQL Services from SQL Server itself, so for this task I turned to Powershell. By making use of the Get-Service commandlet, you can collect all service related information from a computer. Add some functionality for email, and you can build a handy little service monitoring script.

$srvlist = Get-Content $args[0]
$report=@()

foreach ($server in $srvlist)
{
	try
	{
          $svcs=Get-Service -ComputerName $server | where {$_.name -like "*SQL*"}
		foreach ($svc in $svcs)
		{
			$output = New-Object System.Object
			$output | Add-Member -type NoteProperty -name Instance -value $server
			$output | Add-Member -type NoteProperty -name SvcName -value $svc.Name
			$output | Add-Member -type NoteProperty -name DisplayName -value $svc.DisplayName
			$output | Add-Member -type NoteProperty -name Status -value $svc.Status
			$report+=$output
		}
	}
	catch
	{
		$output = New-Object System.Object
		$output | Add-Member -type NoteProperty -name Instance -value $server
		$output | Add-Member -type NoteProperty -name SvcName -value "No_Service_Collected"
		$output | Add-Member -type NoteProperty -name DisplayName -value "No Service Collected - COLLECTION ERROR"
		$output | Add-Member -type NoteProperty -name Status -value "ERROR"
		$report+=$output
	}
}

#Set these for your environment
$smtp="yourmail.server.com"
$from="SvcAlert@yourserver.com"
$to="You@yourcompany.com"

if(($report | where {$_.Status -ne "Running"}).Length -gt 0)
{
	[string]$body=$report|where{$_.Status -ne "Running"}| ConvertTo-HTML
	Send-MailMessage -To $to -from $from -subject "Service Monitor Alert!" -smtpserver $smtp -body $body -BodyAsHtml
}

To call this script, you’ll want a text file that contains a list of your servers (passed through as an argument to the script).  Then, run the script as a Windows Scheduled Task.  I recommend setting it up to check the status regularly every few minutes, running from a different computer than your SQL Server. This guards against your monitoring process failing in case your SQL Server shuts down. You will also need to ensure that whatever account the task runs under has remote access to your SQL Server.
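
One way to register that task is with schtasks; a hedged example is below (the script path, server list, schedule, and account are all placeholders for your own environment):

#run from an elevated prompt on the monitoring machine; /rp * prompts for the account password
schtasks /create /tn "SQL Service Monitor" /sc minute /mo 5 /ru "DOMAIN\monitoraccount" /rp * `
	/tr "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\ServiceMonitor.ps1 C:\Scripts\serverlist.txt"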

Free Space

There is actually quite a bit written about this particular topic. There are two ways you can go about it, depending on whether you want to monitor your space from within SQL Server or externally. One option is to use xp_fixeddrives to query the data directly within SQL Server. If you’ve upgraded to SQL 2008 R2 SP1, you can also make use of sys.dm_os_volume_stats. Personally, I lean more towards making use of Powershell (again) because of the flexibility it gives me. For this, there are several options, but a good place to start is this script off of Simple Talk. Any of these choices will give you the information you need.
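
If you want a starting point in Powershell, here’s a minimal sketch using the same WMI approach as the scripts above (the server name is a placeholder):

#DriveType=3 limits the results to local fixed disks
gwmi Win32_LogicalDisk -ComputerName "DBASERVER01" -Filter "DriveType=3" |
	Select-Object DeviceID,
		@{n="SizeGB";e={[math]::Round($_.Size/1GB,2)}},
		@{n="FreeGB";e={[math]::Round($_.FreeSpace/1GB,2)}},
		@{n="PctFree";e={[math]::Round(($_.FreeSpace/$_.Size)*100,1)}}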

With these basic tools, you can start monitoring your systems for continued operation as well as being alerted when critical issues arise.  From here, you can then move to monitor other aspects of your server and database health.