Art of the DBA


A PoSH way to skin the cat

Today my friend John Morehouse(@SqlrUs) posted a handy T-SQL script to configure your SQL Server memory based on some rules provided by Jonathan Kehayias(@SQLPoolBoy).  This is a great script and helps automate a lot of the work a DBA needs to do when setting up a new instance.  Hopefully, you know by now about my passion for automation and standardization.  However, when reading John’s post, I had the following internal conversation:

“Wow, self, John’s got a neat little script here.”

“Yeah, but T-SQL is so passé.  Everyone does it.  Plus, not sure how easily we could add that into a server deployment script if we wanted total hands-off-ness.”

“Self, I know what you mean.  Nothing wrong with it, it just doesn’t fit my own particular….oh…particular….”

Idiom?

“Yes!  Idiom!  Hrmmm…how about a Powershell script?”

“A Powershell script?  Brilliant!”

And so I sat down and cranked out the following script, which essentially does what John’s script does, just using Powershell.

<#
.SYNOPSIS
 Configures a SQL Server instance per Jonathan Kehayias' guidelines.
.DESCRIPTION
 This script will configure your SQL Server instance per the guidelines
 found in Jonathan Kehayias' blog post: http://www.sqlskills.com/blogs/jonathan/how-much-memory-does-my-sql-server-actually-need/
 The rules are:
 - 1GB for initial OS reserve
 - +1GB per 4GB server RAM up to 16GB
 - +1GB per 8GB server RAM above 16
.PARAMETER instance
 SQL instance name, i.e. localhost\SQL2012, DBASERVER01
.PARAMETER apply
 Switch parameter, call if you want to actually apply the changes. Otherwise, a report will be produced.
.EXAMPLE
 Configure-SQLMemory -instance DBASERVER01 -apply
#>

param([parameter(Mandatory=$true)][string] $instance
 , [Switch] $apply
 )

#load SMO
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

if($instance.Contains("\")){
 $sqlhost = $instance.Split("\") | Select -First 1
 }
else{
 $sqlhost = $instance
 }

#set memory variables
$totalmem = (gwmi Win32_ComputerSystem -computername $sqlhost).TotalPhysicalMemory/1GB
$sqlmem = [math]::floor($totalmem)

#calculate memory
while($totalmem -gt 0){
 if($totalmem -gt 16){
 $sqlmem -= [math]::floor(($totalmem-16)/8)
 $totalmem=16
 }
 elseif($totalmem -gt 4){
 $sqlmem -= [math]::floor(($totalmem)/4)
 $totalmem = 4
 }
 else{
 $sqlmem -= 1
 $totalmem = 0
 }
}

#report current and proposed values; only alter the config if -apply was specified
$srv = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server $instance
 "Instance:" + $instance
 "Max Memory:" + $srv.Configuration.MaxServerMemory.ConfigValue/1024 + " -> " + $sqlmem
 "Min Memory:" + $srv.Configuration.MinServerMemory.ConfigValue/1024 + " -> " + $sqlmem/2
if($apply){
 $srv.Configuration.MaxServerMemory.ConfigValue = $sqlmem * 1024
 $srv.Configuration.MinServerMemory.ConfigValue = $sqlmem/2 * 1024
 $srv.Configuration.Alter()
 "Configuration Complete!"
 }

Now it should be noted that I’m not going to tell you which of these scripts is “better”. Neither is, really; they’re just different ways of approaching the problem. And that’s the fun of it. When working in technology, cats can be skinned in multiple ways; we just have to pick one. Whether it’s T-SQL, Powershell, C#, or something else, the point is that you want scripts to automate your work.

What was also cool about this was it gave me another exercise to practice Powershell.  After all, practice makes perfect and, when learning something, any excuse to use it is a good one.  John and I actually had a conversation shortly after I wrote the script about these sorts of opportunities, and we’ll continue to look for ways to challenge each other and keep learning.

Edit:  Well, that was embarrassing.  Amazing how sometimes you can get the math a little screwy.  I’ve updated this script as of 9:44 PM MST to correct my having the math backwards.  If you grabbed/reviewed this script beforehand, please make sure to grab the updated version.

The Moving Van

A problem DBAs commonly face is moving database files around, especially when we’re trying to manage space on a drive.  I know that I’ve had to do it and, when you have a database instance with hundreds of databases, you need an automated way to manage this.  Recently, someone on DBA Stackexchange had this specific problem and I was able to provide a Powershell script that I’ve used in the past to manage it.  And when I haven’t blogged for a while, a good way back into it is to share a Powershell script.  Mostly because it’s easy.  :)

First, let me talk about a pattern I commonly use.  A lot of times, I could write a single script to take an action, but instead I build a script to write a script.  The main reason I do this is that it’s very hard, especially with many of our administrative tasks, to write a one-size-fits-all process.  So instead, I have a tool that gets me 90% of the way, which I can then tweak or alter to address the final 10% and fit the specific need.

That’s what this script is doing.  It doesn’t perform the actual file moves itself, but writes out a script that will execute those file moves.  It’s fairly generic, creating a script that moves all the database files from their current location (wherever that may be) to two common directories: data and log. It’s rare that you would move all your database files in this fashion, but you can edit the generated script to suit your specific situation.

This script makes use of the techniques detailed here by Ben Miller(@dbaduck), with one significant exception. I’ve found that the .SetOffline() method for databases is very unreliable because you’re basically executing your normal “ALTER DATABASE [foo] SET OFFLINE;”. The main problem with this command is it has to wait for all connections to close gracefully before it can proceed, something that doesn’t happen too frequently. I prefer to use “ALTER DATABASE [foo] SET OFFLINE WITH ROLLBACK IMMEDIATE;” because it forces all connections to close and open transactions to roll back, no waiting involved. Since the SMO methods don’t have an option for that, I call the SQL using the Invoke-Sqlcmd cmdlet. I find this is a much cleaner option for my script.
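
To make the difference concrete, here’s a minimal sketch of the two approaches (the instance name, database name, and variable name are placeholders for illustration):

#SMO approach: issues a plain SET OFFLINE and has to wait for connections to close on their own
#$srv = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server "MYSQLSERVER"
#$srv.Databases["MyDatabase"].SetOffline()

#Invoke-Sqlcmd approach: forces connections closed and rolls back open transactions immediately
Invoke-Sqlcmd -Query "ALTER DATABASE [MyDatabase] SET OFFLINE WITH ROLLBACK IMMEDIATE;" -ServerInstance "MYSQLSERVER" -Database "master"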

Specifically, the script accepts 4 parameters:

  • $newdata – New data file path

  • $newlog – New log file path

  • $instance – SQL instance you want to move files for (defaults to local)

  • $outputfile – Output file name (defaults to Documents, FileMover.ps1)

When you run it, it will spit out a FileMover.ps1 script to your Documents folder (though, using that last parameter, you can change the output location and name).  This FileMover.ps1 will be the script you can then edit to suit your specific needs.  The script is below, but you can also download it here.

<#
.SYNOPSIS
   Builds a Powershell script for moving many database files from one
	 set of directories to another.
.DESCRIPTION
   Used to generate a Powershell script for moving many database files
	 from one location to another.  This will typically be used with a
	 large number of databases that need to be relocated.  Caveats for
	 this script include:
	 		-Destination for all files will be the same.
			-User that runs the script must have access to source and destination locations
			-This uses the 2008 R2 version of the SMO.

	The script generates a FileMover.ps1 script (by default in My
	Documents).  The reason for generating a separate script is so
	specific configurations can be edited/updated before actually
	executing the move.

.PARAMETER instance
   Instance owning the databases to be moved
.PARAMETER newdata
   New data file location, no trailing '\'.  example: "C:\DBFiles\Data"
.PARAMETER newlog
   New log file location, no trailing '\'.  example: "C:\DBFiles\Log"
.PARAMETER outputfile
   Full path and name of output file.  By default, FileMover.ps1 in My Documents.

.EXAMPLE
   .\Build-FileMover.ps1 -newdata "C:\DBFiles\Data" -newlog "C:\DBFiles\Log"
#>

param([parameter(Mandatory=$true)][string] $newdata,
			[parameter(Mandatory=$true)][string] $newlog,
			[string] $instance="localhost",
      [string] $outputfile=([Environment]::GetFolderPath("MyDocuments"))+"`\FileMover.ps1")

#load SMO
Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

#Create server object and output filename
$server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server $instance

#get your databases
$db_list=$server.Databases

#build initial script components
"Add-PSSnapin SqlServerCmdletSnapin100" > $outputfile
"Add-PSSnapin SqlServerProviderSnapin100" >> $outputfile
"[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null" >> $outputfile
"`$server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server `"$instance`"" >> $outputfile

foreach($db_build in $db_list)
{
	#only process user databases
	if(!($db_build.IsSystemObject))
	{
		#script out all the file moves
		"#----------------------------------------------------------------------" >> $outputfile
		"`$db=`$server.Databases[`""+$db_build.Name+"`"]" >> $outputfile

		$dbchange = @()
		$robocpy =@()
		foreach ($fg in $db_build.Filegroups)
		{
			foreach($file in $fg.Files)
			{
				$shortfile=$file.Filename.Substring($file.Filename.LastIndexOf('\')+1)
				$oldloc=$file.Filename.Substring(0,$file.Filename.LastIndexOf('\'))
				$dbchange+="`$db.FileGroups[`""+$fg.Name+"`"].Files[`""+$file.Name+"`"].Filename=`"$newdata`\"+$shortfile+"`""
				$robocpy+="ROBOCOPY `"$oldloc`" `"$newdata`" $shortfile /copyall /mov"

			}
		}

		foreach($logfile in $db_build.LogFiles)
		{
			$shortfile=$logfile.Filename.Substring($logfile.Filename.LastIndexOf('\')+1)
			$oldloc=$logfile.Filename.Substring(0,$logfile.Filename.LastIndexOf('\'))
			$dbchange+="`$db.LogFiles[`""+$logfile.Name+"`"].Filename=`"$newlog`\"+$shortfile+"`""
			$robocpy+="ROBOCOPY `"$oldloc`" `"$newlog`" $shortfile"
		}

		$dbchange+="`$db.Alter()"
		$dbchange+="Invoke-Sqlcmd -Query `"ALTER DATABASE ["+$db_build.Name+"] SET OFFLINE WITH ROLLBACK IMMEDIATE;`" -ServerInstance `"$instance`" -Database `"master`""

		$dbchange >> $outputfile
		$robocpy >> $outputfile

		"Invoke-Sqlcmd -Query `"ALTER DATABASE ["+$db_build.Name+"] SET ONLINE;`" -ServerInstance `"$instance`" -Database `"master`""  >> $outputfile
	}
}

Server Inventories

Ok, I’ve been promising this to people for a while, time to get at it.  Brace yourself for a long one.

As enterprise DBAs, we usually have many servers to manage.  Whether it’s around ten, a hundred, or a thousand (or more!), tracking this manually is a bear.  However, there are still many shops that maintain convoluted spreadsheets and other manual documentation to track their inventory.  Manual sucks.  As John Sansom(@SQLBrit) says, the best DBAs automate everything, including their server inventories.

There are a lot of approaches and third-party tools you can use for your own environment.  In my mind, I’m totally cool with spending the money on a third-party tool because of the depth they can provide in my tracking and monitoring.  A lot of times, though, we might not have access to those tools or we may need more information than they provide.  It’s at this point we sit down at our keyboard, crack our fingers, and start banging out some code.  Of course, the code I started banging out was Powershell (shocking, I know!).

 I didn’t start from scratch, though.  When putting my own script together (which we’ll get to later), I found two great resources that got me about 80% of the way:

 Allen White(@SQLRunr) – Let PowerShell do an Inventory of your Servers
Colleen Morrow(@ClevelandDBA) – Building A SQL Server Inventory

These are both great resources that will get you started, but I needed a little more.  The biggest challenge I had was a need to poll cluster information and collect SQL instance information separately from my machine information.  I also wanted to do some consolidation of my data sets.  So I borrowed heavily from both Allen and Colleen, then molded it to my own purposes.

Process

Before we dig into the code, I first want to touch on the collection process and how I handle it.  It’s not complex, but code doesn’t make sense until you understand the “why” of the different components.

What I’m collecting

  • SQL Instance information – Whether it’s an instance living on a cluster node, a physical standalone box, or a VM host, I want the SQL instance information without meshing it in directly with the machine it lives on, though I DO collect the physical host name so I can identify which machine information I need.

  • Physical Machine information – Since I’m not tying the machine info directly with the SQL Instance collection, I want the physical information for the servers my instances live on.  This is where I get everything that isn’t tied directly to the SQL Instance.

How I’m collecting it

  • I’ve got 4 tables, 2 live tables and 2 stage tables, that I use for the collection.  Nothing out of the ordinary, but there’s an instance table with a stage table and a machine table with a stage table.

  • I use a Powershell script to perform the collection itself.  It uses the instance table as its primary driver: the idea is that I provide the initial population of instance names to track, and the inventory process fully populates the rest of the information.

  • When the Powershell script completes, it executes a stored procedure to load data from the stage tables into the live tables.  The load is simple: the instance table is updated with data from the stage and the machine information is deleted and replaced.

  • The Powershell script is run by a SQL Agent job running under a specific monitoring Active Directory account created as a credential in SQL Server.  The gotcha here is that the monitoring account needs domain access to the clusters and machines it’s going to be querying.

The Code

I won’t list the full scripts here, but you can download and review them here:

SQL Objects
Powershell script

Let’s first talk about the dependencies (there’s a quick loading sketch after the list):

  • The SMO, so you’re going to need to have either your script or your profile load the 2008 snap-ins or import the 2012 sqlps module.

  • Chad Miller’s Out-DataTable and Write-DataTable (I combined them into one file, DataTables.ps1).

  • The FailOverClusters module, part of the Remote Admin pack if you’re not running the code on a machine with the Windows Server OS, part of the Fail Over Cluster components if you are.
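
For reference, here’s a minimal sketch of what loading those dependencies might look like at the top of the script or in your profile (the path to DataTables.ps1 is a placeholder; use either the 2008 snap-in lines or the 2012 module import, whichever fits your environment):

#2008: load the snap-ins (for 2012, Import-Module sqlps instead)
Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

#Chad Miller's Out-DataTable and Write-DataTable, combined into one file
. "C:\Scripts\DataTables.ps1"

#FailoverClusters module for polling cluster information
Import-Module FailoverClusters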

There are two key functions I use, Get-Instance to gather the SQL instance information and Get-Machine to gather the machine information:

#Collects and returns SQL Instance information
function Get-Instance([string]$instcoll,[int]$id,[string]$name)
{
try
{
	$smo = new-object ('Microsoft.SqlServer.Management.Smo.Server') $name
	$sname = $smo.NetName
	$iname = $smo.InstanceName
	if($iname.Length -eq 0 -or $iname -eq $null) { $iname = "MSSQLSERVER" }

	$managedcomp = new-object ('Microsoft.SqlServer.Management.Smo.WMI.ManagedComputer') $sname
	$output = New-Object System.Object

	$port = $managedcomp.ServerInstances[$iname].ServerProtocols["Tcp"].IPAddresses["IPAll"].IPAddressProperties["TcpPort"].Value
	$ip = (Test-Connection $sname -count 1).IPV4Address.ToString()

	$output | Add-Member -type NoteProperty -name InstanceId -value $id
	$output | Add-Member -type NoteProperty -name SQLVersion -value $smo.VersionString
	$output | Add-Member -type NoteProperty -name SQLVersionDesc -value $smo.ProductLevel
	$output | Add-Member -type NoteProperty -name SQLEdition -value $smo.Edition
	$output | Add-Member -type NoteProperty -name MemoryMinMB -value $smo.Configuration.MinServerMemory.RunValue
	$output | Add-Member -type NoteProperty -name MemoryMaxMB -value $smo.Configuration.MaxServerMemory.RunValue
	$output | Add-Member -type NoteProperty -name MAXDOPVal -value $smo.Configuration.MaxDegreeOfParallelism.RunValue
	$output | Add-Member -type NoteProperty -name IP -value $ip
	$output | Add-Member -type NoteProperty -name Port -value $port
	$output | Add-Member -type NoteProperty -name PhysicalHost -value $smo.ComputerNamePhysicalNetBIOS
	return $output
}
catch
{
	write-host "Error collecting $name"
	return $null
}
}#Get-Instance

#Get host machine information via WMI
function Get-Machine([string]$name,[string]$clst)
{
 try
{
	$comp = gwmi Win32_ComputerSystem -Computer $name | select Model,Manufacturer,TotalPhysicalMemory
	$proc = gwmi Win32_Processor -Computer $name | select NumberOfLogicalProcessors,MaxClockSpeed
	$os = gwmi Win32_OperatingSystem -Computer $name | select OSArchitecture,Name,Version,ServicePackMajorVersion,ServicePackMinorVersion

	$output = New-Object System.Object

	$output | Add-Member -type NoteProperty -name MachineName -value $name
	$output | Add-Member -type NoteProperty -name Model -value $comp.Model
	$output | Add-Member -type NoteProperty -name Manufacturer -value $comp.Manufacturer
	$output | Add-Member -type NoteProperty -name Architechture -value $os.OSArchitecture
	$output | Add-Member -type NoteProperty -name PhysicalCPUs -value $(if(!$proc.Length){"1"}else{$proc.Length})
	$output | Add-Member -type NoteProperty -name LogicalCPUs -value ($proc | Measure-Object NumberOfLogicalProcessors -sum).Sum
	$output | Add-Member -type NoteProperty -name CPUSpeed -value ($proc | Measure-Object MaxClockSpeed -max).Maximum
	$output | Add-Member -type NoteProperty -name MaxMemory -value ($comp.TotalPhysicalMemory/1MB)
	$output | Add-Member -type NoteProperty -name OSName -value $os.name.split("|")[0]
	$output | Add-Member -type NoteProperty -name OsVersion -value $os.Version
	$SPMaj = $os.ServicePackMajorVersion
	$SPMin = $os.ServicePackMinorVersion
	$output | Add-Member -type NoteProperty -name SPVersion -value "$SPMaj.$SPMin"
	$output | Add-Member -type NoteProperty -name Cluster -value $clst

	return $output
}
catch
{
	write-host "Error collecting $name"
	return $null
}
}#Get-Machine

Both leverage SMO and WMI to get the relevant info.  I’ll let you sort through the individual elements I’m pulling; it’s fairly standard config info.  Because I think in terms of tables, I treat data gathering as building a collection of objects that are essentially rows: I append each new object, with all its properties, to a larger collection, then use Write-DataTable to push that data directly into the related stage table.
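
As a rough sketch of that pattern (the server, database, and table names are placeholders, and the parameter names assume Chad Miller's published Write-DataTable):

#$instances is a collection of objects returned by Get-Instance, one object per SQL instance
#Convert the collection to a DataTable and push it into the stage table
$dtable = $instances | Out-DataTable
Write-DataTable -ServerInstance "MYMONITORSERVER" -Database "DBAInventory" -TableName "instance_stage" -Data $dtable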

Look at the entire script for my full process.  I’ve been pretty happy with it and it’s been chugging away in my environments now for about 8-9 months without a whole lot of interaction from me.  If we stand up a new server, I just pop that name in the instance table, run the job, and everything gets updated.  What made me most proud was that we had a recent meeting with a new VP and pulled up the SSRS report built off of this information to give him an idea of what our environments look like.  His comment was “This is the most documentation I’ve seen since I got here.”  The beauty is that it was provided without a whole lot of manual maintenance on my part.

This is very much an evolving work on my part, but hopefully gives you some insight into how I manage my environments.  I’ve still got lots of ideas on improvements, including automatic population (or other use) of Central Management Server.  If you have any ideas on how this can be improved, I’d love to hear them.

Slipstreamin’ Away

It’s nothing new: I love anything that makes less work for me.  One thing that has been a great contributor to this has been slipstream installs.  While that’s a bit of a fancy name, basically it’s an install of SQL Server where the service packs and cumulative updates are rolled up into one install package.  This saves the tedious chore of constant install after install to get a new SQL Server installation up to a patch level consistent with the rest of your enterprise.

Now, the problem with slipstream installs is that they’re a pain in the butt to create, at least previous to SQL 2012.  Not difficult, but tedious.  You have to copy some files, merge some directories, and edit some config files.  It’s nothing that can’t be done, but very repetitive.  To boot, if something isn’t copied or configured right, the whole package won’t work and you won’t know until you attempt the install (and it fails).

Enter Powershell.

Building a slipstream is no different than any other task that we should automate.  After all, we’re doing the same actions every time, the only thing that might change are the files we use to build our slipstream.  What I’ve done is put together a script that does all this for me.  As with any script, there are a few rules to how you use it:

  • The script was designed for building SQL 2008 and 2008 R2 slipstreams.  I tried it with 2005, but that process is different enough that this script won’t work.
  • RTMSource and output are the two mandatory parameters.  The SPSource and CUSource parameters will be skipped if left null.  Basically, if you are making just a SQL 2008 R2 SP2 slipstream, you would only need to supply the service pack files.
  • The script expects the base install (the RTM media) to be un-extracted and the service packs or cumulative updates in their .exe form.  The script will extract the additional files from the .exe into the slipstream package (see the example after this list).
  • I built the script to work along my organizational thought process, which means I put each set of files in their own subdirectory.  This shouldn’t be a big deal, but be aware that if you point the script at a location, it won’t just pull out the files it needs, it will try and use every .exe file in the subdirectory.
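
If you’re curious what that extraction looks like on its own, it’s just the patch executable’s /x: switch (the file name below is only an example; point it at whatever SP or CU executable you’ve downloaded):

#Extract a service pack executable into a working directory, quietly
.\SQLServer2008R2SP2-x64-ENU.exe /x:"C:\SQLBuild\PCU" /q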

Other than that, here’s the script:

<#
.SYNOPSIS
   Builds a SQL Server slipstream install.
.DESCRIPTION
   This script will take the various components of a SQL Server install (RTM binaries, SPs, CUs) and assemble
	 them to create a slipstream install.  To use this script, you will need the unextracted RTM binaries (.iso
	 or DVD media) and the executables for the SP and CUs you want to use.
	 Mike Fal (http://www.mikefal.net) 2-28-2013
.PARAMETER RTMSource
   File path for the RTM source files
.PARAMETER SPSource
   File path for the Service Pack executables (if no value is passed, no SP will be added)
.PARAMETER CUSource
   File path for the Cumulative Update executables (if no value is passed, no CU will be added)
.PARAMETER output
   File path where the slipstream will be written; the directory will be created if it doesn't exist.
.EXAMPLE
   .\Build-Slipstream.ps1 -RTMSource "C:\Users\mfal\Downloads\en_sql_server_2008_r2_developer_x86_x64_ia64_dvd_522665"
	 												-SPSource "C:\Users\mfal\Downloads\2008R2SP1"
													-CUSource "C:\Users\mfal\Downloads\2008R2SP1CU2"
													-output "C:\SQL2008SP1CU2_Slipstream"
#>

param([parameter(Mandatory=$true)][string] $RTMSource,
			[parameter(Mandatory=$false)][string] $SPSource,
      [parameter(Mandatory=$false)][string] $CUSource,
      [parameter(Mandatory=$true)][string] $output)

#function to extract and copy SPs and CUs
function Modify-Source{
	param($TYPE="PCU",$WORKDIR,$PATCHDIR)
	$PATCHFILES=get-childitem "$PATCHDIR\*" -include *.exe
	if (!(test-path("$WORKDIR\$TYPE"))) {mkdir $WORKDIR\$TYPE }
	if (!(test-path("$WORKDIR\$TYPE"))) {Throw "Error: Patch file destination not available"}

	set-location $PATCHDIR

	foreach($FILE in $PATCHFILES)
		{

			iex "./$($FILE.basename)$($FILE.extension) /x:""$WORKDIR\$TYPE"" /q"
			"Waiting for $FILE to extract..."
			while (@(Get-Process $FILE.Name.Replace(".exe","") -ErrorAction SilentlyContinue).Count -ne 0)
			{
			Start-Sleep 1
			}
		}
  "Copying $TYPE files..."
	robocopy "$WORKDIR\$TYPE" "$WORKDIR" setup.exe /ndl /nfl /njh /njs
	robocopy "$WORKDIR\$TYPE" "$WORKDIR" setup.rll /ndl /nfl /njh /njs

if (test-path("$WORKDIR\$TYPE\x86")) {robocopy "$WORKDIR\$TYPE\x86" "$WORKDIR\x86" /XF Microsoft.SQL.Chainer.PackageData.dll /ndl /nfl}
if (test-path("$WORKDIR\$TYPE\x64")) {robocopy "$WORKDIR\$TYPE\x64" "$WORKDIR\x64" /XF Microsoft.SQL.Chainer.PackageData.dll /ndl /nfl}
if (test-path("$WORKDIR\$TYPE\ia64")) {robocopy "$WORKDIR\$TYPE\ia64" "$WORKDIR\ia64" /XF Microsoft.SQL.Chainer.PackageData.dll /ndl /nfl}

set-location $WORKDIR
}

#Test path locations for validity

if (!(test-path($RTMSource + "\setup.exe"))) {Throw "SQL RTM Source does not exist!"}
#skip the SP/CU checks when no path was supplied (a [string] parameter defaults to an empty string, not $null)
if ($SPSource -and !(test-path($SPSource))) {Throw "Invalid Service Pack file location!"}
if ($CUSource -and !(test-path($CUSource))) {Throw "Invalid Cumulative Update file location!"}
if (!(test-path($output))) {mkdir $output}
if (!(test-path($output))) {Throw "Unable to create build directory!"}

#begin merge process, RTM
"Copying RTM..."
robocopy "$RTMSource" "$output" /s /ndl /nfl
$startloc=pwd
set-location $output

Copy-Item ".\x86\DefaultSetup.ini" ".\"

#merge Service Pack
if($SPSource)
{
	Modify-Source "PCU" $output $SPSource
	"`n`nPCUSOURCE=`".\PCU`" `n" |Out-File "DefaultSetup.ini" -Append
}

#merge Cumulative Update
if($CUSource)
{
	Modify-Source "CU" $output $CUSource
	"`n`nCUSOURCE=`".\CU`" `n" |Out-File "DefaultSetup.ini" -Append
}

#cleanup
copy-item "defaultsetup.ini" .\ia64 -force
copy-item "defaultsetup.ini" .\x64 -force
copy-item "defaultsetup.ini" .\x86 -force

Set-Location $startloc

A Temporary Configuration

A couple weeks ago, I participated in T-SQL Tuesday #26 with a trick to dynamically build a script for doing your restores. In the spirit of that laziness, I wanted to share with you another script I have for configuring Tempdb. There’s not much special here and you could use any other scripting language to get the same result, but the idea is to automate the bulk of the task so that you can save yourself some work as well as creating consistent processes for your database configurations.

--Configure TempDB
--Turn off nasty rowcount
SET NOCOUNT ON

--Declare some variables
declare @files int,
  @path varchar(max),
  @totalsize_gb float,
  @v_sql varchar(max),
  @filesize_mb int,
  @logsize_mb int,
  @currfile int

--Set these to configure tempdb
set @files = 8
set @path = 'T:\TempDB\'
set @totalsize_gb = 40

--script calculates individual file sizes
select @filesize_mb=FLOOR((@totalsize_gb*1024)/(@files)),@logsize_mb = FLOOR(@totalsize_gb*1024)* .25

--Build your config script
set @v_sql ='use tempdb
dbcc shrinkfile (tempdev,100)

alter database tempdb
modify file
(NAME=tempdev,
FILENAME='''+@path+'tempdb.mdf'',
SIZE='+CONVERT(nvarchar(10),@filesize_mb)+'MB,
FILEGROWTH=512MB,
MAXSIZE=UNLIMITED)

alter database tempdb
modify file
(NAME=templog,
FILENAME='''+@path+'templog.ldf'',
SIZE='+CONVERT(nvarchar(10),@logsize_mb)+'MB,
FILEGROWTH=512MB,
MAXSIZE=UNLIMITED)

'
set @currfile = 1

while @currfile < @files
begin
set @v_sql = @v_sql + 'alter database tempdb
add file
(NAME=tempdb'+CONVERT(nvarchar(10),@currfile)+',
FILENAME='''+@path+'tempdb'+CONVERT(nvarchar(10),@currfile)+'.ndf'',
SIZE='+CONVERT(nvarchar(10),@filesize_mb)+'MB,
FILEGROWTH=512MB,
MAXSIZE=UNLIMITED)

'

set @currfile = @currfile+1

end

select @v_sql = @v_sql + char(10)+'--Verify your configuration!'+char(10) + 'select name,size/128.0 [size_mb],physical_name from sys.database_files'

print(@v_sql)

PRINT '/***RESTART SERVER***/'

There are three things we want to declare up near the top: the number of files, the location of the files, and the total size of the files. Without getting into the larger discussion of how many files you should have (for those wondering, start at Paul Randal’s(b|t) blog post on it), this script will help you make all your files consistent.

Some other notes about things I have set:

  • The total file size is the total data file size. The log file is sized in addition to this, so plan accordingly.
  • The log file calculation sizes the log at 25% of the total data file size (with the defaults above, that works out to eight 5,120MB data files and a 10,240MB log).
  • Autogrowth on all files is set to half a GB (512 MB).

These settings are completely personal preference, so your mileage may vary. If you have different practices, by all means implement those. The point here, as with my RESTORE script, is to pre-build most of your configuration script so that you only have to tweak a couple things.

 

P.S. It should be noted that I call out a DBCC SHRINKFILE on tempdb.  The idea of this script is that you’re running it either while configuring a server or during a maintenance period.  Shrinking a live tempdb can cause corruption, so only execute this script when you’re planning on restarting the server and rebuilding tempdb as part of your configuration.  (Thanks to Vicky Harp (b) for reminding me about that.)

Getting Started with Dynamic SQL

Since two of the scripts I’ve posted so far on Mondays involve the use of dynamic SQL code, I figured it would be worthwhile to post on using dynamic SQL in SQL Server. After all, understanding and using dynamic SQL is one of the fundamental tools for a DBA to automate management tasks, helping you save time and eliminate “fat fingers”.

What is dynamic SQL?

Before we get started, let’s define what we’re talking about. A quick Googling gives us a bunch of links that all describe the same process: “Dynamic SQL is a term used to mean SQL code that is generated programmatically (in part or fully) by your program before it is executed”. What this means to us is that we can take SQL code we want to run repeatedly with some changing parts and write it in a way where those changes are handled by other logic while the rest of the query stays unchanged, saving us a whole lot of time and effort.

Parametrization and code reuse

We’ll use my sp_spaceused script to show dynamic SQL in action. First off, we should review what the objective of that script is: running sp_spaceused to give us information about the size of the tables in our database in context with the other tables. Since the stored procedure only gives us information one table at a time, we need a way to gather all that information without having to run each query separately. We know that because the stored procedure accepts the table name as a parameter, the core code will remain the same no matter what we pass to it, so we only need to handle the changing table name.

Let’s look at the code:

--Table variable to hold the sp_spaceused output and a variable for the current table name
declare @return table
([name] varchar(128), [rows] varchar(50), reserved varchar(50),
data varchar(50), index_size varchar(50), unused varchar(50))
declare @curr varchar(128)

declare recscan cursor for
select name from sysobjects where type = 'U'

open recscan
fetch next from recscan into @curr
while @@fetch_status = 0

begin
insert into @return
exec sp_spaceused @curr

fetch next from recscan into @curr
end

close recscan
deallocate recscan

select * from @return

After declaring a table variable to hold the output and a variable for the current table name, we declare a SQL cursor. If you need more information on cursors and how they work, hit up MSDN, but the short version is they give us a way to create a data set that we can use programmatically for processing. In this specific case, I’m building a data set of all the user tables in my database.

The next part is to open our list of table names and grab the first item in it. I’ll take that table name (the next available value in my cursor) and stuff it into a variable. Now I can use the stored procedure, with my variable as a parameter, to execute the code and get my return set. The rest of the code is a simple loop that will continue until the SQL engine can no longer fetch a value from the cursor, at which point we close and deallocate the cursor and select our results.

At the end of the day, we have two pieces that make this work:

  • A static set of code that performs the same task, but with a parametrized value that can alter the target of that task.
  • A programmatically changing data set that we can use to provide the parameter for our code statement, so we can apply the same work to different targets.

But is this really dynamic SQL, since all we’re doing is using a stored procedure driven by a changing parameter? Sure thing, because while the stored procedure code remains the same, we are changing the parameter programmatically. We aren’t limited to just using stored procedures, though. While there are a lot of advantages to using stored procedures, such as pre-compiled SQL statements and making use of the procedure cache, sometimes you need something more fine grained. Fortunately, SQL server provides us with a couple other methods to execute dynamic SQL.

EXECution time

We can also use the EXECUTE command for more than just calling a stored procedure. It will also accept a SQL statement passed to it as a string. Let’s see how this works by modifying our script to get the number of tables in each database instead of the space used:

declare @sql varchar(max)
declare @db varchar(100)
declare @return table
(db_name varchar(100),
table_count int)

declare recscan cursor for
select name from sysdatabases where dbid >4

open recscan
fetch next from recscan into @db
while @@fetch_status = 0
begin
set @sql = 'select '''+@db+''',count(*) from '+@db+'.sys.objects where type = ''U'''

insert into @return
exec(@sql)

fetch next from recscan into @db
end

close recscan
deallocate recscan

select * from @return

Now we can take our SQL statement and change the database context programmatically, allowing us to reuse the same query over and over again.

SQL Server also provides the sp_executesql stored procedure as an alternative to using EXECUTE with two major differences. The first is that sp_executesql allows you to use parameters within your dynamic statement, so it allows you a cleaner option for parametrization. The second is you can use it to capture an output parameter, giving you some other options for using the data, though you are limited in using the statement output within an insert.
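
To make that concrete, here’s a minimal sketch of sp_executesql with both an input parameter and an output parameter, wrapped in Invoke-Sqlcmd to run it (the instance name is a placeholder, and this assumes the SQL Server Powershell snap-in or module is already loaded):

#Build the batch: the inner statement is parameterized, and @cnt comes back as an output parameter
$sql = @"
declare @tablecount int;
exec sp_executesql N'select @cnt = count(*) from sys.objects where type = @type',
	N'@type char(2), @cnt int output',
	@type = 'U', @cnt = @tablecount output;
select @tablecount as table_count;
"@
Invoke-Sqlcmd -Query $sql -ServerInstance "MYSQLSERVER" -Database "master"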

Finally, we can also make use of two undocumented stored procedures in SQL Server, sp_MSforeachtable and sp_MSforeachdb. Just like their names imply, you can use these stored procedures to execute a SQL statement against every table in a database or every database on a server. While they don’t offer you as much control over your execution, they are pretty handy.
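
For example, a quick sp_MSforeachdb sketch along the same lines (the instance name is again a placeholder; the ? token gets replaced with each database name):

#Count the user tables in every database on the instance
Invoke-Sqlcmd -ServerInstance "MYSQLSERVER" -Query "exec sp_MSforeachdb 'use [?]; select db_name() as db_name, count(*) as table_count from sys.objects where type = ''U''';"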

I’ve found dynamic SQL to be a huge time saver, since it is so flexible and powerful. With it, I can build out scripts to perform any number of tasks that remain the same, giving me the security of knowing that the task will run the same way no matter how my parameters change.  Keep in mind that, while we only used SELECTs for the examples, you can use any SQL statement you would normally write ad hoc. For additional information and examples, I highly recommend Introduction to Dynamic SQL on SQLTeam.com, which will provide you more detailed information on the use of the different methods, as well as permissions and scope.