# February, 2015:

## #Powershell and Automating SQL Server Builds - Part 3

Hopefully you’ve seen over the last two posts some basic techniques for automating and codifying your SQL Server builds. There are just two more items I want to cover here: questions you’re probably asking yourself already.

## The build script

Let’s start with reviewing the whole build script to this point. While I wouldn’t recommend running this in production yourself, the finalized version looks something like this:

```powershell
#Run this from the install binary location
$SAPassword = [System.Web.Security.Membership]::GeneratePassword(16,4)
.\setup.exe /CONFIGURATIONFILE=<config file location> /SAPASSWORD=$SAPassword /IACCEPTSQLSERVERLICENSETERMS

#Configure the OS
New-Item -ItemType Directory G:\MSSQL\Data
New-Item -ItemType Directory H:\MSSQL\Logs
New-Item -ItemType Directory I:\MSSQL\TempDB

#Configure Instant File Initialization
$svcaccount = 'SDF\sqlsvc'
secedit /export /areas USER_RIGHTS /cfg C:\templocalsec.cfg
$privline = Get-Content C:\templocalsec.cfg | Select-String 'SeManageVolumePrivilege'
(Get-Content C:\templocalsec.cfg).Replace($privline,"$privline,$svcaccount") | Out-File C:\newlocalsec.cfg
secedit /configure /db secedit.sdb /cfg C:\newlocalsec.cfg

#Open the firewall for 1433
New-NetFirewallRule -DisplayName "Allow SQL Server" -Direction Inbound -LocalPort 1433 -Protocol TCP -Action Allow

#Set server configurations
$smosrv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') localhost
$smosrv.Configuration.MaxServerMemory.ConfigValue = 4000
$smosrv.Configuration.MinServerMemory.ConfigValue = 2000
$smosrv.Configuration.MaxDegreeOfParallelism.ConfigValue = 4
$smosrv.Configuration.OptimizeAdhocWorkloads.ConfigValue = 1
$smosrv.DefaultFile = 'G:\MSSQL\Data'
$smosrv.DefaultLog = 'H:\MSSQL\Logs'
$smosrv.Alter()

#Disable sa
$smosrv.Logins['sa'].Disable()
```
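One gotcha with the script above: GeneratePassword lives in the System.Web assembly, which isn't loaded by default in a fresh PowerShell session. Here's a quick sketch of the workaround; the Get-Random fallback is my own illustrative stand-in (it isn't cryptographically strong, so treat it as a lab convenience, not a production password generator):

```powershell
# Windows PowerShell: load System.Web before calling GeneratePassword
# Add-Type -AssemblyName System.Web
# $SAPassword = [System.Web.Security.Membership]::GeneratePassword(16,4)

# Portable stand-in: 16 random printable ASCII characters
$chars = [char]'!'..[char]'~'
$SAPassword = -join (1..16 | ForEach-Object { [char](Get-Random -InputObject $chars) })
$SAPassword.Length   # 16
```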

## Next Steps

After this, we should consider deploying maintenance jobs, restoring databases, or building out things like availability groups. While I don’t cover these here, keep in mind these tasks can be scripted out as well. I encourage you to consider how you could do this. For example, if you leverage either Ola Hallengren’s scripts or the new Minion Reindex from the Midnight DBAs (@MidnightDBA), you have another piece you can automate. The key is that you have a repeatable process.

At Xero we have all our maintenance jobs as part of our administrative database deployment. This is handled through SQL Server Data Tools and .dacpacs. We achieve consistency and deployment speed by managing that solution and deploying the administrative database project. By maintaining the database in source control we manage our tool set, keeping it standardized.  When we add or update it, we can apply the changes out to our environment. This also means when we build a new server, we have a standardized way to install our admin tools with a minimum of muss and fuss.

## The Point

Automation is more about consistency than speed, but speed is a nice side benefit. By assembling these components into a single script, we have a repeatable build process for any SQL Server in our environment. Our instances will be built the same way, every time, so long as we aren’t changing our script. The bonus is, since it’s all scripted, there’s no fumbling with wizards, dialog boxes, and making sure we type in the right values, so it all just happens. And in a matter of minutes. At Xero, I can take a server from nothing to ready for databases in about 20 minutes using these techniques.

Now, we could go pretty crazy with how we build our automation script. In fact, there are some pretty cool tools out there that will help you with this. I’ll let you explore those on your own. Just make sure that, before you do, you know all the steps needed to build your SQL Server. It’s fine and dandy to be able to script things out, but without a plan or process to automate first, most of the scripts you could write won’t do you much good because you can’t use them again.

P.S. HUGE thanks out to Melody Zacharias (@SQLMelody) for helping me with this series of posts.

## #Powershell and Automating SQL Server Builds - Part 2

When last we left our heroes, we were discussing how best to go about building SQL Servers quickly and consistently using Powershell.  The thing is, we hadn’t quite gotten to the Powershell part yet and only really covered the build process plus installing SQL Server.  Let’s change that and move on to the next steps.

## Configure the OS

It’s time to flex Powershell. Because Powershell works directly with the OS, it makes it really easy to perform tasks that SQL Server isn’t so good at. Let’s start with creating three directories for our database files:

```powershell
New-Item -ItemType Directory G:\MSSQL\Data
New-Item -ItemType Directory H:\MSSQL\Logs
New-Item -ItemType Directory I:\MSSQL\TempDB
```

This assumes that our sysadmins have built the box with those LUNs. There are, of course, ways to script that out as well, but we’ll skip over that for the sake of brevity.
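One defensive tweak worth considering (my own addition, not part of the original build): check that each drive is actually there before creating directories, so a missing LUN fails loudly rather than scattering errors. The drive letters are assumptions carried over from the build above:

```powershell
# Create each directory only if its drive exists and the folder is missing
$dirs = 'G:\MSSQL\Data','H:\MSSQL\Logs','I:\MSSQL\TempDB'
foreach($dir in $dirs){
    $drive = $dir.Substring(0,3)   # e.g. 'G:\'
    if(!(Test-Path $drive)){
        Write-Warning "Drive $drive not found; skipping $dir"
        continue
    }
    if(!(Test-Path $dir)){
        New-Item -ItemType Directory -Path $dir | Out-Null
    }
}
```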

Enabling Instant File Initialization is a little tricky, because editing the local security policy is not something Powershell can do directly. Insert a harumph here. Anyway, we can do some tricks using the secedit tool. This technique, borrowed from Kyle Neier (@Kyle_Neier), basically dumps out the local security policy to a text file, updates it, then re-imports it to the local security policy. Kyle provides a full function to use, but you can boil it down to the following script:

```powershell
$svcaccount = 'SDF\sqlsvc'
secedit /export /areas USER_RIGHTS /cfg C:\templocalsec.cfg
$privline = Get-Content C:\templocalsec.cfg | Select-String 'SeManageVolumePrivilege'
(Get-Content C:\templocalsec.cfg).Replace($privline,"$privline,$svcaccount") | Out-File C:\newlocalsec.cfg
secedit /configure /db secedit.sdb /cfg C:\newlocalsec.cfg
```

As a quick aside, you might want to consider working with your sysadmins to actually set this permission in your domain Group Policy. That way you’ll get it without having to configure it yourself.

Finally, we want to make sure we open the firewall for port 1433:

```powershell
New-NetFirewallRule -DisplayName "Allow SQL Server" -Direction Inbound -LocalPort 1433 -Protocol TCP -Action Allow
```

## Configuring SQL Server

Now with the OS configured, it’s time to configure our SQL Server. For this, we’re going to leverage our friend and pal, the .Net SMO. As with many things in the SMO, the beauty of it is that we get a consistent API to interface with our SQL Server. For our purposes, we want to look at the Configurations collection of the Microsoft.SqlServer.Management.Smo.Server object. In here we can find everything you’d find in the sys.configurations view through T-SQL. Now let’s configure our max and min memory, max degree of parallelism, and optimize for ad hoc workloads.

```powershell
$smosrv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') localhost
$smosrv.Configuration.MaxServerMemory.ConfigValue = 4000
$smosrv.Configuration.MinServerMemory.ConfigValue = 2000
$smosrv.Configuration.MaxDegreeOfParallelism.ConfigValue = 4
$smosrv.Configuration.OptimizeAdhocWorkloads.ConfigValue = 1
$smosrv.Alter()
```

Simple enough, right? You can get pretty fancy and add all sorts of logic to calculate these values, giving you a flexible setup script. You could just as easily write these as a T-SQL script with a series of sp_configure executions to set your values. I’ve done it and know plenty of people who do. There are two reasons I’m moving away from this. The first is that some of my configurations, such as the default database directories, need some specialized techniques (usually involving xp_regedit or something similar) to implement in T-SQL. I don’t like that approach because the extended stored procedures are usually unsupported and can have lots of issues. Powershell is much more graceful and (the second reason), using it across the board gives me a common interface for my configuration management.

In fact, let’s look at the default database directory settings. As stated, if you want to do this programmatically in T-SQL, you are going to need to use xp_regedit to update the registry keys. Using the SMO is much, much easier:

```powershell
$smosrv.DefaultFile = 'G:\MSSQL\Data'
$smosrv.DefaultLog = 'H:\MSSQL\Logs'
$smosrv.Alter()
```

Piece of cake, right? One last thing to do now, and that is to disable the sa account:

```powershell
$smosrv.Logins['sa'].Disable()
```

## #Powershell and Automating SQL Server Builds - Part 1

```powershell
.\setup.exe /CONFIGURATIONFILE=<config file location> /SAPASSWORD=$SAPassword /IACCEPTSQLSERVERLICENSETERMS
```

This gets my install complete and, honestly, is the longest part of the process. There are ways to cheat around that, but I’ll save that for another post.

## To Be Continued…

When I first started writing this, I figured it would be a quick post. Suddenly I found I was at five pages and going. So we’re going to break it up to make it a little more manageable. Tune in tomorrow when we cover configuring the OS and SQL Server itself.

## Copying SQL Logins with #Powershell

Managing SQL Server logins, especially in the days of availability groups, can be quite the pain in the butt. Usually we’ll make use of scripts like sp_help_revlogin to manage this, copying logins en masse from one instance to another. This works, but it’s pretty brute force and there might be logins you may not want to copy across. So let’s use Powershell to be a little more…elegant.

There are a couple things we can use to get a nice login copying function that is flexible and robust. The first is the .Script() method in the SMO’s Login object. This is actually a piece of functionality many of us use daily, whenever we script out objects in SQL Server Management Studio. The trick is to access this programmatically, which is a snap in Powershell. The basic syntax is something like this:

```powershell
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null
$smoserver = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server localhost
$smoserver.Logins['SHION\mike'].Script()
```

What’s awesome about this method is the SMO will also script out all of the additional server level options, like default database and server roles. Very handy for maintaining the consistency of the logins from one instance to another.

There are a couple of gotchas. First, if we want to include the SID in our script, we have to create a ScriptingOptions object and specify that the LoginSid should be included. Second, if the login is a SQL Server login, the .Script() method will add in a line to disable the login and give it a completely random password. This is done for security reasons, so that any login scripted out can’t create a possible security hole. As an FYI, you’ll get the same behavior if you script out the login in SSMS.

So what do we do? Managing passwords across instances is tricky, but it must be done, otherwise things will start to break. With Windows-based logins it’s simple, since that’s all managed in Active Directory. SQL logins require more care. To handle this in T-SQL, you’ll want to create the login with the SQL Server password hash. Unfortunately, the SMO doesn’t provide an easy way to do this, so we have to get clever. To be clever, we have to:

1. Retrieve the binary hash value from the source SQL instance.
2. Convert the binary hash to a string for use in a CREATE LOGIN statement.
3. Insert the password hash string into our CREATE LOGIN statement in the correct place.

Easy enough, right? Honestly, all it takes is some string manipulation, which Powershell is pretty darn good at. Certainly better than SQL Server itself. It turns out this is the biggest hurdle to the process and, once solved, the rest of the script comes together quite nicely.

The following function brings all these concepts together. It’s a little more complex than other scripts I’ve posted, but here are the highlights:

* You must declare a source instance. This is the instance from which all logins will be scripted.
* You can specify a string array/collection of logins. The function will only script out the logins you list. If you don’t list any (the argument is null), the function will script out all non-system logins (everything but sa, pretty much).
* Because I like the pattern of scripts to build scripts, the function will create a .sql script of all your create login statements. By default this will go to your My Documents folder, but you can specify another directory if you want.
* If you provide an instance to the ApplyTo parameter, the function will attempt to create the logins on that instance if they don’t exist. Pass the -Verbose switch if you want to see which ones it creates and which ones it skips.

```powershell
function Copy-SQLLogins{
    [cmdletbinding()]
    param([parameter(Mandatory=$true)][string] $source
         ,[string] $ApplyTo
         ,[string[]] $logins
         ,[string] $outputpath=([Environment]::GetFolderPath("MyDocuments")))

    #create initial SMO object
    $smosource = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $source

    #Make sure we script out the SID
    $so = New-Object Microsoft.SqlServer.Management.Smo.ScriptingOptions
    $so.LoginSid = $true

    #set output filename
    $filename = $source.Replace('/','_') + '_' + (Get-Date -Format 'yyyyMMddHHmm') + '_logins.sql'
    $outfile = Join-Path -Path $outputpath -ChildPath $filename

    if(!($logins)){
        $logins = ($smosource.Logins | Where-Object {$_.IsSystemObject -eq $false}).Name.Trim()
    }

    foreach($loginname in $logins){
        #get login object
        $login = $smosource.Logins[$loginname]

        #Script out the login, remove the "DISABLE" statement included by the .Script() method
        $lscript = $login.Script($so) | Where-Object {$_ -notlike 'ALTER LOGIN*DISABLE'}
        $lscript = $lscript -join ' '

        if($login.LoginType -eq 'SqlLogin'){
            $sql = "SELECT convert(varbinary(256),password_hash) as hashedpass FROM sys.sql_logins where name='"+$loginname+"'"
            $hashedpass = ($smosource.Databases['tempdb'].ExecuteWithResults($sql)).Tables.hashedpass
            $passtring = ConvertTo-SQLHashString $hashedpass
            $rndpw = $lscript.Substring($lscript.IndexOf('PASSWORD'),$lscript.IndexOf(', SID')-$lscript.IndexOf('PASSWORD'))

            $comment = $lscript.Substring($lscript.IndexOf('/*'),$lscript.IndexOf('*/')-$lscript.IndexOf('/*')+2)
            $lscript = $lscript.Replace($comment,'')
            $lscript = $lscript.Replace($rndpw,"PASSWORD = $passtring HASHED")
        }

        #script login to out file
        $lscript | Out-File -Append -FilePath $outfile

        #if ApplyTo is specified, execute the login creation on the ApplyTo instance
        If($ApplyTo){
            $smotarget = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $ApplyTo

            if(!($smotarget.Logins.Name -contains $loginname)){
                $smotarget.Databases['tempdb'].ExecuteNonQuery($lscript)
                $outmsg = 'Login ' + $login.Name + ' created.'
            }
            else{
                $outmsg = 'Login ' + $login.Name + ' skipped, already exists on target.'
            }
            Write-Verbose $outmsg
        }
    }
}
```

There is one additional piece to this, and that’s a helper function for converting the binary hash value to a string. It’s pretty simple, taking the value passed to it and doing some formatting magic to get a value for the SQL statement.

```powershell
function ConvertTo-SQLHashString{
    param([parameter(Mandatory=$true)] $binhash)

    $outstring = '0x'
    $binhash | ForEach-Object { $outstring += ('{0:X}' -f $_).PadLeft(2, '0') }

    return $outstring
}
```
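To see the formatting magic in action, here's a quick sanity check with a made-up two-byte value (a real password_hash is much longer). The helper is repeated so the snippet stands alone:

```powershell
function ConvertTo-SQLHashString{
    param([parameter(Mandatory=$true)] $binhash)
    $outstring = '0x'
    # Each byte becomes two hex digits, zero-padded so 0x1 renders as '01'
    $binhash | ForEach-Object { $outstring += ('{0:X}' -f $_).PadLeft(2, '0') }
    return $outstring
}

ConvertTo-SQLHashString @([byte]1, [byte]171)   # 0x01AB
```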

Bring this all together and now we have a straightforward function we can call to help us manage our logins. Some examples:

```powershell
#Create a SQL script in My Documents for all logins
Copy-SQLLogins -source 'localhost'

#Create a SQL script in My Documents for the passed logins
Copy-SQLLogins -source 'localhost' -logins @('SHION\mike','test','pcuser')

#Create a SQL script in My Documents for the passed logins and try to create these logins on the target SHION\ALBEDO
Copy-SQLLogins -source 'localhost' -logins @('SHION\mike','test','pcuser') -ApplyTo 'SHION\ALBEDO' -Verbose
```

Hopefully this helps you in your day-to-day management of your instances. I’ve used this script for migrations and for managing my Availability Group nodes. In my opinion, it’s a heck of a lot cleaner than sp_help_revlogin (a venerable tool in its own right, to be sure). As always, this script, along with my others, can be found up on my GitHub repository.

## #tsql2sday 63 – Security, Default Databases, and #Powershell

This month’s T-SQL Tuesday is about how we manage database security, hosted by the ever effervescent Kenneth Fischer(@sqlstudent144). While I didn’t have a lot of time to work on this post, I wanted to share with you all a quick little nugget I used for one of my most recent presentations.

So, default databases. This is a security bit that usually gets skipped over, unfortunately, but I think it’s a pretty important practice that is easy to implement and maintain. Let’s be honest, most of us leave the default database as ‘master’ when we create logins. This will usually lead us to accidentally doing work in master, which could potentially be disastrous.

Security is more than just locking people out or letting people in. While most of the situations with master can be limited by reduced permissions, that doesn’t do anything to help or manage accounts that need to be able to work in that database. Security is not just about active policies, but also passive rules that reduce and prevent mistakes. A lot of damage can be prevented by directing people out of dangerous places by default.

Fixing this is easy:

```sql
ALTER LOGIN [foo\bar] WITH DEFAULT_DATABASE = [tempdb];
```

Also, we’d ideally create the logins with that default database set to begin with:

```sql
CREATE LOGIN [foo\bar] FROM WINDOWS WITH DEFAULT_DATABASE=[tempdb];
```

However, this can be difficult to manage and keep up to date, especially since SQL Server doesn’t help by always defaulting to master. One thing that can be done is to have a way to set all your logins’ default databases to an appropriate database (I really like tempdb for this). So how about some Powershell?
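Before changing anything wholesale, it can help to see how widespread the problem is. Here's a sketch of the filtering idea, mocked up with PSCustomObjects so it runs anywhere; against a live server you'd feed it the Logins collection of an SMO Server object instead:

```powershell
# Hypothetical audit: which logins still default to master?
# (Mocked data; on a real instance, use $smosrv.Logins.)
$logins = @(
    [pscustomobject]@{Name='sa';         DefaultDatabase='master'}
    [pscustomobject]@{Name='SHION\mike'; DefaultDatabase='tempdb'}
)
$offenders = $logins | Where-Object { $_.DefaultDatabase -eq 'master' }
$offenders.Name   # sa
```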

Using the SMO, we can use the DefaultDatabase property of the Login class for a quick and easy way to alter our default database:

```powershell
$login.DefaultDatabase = 'tempdb'
```

Then, using the Logins collection in our Server object, we have an easy loop to go through and alter these default databases. Wrap that up in a function and Bob’s your uncle. Well, he’s my uncle, not yours. Anyway, we’ll get a little something like this:

```powershell
#load assemblies
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null
$ErrorActionPreference = 'Inquire'

function Set-SQLDefaultDatabases{
    param([string[]] $Instances = 'localhost'
         ,[string] $defaultdb = 'tempdb')

    foreach($InstanceName in $Instances){
        $smosrv = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server $InstanceName
        if($smosrv.Databases.Name -contains $defaultdb){
            foreach($login in $smosrv.Logins){
                Write-Verbose "Altering $login on $InstanceName to $defaultdb"
                $login.DefaultDatabase = $defaultdb
                $login.Alter()
            }
        }
        else{
            Write-Warning "Database $defaultdb is not valid on $InstanceName."
        }
    }
}

Set-SQLDefaultDatabases
```

So this simple function will set all the logins (SQL, Windows, and AD groups) to the default database of choice. It’s fairly brute force, but it gives you a good starting point. At this point, you should be able to see the value of writing it in this function format and use techniques I’ve blogged about earlier to set this enterprise wide.

Thanks out to Ken for hosting this month’s T-SQL Tuesday! Looking forward to reading the other blogs!

## Dynamically Extracting .dacpacs With #Powershell

As DBAs, backing up our databases isn’t quite enough. We also want to back up our schemas for additional protection. And while we could go through some complex scripting to actually write out all those schema objects, Microsoft has given us a handy tool with SQL Server Data Tools and Visual Studio to capture these schemas. I’m talking about sqlpackage.

I hadn’t used this before I got to my current job, but it really is a snap. Using sqlpackage (in conjunction with managing your database through a Visual Studio project), you can easily deploy or script out database code changes. You can also extract your schema with a few simple commands, like so:

```powershell
sqlpackage.exe /action:extract /targetservername:localhost /targetdatabasename:dummy /targetfile:someawesome.dacpac
```

So this is great and all, but when you’ve got dozens or even hundreds of databases, you need a quick and efficient way to run this command. Since this is a command line operation, Powershell becomes the natural way to do it. Here’s a simple script that will extract all the user database schemas from a server:

```powershell
#extract all your database schemas as dacpacs
$server = 'PICARD'

$dbs = Invoke-Sqlcmd -ServerInstance $server -Database tempdb -Query 'SELECT name FROM sys.databases WHERE database_id > 4'

foreach($db in $dbs.name){
    $cmd = "& 'C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\sqlpackage.exe' /action:Extract /targetfile:'C:\dacpacs\$db.dacpac' /SourceServerName:$server /SourceDatabaseName:$db"

    Invoke-Expression $cmd
}
```

Let’s look at two things in play here. The first is the use of ‘&’, the call operator. This takes a string expression you create and then executes it. In this case, we build out the expression with that operator. Then we use item number two, Invoke-Expression, to run the expression we built. The rest is pretty simple.

This is a quick and dirty version of it, but we could get a lot fancier. In fact, Gianluca Sartori (@spaghettidba) has a very nice blog post where he provides a function to do this. My own function (which looks pretty similar) goes a little something like this:

```powershell
function Export-SQLDacPacs{
    param([string[]] $Instances = 'localhost',
          [string] $outputdirectory=([Environment]::GetFolderPath("MyDocuments")))

    #get the sqlpackage executable
    $sqlpackage = (Get-ChildItem C:\ -Recurse | Where-Object {$_.name -eq 'sqlpackage.exe'} | Sort-Object LastWriteTime | Select-Object -First 1).FullName

    #declare a select query for databases
    $dbsql = @"
SELECT name FROM sys.databases
where database_id > 4 and state_desc = 'ONLINE'
"@

    #loop through each instance
    foreach($instance in $Instances){
        #set processing variables
        $dbs = Invoke-Sqlcmd -ServerInstance $instance -Database tempdb -Query $dbsql
        $datestring = (Get-Date -Format 'yyyyMMddHHmm')
        $iname = $instance.Replace('\','_')

        #extract each db
        foreach($db in $dbs.name){
            $outfile = Join-Path $outputdirectory -ChildPath "$iname-$db-$datestring.dacpac"
            $cmd = "& '$sqlpackage' /action:Extract /targetfile:'$outfile' /SourceServerName:$instance /SourceDatabaseName:$db"
            Invoke-Expression $cmd
        }
    }
}

Export-SQLDacPacs -instances 'PICARD'
```

What’s cool is that with this as a function, we can now extend it and do some really cool things. Remember that CMS lookup we used last week? Let’s use it here:

```powershell
$CMS = 'PICARD'
$servers = @((dir "SQLSERVER:\SQLRegistration\Central Management Server Group\$CMS").Name)

$servers += $cms
Export-SQLDacPacs -Instances $servers
```
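If the call-operator-plus-Invoke-Expression pattern used throughout these scripts feels opaque, here's a trivial, SQL-free sketch of the same shape (the executable name and paths are made up for illustration):

```powershell
# Build a command string the same way the extract scripts do:
# single quotes protect paths with spaces, double quotes let $db expand
$db = 'dummy'
$cmd = "& 'sqlpackage.exe' /action:Extract /targetfile:'C:\dacpacs\$db.dacpac'"
$cmd   # the expanded string, ready to hand to Invoke-Expression

# Invoke-Expression runs whatever string you give it
Invoke-Expression "Write-Output 'hello from Invoke-Expression'"
```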

And now I have a script to extract all my user database schemas from all the instances in my enterprise.  How cool is that?  Now I can put this into an agent job or a Windows scheduled task and have a regular schema extract, giving me an extra layer of protection for my systems.