
Powershell

Defending Invoke-SqlCmd

Twitter. It can be annoying, it can be frivolous, but you’d be surprised at how many times it gives me a blog topic. This time around, I was having a back and forth with some folks about Invoke-SqlCmd. I had recommended someone try using it, which was followed by a lot of people griping about the cmdlet while suggesting other community tools or scripts should be used.

While I understand where a lot of these folks come from, I wanted to share a little, in a longer form, about why I use and recommend Invoke-SqlCmd. I think it still gets a bad rap from the PowerShell power users for some of its…quirks. I want to stand up for this handy little cmdlet, which I think is likely the most useful part of the SqlServer PowerShell module.

The .NET Root

When we start talking about Invoke-SqlCmd, what are we trying to do? At the core, PowerShell scripters need something to execute SQL commands within the context of a .ps1 script. I often speak of how PowerShell is a framework where you can tie different parts of the Windows stack together, so this ability to run SQL is needed.

One of the strongest elements of PowerShell is that it is built on .NET, which means we can always use .NET code and objects. In the case of SQL, we can use traditional ADO .NET code to execute SQL against a SQL Server instance:

#Build the ADO.NET connection and command objects
$Connection = New-Object System.Data.SqlClient.SqlConnection
$Connection.ConnectionString = "server=localhost;database=AdventureWorks2012;trusted_connection=true;"
$Connection.Open()

$Command = New-Object System.Data.SqlClient.SqlCommand
$Command.Connection = $Connection
$Command.CommandText = 'select top 10 BusinessEntityID,PersonType,FirstName,LastName from person.Person'

#Execute the query through a data adapter and fill a DataSet with the results
$DataAdapter = New-Object System.Data.SqlClient.SqlDataAdapter $Command
$Dataset = New-Object System.Data.DataSet
$DataAdapter.Fill($Dataset) | Out-Null #Fill returns a row count we do not need
$Connection.Close()

#Return the first (and only) table in the DataSet
$Dataset.Tables[0]

As you can see, this is pretty verbose. We have to create a connection object and set its connection string, then open the connection, execute the SQL, populate the data set, close the connection, and finally return the data set. No matter how you slice it, this is a lot of code for a simple SELECT statement.

SMOing it up

To avoid this verbosity, most programmers wouldn’t rely on the raw .NET unless they absolutely needed to. Instead, they would use existing libraries (or write their own if they had to). This is where the SQL Server Management Objects (SMO) come into play. I’ll skip the long boring explanation of what SMO is and just focus on the fact that we can use these existing libraries to simplify our call:

#Assumes the SMO assemblies are already loaded (importing the SqlServer module does this for you)
$smosrv = New-Object Microsoft.SqlServer.Management.Smo.Server #defaults to the local instance
$query = 'select top 10 BusinessEntityID,PersonType,FirstName,LastName from person.Person'
$results = $smosrv.Databases['AdventureWorks2012'].ExecuteWithResults($query)
$results.Tables[0]

The SMO lets us reduce eleven lines of working code down to four for our SELECT output. This is the power of using existing libraries: we don't have to reinvent the wheel for our work, which means we can focus more of our effort on everything around the SQL statement.

But what if we could make it even easier?

Keep It Simple, Sir (or Ma’am)

Four lines of code isn't that much. The problem is that the code is a little less intuitive. Let's keep in mind that most PowerShell scripters are not .NET programmers. Understanding dot notation isn't necessarily difficult, but if it doesn't come naturally to you, there's a learning curve you have to climb.

This is where Invoke-SqlCmd starts to make sense. First off, it wraps up all the functionality of the above .NET code so we can use it with the cmdlet call. Also, it follows the founding concepts of the PowerShell language, where the syntax is verbose enough to understand what you’re passing as arguments. Let’s look at our simple SELECT now using Invoke-SqlCmd:

Invoke-Sqlcmd -ServerInstance localhost -Database 'AdventureWorks2012' `
    -Query 'select top 10 BusinessEntityID,PersonType,FirstName,LastName from person.Person'

If we break this down, you can see we have a clear set of syntax for calling the cmdlet. We specify an instance, a database, and a query. The output is an array of data rows, though newer versions of the module can return a DataTable instead (sketched below). Most scripters are not going to need much more than that. It's a straightforward solution to a simple problem.
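If you want the DataTable behavior, recent releases of the SqlServer module expose an -OutputAs parameter. Here is a quick sketch; the parameter only exists in newer module versions, so check Get-Help Invoke-Sqlcmd if it errors on yours:

$query = 'select top 10 BusinessEntityID,PersonType,FirstName,LastName from person.Person'

#Default output: an array of DataRow objects
$rows = Invoke-Sqlcmd -ServerInstance localhost -Database 'AdventureWorks2012' -Query $query

#Newer module versions can return DataTables (or a DataSet) instead
$table = Invoke-Sqlcmd -ServerInstance localhost -Database 'AdventureWorks2012' -Query $query -OutputAs DataTables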

So why not?

Why do people gripe so much about Invoke-SqlCmd then? Well, to understand this is to understand the history of SQLPS. For a long while, the SQL Server module for PowerShell was klunky and buggy. There were a lot of challenges with loading it and using it, such that many scripters decided to throw it out and write their own functions. In many cases, PowerShell folks would skip Invoke-SqlCmd not because it was bad, but because it came packaged with the rest of SQLPS and they wanted to avoid the entire module.

Now that the SQL Tools team has been reworking the module as SqlServer, this has become less of a concern. The module is less of a burden to load and the other components do not get in the way. There are also improvements and updates to the code to make it work better and serve more needs.

There are other reasons as well, but it usually boils down to the use case. Invoke-SqlCmd is great, but it can't do everything. My anecdotal evidence is that the cmdlet will handle upwards of 90% of what I need it for, but there will still be edge cases. For these edge cases, we always have the more code-intensive options available to us. This is why PowerShell is so great: there's always another way, whether it's writing more detailed code or using a community function.

To touch on this last point, this is where I also bump up against folks. There are some fantastic community functions out there. Seriously, check out DBATools if you haven’t yet. These tools enhance and expand your PowerShell experience and I make heavy use of these tools. They also have better alternatives that address the quirks of the core SqlServer module.

The rub is that you can't always rely on community tools being available to you. Sometimes a corporate environment won't let you install those tools or scripts. It might be that you go into a gig and they haven't even heard of these tools. I can tell you that as a consultant, I have to be very considerate of my clients' environments before I start bringing in outside code. However, because Invoke-SqlCmd is part of Microsoft's tool set, I can rely on it being there. There are fewer steps and hurdles to making use of it.

Making Sense Of It All

There are a lot of reasons for using this little cmdlet. There are also reasons not to use it, depending on your situation or use case. My goal here is to not let the baby get thrown out with the proverbial bathwater. Invoke-SqlCmd is one of my favorite cmdlets in my toolbox and (most of the time) it does exactly what I need it to. Whether I'm writing scripts or working ad hoc on a server, I usually don't need a lot of fluff, just a way to execute a SQL query and sometimes get something back.

I hope you enjoyed my little love letter to Invoke-SqlCmd. I understand if it doesn't suit your needs, but maybe you should give it another look? Or, if you're a PowerShell newbie, you can see why using it can make your life a little easier. Wherever you go next after this blog post, I hope it is with a clearer view of how Invoke-SqlCmd can fit into your own PowerShell habits.

Query Plan Analysis with #PowerShell

A couple years ago I wrote a blog post on shredding query plans with XML. At the time, it was just an experiment. I investigated how to parse XML and a query plan without any real goal or purpose other than “what can I do with this?” Then I left it alone and didn’t come back to it.

At least not until recently. Fast forward a few years when I’m trying to tune a large SQL batch process that had a loop. It wasn’t a very pretty process and it worked, mostly, but it could definitely benefit from some performance love. The trick was trying to quantify the changes I made within. A common way to quantify this is to check logical reads. The lower your logical reads, the less “work” is typically done in your query. The trick was capturing all the executions within the loop and summing all the logical reads across a single batch execution.

At first I went to Plan Explorer. This is a great tool and had a lot of information, but what it was missing was the ability to sum up all my logical reads across the entire batch execution. I could look at each individual query, but to add these values up was going to be tedious and painful. Two things I hate.

At this point, I figured why not give PowerShell a shot? After all, I knew that the query plan was an XML doc and I could easily traverse that using the XML functionality built into the language. That combined with a little XQuery (which I’m terrible at, by the way) should solve my problem.

Armed with this knowledge, I charged ahead. Everything worked more or less as expected, but the one piece I missed from my previous blog post was using the XML namespace. See, you need the namespace so the XML methods in PowerShell know what to query. I floundered with this for a bit until I found Jonathan Kehayias' (@SqlPoolBoy) post on sanitizing query plans.

Once you have the namespace set, the rest becomes easy. To go with my previous example, the following statements allowed me to sum and compare logical reads across all statements executed in the batch:

#Read the query plan files in as XML objects
[xml]$old = Get-Content -Raw C:\Users\Mike\Documents\oldQueryPlan.sqlplan
[xml]$new = Get-Content -Raw C:\Users\Mike\Documents\newExecutionPlan.sqlplan

#Set up the namespace manager using the showplan schema namespace
$nsMgr = New-Object 'System.Xml.XmlNamespaceManager' $old.NameTable
$nsMgr.AddNamespace("sm", 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')

#Query all RunTimeCountersPerThread nodes and sum ActualLogicalReads
'Old Plan Logical Reads: ' + `
($old.SelectNodes("//sm:RunTimeCountersPerThread",$nsMgr) | Measure-Object -Property ActualLogicalReads -Sum).Sum
'New Plan Logical Reads: ' + `
($new.SelectNodes("//sm:RunTimeCountersPerThread",$nsMgr) | Measure-Object -Property ActualLogicalReads -Sum).Sum

Once the pattern is down, the use is pretty straightforward. There are also more options available to you. If we just look at the RunTimeCountersPerThread node, we can compare other values such as Rows, Scans, and CPU time. We could really get crazy and extract all the different statements within the batch. There are numerous possibilities for analysis and review.
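For example, building on the same namespace setup, a sketch like this sums a few of those other counters. Which attributes are present depends on the SQL Server version that generated the plan, so treat the names here as examples rather than a definitive list:

#Sum other runtime counters from the old plan; attribute availability varies by SQL Server version
$oldCounters = $old.SelectNodes("//sm:RunTimeCountersPerThread",$nsMgr)
'Old Plan Rows:     ' + ($oldCounters | Measure-Object -Property ActualRows -Sum).Sum
'Old Plan Scans:    ' + ($oldCounters | Measure-Object -Property ActualScans -Sum).Sum
'Old Plan CPU (ms): ' + ($oldCounters | Measure-Object -Property ActualCPUms -Sum).Sum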

I’m not here to tell you that you should start using PowerShell to automate query tuning. Query performance is an art form and requires a lot of case-by-case analysis. However, like any great carpenter, it’s good to know the capabilities of your tool set. Understanding the options available to you not only helps you be more effective, but can also provide answers you may not have had access to.

Using #PowerShell to Restore to a New Location

Now that I’ve gotten some of my thought pieces out of my brain, I wanted to get back to some more technical posts, starting with some simpler techniques for people trying to figure out how to use SQL Server and PowerShell together. I know that a lot of database pros are starting to understand the importance of the language, but still struggle with some practical examples of how to get started. One of my goals with this blog is to bridge that gap.

When restoring a backup, it can be tedious to restore to a new location and have to figure out your MOVE statements. If you only have one data file and one log file, it’s probably not a big deal, but it’s still annoying. Usually, the steps for me are:

  1. Figure out my new data and log paths.
  2. Run a RESTORE FILELISTONLY against the backup file to get the files.
  3. Write out my RESTORE WITH MOVE commands using the new paths.
  4. Execute

None of this is difficult, but we can still make it easier. We have an established process, so putting some PowerShell scripting around it can automate our restore to make the script building faster and more consistent.

Our weapon of choice will be Restore-SqlDatabase. This workhorse cmdlet has been part of both the old SQLPS and the new SqlServer modules. The functionality hasn't really changed, meaning that what we go over here should work for you regardless of which module you use. I always recommend using the most recent version of the code, but don't worry if you can't.

The cmdlet is straightforward to use. Fundamentally, all we need to declare is an instance, database name, and backup file. However, if we don't declare anything else, the cmdlet will try to restore the database files to their original locations. Keep in mind this is no different from how a plain RESTORE DATABASE command works.
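In its simplest form, a call might look like the sketch below, which assumes the original file paths from the backup already exist on the target instance:

#Minimal restore - no file relocation, so the original paths from the backup must exist on the target
Restore-SqlDatabase -ServerInstance localhost `
    -Database 'AdvWorks2014' `
    -BackupFile 'C:\DBFiles\AdventureWorks2014.bak' `
    -RestoreAction Database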

This is where we make our lives easier with PowerShell. First off, to move files using Restore-SqlDatabase, we need to create a collection of RelocateFile objects. Don’t let the .Net-ness of this freak you out. All we’re doing is creating something that has the logical file name and the new physical file name. In other words, it’s just an abstraction of the MOVE statement in RESTORE DATABASE.

Let’s look at some code. I’ve got a script, but I think the best way to approach it is to break it up and talk about each section individually, just to make sure we’re all on the same page. To get started, we should declare a few things: the new file locations, output of a script file, database name for the restore, backup file, and then an array we can store our RelocateFile objects in.

#Set Variables
$NewDataPath = 'C:\DBFiles\Data'
$NewLogPath = 'C:\DBFiles\Log'
$OutputFile = '.\restore.sql'
$dbname = 'AdvWorks2014'
$BackupFile = 'C:\DBFiles\AdventureWorks2014.bak'
$relocate = @()

Next up is a simple RESTORE FILELISTONLY to get our file list. This needs to be done with Invoke-SqlCmd because there’s no support in Restore-SqlDatabase (or any other cmdlet) for the file list option.

#Get a list of database files in the backup
$dbfiles = Invoke-Sqlcmd -ServerInstance localhost -Database tempdb -Query "RESTORE FILELISTONLY FROM DISK='$BackupFile';"

Now comes the “magic”. Our RESTORE FILELISTONLY call gives us a collection of all our files, but with their old locations. We will loop through this collection, do some string replacement, and create our RelocateFile objects. I want to call out the use of Split-Path -Leaf, a handy cmdlet that separates out the different parts of a file path. By using -Leaf, the cmdlet gives you only the actual file name. We can then append that value to the end of our new path (using Join-Path) and use the result to create the RelocateFile object for each file.

#Loop through filelist files, replace old paths with new paths
foreach($dbfile in $dbfiles){
  $DbFileName = $dbfile.PhysicalName | Split-Path -Leaf
  if($dbfile.Type -eq 'L'){
    $newfile = Join-Path -Path $NewLogPath -ChildPath $DbFileName
  } else {
    $newfile = Join-Path -Path $NewDataPath -ChildPath $DbFileName
  }
  $relocate += New-Object Microsoft.SqlServer.Management.Smo.RelocateFile ($dbfile.LogicalName,$newfile)
}

Creating the RelocateFile objects is the heavy lifting. After this, it’s just a matter of calling Restore-SqlDatabase with the right arguments. Note that I’m using the -Script argument and piping this to Out-File. We’re using PowerShell to create a SQL script, which is a pattern I like. As handy as these tools are, they don’t always get everything, so I will use scripts to create scripts and then edit the final output with whatever else I need.

#Create Restore script
Restore-SqlDatabase -ServerInstance localhost `
-Database $dbname `
-RelocateFile $relocate `
-BackupFile "$BackupFile" `
-RestoreAction Database `
-Script | Out-File $OutputFile

By saving and reusing this script, I have saved myself a lot of man hours for restores. The strength here isn’t in any mystery code or magic functionality. It is simply a matter of leveraging a framework to automate an existing process.

I’ve actually taken this script and created a more formalized function with it. The core is there, but in keeping with the tooling spirit, I’ve added some additional code that validates file system paths. You can find it on my GitHub repository and you’re welcome to download and make use of it yourself.
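As a rough illustration of that validation (the parameter names below are just placeholders, not the actual signature of the function on GitHub), the checks boil down to something like:

#Illustrative only - make sure the target directories actually exist before building the restore
param(
    [string]$NewDataPath,
    [string]$NewLogPath
)

foreach($path in @($NewDataPath,$NewLogPath)){
    if(-not (Test-Path -Path $path -PathType Container)){
        throw "Path '$path' does not exist or is not a directory."
    }
}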

Azure SQL Databases with PowerShell: Moving To The Cloud

It's been a while, but I want to wrap up my series on Azure SQL Databases and Powershell. In my last post, I talked about how you can make point in time backups of Azure SQL Databases using the Start-AzureSqlDatabaseExport cmdlet. This process allows you to make a backup of your Azure SQL Database and store it as an Azure blob. You can then restore these backups using Start-AzureSqlDatabaseImport. It's all very handy for managing your data within the Azure environment.

Now let’s talk about what I alluded to at the end of that post: using this process to migrate data from your traditional SQL Servers to Azure SQL databases. As we have previously discussed, Azure SQL Database does not support traditional backups as we are used to with regular SQL Server. While there’s a debate on whether or not these should be supported, the reality is that we need another process to handle this. This is where the export/import process comes in.

BACPACing

SQL Server Data Tools(SSDT) have always had a process to extract your database. There are two types of extracts you can perform:

  • DACPAC – A binary file that contains the logical database schema and possibly the data. This file retains the platform version of the database (i.e. 2012, 2014, 2016).
  • BACPAC – A binary file that contains the logical database schema and the data as insert statements. This stores the platform version, but is not locked into it.

The DACPAC extract is useful for managing your database code. These objects can be source controlled and, using SSDT, can be used to compare to a current database state and deploy changes. I’m a big fan of using DACPACs to manage my database code and maintain code consistency. I don’t use the data extract portion, but it’s there if you need it (more on this in a later blog post).

BACPACs are a little more limited. They contain the schema and your data in a logical state, as insert VALUE statements to populate your database objects. The advantage is you get a flexible package that can deploy a database and its data to almost any SQL Server platform (with some caveats, which we will get to).  The disadvantages are that the BACPACs are not as efficient as native backups and are not transactionally consistent.

What this means is that BACPACs are not really suited for day to day backups of your database. They are intended to be a migration tool, especially if you are working with disparate versions of SQL Server. It is why they’re intended for moving databases to Azure SQL Database, because a BACPAC can move your database logically and you are not reliant on the underlying database architecture.

Which brings us to the caveat. While BACPACs allow us to move a database logically, they cannot work around unsupported features. We can export a database from our SQL Server, but if we're using something not supported in Azure SQL Database (such as CLR or Service Broker), the migration will not work. If you are planning this sort of migration, you first need to review your databases for these issues.
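As a starting point (and nothing more than a sniff test, since the full list of unsupported features lives in Microsoft's documentation), you can query the source database for a couple of the common blockers:

#Quick check of the source database for two common blockers: user CLR assemblies and Service Broker queues
$checks = @"
SELECT 'CLR assembly' AS Feature, name AS ObjectName FROM sys.assemblies WHERE is_user_defined = 1
UNION ALL
SELECT 'Service Broker queue', name FROM sys.service_queues WHERE is_ms_shipped = 0;
"@
Invoke-Sqlcmd -ServerInstance localhost -Database AdventureWorks2012 -Query $checks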

Exporting Our Database

Enough about that, let’s get exporting. There are several ways to migrate your database, but we’re going to focus on Powershell and those cmdlets. We can use sqlpackage.exe to create the BACPAC, which is pretty easy (this is really one line, I’m separating it so it’s easier to read):

SqlPackage.exe 
/action:Export 
/SourceServerName:localhost 
/SourceDatabaseName:AdventureWorks2012 
/targetfile:'C:\temp\AW2012.bacpac'

Once we’ve got our .bacpac file, we need to get it up into Azure. For this we need a storage account, then it’s just a matter of copying the blob. For this, we will use the same storage container we used for the exports before. We will create a storage context for the storage account, then use Set-AzureStorageBlobContent to upload the .bacpac file.

$stctxt = (Get-AzureRmStorageAccount -ResourceGroupName test -Name msftest).Context
Set-AzureStorageBlobContent -File C:\temp\AW2012.bacpac -Context $stctxt -Container sqlexports

Liftoff to the Cloud

With our export loaded into Azure, we’re ready to import it into Azure SQL database. We need to first create a database to import the data into:

New-AzureRmSqlDatabase -ResourceGroupName 'IntroAzureSQL' -ServerName 'msf-sqldb' -Edition Basic -DatabaseName AWTEST

After this, it is just a matter of starting the import as we did previously. We need to create a connection context for our Azure SQL Database server, then start the import:

$cred = Get-Credential
$sqlctxt = New-AzureSqlDatabaseServerContext -ServerName msf-sqldb -Credential $cred
$request = Start-AzureSqlDatabaseImport -SqlConnectionContext $sqlctxt -StorageContext $stctxt -StorageContainerName sqlexports -DatabaseName awtest -BlobName AW2012.bacpac

Notice I save the output of Start-AzureSqlDatabaseImport into a variable. This is an object representing the import request, which I can use to check on its progress. Remember that calling the cmdlet starts an asynchronous process that will import the schema and data of the .bacpac into your newly created database. We'll want to check on the process to know when it completes.
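A simple polling loop works for this. The sketch below leans on the same status cmdlet from the export post; the Status property text is from memory, and the call can throw an error once the request has finished, so treat this as a rough outline rather than production code:

#Poll the import request every 30 seconds until it reports completion
do {
    Start-Sleep -Seconds 30
    $status = Get-AzureSqlDatabaseImportExportStatus -RequestId $request.RequestGuid `
        -ServerName msf-sqldb -Username $cred.UserName -Password $cred.GetNetworkCredential().Password
    $status.Status
} while ($status.Status -notlike '*Completed*')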

Double Check Your Work

The one last thing to do is to verify the migration, which requires two checks: the schema and the data. For the schema, we will go back to sqlpackage.exe to perform a schema comparison. This is a two step process:

  1. Extract the source schema as a .dacpac
  2. Generate a deploy report targeting the Azure SQL database. If no changes are shown, the schemas match.

Extracting the schema as a .dacpac is a similar operation to extracting the schema as a .bacpac:

SqlPackage.exe /action:Extract `
/SourceServerName:localhost `
/SourceDatabaseName:AdventureWorks2012 `
/p:ExtractAllTableData=True `
/targetFile:C:\Temp\AW2012.dacpac

Generating a deploy report is a similar command, though we have a couple gotchas. First, we're connecting to our Azure SQL database, so we need to pass our login and password to the command. Second, .dacpacs store the database platform version and the deployment will halt if there's a possible incompatibility. So we need to declare AllowIncompatiblePlatform=True in our sqlpackage call to ignore this. We'll still get a flag that there could be incompatibilities, but it will at least run the comparison for us:

SqlPackage.exe /action:DeployReport `
/SourceFile:C:\Temp\AW2012.dacpac `
/targetServerName:msf-sqldb.database.windows.net `
/targetDatabaseName:AWTEST `
/targetUser:"$($cred.UserName)" `
/targetPassword:"$($cred.GetNetworkCredential().Password)" `
/p:AllowIncompatiblePlatform=True `
/OutputPath:C:\temp\AWMigration.xml

The resulting .xml file will tell us what changes the deployment would have to make to get everything lined up. Since the goal here is to verify that everything got deployed correctly, we want this report to not have any changes. The desired output would look a little like this:

[Screenshot: deploy report output showing no changes to apply]

Since no changes are listed, there’s nothing to do and everything matches.

Checking your data can be a trickier proposition. For a basic “sniff test”, I just check the row counts of each table. For this, I have a query that checks sys.partitions for heaps and clustered indexes (index_id 0 or 1), and then I compare the source against the Azure SQL database to look for any differences:

$sql = "select
schema_name(schema_id) +'.'+ t.name as TableName
,p.Rows as Rows
from
sys.tables t
join (select * from sys.partitions where index_id in (0,1)) p on t.object_id = p.object_id
order by TableName"

$local = Invoke-Sqlcmd -ServerInstance localhost -Database AdventureWorks2012 -Query $sql
$azure = Invoke-Sqlcmd -ServerInstance msf-sqldb.database.windows.net -Database AWTEST -Query $sql -Username $cred.UserName -Password $cred.GetNetworkCredential().Password

$matches = @()
foreach($i in $local){
$matches += New-Object psobject -Property @{'TableName'=$i.TableName;'LocalRows'=$i.Rows;'AzureRows'=($azure | Where-Object {$_.TableName -eq $i.TableName}).Rows}
}
$matches | Where-Object {$_.LocalRows -ne $_.AzureRows}

Again, if everything worked correctly we shouldn’t get any output.

I should stress that these two checks do NOT replace thorough regression testing. These should give you a starting point to verify that your migration was successful, but you still will want to test your applications and processing.

The New Hotness

Now, while I would like for Microsoft to include some native cmdlets that accomplish these actions, it should be noted that sqlpackage.exe is a pretty solid tool. It can be used from the command line and wrapping PowerShell around it is still effective. The key is having something I can use from the command line so I can write repeatable scripts. This is important because Azure is driven by automation and, while we can use GUI tools and the portal, we won’t succeed long term in the cloud on these tools alone, so it’s important to understand all the options.

Creating Alerts for Azure SQL Database with Powershell

Last week, as I watched the Twitters, I saw an interesting blog post from Julie Koesmarno (@MsSQLGirl) about creating alerts for Azure SQL Database. These are cool because one of the many things I get asked about when talking about Azure SQL Database is “how do I monitor this darn thing?” With these alerts, you can monitor performance, capacity, and security settings, setting up either email notifications or webhook responses.

Now, me being me, I tweeted out that it would be cool to do this in PowerShell. I say this not just being a geek about the language, but also because if you want to live in the cloud you need to up your automation and scripting game. PowerShell is the way to manage your Azure resources, especially if you want to set up consistent alerts for all your Azure SQL Databases. Julie pointed me to some examples that, while not specific for Azure SQL Database, got me moving in the right direction. I’d like to share some of that learning here.

The cmdlets

To start, we need to know what cmdlets are at our disposal. You're going to need the Azure and AzureRM modules first (I recommend installing them from the PowerShell Gallery) and then we just need to use PowerShell's discovery features to find what we're looking for. Using Get-Command, just do a wildcard search with 'Azure' and 'Alert' as the search terms:

Get-Command *Azure*Alert*

[Screenshot: Get-Command output listing the Azure alert cmdlets]

It’s not a long list, but we don’t need much to get our work done. It should be noted that these are generic and can be used across all Azure resources.

Because these cmdlets are generic, we have to identify the specific metrics we can monitor for Azure SQL Database. Fortunately, these are discoverable using Get-AzureRmMetricDefinition, which is not in the list above but is part of the AzureRM.Insights module (use Get-Command -Module AzureRM.Insights to see it). To use it, we need to know our Azure resource ID, which we can derive from three pieces of information: the resource group, the server name, and the database name.

$ResourceGroup = 'IntroAzureSql'
$server = 'msf-sqldb'
$db = 'MSFADMIN'
$rid = (Get-AzureRmResource -ResourceGroupName $ResourceGroup -ResourceName "$server/$db").ResourceID
Get-AzureRmMetricDefinition -ResourceId $rid | Format-Table

[Screenshot: Get-AzureRmMetricDefinition output listing the available metrics]

Most of these Name values match what Julie listed in her post, but the nice thing is if we ever get new ones, we can look them up with this method.

Creating an Alert

So let’s get down to brass tacks and actually create an alert. To do this, we need some info first:

  • The Resource Group we will create the alert in.
  • An Azure location where the alert will live.
  • An Azure SQL Database server and database we are creating the alert for.
  • What metric we will monitor and what threshold we will check against.
  • (optional) An email to send an alert to.

With this, here’s the settings I’m going with:

Resource Group – IntroAzureSql
Location – West US
Server – msf-sqldb
Database – MSFADMIN
Alert – dtu_consumption_percent greater than 90 (Flag if more than 90% DTU usage)

There are a couple other considerations for creating an alert. The first is a time window the alert will check against. This window can be anywhere from five minutes to a full day. The second is what aggregation of the metric we will check (i.e. total, average, maximum). You’ll want to use values and aggregations that make sense, but for the example alert, we’ll check the maximum value over a 5 minute window.

Once we have everything in place, we will call the New-AzureRmAlertRuleEmail cmdlet to create our email notification and Add-AzureRmMetricAlertRule cmdlet to create our alert:

$ResourceGroup = 'IntroAzureSql'
$location = 'West US'
$server = 'msf-sqldb'
$db = 'MSFADMIN'
$rid = (Get-AzureRmResource -ResourceGroupName $ResourceGroup -ResourceName "$server/$db").ResourceID
$email = New-AzureRmAlertRuleEmail -CustomEmails 'mfal@dummy.com' -SendToServiceOwners
Add-AzureRmMetricAlertRule -Name 'DTU90Check' `
-Location $location `
-ResourceGroup $ResourceGroup `
-TargetResourceId $rid `
-MetricName 'dtu_consumption_percent' `
-Operator GreaterThanOrEqual `
-Threshold 90 `
-WindowSize '00:05:00' `
-TimeAggregationOperator Maximum `
-Actions $email


And now we can look at our shiny new alert using either the portal or Get-AzureRmAlertRule:

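A quick sketch of the cmdlet route, using the same resource group variable and rule name as above:

#Retrieve the alert rule we just created and show its settings
Get-AzureRmAlertRule -ResourceGroup $ResourceGroup -Name 'DTU90Check' -DetailedOutput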

Getting rid of an alert is easy, too; just call Remove-AzureRmAlertRule:

Remove-AzureRmAlertRule -ResourceGroup $ResourceGroup -Name 'DTU90Check'


Where To Now?

As you can see, managing alerts in PowerShell isn't all that difficult. The question is “why?” Hopefully you can already see the value, especially if you want to standardize your Azure SQL Database environment. By having all your alerts scripted out, you can apply them in a consistent fashion. This can be part of a larger automation process that helps scale out your environment as necessary. Regardless of how you manage it, using PowerShell can significantly decrease your management time for your Azure environment.

Thanks again to Julie for giving me the inspiration to figure this out!

Stuck in SQLPS

A month and a half ago, with a lot of fanfare and maybe a little confetti, the SQL Server Tools Team released an update to SQL Server Management Studio that massively updated the SQL Server Powershell experience. As I blogged, this is a big deal and long overdue. I am extremely happy that this is finally getting some attention in Redmond.

However, once the dust settled and I had a chance to dig into things, I discovered that not all was rainbows and candy. The truth of the matter is that SQLPS is not so easily replaced. We are still going to be burdened with using the old module for at least the short term. This means at least being comfortable with the foibles of the old module.

Bundled With SSMS

The first real issue with the new SqlServer Powershell module is that it is part of the SQL Server Management Studio install. I've long lamented the fact that you can't install the Powershell components separately. This means you're limited to installing the module only in locations where you can install SSMS 2016. Microsoft probably thinks you should be installing SSMS 2016 EVERYWHERE, but even if you run on the bleeding edge this is a problem if you're using Windows Server Core (no GUI, no SSMS).

I have not figured out any workaround to this. It’s possible to copy the module folder from a good install to somewhere else, but I’m nervous about that since something could be missed. This means that, until we can install this separately, users are stuck running SQLPS on servers without management studio. Yes, this means SQL 2016 without SSMS as well.

Not Recognized by SQL Agent

The second issue is that even if you do install SSMS 2016, SQL Agent won’t recognize and give you access to the new module if you use a PowerShell job step. When you create a PowerShell job step, the script in that job step runs within a specific context. It’s hidden from you, but whenever that script runs the first thing that happens is SQL Server launches sqlps.exe.

Sqlps.exe is a “mini-shell”, which configures a few things to support SQL Server and PowerShell together. The important piece here is that the executable is hard coded to use the SQLPS module. Now, sqlps.exe is deprecated and going away, but only in a future SQL Server release. This means if you're using anything prior to that release (and we have no idea when it will happen), you are going to be forced to use the SQLPS module if you are creating a Powershell script job step.

How can you get around this? This is a trick that's been around for a while. I try to avoid it, since it adds additional layers to running PowerShell in an agent job, but it does work. What you do is create a CmdExec agent job and call the Powershell executable to run your task. This approach gives you a LOT more control over how PowerShell works in your environment, but takes some extra work. Hat tip: Derik Hammer (@sqlhammer)
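The CmdExec step itself is just a call to the PowerShell executable. A minimal sketch (the script path here is hypothetical) looks like this:

powershell.exe -NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\MyAgentTask.ps1"

Because the step launches powershell.exe directly instead of sqlps.exe, the script can load whichever module it needs, including SqlServer.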

Stuck In The Past

Like I said before, I'm excited and happy the SQL Server Tools Team is improving the PowerShell experience for SQL Server. It has been a long time coming and I hope it continues. That is what makes the above items SO frustrating: while all these updates are being made, most of the user base won't be able to use them. In a world where SQL Server installations still run on 2000 and 2005, it can take a long while to upgrade your core installation to take advantage of these improvements.

Normally, I’d be ok with this. I know that when I use an earlier version of SQL Server, I don’t get the shiny new toys. With SQLPS, though, it’s different. For all these years the experience has been klunky and problematic. While some of this klunky-ness finally got fixed back in March, most of us are still stuck with the workarounds because we do not have a way to install the updates.

What can you do about it? I’m glad you asked. Currently there are two Connect items that address these directly:

You can also make your voice heard on the community Trello board. As users, we’re at the mercy of Microsoft for updating this stuff and making it available to us. The goal is to raise awareness of these issues and let the development teams know how important they are. I will definitely give Microsoft credit for being more responsive to user feedback.

 

Out With The SQLPS, In With The SqlServer

This month, the SQL Server tools team released an updated version of the SQL Server PowerShell module. It is a huge leap forward for using PowerShell and SQL Server together, with the tools team making a serious commitment to correcting the sins of the past. While we still have a long way to go, this is a promising path. At this time next year, I’m certain the PowerShell/SQL Server experience will be in the best place it has ever been.

Before I share my thoughts, let me give you some relevant links. You know, in case you missed something.

Not Playing Nice

Now for my turn! First off, I want to call out that with the July 2016 update we have a new module. SqlServer replaces SQLPS, which means that all the old functionality in the old module exists in the new one (along with some nice little fixes). What I discovered is that this also means you can’t use both modules at the same time. I ran into this because my profile automatically loaded SQLPS for me. When I went to import SqlServer, I got some unpleasant red text:

[Screenshot: error text from importing SqlServer while SQLPS is loaded]

Basically, it's a type conflict within the SMO. Now, there's no real reason why you should be running both modules side by side, but be aware that if you have anything that automatically loads SQLPS, you're going to have problems. Fortunately there's an easy way to fix this without having to restart your session:

Remove-Module SQLPS
Import-Module SqlServer

However, there’s another monkey wrench, especially for those of us using PowerShell scripts in SQL Agent jobs. Even with the release of SQL 2016 (which was just before these changes were made), SQL Agent still runs sqlps.exe and loads the SQLPS module. To check this, I made a simple agent job that executes:

Get-Module | Out-File C:\Temp\AgentPSModules.txt

The resulting output shows us the issue plain as day:

[Screenshot: Get-Module output from the agent job showing SQLPS loaded]

There are ways around this, of course. The key here is awareness and I’m sure that we’ll see the SQL tools team get an update to this when possible.

Invoke-SqlCmd: Now with more flexibility!

Enough of the bad news, let’s talk about some of the improvements. As I mentioned earlier, Laerte Junior blogged about some of the updates to Invoke-SqlCmd. These changes really help make the cmdlet more useful for managing and working with outputs. However, I want to look at one specific addition more closely: connection strings!

I’ve always found the biggest limitation of Invoke-SqlCmd was that I couldn’t specify connection string parameters. This was a significant issue when working with multi-subnet Availability Groups, because I couldn’t specify MultiSubnetFailover=True, a key connection string parameter. Now I can, along with a host of other options.

Using it is pretty simple. Just declare a connection string as you would with any other application and pass it to the -ConnectionString parameter:

$ConnectionString = 'Server=localhost;Database=tpcc;Trusted_Connection=true'
$sql = 'SELECT name,physical_name FROM sys.database_files'
Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql


This new parameter opens up a whole new range of possibilities. I love that Microsoft is providing multiple paths to work with their tools. This means that if you're a .NET guy who is more comfortable with connection strings, you don't have to constantly keep looking up all the different flags and parameters of Invoke-SqlCmd. Also, if you have a situation that can not be addressed by those standard flags, you can always use the standard .NET connection string parameters and get the job done.
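For the Availability Group scenario mentioned above, a sketch might look like this (the listener name is made up):

#Connection string pointed at an AG listener, with the multi-subnet flag set
$ConnectionString = 'Server=aglistener.contoso.com;Database=tpcc;Trusted_Connection=true;MultiSubnetFailover=True'
Invoke-Sqlcmd -ConnectionString $ConnectionString -Query 'SELECT @@SERVERNAME AS ServerName'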

(And yeah, I’m excited about this because it was my Connect item. :D )

Providing for the Provider

The last thing I want to touch on are a couple updates to the SQL Server provider. Now, the provider definitely has a bad name out there, but it can be a very useful tool for exploring and gathering SQL Server objects you want to work with. Since it is based on the SMO, it makes SQL Server object manipulation easier by giving you more options to collect your objects.

An important facet of the provider is that it behaves like a file system. How many of us have deleted files from the command line? Have you also used a PowerShell one-liner to delete old files, like backups? If you have, you might be familiar with two parameters: -WhatIf and -Confirm. These two switch parameters are extremely helpful because they can keep you from cutting yourself with that sharp PowerShell knife.

With the July 2016 update, the SQL Server provider now supports the use of these two switches. While using them may not be a common situation, it’s good to know that they are there. After all, it could be handy if you wanted to clean up some items from the command line, like maybe a junk database. Now you can both check what you’re going to do before you do it, along with getting a confirmation question when you go for the actual delete:

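For example, against a hypothetical junk database on a local default instance, the two calls would look something like this:

#Preview the delete without doing anything, then run it with a confirmation prompt
Remove-Item SQLSERVER:\SQL\localhost\DEFAULT\Databases\JunkDB -WhatIf
Remove-Item SQLSERVER:\SQL\localhost\DEFAULT\Databases\JunkDB -Confirm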

While this may not seem like a big deal, it’s another example of how the SQL tools team is trying to get the SQL Server PowerShell module up to the standards of other PowerShell implementations. For all of us who struggle daily with technical debt, I’m sure you can understand how significant this is.

Early Days

I know the SQL Server PowerShell module still has a bad reputation with a lot of folks out there. Honestly, it’s well deserved. The SQL Server Tools team is out to fix that and everything we’ve seen over the last 3-4 months shows that commitment. A lot of this has to do with the effort put forth by the SQL Server community and the ongoing Trello board where the community is working with the tools team to suggest and prioritize changes. Want to help? Join us by contributing changes and voting up the current suggestions. Microsoft wants to make this your product and this is how we can help.

Azure SQL Databases with #Powershell: Exporting and Importing

Previously we went over how Azure handles regular backups of your Azure SQL Databases. It is such a huge weight off an administrator’s shoulders when we only have to worry about restoring a database if something goes wrong. While many DBAs might struggle with letting go of the need to control and manage their backups, I think most of us will embrace the freedom from the tedium and worry of keeping an eye on this part of our disaster recovery.

There still is a need to take a backup of some sort. Over my career, while I have used backups for protecting my data, I have also used them for other tasks. Sometimes it is to snapshot the data at a point in time, such as before a code release or a major change. Other times a backup can serve as a great template for creating a new application database, especially if you have a federated database model. Whatever your use case, there are times we would need to snapshot a database so we can restore from it.

Exports

One approach would be to just mark down the time you want to use as your backup and restore from there, but this approach could be difficult to control and be tricky to automate. Azure offers us a better option: exporting the Azure SQL database in question to blob storage. We can restore (or, more precisely, import) this export to a new database.

To run an export is a simple call to the Start-AzureSqlDatabaseExport cmdlet. Just like the restore cmdlet, it will start the process in the Azure environment, running in the background while we do other work. To run an export, we need the following information:

  • Azure SQL database to export
  • The administrative login for the server hosting your Azure SQL Database (which we will define as a SQL Server storage context)
  • The storage container information

The only mildly frustrating thing with the export is that we need to use cmdlets from both the Azure module and the AzureRM module (assuming your storage blob is deployed using the Resource Manager model). Because of this, make sure you run Add-AzureAccount and Login-AzureRmAccount before you get started.

We first need to create a connection context for our Azure SQL Database server, using a credential for our admin login and the server name.

$cred = Get-Credential
$sqlctxt = New-AzureSqlDatabaseServerContext -ServerName msfazuresql -Credential $cred

Once we have established our SQL connection context, we will then need to set our storage context using a combination of AzureRM and Azure cmdlets.

$key = (Get-AzureRmStorageAccountKey -ResourceGroupName Test -StorageAccountName msftest).Key1
$stctxt = New-AzureStorageContext -StorageAccountName msftest -StorageAccountKey $key

Now we can then start the export. Notice, we need a name for the export, used in the BlobName parameter.

$exp = Start-AzureSqlDatabaseExport -SqlConnectionContext $sqlctxt -StorageContext $stctxt -StorageContainerName sqlexports -DatabaseName awdb -BlobName awdb_export

Since this only starts the export, we need a way to check on the status. We can check using Get-AzureSqlDatabaseImportExportStatus. Oddly enough, the status cmdlet requires the username and password to be passed separately and does not take a credential object.

Get-AzureSqlDatabaseImportExportStatus -RequestId $exp.RequestGuid -ServerName msfazuresql -Username $cred.UserName -Password $cred.GetNetworkCredential().Password

And then there is our blob.

[Screenshot: the exported .bacpac blob in the storage container]

There are two gotchas to keep in mind with both the export and the status check. The first is that you cannot overwrite an existing blob, so make sure your blob name is unique (or get rid of the old one). Also, you cannot check on the status of an export that has already finished. If you get an error, chances are your export has completed.

Imports

Once we have our export, we now have a “backup file” we can create new databases from. All we need to do is run an import of our blob. Just as for our export, we need some information for our import, which we will (unsurprisingly) run with Start-AzureSqlDatabaseImport.

  • The storage container and blob that we will import from
  • A destination server and credentials for the server
  • The name for our database

Now, since we are creating a new Azure SQL Database with the import, the process needs to define a service objective. By default, it will import the database at Standard 0 (S0), but you can define a higher or lower service tier if you want. To simplify things, we will go with the defaults and use the contexts from above, so all we really need to do to kick off the import is:

Start-AzureSqlDatabaseImport -SqlConnectionContext $sqlctxt -StorageContext $stctxt -StorageContainerName sqlexports -DatabaseName awdb_imp -BlobName awdb_export

Which, when completed, gives us a new Azure SQL Database created from our export blob:

[Screenshot: the new Azure SQL Database created from the import]

The Secret Sauce

What makes this black voodoo magic work? Is this some proprietary technique Microsoft has snuck in on us? Surprisingly, this is a bit of technology that has existed for some time now as part of SQL Server Data Tools: the BACPAC. A BACPAC is essentially a logical backup of a database, storing the schema and data as SQL statements.

This differs from a typical SQL Server backup, which stores your database pages directly in a binary format. Because of this, native backups are smaller and can be made/restored faster. However, they are more rigid, as you can only restore a native backup in specific scenarios. A logical backup, since it is a series of SQL statements, can be more flexible.

I don't know why Microsoft went with BACPACs over some native format, but because they did, we can also migrate a database from on-premises SQL Server to Azure SQL Database. This answers a common question I get: “How can I copy my database up to Azure SQL Database?” I want to look at this in my next post. Tune in next week, when we will create a BACPAC from a regular SQL Server database and migrate it up to Azure SQL Database.

Azure SQL Databases and #Powershell: Database Restores

A database administrator's first priority is to ensure their data is protected in the event of a disaster. Typical scenarios range from someone forgetting a WHERE clause to completely losing the data center that houses our servers. We are charged with thinking through all the possible scenarios and making sure we can support our company's Recovery Time and Recovery Point Objectives (RTO and RPO). This sort of responsibility is a big part of why being a DBA is a tough job.

Owning our servers gives us a lot of control of how our disaster recovery is constructed, but also a lot of responsibility. This can be a real challenge. We already know that cloud services, like Azure SQL Database, take that responsibility away by managing the infrastructure behind the scenes. These services also provide much of the disaster recovery protection, which is much more comprehensive and (in most cases) cost effective. In this post, we will review these options and how they can be managed using Powershell.

A Big investment

The most fundamental form of disaster recovery is database backups and restores. Typically, setting up backups is a lot of work. DBAs need to make sure there's enough storage available for backups, create schedules that accommodate business operations and support RTOs and RPOs, and implement jobs that execute backups according to those schedules. On top of that, there is all the work of handling failed backups and making sure disk capacity stays ahead of growth. It is a huge investment, but a necessary one, as losing a database can spell death for a company.

This is one of the HUGE strengths of Azure SQL Database. Since it is a service offering, Microsoft has already built out the backup infrastructure for you. All that stuff we talked about in the previous paragraph? If you use Azure SQL Database, you do not have to do any of it. At all.

What DBAs still need to manage is being able to restore databases if something happens. This is where Powershell comes into play. While we can definitely perform these actions using the portal, it involves a lot of clicking and navigation. I would much rather run a single command to run my restore.

Executing a Restore

Before we get started, we need to first load up the classic Azure module. While our work so far has been done with the AzureRM module, there are many Azure SQL Databases tasks that can only be managed using the Azure module and database restores are one of them. Unfortunately, it also means we have to login to our Azure account a second time, as authentication is managed separately between the modules. Using Add-AzureAccount is just like Login-AzureRMAccount and will bring up a graphical dialog for you to sign in with.

Import-Module Azure
Add-AzureAccount


To restore a database, we just need three things: the server where the original database came from, the name of the database we want to restore, and a name to restore the database as. When you restore an Azure SQL Database, you have to create a database that does not already exist, so there is no WITH REPLACE option. Starting the restore is then just a matter of calling the right cmdlet:

Start-AzureSqlDatabaseRestore -SourceServerName msfazuresql -SourceDatabaseName msfsqldb -TargetDatabaseName msfsqldb-restore


This only initiates the database restore operation and we have to wait for it to complete. To check on the status, we can use another Powershell cmdlet to display that information.

Get-AzureSqlDatabaseOperation -ServerName msfazuresql -DatabaseName msfsqldb-restore | Sort-Object LastModifyTime -Descending | Select-Object -First 1


All the pipeline calls are there to filter out additional operations and only show the most recent, as the cmdlet returns all restore operations that have ever happened for that server and database. It is difficult to predict how long a restore will take, as this is all handled behind the scenes. I have not tested restore times, but my test databases usually take a few minutes to restore.

There are other options we can declare, depending on how we want to manage our restore (a rough sketch follows this list), including:

  • A point in time, within a minute, that we want to recover to.
  • An optional destination server to restore the database to.
  • A dropped database as our source.
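As a sketch of the first two options (the target server name is made up, and the parameter names are from memory, so verify them with Get-Help Start-AzureSqlDatabaseRestore before relying on them):

#Point in time restore to a different server - verify parameter names with Get-Help
Start-AzureSqlDatabaseRestore -SourceServerName msfazuresql `
    -SourceDatabaseName msfsqldb `
    -TargetServerName msfazuresql2 `
    -TargetDatabaseName msfsqldb-pitr `
    -PointInTime '2016-05-07 08:00:00'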

One other gotcha to be aware of, albeit a minor one, is that you can not perform a restore of an Azure SQL Database until 15 minutes after the creation of the database. This should not be a concern for most, but it can get in the way of rapidly spinning up databases for demo purposes (ask me how I know!).

Other Scenarios

A simple database restore is the entry point for most recovery scenarios, but the platform has other options for disaster recovery and for managing copies of your database. In the next post, I want to dive into a couple of cmdlets we can use to copy our Azure SQL Database, as well as make a “backup file” to Azure blob storage that can be used to store a copy of our database in a particular state.

Azure SQL Databases with #Powershell: Managing your Databases

I meant to write this post a month ago, but then life happened. This is a continuation of my short series on Azure SQL Database and how you can manage it with Powershell. If you want, you can go back and read the previous two posts: Getting started and creating your database.

Administrative Access

Now I’d like to show you how to connect to the database, but since it’s been so long since I’ve looked at this demo, I’ve forgotten my administrative password! While it would be simple to blow away the Azure SQL server and database, there are many situations where this is not possible. Fortunately we have an easy way to reset the administrative password using the cmdlets.

$pw = ConvertTo-SecureString -AsPlainText -Force '********'
Set-AzureRmSqlServer -ResourceGroupName MSFAzureSQL -ServerName msfazuresql -SqlAdministratorPassword $pw

Walking through this, we just need to create a secure string for our password and then use the Set-AzureRmSqlServer cmdlet and pass the secure string to -SqlAdministratorPassword argument. Easy as that and we don’t even need to know what the previous password was. With this in mind, I also want to call out that you can only change the password and not the admin login name. While this is not such a big deal, be aware that once you have an admin login name, you are stuck with it.

Connecting and Querying

Armed with our reset password, now we can query the database. The easiest way to do this is to connect to your database with SQL Server Management Studio:

Remember that your server name will be what you named it followed by “.database.windows.net”. You must have SQL Server Authentication selected, and then just enter your login and password. SSMS will then connect up to your Azure SQL Database and the rest is very much like managing your on premises databases.

What about using Powershell? Again, not all that different. Let’s run a basic query to get some information about our database using Invoke-SqlCmd:

$sql = @"
SELECT
DATABASEPROPERTYEX ( 'msfsqldb' , 'Edition' ) as Edition
,DATABASEPROPERTYEX ( 'msfsqldb' , 'ServiceObjective' ) as ServiceTier
,DATABASEPROPERTYEX ( 'msfsqldb' , 'Version' ) as Version
,convert(bigint,DATABASEPROPERTYEX ( 'msfsqldb' , 'MaxSizeInBytes' ))/1024/1024/1024 as MaxSizeGB

GO
"@

Invoke-Sqlcmd -ServerInstance msfazuresql.database.windows.net -Database msfsqldb -Username msf -Password '*********' -Query $sql


As you can see, working with Azure SQL Database after it has been created and configured is not much different from a typical SQL Server installation. There is a lot of writing out there on the features and use of Azure SQL Database, so I will skip that here. The best place to start is Grant Fritchey (@GFritchey) and his Azure blog posts.

Remaining Flexible

The last item I want to cover here is managing your database size. One of the promises of the cloud and Azure is the flexibility to change your resources as your needs demand and not be stuck on whatever hardware you purchased when you first built your data center. There are a couple items you should consider when first creating your Azure SQL Database, but the great thing is that you are not tied into many of those options permanently.

There are several service tiers available to you, with a mix between standard and premium offerings. Obviously, each service tier has a different price point, so how do you know what is the right choice for your database? What complicates this is you have a mystery metric for performance: Database Transaction Units. Ostensibly this is the number of transactions per second your database should be able to provide, but there’s more to it than that. My general recommendation is you should base your choice on features you need and user connections you expect. If you find, after that, you have a performance bottleneck, you can adapt upwards incrementally.

How do you make this change? It is just one line of Powershell:

Set-AzureRmSqlDatabase -ResourceGroupName MSFAzureSQL -ServerName msfazuresql -DatabaseName msfsqldb -RequestedServiceObjectiveName 'S1' 

Then it is off to the races. The actual operation is not instantaneous and Azure needs some time to allocate the resources and adjust the settings on its side. The great news here is this does not interfere with the operation of your database at all and it will be available while the resizing happens. Existing connections will be maintained and even queries that are running will continue to run. The end result will simply be:

[Screenshot: the database now running at the new service tier]
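If you want to watch for the change from the command line instead of the portal, a quick sketch is to check the database's current service objective until it reports the new tier:

#Check the current service objective; it should show 'S1' once the resize completes
(Get-AzureRmSqlDatabase -ResourceGroupName MSFAzureSQL -ServerName msfazuresql -DatabaseName msfsqldb).CurrentServiceObjectiveName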

Ease of Use

As you can see, we can manage the access and scale of Azure SQL Database with a few straightforward commands. While these changes can also be made through the web portal, I find the Powershell approach simpler, since we can skip all the browsing through blade windows and the troublesome clicks. While these are not actions we would necessarily automate, using the cmdlets also ensures that our actions are more precise and consistent.

In my next post, I want to cover a topic near and dear to the hearts of DBAs everywhere: database restores. While backups for Azure SQL Database are managed for us, we need the ability to restore our data to a point and time of our choosing. Fortunately this is fairly easy to accomplish with Powershell. Stay tuned and I promise you will not have to wait another month for me to demonstrate this.