Art of the DBA

Giving Thanks for Powershell Resources

It seems an appropriate time to write this post, as this week is the American holiday of Thanksgiving, a time when we show appreciation for that which makes our lives better. I would like to thank some folks for the Powershell resources they’ve provided, resources that I’ve used to figure out the language. We all succeed by standing on the shoulders of giants and I cannot thank these giants enough. Since I also get a lot of questions from folks about good resources for learning Powershell, I want to help them out by providing this list.

Don Jones(@concentrateddon) and Jeff Hicks(@JeffHicks)

Let’s face it: You are not going to get very far into Powershell without running into these two names. Don and Jeff have contributed so much to the Powershell community. While we could go on and on about all of their material, let me give you the top three hits.

Powershell in a Month of Lunches is the go-to book for learning Powershell. For a language that takes users time to get their brains around, this book makes it easy. By breaking Powershell fundamentals into 25+ one-hour lessons, it gives you a gradual learning curve for understanding and using Powershell. If you don’t have it and are starting your Powershell journey, get it.

Powershell in Depth is a much larger book, but that’s because it is true to its title. Where Powershell in a Month of Lunches is a starter, easing the Powershell noob into how the language works, Powershell in Depth is a detailed reference manual. The book has many chapters that explore different aspects of the language, from basics of control flow and operators to the more involved topics of workflows and background jobs. I have this book for when I know what I want to do, but need to know how to make it work.

Powershell.Org is the Powershell community’s beating heart. While the other two resources I’ve listed can be used for studying and getting down the basics, this site provides a hub that contains articles, forums, and other helpful material for new and experienced users. The Powershell community is young, but growing rapidly. The best part is that the community has plenty of helpful folks willing to lend a hand.

Allen White(@SQLRunr)

Allen is a long-standing member of the SQL community and has been a Microsoft MVP for many years. While he’s busy with his new job at SQL Sentry, his many blog posts will still provide great code and patterns to DBAs looking to use Powershell. I’ve learned so much from Allen’s work that ‘thank you’ isn’t enough, but I try.

The great thing about Allen’s posts is that they are clear and understandable. The problem with Powershell is, much like other languages, scripts can easily spin out of control and become so convoluted that the purpose of them can be lost. Allen’s code is broken down so that you can understand and analyze each piece, taking away the valuable information and not getting drowned in if/else statements and SMO methods.

Ed Wilson(@ScriptingGuys)

If you’ve done any internet searches on Powershell, you have come across the Scripting Guys blog at some point. This fountain of information is a necessary stop on the World Wide Web for all Powershell enthusiasts. Ed often will provide nice little code snippets that we can learn from and build upon. Add into that the constant flow of other notable Powershell folks as guest bloggers, and you will learn numerous tips and tricks you can incorporate into your own code.  Ed’s conversational writing style makes the material less intimidating and the way he breaks posts up between tips and full discussions makes it easy to digest.

Microsoft Virtual Academy

We all learn in different ways, whether it is getting our hands dirty, studying books so we can digest information at our own pace, or listening to a lecture on the topic and watching an expert provide step-by-step examples. Most of us probably mix all three of these methods in one way or another. For those who prefer webinars and lectures, Microsoft’s Virtual Academy has many free videos for you that can walk you through many different topics within Powershell.

Sounds great, right? What’s even better is these sessions are given by none other than Jeffrey Snover(@jsnover) and Jason Helmick(@theJasonHelmick). Jeffrey, of course, is the father of Powershell and a Technical Fellow at Microsoft. Jason is the CFO of Powershell.Org and one of the original community members of the Powershell movement. Through the MVA courses, you are getting free training from two of the biggest names in and around Powershell.

Let me say that again so it sinks in: Through the MVA courses, you are getting free training from two of the biggest names in and around Powershell. (Hint: free)

The Tip of the Iceberg

This is by no means an exhaustive list of Powershell resources available to you. Instead, it is an answer to the question I hear after almost every presentation I give: Where do I start? There’s a lot to learn with Powershell and, like any other technical subject, you will be continuously improving your game. I just want to get you started in much the way I started.

Good luck!

Reporting on SQL Agent Jobs with #Powershell

SQL Agent jobs are a fundamental tool for managing our databases. Whether we are running regular backups, executing maintenance tasks, or performing a scheduled ETL process, being able to manage and run scheduled tasks is key to keeping our workload manageable. Of course, as our environment grows, keeping track of these jobs becomes harder and harder.

We can manage this growth by leveraging facets of Powershell. One aspect I continually advocate is Powershell’s ability to execute tasks easily across multiple servers. This means that our tools can easily adapt to our growth. Additionally, since Powershell is a framework that supports different approaches, administrators can use whatever method is most comfortable to them to accomplish their work.

Let’s take the case of reporting on failed jobs. Job failures usually mean that an administrator needs to troubleshoot an error, resolve it, and run the job again. Now, some jobs are critical and require immediate attention to fix. For this, the Agent provides error alerts and reporting that can execute at the time of failure. However, if it’s not as critical and can wait for someone to review it, we can build a reporting process that can be reviewed. I’d like to cover three techniques we can use in Powershell to accomplish this reporting and how you can leverage them yourself.

T-SQL and Powershell

The first technique is to simply combine a T-SQL query with some Powershell to execute it across multiple instances and collect the results. Let’s start with the SQL query (borrowed from a community example):

;WITH CTE_MostRecentJobRun AS
(
-- For each job get the most recent run (this will be the one where Rnk=1)
  SELECT job_id
    ,run_status
    ,run_date
    ,run_time
    ,RANK() OVER (PARTITION BY job_id ORDER BY run_date DESC,run_time DESC) AS Rnk
  FROM sysjobhistory
  WHERE step_id=0
)
SELECT
  @@SERVERNAME as Instance
  ,name AS [JobName]
  ,CONVERT(VARCHAR,DATEADD(S,(run_time/10000)*60*60 /* hours */
     +((run_time - (run_time/10000) * 10000)/100) * 60 /* mins */
     +(run_time - (run_time/100) * 100) /* secs */
  ,CONVERT(DATETIME,RTRIM(run_date),113)),100) AS [TimeRun]
FROM CTE_MostRecentJobRun MRJR
  JOIN sysjobs SJ ON MRJR.job_id=SJ.job_id
WHERE Rnk=1 -- most recent run only
  AND run_status=0 -- run_status of 0 is a failed run
  AND [enabled] = 1

Running this within SQL Server Management Studio is easy enough and the query returns exactly what we would expect. But we want to run it against multiple servers. With a few lines of Powershell, we can create a report across all of our instances:

#I've already put the query into the $sql string variable.
#Not displayed here to save space
$servers = @('PICARD','RIKER','SPOCK','KIRK')
$servers | ForEach-Object {Invoke-Sqlcmd -ServerInstance $_ -Database msdb -Query $sql}

[Screenshot: failed job results collected from all four instances]

Pretty simple and it is flexible enough that all we have to do is add servers to our named list and we’re off to the races. If you want to make it even more dynamic, you could always leverage Central Management Server to populate your server list.
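
For example, here is a minimal sketch that pulls the server list from a CMS, assuming PICARD hosts the Central Management Server and using the documented msdb view for shared registrations:

# 'PICARD' as the CMS host is a placeholder
$cms = 'PICARD'
$query = 'SELECT server_name FROM msdb.dbo.sysmanagement_shared_registered_servers'
$servers = (Invoke-Sqlcmd -ServerInstance $cms -Database msdb -Query $query).server_name
$servers | ForEach-Object {Invoke-Sqlcmd -ServerInstance $_ -Database msdb -Query $sql}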


This is effective, but I struggle a little with the SQL query. It’s good, but it suffers from the structure of the job tables in msdb, and accounting for that makes the query a little convoluted. It would be helpful if we could reference a simple data set like the Job Activity Monitor in SSMS.

Of course, this is a leading question on my part. There is a way to do this and it is by leveraging the SQL Server Management Objects (SMO). This .Net library is the API interface for working with SQL Server and is what SSMS is built on. Because it is a .Net library, we can also access it through Powershell.

What we want from the SMO are the Smo.Agent.JobServer and Smo.Agent.Job classes. These represent the SQL Server Agent and its jobs, and allow us to interact with everything within them. Using these and some handy Select-Object calls, we can accomplish what we want without the long T-SQL query:

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo') | Out-Null

$servers = @('PICARD','RIKER','SPOCK','KIRK')
$servers | ForEach-Object {(New-Object Microsoft.SqlServer.Management.Smo.Server $_).JobServer.Jobs |
    Where-Object {$_.LastRunOutcome -ne 'Succeeded'} |
    Select-Object @{Name='Server';Expression={$_.Parent.Name}},Name,LastRunDate}

[Screenshot: SMO results listing server, job name, and last run date]

We’ve accomplished the same task, but now without the complicated SQL query. Granted, this method is predicated on understanding the SMO object model, but this can be figured out using Get-Member and the MSDN pages that describe the methods and properties. Because everything in Powershell is an object, we can do so much more with the information contained within.

The Provider

The third method is actually the same as using the SMO. The SQL Server provider is an additional layer of abstraction that lets us browse SQL Server components as if they were a file system. It uses the SMO to do its work, so it will be handled in a similar way and give us the same results as the pure SMO. However, we can skip some of the .Net instantiation techniques in favor of a simple directory lookup:

$servers = @('PICARD','RIKER','SPOCK','KIRK')
$servers | ForEach-Object {Get-ChildItem "SQLSERVER:\SQL\$_\DEFAULT\JobServer\Jobs\" |
    Where-Object {$_.LastRunOutcome -ne 'Succeeded'} |
    Select-Object @{Name='Server';Expression={$_.Parent.Name}},Name,LastRunDate}

The output is exactly the same as the SMO, which makes sense because this method works in the same manner. However, because all the necessary libraries and objects are loaded when we load the provider (Import-Module SQLPS), we can skip the object instantiation. It removes another layer of complexity.

Wrapping Up

These three methods all give you the same information; it’s up to you to decide which approach works best for your environment. For DBAs who want to work mostly with T-SQL and are still getting used to Powershell, the first option makes a lot of sense. For system administrators who have had SQL Server thrust upon them, the third option might work better as it doesn’t require any T-SQL knowledge. It’s all a matter of what is comfortable. However you want to work with your data, Powershell gives you the tools to get the information in a way that makes sense for you.

#TSQL2SDAY: You’re Gonna Cluster that GUID

Welcome to the continuing blog party that is T-SQL Tuesday. This month’s host is Mickey Stuewe (@SQLMickey) with a challenge to blog about data modelling mistakes we’ve seen. Unfortunately, it’s an easy topic to blog about because proper data modelling often falls by the wayside in order to rapidly deliver applications. I’m not saying it should, but the fact of the matter is many developers are so focused on delivering functionality that they do not think about the long-term implications of their design decisions.

One such mistake is the selection of Globally Unique Identifiers (GUIDs) as a data type. Developers love this data type for a lot of reasons, some of them very good. The fact that GUIDs are globally unique provides a way to keep data unique when migrating, along with a nicely obfuscated surrogate key that can help protect user data. All in all, I cannot really blame people for wanting to use this data type, but the problem is that SQL Server does not manage it very well behind the scenes.

The problems with GUIDs in SQL Server are fairly well documented. As a quick review for context, the issue is that since GUIDs are random values, it is hard to index them efficiently and those indexes will rapidly fragment. This means slower inserts and more disk space taken up by the index. I created the following two tables (one with a GUID, one with an INT, each clustered on that key) and inserted 2000 rows into each:

-- Sequence and closing parentheses reconstructed; clustered keys per the discussion below
create sequence NonGuidSeq start with 1 increment by 1;

create table GUIDTest(
    orgid uniqueidentifier default NEWID() primary key clustered
    ,orgname varchar(20)
);

create table NonGUIDTest(
    orgid int default (NEXT VALUE FOR NonGuidSeq) primary key clustered
    ,orgname varchar(20)
);

The results from sys.dm_db_index_physical_stats tell the story from a fragmentation and size perspective:

[Screenshot: index fragmentation and size comparison between the two tables]

So GUIDs are bad. With as much as has been written on this topic, it feels a bit like shooting fish in a barrel. However, GUIDs will find their way into databases because of their usefulness to the application layer. What I want to talk about is a common misconception around “solving” the GUID problem: clustering on a sequence ID but keeping the GUID in the table.

Let’s start by creating a new table to have a GUID and cluster on a sequence:

-- Again, the sequence and closing parenthesis are reconstructed
create sequence GuidSeq start with 1 increment by 1;

create table SeqGUIDTest(
    seqid int default (NEXT VALUE FOR GuidSeq) primary key clustered
    ,orgid uniqueidentifier default NEWID()
    ,orgname varchar(20)
);

[Screenshot: fragmentation and size results for SeqGUIDTest]

As expected, less fragmentation and size. This is good, right? It can be, but here’s the problem: the sequence is completely meaningless to our data and our queries will likely not use it (unless we build in additional surrogate abstraction to relate the sequence to our GUID). Let’s compare query plans for our GUIDTest and SeqGUIDTest tables where we query each for a specific orgid value:

[Screenshot: execution plans comparing the two queries]

The query where the GUID is a clustered index is far more efficient than the one against the table where we cluster on a sequence. This is because it can leverage the index, meaning we will get a seek instead of a scan. While clustering on a sequence field saves us on space and fragmentation, it ends up hurting us when trying to do data retrieval.

If we were tuning the query against SeqGUIDTest, the next logical step would be to create a non-clustered index on orgid. This would improve the query, but in order to make it useful (and avoid key lookups), we would need to include all the columns of the table. With that, we have completely negated any benefit we got from clustering on a sequence column, because the situation is now:

  • A clustered index that is the table, thus using all that disk space.
  • A non-clustered index that also uses as much space as the table (it has all the columns in it as well).
  • The non-clustered index now has the same fragmentation problem we were trying to avoid on the clustered index.

So while trying to avoid a problem, we have made the problem worse.

There are two lessons here.  The first is the classic “it depends” case. There are no hard and fast rules to how we implement our databases. You need to be flexible in creating your design and understand the complete impact of your choices. Clustering on a GUID is not great, but in the above case it is far better than the alternative.

The second lesson is to understand your data and how it is going to be used. When we build tables and indexes, we need to be conscious of how our data will be queried and design appropriately. If we use abstractions like surrogate keys, then we need to make sure the design is built around that abstraction. In the above examples, we could cluster on a sequence key, but only if that sequence key has some sort of meaning in our larger database design. Building a better database is about understanding our relationships and appropriately modeling around them.

Thanks to Mickey for the great T-SQL Tuesday topic! Please check her blog for other great posts throughout the day.

A Month of SQLPS: The Thrilling Conclusion

Here we are, 45 cmdlets and a provider later. When I started this series, I did it for two reasons. The first was the general lack of documentation for the cmdlets and a hope that I could bridge that gap for the community. The second was that I realized I didn’t understand much of the functionality that was there myself, and writing about it would help me build that knowledge. I can definitely say that now, at the end of it all, I’ve had my eyes opened to what we can and can’t do.

The Good

I have to say, I’m pretty pleased with most of the cmdlets available to us.  The cmdlets for managing Availability Groups are great and I make daily use of them. I would like to see some Get- cmdlets for some of those instead of using the path, but in a pinch I can use Get-Item within the provider context to accomplish the same thing. The other alternative is to create SMO server objects and reference the appropriate properties to create the necessary objects.

As I blogged about a while back, I do like Invoke-SqlCmd and the ability to get query results as datarow objects. This is a handy way to work with data within a Powershell script. I also like the backup and restore cmdlets for abstracting away some of the SQL work into Powershell. I know I can run any of these using T-SQL, but the abstraction makes things easier for me when writing automation.

I also really like the provider. I know it’s clunky, but once you’re past the gotchas, there are a lot of great things you can collect using it, and the collection itself is fairly simple. Using the provider is an effective way to collect server-wide inventories.
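
For example, a rough sketch of a database inventory (instance names are placeholders and I’m assuming the SQLPS module is available):

# Collect a quick database inventory across instances via the provider
Import-Module SQLPS -DisableNameChecking
'PICARD','RIKER' | ForEach-Object {
    Get-ChildItem "SQLSERVER:\SQL\$_\DEFAULT\Databases" |
        Select-Object @{Name='Instance';Expression={$_.Parent.Name}},Name,Size,RecoveryModel
}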

The Bad

Some of the functions just seem unnecessary. It strikes me that some cmdlets, like Convert-UrnToPath and Encode/Decode-SqlName, were developed for specific use cases and have limited real-world use. This is not surprising, as we have seen cases in the past where Microsoft decided internally that something was useful, but real-world adoption did not happen. These cmdlets mostly annoy as clutter that adds confusion.

Of course, the real problem is the initial experience of the provider. So many DBAs get discouraged by the timeouts and strange behavior of the provider that they ask if Powershell is even worth it. The usual path for them is to go back to what they know and code things in T-SQL, which, while cumbersome and inelegant for many tasks, just works. The struggle is to get DBAs over that initial hump with Powershell so that they can get on with figuring out what to use the tool for.

The Meh

As we’ve seen, the SQLPS module has some head scratchers. I’m still not 100% clear on why the team needed to leverage the Cloud Adapter Service when creating some of these functions. Tasks like adding a firewall rule, enabling authentication modes, and restarting services already have valid ways to be managed through Powershell, so these cmdlets are really not necessary. Again, we’re left to deal with clutter in the module.

What’s Next

I’ve gotten a lot of feedback in person on this series and it sounds like a lot of people out there have really benefited from my posts. This is great. If you’ve missed the series or maybe a post or two within it, the full list is easy to find.

What I would like to do is to take some time over the next few months and compile this information into an e-book, along with some patterns for use. I also intend to build a presentation for next year that is a deep dive into both the provider and the module. I really do like the SQLPS module and the provider, as it can be extremely powerful if used correctly. I hope that, now that you’ve read this series, the wheels are turning in your brain about where and how you can implement SQL Server and Powershell automation in your environment.

A Month of SQLPS: Forget-Me-Nots

You’ve probably gotten the impression over this series that it was a little ad hoc. It’s true, because this was as much about exploring the module as documenting it. I figured the best approach was just to dive in and start writing about the ones that could be fit together. This worked out, but over the series I did sections on cmdlets and missed a couple. I’m going to cover them now.

The first is New-SqlBackupEncryptionOption, which I should have covered when I talked about the backup and restore cmdlets. This cmdlet supports the encrypted backup features of SQL 2014 and allows you to generate an Smo.BackupEncryptionOptions object that can be used by either the Backup-SqlDatabase or Set-SqlSmartAdmin cmdlets. You use it to define a certificate or key and an encryption algorithm so that your backup file will be encrypted:

$encryption = New-SqlBackupEncryptionOption -Algorithm Aes256 -EncryptorType ServerCertificate -EncryptorName AzureManagedBackup
Backup-SqlDatabase -ServerInstance PICARD -Database AdventureWorks2012 -EncryptionOption $encryption -BackupFile C:\Backups\AW_Encrypted.bak

Restoring the database does not require you to declare an encryption option; you just need the appropriate certificate or asymmetric key to be available. This makes encrypting backups for SQL 2014 pretty simple. The only real challenge is how you manage your keys.

The other cmdlet I skipped over was Test-SqlDatabaseReplicaState, which should have been included when I talked about databases in Availability Groups. This cmdlet will use Policy Based Management to evaluate all the databases in an Availability Group replica set to determine their health. As with the other AG cmdlets, it is a little cumbersome to use because it relies on the provider paths. And, of course, it has its own path to reference: AvailabilityGroups\<AGNAME>\DatabaseReplicaStates.

$replicastates = Get-ChildItem SQLSERVER:\SQL\KIRK\DEFAULT\AvailabilityGroups\ENTERPRISE\DatabaseReplicaStates
$replicastates | Test-SqlDatabaseReplicaState


We get a handy little report of the policy evaluations that is a collection of PowerShell.Hadr.DatabaseReplicaHealthState objects. This could be used to drive alerts or remediations of replicas that become unhealthy within the AG.
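
As a minimal sketch, assuming the returned objects expose a HealthState property like the other HADR health state objects do, we could filter for trouble like this:

# Flag database replica states that did not evaluate as healthy
# (the 'Healthy' value is an assumption based on the other HADR health state objects)
$unhealthy = $replicastates | Test-SqlDatabaseReplicaState | Where-Object {$_.HealthState -ne 'Healthy'}
if ($unhealthy) { $unhealthy | Format-Table -AutoSize }  # or hand the list to your alerting of choice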

That does it! That’s all the current cmdlets in the SQLPS module, at least as of SQL Server 2014. With the impending release of 2016, I’m sure we’ll get some new ones. As of this post, I haven’t yet looked at SQL Server 2016 CTP 3, so I’m not sure what to expect. Watch for that in a future post. Tomorrow I’ll give you a quick wrap up of the series.

A Month of SQLPS: Odds and Ends

The last few cmdlets to cover don’t fit in any nice buckets. I’ll just address each one on its own to keep it simple. One I won’t cover is Invoke-SqlCmd, because I already did that a couple of months ago. I will say that I use Invoke-SqlCmd a LOT and recommend that you look into using it for your own scripts.

What’s left? Well, one cmdlet is Set-SqlAuthenticationMode. As the name implies, it allows us to change the authentication mode of our instance. The cmdlet requires to the Cloud Adapter Service to function and a Windows Administrator credential for the target machine. When called, you can specify one of two modes: Integrated and Mixed. Normal and Unknown will show up as accepted values by Intellisense, but if you try and use them the cmdlet will fail.

If we try to set Mixed, the cmdlet will require a SQL credential that contains the password for ‘sa’. When run, this cmdlet will force a service restart (as normal for changing an authentication mode). It will ask you to confirm a restart, which you can avoid using the -Force flag.
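
Here’s a minimal sketch of the Mixed mode call described above. The server name is a placeholder and the parameter usage is my approximation from the help file, so treat this as a sketch rather than a verified recipe:

# Sketch only: 'PICARD' is a placeholder; parameter usage is an assumption from the help file
$winCred = Get-Credential -Message 'Windows administrator on the target machine'
$saCred = Get-Credential -UserName 'sa' -Message 'Password for sa (required for Mixed)'
Set-SqlAuthenticationMode -ServerInstance 'PICARD' -Mode Mixed -SqlCredential $saCred -Credential $winCred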


Is it useful? It’s really nice and handy to have a cmdlet that will wrap up the checking for credentials and manipulating the service, but needing that Cloud Adapter Service is frustrating. It’s another layer of functionality that I need to provide, one that does not offer me a whole lot of value outside of this one call. Especially when we already have a good way to do it using the SMO (hat tip Allen White(@SQLRunr)).

Next up is a cmdlet useful to those who leverage Policy Based Management: Invoke-PolicyEvaluation. Interestingly enough, Microsoft actually has this cmdlet documented, which helps. Running it is fairly simple: get a policy (either from an XML declaration or a SQL Server object), then call the cmdlet with that and a target server:
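
A minimal sketch of that pattern, where the policy file path and server name are placeholders:

# Evaluate an exported policy XML file against a target instance
Invoke-PolicyEvaluation -Policy 'C:\Policies\DatabaseAutoClose.xml' -TargetServerName 'PICARD'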


Neat, right? A nice little report that my policy evaluation failed. Just one problem: I don’t know WHAT failed. As of writing this, I’ve been trying to figure out how to display the results in a readable format in the Powershell window. You can get the detailed results by using the -OutputXML flag to generate an XML string (not an XML doc object, unfortunately) which will contain the detail, but this isn’t all that helpful.

One nice little piece of functionality is the -AdHocPolicyEvaluationMode parameter. By default it is set to Check, which means the cmdlet simply reports whether or not the policy passed evaluation. However, we can pass Configure and then the cmdlet will attempt to correct the violations. I’m still testing with this, but it could be handy for cleaning up your environment using PBM.
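
A sketch of the Configure variant, using the same placeholder policy and server:

# Configure mode attempts to correct violations instead of just reporting them
Invoke-PolicyEvaluation -Policy 'C:\Policies\DatabaseAutoClose.xml' -TargetServerName 'PICARD' -AdHocPolicyEvaluationMode Configure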

The final cmdlet is Convert-UrnToPath. This cmdlet is built to assist with the Universal Resource Names (URNs) that the SMO uses to name and organize SQL Server objects. Full disclosure: I’ve never used these. Fortunately, the URN for an object can be found as a property of the related SMO object:


The cmdlet will convert one of these URN paths to its appropriate SQL Server provider path:
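
For example, a hedged sketch that grabs the Urn off a database object and converts it (instance and database names are placeholders):

# Grab the URN from an SMO object, then convert it to a provider path
$db = (New-Object Microsoft.SqlServer.Management.Smo.Server 'PICARD').Databases['AdventureWorks2012']
Convert-UrnToPath -Urn $db.Urn
# Should return something like SQLSERVER:\SQL\PICARD\DEFAULT\Databases\AdventureWorks2012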


This can be useful for those who work a lot with URNs, but I honestly have not had a need up until this point. The only real use case I can think of is getting a provider path from an SMO object I’ve created.

That about wraps it up! There are two cmdlets that I should have covered in earlier posts that I will touch on tomorrow, then we’ll wrap up with one last post of thoughts on the SQLPS module now that I’ve had a chance to explore it through this blog post series. Thanks for following along, and feel free to send any feedback via mike [at] [this domain] or the comments, and I will try to cover it in my wrap-up post.

A Month of SQLPS: Encoding and Decoding

Welcome back and thanks for waiting. There are just a couple cmdlets left to cover in this series, so I expect we’ll wrap up later this week. The next two are odd ones for a couple of reasons. The functions to cover are for SqlNames:

Get-Command -Module SQLPS *SqlName


What do these cmdlets even do? The help files describe Encode-SqlName as a function for converting special characters in SQL Server names to characters that can be read by the Powershell provider. This is necessary because SQL Server objects support characters in their names that are reserved within Powershell, specifically \:./%<>*?[]|

Using the cmdlets is simple enough. For the Encode-SqlName cmdlet, all we need to do is pass the name with those characters and the output will be something we can use within a provider path. Decode-SqlName will reverse this process, taking the provider path value and converting it back to its SQL Server object name:
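
A quick sketch of the round trip, with a made-up object name:

# Reserved characters get percent-encoded for provider use, then decoded back
$encoded = Encode-SqlName 'My:Odd/Database'   # returns 'My%3AOdd%2FDatabase'
Decode-SqlName $encoded                       # returns 'My:Odd/Database'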


Do we even need these cmdlets? I really don’t think so. Firstly, I would strongly advise against using any of these reserved characters in a SQL Server object name. There are too many issues and problems that can arise, beyond this one, when these are used. Secondly, we already have ways of handling special characters in path names:

  • Wrapping the full name in single or double quotes
  • Escaping the characters with the Powershell escape character: ` (the backtick/backquote)


The additional hiccup is that these are the two cmdlets that throw the warning when the SQLPS module is loaded. Encode and Decode are not approved Powershell verbs, which means you will be warned that those cmdlets will be harder to discover because they don’t fit the language standard. While this is the case, the module still gets loaded and will operate just fine.

I’ve mentioned cmdlets that I like and use. I can’t say that I have ever used these cmdlets and don’t know what I would recommend them for. Honestly, I’d like to see the Microsoft team deprecate these and simply recommend the current methods for handling special characters in names.

A Month of SQLPS: Databases and Availability Groups

The final piece of the Availability Group puzzle is managing the databases in them. Managing typically means adding and removing databases, but we can also start and stop the replication of data to other nodes. These tasks can all be handled via T-SQL and cmdlets contained within the SQLPS module.

There are four cmdlets in the module for database management in Availability Groups:

Get-Command -Module SQLPS *SqlAvailabilityDatabase


These are easily identifiable based on their names. Since adding and removing a database from an AG is the most common set of tasks, we’ll first look at Add-SqlAvailabilityDatabase. Typical of the other SQLPS cmdlets for AGs, it will take the path or Smo.AvailabilityGroup object for the node where you are adding the database. It will also take the database name. Simple enough, right? What’s interesting is how the behavior of the cmdlet differs depending on where you execute it.

Let’s see what happens if we use the cmdlet for a database on the primary node. Whenever we add a database, we always have to add it to the primary node first. In order to demonstrate the differences, I’ll run the example with the -Script switch so we can see the T-SQL it will execute:

Add-SqlAvailabilityDatabase -Path SQLSERVER:\SQL\KIRK\DEFAULT\AvailabilityGroups\ENTERPRISE -Database AdventureWorks2012 -Script


This output is exactly what we would expect if we followed the Microsoft documentation on adding a database to an AG. This further reinforces that cmdlets are simply wrappers for T-SQL. Now what if we ran this same command on a secondary node:

Add-SqlAvailabilityDatabase -Path SQLSERVER:\SQL\SPOCK\DEFAULT\AvailabilityGroups\ENTERPRISE -Database AdventureWorks2012 -Script


There is a lot more here, but I’ve cut out a lot of the extra stuff and called out the important bit, which is an ALTER DATABASE command instead of the ALTER AVAILABILITY GROUP call we need to make on the primary node. The cut out material is T-SQL that validates that there is an AG for the secondary node to join, then runs the T-SQL for joining a secondary.

This is one of the handier bits of code within the SQLPS module. It is a little annoying to remember that there are two different pieces of T-SQL syntax for adding a database to an AG. The SQLPS cmdlets account for that by wrapping up the logic in a single call and sorting out the correct command for you.
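
Putting that together, a sketch of the full operation (without -Script this time) is just the same call against each node:

# Add the database on the primary, then join it on the secondary
Add-SqlAvailabilityDatabase -Path SQLSERVER:\SQL\KIRK\DEFAULT\AvailabilityGroups\ENTERPRISE -Database AdventureWorks2012
Add-SqlAvailabilityDatabase -Path SQLSERVER:\SQL\SPOCK\DEFAULT\AvailabilityGroups\ENTERPRISE -Database AdventureWorks2012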

The Remove-SqlAvailabilityDatabase cmdlet works in a similar fashion. The main difference is that our reference is now to a Smo.AvailabilityDatabase object, found in AvailabilityDatabases under your AG in the provider. Yes, I hear ya’, how many different object paths do we need to keep track of? Anyway, the cmdlet will take a different action depending on which node it executes on:

Remove-SqlAvailabilityDatabase -Path SQLSERVER:\SQL\KIRK\DEFAULT\AvailabilityGroups\ENTERPRISE\AvailabilityDatabases\AdventureWorks2012 -Script

Remove-SqlAvailabilityDatabase -Path SQLSERVER:\SQL\SPOCK\DEFAULT\AvailabilityGroups\ENTERPRISE\AvailabilityDatabases\AdventureWorks2012 -Script


The final two cmdlets will control data replication. We can suspend data replication in an availability database when necessary using the Suspend-SqlAvailabilityDatabase and Resume-SqlAvailabilityDatabase cmdlets. They serve as wrappers for ALTER DATABASE [] SET HADR SUSPEND/RESUME and follow the same restrictions. You call them with the availability database path or using the Smo.AvailabilityDatabase object for the database you want to alter:

Suspend-SqlAvailabilityDatabase -Path SQLSERVER:\SQL\KIRK\DEFAULT\AvailabilityGroups\ENTERPRISE\AvailabilityDatabases\AdventureWorks2012

Resume-SqlAvailabilityDatabase -Path SQLSERVER:\SQL\KIRK\DEFAULT\AvailabilityGroups\ENTERPRISE\AvailabilityDatabases\AdventureWorks2012

This wraps up the AG cmdlets for the SQLPS module. As you can see, there is quite a bit of functionality here, and it’s pretty solid. I have some issues with the use of paths and the lack of Get- cmdlets, but overall there is good support for automating the creation of Availability Groups. So much so that they are a major part of my presentation at the 2015 SQL PASS Summit next week. If you’re there and would like to know more, check it out on Thursday (10/29) at 3:30 PM.

Speaking of Summit, since I will be there this series will go on hold for a week. When I get back, we’ll wrap up with the handful of remaining cmdlets. Thanks for reading and we’ll see you back here on November 2.

A Month of SQLPS: AG Listeners

In order to properly communicate with an Availability Group, we need an AG Listener. Straight from the Microsoft documentation, a listener is a Virtual Network Name that represents the AG, allowing applications to connect to it instead of directly to the SQL instance. The listener will handle routing to the appropriate replica within the AG and allow for connections to be consistent regardless of failovers and node changes.

To create a listener, we need a unique name within the domain and an IP address for each subnet the AG lives in. Creating one in SQL Server Management Studio is just a matter of entering the information into the GUI, found under the Availability Group management section. Now, there’s a bit of a gotcha with creating the listener: it doesn’t really create SQL Server objects, but cluster resources instead. This means that while we can create it in SSMS, after it is created we have to manage it through the Failover Cluster Manager.

Anyway, we’re here to talk about how to manage this using the SQLPS cmdlets, right? There are three cmdlets for listeners:

Get-Command -Module SQLPS *SqlAvailabilityGroupListener*


We’ll look at New-SqlAvailabilityGroupListener first, used to create a new listener. For this cmdlet we’ll need either the provider path for the Availability Group or the Smo.AvailabilityGroup object. Additionally, we also need the name, IP address, and TCP port for the listener. The name is easy enough, it can be any unique name, but we have options for the IP address: declare the address to be acquired via DHCP or give it a static IP address combined with a subnet mask. Personally, I prefer static IP addresses for my listeners and I tend to name my AG listener after the AG itself:

New-SqlAvailabilityGroupListener -Name 'ENTERPRISE' -StaticIp '10.10.10.5/255.255.255.0' -Port 1433 -Path 'SQLSERVER:\Sql\KIRK\DEFAULT\AvailabilityGroups\ENTERPRISE' # the address/mask value is a placeholder

If you look at the help file for this cmdlet, you’ll notice the -StaticIp argument takes an array. This is because we could have multiple IP addresses for the listener if the AG spans multiple subnets. Note, you can still only have one IP address per subnet, so keep this in mind as you provision your listener.
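
For a multi-subnet AG, a sketch might look like this (both address/mask values are placeholders):

# One address per subnet, passed as an array to -StaticIp
New-SqlAvailabilityGroupListener -Name 'ENTERPRISE' -StaticIp '10.10.10.5/255.255.255.0','10.10.20.5/255.255.255.0' -Port 1433 -Path 'SQLSERVER:\Sql\KIRK\DEFAULT\AvailabilityGroups\ENTERPRISE'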

After the listener is created, we can view the listener in a few places. First is in SSMS under Availability Groups. The second is within the Failover Cluster Manager, as the cmdlet creates cluster resources. Finally, we could look at it using the provider:


Set-SqlAvailabilityGroupListener will only allow you to do one thing: change the port the listener uses. You cannot change addresses with it, only add and remove them, and you cannot change the listener name at all, only remove the listener and create a completely new one. To use this cmdlet, you’ll need the provider path for the listener itself, specifically on the primary node.

Set-SqlAvailabilityGroupListener -Path SQLSERVER:\SQL\KIRK\DEFAULT\AvailabilityGroups\ENTERPRISE\AvailabilityGroupListeners\ENTERPRISE -Port 1433

Add-SqlAvailabilityGroupListenerStaticIp allows us to add additional static IP addresses to the listener. This cmdlet will also create cluster resources for these addresses, just like New-SqlAvailabilityGroupListener, and each address must be unique within its subnet for the listener.
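
A hedged example, assuming the listener lives under an AvailabilityGroupListeners node in the provider and using a placeholder address:

# Add an address for a second subnet to the existing listener
Add-SqlAvailabilityGroupListenerStaticIp -Path SQLSERVER:\SQL\KIRK\DEFAULT\AvailabilityGroups\ENTERPRISE\AvailabilityGroupListeners\ENTERPRISE -StaticIp '10.10.20.5/255.255.255.0'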

If you need to remove any of these objects, you’ll probably want to do it through the Failover Cluster Manager or the Failover Cluster cmdlets. You can use Remove-Item if you reference the provider path of the listener and this will drop the entire object. However, managing the IP addresses of the listener itself can only be done by managing the individual cluster resources.
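
That Remove-Item call would look something like this, again assuming the AvailabilityGroupListeners provider path:

# Drops the entire listener, including its cluster resources
Remove-Item SQLSERVER:\SQL\KIRK\DEFAULT\AvailabilityGroups\ENTERPRISE\AvailabilityGroupListeners\ENTERPRISE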

Using these cmdlets will give us a functioning Availability Group and allow us to connect to it using the AG name. With this complete, there is one more step to make an AG 100% functional: install a database into the AG and enable it for synchronization. In the last post for this week, we will cover the cmdlets for enabling and managing databases within an AG.

A Month of SQLPS: Availability Group cmdlets

In the previous post, we covered how to create and manipulate Availability Group replicas. Assembling these replicas is necessary for building and using SQL Server’s AlwaysOn technology for high availability. Now it is on to the cmdlets in the SQLPS module that actually create the Availability Groups themselves. As always, we’ll start with looking at the cmdlets in the module:

Get-Command -Module SQLPS *SqlAvailabilityGroup


We talked about Join-SqlAvailabilityGroup in relation to the cmdlets for managing replicas. This cmdlet is important because while we can create replicas, they still need to be joined to the Availability Group. Calling this cmdlet is simply a matter of passing the provider path or the Smo.Server object that represents the node, then the Availability Group name:
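
A quick sketch of joining the SPOCK replica to our example AG:

# Join a secondary node to the Availability Group
Join-SqlAvailabilityGroup -Path SQLSERVER:\SQL\SPOCK\DEFAULT -Name 'ENTERPRISE'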


This is necessary once the Availability Group is created, but what about actually creating it in the first place? For that we have the New-SqlAvailabilityGroup cmdlet. To use this, we need a couple things: the name of the Availability Group, the path or SMO.Server object for the primary node, and a collection of replicas.

New-SqlAvailabilityGroup -Name ENTERPRISE -Path SQLSERVER:\SQL\KIRK\DEFAULT -AvailabilityReplica $replicas

Again, even if we create this with the cmdlet, we still have to join the replicas. Make sure you keep that gotcha in mind. Otherwise, this is all we need to get a basic Availability Group up and running. Doesn’t seem hard, does it? Really, the bulk of the work is done by creating the replica objects.
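
As a hedged sketch of that work, here’s the New-SqlAvailabilityReplica -AsTemplate pattern from the previous post feeding the creation call; the endpoint URLs and mode choices are placeholders:

# Build replica templates against the primary's SMO version, then create the AG
$primary = New-Object Microsoft.SqlServer.Management.Smo.Server 'KIRK'
$replicas = @()
$replicas += New-SqlAvailabilityReplica -Name 'KIRK' -EndpointUrl 'TCP://KIRK.mydomain.com:5022' -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version $primary.Version
$replicas += New-SqlAvailabilityReplica -Name 'SPOCK' -EndpointUrl 'TCP://SPOCK.mydomain.com:5022' -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version $primary.Version
New-SqlAvailabilityGroup -Name ENTERPRISE -Path SQLSERVER:\SQL\KIRK\DEFAULT -AvailabilityReplica $replicas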

If we want to change any settings for the Availability Group, we would use Set-SqlAvailabilityGroup. This is for changing settings like backup preference, failure condition levels, and the AG health check timeout. Now that an AG exists, the cmdlet requires the Smo.AvailabilityGroup object or the provider path for that AG:

Set-SqlAvailabilityGroup -Path SQLSERVER:\SQL\KIRK\DEFAULT\AvailabilityGroups\ENTERPRISE -AutomatedBackupPreference Secondary

Calling Remove-SqlAvailabilityGroup will destroy the AG with no extra action required:

Remove-SqlAvailabilityGroup -Path SQLSERVER:\SQL\KIRK\DEFAULT\AvailabilityGroups\ENTERPRISE

This is all pretty useful, but the strength of Availability Groups is fast failover to another node. Normally, to accomplish this you would connect to the node using SQL Server Management Studio and run the appropriate ALTER AVAILABILITY GROUP command. The Switch-SqlAvailabilityGroup cmdlet saves us a little bit of extra effort by avoiding the step of logging into the SQL Server. We can just call the cmdlet and specify the AG path on the node we want to fail to:

Switch-SqlAvailabilityGroup -Path SQLSERVER:\SQL\SPOCK\DEFAULT\AvailabilityGroups\ENTERPRISE

It’s important to note the difference in the path. Just as we want to run the SQL from the target node for the failover, we need to use the provider path for that target node.

We know the Test-SqlAvailabilityReplica will evaluate some Policy Based Management checks, and so will Test-SqlAvailabilityGroup. We can pass the usual arguments and get back a PowerShell.Hadr.AvailabilityGroupHealthState object with a HealthState property.
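
A quick sketch of checking health from the primary’s provider path:

# Returns a health state object; inspect its HealthState property
$agHealth = Test-SqlAvailabilityGroup -Path SQLSERVER:\SQL\KIRK\DEFAULT\AvailabilityGroups\ENTERPRISE
$agHealth.HealthState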

By covering these cmdlets, we now have a good foundation for getting started with Availability Groups and Powershell. There are two other steps needed to complete our fully functioning AG: creating a listener and deploying a database. The next few posts will go over those cmdlets in detail, so you can understand the best way to automate the creation and management of this great piece of SQL Server technology.