Art of the DBA

Desired SQL Configuration with #Powershell – Part 2

Last week, as part of T-SQL Tuesday 68, I introduced you to two Powershell functions for validating and setting your SQL Server configurations. While useful, they depend on some sort of configuration source, which raises the question: how do we get this source? This post covers the third function, which will help you create your configuration source using an existing server’s values, allowing you to use them directly or modify them as you see fit.

Get-SQLConfiguration

The concept of Get-SQLConfiguration is simple: get the sys.configurations values of a SQL Server instance and export them as a collection of name/value pairs that can be used by the other functions. The additional criterion to consider is that Test-SQLConfiguration and Set-SQLConfiguration both reference configurations by their SMO property names, so our configuration source must use those same names. The result is a function that uses the SMO to perform the export:

function Get-SQLConfiguration{
  param([string]$InstanceName='localhost'
    ,[string[]] $Filter
  )

  #Requires the SMO assemblies (loaded by Import-Module SQLPS)
  $smosrv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $InstanceName
  $output = @()

  #If a filter is supplied, export only the named properties;
  #otherwise export every configuration property
  if($Filter){
    $configs = $smosrv.Configuration | Get-Member -MemberType Properties |
      Where-Object {$Filter.Contains($_.Name)}
  }
  else{
    $configs = $smosrv.Configuration | Get-Member -MemberType Properties |
      Where-Object {$_.Name -ne 'Properties'}
  }

  #Emit each configuration as a Name/DesiredValue pair
  foreach($config in $configs){
    $output += New-Object PSObject -Property ([Ordered]@{'Name'=$config.Name;
      'DesiredValue'=$smosrv.Configuration.$($config.Name).RunValue})
  }

  return $output
}

The function itself is not complex, but I did want to add one other feature to it. A simple export might be overwhelming, considering the number of configuration properties within SQL Server. Most of these will actually be left at their defaults. To aid us, I also added a Filter parameter that accepts an array of SMO configuration property names and will only export those properties.

There are a couple of patterns for using this function. The simplest is to run it against one server and use the output to check another:

$configurations=Get-SQLConfiguration -InstanceName PICARD
Test-SQLConfiguration -InstanceName RIKER -Configs $configurations

This call will essentially compare and find the differences between the two instances. If we wanted to get more specific, we could use our Filter parameter to only compare one or two configurations:

$configurations=Get-SQLConfiguration -InstanceName PICARD -Filter @('FillFactor')
Test-SQLConfiguration -InstanceName RIKER -Configs $configurations

It is also possible to store and manage these configurations using the Get-SQLConfiguration function. Because we’re working with simple objects, we have a variety of storage options. The easiest way to get started is to use the built-in Export-Csv cmdlet and save our configurations as a delimited file (I like pipe, but commas will do):

Get-SQLConfiguration -InstanceName PICARD | Export-Csv -Path .\PicardConfig.cfg -NoTypeInformation

With the text file that is generated, we can go in and edit our configs, removing what we want and editing values we need to update. The result can then be stored in source control for tracking and audit purposes. We could even go as far as to load the configurations into a database table and call them from a central administrative server. There are a lot of possibilities.
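Pulling the saved file back in for use is just as easy. Here’s a minimal sketch of that round trip, assuming the comma-delimited file produced by the Export-Csv call above:

$configs = Import-Csv -Path .\PicardConfig.cfg
Set-SQLConfiguration -InstanceName RIKER -Configs $configs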

The Code

While I’ve published the functions in these two blog posts, there’s an easier way to get ahold of them. These are part of my SQLConfiguration module published on GitHub, which you can download and use (along with my other SQL configuration functions). Please keep in mind this module and the code contained within should still be considered a work in progress and the functions are provided as is. The usual disclaimers apply, so be careful using it and test it thoroughly before making regular use of it. Of course, if you do find any issues, I would love to know about them so I can review and update my code.

There are many additional items that can be added to this. The largest that comes to mind is dynamic configuration values, as these functions consider your configs to be static. Items like minimum memory, maximum memory, and maximum degree of parallelism depend on the hardware configuration and there are various ways to calculate these values. There are also configurations not stored in this part of SQL Server, such as default database directories and SQL Server agent settings. These functions have plenty of room for enhancement. They’re a good start, however, to giving you additional tools to manage your environment, helping to make your SQL Server deployments more consistent and reliable.

#TSQL2SDAY 68: Desired SQL Server Configuration with #Powershell

Welcome to another month of T-SQL Tuesday, started by Adam Machanic (@adammachanic) and hosted this month by Andy Yun (@SqlBek). The topic for this month’s blog party is “Just say ‘NO’ to defaults!”, a call to share what we have learned and how we manage SQL Server defaults in our environments. While you will probably find lots of posts out there on what you should or should not set your SQL Server configurations to, I wanted to share with you a post on some tools that can help you manage these configurations.

Management Overhead

Many database professionals, SQL Server or otherwise, learn very quickly that you don’t want to stick with your default settings. The database vendors usually try to set some general values that can apply to most situations, but these typically don’t last long in any enterprise. You probably have a set of configurations you change from the defaults whenever you install a new SQL Server instance.

The struggle, whether you have 5 SQL Servers or 500, is keeping these settings consistent. Sure, you can script out your changes (and should), but how do you manage those changes over time? If someone changes a setting, how do you enforce your configurations? What if someone else sets up an instance and doesn’t apply your scripts? It becomes an ugly problem to manage.

Since I’m a Powershell fan, I’m also a fan of Desired State Configuration. While the technology is still new on the scene, it’s quickly turning into an effective way to manage your server builds. When thinking about Andy’s topic and the problem of managing changes to your instance defaults, my ‘eureka’ moment was that the DSC model could easily be applied to managing these configurations. The result was three Powershell functions that provide tools you can use to control the settings in your environment.

Test-SQLConfiguration

The first function I wrote evaluates the configurations on an instance. To do this, I started with the assumption that I’d have a collection of Name/DesiredValue pairs, where each Name is an SMO Configuration class property and each DesiredValue is the value I wanted to check. The function loops through each pair and, if the current value does not match, adds it to an output array. It then returns a collection of configurations that did not match my desired state.

function Test-SQLConfiguration{
  param([string]$InstanceName='localhost'
    ,[Parameter(Mandatory=$true)][PSObject] $Configs
  )
  #Requires the SMO assemblies (loaded by Import-Module SQLPS)
  $smosrv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $InstanceName
  $output = @()

  #Compare each desired value against the instance's running value
  foreach($config in $Configs){
    if($config.DesiredValue -ne $smosrv.Configuration.$($config.Name).RunValue){
      $output += New-Object PSObject -Property (@{'Configuration'=$config.Name;
                   'DesiredValue'=$config.DesiredValue;
                   'CurrentValue'=$smosrv.Configuration.$($config.Name).RunValue})
    }
  }
  return $output
}

To test this, I put together a very simple pipe-delimited file of configurations I wanted to check. These configuration names had to match the SMO property names (which aren’t difficult to acquire), and the resulting file and output look like this:

SQLServerConfig.cfg
 Name|DesiredValue
 FillFactor|80
 MaxDegreeOfParallelism|4
 CostThresholdForParallelism|40
 XPCmdShellEnabled|0

[Screenshot: Test-SQLConfiguration output]
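For reference, here’s a sketch of how that file feeds the function (Import-Csv’s -Delimiter parameter handles the pipe):

$configs = Import-Csv -Path .\SQLServerConfig.cfg -Delimiter '|'
Test-SQLConfiguration -InstanceName PICARD -Configs $configs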

This function provides a quick view of which SQL Server configurations don’t match my desired values. What’s cool is that I can now take this function and easily run it across my entire enterprise.
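A minimal sketch of that enterprise run might look like this (the instance names are hypothetical, and $configs is the collection loaded above):

foreach($server in @('PICARD','RIKER','WORF')){
  Test-SQLConfiguration -InstanceName $server -Configs $configs
}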

[Screenshot: Test-SQLConfiguration results across multiple instances]

With this function, I’ve removed the burden of validating my SQL Server instance configurations. Since the output is an object, there are many flexible options for reporting and collecting the information. Writing the results to a text file is a snap, as is uploading them to a database table for ongoing auditing.

Set-SQLConfiguration

The next step, after we’ve checked configurations, is to correct the violations. This next function works much like Test-SQLConfiguration and takes the same two parameters. The difference is that this function will alter the value and then reconfigure the instance to apply the change.

function Set-SQLConfiguration{
  param([string]$InstanceName='localhost'
    ,[Parameter(Mandatory=$true)][PSObject] $Configs
  )
  #Requires the SMO assemblies (loaded by Import-Module SQLPS)
  $smosrv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $InstanceName
  $output = @()
  $reboot = $false

  foreach($config in $Configs){
    if($config.DesiredValue -ne $smosrv.Configuration.$($config.Name).RunValue){
        $row = New-Object PSObject -Property (@{'Configuration'=$config.Name;
               'DesiredValue'=$config.DesiredValue;
               'CurrentValue'=$smosrv.Configuration.$($config.Name).RunValue})
        #Alter the configuration value and apply it to the instance
        $smosrv.Configuration.$($config.Name).ConfigValue = $config.DesiredValue
        $smosrv.Configuration.Alter()
        $row | Add-Member -MemberType NoteProperty -Name 'ConfiguredValue' -Value $smosrv.Configuration.$($config.Name).RunValue
        $output += $row
        #Non-dynamic configurations require a service restart to take effect
        if($smosrv.Configuration.$($config.Name).IsDynamic -eq $false){$reboot = $true}
    }
  }

  if($reboot){Write-Warning 'Altered configurations contain some that are not dynamic. Instance restart is required to apply.'}

  return $output
}

[Screenshot: Set-SQLConfiguration output with restart warning]

Because the function is built to accept a collection of configurations, we can use the same pipe-delimited file (or any delimited file) to update the instance. Note the warning in the output. Not all configurations are dynamic, so the function will alert you if a non-dynamic configuration was changed. If you change a non-dynamic configuration, you will need to restart the SQL Server service to complete the change.
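For a default instance, that restart could be as simple as the line below, run on the host itself (named instances use a service name of the form MSSQL$InstanceName, and you should obviously schedule this for an appropriate window):

Restart-Service -Name 'MSSQLSERVER' -Force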

Wrap Up

The last challenge is building out the configurations to check against. The third function I wrote handles that, but covering it here would make this blog post too long. Next week I will cover the Get-SQLConfiguration function, additional techniques for extending these functions, and where you can get the code. Please note that this code is a work in progress, so use it with caution. However, this also means that if you have any suggestions, I’d love to hear them so I can turn this into a functional tool that the community can use.

Thanks again to Andy for a great T-SQL Tuesday topic. Keep your eye on his invite blog post and the #TSQL2SDAY hashtag on Twitter for other great contributions.

Speaking Updates – July 2015

This is a short break from my usual blog posts to share some information on upcoming presentations I’m giving that might interest you.

Performance Palooza – Virtual Performance Chapter

Coming up in two weeks (July 23), I’ll be part of the annual Performance Palooza that is run by the Performance PASS Virtual Chapter. I’m pretty excited, as this will be a new session I’ve built from some work I’ve been doing on benchmarking SQL Server.

SQL SERVER BENCHMARKING: THE POWERSHELL SPEEDOMETER

How fast do you think you were going? The only way to be sure is to have a reliable way to measure your performance. The challenge with SQL Server is that there are many aspects of your stack that you need to measure and understand. Powershell, with its ability to access and report on all these different levels, can give you a robust tool to benchmark your SQL Server. This session will cover benchmarking methodologies, the tools Powershell offers for capturing performance information, and will demonstrate how you can use them to measure performance statistics. After attending this session, you will understand how to effectively use Powershell to measure your database speed.

RSVP here if you’re interested: https://attendee.gotowebinar.com/register/6348286936653801474

Upcoming Precons

I’ve had the opportunity to present my Introduction to Powershell for SQL Server DBAs now at both SQL Saturday Albuquerque and SQL Saturday Atlanta. This has been a great time for me, and I’ve already had comments from attendees about how they’ve used Powershell in their current jobs. If you plan on attending either SQL Saturday Omaha or SQL Saturday Denver, you have an opportunity to learn some Powershell and add this robust tool to your skill set.

Introduction to Powershell for SQL Server DBAs

This full-day session will help build your foundation for learning and using Powershell. While we will be focusing on using Powershell as a SQL Server Database Administrator (or Developer), much of the material will also review general use for system administrators. By attending this training, you will gain an understanding of what Powershell is, how you can use it in your day-to-day management of your environments, and what specific things can be done using Powershell in a SQL Server environment.

RSVP to either of these links if you are interested:

IT/DEV Connections – Las Vegas, NV #ITDevCon

I’m super excited to be speaking at this year’s IT/DEV Connections conference in Las Vegas. This is a huge opportunity for me to share some of the useful Powershell techniques that I’ve built out in my work environments. I will be presenting two sessions:

There are TONS of great speakers at this event, so it’s a huge learning opportunity for attendees and will be a great time.

PASS Summit 2015 – Seattle, WA #PASSSummit2015

The granddaddy of ’em all. I’m honored to be part of this year’s Summit speaker lineup. This is the premiere SQL Server conference and will be my fifth year in attendance. I’ll be speaking on a Powershell topic that will not only help out a lot of DBAs with their day-to-day work, but will also give you some insights into how I think the world of database infrastructure is changing with the two magic buzzwords of “DevOps” and “the Cloud”. If you’re the kind of DBA who has to manage the struggles of deploying and maintaining SQL Servers in a highly available environment or a large enterprise, this session can make your life a whole lot easier.

PowerShell and the Art of SQL Server Deployment

In today’s tech world, IT professionals are driven to perform tasks faster and more consistently. Automation is the watchword for our success, whether we are deploying SQL Server to private virtual clouds or public platforms such as Azure IaaS. This session focuses on rapidly creating and configuring SQL Servers, using PowerShell and desired state configuration. You will see practical examples of how to create repeatable builds of SQL Server that can be deployed in a matter of minutes. We will also review how these techniques can be applied to both public and private cloud platforms, helping to ensure success in any situation.

Even if you don’t want to attend my session, you should seriously consider registering for the Summit. It’s the largest pure SQL Server conference in the world and has been a tremendous boost for my career over these past five years. And now I get to speak there! (How cool is that?)

Checking Last Policy Based Management Execution

Contrary to popular belief, it’s not all about Powershell for me. Yes, I do a lot with it and want to share that knowledge, but I’m still a SQL Server DBA, and there are many challenges I encounter day to day where I use my other skills. Policy Based Management (PBM) is one of those areas where I need to apply that knowledge and solve problems.

A Dirty Window

My current shop makes heavy use of PBM for alerting and monitoring our environment. For more on using PBM, check out stuff by John Sterrett (@johnsterrett) or this book. It’s a very useful tool, but it takes some getting used to. We have policies in place to alert on data file free space, last backups, database owners, and a host of other checks for the health and configuration of our environment. So many, in fact, that identifying what has failed can be a chore when things go wrong.

With PBM, it is very easy to know when a policy fails, but a lot more difficult to see exactly what has failed and why. Usually, the easiest way to see a failure is in SQL Server Management Studio (SSMS):

[Screenshot: policy failure shown in SSMS]

As you can see, this doesn’t tell us much. The next stop is the PBM history, which you can access by right clicking on the Management -> Policy Management node in SSMS and selecting ‘View History’:

[Screenshot: the ‘View History’ option for Policy Management in SSMS]

The result opens up the SQL Server log viewer. As this is a simple example, it may not be obvious, but the log viewer is a chore to navigate for history. If you have more than a few policies, you will have a lot of records to load and wade through, making troubleshooting extremely convoluted. When you’re a DBA looking to fix a problem, this is a real obstacle. Fortunately, there is another way.

I Can See Clearly Now

Most of the information in SQL Server is stored in system views. The SQL Server team has gone to great lengths with the Dynamic Management Objects (DMVs) to make sure most of the information displayed through the GUI is stored somewhere behind the scenes as a queryable object. With PBM, these system views are found in msdb with the name dbo.syspolicy*. I dug in and found the following views that had the information I was looking for:

  • msdb.dbo.syspolicy_policies
  • msdb.dbo.syspolicy_policy_execution_history
  • msdb.dbo.syspolicy_policy_execution_history_details

The query pattern from there is pretty typical: Show the most recent set of history records for a policy. The result is the following query:

;with last_pbm_execution as (
select
    policy_id
    ,max(history_id) as history_id
from
    msdb.dbo.syspolicy_policy_execution_history
where
    end_date > GETDATE()-1
group by
    policy_id
)
select
    p.name
    ,h.end_date
    ,case h.result when 1 then 'Success' else 'Failure' end result
    ,d.target_query_expression
    ,d.exception
    ,d.exception_message
    ,d.result_detail
from msdb.dbo.syspolicy_policies p
    join msdb.dbo.syspolicy_policy_execution_history h on (p.policy_id = h.policy_id)
    join last_pbm_execution lpe on (h.policy_id = lpe.policy_id and h.history_id = lpe.history_id)  
    left join msdb.dbo.syspolicy_policy_execution_history_details d on (h.history_id = d.history_id)
order by p.name

This query gives me a quick, easy to read report for the server of each policy and what the last result was. In the event of a failure, the target_query_expression, exception, exception_message, and result_detail columns give me the info about why the policy failed, helping me to fix it.

As DBAs we need quick answers and need to spend as little time as possible wading through logs of what went right just to get at what went wrong. This can be exacerbated when a tool doesn’t have good reporting right out of the box. Within SQL Server, we can overcome this by understanding the system views and the information stored there. That knowledge will help us get those quick answers, accelerating our troubleshooting and allowing us to make effective use of our tools.

#Powershell and SQL Server: SQLPS Challenges and Hurdles

Over the last two posts, I’ve covered what the SQLPS provider is and how you can get started with it. Now it’s time for other side of the story. If you’ve started working with the provider or done some reading on the web, various hurdles have probably popped up. These hurdles can frustrate and discourage you, so let’s discuss them before that happens.

Slower than Molasses in January

The initial complaint you’ll usually see with the provider is that it’s slow. Mind-numbingly slow. So slow that it breaks tab completion in the ISE. Unfortunately, due to the way the provider was implemented on top of the SMO, we don’t have a lot of options.

What can we do? Unfortunately, there are no magic workarounds. This limitation is why you will see a lot of Powershell/SQL development either use the SMO directly or implement .Net code that avoids the provider completely. The challenge is that these approaches can be too advanced for administrators who are not used to code development. My recommendation is to be patient. The provider is your entry point and your starting point. As you get more comfortable with Powershell, you will find the SMO and other .Net methods will perform better for you. Begin with understanding the language.
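If you want to see the difference for yourself, a quick comparison on your own instance is easy with Measure-Command (your numbers will vary with your environment):

#Time a directory lookup through the provider...
Measure-Command {Get-ChildItem SQLSERVER:\SQL\localhost\DEFAULT\databases}

#...then time the same list pulled straight from the SMO
Measure-Command {
  $smosrv = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'
  $smosrv.Databases
}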

What You Least Expect

Another challenge with the provider is that it sometimes behaves in a fashion that doesn’t quite make sense. As with the speed, we’re tied to how the SQL Server team at Microsoft implemented the provider. It is, after all, an API, and this means we can only use it as it was written. What I struggle with is that there are elements of the provider that weren’t thought all the way through, and occasionally they catch me by surprise.

One example of this is an interesting piece of behavior I discovered recently with several online colleagues (Paul Timmerman, Chris Sommer, Derik Hammer, and Johan Bijnens). Basically, we found that there was no real way to refresh a Central Management Server listing within an existing Powershell session. Once a CMS registry was populated in your provider session, you were stuck with it. If you made changes to your CMS in SSMS, you wouldn’t see those changes propagated until you opened a new Powershell session.

While frustrating, it’s not a show stopper. The challenge, however, is that there are other little land mines like this in the provider and they pop up when you least expect it. This particular issue is not really a problem because you can always save a script and re-open your host. However, it should behave a little more consistently and at least give the user the opportunity to execute a manual refresh of the cached object. I’m hoping that the SQL Server team will be able to address this in future versions of SQL Server.

The Mini-Shell

Speaking of things that don’t behave as you would expect, I want to cover the notorious mini-shell. If you have ever right-clicked in the Management Studio Object Explorer, you’ve probably seen a Start Powershell option.

[Screenshot: the Start Powershell option in SSMS Object Explorer]

The intent of this option is to open a command window with the SQL provider already loaded. This shell also duplicates what SQL Server does when you run a Powershell script within a SQL Server Agent job step. What catches people by surprise is how it behaves: in SQL Server 2012 and earlier, the mini-shell loads Powershell version 1.0 or 2.0, no matter what version is installed on the host machine. This can be annoying because modules won’t be automatically loaded and some cmdlets aren’t available to you. This was fixed in SQL 2014, so now whatever version is installed on the host machine will be used.

While this can present a hurdle for developing scripts to be used in the SQL Agent, it will not block you completely. You can always double-check the version by starting the Powershell task in Management Studio and interrogating the $PSVersionTable system variable. Beyond that, I haven’t had much reason to use the mini-shell for any Powershell tasks and prefer to work in the ISE or the standard Powershell command window host.
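That check is a one-liner once you’re in the mini-shell (or in a SQL Agent Powershell job step):

$PSVersionTable.PSVersion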

A Fixer-Upper

The SQLPS provider is clunky. It suffers from a little neglect by the SQL Server team and some slapdash coding in its implementation. In light of the Powershell tools provided for Windows Server, Exchange, and Active Directory, it’s pretty frustrating to see the SQLPS provider languish the way it has. I have some theories about that, and I hope someday I can talk with the product team to see what changes can be made.

However, it should be noted that if you’re just getting started with Powershell, the SQLPS provider is still the best place to start. If you’ve read my other posts, you know by now there has been an effort to make the provider an intuitive path for building automation around SQL Server. Don’t let the hurdles stop you. The provider will let you get far enough on its own and help you develop patterns that you can later improve and enhance.

Please feel free to leave any comments below on your experience with the provider and other questions you may have regarding it.

#Powershell and SQL Server: Using the SQLPS Provider

In the previous post, I gave an overview of what the SQLPS provider is and how it works. Now I want to pull back the curtains a bit and demonstrate how you can navigate the provider and use it effectively for managing your SQL Server instances. For those new to Powershell, this can be an effective way to start using the language without a lot of “overhead” for learning syntax and methods.

Being Direct

Focusing on the concept that providers allow us to browse parts of the Windows stack as directory structures, let’s start by reviewing what “directories” are available with SQLPS:

[Screenshot: directory listing of the SQLSERVER:\ drive]

Right away you can see that there’s a lot of familiarity here, as we can see many components of SQL Server that we’re used to seeing. For the time being, let’s go straight into the SQL Database Engine and look around by browsing down into the SQL\LOCALHOST\DEFAULT path:

[Screenshot: browsing down to the SQL\LOCALHOST\DEFAULT path]

Pay attention to the structure of the directory paths:

[Screenshot: the directory path structure]

This pattern is used throughout the provider, offering a consistent structure for our objects. The next thing to notice is that the list of items under our instance is pretty standard and looks very much like what we would find under the object browser in SQL Server Management Studio. The provider is merely an abstraction for all the usual parts of SQL Server that we’re used to seeing.

Objects and Details

The most basic thing we can use the SQLPS provider for is getting lists of our SQL Server objects. Just like getting a directory listing of our files, we can list out our databases with a simple command:

[Screenshot: dir of the databases folder]

Note the use of the -Force flag. If you leave it off, the provider will only return user databases; including it displays all databases, including the system databases.
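To see the difference, a quick sketch from within the provider:

cd SQLSERVER:\SQL\LOCALHOST\DEFAULT\databases
dir        #user databases only
dir -Force #user and system databases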

What should also be noted is that there are many more properties available to us, which we can see using Get-Member. As DBAs, a common question is when our databases were last backed up. Within the SMO, each database has a .LastBackupDate property that shows the last full backup, which we can easily include in our directory call:
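One way to write that call (a sketch, not necessarily the exact one behind the screenshot below):

dir -Force | Select-Object Name, LastBackupDate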

[Screenshot: database listing with LastBackupDate]

Getting Things Done

While listing objects is effective (and you can list logins, agent jobs, and other parts of your instance in much the same way), what else can we do? The key to the next step is recalling that objects within the provider are all SMO objects, and all of these objects have attendant properties and methods. We can leverage these to accomplish some pretty standard tasks.

One thing I tend to do with my databases is set the owner to ‘sa’. This common task is easy to do in T-SQL and I certainly don’t want to discourage that option, but it’s also an easy task to manage in Powershell by using the .SetOwner() method:

cd SQLSERVER:\SQL\LOCALHOST\DEFAULT\databases
$dbs = dir

foreach($db in $dbs){
  $db.SetOwner('sa')
}

#List the directory contents again and look at the owner
dir

With these few lines, I can quickly change the owner of all my databases. How cool is that? Now, one note about permissions. Any actions you execute through the provider will use your current security context. Typically, DBAs will have enough permissions to perform these tasks, but if the account you connect to does not have rights for a task, it will fail.

And so much more

There are so many more examples of what we could use the provider for. Simply using directory lookups provides a lot of flexibility, but there’s more we can do by leveraging the methods and properties of the provider objects. From this point, it can get very advanced and complex. Focus on the fact that the SQLPS provider is a starting point, a place where we can quickly get in and manage SQL Server with Powershell.

Next up are some of the warts of the SQLPS provider. A lot of folks who have been using the provider have run into some challenges. In a lot of ways, how the SQLPS provider works is a little clumsy. This doesn’t mean you shouldn’t use it, but I do want to cover some of the frustrations so that you are aware of them and they don’t prevent you from effectively using this aspect of Powershell.

#Powershell and SQL Server: The SQLPS Provider

One of the key entry points for using Powershell and SQL Server is the SQLPS provider. If you’ve been reading my blog for any length of time, you’ve seen me make use of the provider for one task or another. It’s an incredibly useful tool and can help you manage some basic tasks in SQL Server without a lot of additional coding on your part.

The struggle that I see is there’s a lot of confusion about what the provider is and how it works. There are also challenges and gotchas around what the SQLPS provider does that frustrate both new and long time users of Powershell. I hope to provide some clarity over the next few posts as I review the basics of the SQLPS provider, how I’ve used it, and how to work around some of these challenges.

What are providers?

The idea of providers is a foundational concept in Powershell. They are fully documented over on MSDN, or directly in Powershell by using ‘Get-Help about_Providers’. Let me give you a starting point, though: the general idea is that providers expose different parts of the stack as file system paths in your shell. They give users an intuitive method to browse components like the registry, environment variables, and SQL Server as if they were directory structures.

You can start by using ‘Get-PSDrive’ from your favorite Powershell host. You’ll see something like this:

[Screenshot: Get-PSDrive output]

Note that I’ve called out the important bits here. You can consider the Name column the drive letter for each provider, and the provider types are listed on the right. Understand that the drives we are used to seeing (C:\, E:\, etc.) are FileSystem providers and essentially equivalent to the other providers, like environment variables. The trick is that each provider behaves differently once you start working within it, but accessing them all works the same way.

Using Providers

To use a different provider is just a matter of running some familiar commands. You can switch to a different provider by using “cd” and see what’s in the provider by using “dir” (which are aliases for Set-Location and Get-ChildItem). All pretty easy, right? Try this, for example:

cd Env:\
dir
Get-Content PATH

Already you can see how handy this is, as it gives us a much easier way to view the PATH variable than that tiny little GUI box under Environment Variables.

Let’s talk specifically about the provider for SQL Server. The provider is not part of Powershell by default; you have to load it. Doing this is straightforward: you just need to import the SQLPS module, which is installed with the SQL Server 2012+ client tools.

Import-Module SQLPS
#If you want to see what's in the module, use this...
Get-Command -Module SQLPS

When you execute this command, you’ll get a warning about cmdlet names. You can safely ignore it; the provider has loaded, and I’ll cover this message in a later post. A gotcha here is that if your client tools are older than SQL 2012, you’ll need to load a snap-in instead. This is an obsolete method, so I strongly recommend you upgrade your client tools to SQL 2012 or better.
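For reference, the legacy load looks something like this (these snap-in names come from the SQL 2008-era client tools):

Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100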

The Provider Foundation

Before we get into actually using the provider, we should review what makes the provider tick. Remember that everything in Powershell is a .Net object. Everything. With this in mind, we can look at the building blocks that create the SQL Server provider by using the trusty Get-Member cmdlet:

cd SQLSERVER:\SQL\localhost\DEFAULT\databases
dir | Get-Member

Don’t focus on the directory we are drilling into, we’ll cover that in the next post. You are going to get a sizeable output, but you should focus on the first part:

[Screenshot: Get-Member output showing the Smo.Database type]

Notice that the type name is Microsoft.SqlServer.Management.Smo.Database. This shows us that the SQLPS provider is built on the long-standing SQL Server Management Objects (SMO) .Net library.

I call this out because if you browse the interwebz for examples of using Powershell and SQL Server, you’ll see a mix of code: some folks use the provider structure, others instantiate SMO objects directly. What you should understand is that these approaches use the same fundamental building blocks. The difference is that the provider gives you some additional abstractions (which means less code), whereas creating the SMO objects yourself forgoes them (translating to more code, but more control).
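As a quick illustration (the database name here is hypothetical), both of these paths end up with the same Smo.Database object:

#Provider: the path abstraction does the connection work for us
$db = Get-Item SQLSERVER:\SQL\localhost\DEFAULT\databases\MyDatabase

#SMO directly: more code, but full control over the connection
$smosrv = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'
$db = $smosrv.Databases['MyDatabase']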

Moving Along

Providers give you a lot of flexibility within your shell experience. Their intuitive nature allows for an easy entry into using Powershell, whether you have a coding background or not. This folds quite well into the original intent of Powershell: to provide a useful scripting experience for administrators who do not have development experience.

In my next post, I will cover specifics for using the SQL Server Powershell provider. We will review the fundamentals and how they can be leveraged for common administrative tasks. If you have specific questions about the provider and how it can be used, I encourage you to leave them in the comments section so that I can answer them in a later post. If you prefer, you can also email your questions to ‘mike at this blog domain’.

#TSQL2sDay: #Powershell and Extended Events

I’ll be the first to tell you I’m not a fan of extended events. The why is a whole other blog post unto itself, but since T-SQL Tuesday is all about learning, I figured I’d take a stab at things. Furthermore, since I like to talk about Powershell, it seems like a good opportunity to explore a little about how we can manage extended events through that language.

The SQLPS Provider

The simplest entry point for interfacing with SQL Server is the SQLPS provider. Yes, I know there are a lot of gripes about it (and that’s yet ANOTHER blog post I’m working on). However, it’s still a good starting point. Loading the provider is easy:

Import-Module SQLPS

Providers in Powershell are nice because we get to interact with different parts of the stack as if they were file systems. When you load up the provider, you’ll get a SQLSERVER drive that you can do a directory lookup on.

[Screenshot: directory listing of the SQLSERVER:\ drive]

[Screenshot: the XEvent folder in the SQLSERVER:\ drive]

Notice how we have an XEvent folder? This is how we can start exploring our extended events, and the lookups follow the standard provider path format. If we drill down, we start to see lookups that match what we see in Management Studio. Looking at SQLSERVER:\XEvent\PICARD\Default\sessions will give us a quick listing of all the sessions on that instance and whether they’re running or not.
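One way to sketch that lookup (using the IsRunning property we’ll poke at below):

dir SQLSERVER:\XEvent\PICARD\Default\sessions | Select-Object Name, IsRunning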

[Screenshot: XEvent sessions listing with running state]

Thinking Objectively

As we further explore this, we need to remember that everything in Powershell is an object. With the SQLPS provider, this specifically means everything is an SMO object. By leveraging the trusty Get-Member cmdlet, we can quickly find out that we are dealing with objects from the Microsoft.SqlServer.Management.XEvent namespace, specifically the Session class. So what can we do with this?

First off, I found a bit of a gotcha when trying to assign these sessions to objects in Powershell. Normally, I’d just use Get-ChildItem on the name to populate the variable, but when I tried to do that I ended up with a string object. Not very useful. In order to get the proper assignment, I had to work around it a bit and ended up with this:

#Filtering with Where-Object gives us a proper Session object
$xe = dir | Where-Object {$_.Name -eq 'system_health'}
$xe | Get-Member

Why this is different from the usual ‘dir system_health’ is unknown, but it’s a result of how the SQL Server team implemented the provider (yes, another post for another time). However, now we have an object and can start leveraging some of its properties and methods. Here are a couple of examples of what you can look at:

[Screenshot: output of the property and method calls below]

The calls I used for the above are:

$xe.Targets                      #List the session's targets
$xe.Stop()                       #Stop the session
$xe.IsRunning                    #False
$xe.Start()                      #Start it back up
$xe.IsRunning                    #True
$xe.ScriptCreate().GetScript()   #Generate the CREATE script

You’ll notice at least one funky bit of syntax here around scripting. While most SMO objects give you a .Script() method, these objects require you to first call a .ScriptCreate() (or alter or drop) method, which returns an ISfcScript object. You then have to call that object’s .GetScript() method to actually generate your script. It’s a little bit of a long way around the horn, but you get there.

Practical Application

What can we do with this? I’ll be the first to admit that some of this is clunky, but much of that is probably my general unfamiliarity with this aspect of the provider and extended events in general. However, I can definitely start to see some specific use cases, such as:

  • Backing up existing custom sessions across my environment by scripting them out with a scheduled job.
  • Checking that all system_health sessions on my SQL Servers are running (sketched below).
  • Executing a mass change to my extended events sessions using the methods.
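As a sketch of that second item (instance names hypothetical):

foreach($server in @('PICARD','RIKER')){
  $xe = Get-ChildItem "SQLSERVER:\XEvent\$server\Default\sessions" |
    Where-Object {$_.Name -eq 'system_health'}
  if(-not $xe.IsRunning){$xe.Start()}
}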

To effectively leverage Powershell with extended events, we need to play to the language’s strengths. The focus here is on multi-server execution and scripting for consistency. This is also just a start. As you (and I) continue to explore this facet of the SQL Server provider, many other use cases and situations may show up.

Thanks go out to Jes Borland (@grrl_geek) for hosting this month’s T-SQL Tuesday. Remember to watch the #tsql2sday tag for other great blog posts.

The Cult of Automation

Last week, while perusing the usual interesting links from Twitter, I came across an article from Kendra Little (@Kendra_Little) titled “Not Everything Should be Automated”. It’s a brief post where Kendra talks about why she has stopped automating some of her tasks. The title is attention grabbing (well, it grabbed MY attention), but there’s a core debate hidden in here that I want to take sides on.

If You Liked It, You Should Have Wrote a Script For It

I often hear “you can’t automate everything”. As a card-carrying member of the Cult of Automation (caps mine), I usually respond with “why not?” Many of my daily tasks can and should be automated. When doing these tasks manually, I expose the work to risk: skipping a step, flubbing syntax, or some other human error. The cost of these errors is compounded by the fact that it often takes a person longer to execute a process manually than if it were automated. This all adds up and can heavily impact the value you bring to your employer.

This is, after all, why GUIs were created. By providing that interface, we give users a method to execute tasks with mistake protection built in, as well as a speedy way to complete the work. This is the conversation I end up having with folks who want to use the SQL Server Management Studio GUI to create databases: they can do it faster than typing out that tedious CREATE DATABASE statement. The benefits are fewer errors and faster time to completion.

Automating a process isn’t really that different, because you’re wrapping up an action in a script so it executes the same way every time. The goals are the same: reduce errors and increase speed. It’s just now, instead of providing a GUI for interaction, we’re scripting out all the actions and getting the heck out of the way. The result is repeatable, consistent action.

Why wouldn’t you automate then? To me, it seems a no brainer to script something out if you do it more than once. When you do that, you reduce your errors while increasing the speed of execution. This is the message of my Cult and one I would hope more people subscribe to.

How do you get to Senior DBA? Practice, Practice, Practice

Let’s go back to Kendra’s post, though. Her message is that she forces herself to type out commands instead of scripting or snippeting them because she wants to make sure she knows and understands those commands. It’s a matter of mastering the syntax. If you automate a task or put it behind some sort of interface, you can quickly lose touch with how to write it. You might forget the options or the exact syntax, which means your skills can get rusty.

As a musician, I know and understand the value of practice. When preparing for a concert, I often rehearse a section of music dozens of times in order to commit it to muscle memory. This means when it’s time to perform, my brain doesn’t get crossed up trying to remember what the notes were and I can focus on making music. Code practice is no different, because when it gets to crunch time, you shouldn’t be flailing with how to do a point in time restore because you can’t remember how a command is written.

This all sounds fine in theory, but let’s talk use case. Restore testing is a perfect example, because it’s something that will commonly be automated. I’ve written my own scripts to aid this and will often set up a regular job to select a database, restore it to a target server, run a DBCC check, and catalog a report of the process for later review. Manually testing all my databases would take way too long, and if I were doing that, I couldn’t work on any of the cool projects. This makes automation a perfect and necessary solution.
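That automation isn’t anything exotic. A stripped-down sketch of the restore-and-check step, using the Restore-SqlDatabase and Invoke-Sqlcmd cmdlets from SQLPS (all names here are hypothetical, not my actual scripts), might look like this:

#Restore the latest backup to a test database, then run a consistency check
Restore-SqlDatabase -ServerInstance 'TESTSRV' -Database 'RestoreCheck' `
  -BackupFile '\\backupshare\prod\MyDB.bak' -ReplaceDatabase
Invoke-Sqlcmd -ServerInstance 'TESTSRV' -Query 'DBCC CHECKDB(RestoreCheck) WITH NO_INFOMSGS;'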

However, I still do regular manual point in time restores. This is because I need the practice. I won’t do all my databases and I won’t do it every day (usually once a month), but I still do it. It keeps my skills fresh. It applies for other tasks as well. I have many automated processes, but I will occasionally bypass the automation so I can get my reps.

This takes us back to the GUI discussion. Many of the tasks in SQL Server we could do graphically through SSMS, but more experienced DBAs eschew that in favor of typing the script. We’ve all used the learning tool of scripting out an SSMS action. To script is to understand, at the core, what is happening in SQL Server, and it gives us better knowledge of how to manage it. I remind many DBAs I’ve talked to that pretty much everything in SQL Server is a T-SQL command of some sort, so it behooves you to understand the relationship between the action and the syntax.

The Bone of Contention

Now we get to my issue with Kendra’s post. I don’t disagree with any of her statements, but I disagree with the tone set by the title. We’re not arguing against automation here; we’re championing practice. You should still try to automate as many tasks as possible. Just don’t lose sight of the skills you need to build that automation. It’s very difficult to write a script if you don’t know and understand the actions you need to script. Worse yet, if someone hands you a script and you run it without understanding how it works, you could do lasting damage to your environment.

Practice makes perfect, and perfect makes automation. You should build and practice your skills, understanding what makes the platform tick. Once you do that, you can script it. Once you script it, you can start practicing your next automation trick. Just don’t lose sight of the skills that got you this far.

Your #Powershell Profile

Keeping with the theme of my last blog post, let’s talk a little more about Powershell basics. More specifically, let’s talk about the basics of reusing your Powershell scripts. Most DBAs have a library of scripts that they take with them from job to job, helping them be more effective in their work. Whether it’s SQL, Powershell, or some other language, experienced data professionals always make sure to retain their work for when they’ll need it again.

Enter the profile. This is actually a concept that’s been around for a long, long while in the *nix world. The concept is simple: There is a script that runs every time you start up a session in your shell, configuring your environment. Many times in *nix, this will configure system variables like PATH or HOME, but you can also use it to load custom functions and aliases within your shell.

The Microsoft team borrowed this concept when constructing the Powershell language. We have access to a profile (well, four of them) that helps us configure and customize our own environments. Similar to *nix, it’s a .ps1 script that runs every time you open a new session. This allows you a lot of flexibility for reusing your code.
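If you’re curious, the four profile paths hang off the $profile variable as note properties, so you can list them right from the shell:

$profile | Get-Member -MemberType NoteProperty

#Or reference one directly
$profile.CurrentUserCurrentHost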

Getting Started

As linked above, there are 4 different types of profiles you can use. We won’t focus on the different profiles here, but instead stay with the basic one: the profile that applies to the current user and the current host (in these examples, the ISE). To view your profile, open it in your text editor of choice by referencing the $profile system variable:

ise $profile

If you don’t currently have a profile, you’ll probably get an error about the file not existing. To fix this, create the file:

New-Item -ItemType File -Path $profile -Force

Voila! You now have a completely empty profile, which is what you will see when you open it again.

So Now What?

I can hear the internal dialog now:

“Woo hoo! A profile!”

“Ummm…yeah. That’s great. But what do we do with it?”

“…….”

“That’s what I was thinking, too.”

I don’t want to leave you hanging, so let’s start making use of it. The first order of business is to use the profile to customize your environment. Since I am a SQL DBA, I like to load up the SQLPS module and the SMO objects. This means any time I’m in Powershell, I’ve got the SQL Server cmdlets and objects ready to go.

Next up, I have a function I wrote a while back for reporting on the available disk space on a server. It’s very handy and I use it almost every day. The code is not original, but I wrapped it in a function to save myself a LOT of typing. Usually you’d package such a function in a module; I ended up putting it in my profile because it was easy. I also add an alias to save myself even more typing.

This means I end up with a profile that looks a lot like this:

#Load SMO library
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null

#Load SQLPS module
Import-Module SQLPS -DisableNameChecking

#Set aliases
New-Alias -Name gfs -Value Get-FreeSpace

function Get-FreeSpace{
<#
.SYNOPSIS
Uses WMI to get capacity and freespace for all disks/mounts on a host.

.DESCRIPTION
Uses WMI Win32_Volume to query logical disks and provide drive size and usage for all
logical disks and mountpoints. If no parameter is given, localhost is used. Otherwise
the host name should be passed.

Mike Fal (http://www.mikefal.net) 2012-10-10

.PARAMETER hostname
Name of the machine to query; defaults to localhost.

.EXAMPLE
Get-FreeSpace "CCX-SQL-PRD-01"
#>

param([string] $hostname = ($env:COMPUTERNAME))

#Query local (type 3) volumes and format size, free space, and percent free
Get-WmiObject Win32_Volume -ComputerName $hostname | Where-Object {$_.DriveType -eq 3} |
  Sort-Object Name |
  Format-Table Name,
    @{l='Size(GB)';e={($_.Capacity/1gb).ToString('F2')}},
    @{l='Free Space(GB)';e={($_.FreeSpace/1gb).ToString('F2')}},
    @{l='% Free';e={(($_.FreeSpace/$_.Capacity)*100).ToString('F2')}}

}

Now if I had just added this to my profile and wanted to load it into the current session, I’d just have to execute it, like so:

. $profile

This loads the profile, running it as if it were any other .ps1 script (because it is).

The Power of the Profile

Using this, I have a customized Powershell environment ready to go any time I start. It lets me use Powershell the way I want to, allowing me to extend my shell with my own custom code or pre-packaged modules that I need access to. I’ve talked several times before about building Powershell tools, but at some point we all need a toolbox. By leveraging the profile, you can start building your own custom Powershell toolbox that you can rely on.

Want to know more? You can learn about Powershell profiles right from the shell itself: Get-Help about_profiles. How cool is that?