
Your #Powershell Profile

Keeping with the theme of my last blog post, let's talk a little more about Powershell basics. More specifically, let's talk about the basics of reusing your Powershell scripts. Most DBAs have a library of scripts that they take from job to job to help them be more effective. Whether it's SQL, Powershell, or some other language, experienced data professionals always make sure to retain their work for when they'll need it again.

Enter the profile. This is actually a concept that’s been around for a long, long while in the *nix world. The concept is simple: There is a script that runs every time you start up a session in your shell, configuring your environment. Many times in *nix, this will configure system variables like PATH or HOME, but you can also use it to load custom functions and aliases within your shell.

The Microsoft team borrowed this concept when constructing the Powershell language. We have access to a profile (well, 4 of them) that helps us configure and customize our own environments. Similar to *nix, it's a .ps1 script that runs every time you open a new session. This gives you a lot of flexibility for reusing your code.

Getting Started

As linked above, there are 4 different types of profiles you can use. We won’t focus on these different profiles, but instead stay with the basic profile: the one that applies to the current user and the current host (in these examples, the ISE). To view your profile, open it up in your text editor of choice by referencing the $profile system variable:

ise $profile

If you don’t currently have a profile, you’ll probably get an error about the file not existing. To fix this, create the file:

New-Item -ItemType File -Path $profile -Force

Voila! You now have a completely empty profile, which is what you will see when you open it again.
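
If you want to make this repeatable (say, in a setup script for a new machine), you can guard the creation so it only happens when the file is missing. A minimal sketch:

#Create the profile only if it does not already exist
if(-not (Test-Path $profile)){
    New-Item -ItemType File -Path $profile -Force | Out-Null
}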

So Now What?

I can hear the internal dialog now:

“Woo hoo! A profile!”

“Ummm…yeah. That’s great. But what do we do with it?”

“…….”

“That’s what I was thinking, too.”

I don't want to leave you hanging, so let's start making use of it. The first order of business with the profile is to use it for customizing your environment. Since I am a SQL DBA, I like to load up the SQLPS module and the SMO objects. This means any time I'm in Powershell, I've got the SQL Server cmdlets and objects ready to go.

Next up, I have a function I wrote a while back for reporting on the available disk space on a server. It's very handy and I use it almost every day. The code is not original, but I wrapped it in a function to save myself a LOT of typing. Usually, you'd package such a function in a module; I ended up putting it in my profile because it was easy. I also add an alias to save myself even more typing.

This means I end up with a profile that looks a lot like this:

#load SMO library
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

#Load SQLPS Module
Import-Module SQLPS -DisableNameChecking

#Set Aliases
New-Alias -Name gfs -Value Get-FreeSpace

function Get-FreeSpace{
<#
.SYNOPSIS
Uses WMI to get capacity and freespace for all disks/mounts on a host.

.DESCRIPTION
Uses WMI Win32_Volume to query logical disks and provide drive size and usage for all
logical disks and mountpoints.  If no parameter is given, localhost is used.  Otherwise
the host name should be passed.

Mike Fal (http://www.mikefal.net) 2012-10-10

.PARAMETER hostname
Name of the machine to query; defaults to the local machine ($env:COMPUTERNAME)

.EXAMPLE
Get-FreeSpace "CCX-SQL-PRD-01"
#>

param([string] $hostname = ($env:COMPUTERNAME))
Get-WmiObject Win32_Volume -ComputerName $hostname |
    Where-Object {$_.DriveType -eq 3} |
    Sort-Object Name |
    Format-Table Name,
        @{l="Size(GB)";e={($_.Capacity/1gb).ToString("F2")}},
        @{l="Free Space(GB)";e={($_.FreeSpace/1gb).ToString("F2")}},
        @{l="% Free";e={(($_.FreeSpace/$_.Capacity)*100).ToString("F2")}}

}

Now if I had just added this to my profile and wanted to load it into the current session, I'd just have to dot-source it, like so:

. $profile

This loads the profile, running it as if it were any other .ps1 script (because it is).

The Power of the Profile

Using this, I have a customized Powershell environment ready to go any time I start. It lets me use Powershell the way I want to, allowing me to extend my shell with my own custom code or pre-packaged modules that I need access to. I’ve talked several times before about building Powershell tools, but at some point we all need a toolbox. By leveraging the profile, you can start building your own custom Powershell toolbox that you can rely on.

Want to know more? You can learn about Powershell profiles right from the shell itself: Get-Help about_profiles. How cool is that?

I’ll Just Leave This Here…

Last year you might have heard about a dust-up around PASS voter eligibility. I got pretty vocal about it, but in the end PASS did the right thing. Now PASS is trying to do the right thing again by getting ahead of the issue and notifying the community to check their eligibility.

Here's the deal. You need to update your PASS profile and make sure some mandatory fields are filled in. You can check whether you're eligible on your profile page on the sqlpass.org website. You have until June 1 to do this.

But why wait?  Do it now. It won’t take long and helps both you and PASS maintain the integrity of the voting process.

Read more about it on the PASS website.

Exploring #Powershell

Over the past months, I’ve devoted a lot of time to scripts and tips on using Powershell. While helpful, these articles assume a level of familiarity with Powershell that a lot of SQL Server administrators don’t have. I wanted to take some time to help out those who are looking for a place to start.

The great thing about Powershell is that the Microsoft team has put a lot of effort into building cmdlets that allow you to discover how to use the language. The term 'cmdlets' was invented by the Powershell team to help make the language distinct, but they're essentially functions. Three cmdlets, in particular, will help you learn the language and discover how you can use it:

  • Get-Command
  • Get-Help
  • Get-Member

I like to refer to these cmdlets with the mildly cheesy name "The Holy Trinity of Powershell Self Discovery". Yes, that's a mouthful. With such a clunky name, though, it is hard to forget about them. Because of the information they provide, not a day goes by that I don't use one of these cmdlets while working with Powershell.

Get-Command

The first of these three, Get-Command, is the cmdlet that lets you browse the cmdlet ‘directory’. While there are several arguments we can pass to this cmdlet, really all you should concern yourself with at the beginning is passing wildcard searches for cmdlets you might be looking for. If you keep in mind that there’s a defined list of Powershell verbs, Get-Command becomes a good way to find cmdlets.

For example, let's say we wanted to find a cmdlet to add a Windows firewall rule. The best way to start is to simply filter all the cmdlets containing the word "firewall":

Get-Command *firewall*

Quickly, you'll get a list of 30 functions for manipulating different parts of the firewall. A lot easier to read than the full list of 1000+ Powershell cmdlets. But we want to filter our list down even more, so since we know we want a new rule, we can refine our wildcard search:

Get-Command New*firewall*rule

Our result is a single cmdlet, New-NetFirewallRule. Remember that anytime we create something in Powershell, the standard syntax will be "New-" plus the noun. We add this into our wildcard search, plus the word "rule", to filter down further. Combine all of this into how we use Get-Command and we have a powerful way to search the language.
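
If you'd rather not build the wildcard string yourself, Get-Command also exposes -Verb and -Noun parameters. A quick sketch that should find the same cmdlet:

#The same search using Get-Command's -Verb and -Noun parameters
Get-Command -Verb New -Noun *Firewall*Rule*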

Get-Help

Once you find your cmdlet, though, how do you learn how to use it? Fortunately for us, the Powershell team took a page from the *nix world. Anyone who has worked in Unix or Linux has probably made heavy use of the 'man' pages, a built-in help system that describes shell commands. Powershell has its own system, accessed using Get-Help (conveniently aliased in Powershell as 'man').

Using our previous example, let’s learn about the New-NetFirewallRule cmdlet. The syntax is easy enough:

Get-Help New-NetFirewallRule

Running this will output text to the host window that describes the cmdlet. By default, we'll see a synopsis, a description, and an outline of the parameters the cmdlet accepts. Extremely useful, but this is just scratching the surface. Get-Help will also accept additional arguments that give you even more detail or functionality if you want it:

  • -Examples – Examples of how the cmdlet gets used
  • -Detailed – Even more information on how the cmdlet functions
  • -Full – The entire help file for the cmdlet in question
  • -ShowWindow – Output the full help file to a separate window
  • -Online – Open up the TechNet page for the cmdlet
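
For example, to jump straight to usage examples for the cmdlet we found earlier:

Get-Help New-NetFirewallRule -Examples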

But wait, there's more! Two more awesome things about the Powershell help system: you can find help on more than just cmdlets, and it is constantly updated. Powershell contains a list of topics that describe how to use the language itself. These are all found with the prefix "about_" and you can get a listing of them using:

Get-Help about*

If you want to learn more about how to use variables, for example, you can look at the about_Variables topic and read the specifics about using Powershell variables. There are over 100 different topics in the help system for Powershell 4.0, baking your learning right into the language.

Of course, this information can change and update frequently. To keep pace with that rate of change, the Powershell team separated the help system from your standard Windows updates and made it updatable online. This means if you ever think your help system is out of date, you only have to run the Update-Help cmdlet while connected to the internet and your local help system will get all the current information.
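
Running the update is a one-liner, though note it typically needs an elevated session and internet access:

#Refresh the local help files from the internet (usually requires elevation)
Update-Help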

Get-Member

One of the strongest advantages of Powershell is that it is built on .Net, which means that everything in Powershell is a .Net object. To explore these objects and what they have to offer, you can use the Get-Member cmdlet to provide information about any object.

The use is pretty simple. Take your variable and pass it via the pipeline to Get-Member:

[string]$AString = 'Tea. Earl Grey. Hot.'
$AString | Get-Member

What will follow is a list of the object type, methods, and properties you can leverage with that object. You get a comprehensive view of all the capabilities of the object.

If we go back a few posts, you can read how I compared sqlcmd with Invoke-SqlCmd. Get-Member was key to this comparison, because it allowed me to explore the outputs of both:

$query = 'SELECT type,count(1) FROM sys.objects GROUP BY type;'
$cmdout = sqlcmd -Q $query
$invout = Invoke-Sqlcmd -Query $query

$cmdout | Get-Member
$invout | Get-Member

Comparing these two outputs shows us a lot about the difference between the two commands. We can see that the sqlcmd output is an array of System.Strings and the Invoke-SqlCmd output is a collection of System.Data.DataRow. For more on the importance of this difference, read my other blog post. The key is to understand how you can explore and discover this difference using Get-Member.

A Compass and a Map

Understanding Powershell is a long road. There’s a lot of depth to it, providing a great deal of functionality. It can be easy to get lost in the details. While we live in the day and age of LMGTFY, it still helps to have some basic tools at our disposal to find our way through something new. Get-Command, Get-Help, and Get-Member are three cmdlets you will always have access to, helping you find your way to solving problems in Powershell.

#Powershell in Atlanta

SQL Saturday Atlanta is coming up in two weeks and I’ll be giving my Introduction to Powershell for SQL Server DBAs precon the day before the event. I’m extremely excited to offer this precon and get more SQL Server professionals exposed to Powershell. Think you might be interested? Here’s the synopsis:

Maybe you’ve only heard of Powershell. Maybe you’ve seen a little bit of Powershell code, but still don’t understand how it works. You’ve been telling yourself for months now that you need to learn it. Why wait? The time to get started with Powershell is now.

This full day session will help build your foundation for learning and using Powershell. While we will be focusing on using Powershell as a SQL Server Database Administrator (or Developer), much of the material will also review general use for system administrators. By attending this training, you will gain an understanding of what Powershell is, how you can use it in your day to day management of your environments, and what specific things can be done using Powershell in a SQL Server environment.

You can sign up on Eventbrite and use the discount code ‘Mike_Fal’ to save $15 off the registration fee.  The session will be held at the Georgia State University Alpharetta Center on May 15 from 9am to 5pm.

I hope to see you there!

 

Availability Groups, Agent Jobs, and #Powershell

My current gig has me working a lot with Availability Groups. I love the options they give me, with their ability to fail over quickly and replicate my data to other servers. That's a lot of peace of mind for high availability and data protection. The price of all this is a much more complex system to manage and the extra considerations needed to properly deploy an Availability Group.

One such consideration is your SQL Agent Jobs. It is pretty well documented that your server level objects must be replicated or created outside of the actual Availability Group process. It’s not difficult, but must be thought about and addressed when setting up a server. If you source control your management jobs, then it should be a simple matter of executing a script to create these jobs on a new server. There are cases, though, where either jobs get out of sync or are created outside of the source control process. When this happens, administrators need to copy these jobs to the other servers.

The common way to accomplish this is to simply script out the job in Management Studio, then run the script on the other nodes. While effective, this is also manual and prone to error, such as forgetting a server.

By automating this process, we can ensure consistency. You can use Powershell to leverage the SMO and its .Script() method to duplicate your Management Studio action, and by incorporating that into a script we make the process repeatable. The following function is a pretty basic use of this that allows you to copy a single job from a source server to any number of targets:

function Copy-SQLAgentJob{
    param([string]$Source
        ,[string[]]$Targets
        ,[string]$JobName)

    #Load assemblies
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null

    #Script the job from the source server
    $smosource = New-Object Microsoft.SqlServer.Management.Smo.Server $Source
    $JobScript = $smosource.JobServer.Jobs[$JobName].Script()

    #Create the job on each target that does not already have it
    foreach($Target in $Targets){
        $smotarget = New-Object Microsoft.SqlServer.Management.Smo.Server $Target
        if($smotarget.JobServer.Jobs.Name -notcontains $JobName){
            $smotarget.Databases['msdb'].ExecuteNonQuery($JobScript)
        }
    }
}

All this function does is find the source job, script it out, and then run that job creation script against all the other targets. Simple enough. One thing to call out is that I've specifically written this function to stay completely within the SMO. Normally, I would use Invoke-SqlCmd to execute the SQL script, but I thought it simpler to use the SMO .ExecuteNonQuery() method (which runs a SQL batch and does not expect a return).
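
If you did want the Invoke-SqlCmd route instead, a minimal sketch inside the function's foreach loop (using the same variables) would run each scripted batch in turn, since .Script() returns a StringCollection:

#Hedged alternative to ExecuteNonQuery: run each scripted batch with Invoke-SqlCmd
foreach($batch in $JobScript){
    Invoke-Sqlcmd -ServerInstance $Target -Database 'msdb' -Query $batch
}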

Now let’s implement this for a hypothetical Availability Group to copy from one node to all the others:

#Get-Cluster and Get-ClusterNode come from the FailoverClusters module
$PrimaryNode = 'PICARD'
$nodes = (Get-ClusterNode -Cluster (Get-Cluster $PrimaryNode).Name | Where-Object {$_.Name -ne $PrimaryNode}).Name
Copy-SQLAgentJob -Source $PrimaryNode -Targets $nodes -JobName 'Backup Databases - Powershell'

Of course, we could also use this in a much simpler fashion, such as deploying a new job across all of our servers. Remember the Central Management Server trick I’ve implemented in the past? We can fold that in here for an easy deployment once we’ve created the job on a single server in our environment.

$CMS = 'PICARD'
$servers = @((dir "SQLSERVER:\SQLRegistration\Central Management Server Group\$CMS").Name)
$servers += $CMS

Copy-SQLAgentJob -Source $CMS -Targets $Servers -JobName 'Backup Databases - Powershell'

As always, the lesson learned here is to leverage our tools to make our work more repeatable and consistent. While everything we have done here can be done by other methods, I prefer this because it minimizes the interaction needed to accomplish the task. The more any of us have to touch a process to get work done, the greater our risk of error. You've heard it before: consistency through automation, this time for SQL Agent jobs.

 

#TSQL2SDAY 65 – TIL Edition: Whaddya mean there’s no endpoint?

This month's T-SQL Tuesday is from Doc Mike Donnelly (@SQLMD) with a simple premise: show us something you learned. Considering that working in the information technology field means we are always learning (or at least should be), blogging on this should be a snap. It's a great fit for the #SQLNewBlogger challenge if you're in on it, or just a nice reason to write if you're not.

My tidbit is somewhat Powershell related, but touches on the general setup of Availability Groups. If you read my earlier short series on automating SQL Server builds, I showed how you can build an Availability Group using Powershell. When I was testing it, the process worked just fine for building the Availability Group, but I ran into problems when adding databases. When I went through the troubleshooting, I discovered that while you specify an endpoint in the Powershell cmdlet call, that cmdlet doesn't actually create said endpoint. It just records the endpoint as part of the config.

This should not really come as a surprise. If you ever use the wizard in SQL Server Management Studio to create an Availability Group, it will script out commands to create the proper endpoint. I assumed that the Powershell cmdlet would work similarly to the wizard, but it does not. This means I will have to add a step in my process to actually create the endpoint.

There are two ways to do this. The first is to simply use T-SQL in my script and pass it to either Invoke-SqlCmd or sqlcmd:

$endpointsql = @"
CREATE ENDPOINT [HADR_endpoint]
STATE=STARTED
AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
FOR DATABASE_MIRRORING (ROLE = ALL, AUTHENTICATION = WINDOWS NEGOTIATE
, ENCRYPTION = REQUIRED ALGORITHM AES);
"@

Invoke-SqlCmd -ServerInstance 'NODE1' -Database 'master' -Query $endpointsql

The second option is to use the SMO to create an endpoint:

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$smonode = New-Object Microsoft.SqlServer.Management.Smo.Server 'NODE1'

#create endpoint
if($smonode.Endpoints.Name -notcontains 'HADR_endpoint'){
    $EndPoint = New-Object Microsoft.SqlServer.Management.Smo.Endpoint($smonode, 'HADR_endpoint')
    $EndPoint.EndpointType = 'DatabaseMirroring'
    $EndPoint.ProtocolType = 'Tcp'
    $EndPoint.Protocol.Tcp.ListenerPort = 5022
    $EndPoint.Payload.DatabaseMirroring.ServerMirroringRole = 'All'
    $EndPoint.Payload.DatabaseMirroring.EndpointEncryption = 'Required'
    $EndPoint.Payload.DatabaseMirroring.EndpointEncryptionAlgorithm ='Aes'
    $EndPoint.Create()
    $EndPoint.Start()
}

The choice between these options is down to personal comfort; either one would be acceptable in my eyes. I lean towards the SMO method just because I'm likely to be using the SMO objects for other operations in a script.

The lesson here is the old line about assumptions being the mother of all screwups, and why you should always understand what your tools are doing. If you get stuck using wizards all the time, you will not see what is happening under the covers. Using a tool is fine, even necessary, but you should always make sure you understand the mechanics of those tools in case something does not work as expected.

How can you learn those mechanics? Almost every action in SSMS can be scripted out into T-SQL. This key learning tool is something I always stress to new SQL Server folks. It is the easiest way to see what SQL Server is trying to do and helps build your skills. The next time you’re trying to do something in Management Studio, study the script, and teach yourself something new.

Update:

So there's a third option and I'm mildly annoyed at myself for missing it. As Nic Cain (@SirSQL) points out in the comments, Microsoft provides cmdlets for endpoints. In keeping with the examples above, here's how you'd use them:

$endpoint = New-SqlHadrEndpoint HADR_Endpoint -Port 5022 -Path SQLSERVER:\SQL\NODE1\DEFAULT
Set-SqlHadrEndpoint -InputObject $endpoint -State "Started"

A LOT simpler than using the SMO, and it remains consistent with the rest of the script. Probably the better way to go. Thanks Nic!

The PASS Summit 2015 Call for Speakers

Here we are, the annual community tradition of submitting sessions to the PASS Summit. This is the big time, the real deal, the event everyone wants to be seen at (as far as SQL Server goes). We all have hopes and dreams of standing on that stage, sharing what we love about SQL Server with the greater community.  Now that the submission deadline has passed, I wanted to post a few thoughts I had on this year’s process.

And yes, I want to do it before the OTHER annual community tradition of “bitching about why I didn’t get selected” starts.

Too Many Topics

The first thing that struck me was the excessive number of topics in each track. I didn't count them all, but it was a little overwhelming. What made the process especially difficult was the overlap between subcategories, which made it confusing to pick one. This is especially a challenge for me with Powershell submissions, because many of the abstracts I created could have easily gone under two or three subcategories.

One example is my abstract on automating SQL Server builds with Powershell. Now, there’s a Powershell/SMO subcategory and one for Installing/Upgrading SQL Server. Which do I choose? I went with Powershell because that’s the focus of my session, but it might have been more applicable to the other.  This is a problem.

Suggestion: Either PASS needs to cull the overall list of topics or allow you to select multiple topics for your abstract.  Personally, I’d prefer the former.

Too Many Submissions

We have a good problem in the SQL Server community: tons of great speaking talent. There are lots of folks at the local and regional level who give great presentations at local user groups and SQL Saturdays. There's also a fair amount of mediocre talent as well. However, all of these speakers can submit to Summit, regardless of their experience level. This places an overwhelming burden of work on the program committee.

I don’t want to make speaking at the Summit an exclusive club. I think all community members should have the opportunity to present at the Summit, but we need to balance that with consideration for the program committee volunteers and trying to provide the best product for Summit attendees. As such, the process should focus on standards that encourage both active community participation and a focus on public speaking.

Suggestion: PASS needs to start placing restrictions on who can and can’t submit. They already do this for the precons. My initial thought is a speaker should have four public SQL speaking events in the past calendar year. With opportunities like local user groups, SQL Saturdays, and virtual chapters, this is not a difficult number to hit.  It ensures that the speaker is an active community participant, someone who is contributing and practicing their craft.

We need more faces

In general, this isn't a big deal, but we do have many popular speakers getting multiple sessions at the Summit. I understand these folks are a draw and will put butts in seats, but the reality is that every person who gets doubled up means one less speaker who will be selected. Again, considering the depth of speaking talent in the PASS community, this strikes me as problematic in general and a wasted opportunity to increase the variety of speakers at the Summit.

Suggestion: All speakers are limited to a single session, no exceptions.

More Transparency Around the Selection

I know this is a constant refrain regarding PASS, but it is needed. This is not me piling on the organization for being closed and secretive, but simply reminding them that we still need to know more about what’s going on. I’ve been paying attention to the selection process for the past two years and it IS getting better. I like how open PASS has been with how the program committee works.

The challenge is that the program volunteers don’t make the actual selections. They make recommendations that are passed to the selection committee. The selection committee then performs “levelling” and uses that to select sessions. This is the black box, because I don’t know what the levelling process entails and how things are judged. I’m not advocating some grand conspiracy theory around who gets selected, but there will always be questions and criticism until we are better informed about why.

Suggestion: PASS needs to continue what they have been doing and communicate more about the process. As I said, I recognize that the process of improvement is ongoing and it is getting better, but it can’t stop. The community needs communication, especially as we grow larger and larger.

Videos Don’t Add Value

I understand that PASS is trying to gather as much information about a speaker as possible. Videos are not the way to do it. There are too many issues with recording quality and software for this to be a reliable method. Additionally, many videos won't actually display a candidate's public speaking skills, only their ability to sit down and chat in front of a camera. These combine to make video recordings more subjective than objective, and we need to focus on objective quantifiers.

Suggestion: Just ditch the video submission.

You Don’t Deserve To Speak At The Summit

This one is directed at the entire community, the people submitting. Every year I see the same tweets and posts griping about how certain people did not get selected or others did, and how the process is horribly broken. To be fair, when you craft abstracts and then wait two to three months for a response, the resulting rejection is disappointing.

I think, because of all the other opportunities in the community, being rejected from a speaking event is a bit of a shock. The reality is that competition is tight for the Summit and speaking slots are limited. Getting turned down really is not that surprising. What bothers me most is that I see an attitude of entitlement from some of community members concerning presenting at the Summit. None of us are owed a speaking slot and the burden is on each individual speaker to prove their worth.

I view speaking at the Summit a lot like free agency in a professional sport. There’s a lot of competition to get noticed and a lot of reward if you do. Some people have an edge because they’re really good at what they do and demonstrate that on the public stage. Some others are hidden gems, waiting for their opportunity to get noticed. However, the market is driven by demand and selections are made based on that demand, not by attribution or “right”. We as speakers should always be striving to prove ourselves to the community and our selection should be based on ability, presence, and contribution, all of which will drive up demand.

Suggestion: Chill out if you don't get selected. Focus on what you can do locally and regionally, through your own user group and SQL Saturdays. The Summit is awesome and I love going, but the truth is I have MUCH more of an impact speaking at smaller events. I encourage all community members to have a presence at these events, to focus on what they can do in their city. Build your community resume and skill set so that when you do submit to the Summit, you'll be better prepared when you make it to that stage.

In Conclusion…

I've already shared some of this with some members of the PASS board privately, but I wanted to share it publicly as well. This is our community, and we all have the ability to effect this change by being involved and reaching out. What has always impressed me about PASS is how grassroots it is and how members feed off each other. We need to focus on that, because it is what makes our community stronger than any other technical community out there.

Effectively Using Invoke-SqlCmd

I really enjoy interacting with the SQL community on Twitter. Whether it is exchanging silly one-liners, sharing interesting articles and blog posts, or trying to answer a tricky problem, the conversation is always engaging. One such conversation last week involved a community member who was struggling with the output he wanted from Invoke-SqlCmd. I was able to help him out, but the conversation got me thinking about how people use this cmdlet within their scripts and processes.

The focal point of the confusion is how Invoke-SqlCmd relates to sqlcmd.exe. Sqlcmd has been around since SQL 2005, replacing the antiquated osql executable. Many administrators have gotten used to the robustness of sqlcmd and how they can leverage it for their backend automation. I’ve used sqlcmd for many tasks over the years.

Then along came Powershell and Invoke-SqlCmd. This cmdlet, included with the SQL Server provider, is intended to cover a lot of the functionality that an administrator can find in sqlcmd, but now as part of Powershell. What people get hung up on is that Invoke-SqlCmd is not a direct replacement of sqlcmd and that it can’t (and shouldn’t) be used in the same way.

Thinking Differently

The challenge when thinking about sqlcmd versus Invoke-SqlCmd focuses on outputs. One of the cardinal rules of Powershell is that everything is an object. When comparing sqlcmd with Invoke-SqlCmd, this difference becomes quite stark. First, let’s look at sqlcmd’s output, capturing it into a variable and then examining the types that make up the output:

[Screenshot: sqlcmd output]

Using the GetType() method, we can see that the collected output of sqlcmd is an array, with each element of the array being a string. This means that each line of output from sqlcmd (including the header and the row count output) is a single string, with all column elements concatenated into that string. As you can imagine, this is clumsy to handle and would require a lot of ugly string parsing to separate out the elements.

Now let’s look at the same command using Invoke-Sqlcmd:

[Screenshot: Invoke-SqlCmd output]

Right off the bat, the output is noticeably different. Our total output object is now a DataRow, with the individual column being an Int32 type. You'll note that there is no item count, because we have a singular object as output (which has no count or length properties). If we investigate the output further using Get-Member, we'll find that the columns of our query are actual properties of the output object (in this case, Column1, since we didn't name our column).
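
If you want to recreate this comparison yourself, here's a minimal sketch; it assumes a local default instance and uses a single-value query so Invoke-SqlCmd returns one row:

$cmdout = sqlcmd -Q 'SELECT COUNT(1) FROM sys.objects;'
$cmdout.GetType().FullName      #System.Object[] - an array of strings
$invout = Invoke-Sqlcmd -Query 'SELECT COUNT(1) FROM sys.objects;'
$invout.GetType().FullName      #System.Data.DataRow
$invout.Column1                 #the value, exposed as a property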

A New Perspective

This helps us because we can now start treating our outputs as data, not just a mass of text that has to be parsed. For example, let’s consider the following query:

SELECT
SERVERPROPERTY('InstanceName') as InstanceName
,SERVERPROPERTY('MachineName') as MachineName
,SERVERPROPERTY('ProductVersion') as SQLVersion
,SERVERPROPERTY('Edition') as SQLEdition
,SERVERPROPERTY('ProductLevel') as SQLReleaseLevel

If you want to make use of these values in Powershell and use sqlcmd, you would have to perform all sorts of string manipulations to separate out the values. To make it easy, we'd likely make each value a separate call to sqlcmd.exe. Using Invoke-SqlCmd changes the game, allowing you to easily access each column as a property of your output:

$sql=@"
SELECT
SERVERPROPERTY('InstanceName') as InstanceName
,SERVERPROPERTY('MachineName') as MachineName
,SERVERPROPERTY('ProductVersion') as SQLVersion
,SERVERPROPERTY('Edition') as SQLEdition
,SERVERPROPERTY('ProductLevel') as SQLReleaseLevel
"@

$sqlstats = Invoke-Sqlcmd -Query $sql

cls

'Instance: ' + $sqlstats.InstanceName
'Host: ' + $sqlstats.MachineName
'Product Info: ' + $sqlstats.SQLVersion +'('+$sqlstats.SQLReleaseLevel+') - ' + $sqlstats.SQLEdition


[Screenshot: server properties output]

This ability to reference columns as properties gets even more powerful when we deal with larger data sets in our scripts. A common technique I use is to leverage a collection of databases to drive the work I need to do. With my collection as a series of objects, each row element is easy to access and use in my processing. To see a detailed example of this, run the following commands and study the output of the Get-Member call:

$dbs = Invoke-Sqlcmd -Query 'select name,recovery_model_desc,page_verify_option_desc from sys.databases;'
$dbs | Get-Member
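
Once you have that collection, each row can drive your logic directly. A quick sketch (the recovery model check is just an example):

#Use each row's properties to drive a decision
foreach($db in $dbs | Where-Object {$_.recovery_model_desc -ne 'FULL'}){
    Write-Output "$($db.name) is using $($db.recovery_model_desc) recovery"
}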

Data Driven Decisions

Invoke-SqlCmd lets you integrate your data with your scripts. When you make your outputs discrete objects, data becomes readily accessible for your automation. It allows you to move those clunky parts of your processing out of T-SQL and into Powershell, making your scripts more robust. You can use T-SQL for what T-SQL is good at, allowing Powershell to take over where appropriate.

This does not replace sqlcmd. Another key difference is that Invoke-SqlCmd isn't interactive, where sqlcmd is. If you have need of a command line tool for using SQL Server (you're on Server Core or you need an emergency connection), sqlcmd will be your weapon of choice.

The fuzzy area is around compiled SQL scripts you might use. Both Invoke-SqlCmd and sqlcmd.exe can accept these scripts and allow you to perform sqlcmd's variable replacement for parameterizing scripts. Choosing which method you use, in my mind, comes back to outputs. If your script output is simply a matter of logging the entire execution of your script for later review, sqlcmd is a perfectly acceptable method. However, if you need to integrate your scripts into a larger process and make use of that output within the script execution, Invoke-SqlCmd could be your better option.
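
As a quick illustration of that variable replacement, both tools accept the same $(SqlVar) syntax inside the script; the file name and variable here are hypothetical:

#sqlcmd.exe passes scripting variables with -v
sqlcmd -S localhost -i .\Deploy.sql -v Environment="Prod"
#Invoke-SqlCmd passes them as an array of 'name=value' strings
Invoke-Sqlcmd -ServerInstance localhost -InputFile '.\Deploy.sql' -Variable @('Environment=Prod')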

Powershell is about options. Everyone has their own methods and patterns; the idea behind Powershell scripting is not to give you a one-size-fits-all solution, but to provide you the components for building your scripts. The challenge we all have is to make sure we're using the right components in the right way, the proverbial "driving a screw with a hammer" problem. When presented with a new tool set, don't try to use it the same way as other tools you've had in the past. Instead, learn the tool on its own so you can make the best use of it for your process.

The #SQLNewBlogger Challenge: Git ‘er done!

This week my friend Ed Leighton-Dick (@eleightondick) announced his New SQL Blogger challenge. It's an effort focused on getting new bloggers to write regularly and build a habit of blogging. We've heard a lot about how blogging can build your personal brand, so I'm a big fan of this challenge and will participate, even though I've been blogging off and on for the past few years. So far, some big names have come out in support of this challenge. Awesome to see. Not to try and ride their coattails, but I want to add my own thoughts, because I think it's incredibly important to participate.

Most community members will be intimidated by this challenge. I say this because I've heard (and said) all the excuses that are probably going through your head when you think about blogging. I want to show you how you can overcome that intimidation and participate successfully, jump-starting your blogging career.

I don’t have anything to blog about

I hear this all the time. Really what people are saying is "I don't have anything valuable to blog about", and I completely call shenanigans on this attitude, for much the same reason I tell folks they should start presenting. Everyone has something to share. Even if you think it's simple or a no-brainer, I guarantee you someone will benefit from your knowledge.

Let’s consider why new bloggers get so intimidated. The perception is that current bloggers, especially the BIG names, always seem to have some clever script or gotcha to contribute. Something no one else has ever thought of. It’s a tough act to follow, especially if you are just getting started.

However, to butcher a song lyric, “Any blog is a good blog”.

I always recommend that new bloggers approach their blogging as self-documentation. Write for yourself; don't expect anyone else to read it (and if they do, BONUS!). There have been a number of times where I go back to my blog for a technique or script I wrote in the past. It's a great entry point to get you started writing and reduces the "freak out" about other people reading what you wrote.

They’re All Going to Laugh at Me

This could also be the “what if I’m wrong” clause. For new presenters and bloggers, there seems to be a permeating fear about getting called out for something wrong or bad that they publish. I’m sorry, have you met the #SQLFamily? What I love about the SQL Server community is that most folks out there are extremely supportive and helpful. If something is wrong, the community will help you fix it and learn from it.

The bonus of doing this on the internet is that making corrections and updates is easy. If someone corrects you or shows you a better way, you can blog about it! If there's an error, you can fix it! Consider your blog a living diary that can be adjusted as necessary. The only caveat here: be public about your changes. Either write a new post or make an addendum calling out your edit. Stealth edits look fishy; be public.

Who Has Time To Blog?

Blogging is like any other part of your life where you need to grow: you need to make time for it.  It doesn’t have to be much, an hour or two. The trick is to schedule it like any other commitment and stick to that schedule.

My routine is to write every Saturday morning. I found a nice little tea shop near my house and include that in my morning routine:

  1. I’ll walk to the tea shop around 9 AM. It’s a 30 minute walk and gives me time to think about what I will write about. Plus, the physical activity energizes me.
  2. Once I get to the shop, I order my tea and breakfast (oatmeal, ‘cause I’m trying to lose ‘dat weight). Then I find a space, get plugged in, and start writing.
  3. The writing process is very stream of consciousness. I use Google docs and basically just spitball out what’s in my brain. I don’t worry so much about grammar or sentence structure, the idea is to get my thoughts on paper. This also might include hacking out scripts or testing the stuff I’m blogging about if it’s technical.
  4. Once the writing is complete, I'll take a Twitter/Internet break (note: I shut Twitter down while writing to reduce distractions). Not long, maybe 15 minutes.
  5. After the break, I’ll do one major pass to clean up sentence structure and grammar. Then I shut it down and go home.

 

It should be noted that at this point the post isn't quite complete, but the bulk of the work is done. The next steps for me are to get the post into WordPress and schedule it. I always schedule my blog posts for Tuesdays at 8 AM MDT, giving myself a deadline.

You need to commit to this to make it work. The best way I've found to hold myself to commitments is to set deadlines. Need to build a presentation? Commit to giving it on a specific date. Need to get a blog post up? There's my weekly publish deadline. Will you hit those deadlines every time? Probably not, but as long as it's not a habit and you don't let yourself get away with missing a deadline, you'll be fine.

How Can I Help?

While I think this is a great challenge, it's fairly obvious I'm not a new blogger. I'm participating by lending my less-than-expansive blogging experience to help others get started. This post is the first portion of me owning the challenge, as I hope to show you the path to getting started. There's yet more I can contribute, so here are the next steps:

  • Need someone to review your blog before you post it? Hit me up.
  • Want to bounce blog post ideas off of someone? I can do this.
  • Lacking inspiration for what to blog about? Let’s talk.

Let me help you make the most of this challenge. I'm not an expert, I'm not a big name, I'm just a dude doing his SQL thing. But I think I can share some of that with you to make the road a little easier. Email me via mike at this blog.

Own that $#!+

Blogging, like presenting, is a huge part of building your career and personal brand. It will make you more visible to your peers, help you retain knowledge, and improve your writing skills through practice. By blogging you strengthen the larger SQL community by adding to the pool of tribal knowledge as well as making yourself a stronger member of that group. Remember, you have something to contribute, a unique piece of knowledge that you can share with your comrades in the community. I encourage you to step up and answer the challenge.

Auditing SQL Execution Times With #Powershell

Last week, I was talking with another DBA on my team about our release process.  He was running a series of SQL scripts as part of a code release.  There were about 4 different scripts and, while he was running each one, he would log the start and stop time for each script execution in a text file. We both recognized this as a perfect opportunity for some Powershell automation and I figured I’d put something together.

The pattern for logging this kind of work isn't terribly complex. We have three steps:

  1. Record the start time
  2. Execute the SQL command or script
  3. Record the end time

The simplest way to express this in Powershell-ese would be:

$StartTime = Get-Date
Invoke-SqlCmd -ServerInstance localhost -Database YourDB -InputFile '.\YourSQLFile.sql'
$EndTime = Get-Date
$Duration = New-TimeSpan -Start $StartTime -End $EndTime
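
Recording the result could be as simple as appending a line to a text file (the path and format here are just an illustration):

#Append a simple log line with the run duration in seconds
Add-Content -Path 'C:\Temp\ExecutionLog.txt' -Value "YourSQLFile.sql,$StartTime,$EndTime,$($Duration.TotalSeconds)"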

Simple, effective, and as shown, the results can be recorded to any desired output, such as a text file or a database table. Why stop here? Whenever we build automation, we should focus on flexibility and doing the work up front. So let's enhance this.

If you’ve been reading my blog, you’ve probably noticed that I like to make use of custom Powershell objects for reporting. Custom objects give you a flexible and effective way to generate output, as it allows you to present your results in a variety of ways. With this in mind, I put together the following wrapper function to take either a SQL command or script, execute that command or script, and capture the relevant data about the execution.

function Measure-SqlCmd{
    [CmdletBinding()]
    param($instancename
        ,$databasename = 'tempdb'
        ,[Parameter(ParameterSetName = 'SQLCmd',Mandatory=$true)]$Query
        ,[Parameter(ParameterSetName = 'SQLScript',Mandatory=$true)]$InputFile)

    #Build the output object and stamp the start time
    $output = New-Object System.Object
    $errval = $null

    $output | Add-Member -Type NoteProperty -Name InstanceName -Value $instancename
    $output | Add-Member -Type NoteProperty -Name DatabaseName -Value $databasename
    $output | Add-Member -Type NoteProperty -Name StartTime -Value (Get-Date)

    #Run either the script file or the query, depending on the parameter set
    if($PSCmdlet.ParameterSetName -eq 'SQLScript'){
        $output | Add-Member -Type NoteProperty -Name SQL -Value $InputFile
        $sqlout = Invoke-Sqlcmd -ServerInstance $instancename -Database $databasename -InputFile $InputFile -ErrorVariable errval
    }
    else{
        $output | Add-Member -Type NoteProperty -Name SQL -Value $Query
        $sqlout = Invoke-Sqlcmd -ServerInstance $instancename -Database $databasename -Query $Query -ErrorVariable errval
    }

    #Stamp the end time, then capture duration, results, and any error
    $output | Add-Member -Type NoteProperty -Name EndTime -Value (Get-Date)
    $output | Add-Member -Type NoteProperty -Name RunDuration -Value (New-TimeSpan -Start $output.StartTime -End $output.EndTime)
    $output | Add-Member -Type NoteProperty -Name Results -Value $sqlout
    $output | Add-Member -Type NoteProperty -Name Error -Value $errval

    return $output
}

At the core, we’re using the same pattern as declared above, but we’re just capturing other pieces of data and putting them into the properties of our custom object.  This allows us to do all sorts of stuff.  Let’s start with simply running our function:

Measure-SqlCmd -instancename 'localhost' -databasename 'demoPartition' -Query 'exec usp_loadpartitiondata;'

The results are a nice little object list, giving us the information about our SQL command:

[Screenshot: Measure-SqlCmd output]

Handy, no? Well, now the fun begins. Let's say we have a couple of commands to execute and we want to output the results to a comma separated values (.csv) file for logging. Because we did all the work upfront of making a function and the custom object, the rest becomes a snap:

$total = @()
$total += Measure-SqlCmd -instancename 'localhost' -databasename 'demoPartition' -Query 'exec usp_loadpartitiondata;'
$total += Measure-SqlCmd -instancename 'localhost' -databasename 'demoPartition' -Query 'exec usp_fragmentpartition;'
$total | Select-Object InstanceName,DatabaseName,StartTime,EndTime,SQL,RunDuration | Export-Csv -Path 'C:\Temp\ExecutionLog.csv' -NoTypeInformation

[Screenshot: Measure-SqlCmd results in the .csv log]

The bonus of using this pattern is that I can use the object collection for a variety of reporting methods. Using Export-Csv, I can easily create my .csv log file. Another option is to use Chad Miller's (@cmille19) Out-DataTable and Write-DataTable to record this info to a database table. We now have a flexible logging process to work with.

We also go back to the principle of tool building. Here's a tool for SQL execution that gives me a repeatable logging process. As I continue to say, automation is about consistency. We can use this logging method to automate SQL script and command executions across multiple instances, track them all in the same way, and then store or handle the results as appropriate to our process. The end result is tracking our work the same way, every time.