
#TSQL2SDAY 65 – TIL Edition: Whaddya mean there’s no endpoint?

This month’s T-SQL Tuesday is from Doc Mike Donnelly (@SQLMD) with a simple premise: show us something you learned. Considering that working in the information technology field means we are always learning (or at least should be), blogging on this should be a snap. It’s a great fit for the #SQLNewBlogger challenge if you’re in on it, or just a nice reason to write if you’re not.

My tidbit is somewhat Powershell related, but touches on general setup of Availability Groups. If you read my earlier short series on automating SQL Server builds, you saw how you can build an Availability Group using Powershell. When I was testing that process, it worked just fine for building the Availability Group, but I ran into problems when adding databases. When I went through the troubleshooting, I discovered that while you specify an endpoint in the Powershell cmdlet call, that cmdlet doesn’t actually create said endpoint. It just records the endpoint as part of the config.

This should not really come as a surprise. If you ever use the wizard in SQL Server Management Studio to create an Availability Group, it will script out commands to create the proper endpoint. I assumed that the Powershell cmdlet would work similarly to the wizard, but it does not. This means I will have to add a step in my process to actually create the endpoint.

There are two ways to do this. The first is to simply use T-SQL in my script and pass it to either Invoke-SqlCmd or sqlcmd:

$endpointsql = @"
CREATE ENDPOINT [HADR_endpoint]
STATE=STARTED
AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
FOR DATABASE_MIRRORING (ROLE = ALL, AUTHENTICATION = WINDOWS NEGOTIATE
, ENCRYPTION = REQUIRED ALGORITHM AES);
"@

Invoke-SqlCmd -ServerInstance 'NODE1' -Database 'master' -Query $endpointsql

The second option is to use the SMO to create an endpoint:

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$smonode = New-Object Microsoft.SqlServer.Management.Smo.Server 'NODE1'

#create endpoint
if($smonode.Endpoints.Name -notcontains 'HADR_endpoint'){
    $EndPoint = New-Object Microsoft.SqlServer.Management.Smo.Endpoint($smonode, 'HADR_endpoint')
    $EndPoint.EndpointType = 'DatabaseMirroring'
    $EndPoint.ProtocolType = 'Tcp'
    $EndPoint.Protocol.Tcp.ListenerPort = 5022
    $EndPoint.Payload.DatabaseMirroring.ServerMirroringRole = 'All'
    $EndPoint.Payload.DatabaseMirroring.EndpointEncryption = 'Required'
    $EndPoint.Payload.DatabaseMirroring.EndpointEncryptionAlgorithm ='Aes'
    $EndPoint.Create()
    $EndPoint.Start()
}

The choice between these options comes down to personal comfort; either one would be acceptable in my eyes. I lean towards the SMO method just because I’m likely to be using the SMO objects for other operations in a script.

The lesson here is the old line about assumptions being the mother of all screwups and why you should always understand what your tools are doing. If you lean on wizards all the time, you will not see what is happening under the covers. Using a tool is fine, even necessary, but you should always make sure you understand the mechanics of those tools in case something does not work as expected.

How can you learn those mechanics? Almost every action in SSMS can be scripted out into T-SQL. This key learning tool is something I always stress to new SQL Server folks. It is the easiest way to see what SQL Server is trying to do and helps build your skills. The next time you’re trying to do something in Management Studio, study the script, and teach yourself something new.

Update:

So there’s a third option and I’m mildly annoyed at myself for missing it. As Nic Cain (@SirSQL) points out in the comments, Microsoft provides cmdlets for endpoints. In keeping with the examples above, here’s how you use them:

$endpoint = New-SqlHadrEndpoint HADR_Endpoint -Port 5022 -Path SQLSERVER:\SQL\NODE1\DEFAULT
Set-SqlHadrEndpoint -InputObject $endpoint -State "Started"

A LOT simpler than hand-rolling the SMO code, and it remains consistent with the rest of the script. Probably the better way to go. Thanks, Nic!

The PASS Summit 2015 Call for Speakers

Here we are, the annual community tradition of submitting sessions to the PASS Summit. This is the big time, the real deal, the event everyone wants to be seen at (as far as SQL Server goes). We all have hopes and dreams of standing on that stage, sharing what we love about SQL Server with the greater community.  Now that the submission deadline has passed, I wanted to post a few thoughts I had on this year’s process.

And yes, I want to do it before the OTHER annual community tradition of “bitching about why I didn’t get selected” starts.

Too Many Topics

The first thing that struck me was the excessive number of topics in each track. I didn’t count them all, but it was a little overwhelming. What made the process especially difficult was that there’s a lot of overlap between these subcategories, making it confusing to pick one. This is especially a challenge for me with Powershell submissions, because many of the abstracts I created could have easily gone under two or three subcategories.

One example is my abstract on automating SQL Server builds with Powershell. Now, there’s a Powershell/SMO subcategory and one for Installing/Upgrading SQL Server. Which do I choose? I went with Powershell because that’s the focus of my session, but it might have been more applicable to the other.  This is a problem.

Suggestion: Either PASS needs to cull the overall list of topics or allow you to select multiple topics for your abstract.  Personally, I’d prefer the former.

Too Many Submissions

We have a good problem in the SQL Server community: tons of great speaking talent. There are lots of folks at the local and regional level who give great presentations at local user groups and SQL Saturdays. There’s also a fair amount of mediocre talent as well. However, all of these speakers can submit to Summit, regardless of their experience level. This places an overwhelming burden of work on the program committee.

I don’t want to make speaking at the Summit an exclusive club. I think all community members should have the opportunity to present at the Summit, but we need to balance that with consideration for the program committee volunteers and trying to provide the best product for Summit attendees. As such, the process should focus on standards that encourage both active community participation and a focus on public speaking.

Suggestion: PASS needs to start placing restrictions on who can and can’t submit. They already do this for the precons. My initial thought is a speaker should have four public SQL speaking events in the past calendar year. With opportunities like local user groups, SQL Saturdays, and virtual chapters, this is not a difficult number to hit.  It ensures that the speaker is an active community participant, someone who is contributing and practicing their craft.

We need more faces

In general, this isn’t a big deal, but we do see many popular speakers getting multiple sessions at the Summit. I understand these folks are a draw and will put butts in seats, but the reality is that every person who gets doubled up means one less speaker who will be selected. Again, considering the depth of speaking talent in the PASS community, this strikes me as problematic in general and a wasted opportunity to increase the variety of speakers at the Summit.

Suggestion: All speakers are limited to a single session, no exceptions.

More Transparency Around the Selection

I know this is a constant refrain regarding PASS, but it is needed. This is not me piling on the organization for being closed and secretive, but simply reminding them that we still need to know more about what’s going on. I’ve been paying attention to the selection process for the past two years and it IS getting better. I like how open PASS has been with how the program committee works.

The challenge is that the program volunteers don’t make the actual selections. They make recommendations that are passed to the selection committee. The selection committee then performs “levelling” and uses that to select sessions. This is the black box, because I don’t know what the levelling process entails and how things are judged. I’m not advocating some grand conspiracy theory around who gets selected, but there will always be questions and criticism until we are better informed about why.

Suggestion: PASS needs to continue what they have been doing and communicate more about the process. As I said, I recognize that the process of improvement is ongoing and it is getting better, but it can’t stop. The community needs communication, especially as we grow larger and larger.

Videos Don’t Add Value

I understand that PASS is trying to gather as much information about a speaker as possible. Videos are not the way to do it. There are too many issues with recording quality and software that make this an unreliable method. Additionally, many videos won’t actually display a candidate’s public speaking skills, only their ability to sit down and chat in front of a camera. These combine to make video recordings more subjective than objective, and we need to focus on objective quantifiers.

Suggestion: Just ditch the video submission.

You Don’t Deserve To Speak At The Summit

This one is directed to the entire community, the people submitting. Every year I see the same tweets and posts griping about how certain people did not get selected or others did. How the process is horribly broken. To be fair, when you craft abstracts and then wait two to three months for a response, the resulting rejection is disappointing.

I think, because of all the other opportunities in the community, being rejected from a speaking event is a bit of a shock. The reality is that competition is tight for the Summit and speaking slots are limited. Getting turned down really is not that surprising. What bothers me most is that I see an attitude of entitlement from some community members concerning presenting at the Summit. None of us are owed a speaking slot and the burden is on each individual speaker to prove their worth.

I view speaking at the Summit a lot like free agency in a professional sport. There’s a lot of competition to get noticed and a lot of reward if you do. Some people have an edge because they’re really good at what they do and demonstrate that on the public stage. Some others are hidden gems, waiting for their opportunity to get noticed. However, the market is driven by demand and selections are made based on that demand, not by attribution or “right”. We as speakers should always be striving to prove ourselves to the community and our selection should be based on ability, presence, and contribution, all of which will drive up demand.

Suggestion: Chill out if you don’t get selected. Focus on what you can do locally and regionally, through your own user group and SQL Saturdays. The Summit is awesome and I love going, but the truth is I have MUCH more of an impact speaking at smaller events. I encourage all community members to have a presence at these events, to focus on what they can do in their city. Build your community resume and skill set so that when you do submit to the Summit, you’ll be better prepared to make it to that stage.

In Conclusion…

I’ve already shared some of this with some members of the PASS board privately, but I wanted to share this publicly as well. This is our community and we all have the ability to effect this change by being involved and reaching out. What has always impressed me about PASS is how grassroots it is and how members feed off each other. We need to focus on that, because it is what makes our community stronger than any other technical community out there.

Effectively Using Invoke-SqlCmd

I really enjoy interacting with the SQL community on Twitter. Whether it is exchanging silly one-liners, sharing interesting articles and blog posts, or trying to answer a tricky problem, the conversation is always engaging. One such conversation last week was with a community member who was struggling with the output he wanted out of Invoke-SqlCmd. I was able to help him out, but the conversation got me thinking about how people use this cmdlet within their scripts and processes.

The focal point of the confusion is how Invoke-SqlCmd relates to sqlcmd.exe. Sqlcmd has been around since SQL 2005, replacing the antiquated osql executable. Many administrators have gotten used to the robustness of sqlcmd and how they can leverage it for their backend automation. I’ve used sqlcmd for many tasks over the years.

Then along came Powershell and Invoke-SqlCmd. This cmdlet, included with the SQL Server provider, is intended to cover a lot of the functionality that an administrator can find in sqlcmd, but now as part of Powershell. What people get hung up on is that Invoke-SqlCmd is not a direct replacement of sqlcmd and that it can’t (and shouldn’t) be used in the same way.

Thinking Differently

The challenge when thinking about sqlcmd versus Invoke-SqlCmd focuses on outputs. One of the cardinal rules of Powershell is that everything is an object. When comparing sqlcmd with Invoke-SqlCmd, this difference becomes quite stark. First, let’s look at sqlcmd’s output, capturing it into a variable and then examining the types that make up the output:

[Screenshot: sqlcmd output]
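
If you want to try this yourself, a minimal sketch of the kind of commands behind that screenshot, assuming a local default instance, looks like this:

#Capture sqlcmd output into a variable and inspect the types
$raw = sqlcmd -S localhost -Q "SELECT COUNT(*) AS DatabaseCount FROM sys.databases;"
$raw.GetType().FullName     #System.Object[] - the whole output is an array
$raw[0].GetType().FullName  #System.String - each line, header and row count included, is plain text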

Using the GetType() method, we can see that the collected output of sqlcmd is an array, with each element of the array being a string. What this means is that each line of output from sqlcmd (including the header and the row count output) is a single string, with all column elements concatenated into that string. As you can imagine, this is clumsy to handle and would require a lot of ugly string parsing to separate out the elements.

Now let’s look at the same command using Invoke-Sqlcmd:

[Screenshot: Invoke-SqlCmd output]
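
Here is the Invoke-SqlCmd side of that comparison, again as a rough sketch against a local default instance:

#The same query through Invoke-SqlCmd returns an object, not text
$row = Invoke-Sqlcmd -ServerInstance localhost -Query "SELECT COUNT(*) FROM sys.databases;"
$row.GetType().FullName          #System.Data.DataRow - a single object, not an array of strings
$row.Column1.GetType().FullName  #System.Int32 - the unnamed column comes back as Column1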

Right off the bat, the output is noticeably different. Our total output object is now a DataRow, with the individual column being an Int32 type. You’ll also note that there is no item count. This is because we have a singular object as output (which has no count or length properties). If we further investigate the output using Get-Member, we’ll find that the columns of our query are actual properties of the output object (in this case, Column1 since we didn’t name our column).

A New Perspective

This helps us because we can now start treating our outputs as data, not just a mass of text that has to be parsed. For example, let’s consider the following query:

SELECT
SERVERPROPERTY('InstanceName') as InstanceName
,SERVERPROPERTY('MachineName') as MachineName
,SERVERPROPERTY('ProductVersion') as SQLVersion
,SERVERPROPERTY('Edition') as SQLEdition
,SERVERPROPERTY('ProductLevel') as SQLReleaseLevel

If you want to make use of these values in Powershell and use sqlcmd, you would have to perform all sorts of string manipulations to separate out the values. To make it easy, we’d likely have each value a separate call to sqlcmd.exe. Using Invoke-SqlCmd changes the game, allowing you to easily access each column as a property of your output:

$sql=@"
SELECT
SERVERPROPERTY('InstanceName') as InstanceName
,SERVERPROPERTY('MachineName') as MachineName
,SERVERPROPERTY('ProductVersion') as SQLVersion
,SERVERPROPERTY('Edition') as SQLEdition
,SERVERPROPERTY('ProductLevel') as SQLReleaseLevel
"@

$sqlstats = Invoke-Sqlcmd -Query $sql

cls

'Instance: ' + $sqlstats.InstanceName
'Host: ' + $sqlstats.MachineName
'Product Info: ' + $sqlstats.SQLVersion +'('+$sqlstats.SQLReleaseLevel+') - ' + $sqlstats.SQLEdition


[Screenshot: server properties output]

This ability to reference columns as properties gets even more powerful when we deal with larger data sets for our scripts. A common technique I use is to leverage a collection of databases to drive work I need to do. With my collection as a series of objects, each row element is easier to access and make use of for my processing. To see a detailed example of this, run the following commands and study the output of the Get-Member call:

$dbs = Invoke-Sqlcmd -Query 'select name,recovery_model_desc,page_verify_option_desc from sys.databases;'
$dbs | Get-Member
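
From there, the columns drive the logic. As a rough sketch of putting that collection to work (the CHECKSUM check is just an illustrative example):

#Flag any database not using CHECKSUM page verification
$dbs | Where-Object {$_.page_verify_option_desc -ne 'CHECKSUM'} |
    ForEach-Object {Write-Output "$($_.name) is set to $($_.page_verify_option_desc)"}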

Data Driven Decisions

Invoke-SqlCmd lets you integrate your data with your scripts. When you make your outputs discrete objects, data becomes readily accessible for your automation. It allows you to move those clunky parts of your processing out of T-SQL and into Powershell, making your scripts more robust. You can use T-SQL for what T-SQL is good at, allowing Powershell to take over where appropriate.

This does not replace sqlcmd. Another key difference is that Invoke-SqlCmd isn’t interactive, whereas sqlcmd is. If you have need of a command line tool for using SQL Server (you’re on Server Core or you need an emergency connection), sqlcmd will be your weapon of choice.

The fuzzy area is around compiled SQL scripts you might use. Both Invoke-SqlCmd and sqlcmd.exe can accept these scripts and allow you to perform sqlcmd’s variable replacement for parameterizing scripts. Choosing which method you use, in my mind, comes back to outputs. If your script output is simply a matter of logging the entire execution of your script for later review, sqlcmd is a perfectly acceptable method. However, if you need to integrate your scripts into a larger process and make use of that output within the script execution, Invoke-SqlCmd could be your better option.
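
For example, here is a rough sketch of the Invoke-SqlCmd route, where the script file (an illustrative name here) references $(SchemaName) and $(TableName) as sqlcmd variables:

#Pass sqlcmd-style variables into the script file
$sqlcmdvars = @('SchemaName=dbo','TableName=Orders')
Invoke-Sqlcmd -ServerInstance 'localhost' -InputFile '.\GetRowCount.sql' -Variable $sqlcmdvars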

Powershell is about options. Everyone has their own methods and patterns, the idea behind Powershell scripting is not to give you a one-size-fits-all solution, but to provide you the components for building your scripts. The challenge we all have is to make sure we’re using the right components in the right way, the proverbial “driving a screw with a hammer” problem. When presented with a new tool set, don’t try and use it the same way as other tools you’ve had in the past. Instead, learn the tool on its own so you can make the best use of it for your process.

The #SQLNewBlogger Challenge: Git ‘er done!

This week my friend Ed Leighton-Dick (@eleightondick) announced his New SQL Blogger challenge.  It’s an effort focused on getting new bloggers to write regularly and build a habit of blogging. We’ve heard a lot about how blogging can build your personal brand, so I’m a big fan of this challenge and will participate, even though I’ve been blogging off and on for the past few years.  So far, some big names have come out in support of this challenge. Awesome to see. Not to try and ride their coattails, but I want to add my own thoughts because I think it’s incredibly important to participate.

Most community members will be intimidated by this challenge. I say this because I’ve heard (and said) all the excuses that are probably going through your head when you think about blogging.  I want to show you how you can overcome that intimidation and participate successfully, jump starting your blogging career.

I don’t have anything to blog about

I hear this all the time. Really what people are saying is “I don’t have anything valuable to blog about” and I completely call shenanigans on this attitude, for much the same reason as why I tell folks they should start presenting. Everyone has something to share. Even if you think it’s simple or a no-brainer, I guarantee you someone will benefit from your knowledge.

Let’s consider why new bloggers get so intimidated. The perception is that current bloggers, especially the BIG names, always seem to have some clever script or gotcha to contribute. Something no one else has ever thought of. It’s a tough act to follow, especially if you are just getting started.

However, to butcher a song lyric, “Any blog is a good blog”.

I always recommend that new bloggers approach their blogging as self documentation. Write for yourself, don’t expect anyone else to read it (and if they do, BONUS!). There have been a number of times where I go back to my blog for a technique or script I wrote in the past. It’s a great entry point to get you started writing and reduces the “freak out” about other people reading what you wrote.

They’re All Going to Laugh at Me

This could also be the “what if I’m wrong” clause. For new presenters and bloggers, there seems to be a permeating fear about getting called out for something wrong or bad that they publish. I’m sorry, have you met the #SQLFamily? What I love about the SQL Server community is that most folks out there are extremely supportive and helpful. If something is wrong, the community will help you fix it and learn from it.

The bonus of doing this on the internet is that making corrections and updates is easy. If someone corrects you or shows you a better way, you can blog about it! If there’s an error, you can fix it! Consider your blog a living diary that can be adjusted as necessary. The only caveat here: Be public about your changes. Either write a new post or make an addendum calling out your edit. Stealth edits look fishy, so be public.

Who Has Time To Blog?

Blogging is like any other part of your life where you need to grow: you need to make time for it.  It doesn’t have to be much, an hour or two. The trick is to schedule it like any other commitment and stick to that schedule.

My routine is to write every Saturday morning. I found a nice little tea shop near my house and include that in my morning routine:

  1. I’ll walk to the tea shop around 9 AM. It’s a 30 minute walk and gives me time to think about what I will write about. Plus, the physical activity energizes me.
  2. Once I get to the shop, I order my tea and breakfast (oatmeal, ‘cause I’m trying to lose ‘dat weight). Then I find a space, get plugged in, and start writing.
  3. The writing process is very stream of consciousness. I use Google docs and basically just spitball out what’s in my brain. I don’t worry so much about grammar or sentence structure, the idea is to get my thoughts on paper. This also might include hacking out scripts or testing the stuff I’m blogging about if it’s technical.
  4. Once the writing is complete, I’ll take a Twitter/Internet break (note, I shut Twitter down during the writing, reduce those distractions). Not long, maybe 15 minutes.
  5. After the break, I’ll do one major pass to clean up sentence structure and grammar. Then I shut it down and go home.

 

It should be noted that at this point the blog isn’t quite complete, but the bulk of the work is done.  Next steps for me are to get the post into WordPress and schedule it. I always schedule my blog posts for Tuesdays at 8 AM MDT, giving myself a deadline.

You need to commit to this to make it work. The best way I’ve found to hold myself to commitments is to set deadlines. Need to build a presentation? Commit to giving it on a specific date. Need to get a blog post up? There’s my weekly publish deadline. Will you hit those deadlines every time? Probably not, but as long as it’s not a habit and you don’t let yourself get away with missing a deadline, you’ll be fine.

How Can I Help?

While I think this is a great challenge, I think it’s fairly obvious I’m not a new blogger. How I’m participating is lending my less-than-expansive blogging experience to get others started. This post is the first portion of me owning the challenge as I hope to show you the path to getting started. There’s yet more that I can contribute. So here’s the next steps:

  • Need someone to review your blog before you post it? Hit me up.
  • Want to bounce blog post ideas off of someone? I can do this.
  • Lacking inspiration for what to blog about? Let’s talk.

Let me help you make the most of this challenge. I’m not an expert, I’m not a big name, I’m just a dude doing his SQL thing. But I think I can share some of that with you to make the road a little easier.  Email me via mike at this blog.

Own that $#!+

Blogging, like presenting, is a huge part of building your career and personal brand. It will make you more visible to your peers, help you retain knowledge, and improve your writing skills through practice. By blogging you strengthen the larger SQL community by adding to the pool of tribal knowledge as well as making yourself a stronger member of that group. Remember, you have something to contribute, a unique piece of knowledge that you can share with your comrades in the community. I encourage you to step up and answer the challenge.

Auditing SQL Execution Times With #Powershell

Last week, I was talking with another DBA on my team about our release process.  He was running a series of SQL scripts as part of a code release.  There were about 4 different scripts and, while he was running each one, he would log the start and stop time for each script execution in a text file. We both recognized this as a perfect opportunity for some Powershell automation and I figured I’d put something together.

The pattern for logging this kind of work isn’t terribly complex. We have three steps:

  1. Record the start time
  2. Execute the SQL command or script
  3. Record the end time

The simplest way to express this in Powershell-ese would be:

$StartTime = Get-Date
Invoke-SqlCmd -ServerInstance localhost -Database YourDB -InputFile '.\YourSQLFile.sql'
$EndTime = Get-Date
$Duration = New-TimeSpan -Start $StartTime -End $EndTime

Simple, effective, and can then be recorded to any desired output, such as a text file or a database table. Why stop here? Whenever we build automation, we should focus on flexibility and doing the work up front. So let’s enhance this.

If you’ve been reading my blog, you’ve probably noticed that I like to make use of custom Powershell objects for reporting. Custom objects give you a flexible and effective way to generate output, as it allows you to present your results in a variety of ways. With this in mind, I put together the following wrapper function to take either a SQL command or script, execute that command or script, and capture the relevant data about the execution.

function Measure-SqlCmd{
   param($instancename
       ,$databasename = 'tempdb'
       ,[Parameter(ParameterSetName = 'SQLCmd',Mandatory=$true)]$Query
       ,[Parameter(ParameterSetName = 'SQLScript',Mandatory=$true)]$InputFile)

   $output = New-Object System.Object
   $errval = $null

   $output | Add-Member -Type NoteProperty -Name InstanceName -Value $instancename
   $output | Add-Member -Type NoteProperty -Name DatabaseName -Value $databasename
   $output | Add-Member -Type NoteProperty -Name StartTime -Value (Get-Date)

   #Run either the script file or the ad hoc query, depending on which parameter was supplied
   if($InputFile){
       $output | Add-Member -Type NoteProperty -Name SQL -Value $InputFile
       $sqlout = Invoke-Sqlcmd -ServerInstance $instancename -Database $databasename -InputFile $InputFile -ErrorVariable errval
   }
   else{
       $output | Add-Member -Type NoteProperty -Name SQL -Value $Query
       $sqlout = Invoke-Sqlcmd -ServerInstance $instancename -Database $databasename -Query $Query -ErrorVariable errval
   }

   $output | Add-Member -Type NoteProperty -Name EndTime -Value (Get-Date)
   $output | Add-Member -Type NoteProperty -Name RunDuration -Value (New-TimeSpan -Start $output.StartTime -End $output.EndTime)
   $output | Add-Member -Type NoteProperty -Name Results -Value $sqlout
   $output | Add-Member -Type NoteProperty -Name Error -Value $errval

   return $output

}

At the core, we’re using the same pattern as declared above, but we’re just capturing other pieces of data and putting them into the properties of our custom object.  This allows us to do all sorts of stuff.  Let’s start with simply running our function:

Measure-SqlCmd -instancename 'localhost' -databasename 'demoPartition' -Query 'exec usp_loadpartitiondata;'

The results are a nice little object list, giving us the information about our SQL command:

[Screenshot: Measure-SqlCmd output]

Handy, no? Well, now the fun begins. Let’s say we have a couple commands to execute and we want to output this to a comma-separated values (.csv) file for logging. Because we did all the up-front work of making a function and the custom object, the rest becomes a snap:

$total = @()
$total += Measure-SqlCmd -instancename 'localhost' -databasename 'demoPartition' -Query 'exec usp_loadpartitiondata;'
$total += Measure-SqlCmd -instancename 'localhost' -databasename 'demoPartition' -Query 'exec usp_fragmentpartition;'
$total | Select-Object InstanceName,DatabaseName,StartTime,EndTime,SQL,RunDuration | Export-Csv -Path 'C:\Temp\ExecutionLog.csv' -NoTypeInformation

[Screenshot: Measure-SqlCmd results exported to .csv]

The bonus of using this pattern is I can use the object collection for a variety of reporting methods.  Using Export-Csv, I can easily create my .csv log file. Another option is to use Chad Miller’s (@cmille19) Out-DataTable and Write-DataTable to record this info to a database table. We now have a flexible logging process to work with.
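
If I go the database table route, the collection converts over just as easily. Here is a rough sketch, assuming Chad’s Out-DataTable and Write-DataTable functions have been dot-sourced into the session (parameter names reflect his published versions, and the target database and table are illustrative):

#Convert the object collection to a DataTable and bulk load it into a logging table
$logtable = $total | Select-Object InstanceName,DatabaseName,StartTime,EndTime,SQL | Out-DataTable
Write-DataTable -ServerInstance 'localhost' -Database 'AdminDB' -TableName 'ExecutionLog' -Data $logtable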

We also go back to the principle of tool building. Here’s a tool for SQL execution that I can use to give me a repeatable logging process. As I continue to say, automation is about consistency. We can use this logging method to automate SQL script and command executions across multiple instances, track them all in the same way, and then store or handle the results appropriately for our process. The end result is tracking our work the same way, every time.

Using #Powershell to Measure SQL Transactions

A common question that comes up for SQL Server administrators is “how do I measure performance?” This is a tough question and there are a variety of ways to do this.  Just off the top of my head, you could measure wait statistics, logical reads and writes for a query, or query duration. When we talk about system performance, we need more general metrics and the one that business folks tend to look at is transactions per second.

For SQL Server, this metric is captured within the SQL Server performance counters.  There are actually two counters you could use: Batch Requests/sec and Transactions/sec. There are some significant differences between these two counters and this SQL Magazine article covers them both in great detail.  For my work, I like to go with Batch Requests/sec because it captures more transactions going on within the instance, making it a more informative value for measuring system performance.

What’s the best way to capture this? Setting up a perfmon data collection is pretty easy, but then we have to manage creating the collection, then manage the output, and making sense of it. This can be a bear to handle and take a fair amount of manual work to get in place, especially if we just need a quick check on what’s going on in the system.

Powershell provides the Get-Counter cmdlet to query Windows perfmon counters, allowing us rapid access to the information without a lot of hassle. With this cmdlet we can easily measure system performance:

Get-Counter -Counter '\SQLServer:SQL Statistics\Batch Requests/sec'

The output is a PerformanceCounterSampleSet object that is a collection of samples for the call. Using the above command only gives us a single sample, so that makes it simple. We’ll talk about multiple samples in a bit.
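
Pulling the value out of that single sample is straightforward (this sketch assumes a default instance):

#Grab the one sample in the set and read its calculated value
$sampleset = Get-Counter -Counter '\SQLServer:SQL Statistics\Batch Requests/sec'
$sampleset.CounterSamples[0].CookedValue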

 

The key properties of a sample to focus on are:

  • Timestamp – When the sample was collected
  • RawValue – Running cumulative counter for the object
  • CookedValue – The “actual” value, calculated from the previous RawValue and the current RawValue.

For reference, if you’ve ever looked at these values in sys.dm_os_performance_counters you have seen the RawValue. This means you are also probably used to doing your own math to measure these values. Powershell and the perfmon object model will do this math for us, making it easier for us to use.
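
If you want to see that math for yourself, here’s a rough sketch using two raw samples taken five seconds apart:

$first = (Get-Counter -Counter '\SQLServer:SQL Statistics\Batch Requests/sec').CounterSamples[0]
Start-Sleep -Seconds 5
$second = (Get-Counter -Counter '\SQLServer:SQL Statistics\Batch Requests/sec').CounterSamples[0]

#The same calculation that produces CookedValue: counter delta divided by elapsed seconds
($second.RawValue - $first.RawValue) / ($second.Timestamp - $first.Timestamp).TotalSeconds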

How do we leverage this for quickly measuring our instance’s transaction performance? I’ve put together the following function to do this for me.  The approach is simple: I give it an instance name and a duration in seconds, and my code will build a Get-Counter call to calculate the Batch Requests/sec information on my instance.

function Get-SQLTxnCount{

    param([string]$InstanceName='localhost'
        ,[int]$DurationSec)

    $smo = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $InstanceName
    $ComputerName = $smo.ComputerNamePhysicalNetBIOS

    $Samples = [Math]::Ceiling($DurationSec/5)
    $output = New-Object System.Object

    #Named instances use a different counter path than the default instance
    if($smo.InstanceName.Length -gt 0){
        $Counters = @('\MSSQL$' + $smo.InstanceName + ':SQL Statistics\Batch Requests/sec')
    }
    else{
        $Counters = @('\SQLServer:SQL Statistics\Batch Requests/sec')
    }

    $Txns = Get-Counter -ComputerName $ComputerName -Counter $Counters -SampleInterval 5 -MaxSamples $Samples
    $Summary = $Txns.CounterSamples | Measure-Object -Property CookedValue -Minimum -Maximum -Average

    $output | Add-Member -type NoteProperty -name InstanceName -Value $smo.DomainInstanceName
    $output | Add-Member -type NoteProperty -name AvgTxnPerSecond -Value $Summary.Average
    $output | Add-Member -type NoteProperty -name MinTxnPerSecond -Value $Summary.Minimum
    $output | Add-Member -type NoteProperty -name MaxTxnPerSecond -Value $Summary.Maximum
    return $output
}

Get-SQLTxnCount -InstanceName 'PICARD' -DurationSec 30

This function expands on the simple Get-Counter call in a couple ways.  First, I make sure I’ve narrowed the counters to only the specific named instance I want to measure. The second change is to the duration of the test and the calculation of how many five-second samples will be collected over that duration. Finally, I create a custom output with the Measure-Object cmdlet to show the minimum, maximum, and average Batch Requests/sec for my samples.  This all comes together as a simple transactional measurement for my instance.

This isn’t a replacement for detailed benchmarks and analysis. For this, I recommend some of what Laerte Junior(@LaerteSQLDBA) has written. He has a much more detailed overview of Get-Counter and using it with SQL Server that I would highly recommend (P.S. His article gave me my start on this function).  Benchmarks are extremely important and necessary to make the information coming out of my function useful.

Just like a carpenter has his square or tape measure, this function lets me measure my SQL Server with just a few keystrokes. I still need the full context of what’s happening on my instance, but at least I’ll know the dimensions without a lot of effort or additional work. I challenge you to start thinking about your own Powershell in the same way. Don’t think of simply creating scripts, consider how you can take your work and build out your own squares, calipers, or other tools to improve your skills as a craftsman.

Building Availability Groups with #Powershell

A couple weeks ago, I went over automating your SQL Server builds. That’s the kind of process that will benefit many SQL Server administrators. But why stop there?  I say that the more we can automate, the better.  So now let’s talk about automating another aspect of building SQL Servers: availability groups.

Availability groups were the killer feature in SQL 2012 and saw more enhancements with SQL 2014. In my opinion, this is a key feature that allows SQL Server to step into the horizontally scalable world. The problem is that it is not an easy feature to set up or configure. Availability groups have a lot of pitfalls and gotchas to navigate. This is where automation comes in as our ally to build a consistent, repeatable process to deal with these hurdles. Remember, automation is primarily about consistency and we can achieve that by scripting out as much of our build process as possible.

As before, let’s talk about the process we’re going to automate:

  1. Build a Windows failover cluster consisting of two nodes.
    1. The two nodes will already have SQL Server installed.
    2. We will add a file share witness into the cluster (because an even number of quorum votes is bad).
    3. The cluster will be simple, so we will skip things like heartbeat networks and other more advanced configurations.
  2. Build an availability group across both nodes of the cluster.
    1. Availability group will use standard endpoints
    2. Both nodes will be configured for synchronous commit and automatic failover
    3. Create a listener with a static IP address.

Building the Cluster

To ease the scripting process, Microsoft provides several cmdlets in the Failover Cluster module.  Using these, we can create our cluster.

Import-Module FailoverClusters

New-Cluster -Name 'NC1701' -StaticAddress '192.168.10.100' -NoStorage -Node @('PICARD','RIKER')| Set-ClusterQuorum -FileShareWitness '\\hikarudc\qwitness'

As you can see, the cluster creation is pretty easy. Keep in mind this is an extremely simple cluster with none of the usual items we include in production. While you can use this process for your production environment, make sure you work with a clustering expert to define all the additional configurations you will need to build in to make your cluster stable. I also want to call out that I created the fileshare for the witness outside of this script as well.

Before we proceed, I want to validate the cluster.  As with everything else we will use Powershell:

Test-Cluster -Name 'NC1701'

Firewall Ports

An easy thing to overlook is the firewall ports for availability group communication. I know, I forget it all the time! We need to open up 1433 for SQL Server and listener communication and port 5022 for the availability group endpoints.  Some quick Powershell resolves this for us:

Invoke-Command -ComputerName RIKER -ScriptBlock {New-NetFirewallRule -DisplayName 'SQL Server' -LocalPort 1433 -Direction Inbound -Protocol TCP}
Invoke-Command -ComputerName PICARD -ScriptBlock {New-NetFirewallRule -DisplayName 'SQL Server' -LocalPort 1433 -Direction Inbound -Protocol TCP}

Invoke-Command -ComputerName RIKER -ScriptBlock {New-NetFirewallRule -DisplayName 'AG Endpoint' -LocalPort 5022 -Direction Inbound -Protocol TCP}
Invoke-Command -ComputerName PICARD -ScriptBlock {New-NetFirewallRule -DisplayName 'AG Endpoint' -LocalPort 5022 -Direction Inbound -Protocol TCP}

Building the Availability Group

Microsoft provides some cmdlets for building availability groups in the SQLPS module. The process is fairly straightforward, since we’re only declaring two synchronous nodes with automatic failover. If we were to have additional nodes, we would need to put in additional logic for determining which nodes would perform which roles.

Import-Module SQLPS -DisableNameChecking
$AGName = 'ENTERPRISE'
$PrimaryNode = 'PICARD'
$IP = '192.168.10.101/255.255.255.0'
$replicas = @()

$cname = (Get-Cluster -name $PrimaryNode).name
$nodes = (get-clusternode -Cluster $cname).name

$sqlperms = @"
use [master];
GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM];
GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM];
GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM];
"@

foreach($node in $nodes){
    Enable-SqlAlwaysOn -Path "SQLSERVER:\SQL\$node\DEFAULT" -Force
    Invoke-Sqlcmd -ServerInstance $node -Database master -Query $sqlperms
    $replicas += New-SqlAvailabilityReplica -Name $node -EndpointUrl "TCP://$($node):5022" -AvailabilityMode 'SynchronousCommit' -FailoverMode 'Automatic' -AsTemplate -Version 12
}

New-SqlAvailabilityGroup -Name $AGName -Path "SQLSERVER:\SQL\$PrimaryNode\DEFAULT" -AvailabilityReplica $replicas

$nodes | Where-Object {$_ -ne $PrimaryNode} | ForEach-Object {Join-SqlAvailabilityGroup -path "SQLSERVER:\SQL\$_\DEFAULT" -Name $AGName}

New-SqlAvailabilityGroupListener -Name $AGName -staticIP $IP -Port 1433 -Path "SQLSERVER:\Sql\$PrimaryNode\DEFAULT\AvailabilityGroups\$AGName"

There are a couple cmdlets we’re using to make this all work:

  • Enable-SQLAlwaysOn enables Always On with the SQL Server service. This must be done before we can create our AGs.
  • New-SqlAvailabilityReplica creates a SMO object that represents our node for the availability group. We signify that we are creating SMO objects by using the -AsTemplate switch. Without that switch, the cmdlet would try creating the actual replica, which we can’t do since the availability group doesn’t actually exist at this point.
  • New-SqlAvailabilityGroup requires two items: the primary node the availability group will be created on (declared as a SQL provider path) and the collection of replica nodes.  Running the cmdlet will create the availability group on the primary using those replica definitions.
  • Join-SqlAvailabilityGroup is used to join the replicas to the availability group. Weirdly enough, when you create the availability group, it won’t join the other replicas, so we have to take an additional step to join the non-primary nodes.

You’ll note we wrap the replica process in a foreach loop built on the Get-ClusterNode output. I use this approach so that, no matter how many nodes my cluster contains, I can add them all without explicit calls to the node names. The loop will add the SMO objects to a collection that will eventually represent all the nodes for the availability group (in this case, only two).

A gotcha to call out is the SQL statement I execute on each node. When working on this, I kept encountering error 41131. It surprises me, because I believe these permissions should be granted when you enable Always On for the service. However, most of the time this does not happen, so I’ve included the SQL statement to guarantee that my permissions are correct for creating the availability group.

Once we’ve created the availability group, we need to create the listener. To do this, we need the listener name, the port we will use, the static IP that will be assigned to the listener name, and the availability group (declared as the provider path). To ease the implementation, I use the availability group name as the listener name and port 1433. This actually takes the longest of the steps, because the cmdlet has to validate the IP isn’t already in use, but as long as the address is available your listener will be created and your availability group built.

 At this point, you should have a functional availability group and can add databases to it.
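
As a rough sketch of that last step (the database name is illustrative, and it assumes the database has already been backed up and restored WITH NORECOVERY on the secondary):

#Add the database on the primary, then join it on the secondary replica
Add-SqlAvailabilityDatabase -Path "SQLSERVER:\SQL\$PrimaryNode\DEFAULT\AvailabilityGroups\$AGName" -Database 'DemoDB'
Add-SqlAvailabilityDatabase -Path "SQLSERVER:\SQL\RIKER\DEFAULT\AvailabilityGroups\$AGName" -Database 'DemoDB'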

Another approach

The interesting thing about these cmdlets is they all work using the SMO and related objects. That means that if you want a more controlled approach where you get down and dirty with the code, you can go that route. Thomas Stringer(@sqllife) has a great post on this method and I actually was going down that route originally.

The question I had to ask myself was if it was worth re-inventing the wheel for my availability group builds. I’m not a big fan of how the SQLPS cmdlets rely on the SQL Server provider. Additionally, if you build your script using Thomas’ method, your script will work anywhere you have .Net and doesn’t rely on you installing the SQL Server client tools to get SQLPS. However, using the SMO/.Net method means I have to do a lot more work to get my script running, where the SQLPS cmdlets have taken care of most of that work for me.

Which approach should you use? That’s for you to decide, but I encourage you to look at both methods and evaluate which approach works better for you. When all was said and done, I went with the SQLPS cmdlets because it meant I didn’t have to spend a lot of time writing additional code.

Just as with our SQL Server build process, we’re building consistency and repeatability into our environment. The added benefit is, using this script, I can create my availability group in about a minute. While I may not be creating availability groups often, my script ensures that when I do create them I can maintain my build standards and keep my systems aligned.

 

Validation and Inspiration

I’d like to take a brief break from the technical posts to talk a little bit about community. As I write this, I’m currently heading back to Denver from SQL Saturday Phoenix. As with other SQL Saturdays I have attended, this was a fun event with lots of great learning and camaraderie with my fellow SQL geeks. This is a bit of a love letter to those geeks, but I wanted share with you some of the impact this event had on me.

Validation

As with everything else I’ve done in 2015, my presentation at this event was Powershell related.  I gave a presentation on Powershell Tips and Tricks for SQL Server DBAs for the third time this year and was pretty pleased with my execution. What blew me away was the reception from the audience. I had a ton of positive feedback and comments and could tell the attendees really appreciated what I shared with them.

Why am I telling you this? Because I want to convey to you why you should present and the benefits of it. It’s more than just having your ego stroked or getting that pat on the back (though those don’t hurt). When you share your knowledge, you have an immediate and profound impact on other people’s careers. Each and every one of us has something that others can benefit from.  We need to share it. To know that I showed my audience a better way to do their jobs and help them step up to another level is extremely gratifying.

Much is made of technical presentations being used as a vehicle for advancing your career. They are also a vehicle to advance the careers of your peer group. The great thing is the more we help each other, the more we help ourselves and make our skills and abilities stronger. I could see this in the gratitude of my audience and the feedback they gave me from my session.

Inspiration

What I like most about technical conferences is not just the education and the sessions. These are valuable for both the presenters and attendees, but the true value is gained in the conversations that happen around the event. This is why it’s so important to make time to talk with the other people at these events, to chat with speakers, and to avail yourself of the social aspects. You’ll find inspiration for solving problems at work, developing new strategies for your current position, or defining the next moves in your career.

I had several such conversations while I was at the event. Coming away from this SQL Saturday, I was able to help some of my peers with strategies and ideas for their blogs, their presentations, and their jobs. Beyond that, though, I was inspired for other things I could do to both improve myself and my career.

One example was a conversation with Amy Herold(@texasamy), where we talked a lot about Powershell and automation. She gave me a few ideas that I could further develop and we talked about some projects we could collaborate on. I’ve got some exciting ideas that I hope to work with Amy on over the next few months that will help both of us grow.

It’s hard to have these kind of conversations during our day-to-day jobs. Since we’re usually only one of a couple people (or maybe the lone gunman) doing data work in our jobs, it’s difficult to bounce ideas off of others and get that inspiration. You’d be amazed at what kind of thoughts you will get when you have really smart people to talk with.

Satisfaction

The reason SQL Saturdays are such great events is that they allow the greater SQL community to share, connect, and learn with one another. If you haven’t been to a SQL Saturday, I strongly encourage you to go. I know it’s sometimes tough, being on a weekend when some of us would rather be getting along with our non-database lives.  I want you to think about what you could do with your career, though, if you gave up that one weekend. Where could you go if you could have that kind of free learning? Most of all, how much better will you be by plugging in to the SQL community and feeding off the energy and knowledge you can find at these events? Building your career is more than just learning, it’s collaborating and sharing. SQL Saturdays give you all of this in spades.

 

#Powershell and Automating SQL Server Builds-Part 3

Hopefully you’ve seen over the last two posts some basic techniques for automating and codifying your SQL Server builds.  There are just two more items I want to cover here, questions you’re probably asking yourself already.

The build script

Let’s start with reviewing the whole build script to this point. While I wouldn’t recommend running this in production yourself, the finalized version looks something like this:

#Run this from the install binary location
[Reflection.Assembly]::LoadWithPartialName("System.Web")
$SAPassword = [System.Web.Security.Membership]::GeneratePassword(16,4)
.\setup.exe /CONFIGURATIONFILE=<config file location> /SAPASSWORD=$SAPassword /IACCEPTSQLSERVERLICENSETERMS

#Configure the OS
New-Item -ItemType Directory G:\MSSQL\Data
New-Item -ItemType Directory H:\MSSQL\Logs
New-Item -ItemType Directory I:\MSSQL\TempDB

#Configure Instant File Initialization
$svcaccount = 'SDF\sqlsvc'
secedit /export /areas USER_RIGHTS /cfg C:\templocalsec.cfg
$privline = Get-Content C:\templocalsec.cfg | Select-String 'SeManageVolumePrivilege'
(Get-Content C:\templocalsec.cfg).Replace($privline,"$privline,$svcaccount") | Out-File C:\newlocalsec.cfg
secedit /configure /db secedit.sdb /cfg C:\newlocalsec.cfg

#Open the firewall for 1433
New-NetFirewallRule -DisplayName "Allow SQL Server" -Direction Inbound -LocalPort 1433 -Protocol TCP -Action Allow

#Set Server configurations
$smosrv = new-object ('Microsoft.SqlServer.Management.Smo.Server') localhost
$smosrv.Configuration.MaxServerMemory.ConfigValue = 4000
$smosrv.Configuration.MinServerMemory.ConfigValue = 2000
$smosrv.Configuration.MaxDegreeOfParallelism.ConfigValue = 4
$smosrv.Configuration.OptimizeAdhocWorkloads.ConfigValue = 1
$smosrv.DefaultFile = 'G:\MSSQL\Data'
$smosrv.DefaultLog = 'H:\MSSQL\Logs'
$smosrv.Alter()

#disable sa
$smosrv.Logins['sa'].Disable()

Next Steps

After this, we should consider deploying maintenance jobs, restoring databases, or building out things like availability groups. While I don’t cover these here, keep in mind these tasks can be scripted out as well.  I encourage you to consider how you could do this.  For example, if you leverage either Ola Hallengren’s scripts or the new Minion Reindex from the Midnight DBAs(@MidnightDBA), you have another piece you can automate. The key is that you have a repeatable process.

At Xero we have all our maintenance jobs as part of our administrative database deployment. This is handled through SQL Server Data Tools and .dacpacs. We achieve consistency and deployment speed by managing that solution and deploying the administrative database project. By maintaining the database in source control we manage our tool set, keeping it standardized.  When we add or update it, we can apply the changes out to our environment. This also means when we build a new server, we have a standardized way to install our admin tools with a minimum of muss and fuss.
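
The deployment itself can be driven from the command line. Here is a rough sketch of what that step can look like (the SqlPackage location, paths, and names are illustrative):

#Publish the administrative database project's dacpac to the new instance
& "C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe" /Action:Publish "/SourceFile:C:\Deploy\AdminDB.dacpac" /TargetServerName:localhost /TargetDatabaseName:AdminDB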

The Point

Automation is more about consistency than speed, but speed is a nice side benefit. By assembling these components into a single script, we have a repeatable build process for any SQL Server in our environment. Our instances will be built the same way, every time, so long as we aren’t changing our script. The bonus is, since it’s all scripted, there’s no fumbling with wizards, dialog boxes, and making sure we type in the right values, so it all just happens. And in a matter of minutes. At Xero, I can take a server from nothing to ready for databases in about 20 minutes using these techniques.

Now, we could go pretty crazy with how we build our automation script. In fact, there are some pretty cool tools out there that will help you with this. I’ll let you do that on your own. Just make sure that, before you do, you know all the steps needed to build your SQL Server. It’s fine and dandy to be able to script things out, but without a plan or process to automate first, most of the scripts you could write won’t do you much good because you can’t use them again.

P.S. HUGE thanks out to Melody Zacharias(@SQLMelody) for helping me with this series of posts.

#Powershell and Automating SQL Server Builds-Part 2

When last we left our heroes, we were discussing how best to go about building SQL Servers quickly and consistently using Powershell.  The thing is, we hadn’t quite gotten to the Powershell part yet and only really covered the build process plus installing SQL Server.  Let’s change that and move on to the next steps.

Configure the OS

It’s time to flex Powershell. Because Powershell works directly with the OS, it makes it really easy to perform tasks that SQL Server isn’t so good at. Let’s start with creating three directories for our database files:

New-Item -ItemType Directory G:\MSSQL\Data
New-Item -ItemType Directory H:\MSSQL\Logs
New-Item -ItemType Directory I:\MSSQL\TempDB

This assumes that our sysadmins have built the box with those LUNs. There are, of course, ways to script that out as well, but we’ll skip over that for the sake of brevity.
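
If you do want to script the disk prep too, a rough sketch using the Windows Storage cmdlets (Server 2012 and up; the drive letter, allocation unit size, and label here are illustrative) looks something like this:

#Take the first raw disk, bring it online as GPT, and format it with a 64KB allocation unit
Get-Disk | Where-Object {$_.PartitionStyle -eq 'RAW'} | Select-Object -First 1 |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter G -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel 'SQLData' -Confirm:$false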

Enabling Instant File Initialization is a little tricky, because editing the local security policy is not something Powershell can do directly. Insert a harumph here. Anyway, we can do some tricks using the secedit tool. This technique, borrowed from Kyle Neier (@Kyle_Neier), basically dumps out the local security policy to a text file, updates it, then re-imports it to the local security policy. Kyle provides a full function to use, but you can boil it down to the following script:

$svcaccount = 'SDF\sqlsvc'
secedit /export /areas USER_RIGHTS /cfg C:\templocalsec.cfg
$privline = Get-Content C:\templocalsec.cfg | Select-String 'SeManageVolumePrivilege'
(Get-Content C:\templocalsec.cfg).Replace($privline,"$privline,$svcaccount") | Out-File C:\newlocalsec.cfg
secedit /configure /db secedit.sdb /cfg C:\newlocalsec.cfg

As a quick aside, you might want to consider working with your sysadmins to actually set this permission in your domain Group Policy. Means you’ll get it without having to configure it yourself.

Finally, we want to make sure we open the firewall for port 1433:

New-NetFirewallRule -DisplayName "Allow SQL Server" -Direction Inbound -LocalPort 1433 -Protocol TCP -Action Allow

Configuring SQL Server

Now with the OS configured, it’s time to configure our SQL Server. For this, we’re going to leverage our friend and pal, the .Net SMO. As with many things in the SMO, the beauty of it is that we get a consistent API to interface with our SQL Server. For our purposes, we want to look at the Configurations collection of the Microsoft.SqlServer.Management.Smo.Server object. In here we can find everything you’d find in the sys.configurations view through T-SQL. Now let’s configure our max and min memory, max degree of parallelism, and optimize for ad hoc queries.

$smosrv = new-object ('Microsoft.SqlServer.Management.Smo.Server') localhost
$smosrv.Configuration.MaxServerMemory.ConfigValue = 4000
$smosrv.Configuration.MinServerMemory.ConfigValue = 2000
$smosrv.Configuration.MaxDegreeOfParallelism.ConfigValue = 4
$smosrv.Configuration.OptimizeAdhocWorkloads.ConfigValue = 1
$smosrv.Alter()

Simple enough, right? You can get pretty fancy and add all sorts of logic to calculate these values, giving you a flexible setup script.
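
For instance, one rough way to derive max server memory from the box’s physical RAM (the 4GB OS reservation here is just an illustrative rule of thumb):

#Leave roughly 4GB of physical memory for the OS, with a floor of 1GB for SQL Server
$totalMB = [int]((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1MB)
$smosrv.Configuration.MaxServerMemory.ConfigValue = [Math]::Max($totalMB - 4096, 1024)
$smosrv.Alter()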

You could just as easily write these as a T-SQL script for a series of sp_configure executions to set your values. I’ve done it and know plenty of people who do. There are two reasons I’m moving away from this. The first is that some of my configurations, such as the default database directories, need some specialized techniques (usually involving xp_regwrite or something similar) to implement in T-SQL. I don’t like that approach because the extended stored procedures are usually unsupported and can have lots of issues. Powershell is much more graceful and (the second reason), using it across the board gives me a common interface for my configuration management.

In fact, let’s look at the default database directory settings. As stated, if you want to do this programmatically in T-SQL, you are going to need to use xp_regwrite to update the registry keys. Using the SMO is much, much easier:

$smosrv.DefaultFile = 'G:\MSSQL\Data'
$smosrv.DefaultLog = 'H:\MSSQL\Logs'
$smosrv.Alter()

Piece of cake, right? One last thing to do now, and that is disable the sa account:

$smosrv.Logins['sa'].Disable()

And we’re done!

To be concluded…

This builds the core of our server, but there’s a couple more odds and ends to discuss.  We’ll wrap up tomorrow with a few things, like how to handle maintenance jobs and other administrative tools.  We’ll also talk a little bit about the “why” of all of this.  Stay tuned for the thrilling conclusion!

(Well, I think it’s thrilling.)