
Highways and Railroads

Stop me if you’ve heard this one….

“We don’t have time for process, it just slows us down.”

In the buzz of today’s world, you’ve probably had this thrown at you at least once. Process has supposedly become anathema to productivity. Everyone from developers up to CTOs seems to think that you need to get rid of process in order to turn out timely software.

Well, they’re wrong.

I understand the reasons behind it. As someone who’s had to fill out change request forms to alter a server’s MAXDOP setting, process can be tedious and troublesome. It can be a shackle that mires work in the swamp of forms and sign offs. When you’re living in a world of “get sh*t done”, process can be a thorn in everyone’s side.

There’s another side of the coin, though. Process, done right, will ensure that work is done the same way every time. It can provide protection against costly mistakes, unexpected outages, and other business pains that put everyone under the gun. Also, if you build your process right, it won’t be a bottleneck to your work but a guide rail to keep you on the right track.

Driving vs. Riding

The analogy I like to use with folks when I explain the need for process is by comparing highways and railroads. Think about how you drive for a moment. When you’re on the highway, with a bunch of other people trying to get to the same place, you can only go so fast. You’re going to be limited to how many other cars are around you, so more traffic means slower speeds. Of course, then you get that one person who starts trying to move faster than is safe in traffic, usually causing an accident.

This is the lack of process at play. Different teams or developers have their own car and are trying to get to the release. However, everyone else is doing the same and your highway only has so much bandwidth. Sure, we have a road, so some path is defined, but there’s nothing to govern your interactions with the other cars other than general momentum and the number of resources you have to make the deploy work. You could make the road wider by getting more resources, but that’s not scalable.

Contrast this with a railroad track. It’s straight, narrow, and trains can fly along this because all the roadblocks are out of the way. You can link multiple cars together on a train to get everyone to the same place at the same time, which is a lot faster than using the highway. Even if the train moves a little slower than a car, it will still get there ahead because there isn’t anything blocking the track. You also get there safer because there isn’t any competing traffic to get in the way and risk an accident.

Building railroads is the goal of implementing process. You want to build something that is direct and well defined. What we lose in freedom pays off in speed. We’re building bullet trains that can deliver rapid deployments through consistent, repeatable action. This is achieved by  removing blocks, reducing stops, and streamlining the process we use to get things done. Process should not be a throttle, but a track, giving us a clear method of getting things done faster.

I’ve Been Working On The Railroad

How do we go about building this Nirvana? It takes cooperation between both development and operations teams to build this right, as both are invested in the outcome. Developers want to deliver new code to market faster and operations wants that code to be safe, stable, and not put the business at risk. Both sides must come together to collaborate on building the process. And make no mistake, this will be a building effort. There is no black voodoo magic in making this happen.

The first step is actually to write everything down. I always preach that the most basic form of automation is a check list. You can’t automate a process if you don’t know what it is. Start with a plan that everyone puts together for how software should be released and use that as your road map.

“We can’t take the time to stop doing everything just to put this together!!!” Yeah, I’ve heard that one too. No one is asking you to stop working just to switch to a new process. All I’m suggesting is, if you’ve got your highway, start building your railroad next to it. People will still be driving on the road.

You also might have people concerned about getting it right the first time. Let’s be honest here, you may not. Treat this plan like you would any other piece of work you have and develop it iteratively. Start somewhere and, as you build things out, fix the problems you discover. Your release process is just another piece of software, with features and bugs. No reason not to treat it in the same way.

Once you’ve got your plan, start following it. It might be inconvenient, but this is the sort of thing that takes discipline. It may seem tedious, but think of it as the start of a diet or exercise plan: it will be tough at first, but over time it will get easier and you’ll get better at it.

As you follow your plan, you will be able to introduce tooling and automation to speed up parts of your railroad track. Unit tests, automated builds, source control, incremental changes…these are all pieces of the process. Improve your process one piece at a time, making each new change so it can be evaluated and verified before moving on to the next step.

Build your process with the goal of removing traffic jams and slow downs. You want to lay down track to give guidance and consistency. You will find that you and those around you will achieve that mythical space of being faster, more reliable, and ultimately more agile with your software development. In short, you’ll run like you’re on rails.

All Aboard

Building this sort of process is at the heart of DevOps. DevOps isn’t a tool or a magic method; it is a deliberate culture of cooperation and discipline. The only way you will build your railroad is if you bring together development teams and operations engineers and work together to relieve your shared pain. That is the selling point: both sides share many of the same challenges, so it should be a no-brainer to come together and solve those issues.

If you’re looking for a real world example, check out Farm Credit Services of America. This article articulates, in a very real way, how one company moved into a DevOps culture and built their own railroad. While there’s a lot of information in the article, at the core it is about how development and operations collaborated to achieve something together.

I’ve seen similar work in other companies. It is usually painful growth, but the results far outweigh the pain. What is important is to not lose sight of the need for process. Getting rid of process, if anything, will hinder you just as much as following some archaic method merely because it is rote. Build your process with the goal of removing bottlenecks and establishing guide rails, and you and those around you will reach that same destination: faster, more reliable, and ultimately more agile software development.

(I want to thank Rie Irish(@IrishSQL) for proof reading and contributing to this blog article.)

We Are All Developers Now

Last year, I had a pretty intense conversation with a friend of mine at a SQL Saturday. It was one of those that started with the typical “grumble grumble grumble damn devs” statement. There was a time I would have echoed that with a hearty “harrumph harrumph”, but as I’ve progressed through my career, I’ve come to realize that the line between developers and DBAs has softened and blurred, particularly in the age of DevOps. What followed was a back and forth about the habits of DBAs and developers that led me to a phrase I’ve added to my lexicon: “We’re all developers now.”

I know, I know. What about the long-standing division between the righteous Operations folks (DBAs, sysadmins, network engineers, and their ilk) and the .Net, Java, Node, and other heathens of the Developer world? These “barbarians” assail the fortresses of Operations with hastily written code that is not properly tested, causing sleepless nights filled with pages that a weary admin must respond to and resolve. Why would anyone in their right mind associate with these irresponsible practices?

Borrowing From Your Neighbor

The first step to answering this question is to step back and actually look at what happens in the developer world and compare it to what we do in administration. It’s a common practice to borrow code and practices from our peers in the SQL world, so why not do the same with those who develop other types of code? Especially those who develop code for a living (hint: consider the recursiveness of this statement).

As an administrator, I created all sorts of administrative scripts. Obviously I’ve got a reputation for PowerShell, but I also have T-SQL scripts that I use as well. For a while I would hack these together as any other good DBA and store them on a USB/Dropbox/Google Drive/OneDrive. It was functional, but I only ever had the current state of those scripts and didn’t always know why I changed things. I could put in a header, but this was tedious and manual, an anathema to how I work.

Enter my time at Xero. I hadn’t really used source control before and the teams there used it heavily. I got a rapid introduction to GitHub from Kent Chenery(@kentchenery) and Hannah Gray(@lerevedetoiles). It didn’t take long for me to realize the benefits of source control, not just for databases, but for my own personal scripts. Now I maintain my own personal GitHub repo and not only have a central location where my code is stored, but it can be shared with others who can contribute and collaborate.
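
If you have never touched source control, the barrier to entry is lower than you might think. Here is a minimal sketch of putting an existing scripts folder under Git and pushing it to GitHub (the folder and repository URL are made up for illustration):

# Turn an existing scripts folder into a Git repository
cd C:\Scripts
git init
git add .
git commit -m "Initial commit of my admin scripts"

# Point it at an empty GitHub repository and push (URL is hypothetical)
git remote add origin https://github.com/myaccount/dba-scripts.git
git push -u origin master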

Code, Rinse, Repeat

After adopting source control, I began to look at other developer practices and habits. While one can debate the pros and cons of Agile development, one of the concepts I like is iterative development. As with other work we do, iterative development isn’t rocket science, but it took me a while to adopt it because of a natural fear admins have: production paranoia (aka “what will this break”).

Admins of all stripes are in constant fear of breaking production, and for good reason. We want every change to be right and as close to perfect as possible. However, most folks who develop iteratively realize that perfect is a road block. It is hard to anticipate all the factors. When you develop iteratively, you ship what you can and fix/fail fast once you deploy it.

I’ve adopted this approach for my own script/process development. Whether I’m publishing a script or deploying a server, I focus on delivering a product. I test aggressively, but I’m prepared for the event that something will fail. I focus on the feedback loop to test, evaluate, remediate, and deploy. As an aside, this feedback loop is often where application developers will fail, because they are often driving towards the next set of improvements or features. It’s incumbent on both sides of the fence to adopt and enforce the feedback loop.

It’s All Just Ones and Zeroes

I could go on about habits I’ve adopted, but the real question is “why are developer practices important to administrators?” As we move into a realm of automation and scripting (as any good admin will), we are doing more and more code development. Sure, we can click through GUIs to setup SQL Server or run a backup, but the more experienced folks out there will have scripts to accomplish these tasks. Network and system admins are deploying infrastructure to the cloud using CloudDeploy or ARM templates. We live in an age where almost everything can be codified.
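
To give a flavor of that codification, here is a minimal sketch of deploying infrastructure from an ARM template with the AzureRM cmdlets (the resource group name, location, and template paths are all hypothetical):

# Create a resource group and deploy a JSON ARM template into it
New-AzureRmResourceGroup -Name 'DemoRG' -Location 'West US'
New-AzureRmResourceGroupDeployment -ResourceGroupName 'DemoRG' `
-TemplateFile 'C:\Templates\sqlvm.json' `
-TemplateParameterFile 'C:\Templates\sqlvm.parameters.json'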

This codification means it is imperative that we focus on good habits for managing our code. It might be that we’re writing T-SQL code for SQL maintenance. Perhaps we’re writing shell scripts to execute code deploys or build a continuous integration pipeline. Suddenly we’re developers of a different stripe.

So what do you do about it? You probably haven’t implemented some of these habits and, likely, you might be a little mystified on how to get started. I suggest you start where I started: Go to a developer and talk to them. Maybe chat with a couple. Go to a local developer user group and see what they’re talking about. This is about learning, so find a mentor who can help you learn this topic.

And who knows? Maybe you can teach them a few things about your world while you’re at it.

Be Like Water

“You must be shapeless, formless, like water. When you pour water in a cup, it becomes the cup. When you pour water in a bottle, it becomes the bottle. When you pour water in a teapot, it becomes the teapot. Water can drip and it can crash. Become like water, my friend.”

Bruce Lee

 

If you look around the internet, you will come across this famous quote. Often cited by armchair philosophers (myself included), these simple sentences speak to being flexible and adaptable in your life and how being too rigid can limit us. These limits make it difficult to truly grow and improve ourselves, whether in our professional or personal lives.

We can break this quote down in many ways, but I’d like to focus on how it impacts technology professionals. One thing I love about working in technology is how often things change. The joy comes from the constant learning we must do to keep pace. If we’re not pushing ourselves to explore and discover, we risk falling behind and losing our way.

Empty Your Mind

If we look at the IT career path through the lens of Lee’s quote, the idea of flexibility becomes obvious. Through my time in the IT industry, I’ve seen the rise of the Internet, the cloud, concepts like Agile development and DevOps, and other changes that range between large and subtle. If I’ve learned anything, it is that defining myself as a DBA, sysadmin, or developer limits what I am willing to do and narrows the opportunities available to me.

When someone asks you to take on a new task, how do you approach it? Do you look at it through the lens of your job role? Or, instead, consider how it relates to your career path? Do you determine what you want to work on based on what you know and are comfortable with, or do you let the opportunities shape your growth?

As technology professionals, it’s important not to limit ourselves. Today’s experiment might be tomorrow’s trend. A hobby we tinker with could easily become our driving passion. We need to be receptive to what’s around us, be formless and fill the demands that are presented to us. Do not close yourself off to challenges because you haven’t done them, but be prepared to know what it takes to fill those vessels.

H Two O

This probably sounds daunting because there’s so much going on in the technology world. Should we be able to do anything? Be ready to build networks, write applications, design databases, and so on? How do we keep up with the overwhelming number of disciplines and developing technologies in our world?

I’ve long held that the “full stack developer” is a myth. We live in a world of specialization and trying to have deep knowledge of all disciplines is impossible. Sure, we can understand something of everything (being a jack of all trades, master of none), but it becomes difficult to bring the full weight of our expertise to bear if we only know a little bit of a lot of things.

At some point in our careers we need to understand what makes up our “water”. This brings me to another long-held view of mine: technologies change, but concepts remain the same. The example I like to use is relational databases. Over the years we’ve had platforms like Microsoft SQL Server, Oracle, MySQL, PostgreSQL…the list goes on and on. Each of these platforms introduces new features every few years, trying to one-up each other.

At the core, though, each of these platforms is a relational database. The relational model was codified by E.F. Codd back in 1969. Think about that for a moment. While Microsoft SQL Server is adding new features every few years, the foundational concepts that we all work with are forty-seven years old. Almost half a century of a technological principle that we build our careers on.

I don’t consider myself a SQL Server professional. My skill set is not limited to just SQL Server as a platform, but relational database design and data management. I am strongest with Microsoft’s offering, but I can adapt to another platform because I understand and own the foundation. The core concepts are what I build upon, which is what makes me stronger and more prepared.

Be Like Water

Becoming like water is more than just being adaptable, it’s about understanding what defines you. Job roles and requirements are just vessels we fill with the knowledge and experience we have. If we choose to be like water, we remain fluid and can, at once, fill the needs of a role as well as be more than that.

Be the sum of the things you have learned, not the tasks you can accomplish. My challenge to you for the new year is to build a deeper, more fulfilling career. Flow and adapt, learn and absorb. Empty your mind and be open to the numerous possibilities in front of you.

Be like water, my friend.

T-SQL Tuesday #84: Getting Ready for your Presentation

I’ve been super busy lately, but I wanted to at least post something for this month’s T-SQL Tuesday. The topic is about encouraging new speakers, something I’m very passionate about. I think that speaking is one of the best things you can do to boost your career. If you are reading this and are considering speaking, I encourage you to reach out to your local user group and see if you can get started with a 15 or 30 minute session. Take a shot, I’ll bet you’ll be surprised.

What I want to share are some tips for the day you give your first presentation. A lot of folks are going to talk to you about building and preparing your presentation, but that is only half the battle. What you should do when you actually GIVE the presentation is often glossed over, even though this is the most high pressure moment of the whole cycle.

How do we reduce or relieve some of this pressure? Well, let’s start with a list of things that could possibly go wrong when you present. Think about the following list:

  • You’re presenting and you get an on-call page.
  • Your demo blows up spectacularly.
  • While giving your presentation, your computer attempts to apply updates.
  • You start 10 minutes late because you have issues with your video or sound.
  • During your presentation, someone sends you a picture on your favorite IM client.


Any of these will easily throw an experienced presenter off their game. For a new speaker, it can spell disaster. I’ve got a routine that I go through on the day of my presentation, which is designed to reduce that risk of disaster. And who doesn’t like reduced risk?

Getting Ready

Step 1: At the beginning of the day, well before my presentation, I make sure my presentation machine has been updated with Windows and other corporate software. This is SUPER important if it’s a Tuesday (when Microsoft releases updates). Doing this avoids any update surprises while I’m presenting or right before I go on stage.

Step 2: A couple of hours or so before my presentation, I will walk through my presentation. I open up the PowerPoint slide deck and step through it. When I get to demos, I will walk through my demo scripts. I test EVERYTHING, and do it in order. If I encounter an error, I fix it and then start over. This helps me ensure that the flow works and that I understand what the step dependencies are in my demo.

Step 3: About an hour before my presentation, I will turn off everything on my presentation machine unnecessary to the presentation. Programs like Skype, Google, unneeded local SQL instances, Virtual Machines….so on and so forth. I only want what I need running to make sure that I have enough resources for my demos, along with keeping possible distractions shut down.
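
If you want to script even this step, a rough sketch might look like the following (the process and service names are examples, so adjust them for whatever is actually running on your machine):

# Close chatty applications so they can't pop up notifications mid-demo
'Skype','Slack','Outlook' | ForEach-Object { Stop-Process -Name $_ -ErrorAction SilentlyContinue }

# Stop a local SQL Server instance that isn't needed for the demos (instance name is hypothetical)
Stop-Service -Name 'MSSQL$SANDBOX' -Force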

Step 4: At least 15 minutes before I’m due to present, I go to my room and hook up my presentation machine. I test the video and make sure my adapter works. This way I can address any tech issues that could hamper the presentation. I will display PowerPoint and also my scripts and demos to make sure everything looks ok.

I also usually duplicate my screen to the projector. This is important because if I extend instead, the only way (typically) to see what the audience sees is to look back at the projector, which is distracting for your audience. If you duplicate, you only have to look down at your screen, which maintains contact with the audience.

Step 5: Right before I present, I turn my phone OFF. Then I put it in my bag. I get it away from me. I don’t want to get calls, I don’t want to have to worry about silencing it, and I don’t want it buzzing in my pocket if I’ve got a lot of notifications. The phone is off and away.

It’s GO time

At this point, I’m free and clear to do my presentation. Does that mean that nothing will go wrong? Of course not. However, performing these steps puts me in the best position to give my presentation without disruption. It is a foundation for success. Just like we build out our database solutions to minimize the option of failure, we need to approach our presentations with a similar sort of care to help guarantee our own success.

I want to thank Andy Yun(@sqlbek) for hosting this month’s T-SQL Tuesday. It’s a great topic and I hope folks get a lot out of it. If you’re considering stepping into the speaking world (and you should!), this month’s blog posts should give you all the tools to succeed. Good luck!

Azure SQL Databases with PowerShell: Moving To The Cloud

It’s been a while, but I want to wrap up my series on Azure SQL Databases and Powershell. In my last post, I talked about how you can make point in time backups of Azure SQL Databases using Start-AzureSqlDatabaseExport. This process allows you to make a backup of your Azure SQL Database and store it as an Azure blob. You can then restore these backups using Start-AzureSqlDatabaseImport. It’s all very handy to manage your data within the Azure environment.

Now let’s talk about what I alluded to at the end of that post: using this process to migrate data from your traditional SQL Servers to Azure SQL databases. As we have previously discussed, Azure SQL Database does not support traditional backups as we are used to with regular SQL Server. While there’s a debate on whether or not these should be supported, the reality is that we need another process to handle this. This is where the export/import process comes in.

BACPACing

SQL Server Data Tools (SSDT) has always had a process to extract your database. There are two types of extracts you can perform:

  • DACPAC – A binary file that contains the logical database schema and possibly the data. This file retains the platform version of the database (i.e. 2012, 2014, 2016).
  • BACPAC – A binary file that contains the logical database schema and the data as insert statements. This stores the platform version, but is not locked into it.

The DACPAC extract is useful for managing your database code. These objects can be source controlled and, using SSDT, can be used to compare to a current database state and deploy changes. I’m a big fan of using DACPACs to manage my database code and maintain code consistency. I don’t use the data extract portion, but it’s there if you need it (more on this in a later blog post).

BACPACs are a little more limited. They contain the schema and your data in a logical state, as INSERT statements that populate your database objects. The advantage is you get a flexible package that can deploy a database and its data to almost any SQL Server platform (with some caveats, which we will get to). The disadvantages are that BACPACs are not as efficient as native backups and are not transactionally consistent.

What this means is that BACPACs are not really suited for day to day backups of your database. They are intended to be a migration tool, especially if you are working with disparate versions of SQL Server. It is why they’re intended for moving databases to Azure SQL Database, because a BACPAC can move your database logically and you are not reliant on the underlying database architecture.

Which brings us to the caveat. While BACPACs allow us to move a database logically, they cannot account for unsupported features. We can export a database from our SQL Server, but if we’re using something not supported in Azure SQL Database (such as CLR or Service Broker), then this migration will not work. If you are planning this sort of migration, you first need to review your databases for these issues.
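
There are dedicated tools for this kind of review, but a quick first pass for a couple of the common blockers might look like this (a sketch only, not a complete compatibility check):

# Look for user CLR assemblies and Service Broker usage, two common Azure SQL Database blockers
$checkSql = "SELECT name AS AssemblyName FROM sys.assemblies WHERE is_user_defined = 1;
SELECT name AS DatabaseName, is_broker_enabled FROM sys.databases WHERE name = DB_NAME();"
Invoke-Sqlcmd -ServerInstance localhost -Database AdventureWorks2012 -Query $checkSql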

Exporting Our Database

Enough about that, let’s get exporting. There are several ways to migrate your database, but we’re going to focus on Powershell and those cmdlets. We can use sqlpackage.exe to create the BACPAC, which is pretty easy (this is really one command; I’m splitting it across lines with backticks so it’s easier to read):

SqlPackage.exe /action:Export `
/SourceServerName:localhost `
/SourceDatabaseName:AdventureWorks2012 `
/targetfile:'C:\temp\AW2012.bacpac'

Once we’ve got our .bacpac file, we need to get it up into Azure. For this we need a storage account; we will use the same storage container we used for the exports before. We will create a storage context for the storage account, then use Set-AzureStorageBlobContent to upload the .bacpac file.

$context = (Get-AzureRmStorageAccount -ResourceGroupName test -Name msftest).Context
Set-AzureStorageBlobContent -File C:\TEMP\AW2012.bacpac -Context $context -Container sqlexports

Liftoff to the Cloud

With our export loaded into Azure, we’re ready to import it into Azure SQL database. We need to first create a database to import the data into:

New-AzureRmSqlDatabase -ResourceGroupName 'IntroAzureSQL' -ServerName 'msf-sqldb' -Edition Basic -DatabaseName AWTEST

After this, it is just a matter of starting the import as we did previously. We need to create a connection context for our Azure SQL Database server, then start the import:

$cred = Get-Credential
$sqlctxt = New-AzureSqlDatabaseServerContext -ServerName msf-sqldb -Credential $cred
$request = Start-AzureSqlDatabaseImport -SqlConnectionContext $sqlctxt -StorageContext $context -StorageContainerName sqlexports -DatabaseName awtest -BlobName AW2012.bacpac

Notice I save the output of Start-AzureSqlDatabaseImport into a variable. This is an object representing the import request, which I can use to check on the import. Remember that calling the cmdlet starts an asynchronous process that will import the schema and data of the .bacpac into your newly created database. We’ll want to check on the process to identify when it completes.
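
If you want to keep an eye on it, you can poll the request with Get-AzureSqlDatabaseImportExportStatus. Here is a rough sketch; the RequestGuid property name is my assumption, so check the $request object with Get-Member if your module version exposes it differently:

# Check on the asynchronous import using the request captured above
Get-AzureSqlDatabaseImportExportStatus -ServerName msf-sqldb `
-Username $cred.UserName `
-Password $cred.GetNetworkCredential().Password `
-RequestId $request.RequestGuid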

Double Check Your Work

The one last thing to do is to verify the migration, which requires two checks: the schema and the data. For the schema, we will go back to sqlpackage.exe to perform a schema comparison. This is a two step process:

  1. Extract the source schema as a .dacpac
  2. Generate a deploy report targeting the Azure SQL database. If no changes are shown, the schemas match.

Extracting the schema as a .dacpac is a similar operation to extracting the schema as a .bacpac:

SqlPackage.exe /action:Extract `
/SourceServerName:localhost `
/SourceDatabaseName:AdventureWorks2012 `
/p:ExtractAllTableData=True `
/targetFile:C:\Temp\AW2012.dacpac

Generating a deploy report is a similar command, though we have a couple gotchas. First, we’re connecting to our Azure SQL database, so we need our login and password passed to the command. Second, .dacpacs store the database platform version and will halt if there’s a possible incompatibility. So we need to declare AllowIncompatiblePlatform=True in our sqlpackage call to ignore this. We’ll still get a flag that there could be incompatibilities, but it will at least run the comparison for us:

SqlPackage.exe /action:DeployReport `
/SourceFile:C:\Temp\AW2012.dacpac `
/targetServerName:msf-sqldb.database.windows.net `
/targetDatabaseName:AWTEST `
/targetUser:"$($cred.UserName)" `
/targetPassword:"$($cred.GetNetworkCredential().Password)" `
/p:AllowIncompatiblePlatform=True `
/OutputPath:C:\temp\AWMigration.xml

The resulting .xml file will tell us what changes the deployment would have to make to get everything lined up. Since the goal here is to verify that everything got deployed correctly, we want this report to show no changes at all.


If no changes are listed, there’s nothing to do and everything matches.

Checking your data can be a trickier proposition. For a basic “sniff test”, I just check the row counts of each table. For this, I have a query that checks sys.partitions for heaps and clustered indexes (index_id 0 or 1), and I compare the source counts to the Azure SQL database to look for any differences:

# Row counts per table, using heaps and clustered indexes (index_id 0 or 1)
$sql = "select
schema_name(schema_id) +'.'+ t.name as TableName
,p.Rows as Rows
from
sys.tables t
join (select * from sys.partitions where index_id in (0,1)) p on t.object_id = p.object_id
order by TableName"

# Collect counts from the source instance and the Azure SQL database
$local = Invoke-Sqlcmd -ServerInstance localhost -Database AdventureWorks2012 -Query $sql
$azure = Invoke-Sqlcmd -ServerInstance msf-sqldb.database.windows.net -Database AWTEST -Query $sql -Username $cred.UserName -Password $cred.GetNetworkCredential().Password

# Pair up the counts by table name ($results avoids colliding with the automatic $matches variable)
$results = @()
foreach($i in $local){
$results += New-Object psobject -Property @{'TableName'=$i.TableName;'LocalRows'=$i.Rows;'AzureRows'=($azure | Where-Object {$_.TableName -eq $i.TableName}).Rows}
}

# Any output here means the row counts do not match
$results | Where-Object {$_.LocalRows -ne $_.AzureRows}

Again, if everything worked correctly we shouldn’t get any output.

I should stress that these two checks do NOT replace thorough regression testing. These should give you a starting point to verify that your migration was successful, but you still will want to test your applications and processing.

The New Hotness

Now, while I would like for Microsoft to include some native cmdlets that accomplish these actions, it should be noted that sqlpackage.exe is a pretty solid tool. It can be used from the command line and wrapping PowerShell around it is still effective. The key is having something I can use from the command line so I can write repeatable scripts. This is important because Azure is driven by automation and, while we can use GUI tools and the portal, we won’t succeed long term in the cloud on these tools alone, so it’s important to understand all the options.

Creating Alerts for Azure SQL Database with Powershell

Last week, as I watched the Twitters, I saw an interesting blog post from Julie Koesmarno(@MsSQLGirl) about creating alerts for Azure SQL Database. These are cool because one of the many things I get asked about when talking about Azure SQL Database is “how do I monitor this darn thing?” With these alerts, you can monitor performance, capacity, and security settings, setting up either email notifications or webhook responses.

Now, me being me, I tweeted out that it would be cool to do this in PowerShell. I say this not just being a geek about the language, but also because if you want to live in the cloud you need to up your automation and scripting game. PowerShell is the way to manage your Azure resources, especially if you want to set up consistent alerts for all your Azure SQL Databases. Julie pointed me to some examples that, while not specific for Azure SQL Database, got me moving in the right direction. I’d like to share some of that learning here.

The cmdlets

To start, we need to know what cmdlets are at our disposal. You’re going to need the Azure and AzureRM modules first (I recommend installing them from the Powershell Gallery), and then we just need to use PowerShell’s command syntax to find what we’re looking for. Using Get-Command, we can do a wildcard search with ‘Azure’ and ‘Alert’ as the search terms:

Get-Command *Azure*Alert*


It’s not a long list, but we don’t need much to get our work done. It should be noted that these are generic and can be used across all Azure resources.

Because these cmdlets are generic, we have to identify the specific metrics we can monitor for Azure SQL Database. Fortunately, these are discoverable using Get-AzureRmMetricDefinition, which is not in the list above but is part of the AzureRM.Insights module (use Get-Command -Module AzureRM.Insights). To use it, we need to know our Azure resource ID, which we can derive from three pieces of information: the resource group, the server name, and the database name.

$ResourceGroup = 'IntroAzureSql'
$server = 'msf-sqldb'
$db = 'MSFADMIN'
$rid = (Get-AzureRmResource -ResourceGroupName $ResourceGroup -ResourceName "$server/$db").ResourceID
Get-AzureRmMetricDefinition -ResourceId $rid | Format-Table


Most of these Name values match what Julie listed in her post, but the nice thing is if we ever get new ones, we can look them up with this method.

Creating an Alert

So let’s get down to brass tacks and actually create an alert. To do this, we need some info first:

  • The Resource Group we will create the alert in.
  • An Azure location where the alert will live.
  • An Azure SQL Database server and database we are creating the alert for.
  • What metric we will monitor and what is the threshold we will be checking.
  • (optional) An email to send an alert to.

With this, here’s the settings I’m going with:

Resource Group – IntroAzureSql
Location – West US
Server – msf-sqldb
Database – MSFADMIN
Alert – dtu_consumption_percent greater than 90 (Flag if more than 90% DTU usage)

There are a couple other considerations for creating an alert. The first is a time window the alert will check against. This window can be anywhere from five minutes to a full day. The second is what aggregation of the metric we will check (i.e. total, average, maximum). You’ll want to use values and aggregations that make sense, but for the example alert, we’ll check the maximum value over a 5 minute window.

Once we have everything in place, we will call the New-AzureRmAlertRuleEmail cmdlet to create our email notification and Add-AzureRmMetricAlertRule cmdlet to create our alert:

$ResourceGroup = 'IntroAzureSql'
$location = 'West US'
$server = 'msf-sqldb'
$db = 'MSFADMIN'
$rid = (Get-AzureRmResource -ResourceGroupName $ResourceGroup -ResourceName "$server/$db").ResourceID
$email = New-AzureRmAlertRuleEmail -CustomEmails 'mfal@dummy.com' -SendToServiceOwners
Add-AzureRmMetricAlertRule -Name 'DTU90Check' `
-Location $location `
-ResourceGroup $ResourceGroup `
-TargetResourceId $rid `
-MetricName 'dtu_consumption_percent' `
-Operator GreaterThanOrEqual `
-Threshold 90 `
-WindowSize '00:05:00' `
-TimeAggregationOperator Maximum `
-Actions $email


And now we can look at our shiny new alert using either the portal or Get-AzureRmAlertRule.


Getting rid of an alert is easy, too; just call Remove-AzureRmAlertRule:

Remove-AzureRmAlertRule -ResourceGroup $ResourceGroup -Name 'DTU90Check'


Where To Now?

As you can see, managing alerts in PowerShell isn’t all that difficult. The question is “why”? Hopefully you can already see the value, especially if you want to standardize your Azure SQL Database environment. By having all your alerts scripted out, you can apply them in a consistent fashion. This can be part of a larger automation process that helps scale out your environment as necessary. Regardless of how you manage it, using PowerShell can significantly decrease your management time for your Azure environment.

Thanks again to Julie for giving me the inspiration to figure this out!

Stuck in SQLPS

A month and a half ago, with a lot of fanfare and maybe a little confetti, the SQL Server Tools Team released an update to SQL Server Management Studio that massively updated the SQL Server Powershell experience. As I blogged, this is a big deal and long overdue. I am extremely happy that this is finally getting some attention in Redmond.

However, once the dust settled and I had a chance to dig into things, I discovered that not all was rainbows and candy. The truth of the matter is that SQLPS is not so easily replaced. We are still going to be burdened with using the old module for at least the short term. This means at least being comfortable with the foibles of the old module.

Bundled With SSMS

The first real issue with the new SqlServer Powershell module is that it is part of the SQL Server Management Studio install. I’ve long lamented the fact that you can’t install the Powershell components separately. This means you’re limited to installing the module only in locations where you can install SSMS 2016. Microsoft probably thinks you should be installing SSMS 2016 EVERYWHERE, but even if you run on the bleeding edge this is a problem if you’re using Windows Server Core (no GUI, no SSMS).

I have not figured out any workaround to this. It’s possible to copy the module folder from a good install to somewhere else, but I’m nervous about that since something could be missed. This means that, until we can install this separately, users are stuck running SQLPS on servers without management studio. Yes, this means SQL 2016 without SSMS as well.

Not Recognized by SQL Agent

The second issue is that even if you do install SSMS 2016, SQL Agent won’t recognize and give you access to the new module if you use a PowerShell job step. When you create a PowerShell job step, the script in that job step runs within a specific context. It’s hidden from you, but whenever that script runs the first thing that happens is SQL Server launches sqlps.exe.

Sqlps.exe is a “mini-shell”, which configures a few things to support SQL Server and PowerShell together. The important piece here is that the executable is hard coded to use the SQLPS module. Now, sqlps.exe is deprecated and going away, but only in a future SQL Server release. This means if you’re using anything previous to that release (having no idea when the release will happen), you are going to be forced to use the SQLPS module if you are creating a Powershell script job step.

How can you get around this? This is a trick that’s been around for a while. I try to avoid it since it adds additional layers to running PowerShell in an agent job, but it does work. What you do is create a CmdExec agent job and call the Powershell executable to run your task. This approach gives you a LOT more control on how PowerShell is working in your environment, but takes some extra work. Hat tip: Derik Hammer (@sqlhammer)
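
The job step itself is not complicated. The CmdExec command text is just a call to the full PowerShell executable with your script (the path here is hypothetical):

powershell.exe -NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\MyAgentTask.ps1"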

Stuck In The Past

Like I said before, I’m excited and happy the SQL Server Tools Team is improving the PowerShell experience for SQL Server. It has been a long time coming and I hope it continues. This is why the above items are SO frustrating: while all these updates are being made, most of the user base won’t be able to use them. In a world where SQL Server installations still run on 2000 and 2005, it can take a long while to upgrade your core installation to take advantage of these improvements.

Normally, I’d be ok with this. I know that when I use an earlier version of SQL Server, I don’t get the shiny new toys. With SQLPS, though, it’s different. For all these years the experience has been klunky and problematic. While some of this klunky-ness finally got fixed back in March, most of us are still stuck with the workarounds because we do not have a way to install the updates.

What can you do about it? I’m glad you asked. Currently there are two Connect items that address these directly:

You can also make your voice heard on the community Trello board. As users, we’re at the mercy of Microsoft for updating this stuff and making it available to us. The goal is to raise awareness of these issues and let the development teams know how important they are. I will definitely give Microsoft credit for being more responsive to user feedback.

 

Out With The SQLPS, In With The SqlServer

This month, the SQL Server tools team released an updated version of the SQL Server PowerShell module. It is a huge leap forward for using PowerShell and SQL Server together, with the tools team making a serious commitment to correcting the sins of the past. While we still have a long way to go, this is a promising path. At this time next year, I’m certain the PowerShell/SQL Server experience will be in the best place it has ever been.

Before I share my thoughts, let me give you some relevant links. You know, in case you missed something.

Not Playing Nice

Now for my turn! First off, I want to call out that with the July 2016 update we have a new module. SqlServer replaces SQLPS, which means that all the old functionality in the old module exists in the new one (along with some nice little fixes). What I discovered is that this also means you can’t use both modules at the same time. I ran into this because my profile automatically loaded SQLPS for me. When I went to import SqlServer, I got some unpleasant red text:


Basically, it’s a typing conflict with the SMO. Now, there’s no real reason why you should be running both modules side by side, but be aware that if you have anything that automatically loads SQLPS, you’re going to have problems. Fortunately there’s an easy way to fix this without having to restart your session:

Remove-Module SQLPS
Import-Module SqlServer

However, there’s another monkey wrench, especially for those of us using PowerShell scripts in SQL Agent jobs. Even with the release of SQL 2016 (which was just before these changes were made), SQL Agent still runs sqlps.exe and loads the SQLPS module. To check this, I made a simple agent job that executes:

Get-Module | Out-File C:\Temp\AgentPSModules.txt

The resulting output shows us the issue plain as day: the job step loads the old SQLPS module, not the new SqlServer one.


There are ways around this, of course. The key here is awareness and I’m sure that we’ll see the SQL tools team get an update to this when possible.

Invoke-SqlCmd: Now with more flexibility!

Enough of the bad news, let’s talk about some of the improvements. As I mentioned earlier, Laerte Junior blogged about some of the updates to Invoke-SqlCmd. These changes really help make the cmdlet more useful for managing and working with outputs. However, I want to look at one specific addition more closely: connection strings!

I’ve always found the biggest limitation of Invoke-SqlCmd was that I couldn’t specify connection string parameters. This was a significant issue when working with multi-subnet Availability Groups, because I couldn’t specify MultiSubnetFailover=True, a key connection string parameter. Now I can, along with a host of other options.

Using it is pretty simple. Just declare a connection string as you would with any other application and pass it to the -ConnectionString parameter:

$ConnectionString = 'Server=localhost;Database=tpcc;Trusted_Connection=true'
$sql = 'SELECT name,physical_name FROM sys.database_files'
Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql


This new parameter opens up a whole new range of possibilities. I love that Microsoft is providing multiple paths to work with their tools. This means that if you’re a .NET guy who is more comfortable with connection strings, you don’t have to constantly keep looking up all the different flags and parameters of Invoke-SqlCmd. Also, if you have a situation which cannot be addressed by those standard flags, you can always use the standard .NET connection string parameters and get the job done.
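
For the multi-subnet Availability Group scenario I mentioned above, a minimal sketch might look like this (the listener and database names are made up):

$agString = 'Server=aglistener;Database=AppDB;Trusted_Connection=true;MultiSubnetFailover=True'
Invoke-Sqlcmd -ConnectionString $agString -Query 'SELECT @@SERVERNAME AS CurrentReplica'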

(And yeah, I’m excited about this because it was my Connect item. :D )

Providing for the Provider

The last thing I want to touch on are a couple updates to the SQL Server provider. Now, the provider definitely has a bad name out there, but it can be a very useful tool for exploring and gathering SQL Server objects you want to work with. Since it is based on the SMO, it makes SQL Server object manipulation easier by giving you more options to collect your objects.

An important facet of the provider is that it behaves like a file system. How many of us have deleted files from the command line? Have you also used a PowerShell one-liner to delete old files, like backups? If you have, you might be familiar with two parameters: -WhatIf and -Confirm. These two switch parameters are extremely helpful because they can keep you from cutting yourself with that sharp PowerShell knife.

With the July 2016 update, the SQL Server provider now supports the use of these two switches. While using them may not be a common situation, it’s good to know that they are there. After all, it could be handy if you wanted to clean up some items from the command line, like maybe a junk database. Now you can check what you’re going to do before you do it, along with getting a confirmation question when you go for the actual delete.

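Here is a sketch of what that might look like against the provider (the instance path and junk database name are hypothetical):

# Preview what would be removed without actually doing it
Remove-Item SQLSERVER:\SQL\localhost\DEFAULT\Databases\JunkDB -WhatIf

# Do it for real, but answer a confirmation prompt first
Remove-Item SQLSERVER:\SQL\localhost\DEFAULT\Databases\JunkDB -Confirm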

While this may not seem like a big deal, it’s another example of how the SQL tools team is trying to get the SQL Server PowerShell module up to the standards of other PowerShell implementations. For all of us who struggle daily with technical debt, I’m sure you can understand how significant this is.

Early Days

I know the SQL Server PowerShell module still has a bad reputation with a lot of folks out there. Honestly, it’s well deserved. The SQL Server Tools team is out to fix that and everything we’ve seen over the last 3-4 months shows that commitment. A lot of this has to do with the effort put forth by the SQL Server community and the ongoing Trello board where the community is working with the tools team to suggest and prioritize changes. Want to help? Join us by contributing changes and voting up the current suggestions. Microsoft wants to make this your product and this is how we can help.

Sharing My Summit 2016 Submission Results

Sharing your abstract results for 2016 has become a Thing(TM) that Brent Ozar(@BrentO) started this week. At first I wondered about the value, but as I read others posts it made sense to share because it helps pull the curtain back a little to how the process works for speakers. It also gives us insight into the kind of work the volunteers are doing. As such, I want to share some thoughts on what I’ve seen and how it can be improved.

First off, let’s get the fine print out of the way:

  • Thank you to Allen White(@sqlrunr) and the speaker selection team for managing this cat herd. It’s a tough job, but it shouldn’t be thankless. We appreciate your hard work.
  • Thanks to the volunteers who gave of their time to read and offer feedback on the abstracts. While I have some thoughts on the nature of the feedback, I recognize and appreciate the time it took to write those comments.

Let’s go over the general thoughts. I’ve been deeply intrigued by the speaker selection process over the past five years. With each year, the speakers who submit get better feedback and the process is improving with each iteration. Just like the software we work on, it still has “bugs”, but I think we all need to recognize the strides this process makes and how PASS is doing its best to balance all the factors to create an annual Summit with a variety of voices and a solid conference offering.

On this, my biggest complaint is the overall subjective nature of many of the comments. I’ll address mine specifically when I list my abstracts, but as I read others feedback, it seems like many of the reviewers let their bias color things a bit too much. It might be that we have different perceptions of session levels or that there’s a misconception about a tool, technology, or technique. My suggestion back to the abstract teams is that, as you’re reviewing, step back a moment and be careful that your reviews do not contain too much subjectivity.

(Yes, even THAT might be considered a subjective judgement. Take it for what it’s worth.)

Now, let’s look at my abstracts (All General Session submissions):

Powershell Jumpstart for SQL Server DBAs – Level 100

(Accepted)

Abstract:
Powershell can be intimidating. There are many challenges to using it, especially for those not familiar with its nuances. Once understood, the language can be used for effective and robust automation that brings together SQL Server with other Windows components, such as the file system or Active Directory.

In this session, we will cover the Powershell fundamentals, how you can use Powershell itself to learn about concepts and syntax, and techniques for using Powershell with SQL Server. We will focus on core language patterns that you can use immediately, take the mystery out of the code, and help you get started writing your own PowerShell scripts. Attendees will not only acquire an understanding of Powershell, but see practical examples of how it can be used with SQL Server, along with a set of resources they can use to learn more about the language.

Prereqs:
None

Goals:

  • Introduce attendees to Powershell, what it is and where it came from. Cover basic syntax of Powershell and how it is written.
  • Demonstrate specifics of how Powershell can be used to manage and automate tasks around SQL Server.
  • Provide attendees learning resources for Powershell.

Feedback:

  • abstract – detailed but not compelling
    topic – level too low
    subjective rating: not interesting
  • The outline seems to clearly describe the contents of the presentation.  The topic and goals should be compelling to attendees.  The target audience should be big enough to support this session.  There appears to be a reasonable amount of live demonstrations in relation to the topic being presented.
  • Attendants would be interested this session because it is a beginners session.

My Thoughts:
This is where I have an issue with the subjective nature of the feedback. Maybe I’m a little too sensitive about DBAs’ perception of PowerShell (I have the “Why bother?” debate more times than I care to count), but the “not compelling” comment is off base. A basic PowerShell session may not be compelling to the reviewer, but we have a LOT of DBAs out there who want this kind of information. I know because I talk to them a lot. I also take issue with the “level too low” comment, because this session is specifically designed to make PowerShell easy to grasp for those who haven’t worked with it.

I guess I shouldn’t complain too much since this was the one selected. I’m proud of it and the content. It has been very well received at other venues, so this is a case where I think the reviewer was letting their own personal bias of the content color the abstract review a little too much.

Automating Azure SQL Management with Powershell – Level 200

(Not Accepted: Popular topic, higher rated session selected)

Abstract:
Azure is still a new frontier for many database administrators. While cloud environments bring a lot of flexibility and freedom, there is a lot to learn and a lot to keep on top of. The current portal provides a functional GUI experience, but the most effective way to manage Azure environments is by using Powershell. In this session, we will look at the fundamentals of using Powershell with Azure SQL databases. We will dive into the tools that can be used to deploy and manage these environments, as well as patterns and processes to automate these tasks. Attendees will leave with a solid foundation to build and administer SQL database environments in Azure.

Prereqs:
Basic familiarity with Powershell and Azure.

Goals:

  • Review Powershell Azure cmdlets and how to use them. Includes connecting to your Azure account and setting your subscription.
  • Examine the Azure cmdlets specifically built for Azure SQL Databases and how they apply to the Azure SQL Database operating model.
  • Demonstrate how to use the cmdlets to create, modify, and remove an Azure SQL Database. Discuss patterns for using these and automating Azure SQL Database deployments.

Feedback:

  • Abstract-  well written, compelling
    topic –  topic and title clear. Topic rarely covered
    subjective rating – interesting
  • Great topic. I would like to attend.

My thoughts:
I think this was just a full field and I had another session accepted. The feedback was pretty sparse, but there is an interest here. I’ll probably submit this one again.

Benchmarking SQL Server with Powershell – 300 Level

(Not Accepted: Other sessions selected based on building a balanced program for track coverage, speaker coverage, topic coverage, and session rating.)

Abstract:
Performance is key for any database system. As professionals, we are constantly challenged to keep our servers performing at peak condition. The struggle is many administrators do not understand their current profile, so it is hard to know when performance degrades. Benchmarking SQL Servers is a vital task that is often overlooked because it is either not understood or difficult to manage. This session will cover the important metrics to measure for an effective benchmark and will also demonstrate how a benchmark can be captured using Powershell. Attendees will leave with a clear plan of how they can measure their SQL Server performance, how that measurement can be automated, and how to develop an action plan to dynamically manage these tasks at scale.

Prereqs:
Understanding of SQL Server performance fundamentals, such as perfmon counters and wait statistics. Basic understanding of Powershell.

Goals:

  • Review what constitutes a good SQL Server benchmark. This will cover perfmon counters and wait statistics.
  • Demonstrate how this benchmark can be collected with Powershell.
  • Show how the benchmark can be automated and reported on.

Feedback:

  • abstract: one grammar error in abstract, but detailed;
    topic interesting
    subject rating: I like this topic
  • I would attend this session.
  • Topic is a good and relevant option.
    Level is good for prereqs and goals.
    Abstract is ok, with a few wording and grammar choices that could be improved/changed to make reading easier (ie. ‘performing at peak condition’ and  ‘The struggle is (that) many… ‘.

My thoughts:
Again, full field. This is a session of mine that has evolved over the past year. It started as more PowerShell focused and has become more “How to do a benchmark” focused with PowerShell the supporting actor. The grammar comments are something I continue to work on, because I tend to get waaaaaaay too verbose when I write and put myself into grammatical corners (see what I did there?). I would submit this again, because it appears to have been well received.

SQLPS In Depth – Level 200

(Not Accepted: Higher rated session selected)

Abstract:
A big hurdle for using Powershell and SQL Server together is the SQLPS module. Both old and new users of Powershell don’t completely understand its capabilities. There’s also the concern of how Powershell, SQLPS, and SQL Server all interact with one another. In this session, we’ll talk about the cmdlets you may not know about, tricks to save time using the provider, and even a few gotchas on how the provider works that can save you some time and energy. We will go in depth on what happens in SQL Server when commands are run using the SQLPS module. When we’re finished, you will have a deeper understanding of how you can use SQL Server and Powershell together, along with best practices to integrate the module with your automated tasks.

Prereqs:
Minimal familiarity with Powershell, basic familiarity with SQL Server

Goals:

  • Review the SQLPS provider functionality and go over what it is and how it works.
  • Cover the cmdlets in SQLPS, how they are used and gotchas in their implementation.
  • Demonstrate practical applications for both the provider and cmdlets for SQL Server automation.

Feedback:

  • I would like to attend this session.
  • The outline seems to clearly describe the contents of the presentation.  The target audience may not be big enough to support this session.  The subject matter does not appear to match the experience level (somewhat advanced topic for a 200 level session).
  • abstract: detailed
    topic: goals not compelling
    Subjective Rating : so and so

My thoughts:
The biggest take away here is “The target audience may not be big enough to support this session.” I think that’s true, and the reason is a double edged sword. Many view PowerShell as a niche topic, so there might not be as much demand for this. However, the whole reason I talk about PowerShell and SQL Server (and I submit sessions like this) is to try and move it beyond being a niche topic. This session is intended to get more people excited about PowerShell by seeing what it can do and how to get over some of the hurdles of working with it.

That being said, I had an advanced PowerShell session at last Summit that was deeply technical and still had 300 people in the audience. Maybe it’s not quite as niche as I thought. However, I believe I need to focus on sessions with more general appeal for future Summits.

#TSQL2SDAY: SQL 2016 Direct Seeding

It’s been a while since I’ve written a T-SQL Tuesday post, but thanks to Michael Swart (@MJSwart) for hosting this month’s edition. The request is simple enough: blog about some aspect of SQL Server 2016. This is actually a good opportunity for me to write about a feature I’ve been meaning to get to for some time now and just haven’t had the chance.

As I tell many people, I’ve been working with SQL Server since 1999 (version 7.0) and I can’t remember being more excited about a SQL Server release. 2016 has SO many cool features, but more than that, they are features that many people who use SQL Server have been requesting for a while now. It is hard for me to simply blog about one feature since there are so many. However, there has been an improvement to Availability Groups that has not gotten a lot of coverage and I think it is a game changer, especially for those of us who have to manage AGs on a day to day basis: direct seeding.

Setting Up Replicas

In SQL Server 2012 and 2014, creating an Availability Group could take a significant amount of work. One of the more tedious tasks is setting up your replica databases. This is because you need to restore your database to your replica node in a state close enough to the primary to allow synchronization to happen. It can take several log backup restores to hit that magic window where you can join the database on the secondary node. Then, you get to do it again on the next replica!
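
For contrast, here is a rough sketch of that old routine using the SQL Server cmdlets, repeated for every replica until the log chain lined up (node and database names are the ones used later in this post; the backup share is made up):

# The 2012/2014 way: restore a full backup WITH NORECOVERY on the replica...
Backup-SqlDatabase -ServerInstance PICARD -Database WideWorldImporters -BackupFile '\\fileshare\backups\WWI.bak'
Restore-SqlDatabase -ServerInstance RIKER -Database WideWorldImporters -BackupFile '\\fileshare\backups\WWI.bak' -NoRecovery

# ...then keep restoring log backups until you hit the window where the database can join the AG
Backup-SqlDatabase -ServerInstance PICARD -Database WideWorldImporters -BackupFile '\\fileshare\backups\WWI.trn' -BackupAction Log
Restore-SqlDatabase -ServerInstance RIKER -Database WideWorldImporters -BackupFile '\\fileshare\backups\WWI.trn' -RestoreAction Log -NoRecovery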

Enter direct seeding in 2016. With this feature you no longer have to do any of the restores. You simply create your replicas with direct seeding enabled, then when you add a database to the AG, SQL Server will directly seed the database to your replica nodes. It’s surprisingly simple.

How do we enable this magic? It’s part of the AG replica creation. We still need the prerequisites, so we’ll start with the following:

  • Create a cluster
  • Enable AlwaysOn for the SQL Service
  • Create endpoints and grant permissions on those endpoints
  • The database we’re adding must be in FULL recovery mode
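
To give a flavor of the middle two items (enabling AlwaysOn and creating endpoints), here is a sketch using the SQL PowerShell cmdlets (node names match the script below; granting CONNECT on the endpoints to the service accounts still happens in T-SQL):

# Enable the AlwaysOn feature on each node (this restarts the SQL Server service)
Enable-SqlAlwaysOn -ServerInstance PICARD -Force
Enable-SqlAlwaysOn -ServerInstance RIKER -Force

# Create and start a mirroring endpoint on each node, matching the port used in the AG script
$endpoint = New-SqlHadrEndpoint -Name 'Hadr_endpoint' -Port 5022 -Path 'SQLSERVER:\SQL\PICARD\DEFAULT'
Set-SqlHadrEndpoint -InputObject $endpoint -State 'Started'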

I, unsurprisingly, have a Powershell script that will set this all up for me. Then we can create the Availability Group with the direct seeding setting. Currently, this is only supported using T-SQL (no wizard or Powershell support), but the syntax is documented in MSDN. Let’s start by creating our Availability Group:

CREATE AVAILABILITY GROUP [ENTERPRISE]
FOR
REPLICA ON 'PICARD' WITH (
 ENDPOINT_URL = 'TCP://PICARD:5022', 
 AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, 
 FAILOVER_MODE = AUTOMATIC,
 SEEDING_MODE = AUTOMATIC
),
'RIKER' WITH (
 ENDPOINT_URL = 'TCP://RIKER:5022', 
 AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, 
 FAILOVER_MODE = AUTOMATIC,
 SEEDING_MODE = AUTOMATIC
)
LISTENER 'ENTERPRISE' (WITH IP (('10.10.10.101','255.255.255.0')))

ALTER AVAILABILITY GROUP [ENTERPRISE] GRANT CREATE ANY DATABASE

--Run these on the secondary node
ALTER AVAILABILITY GROUP [ENTERPRISE] JOIN
ALTER AVAILABILITY GROUP [ENTERPRISE] GRANT CREATE ANY DATABASE

 

There are two important lines to call out here:

  • SEEDING_MODE = Automatic
  • ALTER AVAILABILITY GROUP [ENTERPRISE] GRANT CREATE ANY DATABASE

The SEEDING_MODE is what enables direct seeding, but we need to grant the AG permission to create databases on the node as well. At this point, though, we just have an empty Availability Group. It’s time to add a database and watch the magic happen.

Hands Off

Once we’ve created the AG, we just add the database to it:

ALTER AVAILABILITY GROUP [ENTERPRISE]
ADD DATABASE [WideWorldImporters]

Once we’ve done this, SQL Server takes over and pushes the database out to the secondary replicas.


That’s it! SQL Server handles all the replication for you. All is not sunshine and rainbows, however. There are still some gotchas to keep in mind:

  • You need to create the Availability Group with all its replicas at the start. You cannot add replicas at a later point (I thought I had done this, but more recent testing showed it does not work).
  • The AG must also be set up before you add any databases. You can add the database as part of the AG creation or after, but you have to have all your nodes configured before adding the database to allow direct seeding to work.
  • This is a data operation, so we “canna’ change tha’ laws a’ physics, Captain!” If you add a database to an AG with direct seeding, SQL Server still has to copy all the bits and bytes across the wire to the nodes. Which means nodes in other geographical locations could be a problem.

Overall, direct seeding is pretty cool, but still feels a little raw to me. Personally I’d like to be able to seed to a replica I add to the cluster if I desire, but that does not work right now. The lack of GUI and Powershell support (both cmdlets and SMO) is also disappointing. I can still use it, but I’d rather have a simpler way to do it.

For some additional reading, Erik Darling wrote about the exact same feature for T-SQL Tuesday as I did, so it gives you another view of it. Otherwise, that’s it for me on this T-SQL Tuesday. Thanks to Michael Swart for hosting this month. Make sure you check out the rest of the T-SQL Tuesday posts!