Art of the DBA

T-SQL Tuesday

T-SQL Tuesday #84: Getting Ready for your Presentation

I’ve been super busy lately, but I wanted to at least post something for this month’s T-SQL Tuesday. The topic is about encouraging new speakers, something I’m very passionate about. I think that speaking is one of the best things you can do to boost your career. If you are reading this and are considering speaking, I encourage you to reach out to your local user group and see if you can get started with a 15 or 30 minute session. Take a shot, I’ll bet you’ll be surprised.

What I want to share are some tips for the day you give your first presentation. A lot of folks are going to talk to you about building and preparing your presentation, but that is only half the battle. What you should do when you actually GIVE the presentation is often glossed over, even though it is the most high-pressure moment of the whole cycle.

How do we reduce or relieve some of this pressure? Well, let’s start by thinking through the things that could possibly go wrong when you present:

  • You’re presenting and you get an on-call page.
  • Your demo blows up spectacularly.
  • While giving your presentation, your computer attempts to apply updates.
  • You start 10 minutes late because you have issues with your video or sound.
  • During your presentation, someone sends you a picture on your favorite IM client.


Any of these will easily throw an experienced presenter off their game. For a new speaker, it can spell disaster. I’ve got a routine that I go through on the day of my presentation, which is designed to reduce that risk of disaster. And who doesn’t like reduced risk?

Getting Ready

Step 1: At the beginning of the day, well before my presentation, I make sure my presentation machine has been updated with Windows patches and other corporate software. This is SUPER important if it’s the second Tuesday of the month (when Microsoft releases updates). Doing this avoids any update surprises while I’m presenting or right before I go on stage.

Step 2: A couple of hours before my presentation, I will walk through the whole thing. I open up the PowerPoint slide deck and step through it. When I get to demos, I walk through my demo scripts. I test EVERYTHING, and I do it in order. If I encounter an error, I fix it and then start over. This helps me ensure that the flow works and that I understand the step dependencies in my demo.

Step 3: About an hour before my presentation, I will turn off everything on my presentation machine that is unnecessary to the presentation: programs like Skype and Google apps, unneeded local SQL instances, virtual machines, and so on. I only want what I need running, both to make sure that I have enough resources for my demos and to keep possible distractions shut down.

Step 4: At least 15 minutes before I’m due to present, I go to my room and hook up my presentation machine. I test the video and make sure my adapter works. This way I can address any tech issues that could hamper the presentation. I will display PowerPoint and also my scripts and demos to make sure everything looks ok.

I also usually duplicate my screen to the projector rather than extending it. This is important because if I extend, the only way (typically) to see what’s on the projected screen is to look back at it, which is distracting for your audience. If I duplicate, I only have to look down at my own screen, which maintains eye contact with the audience.

Step 5: Right before I present, I turn my phone OFF. Then I put it in my bag. I get it away from me. I don’t want to get calls, I don’t want to have to worry about silencing it, and I don’t want it buzzing in my pocket if I’ve got a lot of notifications. The phone is off and away.

It’s GO time

At this point, I’m free and clear to do my presentation. Does that mean that nothing will go wrong? Of course not. However, performing these steps puts me in the best position to give my presentation without disruption. It is a foundation for success. Just like we build out our database solutions to minimize the chance of failure, we need to approach our presentations with a similar sort of care to help guarantee our own success.

I want to thank Andy Yun(@sqlbek) for hosting this month’s T-SQL Tuesday. It’s a great topic and I hope folks get a lot out of it. If you’re considering stepping into the speaking world (and you should!), this month’s blog posts should give you all the tools to succeed. Good luck!

#TSQL2SDAY: SQL 2016 Direct Seeding

It’s been a while since I’ve written a T-SQL Tuesday post, but thanks to Michael Swart(@MJSwart) for hosting this month’s edition. The request is simple enough: blog about some aspect of SQL Server 2016. This is actually a good opportunity for me to write about a feature I’ve been meaning to get to for some time now and just haven’t had the chance.

As I tell many people, I’ve been working with SQL Server since 1999 (version 7.0) and I can’t remember being more excited about a SQL Server release. 2016 has SO many cool features, but more than that, they are features that people who use SQL Server have been asking about for a while now. It is hard for me to blog about just one feature since there are so many. However, there is an improvement to Availability Groups that has not gotten a lot of coverage and I think it is a game changer, especially for those of us who have to manage AGs on a day-to-day basis: direct seeding.

Setting Up Replicas

In SQL Server 2012 and 2014, creating an Availability Group could take a significant amount of work. One of the more tedious tasks is setting up your replica databases. This is because you need to restore your database to your replica node in a state close enough to the primary to allow synchronization to happen. It can take several log backup restores to hit that magic window where you can join the database on the secondary node. Then, you get to do it again on the next replica!

Enter direct seeding in 2016. With this feature you no longer have to do any of the restores. You simply create your replicas with direct seeding enabled, then when you add a database to the AG, SQL Server will directly seed the database to your replica nodes. It’s surprisingly simple.

How do we enable this magic? It’s part of the AG replica creation. We still need the prerequisites, so we’ll start with the following:

  • Create a cluster
  • Enable AlwaysOn for the SQL Service
  • Create endpoints and grant permissions on those endpoints
  • The database we’re adding must be in FULL recovery mode

I, unsurprisingly, have a Powershell script that will set this all up for me. Then we can create the Availability Group with the direct seeding setting. Currently, this is only supported using T-SQL (no wizard or Powershell support), but the syntax is documented in MSDN. Let’s start by creating our Availability Group:

CREATE AVAILABILITY GROUP [ENTERPRISE]
FOR
REPLICA ON 'PICARD' WITH (
 ENDPOINT_URL = 'TCP://PICARD:5022', 
 AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, 
 FAILOVER_MODE = AUTOMATIC,
 SEEDING_MODE = AUTOMATIC
),
'RIKER' WITH (
 ENDPOINT_URL = 'TCP://RIKER:5022', 
 AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, 
 FAILOVER_MODE = AUTOMATIC,
 SEEDING_MODE = AUTOMATIC
)
LISTENER 'ENTERPRISE' (WITH IP (('10.10.10.101','255.255.255.0')))

ALTER AVAILABILITY GROUP [ENTERPRISE] GRANT CREATE ANY DATABASE

--Run these on the secondary node
ALTER AVAILABILITY GROUP [ENTERPRISE] JOIN
ALTER AVAILABILITY GROUP [ENTERPRISE] GRANT CREATE ANY DATABASE

 

There are two important lines to call out here:

  • SEEDING_MODE = Automatic
  • ALTER AVAILABILITY GROUP [ENTERPRISE] GRANT CREATE ANY DATABASE

The SEEDING_MODE is what enables direct seeding, but we need to grant the AG permission to create databases on the replica as well. Then we add the database to the AG. At this point, though, only one node is joined. It’s time to join the second node and watch the magic happen.

Hands Off

Once we’ve created the AG, we just add the database to it:

ALTER AVAILABILITY GROUP [ENTERPRISE]
ADD DATABASE [WideWorldImporters]

Once we’ve done this, SQL Server takes over and pushes the database out to the secondary replicas.


That’s it! SQL Server handles all the replication for you. All is not sunshine and rainbows, however. There are still some gotchas to keep in mind:

  • You need to create the Availability Group with all its replicas at the start. You cannot add replicas at a later point (I thought I had done this, but more recent testing was not working).
  • The AG must also be set up before you add any databases. You can add the database as part of the AG creation or after, but you have to have all your nodes configured before adding the database to allow direct seeding to work.
  • This is a data operation, so we “canna’ change tha’ laws a’ physics, Captain!” If you add a database to an AG with direct seeding, SQL Server still has to copy all the bits and bytes across the wire to the nodes. Which means nodes in other geographical locations could be a problem.
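Related to that last gotcha, you can at least watch a seeding operation while it is in flight and see how much data is left to move. SQL Server 2016 exposes this through a pair of DMVs; here is a quick sketch (the column lists are from memory, so double-check them against MSDN before you depend on them):

--Status of each seeding attempt (run on the primary)
SELECT start_time, completion_time, current_state, failure_state_desc
FROM sys.dm_hadr_automatic_seeding;

--Progress and throughput of an in-flight seeding operation
SELECT local_database_name, remote_machine_name, transferred_size_bytes,
    database_size_bytes, transfer_rate_bytes_per_second, estimate_time_complete_utc
FROM sys.dm_hadr_physical_seeding_stats;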

Overall, direct seeding is pretty cool, but still feels a little raw to me. Personally I’d like to be able to seed to a replica I add to the cluster if I desire, but that does not work right now. The lack of GUI and Powershell support (both cmdlets and SMO) is also disappointing. I can still use it, but I’d rather have a simpler way to do it.

For some additional reading, Erik Darling wrote about the exact same feature for T-SQL Tuesday as I did, so it gives you another view of it. Otherwise, that’s it for me on this T-SQL Tuesday. Thanks to Michael Swart for hosting this month. Make sure you check out the rest of the T-SQL Tuesday posts!

#TSQL2SDAY: You’re Gonna Cluster that GUID

Welcome to the continuing blog party that is T-SQL Tuesday. This month’s host is Mickey Stuewe (@SQLMickey) with a challenge to blog about data modelling mistakes we’ve seen. Unfortunately, it’s an easy topic to blog about because proper data modelling often falls by the wayside in order to rapidly deliver applications. I’m not saying it should, but the fact of the matter is many developers are so focused on delivering functionality that they do not think about the long term implications of their design decisions.

One such mistake is the selection of Globally Unique Identifiers (GUIDs) as a data type. Developers love this data type for a lot of reasons, some of them very good. The fact that GUIDs are globally unique provides a way to keep data unique when migrating, along with a nicely obfuscated surrogate key that can help protect user data. All in all, I cannot really blame people for wanting to use this data type, but the problem is that SQL Server does not manage that data type very well behind the scenes.

The problems with GUIDs in SQL Server are fairly well documented. As a quick review for context, the issue is that since GUIDs are random values, it is hard to index them efficiently and those indexes will rapidly fragment. This means slower inserts and more disk space taken up by the index. I created the following two tables (one with a GUID, one with an INT) and inserted 2000 rows into each:

create table GUIDTest(
    orgid uniqueidentifier default NEWID()
    ,orgname varchar(20)
    ,CONSTRAINT pk_GUIDTest PRIMARY KEY CLUSTERED (orgid));

create table NonGUIDTest(
    orgid int default NEXT VALUE FOR NonGuidSeq
    ,orgname varchar(20)
    ,CONSTRAINT pk_NonGUIDTest PRIMARY KEY CLUSTERED (orgid));
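The sequence referenced by that default and the 2000-row insert are not shown in the post. A minimal sketch of the rest of the setup might look like this (the sequences have to exist before the tables are created, and the GO 2000 batch-repeat trick assumes you are running in SSMS or sqlcmd):

--Create these before the tables above (GuidSeq is used by a later example)
CREATE SEQUENCE NonGuidSeq AS int START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE GuidSeq AS int START WITH 1 INCREMENT BY 1;
GO

--Load 2000 rows into each table
INSERT INTO GUIDTest (orgname) VALUES ('Test Org');
INSERT INTO NonGUIDTest (orgname) VALUES ('Test Org');
GO 2000

--Check fragmentation and size of the two clustered indexes
SELECT OBJECT_NAME(ips.object_id) AS table_name
    ,ips.avg_fragmentation_in_percent
    ,ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(),NULL,NULL,NULL,'LIMITED') AS ips
WHERE ips.object_id IN (OBJECT_ID('GUIDTest'),OBJECT_ID('NonGUIDTest'));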

The results from sys.dm_db_index_physical_stats tell the story from a fragmentation and size perspective: the GUID table’s clustered index ends up far more fragmented and takes up noticeably more pages than the INT table’s.


So GUIDs are bad. With as much as has been written on this topic, it feels a bit like shooting fish in a barrel. However, GUIDs will find their way into databases because of their usefulness to the application layer. What I want to talk about is a common misconception around “solving” the GUID problem: clustering on a sequence ID but keeping the GUID in the table.

Let’s start by creating a new table to have a GUID and cluster on a sequence:

create table SeqGUIDTest(
    seqid int default NEXT VALUE FOR GuidSeq
    ,orgid uniqueidentifier default NEWID()
    ,orgname varchar(20)
    ,CONSTRAINT pk_SeqGUIDTest PRIMARY KEY CLUSTERED (seqid));

As expected, less fragmentation and size. This is good, right? It can be, but here’s the problem: the sequence is completely meaningless to our data and our queries will likely not use it (unless we build in additional surrogate abstraction to relate the sequence to our GUID). Let’s compare query plans for our GUIDTest and SeqGuidTest tables where we query each for a specific orgid value.

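The lookups themselves are nothing fancy; something like this, pulling an existing value from each table rather than hard-coding a GUID:

DECLARE @guidlookup uniqueidentifier = (SELECT TOP 1 orgid FROM GUIDTest);
DECLARE @seqlookup uniqueidentifier = (SELECT TOP 1 orgid FROM SeqGUIDTest);

SELECT orgid, orgname FROM GUIDTest WHERE orgid = @guidlookup;
SELECT orgid, orgname FROM SeqGUIDTest WHERE orgid = @seqlookup;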

The query where the GUID is a clustered index is far more efficient than the one against the table where we cluster on a sequence. This is because it can leverage the index, meaning we will get a seek instead of a scan. While clustering on a sequence field saves us on space and fragmentation, it ends up hurting us when trying to do data retrieval.

If we were tuning the query against SeqGuidTest, the next logical step would be to create a non-clustered index on orgid (sketched after the list below). This would improve the query, but in order to make it useful (and avoid key lookups), we would need to include all the columns of the table. With that, we have completely negated any benefit we got from clustering on a sequence column, because the situation is now:

  • A clustered index that is the table, thus using all that disk space.
  • A non-clustered index that also uses as much space as the table (it has all the columns in it as well).
  • The non-clustered index now has the same fragmentation problem we were trying to avoid on the clustered index.
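For reference, the covering index described above would look something like this (the index name is mine):

CREATE NONCLUSTERED INDEX ix_SeqGUIDTest_orgid
ON SeqGUIDTest (orgid)
INCLUDE (orgname);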

So while trying to avoid a problem, we have made the problem worse.

There are two lessons here.  The first is the classic “it depends” case. There are no hard and fast rules to how we implement our databases. You need to be flexible in creating your design and understand the complete impact of your choices. Clustering on a GUID is not great, but in the above case it is far better than the alternative.

The second lesson is to understand your data and how it is going to be used. When we build tables and indexes, we need to be conscious of how our data will be queried and design appropriately. If we use abstractions like surrogate keys, then we need to make sure the design is built around that abstraction. In the above examples, we could cluster on a sequence key, but only if that sequence key has some sort of meaning in our larger database design. Building a better database is about understanding our relationships and appropriately modeling around them.

Thanks to Mickey for the great T-SQL Tuesday topic! Please check her blog for other great posts throughout the day.

Desired SQL Configuration with #Powershell – Part 2

Last week, as part of T-SQL Tuesday 68, I introduced you to two Powershell functions for validating and setting your SQL Server configurations. While useful, they depend on some sort of configuration source, which begs the question: How do we get this source? This post covers the third function that will help you create your configuration source, using an existing server’s values and allowing you to use them directly or modify them as you see fit.

Get-SQLConfiguration

The concept of Get-SQLConfiguration is simple: Get the sys.configuration values of a SQL Server instance and export them as a hash table that can be used by the other functions. The additional criterion to consider is that Test-SQLConfiguration and Set-SQLConfiguration both use the SMO properties to do this, so our configuration source must also use these names. The result is a function that uses the SMO to perform its export:

function Get-SQLConfiguration{
param([string]$InstanceName='localhost'
  ,[string[]] $Filter
  )

$smosrv = new-object ('Microsoft.SqlServer.Management.Smo.Server') $InstanceName
$output = @()
if($Filter){
  $configs = $smosrv.Configuration | Get-Member -MemberType Properties | Where-Object {$Filter.Contains($_.Name)}
}
else{
  $configs = $smosrv.Configuration | Get-Member -MemberType Properties | Where-Object {$_.Name -ne 'Properties'}
}

foreach($config in $configs){
  $output += New-Object PSObject -Property ([Ordered]@{'Name'=$config.Name;
  'DesiredValue'=$smosrv.Configuration.$($config.Name).RunValue})
}

return $output

}

The function itself is not complex, but I did want to add one other feature to it. A simple export might be overwhelming, considering the number of configuration properties within SQL Server. Most of these will actually be left at their defaults. To aid us, I also added a Filter parameter that accepts an array of SMO configuration property names and will only export those properties.

There are a couple of patterns for using this function. The simplest is to just run it for one server and use the output to check another:

$configurations=Get-SQLConfiguration -InstanceName PICARD
Test-SQLConfiguration -InstanceName RIKER -Configs $configurations

This call will essentially compare and find the differences between the two instances. If we wanted to get more specific, we could use our Filter parameter to only compare one or two configurations:

$configurations=Get-SQLConfiguration -InstanceName PICARD -Filter @('FillFactor')
Test-SQLConfiguration -InstanceName RIKER -Configs $configurations

It is also possible to store and manage these configurations by using the Get-SQLConfiguration function. Because we’re working with hash tables, we have a variety of storage options. The easiest way to get started is to use the built-in Export-Csv cmdlet and save our configurations as a delimited file (I like pipe, but commas will do):

Get-SQLConfiguration -InstanceName PICARD | Export-Csv -Path .\PicardConfig.cfg -NoTypeInformation

With the text file that is generated, we can go in and edit our configs, removing what we want and editing values we need to update. The result can then be stored in source control for tracking and audit purposes. We could even go as far as to load the configurations into a database table and call them from a central administrative server. There are a lot of possibilities.
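Pulling the file back in for use is just as easy. Here is a quick sketch, assuming the Set-SQLConfiguration function from last week’s post is loaded (Import-Csv hands everything back as strings, which Powershell will coerce when the values are assigned, but you can cast them yourself if you want to be explicit):

#Read the edited configuration file and apply it to another instance
$configurations = Import-Csv -Path .\PicardConfig.cfg
Set-SQLConfiguration -InstanceName RIKER -Configs $configurations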

The Code

While I’ve published the functions in these two blog posts, there’s an easier way to get ahold of them. These are part of my SQLConfiguration module published on GitHub, which you can download and use (along with my other SQL configuration functions). Please keep in mind this module and the code contained within should still be considered a work in progress and the functions are provided as is. The usual disclaimers apply, so be careful using it and test it thoroughly before making regular use of it. Of course, if you do find any issues, I would love to know about them so I can review and update my code.

There are many additional items that can be added to this. The largest that comes to mind is dynamic configuration values, as these functions consider your configs to be static. Items like minimum memory, maximum memory, and maximum degree of parallelism depend on the hardware configuration and there are various ways to calculate these values. There are also configurations not stored in this part of SQL Server, such as default database directories and SQL Server agent settings. These functions have plenty of room for enhancement. They’re a good start, however, to giving you additional tools to manage your environment, helping to make your SQL Server deployments more consistent and reliable.

#TSQL2SDAY 68: Desired SQL Server Configuration with #Powershell

Welcome to another month of T-SQL Tuesday, started by Adam Machanic(@adammachanic) and hosted this month by Andy Yun(@SqlBek). The topic for this month’s blog party is “Just say ‘NO’ to defaults!”, a call on what we have learned and how we manage SQL Server defaults in our environments. While you will probably find lots of posts out there on what you should or should not set your SQL Server configurations to, I wanted to share with you a post on some tools that can help you manage these configurations.

Management Overhead

Many database professionals, SQL Server or otherwise, learn very quickly that you don’t want to stick with your default settings. The database vendors usually try to set some general values that can apply to most situations, but these typically don’t last long in any enterprise. You probably have a set of configurations you change from the defaults whenever you install a new SQL Server instance.

The struggle, whether you have 5 SQL Servers or 500, is keeping these settings consistent. Sure, you can script out your changes (and you should), but how do you manage the changes over time? What if someone changes a setting? How do you enforce your configurations? Or maybe someone else sets up an instance and doesn’t apply your scripts? It becomes an ugly problem to manage.

Since I’m a Powershell fan, I’m also a fan of Desired State Configuration. While the technology is still new on the scene, it’s quickly turning into an effective way to manage your server builds. When thinking about Andy’s topic and the problem of managing changes to your instance defaults, my ‘eureka’ moment was that the DSC model could easily be applied to managing these configurations. The result was three Powershell functions that provide tools you can use to control the settings in your environment.

Test-SQLConfiguration

The first function I wrote was to evaluate the configurations on an instance. To do this, I started with the assumption that I’d have a hash table of Name and DesiredValue pairs. Each pair would be the SMO Configuration Class property and the desired value I wanted to check. Then I would simply loop through each one and, if it did not match, I would add that to an output array. The function would then return a collection of configurations that did not match my desired state.

function Test-SQLConfiguration{
  param([string]$InstanceName='localhost'
    ,[Parameter(Mandatory=$true)][PSObject] $Configs
  )
  $smosrv = new-object ('Microsoft.SqlServer.Management.Smo.Server') $InstanceName
  $output = @()

  foreach($config in $configs){
    if($config.DesiredValue -ne $smosrv.Configuration.$($config.Name).RunValue){
      $output += New-Object PSObject -Property (@{'Configuration'=$config.Name;
                   'DesiredValue'=$config.DesiredValue;
                   'CurrentValue'=$smosrv.Configuration.$($config.Name).RunValue})
    }
  }
  return $output
}

To test this, I put together a very simple pipe-delimited file of configurations I wanted to check. These configuration names had to match the SMO property names (which aren’t difficult to acquire), and the resulting file looks like this:

SQLServerConfig.cfg
 Name|DesiredValue
 FillFactor|80
 MaxDegreeOfParallelism|4
 CostThresholdForParallelism|40
 XPCmdShellEnabled|0

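Running the check is then just a matter of importing that file; a quick sketch, assuming it sits in the current directory:

$configurations = Import-Csv -Path .\SQLServerConfig.cfg -Delimiter '|'
Test-SQLConfiguration -InstanceName 'PICARD' -Configs $configurations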

This function provides a quick view of which SQL Server configurations don’t match my desired values. What’s cool is that I can then take this function and easily run it across my entire enterprise.

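Here is a simple sketch of that enterprise-wide run, reusing the configurations we just imported (the instance list is mine; pull yours from a central management server or inventory table):

$instances = 'PICARD','RIKER','WORF'
foreach($instance in $instances){
    Test-SQLConfiguration -InstanceName $instance -Configs $configurations |
        Add-Member -MemberType NoteProperty -Name 'Instance' -Value $instance -PassThru
}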
With this function, I’ve removed the burden of validating my SQL Server instance configurations. Since the output is an object, there are many flexible options for reporting and collecting the information. Writing the results to a text file is a snap, as is uploading them to a database table for ongoing auditing.

Set-SQLConfiguration

The next step, after we’ve checked configurations, is to correct the violations. This next function works much like Test-SQLConfiguration and takes the same two parameters. The difference is that this function will alter the value and then reconfigure the instance to apply the change.

function Set-SQLConfiguration{
  param([string]$InstanceName='localhost'
    ,[Parameter(Mandatory=$true)][PSObject] $Configs
  )
  $smosrv = new-object ('Microsoft.SqlServer.Management.Smo.Server') $InstanceName
  $output = @()

  foreach($config in $configs){
    if($config.DesiredValue -ne $smosrv.Configuration.$($config.Name).RunValue){
        $row = New-Object PSObject -Property (@{'Configuration'=$config.Name;
               'DesiredValue'=$config.DesiredValue;
               'CurrentValue'=$smosrv.Configuration.$($config.Name).RunValue})
        $smosrv.Configuration.$($config.Name).ConfigValue = $Config.DesiredValue
        $smosrv.Configuration.Alter()
        $row | Add-Member -MemberType NoteProperty -Name 'ConfiguredValue' -Value $smosrv.Configuration.$($config.Name).RunValue
        $output += $row
        if($smosrv.Configuration.$($config.Name).IsDynamic -eq $false){$reboot=$true}
    }
  }

  if($reboot){Write-Warning 'Altered configurations contain some that are not dynamic. Instance restart is required to apply.'}

  return $output
}


Because the function is built to accept a hash table of configurations, we can use the same pipe-delimited file (or any delimited file) to update the instance. Note the warning in the output: because not all configurations are dynamic, the function will alert you if a non-dynamic configuration was changed. If you change a non-dynamic configuration, you will need to restart the SQL Service to complete the change.

Wrap Up

The last challenge is building out the configurations to check against. The third function I wrote will handle that, but I felt like covering that functionality would make this blog post too long. Next week I will cover the Get-SQLConfiguration function, additional techniques for extending these functions, and then tell you where you can get the code. Please note that this code is in a work-in-progress state, so use it with caution. However, this also means that if you have any suggestions, I’d love to hear them so I can turn this into a functional tool that the community can use.

Thanks again to Andy for a great T-SQL Tuesday topic. Keep your eye on his invite blog post and the #TSQL2SDAY hashtag on Twitter for other great contributions.

#TSQL2sDay: #Powershell and Extended Events

I’ll be the first to tell you I’m not a fan of extended events. The why is a whole other blog post unto itself, but since T-SQL Tuesday is all about learning, I figured I’d take a stab at things. Furthermore, since I like to talk about Powershell, it seems like a good opportunity to explore a little about how we can manage extended events through that language.

The SQLPS Provider

The simplest entry point for interfacing with SQL Server is the SQLPS provider. Yes, I know there are a lot of gripes about it (and that’s yet ANOTHER blog post I’m working on). However, it’s still a good starting point. Loading up the provider is easy:

Import-Module SQLPS

Providers in Powershell are nice because we get to interact with different parts of the stack as if they were file systems. When you load up the provider, you’ll get a SQLSERVER drive that you can do a directory lookup on.


Notice how we have an XEvent folder? This is how we can start exploring our extended events. The lookups follow the provider’s standard path format, and if we drill down, we start to see items that match what we see in Management Studio. Looking at SQLSERVER:\XEvent\PICARD\Default\sessions will give us a quick listing of all the sessions on that instance and whether they’re running or not.


Thinking Objectively

As we further explore this, we need to remember that everything in Powershell is an object. This specifically means that with the SQLPS provider, everything is an SMO object. By leveraging the trusted Get-Member cmdlet, we can quickly find out that we are dealing with Microsoft.SqlServer.Management.XEvent objects and the Session class. So what can we do with this?

First off, I found a bit of a gotcha when trying to assign these sessions to objects in Powershell. Normally, I’d just use Get-ChildItem on the name to populate the variable, but when I tried to do that I ended up with a string object. Not very useful. In order to get the proper assignment, I had to work around it a bit and ended up with this:

$xe = dir | Where-Object {$_.name -eq 'system_health'}
$xe |  gm

Why this is different from the usual ‘dir system_health’ is unknown, but it is the result of how the SQL Server team implemented the provider (yes, another post for another time). However, now we have an object and can start leveraging some of its properties.

Here are a couple of examples of the calls you can make:

$xe.Targets
$xe.Stop()
$xe.IsRunning
$xe.Start()
$xe.IsRunning
$xe.ScriptCreate().GetScript()

You’ll notice at least one funky bit of syntax here around scripting. While most SMO objects give you a .Script() method, these objects require you to first call a .ScriptCreate() (or alter or drop) method, which returns an ISfcScript script object. You then have to call that object’s .GetScript() method to actually generate your script. It’s a little bit of a long way around the horn, but you get there.

Practical Application

What can we do with this? I’ll be the first to admit that some of this is clunky, but much of that is probably my general unfamiliarity with this aspect of the provider and extended events in general. However, I can definitely start to see some specific use cases, such as:

  • Backing up existing custom sessions across my environment by scripting them out with a scheduled job.
  • Checking that all system_health sessions on my SQL Servers are running.
  • Executing a mass change to my extended events sessions using the methods.

To effectively leverage Powershell with extended events, we need to play to the language’s strengths. The focus here is on multi-server execution and scripting for consistency. This is also just a start. As you (and I) continue to explore this facet of the SQL Server provider, many other use cases and situations may show up.
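As a quick sketch of that first use case, backing up your custom sessions is just a loop over the provider path (the instance name and output folder are mine, and I skip the built-in sessions):

#Script out every custom extended events session to a .sql file
$sessions = Get-ChildItem 'SQLSERVER:\XEvent\PICARD\Default\Sessions' |
    Where-Object {$_.Name -notin 'system_health','AlwaysOn_health'}
foreach($session in $sessions){
    $session.ScriptCreate().GetScript() | Out-File -FilePath ".\$($session.Name).sql"
}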

Thanks out to Jes Borland(@grrl_geek) for hosting this month’s T-SQL Tuesday. Remember to watch the #tsql2sday tag for other great blog posts.

#TSQL2SDAY 65 – TIL Edition: Whaddya mean there’s no endpoint?

This month’s T-SQL Tuesday is from Doc Mike Donnelly(@SQLMD) with a simple premise: show us something you learned. Considering that working in the information technology field means we are always learning (or at least should be), blogging on this should be a snap. It’s a great fit for the #SQLNewBlogger challenge if you’re in on it, or just a nice reason to write if you’re not.

My tidbit is somewhat Powershell related, but touches on the general setup of Availability Groups. If you read my earlier short series on automating SQL Server builds, I showed you how you can build an Availability Group using Powershell. When I was testing it, the process worked just fine for building the Availability Group, but I ran into problems when adding databases. When I went through the troubleshooting, I discovered that while you specify an endpoint in the Powershell cmdlet call, that cmdlet doesn’t actually create said endpoint. It just records the endpoint as part of the config.

This should not really come as a surprise. If you ever use the wizard in SQL Server Management Studio to create an Availability Group, it will script out commands to create the proper endpoint. I assumed that the Powershell cmdlet would work similar to the wizard, but it does not. This means I will have to add a step in my process to actually create the endpoint.

There are two ways to do this. The first is to simply use T-SQL in my script and pass it to either Invoke-SqlCmd or sqlcmd:

$endpointsql = @"
CREATE ENDPOINT [HADR_endpoint]
STATE=STARTED
AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
FOR DATABASE_MIRRORING (ROLE = ALL, AUTHENTICATION = WINDOWS NEGOTIATE
, ENCRYPTION = REQUIRED ALGORITHM AES);
"@

Invoke-SqlCmd -ServerInstance 'NODE1' -Database 'master' -Query $endpointsql

The second option is to use the SMO to create an endpoint:

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$smonode = New-Object Microsoft.SqlServer.Management.Smo.Server 'NODE1'

#create endpoint
if($smonode.Endpoints.Name -notcontains 'HADR_endpoint'){
    $EndPoint = New-Object Microsoft.SqlServer.Management.Smo.Endpoint($smonode, 'HADR_endpoint')
    $EndPoint.EndpointType = 'DatabaseMirroring'
    $EndPoint.ProtocolType = 'Tcp'
    $EndPoint.Protocol.Tcp.ListenerPort = 5022
    $EndPoint.Payload.DatabaseMirroring.ServerMirroringRole = 'All'
    $EndPoint.Payload.DatabaseMirroring.EndpointEncryption = 'Required'
    $EndPoint.Payload.DatabaseMirroring.EndpointEncryptionAlgorithm ='Aes'
    $EndPoint.Create()
    $EndPoint.Start()
}

The choice between these options is down to personal comfort, either one would be acceptable in my eyes. I lean towards the SMO method just because I’m likely to be using the SMO objects for other operations in a script.

The lesson here is the old line about assumptions being the mother of all screwups and why you should always understand what your tools are doing. If you get stuck in to using wizards all the time, you will not see what is happening under the covers. Using a tool is fine, even necessary, but you should always make sure you understand the mechanics of those tools in case something does not work as expected.

How can you learn those mechanics? Almost every action in SSMS can be scripted out into T-SQL. This key learning tool is something I always stress to new SQL Server folks. It is the easiest way to see what SQL Server is trying to do and helps build your skills. The next time you’re trying to do something in Management Studio, study the script, and teach yourself something new.

Update:

So there’s a third option and I’m mildly annoyed at myself for missing it. As Nic Cain(@SirSQL) points out in the comments, Microsoft provides cmdlets for endpoints. In keeping with the examples above, here’s how you would use them:

$endpoint = New-SqlHadrEndpoint HADR_Endpoint -Port 5022 -Path SQLSERVER:\SQL\NODE1\DEFAULT
Set-SqlHadrEndpoint -InputObject $endpoint -State "Started"

A LOT simpler than working with the SMO directly, and it remains consistent with the rest of the script. Probably the better way to go.  Thanks Nic!

#tsql2sday 63 – Security, Default Databases, and #Powershell

This month’s T-SQL Tuesday is about how we manage database security, hosted by the ever effervescent Kenneth Fischer(@sqlstudent144). While I didn’t have a lot of time to work on this post, I wanted to share with you all a quick little nugget I used for one of my most recent presentations.

So, default databases. This is a security bit that usually gets skipped over, unfortunately, but I think it’s a pretty important practice that is easy to implement and maintain. Let’s be honest, most of us leave the default database as ‘master’ when we create logins. This then will usually lead us to accidentally doing work in master, which could potentially be disastrous.

Security is more than just locking people out or letting people in. While most of the situations with master can be limited by reduced permissions, that doesn’t do anything to help or manage accounts that need to be able to work in that database.  Security is not just about active policies, but also passive rules to reduce and prevent mistakes. A lot of damage can be prevented by directing people out of dangerous places by default.

Fixing this is easy:

ALTER LOGIN [foo\bar] WITH DEFAULT_DATABASE = [tempdb];

Also, we’d ideally create the logins with that default database set to begin with:

CREATE LOGIN [foo\bar] FROM WINDOWS WITH DEFAULT_DATABASE=[tempdb];

However, this can be difficult to manage and keep up to date, especially since SQL Server doesn’t help by always defaulting to master. One thing that can be done is to have a way to set all your logins’ default databases to an appropriate database (I really like tempdb for this). So how about some Powershell?

Using the SMO, we can use the DefaultDatabase property of the Login class for a quick and easy way to alter our default database:

$login.DefaultDatabase = 'tempdb'

Then, using the Logins collection in our Server object, we have an easy loop to go through and alter these default databases. Wrap that up in a function and Bob’s your uncle. Well, he’s my uncle, not yours. Anyway, we’ll get a little something like this:

#load assemblies
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
$ErrorActionPreference = 'Inquire'

function Set-SQLDefaultDatabases{
    param([string[]]$Instances = 'localhost'
         ,[string]$defaultdb = 'tempdb')

foreach($InstanceName in $Instances){
    $smosrv = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server $InstanceName
    if($smosrv.Databases.name -contains $defaultdb){
        foreach($login in $smosrv.Logins){
            Write-Verbose "Altering $login on $InstanceName to $defaultdb"
            $login.DefaultDatabase = $defaultdb
            #Alter() is needed to push the property change back to the server
            $login.Alter()
            }
        }
    else{
        Write-Warning "Database $defaultdb is not valid on $InstanceName."
        }
    }
}

Set-SQLDefaultDatabases

So this simple function will set all the logins (SQL, Windows, and AD Groups) to the default database of choice. It’s fairly brute force, but gives you a good starting point. At this point, you should be able to see the value of writing it in this function format and use techniques I’ve blogged about earlier to set this enterprise wide.
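For example, pointing it at a handful of instances (the names are mine) is a one-liner:

Set-SQLDefaultDatabases -Instances 'PICARD','RIKER','WORF' -defaultdb 'tempdb'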

Thanks out to Ken for hosting this month’s T-SQL Tuesday! Looking forward to reading the other blogs!

#tsql2sday 52: Stop depending on “it depends”

It seems to be a standard practice for data professionals to say “It depends” when asked for suggestions, recommendations, or outright solutions.  This isn’t surprising, because in the “magic” world of technology the actual solution for a problem usually requires some very specific information, and any number of solutions could work.  Add to this that the nature of our world usually puts you at the whip end when things break, so we are naturally hesitant to draw a line in the sand.

Stop it.

Let’s face it, much of what we do is fairy dust and moonbeams to those outside our industry.  They don’t understand what we do, just that we make it happen.  And that’s what managers, directors, veeps, and c-fillintheblank-os want when they ask us questions: how can we make it happen and/or how do we make the broken thing go away.  The problem with saying “It depends”, especially in situations like that, is that it sounds more like “I don’t know” and that’s not what you’re getting paid for.  You’re the technical expert.  Whether you’re a consultant or a full-time employee, a seasoned veteran or a fresh-faced n00b, you’re being paid to understand the technology and be able to implement solutions.

Many data professionals, however, are afraid to take this sort of stand.  A lot of this stems from the tremendous amount of risk involved, particularly when tied in to the heavy responsibility we tend to bear.  Our job is to protect the company’s data assets and if we screw up it’s not just our head, but it could be seriously damaging to our company.  So we like to hedge our bets.  Unfortunately, a lot of people in our profession will use the phrase “It depends” as a dodge because they’re afraid of completely taking on that responsibility.

Ultimately, they’re afraid of failure.

It’s a common mantra in life coaching that we can’t be afraid of failure.  Failure is what we learn from and how we grow.  Make a mistake, analyze it, and grow from it.  We’re only human, however, and screwing up is scary.  We don’t want to look bad and we don’t want to get in trouble.  That doesn’t help the people who are looking to us for help.  This is when saying “It depends” as a shield turns into a roadblock, hindering both you and your organization from getting anywhere.

So what do we do about it?  Next time someone asks you for a technical opinion and you don’t have enough detail, ask for that detail.  What’s the RTO/RPO? How quickly does this need to perform?  What’s our budget?  Ask questions like that to refine the answer.  Maybe outline a solution, but caveat it with qualifiers, such as “I’d probably put tempdb on SSDs, but that assumes we can afford that sort of hardware.”  Maybe there’s a better way to do it, maybe you’re just wrong.  But it’s ok to make a mistake, as long as you’re not making the same mistake twice.

Most of all, I’d suggest simply removing “It depends” from your vocabulary.  There are a dozen ways to get the point across that a solution will require some thought and planning, but I found that when I was forced to use something other than this two-word quip, I had to work harder on my response to really explain the dependencies for a solution.  And the non-technical folks around you are ok with that.  Sure, they don’t want you talking above their heads, but they also need to know why things need to be done a certain way.

Some folks might call this leadership.  Others, sticking your neck out.  Still others might call this being unnecessarily risky.  I call it doing your job.  Like I said, companies pay us to know our field, so it’s time we act like it.  Data is our world; very few people live in it and understand it the way we do, so own that knowledge and champion it.  And stop hiding behind that phrase, “It depends”.

(This month’s T-SQL Tuesday, a grand tradition started by Adam Machanic, is being hosted by Michael J. Swart(@MJSwart).  Great topic choice, sir!)

T-SQL Tuesday #41: The Hook #tsql2sday

Bob Pusateri(@SQLBob) is this month’s host for T-SQL Tuesday with a topic I definitely can relate to.  Bob asks bloggers to talk about their presenting experiences: how they got introduced to it and why they keep doing it.  Since I’m right on the heels of giving my Server Core talk at SQL Saturday 197, it’s perfect timing.

To put my presentation experiences in context, let’s first talk a bit about some of my performance philosophy.  I’ve written about my musical background before and how it relates to giving technical talks.  One of my chief theories of performance (and art, for that matter) is the requirement of an audience, that art is not really art until you have an audience to appreciate it.  It’s all well and good for me to practice and play by myself, honing my skills and rehearsing pieces, but none of this becomes music until there are people to listen to it and hear my message.  Art is about communicating with that audience, sharing something of yourself through your performance.

This is a philosophy that directly translates to the presentations we give in the SQL community.  The main driver is for us to share our technical knowledge with our peers, to create an educational conversation with those who do what we do.  For many, it’s intimidating to present when you think you have nothing to share.  When we realize that we can teach our audience something new, it’s an epiphany of what our impact can really be. This was exactly the “hook” that got me into presenting.

It was March 2011.  I had recently read Brent Ozar’s (@BrentO) landmark post: Rockstars, Normal People, and You.  I wasn’t sure about presenting, but I figured I’d give it a bash, so I reached out to the Denver User Group to see if I could sneak in for a slot.  After initially being told that my first chance would probably be something in the summer, I got a call from the VP of Events to see if I could give a short talk for the March meeting.  Apparently, the regularly scheduled speaker had to cancel and the group needed someone to fill in on short notice.

I had about two weeks.  In retrospect, that is a TON of time, but as a new speaker I felt like I was cramming last minute for an exam.  I put together a short presentation on database security, built around this cool extended stored procedure I found: xp_logininfo.  The night of the meeting came along and I went to the podium to warm up the room for Doug Lane(@thedouglane) with my “dinky, little presentation”.  The 30 minutes flew by, partly, I think, because of my nerves and how quickly I talked, but everything went fine.  My demos worked, no one laughed at me, and my biggest sin was not speaking up so the back of the room could hear me.

Then came the “hook”.  As I was packing up for the evening, Tom Norman(@ArmorDBA) came over to talk with me.  Tom had been a regular at the user group for a while and has given his own share of presentations.  To this day, I remember what he said:

“I’ve been a DBA for over twenty years.  You taught me something new tonight.”

Needless to say I was flattered, but it took a couple days to sink in.  When it did, it hit me: these were people that benefited from my performance, an audience that enjoyed my performance.  I was able to take my technical knowledge and mold it into something more.  Two years later and I’m a regular speaker in the Mountain West area as well as VP of Events for the Denver group.  I’ve had the opportunity to speak at SQLRally and many other PASS events.  Presenting has been so much fun and it’s opened countless doors and started numerous friendships.

I want to thank Bob for giving speakers a chance to share their experiences.  My biggest hope is that we can encourage those who haven’t started speaking to do so.  If you’re reading these T-SQL Tuesday posts and you haven’t given a talk yet, go talk to your user group right now.  The SQL community is always looking for speakers and, whether you believe it or not, you have something I want you to teach me.