Art of the DBA


Auditing SQL Execution Times With #Powershell

Last week, I was talking with another DBA on my team about our release process.  He was running a series of SQL scripts as part of a code release.  There were about 4 different scripts and, while he was running each one, he would log the start and stop time for each script execution in a text file. We both recognized this as a perfect opportunity for some Powershell automation and I figured I’d put something together.

The pattern for logging this kind of work isn’t terribly complex. We have about 3 steps:

  1. Record the start time
  2. Execute the SQL command or script
  3. Record the end time

The simplest way to express this in Powershell-ese would be:

$StartTime = Get-Date
Invoke-SqlCmd -ServerInstance localhost -Database YourDB -InputFile '.\YourSQLFile.sql'
$EndTime = Get-Date
$Duration = New-TimeSpan -Start $StartTime -End $EndTime

Simple, effective, and can then be recorded to any desired output, such as a text file or a database table. Why stop here? Whenever we build automation, we should focus on flexibility and doing the work up front. So let’s enhance this.
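As a minimal sketch of that idea, we could append the timing to a text log right after the measurement (the log path and format here are just placeholders):

$LogLine = "{0},{1},{2},{3}" -f '.\YourSQLFile.sql', $StartTime, $EndTime, $Duration.TotalSeconds
Add-Content -Path 'C:\Temp\SQLTimings.log' -Value $LogLine

That works, but every script you time this way repeats the same boilerplate, which is exactly what wrapping the pattern in a function avoids.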

If you’ve been reading my blog, you’ve probably noticed that I like to make use of custom Powershell objects for reporting. Custom objects give you a flexible and effective way to generate output, as it allows you to present your results in a variety of ways. With this in mind, I put together the following wrapper function to take either a SQL command or script, execute that command or script, and capture the relevant data about the execution.

function Measure-SqlCmd{
   param($instancename
       ,$databasename = 'tempdb'
       ,[Parameter(ParameterSetName = 'SQLCmd',Mandatory=$true)]$Query
       ,[Parameter(ParameterSetName = 'SQLScript',Mandatory=$true)]$InputFile)

   $output = New-Object System.Object
   $errval = $null

   $output | Add-Member -Type NoteProperty -Name InstanceName -Value $instancename
   $output | Add-Member -Type NoteProperty -Name DatabaseName -Value $databasename
   $output | Add-Member -Type NoteProperty -Name StartTime -Value (Get-Date)

   if($InputFile){
       #Run from a script file when -InputFile was supplied
       $output | Add-Member -Type NoteProperty -Name SQL -Value $InputFile
       $sqlout = Invoke-Sqlcmd -ServerInstance $instancename -Database $databasename -InputFile $InputFile -ErrorVariable errval
   }
   else{
       #Otherwise run the inline query
       $output | Add-Member -Type NoteProperty -Name SQL -Value $Query
       $sqlout = Invoke-Sqlcmd -ServerInstance $instancename -Database $databasename -Query $Query -ErrorVariable errval
   }

   $output | Add-Member -Type NoteProperty -Name EndTime -Value (Get-Date)
   $output | Add-Member -Type NoteProperty -Name RunDuration -Value (New-TimeSpan -Start $output.StartTime -End $output.EndTime)
   $output | Add-Member -Type NoteProperty -Name Results -Value $sqlout
   $output | Add-Member -Type NoteProperty -Name Error -Value $errval

   return $output

}

At the core, we’re using the same pattern as declared above, but we’re just capturing other pieces of data and putting them into the properties of our custom object.  This allows us to do all sorts of stuff.  Let’s start with simply running our function:

Measure-SqlCmd -instancename 'localhost' -databasename 'demoPartition' -Query 'exec usp_loadpartitiondata;'

The results are a nice little object list, giving us the information about our SQL command:

(Screenshot: Measure-SqlCmd output)

Handy, no? Well, now the fun begins. Let's say we have a couple commands to execute and we want to output the results to a comma-separated values (.csv) file for logging. Because we did all the work up front of making a function and the custom object, the rest becomes a snap:

$total = @()
$total += Measure-SqlCmd -instancename 'localhost' -databasename 'demoPartition' -Query 'exec usp_loadpartitiondata;'
$total += Measure-SqlCmd -instancename 'localhost' -databasename 'demoPartition' -Query 'exec usp_fragmentpartition;'
$total | Select-Object InstanceName,DatabaseName,StartTime,EndTime,SQL,RunDuration | Export-Csv -Path 'C:\Temp\ExecutionLog.csv' -NoTypeInformation

(Screenshot: Measure-SqlCmd results exported to the .csv log)

The bonus of using this pattern is I can use the object collection for a variety of reporting methods.  Using Export-Csv, I can easily create my .csv log file. Another option is to use Chad Miller’s(@cmille19) Out-DataTable and Write-DataTable to record this info to a database table. We now have a flexible logging process to work with.
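If you want to go the database route, a rough sketch would look something like this. I'm assuming here that you've dot-sourced Chad's two scripts, that the parameter names match the published versions, and that you've created a logging table whose columns match the selected properties (AdminDB and dbo.ExecutionLog are just placeholder names):

. .\Out-DataTable.ps1
. .\Write-DataTable.ps1

#Convert the TimeSpan to seconds so it stores cleanly in a table column
$dt = $total | Select-Object InstanceName,DatabaseName,StartTime,EndTime,SQL,@{Name='DurationSec';Expression={$_.RunDuration.TotalSeconds}} | Out-DataTable
Write-DataTable -ServerInstance 'localhost' -Database 'AdminDB' -TableName 'dbo.ExecutionLog' -Data $dt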

We also go back to the principle of tool building. Here’s a tool for SQL execution that I can use to give me a repeatable logging process. As I continue to say, automation is about consistency. We can use this logging method to automate SQL script and command executions across multiple instances, track them all in the same way, and then store or handle this appropriate to our process. The end result is tracking our work the same way, every time.

Using #Powershell to Measure SQL Transactions

A common question that comes up for SQL Server administrators is "how do I measure performance"? This is a tough question, and there are a variety of ways to answer it. Just off the top of my head, you could measure wait statistics, logical reads and writes for a query, or query duration. When we talk about system performance, though, we need more general metrics, and the one that business folks tend to look at is transactions per second.

For SQL Server, this metric is captured within the SQL Server performance counters.  There are actually two counters you could use: Batch Requests/sec and Transactions/sec. There are some significant differences between these two counters, and this SQL Magazine article covers them both in great detail.  For my work, I like to go with Batch Requests/sec because it captures more of the activity going on within the instance, making it a more informative value for measuring system performance.

What's the best way to capture this? Setting up a perfmon data collection is pretty easy, but then we have to manage creating the collection, managing the output, and making sense of it all. This can be a bear to handle and takes a fair amount of manual work to get in place, especially if we just need a quick check on what's going on in the system.

Powershell provides the Get-Counter cmdlet to query Windows perfmon counters, allowing us rapid access to the information without a lot of hassle. With this cmdlet we can easily measure system performance:

Get-Counter -Counter '\SQLServer:SQL Statistics\Batch Requests/sec'

The output is a PerformanceCounterSampleSet object that is a collection of samples for the call. Using the above command only gives us a single sample, so that makes it simple. We’ll talk about multiple samples in a bit.

 

The key properties of a sample to focus on are:

  • Timestamp – When the sample was collected
  • RawValue – Running cumulative counter for the object
  • CookedValue – The "actual" value, calculated from the difference between the previous RawValue and the current RawValue.

For reference, if you've ever looked at these values in sys.dm_os_performance_counters, you have seen the RawValue. This means you are also probably used to doing your own math to measure these values. Powershell and the perfmon object model will do this math for us, making the values much easier to use.
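For example, here's a quick sketch that collects a handful of samples and lists the cooked values, so you can see the per-interval numbers without doing any of that math yourself (this assumes a default instance, hence the \SQLServer counter path):

$samples = Get-Counter -Counter '\SQLServer:SQL Statistics\Batch Requests/sec' -SampleInterval 5 -MaxSamples 6
$samples.CounterSamples | Select-Object Timestamp,CookedValue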

How do we leverage this for quickly measuring our instance’s transaction performance? I’ve put together the following function to do this for me.  The approach is simple: I give it an instance name and a duration in seconds, and my code will build a Get-Counter call to calculate the Batch Requests/sec information on my instance.

function Get-SQLTxnCount{

    param([string]$InstanceName='localhost'
        ,[int]$DurationSec)

    $smo = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $InstanceName
    $ComputerName = $smo.ComputerNamePhysicalNetBIOS

    $Samples = [Math]::Ceiling($DurationSec/5)
    $output = New-Object System.Object
    if($smo.InstanceName.Length -gt 0){
        #Named instances use a MSSQL$<instancename> counter object
        $Counters = @('\MSSQL$'+$smo.InstanceName+':SQL Statistics\Batch Requests/sec')
    }
    else{
        $Counters = @('\SQLServer:SQL Statistics\Batch Requests/sec')
    }

    $Txns = Get-Counter -ComputerName $ComputerName -Counter $Counters -SampleInterval 5 -MaxSamples $Samples
    $Summary = $Txns.CounterSamples | Measure-Object -Property CookedValue -Minimum -Maximum -Average

    $output | Add-Member -type NoteProperty -name InstanceName -Value $smo.DomainInstanceName
    $output | Add-Member -type NoteProperty -name AvgTxnPerSecond -Value $Summary.Average
    $output | Add-Member -type NoteProperty -name MinTxnPerSecond -Value $Summary.Minimum
    $output | Add-Member -type NoteProperty -name MaxTxnPerSecond -Value $Summary.Maximum
    return $output
}

Get-SQLTxnCount -InstanceName 'PICARD' -DurationSec 30

This function expands on the simple Get-Counter call in a couple ways.  First, I make sure I've narrowed the counters to only the specific named instance I want to measure. The second change is the duration of the test and the calculation of how many 5-second samples will be collected over that duration. Finally, I create a custom output with the Measure-Object cmdlet to show the minimum, maximum, and average Batch Requests/sec for my samples.  This all comes together as a simple transactional measurement for my instance.
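Because the function returns an object, it also plays nicely with the rest of the pipeline. As a quick sketch (the instance names here are just placeholders for your own servers), we could sample several instances and log the results to a .csv:

'PICARD','RIKER' | ForEach-Object { Get-SQLTxnCount -InstanceName $_ -DurationSec 30 } | Export-Csv -Path 'C:\Temp\TxnCounts.csv' -NoTypeInformation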

This isn’t a replacement for detailed benchmarks and analysis. For this, I recommend some of what Laerte Junior(@LaerteSQLDBA) has written. He has a much more detailed overview of Get-Counter and using it with SQL Server that I would highly recommend (P.S. His article gave me my start on this function).  Benchmarks are extremely important and necessary to make the information coming out of my function useful.

Just like a carpenter has his square or tape measure, this function lets me measure my SQL Server with just a few keystrokes. I still need the full context of what's happening on my instance, but at least I'll know the dimensions without a lot of effort or additional work. I challenge you to start thinking about your own Powershell in the same way. Don't think of simply creating scripts; consider how you can take your work and build out your own squares, calipers, or other tools to improve your skills as a craftsman.

Am I Alive? SQL Connectivity Checks with #Powershell

Inevitably, there will be the call from a developer or admin saying "I can't connect to the MORDOR SQL Server!  Everything is terrible!  All is darkness and shadows!  HELP!".  After explaining that one does not simply connect to MORDOR, I fire up SQL Server Management Studio and attempt to log in.  It's not particularly onerous, but it takes time and can't easily be automated for more proactive connectivity checks.

The connectivity check, at the heart of it, isn't really that complex.  Usually it's just a case of connecting and running a simple select statement.  I've done this proactively in the past using a central management server and linked servers for all the instances in my environment.  While this works, it's difficult to manage and automate, and it isn't very mobile.  Me being me, I turn to Powershell for a better way.

Test-SQLConnection

The function here is pretty simple: loop through a collection of instance names and try to query the server name and TempDB creation time.  If I get a result set, then my connection test is successful.  If it errors out, then there's no connection.  Essentially, the test is whether or not my return set has a StartupTime (the TempDB creation date).  So now I can pass a single instance or a collection and test them all with the one function.

function Test-SQLConnection{
    param([parameter(mandatory=$true)][string[]] $Instances)

    $return = @()
    foreach($InstanceName in $Instances){
        $row = New-Object -TypeName PSObject -Property @{'InstanceName'=$InstanceName;'StartupTime'=$null}
        try{
            $check=Invoke-Sqlcmd -ServerInstance $InstanceName -Database TempDB -Query "SELECT @@SERVERNAME as Name,Create_Date FROM sys.databases WHERE name = 'TempDB'" -ErrorAction Stop -ConnectionTimeout 3
            $row.InstanceName = $check.Name
            $row.StartupTime = $check.Create_Date
        }
        catch{
            #do nothing on the catch
        }
        finally{
            $return += $row
        }
    }
    return $return
}

Test-SQLConnection -Instances 'localhost'

Why do I return the TempDB creation date?  Well, a boolean value would be easy enough to return, but I figured I could use more info to work with.  After all, if data comes back, then I know the connectivity is working.  So let's do a little more.  By returning the TempDB creation date, I can also find out if my instance restarted recently (or if it's been up for a really long time), which gives me a little more to act on.

Extending the functionality

While calling this function ad hoc is easy enough, the reason to put it into Powershell is not just so we have another way to test our connections.  Automation is the key here.  So what if we took this and combined it with another technique, getting a server collection from CMS?

$CMS='SHION'
$servers=@((dir "SQLSERVER:\SQLRegistration\Central Management Server Group\$CMS").Name)

$servers+=$cms
Test-SQLConnection -Instances $servers

Using this, we can quickly test the connection of all the servers in our CMS.  We can also check to see if any have started up recently, perhaps warning us of a restart of our SQL Services.  From here, it’s not a tremendous leap to load it into a database table for regular health checks.
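As a sketch of what that looks like, a couple of Where-Object filters against the returned collection will surface both the failures and the recent restarts:

$results = Test-SQLConnection -Instances $servers

#Instances we could not connect to (no StartupTime returned)
$results | Where-Object {$_.StartupTime -eq $null}

#Instances that restarted within the last 24 hours
$results | Where-Object {$_.StartupTime -gt (Get-Date).AddDays(-1)}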

Another tool for the toolbox

What's great about these functions is that once I write one, I have a new tool I can use.  Initially I wrote this function so I could verify which servers in my Active Directory are actually SQL Servers (assuming the default port, of course).  As we can see here, though, the function can be used for a variety of purposes.
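As a sketch of that original use case, assuming the ActiveDirectory module is available and that your SQL Servers are default instances listening on the default port, you could feed computer names from AD straight into the function:

Import-Module ActiveDirectory
$computers = Get-ADComputer -Filter * | Select-Object -ExpandProperty Name
Test-SQLConnection -Instances $computers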

 

Server Inventories

Ok, I’ve been promising this to people for a while, time to get at it.  Brace yourself for a long one.

As enterprise DBAs, we usually have many servers to manage.  Whether it's around ten, a hundred, or a thousand (or more!), tracking them manually is a bear.  However, there are still many shops that maintain convoluted spreadsheets and other manual documentation to track their inventory.  Manual sucks.  As John Sansom(@SQLBrit) says, the best DBAs automate everything, including their server inventories.

There are a lot of approaches and third party tools you can use for your own environment.  In my mind, I'm totally cool with spending the money on a third party tool because of the depth they can provide in my tracking and monitoring.  A lot of times, though, we might not have access to those tools or we may need more information than those tools provide.  It's at this point we sit down at our keyboard, crack our fingers, and start banging out some code.  Of course, the code I started banging out was Powershell (shocking, I know!).

I didn't start from scratch, though.  When putting my own script together (which we'll get to later), I found two great resources that got me about 80% of the way:

  • Allen White(@SQLRunr) – Let PowerShell do an Inventory of your Servers
  • Colleen Morrow(@ClevelandDBA) – Building A SQL Server Inventory

These are both great resources that will get you started, but I needed a little more.  The biggest challenge I had was the need to poll cluster information and collect SQL instance information separate from my machine information.  I also wanted to do some consolidation of my data sets.  So I borrowed heavily from both Allen and Colleen, but then molded their work to my own purposes.

Process

Before we dig into the code, I first want to touch on the collection process and how I handle it.  It’s not complex, but code doesn’t make sense until you understand the “why” of the different components.

What I’m collecting

  • SQL Instance information – Whether it's an instance living on a cluster node, a physical standalone, or a VM host, I want the SQL Instance without really caring about the machine it lives on.  At least not meshed in directly with the machine, though I DO collect the physical host name so I can identify which machine information I need.

  • Physical Machine information – Since I’m not tying the machine info directly with the SQL Instance collection, I want the physical information for the servers my instances live on.  This is where I get everything that isn’t tied directly to the SQL Instance.

How I’m collecting it

  • I’ve got 4 tables, 2 live tables and 2 stage tables, that I use for the collection.  Nothing out of the ordinary, but there’s an instance table with a stage table and a machine table with a stage table.

  • I use a Powershell script to perform the collection itself.  It uses the instance table as its primary driver; the idea is that I provide the initial population of instance names to track, and the inventory process fully populates the rest of the information.

  • When the Powershell script completes, it executes a stored procedure to load data from the stage tables into the live tables.  The load is simple: the instance table is updated with data from the stage and the machine information is deleted/replaced.

  • The Powershell script is run by a SQL Agent job running under a specific monitoring Active Directory account created as a credential in SQL Server.  The gotcha here is that the monitoring account needs domain access to the clusters and machines it’s going to be querying.

The Code

I won't list the full scripts in this post, but you can download and review them here:

SQL Objects
Powershell script

Let's first talk about the dependencies:

  • The SMO, so you’re going to need to have either your script or your profile load the 2008 snap-ins or import the 2012 sqlps module.

  • Chad Miller’s Out-DataTable and Write-DataTable (I combined them into one file, DataTables.ps1).

  • The FailOverClusters module, part of the Remote Admin pack if you’re not running the code on a machine with the Windows Server OS, part of the Fail Over Cluster components if you are.

There are two key functions I use, Get-Instance to gather the SQL instance information and Get-Machine to gather the machine information:

#Collects and returns SQL Instance information
function Get-Instance([string]$instcoll,[int]$id,[string]$name)
{
try
{
	$smo = new-object ('Microsoft.SqlServer.Management.Smo.Server') $name
	$sname = $smo.NetName
	$iname = $smo.InstanceName
	if($iname.Length -eq 0 -or $iname -eq $null) { $iname = "MSSQLSERVER" }

	$managedcomp = new-object ('Microsoft.SqlServer.Management.Smo.WMI.ManagedComputer') $sname
	$output = New-Object System.Object

	$port = $managedcomp.ServerInstances[$iname].ServerProtocols["Tcp"].IPAddresses["IPAll"].IPAddressProperties["TcpPort"].Value
	$ip = (Test-Connection $sname -count 1).IPV4Address.ToString()

	$output | Add-Member -type NoteProperty -name InstanceId -value $id
	$output | Add-Member -type NoteProperty -name SQLVersion -value $smo.VersionString
	$output | Add-Member -type NoteProperty -name SQLVersionDesc -value $smo.ProductLevel
	$output | Add-Member -type NoteProperty -name SQLEdition -value $smo.Edition
	$output | Add-Member -type NoteProperty -name MemoryMinMB -value $smo.Configuration.MinServerMemory.RunValue
	$output | Add-Member -type NoteProperty -name MemoryMaxMB -value $smo.Configuration.MaxServerMemory.RunValue
	$output | Add-Member -type NoteProperty -name MAXDOPVal -value $smo.Configuration.MaxDegreeOfParallelism.RunValue
	$output | Add-Member -type NoteProperty -name IP -value $ip
	$output | Add-Member -type NoteProperty -name Port -value $port
	$output | Add-Member -type NoteProperty -name PhysicalHost -value $smo.ComputerNamePhysicalNetBIOS
	return $output
}
catch
{
	write-host "Error collecting $name"
	return $null
}
}#Get-Instance

#Get host machine information via WMI
function Get-Machine([string]$name,[string]$clst)
{
 try
{
	$comp = gwmi Win32_ComputerSystem -Computer $name | select Model,Manufacturer,TotalPhysicalMemory
	$proc = gwmi Win32_Processor -Computer $name | select NumberOfLogicalProcessors,MaxClockSpeed
	$os = gwmi Win32_OperatingSystem -Computer $name | select OSArchitecture,Name,Version,ServicePackMajorVersion,ServicePackMinorVersion

	$output = New-Object System.Object

	$output | Add-Member -type NoteProperty -name MachineName -value $name
	$output | Add-Member -type NoteProperty -name Model -value $comp.Model
	$output | Add-Member -type NoteProperty -name Manufacturer -value $comp.Manufacturer
	$output | Add-Member -type NoteProperty -name Architechture -value $os.OSArchitecture
	$output | Add-Member -type NoteProperty -name PhysicalCPUs -value $(if(!$proc.Length){"1"}else{$proc.Length})
	$output | Add-Member -type NoteProperty -name LogicalCPUs -value ($proc | Measure-Object NumberOfLogicalProcessors -sum).Sum
	$output | Add-Member -type NoteProperty -name CPUSpeed -value ($proc | Measure-Object MaxClockSpeed -max).Maximum
	$output | Add-Member -type NoteProperty -name MaxMemory -value ($comp.TotalPhysicalMemory/1MB)
	$output | Add-Member -type NoteProperty -name OSName -value $os.name.split("|")[0]
	$output | Add-Member -type NoteProperty -name OsVersion -value $os.Version
	$SPMaj = $os.ServicePackMajorVersion
	$SPMin = $os.ServicePackMinorVersion
	$output | Add-Member -type NoteProperty -name SPVersion -value "$SPMaj.$SPMin"
	$output | Add-Member -type NoteProperty -name Cluster -value $clst

	return $output
}
catch
{
	write-host "Error collecting $name"
	return $null
}
}#Get-Machine

Both leverage the SMO and WMI to get relevant info.  I'll let you sort through the individual elements I'm pulling; it's fairly standard config info.  Because I think in terms of tables, data gathering is a collection of objects that are essentially rows.  I append each new object, with all the appropriate properties, to a larger collection, then use Write-DataTable to push that data directly into the related stage table.
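To give you a feel for the flow without reproducing the whole script, the glue looks roughly like this. This is a simplified sketch that skips the machine and cluster collection, and the server, database, table, and procedure names are placeholders for whatever you create in your own repository:

#The instance table drives the collection
$instances = Invoke-Sqlcmd -ServerInstance 'INVENTORYSERVER' -Database 'AdminDB' -Query 'SELECT InstanceId, InstanceName FROM dbo.SQLInstance'

$instcoll = @()
foreach($i in $instances){
	$instcoll += Get-Instance -id $i.InstanceId -name $i.InstanceName
}

#Push the collection to the stage table, then run the load procedure
Write-DataTable -ServerInstance 'INVENTORYSERVER' -Database 'AdminDB' -TableName 'dbo.SQLInstance_Stage' -Data ($instcoll | Out-DataTable)
Invoke-Sqlcmd -ServerInstance 'INVENTORYSERVER' -Database 'AdminDB' -Query 'EXEC dbo.usp_LoadInventory'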

Look at the entire script for my full process.  I've been pretty happy with it and it's been chugging away in my environments now for about 8-9 months without a whole lot of interaction from me.  If we stand up a new server, I just pop that name in the instance table, run the job, and everything gets updated.  What made me most proud was that we had a recent meeting with a new VP and pulled up the SSRS report built off of this information to give him an idea of what our environments look like.  His comment was "This is the most documentation I've seen since I've got here."  The beauty is that it was provided without a whole lot of manual maintenance on my part.

This is very much an evolving work on my part, but hopefully gives you some insight into how I manage my environments.  I’ve still got lots of ideas on improvements, including automatic population (or other use) of Central Management Server.  If you have any ideas on how this can be improved, I’d love to hear them.

Finding Orphan Logins

Cleaning up old security info is always something that's hard to stay on top of.  Employees who leave or change responsibilities can make it difficult to track which logins need to remain in our instances.  The best way to manage this is by using Active Directory groups and managing access at the group level instead of by explicit login.  However, because of the communication and organization that's required to set up these groups in the first place, it's not uncommon to find ourselves managing explicit logins in our servers.

The unfortunate side effect is this typically results in orphaned logins within our SQL Servers.  A question on how to manage this came up recently on dba.stackexchange.com and, while I couldn’t find a way to answer the specific question (because it was focused around finding orphaned logins in SQL 2000), I was able to come up with a handy utility script for 2005+ that makes use of xp_logininfo.  It’s not a complex script and provides a list of the actual drop login statements you can execute as part of your cleanup.

declare @user sysname
declare @domain varchar(100)

--store total output
declare @return table
(login_name sysname
,sql_statement varchar(max)
,error_msg varchar(100))

--Store output of xp_logininfo
declare @junk table
(a_name sysname NULL
,a_type char(8) NULL
,priv char(9) NULL
,m_login sysname NULL
,p_path sysname NULL)

--Declare domain you are checking
set @domain = 'FOO'

declare recscan cursor for
select name from sys.server_principals
where type = 'U' and name like @domain+'%'

open recscan
fetch next from recscan into @user

while @@fetch_status = 0
begin
	begin try
		insert into @junk
		exec xp_logininfo @user
	end try
	begin catch
		--Error on sproc because login does not exist
		insert into @return
		values(@user,'drop login '+QUOTENAME(@user),ERROR_MESSAGE())
	end catch

	fetch next from recscan into @user
end

close recscan
deallocate recscan

select * from @return

A couple notes on this script:

  • It currently doesn’t handle removing the logins from individual databases.  I have another script I use to do that and will be working on combining the two.
  • I don't automatically execute the drop login statements because I'm paranoid.  :)  By providing a list, I can review the logins, along with the error message output, and verify that I want to remove them.

Hopefully this little utility script can help you with login management in your environments.

A short ride on a PoSH machine

I want to talk about one line of code.  Ok, well, two lines of code, but the first line isn’t that important, much like the opening act of a rock concert.  Let’s just pretend that the first line, while necessary, just gets us warmed up for the main event.

My problem was this:  I was collecting a SQL Server inventory but couldn't retrieve all the information I was looking for.  I had started with some of the elements of Allen White's(b|t) script as well as what Colleen Morrow(b|t) assembled, but I was still missing the TCP/IP address and SQL port.  In my shop we use static addresses and I wanted to have that information handy.  So, to the Google-mobile.

My searching led me to a Sean McCown(b|t) video that showed me how to grab the TCP/IP address, so I was halfway home.  However, I just couldn’t find a clear way to grab the port information.  I think I was stretching my fingers to start pulling my hair out when I bumbled into the ManagedComputer class in the SMO.  Down the rabbit hole I went, using Get-Member, and eventually came up with this:

$managedcomp = new-object ('Microsoft.SqlServer.Management.Smo.WMI.ManagedComputer') $sname
$port=$managedcomp.ServerInstances[$iname].ServerProtocols["Tcp"].IPAddresses["IPAll"].IPAddressProperties["TcpPort"].Value

Hopefully that doesn't look too intimidating, but let's break it down a bit.  The ManagedComputer class (in this case $managedcomp) represents the physical host for the machine.  Using this, we can then access its various properties.  In succession, we have:

  • ServerInstances – An array of all the named instances installed on the machine.  I’m passing a variable to this, which happens to be the name of the instance I want the port for.
  • ServerProtocols – An array of the different protocols, so in this case we want the TCP/IP protocols.
  • IPAddresses – Note, these are the different address configurations SQL Server is listening on, the same ones that appear in the SQL Server configuration tool.  Your array elements are named IPn (one entry per address) or IPAll (if the setting applies to everything).  Since I'm looking for the port that is assigned for all my addresses, I'll look at the IPAll object.
  • IPAddressProperties – Simple enough, the properties of the IPAddress object, and we want the TcpPort property specifically and its value.

How did I get here?  Making extensive use of Get-Member.  Much like Theseus in the Labyrinth, I used that to figure out where to go next each time I went to a deeper property level.
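If you want to retrace those steps yourself, it really is just a matter of piping each level of the object to Get-Member and seeing what's exposed, something like:

$managedcomp | Get-Member
$managedcomp.ServerInstances[$iname] | Get-Member
$managedcomp.ServerInstances[$iname].ServerProtocols["Tcp"] | Get-Member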

And this is what I hope people take away from this.  There's so much information buried in the .Net objects that we can take advantage of, especially by using Powershell.  I'll be the first to admit that there might be a better way to do this (please tell me if there is).  This shows, though, that by using the tools in front of me, I was able to expose the secrets of this particular set of objects and bend them to my will.

 

P.S.  For those interested, I’ll post the full server inventory script here in the near future.

T-SQL Tuesday #31: Logtime (#tsql2sday)

This month’s T-SQL Tuesday is hosted by Aaron Nelson(b|t), asking participants to talk a little about logging. Now there are plenty of technical operations and other logging tricks we could talk about, but Aaron’s request is a broad one and we have the opportunity to cover other forms of logging. So while I could talk about writing out text logs of Powershell actions or parsing and maintaining the various logs within SQL Server, I want to talk about a different kind of logging: time tracking my day.

Time tracking is one of those annoying administrative aspects of our job, much like documentation and project planning. While these tasks can be overdone, they’re all necessary for making our job more efficient and manageable (when done right, of course). The thing about time tracking is it’s hard to really understand where we spend our day when we’re in the trenches fighting fires. When we look back on our week, can we say that we mostly did client work? Administrative tasks? Maintenance jobs? And when we go to our boss and say “Hey, I can’t get this project done because I’m swamped doing ______”, how do we prove our case?

Now the problem with time tracking is it can be really tedious. We're very much limited by the tools we have. This is why I was thrilled when I was reading a post by Kendra Little(b|t) about a great tool called Toggl. This handy application is a blessing. It has a simple-to-manage interface (either the web interface, mobile application, or desktop client) with a timer, where a simple click can either start a new task or continue something else you were working on. Because I can create time entries with a click or two (maybe some typing if I'm creating a new entry), tracking my time becomes less tedious and doesn't intrude much on my day to day work.

(Screenshot: the Toggl desktop client)

Also easy to manage in the interface is grouping tasks into projects. Because the interface is so simple, I can create a project on the fly. Mostly, it's just a description in the application, but it helps me group the individual tasks together. Personally, I have four projects: Client Work, Foundation Projects (internal work), Maintenance (break/fix activities), and Administrative. What you might have will vary, but this breakdown works well for me. The interface makes it very simple to assign a new task to one of these projects or create a new project.

Of course, we all know that while this may be easy to create at entry time, there’s always going to be some cleanup. Whether a misspelled project, an erroneous time entry, or just forgetting to enter a record, there will be times we’ll need to go back and correct things. Again, the application allows for simple lookups of our records as well as easy modification. I can use this to either update or add new records when I’ve missed something.

The beauty of this tool, however, is in the reporting. By including some simple bar graphs, pie charts, and date ranges, I can quickly see how my time breaks down. To me, this is the most important part, because then I can see if I really am spending too much time on meetings and other administrivia (I try to keep it to about 25% of my total time). Also, if my maintenance time is high, it probably means that we have some systemic issues that should be addressed. The nice thing is I can show interested parties how my time is being utilized at a glance with minimal interaction on my part.

The biggest benefit of this tool is that it tremendously reduces the effort I need for time tracking. I would easily spend one to two hours a week trying to track my time and log it properly. While this may not seem like much, it starts to eat into everything when you include time spent in meetings and similar administrative work. Also, let's be honest, that's time I could spend writing a new script or solving a cool problem. With Toggl, I now only need about 15 minutes of the work week for time tracking.

Toggl has been a boon to me, but I wouldn't have it without reading some of the great SQL bloggers out there. Thanks again to Kendra for cluing me in to this application, and thanks to Aaron for giving me the opportunity to write about it. I'm definitely going to be checking out some of the other T-SQL Tuesday posts in this series to see what other time-saving tips, tools, and tricks I can leverage to make my life easier.

Because, in the end, isn’t it all about making my life (and yours) easier?

Survival Monitoring

The IT world is a jungle. Countless threats lurk like predators, ready to devour us if we're not careful. Seemingly benign events can quickly turn into raging panic fests, and danger lives everywhere we're not looking. To survive you need to be prepared and proactive. Unfortunately, many of us are thrown into this with little more than a pat on the back and a smile, as if we just got dropped out of a helicopter into the African jungle with little more than a pack of chewing gum and a pocket knife. Yet we need to survive not just the next day, but the weeks and months ahead of us.

Because we need to survive, there’s some basic stuff we need to focus on in order to ensure our survival. It’s not everything we need to live happy, contented careers, but the minimum elements we need to watch in our environments to make sure we live to see the next day. If we were trapped in the wilderness, we’d first focus on shelter, fire, and food. In database terms, we need to first keep an eye on backups, services, and disk space if we want to make it to the next day.

Shelter from the storm

The most important item in a DBA’s life is backups. We can have screaming disk, tons of CPU, and all sorts of clustering, but that means nothing if our files get corrupted or the building burns down. Just like shelter in the wilderness is a place where we can always find protection, our database backups will always give us something to recover to.

Keep an eye on three things when it comes to your backups. First, make sure they’re actually occurring. Look to the backupset table in msdb for this, because it will tell you exactly when your backups are occurring, whether they’re log backups, fulls, or differentials. Next, where are your backups located? Backups won’t do much for you if they’re stored to the local computer and then that computer’s hard drive burns up or gets corrupted. Make sure that your backup files get to another location. Finally, make sure your backups work. Just because you take a backup doesn’t always mean that backup is reliable. Perform restores when it’s not an emergency to validate your backups, so you’ll know things will work when it’s an emergency.
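As a rough sketch of that first check, here's one way to pull the most recent backup of each type for every database out of backupset with Powershell (a plain T-SQL query against msdb works just as well; 'localhost' is a placeholder for your instance):

$query = @"
SELECT database_name, type, MAX(backup_finish_date) AS last_backup
FROM msdb.dbo.backupset
GROUP BY database_name, type
ORDER BY database_name, type;
"@
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'msdb' -Query $query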

Backups are your safety net. No matter what else happens, you should always have them to fall back on. It may not be pretty, but you’ll be glad they’re there when you need them.

Give me fuel, give me fire

Fire gives us the energy to get things done, whether it’s keeping us warm or being used to cook food. This is the same with your SQL Server services. If these aren’t running, your databases are down and your company is losing money. We can’t always prevent the interruption, but we need to be ready to respond when that interruption occurs. As DBAs we need to be proactive and watch our services.

Also, we can’t limit this just to the SQL Server database service. How many of you run SQL Agent jobs to perform your backups and maintenance? I know I do. If the Agent service is down, the databases will be working fine, but none of that other work is getting done. To boot, you probably won’t be getting any notifications about these jobs not running, so this will be one big blind spot.

We can't take it on faith that services will start automatically. Sometimes they stop for completely legitimate reasons. It's our job to make sure they're up and running, and very few things are worse than that surprise call about something not running because a SQL related service is down (one thing that is worse is not having a backup, see above). Watch your services and you'll sleep better and warmer.
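A quick sketch of a service check with Powershell might look like this (PICARD is a placeholder machine name; the wildcards catch both default and named instance service names):

Get-Service -ComputerName 'PICARD' -Name 'MSSQL*','SQLAgent*' | Select-Object MachineName,Name,Status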

How can you have any pudding if you don’t eat your meat?!

People got to eat. Once we have a place to sleep and fire to keep us warm, this is the next thing that we need to keep us going. For databases, this food is disk space. We could expand it out to CPU and RAM, but I’ve seen many a server that will limp along when these are consumed and stop stone cold when a file can’t grow anymore because the server ran out of space. If we want a happy database, we need to keep our database fed.

Primarily, watch the free space on your drives (wherever your files are stored), but also keep an eye on the free space within your files. You need to know when your files are going to grow and consume your space. The immediate survival goal is to make sure your server has enough disk to keep running, but you also need to monitor how fast that disk is being consumed so that you can be ready to add space as necessary.
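Here's a sketch of a quick disk check using WMI, which reports free space on each fixed drive (again, PICARD is just a placeholder):

Get-WmiObject Win32_LogicalDisk -ComputerName 'PICARD' -Filter "DriveType=3" |
    Select-Object DeviceID,
        @{Name='SizeGB';Expression={[math]::Round($_.Size/1GB,2)}},
        @{Name='FreeGB';Expression={[math]::Round($_.FreeSpace/1GB,2)}}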

Getting by

Please note, doing all of the above doesn't guarantee that your server will hum along happily. This isn't happiness, this is survival. This is the bare minimum you want to do to ensure your company's service and data. That's the thing about monitoring: there are hundreds of counters and statistics you can watch, and it's up to you to figure out which of those are important. That's why you want to start with the fundamentals first, or you could be putting your data, your job, and your company at risk.

I wanted to start with the overview of this strategy. Stay tuned for my next post (might be next week or next Thursday, depending on my schedule), where I will cover some technical solutions to this monitoring: some SQL and Powershell scripts you can use to keep an eye on all of the above. If you want to get a head start, take a look at my post on backupset or look at Brent Ozar's(b|t) sp_blitz.

 

Monday Scripts – xp_logininfo

How many times have you wished you could use SQL Server to query Active Directory and see who is in a given group? This week's script will allow you to do that, so if you have AD groups set up for logging into your server, you can see exactly which logins have access.

xp_logininfo is an extended stored procedure that does just that. By passing a couple parameters, you can easily query any AD group that is a server principal. Wrap it up in a dynamic SQL loop, and you can quickly and easily audit your server’s security.

declare @winlogins table
(acct_name sysname,
acct_type varchar(10),
act_priv varchar(10),
login_name sysname,
perm_path sysname)

declare @group sysname

declare recscan cursor for
select name from sys.server_principals
where type = 'G' and name not like 'NT%'

open recscan
fetch next from recscan into @group

while @@FETCH_STATUS = 0
begin
insert into @winlogins
exec xp_logininfo @group,'members'
fetch next from recscan into @group
end
close recscan
deallocate recscan

select
r.name,
u.name,
u.type_desc,
wl.login_name,
wl.acct_type
from (select * from sys.server_principals where type = 'R') r
join sys.server_role_members rm on (r.principal_id = rm.role_principal_id)
join (select * from sys.server_principals where type != 'R') u on rm.member_principal_id = u.principal_id
left join @winlogins wl on u.name = wl.perm_path
order by login_name,r.principal_id,u.type_desc,u.name

Looking at the script, I first do a looped scan through sys.server_principals for all Windows groups (type ‘G’) and collect their AD group info. I store that in a table variable, then join it back to a select from server_principals again for a full listing of all my server permissions. Bam! Instant security report.

First post and first presentation!

Last Thursday, I got a chance to present to the Denver SQL Users Group on Managing and Auditing SQL Security. I crammed a lot in for 30 minutes and got some good immediate feedback. I think the one thing that made me nervous was no one really asked any questions, but since it was my first technical presentation in a long time, I'm not sweating it.

Anyway, for those of you who attended, thanks for listening. If you have a chance, please leave me some feedback at Speaker Rate. I’d really appreciate it. For those who didn’t make it, the presentation was an overview of creating accounts in SQL Server, granting access via GRANT, server roles, and database roles, as well as queries that you can use to audit that access.

Here’s the meat for all those interested:

Powerpoint Presentation – SQL Server Security
(Please note the presentation is released under the Creative Commons Attribution-NonCommercial license)

Scripts:
server_logins.sql
server_roles.sql
server_role_audit.sql – Uses xp_logininfo
database_roles.sql
db_roles_audit.sql – Uses xp_logininfo
object_grants.sql

Enjoy!
