
Sharing My Summit 2016 Submission Results

Sharing your abstract results for 2016 has become a Thing(TM) that Brent Ozar(@BrentO) started this week. At first I wondered about the value, but as I read others' posts it made sense to share, because it helps pull the curtain back a little on how the process works for speakers. It also gives us insight into the kind of work the volunteers are doing. As such, I want to share some thoughts on what I've seen and how it can be improved.

First off, let’s get the fine print out of the way:

  • Thank you to Allen White(@sqlrunr) and the speaker selection team for managing this cat herd. It’s a tough job, but it shouldn’t be thankless. We appreciate your hard work.
  • Thanks to the volunteers who gave of their time to read and offer feedback on the abstracts. While I have some thoughts on the nature of the feedback, I recognize and appreciate the time it took to write those comments.

Let's go over the general thoughts. I've been deeply intrigued by the speaker selection process over the past five years. Each year, the speakers who submit get better feedback, and the process improves with each iteration. Just like the software we work on, it still has "bugs", but I think we all need to recognize the strides this process makes and how PASS is doing its best to balance all the factors to create an annual Summit with a variety of voices and a solid conference offering.

On this, my biggest complaint is the overall subjective nature of many of the comments. I'll address mine specifically when I list my abstracts, but as I read others' feedback, it seems like many of the reviewers let their bias color things a bit too much. It might be that we have different perceptions of session levels or that there's a misconception about a tool, technology, or technique. My suggestion back to the abstract teams is that, as you're reviewing, step back a moment and be careful that your reviews do not contain too much subjectivity.

(Yes, even THAT might be considered a subjective judgement. Take it for what it’s worth.)

Now, let’s look at my abstracts (All General Session submissions):

Powershell Jumpstart for SQL Server DBAs – Level 100

(Accepted)

Abstract:
Powershell can be intimidating. There are many challenges to using it, especially for those not familiar with its nuances. Once understood, the language can be used for effective and robust automation that brings together SQL Server with other Windows components, such as the file system or Active Directory.

In this session, we will cover the Powershell fundamentals, how you can use Powershell itself to learn about concepts and syntax, and techniques for using Powershell with SQL Server. We will focus on core language patterns that you can use immediately, take the mystery out of the code, and help you get started writing your own PowerShell scripts. Attendees will not only acquire an understanding of Powershell, but see practical examples of how it can be used with SQL Server, along with a set of resources they can use to learn more about the language.

Prereqs:
None

Goals:

  • Introduce attendees to Powershell, what it is and where it came from. Cover basic syntax of Powershell and how it is written.
  • Demonstrate specifics of how Powershell can be used to manage and automate tasks around SQL Server.
  • Provide attendees learning resources for Powershell.

Feedback:

  • abstract – detailed but not compelling
    topic – level too low
    subjective rating: not interesting
  • The outline seems to clearly describe the contents of the presentation.  The topic and goals should be compelling to attendees.  The target audience should be big enough to support this session.  There appears to be a reasonable amount of live demonstrations in relation to the topic being presented.
  • Attendants would be interested this session because it is a beginners session.

My Thoughts:
This is where I have an issue with the subjective nature of the feedback. Maybe I'm a little too sensitive about DBAs' perception of PowerShell (I have the "Why bother?" debate more often than I care to count), but the "not compelling" comment is off base. A basic PowerShell session may not be compelling to the reviewer, but we have a LOT of DBAs out there who want this kind of information. I know because I talk to them a lot. I also take issue with the "level too low" comment, because this session is specifically designed to make PowerShell easy to grasp for those who haven't worked with it.

I guess I shouldn't complain too much, since this was the one selected. I'm proud of it and the content. It has been very well received at other venues, so this is a case where I think the reviewer let their own personal bias about the content color the abstract review a little too much.

Automating Azure SQL Management with Powershell – Level 200

(Not Accepted: Popular topic, higher rated session selected)

Abstract:
Azure is still a new frontier for many database administrators. While cloud environments bring a lot of flexibility and freedom, there is a lot to learn and a lot to keep on top of. The current portal provides a functional GUI experience, but the most effective way to manage Azure environments is by using Powershell. In this session, we will look at the fundamentals of using Powershell with Azure SQL databases. We will dive into the tools that can be used to deploy and manage these environments, as well as patterns and processes to automate these tasks. Attendees will leave with a solid foundation to build and administer SQL database environments in Azure.

Prereqs:
Basic familiarity with Powershell and Azure.

Goals:

  • Review Powershell Azure cmdlets and how to use them. Includes connecting to your Azure account and setting your subscription.
  • Examine the Azure cmdlets specifically built for Azure SQL Databases and how they apply to the Azure SQL Database operating model.
  • Demonstrate how to use the cmdlets to create, modify, and remove an Azure SQL Database. Discuss patterns for using these and automating Azure SQL Database deployments.

Feedback:

  • Abstract-  well written, compelling
    topic –  topic and title clear. Topic rarely covered
    subjective rating – interesting
  • Great topic. I would like to attend.

My thoughts:
I think this was just a full field and I had another session accepted. The feedback was pretty sparse, but there is an interest here. I’ll probably submit this one again.

Benchmarking SQL Server with Powershell – 300 Level

(Not Accepted: Other sessions selected based on building a balanced program for track coverage, speaker coverage, topic coverage, and session rating.)

Abstract:
Performance is key for any database system. As professionals, we are constantly challenged to keep our servers performing at peak condition. The struggle is many administrators do not understand their current profile, so it is hard to know when performance degrades. Benchmarking SQL Servers is a vital task that is often overlooked because it is either not understood or difficult to manage. This session will cover the important metrics to measure for an effective benchmark and will also demonstrate how a benchmark can be captured using Powershell. Attendees will leave with a clear plan of how they can measure their SQL Server performance, how that measurement can be automated, and how to develop an action plan to dynamically manage these tasks at scale.

Prereqs:
Understanding of SQL Server performance fundamentals, such as perfmon counters and wait statistics. Basic understanding of Powershell.

Goals:

  • Review what constitutes a good SQL Server benchmark. This will cover perfmon counters and wait statistics.
  • Demonstrate how this benchmark can be collected with Powershell.
  • Show how the benchmark can be automated and reported on.

Feedback:

  • abstract: one grammar error in abstract, but detailed;
    topic interesting
    subject rating: I like this topic
  • I would attend this session.
  • Topic is a good and relevant option.
    Level is good for prereqs and goals.
    Abstract is ok, with a few wording and grammar choices that could be improved/changed to make reading easier (ie. ‘performing at peak condition’ and  ‘The struggle is (that) many… ‘.

My thoughts:
Again, a full field. This is a session of mine that has evolved over the past year. It started as more PowerShell focused and has become more "how to do a benchmark" focused, with PowerShell as the supporting actor. The grammar comments are something I continue to work on, because I tend to get waaaaaaay too verbose when I write and put myself into grammatical corners (see what I did there?). I would submit this again, because it appears to have been well received.

SQLPS In Depth – Level 200

(Not Accepted: Higher rated session selected)

Abstract:
A big hurdle for using Powershell and SQL Server together is the SQLPS module. Both old and new users of Powershell don’t completely understand its capabilities. There’s also the concern of how Powershell, SQLPS, and SQL Server all interact with one another. In this session, we’ll talk about the cmdlets you may not know about, tricks to save time using the provider, and even a few gotchas on how the provider works that can save you some time and energy. We will go in depth on what happens in SQL Server when commands are run using the SQLPS module. When we’re finished, you will have a deeper understanding of how you can use SQL Server and Powershell together, along with best practices to integrate the module with your automated tasks.

Prereqs:
Minimal familiarity with Powershell, basic familiarity with SQL Server

Goals:

  • Review the SQLPS provider functionality and go over what it is and how it works.
  • Cover the cmdlets in SQLPS, how they are used and gotchas in their implementation.
  • Demonstrate practical applications for both the provider and cmdlets for SQL Server automation.

Feedback:

  • I would like to attend this session.
  • The outline seems to clearly describe the contents of the presentation.  The target audience may not be big enough to support this session.  The subject matter does not appear to match the experience level (somewhat advanced topic for a 200 level session).
  • abstract: detailed
    topic: goals not compelling
    Subjective Rating : so and so

My thoughts:
The biggest takeaway here is "The target audience may not be big enough to support this session." I think that's true, and the reason is a double-edged sword. Many view PowerShell as a niche topic, so there might not be as much demand for this. However, the whole reason I talk about PowerShell and SQL Server (and submit sessions like this) is to try and move it beyond being a niche topic. This session is intended to get more people excited about PowerShell by seeing what it can do and how to get over some of the hurdles of working with it.

That being said, I had an advanced PowerShell session at last Summit that was deeply technical and still had 300 people in the audience. Maybe it’s not quite as niche as I thought. However, I believe I need to focus on sessions with more general appeal for future Summits.

#TSQL2SDAY: SQL 2016 Direct Seeding

It's been a while since I've written a T-SQL Tuesday post, but thanks to Michael Swart(@MJSwart) for hosting this month's edition. The request is simple enough: blog about some aspect of SQL Server 2016. This is actually a good opportunity for me to write about a feature I've been meaning to get to for some time now and just haven't had the chance.

As I tell many people, I've been working with SQL Server since 1999 (version 7.0) and I can't remember being more excited about a SQL Server release. 2016 has SO many cool features, but more than that, they are features that people who use SQL Server have been requesting for a while now. It is hard for me to blog about just one feature since there are so many. However, there has been an improvement to Availability Groups that has not gotten a lot of coverage, and I think it is a game changer, especially for those of us who have to manage AGs on a day-to-day basis: direct seeding.

Setting Up Replicas

In SQL Server 2012 and 2014, creating an Availability Group could take a significant amount of work. One of the more tedious tasks is setting up your replica databases. This is because you need to restore your database to your replica node in a state close enough to the primary to allow synchronization to happen. It can take several log backup restores to hit that magic window where you can join the database on the secondary node. Then, you get to do it again on the next replica!
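
For contrast, the manual process looks roughly like this in Powershell, using the node, AG, and database names from the example below (the backup paths are made up, and a real log chain usually involves several files):

#Restore the full backup and enough of the log chain WITH NORECOVERY on the replica
Restore-SqlDatabase -ServerInstance 'RIKER' -Database 'WideWorldImporters' `
    -BackupFile '\\backups\WideWorldImporters_full.bak' -NoRecovery
Restore-SqlDatabase -ServerInstance 'RIKER' -Database 'WideWorldImporters' `
    -BackupFile '\\backups\WideWorldImporters_log.trn' -RestoreAction Log -NoRecovery

#...then join the restored database to the Availability Group on the secondary
Add-SqlAvailabilityDatabase -Path 'SQLSERVER:\SQL\RIKER\DEFAULT\AvailabilityGroups\ENTERPRISE' `
    -Database 'WideWorldImporters'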

Enter direct seeding in 2016. With this feature you no longer have to do any of the restores. You simply create your replicas with direct seeding enabled, then when you add a database to the AG, SQL Server will directly seed the database to your replica nodes. It’s surprisingly simple.

How do we enable this magic? It's part of the AG replica creation. We still need the prerequisites, so we'll start with the following (a rough Powershell sketch of these steps follows the list):

  • Create a cluster
  • Enable AlwaysOn for the SQL Service
  • Create endpoints and grant permissions on those endpoints
  • The database we’re adding must be in FULL recovery mode
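
Roughly, that setup looks something like this in Powershell, assuming the FailoverClusters and SQLPS/SqlServer modules are available (the cluster name, IP, service account, and database are placeholders; the node names follow the rest of this example):

#Build the cluster (cluster name and IP are placeholders)
New-Cluster -Name 'STARFLEET' -Node 'PICARD','RIKER' -StaticAddress '10.10.10.100'

#Enable AlwaysOn on each instance (this restarts the SQL Server service)
Enable-SqlAlwaysOn -ServerInstance 'PICARD' -Force
Enable-SqlAlwaysOn -ServerInstance 'RIKER' -Force

#Create and start a mirroring endpoint on each node, then grant connect to the service account (placeholder name)
foreach($node in 'PICARD','RIKER'){
    $endpoint = New-SqlHadrEndpoint -Name 'Hadr_endpoint' -Port 5022 -Path "SQLSERVER:\SQL\$node\DEFAULT"
    Set-SqlHadrEndpoint -InputObject $endpoint -State 'Started'
    Invoke-Sqlcmd -ServerInstance $node -Query "GRANT CONNECT ON ENDPOINT::Hadr_endpoint TO [STARFLEET\sqlsvc];"
}

#Make sure the database is in FULL recovery on the primary
Invoke-Sqlcmd -ServerInstance 'PICARD' -Query "ALTER DATABASE [WideWorldImporters] SET RECOVERY FULL;"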

I, unsurprisingly, have a Powershell script that will set this all up for me. Then we can create the Availability Group with the direct seeding setting. Currently, this is only supported using T-SQL (no wizard or Powershell support), but the syntax is documented in MSDN. Let's start by creating our Availability Group:

CREATE AVAILABILITY GROUP [ENTERPRISE]
FOR
REPLICA ON 'PICARD' WITH (
 ENDPOINT_URL = 'TCP://PICARD:5022', 
 AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, 
 FAILOVER_MODE = AUTOMATIC,
 SEEDING_MODE = AUTOMATIC
),
'RIKER' WITH (
 ENDPOINT_URL = 'TCP://RIKER:5022', 
 AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, 
 FAILOVER_MODE = AUTOMATIC,
 SEEDING_MODE = AUTOMATIC
)
LISTENER 'ENTERPRISE' (WITH IP (('10.10.10.101','255.255.255.0')))

ALTER AVAILABILITY GROUP [ENTERPRISE] GRANT CREATE ANY DATABASE

--Run these on the secondary node
ALTER AVAILABILITY GROUP [ENTERPRISE] JOIN
ALTER AVAILABILITY GROUP [ENTERPRISE] GRANT CREATE ANY DATABASE

 

There are two important lines to call out here:

  • SEEDING_MODE = AUTOMATIC
  • ALTER AVAILABILITY GROUP [ENTERPRISE] GRANT CREATE ANY DATABASE

SEEDING_MODE is what enables direct seeding, but we also need to grant the AG permission to create databases on the node. At this point, though, we only have an empty Availability Group spanning our two replicas. It's time to add a database and watch the magic happen.

Hands Off

Once we’ve created the AG, we just add the database to it:

ALTER AVAILABILITY GROUP [ENTERPRISE]
ADD DATABASE [WideWorldImporters]

Once we’ve done this, SQL Server takes over and pushes the database out to the secondary replicas:

[Screenshot: the database seeding out to the secondary replica]

That’s it! SQL Server handles all the replication for you. All is not sunshine and rainbows, however. There are still some gotchas to keep in mind:

  • You need to create the Availability Group with all of its replicas at the start. You cannot add replicas at a later point (I thought I had done this, but it did not work in more recent testing).
  • The AG must also be set up before you add any databases. You can add the database as part of the AG creation or after, but you have to have all your nodes configured before adding the database to allow direct seeding to work.
  • This is a data operation, so we "canna' change tha' laws a' physics, Captain!" If you add a database to an AG with direct seeding, SQL Server still has to copy all the bits and bytes across the wire to the replicas, which means nodes in other geographic locations could be a problem.

Overall, direct seeding is pretty cool, but still feels a little raw to me. Personally I’d like to be able to seed to a replica I add to the cluster if I desire, but that does not work right now. The lack of GUI and Powershell support (both cmdlets and SMO) is also disappointing. I can still use it, but I’d rather have a simpler way to do it.

For some additional reading, Erik Darling wrote about this exact feature for T-SQL Tuesday as well, so his post gives you another view of it. Otherwise, that's it for me on this T-SQL Tuesday. Thanks to Michael Swart for hosting this month. Make sure you check out the rest of the T-SQL Tuesday posts!

Azure SQL Databases with #Powershell: Exporting and Importing

Previously we went over how Azure handles regular backups of your Azure SQL Databases. It is such a huge weight off an administrator’s shoulders when we only have to worry about restoring a database if something goes wrong. While many DBAs might struggle with letting go of the need to control and manage their backups, I think most of us will embrace the freedom from the tedium and worry of keeping an eye on this part of our disaster recovery.

There still is a need to take a backup of some sort. Over my career, while I have used backups for protecting my data, I have also used them for other tasks. Sometimes it is to snapshot the data at a point in time, such as before a code release or a major change. Other times a backup can serve as a great template for creating a new application database, especially if you have a federated database model. Whatever your use case, there are times we would need to snapshot a database so we can restore from it.

Exports

One approach would be to just mark down the time you want to use as your backup and restore from there, but this approach could be difficult to control and be tricky to automate. Azure offers us a better option: exporting the Azure SQL database in question to blob storage. We can restore (or, more precisely, import) this export to a new database.

Running an export is a simple call to the Start-AzureSqlDatabaseExport cmdlet. Just like the restore cmdlet, it starts the process in the Azure environment, running in the background while we do other work. To run an export, we need the following information:

  • Azure SQL database to export
  • The administrative login for the server hosting your Azure SQL Database (which we will wrap in a server connection context)
  • The storage container information

The only mildly frustrating thing with the export is that we need to use cmdlets from both the Azure module and the AzureRM module (assuming your storage blob is deployed using the resource manager model). Because of this, make sure you run Add-AzureAccount and Login-AzureRMAccount before you get started.
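
In practice, that just means logging in to both sides up front:

#Classic (service management) cmdlets, used for the export itself
Add-AzureAccount
#Resource Manager cmdlets, used to fetch the storage account key
Login-AzureRMAccount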

We first need to create a connection context for our Azure SQL Database instance, using a credential for our admin login and the server name:

$cred = Get-Credential
$sqlctxt = New-AzureSqlDatabaseServerContext -ServerName msfazuresql -Credential $cred

Once we have established our SQL connection context, we will then need to set our storage context using a combination of AzureRM and Azure cmdlets.

$key = (Get-AzureRmStorageAccountKey -ResourceGroupName Test -StorageAccountName msftest).Key1
$stctxt = New-AzureStorageContext -StorageAccountName msftest -StorageAccountKey $key

Now we can start the export. Notice that we need a name for the export, which is used in the BlobName parameter.

$exp = Start-AzureSqlDatabaseExport -SqlConnectionContext $sqlctxt -StorageContext $stctxt -StorageContainerName sqlexports -DatabaseName awdb -BlobName awdb_export

Since this only starts the export, we need a way to check on the status. We can check using Get-AzureSqlDatabaseImportExportStatus. Oddly enough, the status cmdlet requires the username and password to be passed separately and does not take a credential object.

Get-AzureSqlDatabaseImportExportStatus -RequestId $exp.RequestGuid -ServerName msfsqldb -Username $cred.UserName -Password $cred.GetNetworkCredential().Password

And then there is our blob.

[Screenshot: the export blob sitting in the storage container]

There are two gotchas to keep in mind with both the export and the status. The first is that you cannot overwrite an existing blob, so make sure your blob name is unique (or get rid of the old one). Also, you cannot check on the status of an export that has finished. If you get an error, chances are your export has completed.

Imports

Once we have our export, we now have a “backup file” we can create new databases from. All we need to do is run an import of our blob. Just as for our export, we need some information for our import, which we will (unsurprisingly) run with Start-AzureSqlDatabaseImport.

  • The storage container and blob that we will import from
  • A destination server and credentials for the server
  • The name for our database

Now, since we are creating a new Azure SQL Database with the import, the process needs to define a service objective. By default, it will import the database at Standard 0 (S0), but you can define a higher or lower edition if you want. To simplify things, we will go with the defaults and use the contexts from above, so all we really need to do to kick off the import is:

Start-AzureSqlDatabaseImport -SqlConnectionContext $sqlctxt -StorageContext $stctxt -StorageContainerName sqlexports -DatabaseName awdb_imp -BlobName awdb_export

Which, when completed, gives us a new Azure SQL Database created from our export blob:

[Screenshot: the new database created from the export blob]

The Secret Sauce

What makes this black voodoo magic work? Is this some proprietary technique Microsoft has snuck in on us? Surprisingly, this is a bit of technology that has existed for some time now as part of SQL Server Data Tools: the BACPAC. A BACPAC is essentially a logical backup of a database, storing the schema and data as SQL statements.

This differs from a typical SQL Server backup, which stores your database pages directly in a binary format. Because of this, native backups are smaller and can be made/restored faster. However, they are more rigid, as you can only restore a native backup in specific scenarios. A logical backup, since it is a series of SQL statements, can be more flexible.
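
To get a feel for the format outside of Azure, you can export a BACPAC from a local SQL Server with SqlPackage.exe; roughly something like this (the SqlPackage path and database name are placeholders and will vary with your SSDT/DacFx installation):

#Path to SqlPackage.exe varies by version - adjust for your install
$sqlpackage = 'C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe'
& $sqlpackage /Action:Export `
    /SourceServerName:'localhost' `
    /SourceDatabaseName:'AdventureWorks2014' `
    /TargetFile:'C:\Temp\AdventureWorks2014.bacpac'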

I don't know why Microsoft went with BACPACs over some native format, but because they did, we can also migrate a database from an on-premises SQL Server to Azure SQL Database. This leads to a common question I get: "How can I copy my database up to Azure SQL Database?" I want to look at this in my next post. Tune in next week, when we will create a BACPAC from a regular SQL Server database and migrate it up to Azure SQL Database.

Azure SQL Databases and #Powershell: Database Restores

A database administrator's first priority is to ensure their data is protected in the event of a disaster. Typical scenarios range from someone forgetting a WHERE clause to completely losing the data center that houses our servers. We are charged with thinking through all the possible scenarios and making sure we can support our company's Recovery Time and Recovery Point Objectives (RTO and RPO). This sort of responsibility is a big part of why being a DBA is a tough job.

Owning our servers gives us a lot of control over how our disaster recovery is constructed, but also a lot of responsibility. This can be a real challenge. We already know that cloud services, like Azure SQL Database, take that responsibility away by managing the infrastructure behind the scenes. These services also provide much of the disaster recovery protection for us, in a way that is more comprehensive and (in most cases) more cost effective. In this post, we will review these options and how they can be managed using Powershell.

A Big investment

The most fundamental form of disaster recovery is database backups and restores. Typically, setting up backups is a lot of work. DBAs need to make sure there's enough storage available for backups, create schedules that accommodate business operations and support RTOs and RPOs, and implement jobs that execute backups according to those schedules. On top of that, there is all the work of handling failed backups and making sure disk capacity is always large enough. It is a huge investment, but a necessary one, as losing a database can spell death for a company.

This is one of the HUGE strengths of Azure SQL Database. Since it is a service offering, Microsoft has already built out the backup infrastructure for you. All that stuff we talked about in the previous paragraph? If you use Azure SQL Database, you do not have to do any of it. At all.

What DBAs still need to manage is being able to restore databases if something happens. This is where Powershell comes into play. While we can definitely perform these actions using the portal, it involves a lot of clicking and navigation. I would much rather run a single command to run my restore.

Executing a Restore

Before we get started, we need to first load up the classic Azure module. While our work so far has been done with the AzureRM module, there are many Azure SQL Database tasks that can only be managed using the Azure module, and database restores are one of them. Unfortunately, it also means we have to log in to our Azure account a second time, as authentication is managed separately between the modules. Using Add-AzureAccount is just like Login-AzureRMAccount and will bring up a graphical dialog for you to sign in with.

Import-Module Azure
Add-AzureAccount

[Screenshot: the Azure sign-in dialog]

To restore a database, we just need three things to start: the server where the original database came from, the name of the database we want to restore, and a name to restore the database as. When you restore an Azure SQL Database, you have to create a database that does not already exist, so there is no WITH REPLACE option. Starting the restore is then just a matter of calling the right cmdlet:

Start-AzureSqlDatabaseRestore -SourceServerName msfazuresql -SourceDatabaseName msfsqldb -TargetDatabaseName msfsqldb-restore

[Screenshot: output from Start-AzureSqlDatabaseRestore]

This only initiates the database restore operation and we have to wait for it to complete. To check on the status, we can use another Powershell cmdlet to display that information.

Get-AzureSqlDatabaseOperation -ServerName msfazuresql -DatabaseName msfsqldb-restore | Sort-Object LastModifyTime -Descending | Select-Object -First 1

[Screenshot: output from Get-AzureSqlDatabaseOperation showing the restore status]

All the pipeline calls are there to filter out additional operations and only show the most recent, as the cmdlet returns all restore operations that have ever happened for that server and database. It is difficult to predict how long a restore will take, as this is all handled behind the scenes. I have not tested restore times, but my test databases usually take a few minutes to restore.

There are other options we can declare, depending on how we want to manage our restore (a sketch combining them follows the list):

  • A point in time, down to the minute, that we want to recover to.
  • An optional destination server that we want to restore the database to.
  • A dropped database as our source.
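
For example, combining the first two, a point-in-time restore to a different server would look something like this (the target server name is made up, and you will want to verify the exact parameter names with Get-Help Start-AzureSqlDatabaseRestore):

#Restore the database as it looked 24 hours ago onto another server
Start-AzureSqlDatabaseRestore -SourceServerName msfazuresql -SourceDatabaseName msfsqldb `
    -TargetServerName msfazuresql2 -TargetDatabaseName msfsqldb-pitr `
    -PointInTime (Get-Date).AddDays(-1)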

One other gotcha to be aware of, albeit a minor one, is that you can not perform a restore of an Azure SQL Database until 15 minutes after the creation of the database. This should not be a concern for most, but it can get in the way of rapidly spinning up databases for demo purposes (ask me how I know!).

Other Scenarios

Simple database restores are the entry point for most recovery scenarios, but the platform also gives us options for broader disaster recovery and for managing copies of our database. In the next post, I want to dive into a couple of cmdlets we can use to copy our Azure SQL Database, as well as make a "backup file" in Azure blob storage that can be used to store a copy of our database in a particular state.

Azure SQL Databases with #Powershell: Managing your Databases

I meant to write this post a month ago, but then life happened. This is a continuation of my short series on Azure SQL Database and how you can manage it with Powershell. If you want, you can go back and read the previous two posts: Getting started and creating your database.

Administrative Access

Now I’d like to show you how to connect to the database, but since it’s been so long since I’ve looked at this demo, I’ve forgotten my administrative password! While it would be simple to blow away the Azure SQL server and database, there are many situations where this is not possible. Fortunately we have an easy way to reset the administrative password using the cmdlets.

$pw = ConvertTo-SecureString -AsPlainText -Force '********'
Set-AzureRmSqlServer -ResourceGroupName MSFAzureSQL -ServerName msfazuresql -SqlAdministratorPassword $pw

Walking through this, we just need to create a secure string for our password and then use the Set-AzureRmSqlServer cmdlet, passing the secure string to the -SqlAdministratorPassword argument. It is as easy as that, and we do not even need to know what the previous password was. With this in mind, I also want to call out that you can only change the password and not the admin login name. While this is not such a big deal, be aware that once you have an admin login name, you are stuck with it.

Connecting and Querying

Armed with our reset password, now we can query the database. The easiest way to do this is to connect to your database with SQL Server Management Studio:

[Screenshot: the SSMS connection dialog for the Azure SQL server]

Remember that your server name will be what you named it followed by ".database.windows.net". You must have SQL Server Authentication selected, and then just enter your login and password. SSMS will then connect to your Azure SQL Database, and the rest is very much like managing your on-premises databases.

What about using Powershell? Again, not all that different. Let’s run a basic query to get some information about our database using Invoke-SqlCmd:

$sql = @"
SELECT
DATABASEPROPERTYEX ( 'msfsqldb' , 'Edition' ) as Edition
,DATABASEPROPERTYEX ( 'msfsqldb' , 'ServiceObjective' ) as ServiceTier
,DATABASEPROPERTYEX ( 'msfsqldb' , 'Version' ) as Version
,convert(bigint,DATABASEPROPERTYEX ( 'msfsqldb' , 'MaxSizeInBytes' ))/1024/1024/1024 as MaxSizeGB

GO
"@

Invoke-Sqlcmd -ServerInstance msfazuresql.database.windows.net -Database msfsqldb -Username msf -Password '*********' -Query $sql

[Screenshot: query results showing the edition, service tier, version, and max size]

As you can see, working with Azure SQL Database after it has been created and configured is not much different than your typical SQL Server installation. There is a lot of writing out there on features and use of Azure SQL Database, so I will skip that here. The best place to start is Grant Fritchey(@GFritchey) and his Azure blog posts.

Remaining Flexible

The last item I want to cover here is managing your database size. One of the promises of the cloud and Azure is the flexibility to change your resources as your needs demand and not be stuck on whatever hardware you purchased when you first built your data center. There are a couple items you should consider when first creating your Azure SQL Database, but the great thing is that you are not tied into many of those options permanently.

There are several service tiers available to you, with a mix of standard and premium offerings. Obviously, each service tier has a different price point, so how do you know what is the right choice for your database? What complicates this is that you have a mystery metric for performance: Database Transaction Units. Ostensibly this is the number of transactions per second your database should be able to provide, but there's more to it than that. My general recommendation is to base your choice on the features you need and the user connections you expect. If you then find you have a performance bottleneck, you can adapt upwards incrementally.

How do you make this change? It is just one line of Powershell:

Set-AzureRmSqlDatabase -ResourceGroupName MSFAzureSQL -ServerName msfazuresql -DatabaseName msfsqldb -RequestedServiceObjectiveName 'S1' 

Then it is off to the races. The actual operation is not instantaneous and Azure needs some time to allocate the resources and adjust the settings on its side. The great news here is this does not interfere with the operation of your database at all and it will be available while the resizing happens. Existing connections will be maintained and even queries that are running will continue to run. The end result will simply be:

[Screenshot: the database now running at the new service objective]

Ease of Use

As you can see, we can manage the access and scale of Azure SQL Database with a few straightforward commands. While these changes can also be managed through the web portal, I find the Powershell approach simpler, as we can skip all the browsing through blade windows and the troublesome clicks. While these actions are not really ones we would automate, using the cmdlets also makes our actions more precise and consistent.

In my next post, I want to cover a topic near and dear to the hearts of DBAs everywhere: database restores. While backups for Azure SQL Database are managed for us, we need the ability to restore our data to a point in time of our choosing. Fortunately this is fairly easy to accomplish with Powershell. Stay tuned, and I promise you will not have to wait another month for me to demonstrate this.

Azure SQL Databases with #Powershell: Components

As with any project, before we can start, we need to know what we have to work with. Last post, we went over loading the AzureRM module and getting connected to our Azure account. Before I go over how to create Azure SQL databases, I'd like to talk about exactly what we will create. It's the old adage: measure twice, cut once.

The very first thing to create is an Azure resource group, as we are going to create our Azure SQL Database using the Azure Resource Manager deployment model. There's nothing really fancy about this; it is just a container that can house multiple resources. The resource group can hold any Azure resource, not just our databases. Creating one using Powershell is simple; we just need to know what to call it and which Azure region to create it in:

New-AzureRmResourceGroup -Name MSFAzureSQL -Location 'West US'

Once we have our resource group, we can start deploying our Azure SQL database components.

Parallels

From a logical standpoint, working with Azure SQL databases is not very different from setting up a SQL Server instance and database in your environment. There are 3 main components we will need to deploy:

  • A server
  • A database
  • A firewall (server or database)

[Diagram: the server, database, and firewall components of an Azure SQL Database deployment]

Starting with the server, there are only a few things we need to know. The first is a server name so we can identify and connect to it. After that, we need to know what version of Azure SQL databases we want. There are two options here, 11 and 12. Yes, those are fairly abstract numbers, though version 12 is currently what is being updated and has many of the features that will be included in SQL 2016. Finally we need to create an administrative login, the equivalent of the sa login found in all SQL Servers. This all boils down to just two lines of Powershell:

$cred = Get-Credential
New-AzureRmSqlServer -ResourceGroupName MSFAzureSQL -Location 'West US' -ServerName msfazuresql -SqlAdministratorCredentials $cred -ServerVersion '12.0'

Running this, in my opinion, beats all the clicking and selecting you have to do in the portal.

Next up, we need to create our database. As with creating a server, we need a name for the database. We also need the resource group and server name we are creating the database in. We will then need to declare a service level objective. While there is a performance aspect to this selection, it rests more on what sort of connection load you will have, what kind of features you need, and what sort of disaster recovery support you require. The nice thing about service level objectives is that they are easy to change once you have your database. Finally, we need to select an edition (again, this relates to features). To deploy, our command looks like this:

New-AzureRmSqlDatabase -ResourceGroupName MSFAzureSQL -ServerName msfazuresql -Edition Standard -DatabaseName msfsqldb -RequestedServiceObjectiveName 'S0'

Security and Access

The final component to create is a firewall. By default, your Azure SQL Database has its access blocked to anything outside of it. Yes, this includes your access via Management Studio. To open this access, we need to create firewall rules that allow specific IP addresses to connect. There are two types of firewall rules we can use, server and database, and they are what you would guess: the server rule allows access from that address to any database on your server, while the database rule only works for that specific database.

For our purposes, the server rule will work just fine. Creating a database rule is not really different; it just applies to a different scope. To create a rule, we need the resource group name, server name, rule name, and an IP address range. For this, we will use a trick I picked up from John Milner(@JFRMilner) to get our external IP and create a firewall rule for it:

$localIP = (Invoke-WebRequest ifconfig.me/ip).Content.trim()
New-AzureRmSqlServerFirewallRule -ResourceGroupName MSFAzureSQL -ServerName msfazuresql -FirewallRuleName 'msffirewall' -StartIpAddress $localIP -EndIpAddress $localIP

Now, there is also a special firewall rule we can use. This one is handy because it will make your database accessible to all Azure services. This saves you the headache of creating separate rules for websites, PowerBI, or any other service you want to use. Just call the firewall rule cmdlet with -AllowAllAzureIPs:

New-AzureRmSqlServerFirewallRule -ResourceGroupName MSFAzureSQL -ServerName msfazuresql -AllowAllAzureIPs

Fully Functional

With that, we now have a functioning Azure SQL database. To connect to it, you just need to use the server name you created followed by ".database.windows.net", along with the admin login you created. Put that into Management Studio and you are good to go:

[Screenshot: connecting to the new database in Management Studio]
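
The same details work from Powershell. Assuming the SQLPS/SqlServer module is loaded for Invoke-Sqlcmd, a quick connectivity check with the admin credential created above looks something like:

Invoke-Sqlcmd -ServerInstance msfazuresql.database.windows.net -Database msfsqldb `
    -Username $cred.UserName -Password $cred.GetNetworkCredential().Password `
    -Query 'SELECT @@VERSION AS AzureSqlVersion;'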

Oh, you want the full script?

New-AzureRmResourceGroup -Name MSFAzureSQL -Location 'West US'

$cred = Get-Credential
New-AzureRmSqlServer -ResourceGroupName MSFAzureSQL -Location 'West US' -ServerName msfazuresql -SqlAdministratorCredentials $cred -ServerVersion '12.0'

New-AzureRmSqlDatabase -ResourceGroupName MSFAzureSQL -ServerName msfazuresql -Edition Standard -DatabaseName msfsqldb -RequestedServiceObjectiveName 'S0'

$localIP = (Invoke-WebRequest ifconfig.me/ip).Content.trim()
New-AzureRmSqlServerFirewallRule -ResourceGroupName MSFAzureSQL -ServerName msfazuresql -FirewallRuleName 'msffirewall' -StartIpAddress $localIP -EndIpAddress $localIP
New-AzureRmSqlServerFirewallRule -ResourceGroupName MSFAzureSQL -ServerName msfazuresql -AllowAllAzureIPs

As you can see, deploying an Azure SQL database is a pretty simple process; it is just a matter of knowing what you need. Next up, we will go over how you can manage and alter your Azure SQL Database. After all, the cloud promises us flexibility, and using Powershell makes it easy.

Azure SQL Databases with #Powershell: Getting Started

I'm now a month into my new gig at UpSearch and loving it. The most interesting thing about moving from the corporate world to consulting is the variety of work that I get to do. As tech folks, part of the reason we work in this field is all the things we get to learn.

One new area for me has been Azure SQL Databases. Sure, I’ve known about them for a while now, but it was never something that was needed by my corporate masters. Now, with several different clients to work for, it did not take long for Azure’s platform-as-a-service database offering to be a good fit for a project. Since this is me we are talking about here, I made a point of learning how to create and manage these databases in Powershell. I’d like to share some of those lessons here.

The Tools

Before we get started, we need the right cmdlets for working with Azure. With Windows Management Framework 5.0, it is actually really easy to install and get started with Azure and Powershell. The full process is documented up on Microsoft’s site. It takes about 30 minutes to get everything installed and configured.

If you have worked with the Azure cmdlets before, you might be surprised to see that the instructions reference two different modules, Azure and AzureRM. Actually, if you look up the available Azure modules, you will see a lot more:

Get-Module -ListAvailable Azure* | Select-Object ModuleType,Version,Name

[Screenshot: the list of installed Azure modules]

What is this all about? It took me a bit of digging, but what it boils down to is that Microsoft made a fundamental change to how things are managed within Azure. You will now find documentation on two different deployment models: Classic Deployments and Resource Manager Deployments. The two sets of Powershell cmdlets reflect these models: anything for Classic Deployments is handled by cmdlets in the Azure and Azure.Storage modules, while all the Resource Manager Deployment work is handled by the AzureRM* modules.

Choose Your Path

The reason I call this out is to address one major hangup with working with Azure: the documentation. Because Azure changes so quickly, official documentation and other write-ups (like this one) quickly go out of date. When I was looking for information on how to do this, I kept finding blog after blog that covered Classic Deployments. Since I am just starting with this, there is no reason for me to go down the Classic Deployments path. While those deployments are still viable and perfectly valid, it makes more sense to me to focus on using Azure the way Microsoft intends it to be used.

To get started, we will first connect to our Azure RM account. I will be using my MSDN account. From my Powershell window, I’ll run the following commands:

Import-Module AzureRM
Login-AzureRMAccount

I am then greeted by a GUI login window, where I enter my credentials.

[Screenshot: the Azure login window]

Hey, wait. A GUI? With Powershell? Yeah, this surprised me as well, because to me GUIs are an aberration when it comes to Powershell. You cannot really automate them, and that throws a wrench into the whole works. This is probably my biggest gripe about using these cmdlets. The nice thing is that once you log in, your session is maintained and you can make use of a session profile.

Thanks go out to Adam Bertram(@adbertram) for helping me with this: you have the ability to save off a profile file that keeps a lot of this info for use by other Powershell sessions. This gives us the ability to connect and run commands without the need for this GUI nonsense. Accomplishing this is just a matter of using the right cmdlet:

Save-AzureRmProfile -Path C:\Users\Mike\mfalprofile.apf
Select-AzureRmProfile -Path C:\Users\Mike\mfalprofile.apf 

Note that the name of the file is pretty meaningless. I used the apf extension because of my own personal taste, but you can do whatever you want. The point is to have the file. Once you have it, you can load it up in future or different Powershell sessions and avoid the login GUI completely. The real hurdle here is that, regardless of how you go about this, you need to log in graphically at least once to get credentials.

First Steps

With our connection established, we can now start creating SQL databases. Before we do, however, we need to talk more about what we are going to create. While the beauty of working with public cloud providers such as Azure is that it is really easy to spin up resources, that does not negate the importance of understanding how and what we are deploying to. It is just as important to "measure twice, cut once" when it comes to the cloud. My next post will focus on the different components that make up Azure SQL databases, deployment considerations, and how we end up creating SQL databases in the cloud.

Dynamic CSV Imports with #Powershell

A common task for data folks is loading comma separated values files into databases. It can be tedious, having to figure out column layouts and data formats and to sort out bad data. This sort of Extract, Transform, and Load (ETL) task is something we would do in SSIS or a similar loading tool, but those tools require us to review the data and have a good idea of what the staging table will look like. Most of the time with CSVs, we just need a quick dump of the file into a database table so we can work with it using T-SQL, which means I don't want to spend a lot of time figuring out what a staging table is going to look like.

I know SQL Server Management Studio will try and help you out with the Import Data task. I have used this pretty frequently myself. However, like any other GUI tool out there, you cannot automate it. It builds a one-time SSIS package (which you can save) and executes it, but it does not save much time if I have to load multiple CSV files with differing formats.

We all know my weapon of choice is Powershell. I am sure there are other approaches to this problem and my ETL/BI pals are probably reading this post in dismay (if they are reading it at all!). However, I’m pretty comfortable with the patterns in Powershell to script this process, letting me quickly load multiple CSV files without having to do a lot of work.

All Around the Table

The challenge is getting the staging table in place. We need a process that can read the CSV header and build a CREATE TABLE statement for us. Once we have that, the load is actually pretty easy. Assuming the CSV has a header row, I then use this bit of Powershell code to figure out the different columns I’m going to load into:

$source = '<Path to your source file>'
$Header = (Get-Content $source | Select-Object -First 1).Split(',')

Two lines of Powershell and now I have a collection of all my column names as a string array. This is where the fun begins. Now, I can not assume that the column headers will play nice as SQL column names, so I need to do a little hygiene work. I will loop through the collection and strip out any non-alphanumeric characters:

$CleanHeader = @()
foreach($h in $Header){
    $CleanValue = $h -Replace '[^a-zA-Z0-9_]',''
    $CleanHeader += $CleanValue
}

I could use ForEach-Object here to simplify the code, but I expand it out in a foreach loop because I might add some other logic to the header name clean up. I want to keep it simple for this example. The result is a new collection that has names I can use to create a staging table.

Next up, I will take this new collection and build out a SQL statement. This is a matter of looping through the CleanHeader collection and using those values. Since we can’t really tell what the format will be, each table column will be VARCHAR(255).

$StagingTableName = 'CSVFileLoad'
$sql = @("IF EXISTS (SELECT 1 FROM sys.tables WHERE name = '$StagingTableName') DROP TABLE [$StagingTableName];")
$sql += ("CREATE TABLE [$StagingTableName]($($CleanHeader[0]) VARCHAR(255)")
$CleanHeader[1..$CleanHeader.Length] | ForEach-Object {$sql += ",$_ VARCHAR(255)"}
$sql += ");"

The SQL format follows a typical pattern (an example of the generated statement follows the list):

  1. If the table name already exists, drop it (we’re doing a drop and replace of the data).
  2. Grab the very first column name in the collection and use it to start the CREATE TABLE.
  3. Loop through the remaining column names, putting a comma in front of each column.
  4. Close the CREATE TABLE statement with a closing parenthesis and a semicolon.
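
For a hypothetical CSV whose header row is Name,Age,City, the joined statement comes out roughly as:

IF EXISTS (SELECT 1 FROM sys.tables WHERE name = 'CSVFileLoad') DROP TABLE [CSVFileLoad];
CREATE TABLE [CSVFileLoad](Name VARCHAR(255),Age VARCHAR(255),City VARCHAR(255));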

This block of code is the key, allowing me to create any table structure to import my data with.  Once I have it, I just execute it using Invoke-SqlCmd as I would any other T-SQL statement, and my staging table is now in place.

Stick the Landing

Once the staging table is in place, the load is actually fairly simple. You have lots of options, but the easiest one for my money is to build a BCP call and execute that. BCP is a simple utility that has been around for a while, mostly because it is effective. The build is not difficult:

$cmd = "bcp '$Database.dbo.[$StagingTableName]' in '$Source' -S'$SQLServerName' -F2 -T -c -t','"
Invoke-Expression $cmd

For those unfamiliar with the BCP syntax, all I do is declare the destination table, use the keyword 'in' to indicate that I am loading data from a file into a SQL Server table, and then give the SQL Server name. The remaining parameters set the following load options:

  • -F2: Start the load with the second row, skipping the header
  • -T: Use Windows authentication to connect
  • -c: The input file is character data
  • -t’,’: The column terminator is a comma

This is how I do it, but there are plenty of other options. Allen White(@SqlRunr) has written about using the .Net methods to perform a Bulk Load. You could also use the BULK INSERT command run from an Invoke-SqlCmd call. There are probably a dozen ways to skin this particular cat. The choice is up to you.

Now For Your Parting Gift

As with many of these patterns that I blog about, I have a function I have written around it. This one is Import-CsvToSqlTable. This function has some additional pieces that are best discussed by reviewing the parameters:

  • InstanceName – SQL Server Instance you are going to load to.
  • Database – Database name you are going to load to.
  • SourceFile – Source file to load
  • SqlDataType – Data type the staging table columns will be created as. Defaults as VARCHAR(255).
  • StagingTableName – Table name you will create and load into.
  • Append – If declared, the staging table will not be dropped if it already exists.

The function wraps up all the different pieces we talked about above, so we can now call it for a simple data load. For this load, I am using a file from https://catalog.data.gov/ with New York City demographics:

[Screenshot: the Import-CsvToSqlTable call and its output]
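
To give a rough idea, that call looked something like this (the database, file path, and table name here are made up):

Import-CsvToSqlTable -InstanceName 'SHION' -Database 'ImportDemo' -SourceFile 'C:\Temp\NYC_Demographics.csv' -StagingTableName 'NYCDemographics'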

Nothing fancy, and the output shows the instance, database, table, and count of rows in that table. If we look in SQL Server, we will see the table created with VARCHAR(255) columns, as expected:

[Screenshot: the staging table in SQL Server with VARCHAR(255) columns]

We can do more, though. Let's say we know that the file contains numbers in all the columns. It might make more sense to load them into a FLOAT or a NUMERIC datatype, so we'll declare that now. I'm also going to add the -Verbose switch so I can see the detail of how the function processes the import:

[Screenshots: the Import-CsvToSqlTable call with -SqlDataType and -Verbose, and the detailed output of the load]
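
Again roughly reconstructed, with the same made-up names as before, that call looks like:

Import-CsvToSqlTable -InstanceName 'SHION' -Database 'ImportDemo' -SourceFile 'C:\Temp\NYC_Demographics.csv' -StagingTableName 'NYCDemographics' -SqlDataType 'NUMERIC(18,2)' -Verbose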

With the right parameters, not only do we have control over what we create, but we can see how it is created and loaded into SQL Server. This can help with troubleshooting in case we try to load the wrong data type or there is some other error in the load.

This function can be useful for quick and dirty data loads, importing CSVs that we don’t want to waste a lot of time figuring out. I wrote it for this express purpose, because I did not want to get tied up clicking on buttons in the Import Data Wizard. However, I definitely do not recommend this as a full on replacement for ongoing ETL processes and standardized data loads. For any ongoing process where you know the format of your data files, there are much better options and approaches. As with any task, it is important that you use the right tool for the job, but it never hurts to understand your options.

Getting Database File Info with #Powershell

As database administrators, we are often obsessed with free space. Whether it's on disk or in our database files, we know that if that precious resource runs out, everything we manage can come to a crashing halt. There are also other hidden impacts, like things slowing down while files grow and possible downtime to add storage to support the needs of our databases. This is all part of what a former manager of mine called "DBA 101", and staying on top of it is one of our higher priorities.

Fortunately, there are plenty of tools for us to manage these metrics, and it comes down a lot to personal choice. Usually this can be managed via some monitoring suite, but there are times we need a handy tool to answer questions quickly. For disks, I have a function I borrowed from the web and put into my profile called Get-FreeSpace. Using it, I can quickly answer questions about how much free space is available on any of my servers.

The Database Challenge

Databases are a little more difficult. Sure, we can use a variety of queries to gather this information. We could always use SSMS to look at a database's properties. Personally, I like to use Glenn Berry's(@GlennAlanBerry) DMV scripts. The problem is that it's difficult to get a concise report across all of our databases with any of these. Sure, we can create scripts that execute the query against each database, looping through them all, but that's a hassle. I want something simpler.

Once I hit these limits using T-SQL, I did some investigation to see if Powershell offered any better alternatives. It’s one of my learning tools for Powershell: take a T-SQL task I’m familiar with and redo it in Powershell to see if it is easier and/or better.

To do this, I dove into the SMO object model. This gets a little .Net-y, but the good news is there are lots of properties we can use to get the information we are looking for. If we look at both the DataFile and LogFile classes, there are properties readily available for us. Both classes have UsedSpace and Size properties (both measured in KB), from which we can derive both available space and percentage used. All it takes is wrapping some collection and formatting logic around these objects and we're good to go. You can see my full function up on GitHub.
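
To show the idea without the full function, here is a stripped-down sketch of walking those SMO properties (it assumes the SMO assemblies are loaded, via the SQLPS or SqlServer module, and skips the cleanup and pipeline support the real function has):

$srv = New-Object Microsoft.SqlServer.Management.Smo.Server 'SHION'
foreach($db in $srv.Databases){
    #Gather both the data files (per filegroup) and the log files
    $files = @($db.FileGroups | ForEach-Object {$_.Files}) + @($db.LogFiles)
    foreach($f in $files){
        [pscustomobject]@{
            Database = $db.Name
            FileName = $f.Name
            SizeMB   = [Math]::Round($f.Size/1024,2)        #Size is reported in KB
            UsedMB   = [Math]::Round($f.UsedSpace/1024,2)   #UsedSpace is reported in KB
            PercUsed = [Math]::Round($f.UsedSpace/$f.Size*100,2)
        }
    }
}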

Reporting on Database Free Space

I've done a couple of new things with this function. The biggest is to build it for the pipeline. This is a significant step for me, because it makes the function more flexible. I always talk about how Powershell supports multi-server execution, so I try to build that into my functions when I can; this is no different. I also made the output a generic PSObject instead of a formatted table. This is key because it lets the user manage the output as they need to, a design concept that carries over from databases to Powershell quite easily.

So let’s use it. Running it for a single instance is easy enough:

Get-FreeDBSpace -InstanceName 'SHION' | Format-Table

[Screenshot: Get-FreeDBSpace output for a single instance]

I use Format-Table to give me this clean, table-style output, as the generic output is a list that can be hard to read. What's cool here is that, with a single Powershell command, I can quickly report out all the necessary database file size info. It gets better, of course. Because I designed the function to take a pipeline, I can run it for multiple instances with ease:

$Instances = @('SHION','SHION\ALBEDO')
$Instances | Get-FreeDBSpace | Format-Table

[Screenshot: Get-FreeDBSpace output for multiple instances]

How cool is that? As an administrator, this can be invaluable when I'm evaluating an environment or instance. Still, this is a lot of information to parse. Let's leverage the pipeline again and filter the report down to all the files that have less than 10 percent free (greater than 90 percent used):

$Instances | Get-FreeDBSpace | Where-Object {$_.PercUsed -gt 90} | Format-Table

[Screenshot: the filtered output showing files that are more than 90 percent used]

How cool is that? Now we’ve got an easy report showing all our databases that could be tight on space. But wait, there’s more! We’re going to use the pipeline one last time, this time to output an HTML report showing us the information we just displayed to the screen:

$Instances | Get-FreeDBSpace | Where-Object {$_.PercUsed -gt 90} | ConvertTo-HTML | Out-File .\FreeDBSpace.htm

[Screenshot: the HTML report of database free space]

Easing Our Burdens

As you can see, there's a lot of potential in how we can use this. The key to building any Powershell tool is making it as flexible and repeatable as possible. The building blocks are there; it is just a matter of how we assemble them.

You’ll often hear how you should use the right tool for the job. I love T-SQL and use it for many tasks, but this is a great example of how Powershell can complement T-SQL and give administrators a more robust way to manage their environments. Using this function, we can easily evaluate a new environment or manage our existing one. Our time is precious and tools like these can help us best use that valuable resource.

Three Things I Want From My Job

I’ve been off for a while and I apologize for that. The beginning of the year has been crazy and I am just now getting my feet back under me. The short version is I recently changed jobs and have started working for UpSearch(@UpSearchSQL) as a SQL Server Consultant. This is an exciting change for me, full of new challenges and opportunities.

Blah blah blah blah, right?

Without making light of the change, I know you all hear this every time someone starts a new gig. This post is not about that, but about some reflections I had about why I changed jobs, what satisfies me at work, and the reasons we do this stuff in the first place. I want to share these thoughts with you to hopefully help you broaden your perspective on why you are at your job and what it is that keeps you going back (besides the obvious paycheck).

Three Things

When it comes to our jobs, whether we’re interviewing for something new or looking back on our current position, we should have a list of what keeps us happy. After all, we measure and track statistics around our databases and compare them to a baseline. It should be no different for our careers. This list can be as detailed or broad as suits you, but you should have something. My list is actually pretty simple, consisting of three questions that I ask myself:

  • Am I compensated fairly?
  • Am I working on cool s#!t?
  • Am I respected?

These questions do not have simple answers, but they do have answers. As we go over each question, hopefully you start to see how they might apply to you.

Compensation

This probably seems the simplest. After all, we all get paid and we want to make sure our salary is competitive with the market, right? This definitely is something we should be thinking about and reviewing. It also takes a bit of self-honesty about our capabilities and how they match what is being paid for. Are we senior or mid-level talent? How much value do we drive for our employer? Salary is very much a two-way street. If we want to be paid fairly, we need to demonstrate our worth.

Of course, we also need to keep in mind that compensation is far more than just salary. I’ve worked jobs where I got paid pretty well, but had to fight to go to conferences like the PASS Summit. Some jobs had liberal vacation policies while others offered the minimum. When you consider your compensation, you have to ask yourself what keeps you happy from a larger perspective.

Coolness Factor

I’m the kind of guy that needs to be challenged. It is why I spend time blogging and presenting, as well as playing with different kinds of technology. This is how my work fulfills me. I could simply punch the clock, work from 8 to 5, and collect my check, but this wears thin pretty quickly. I need more in a job than just going through tasks by rote so I can go home at the end of the day.

This is why it is important that I get interesting work. I need problems to solve and challenges to overcome. To be clear, I understand that I will have to do the tedious stuff as part of any job; work like that needs to be balanced out, though. Heck, sometimes the challenge of managing the tedious stuff in a more efficient manner is engaging in and of itself. The key is that I want my job to help me grow, not leave me treading water. Job satisfaction is measured by this growth.

R-E-S-P-E-C-T

This last item is the linchpin. For the longest time, I was happy with the first two items on this list. Then one day, I sat up and realized that I was getting worn down by my job. I couldn't quite put my finger on why, because I was getting what I wanted out of compensation and the work was definitely cool and challenging. The problem was that I was struggling to go into work every day and my motivation was rapidly draining.

The key, I realized, was a matter of respect. Whether it is peers, team members, or management, I need the people I work with to respect my time and my knowledge. This may seem a little arrogant, but it is important to recognize your own value. Companies hire me to be a database expert, and an expensive one when you boil it down. This means my time and opinion carry a certain amount of cost, and when that cost is being wasted because others in the company do not recognize that value, it becomes wearing.

Without trying to sound too negative, we need to understand our professional value. As with the compensation question, it takes a lot of self-honesty to value ourselves appropriately. Recognizing that value in ourselves is an important factor in job satisfaction, but it is not enough on its own; we also need others to realize it and respect it.

Can I get a little satisfaction over here?

Job satisfaction, while ultimately a binary yes or no answer, has a lot of degrees of measurement. After all, I can think of jobs I've had where the pay was fantastic but the environment sucked, or where the entire team thought I was a rock star but the work was not interesting or engaging. When I felt my job satisfaction going down, I could always track it back to one of these three items.

What is important here is that understanding what satisfies you at a job gives you a tangible item to resolve. It might simply be a matter of going to your boss and explaining what the problem is. This gives you something concrete to work on with your employer and is fair to them, because chances are they do not want to lose you and will work with you. It also may mean you need to find another job that fills that gap. In this case, when you sit down in an interview, you can have a conversation with the person across the table about what fulfills you.

Whatever your reasons are, it is important that you know why you get up in the morning (or other time) and go to work. Your reasons are probably different than mine, as these tend to be pretty personal. I hope that by giving you some insight into what job satisfaction means to me, it can help you build your own list of questions. Measure your job satisfaction, know what makes you happy, and then figure out how to get it.