Art of the DBA


A Month of SQLPS: Firewalls and Networking

All IT professionals have to deal with firewalls in some capacity or another. Protecting communication with servers is a fundamental component of any security plan. By default, whenever we install SQL Server we will need to do two things before we can talk to the instance: enable communication (usually TCP) and create a firewall rule allowing inbound traffic. Performing these steps is so standard that it only makes sense to automate them.

Enabling TCP communication is usually something handled within the SQL setup. It is one of the command line options, so this means we can include it in a config.ini and consistently enable it as part of our setup. Firewall rules are an OS operation, though, and are not part of the SQL Server setup. There have been command line options for a long while courtesy of netsh, but since Powershell 3 we’ve also had New-NetFirewallRule to execute this task:

New-NetFirewallRule -DisplayName "Allow SQL Server" -Direction Inbound -LocalPort 1433 -Protocol TCP -Action Allow
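For comparison, here is the same rule through the older netsh route mentioned above, along with the Powershell-based cleanup (the rule name is arbitrary and just matches the one used here):

```powershell
#The pre-Powershell 3 approach via netsh
netsh advfirewall firewall add rule name="Allow SQL Server" dir=in action=allow protocol=TCP localport=1433

#Removing the rule created by New-NetFirewallRule
Remove-NetFirewallRule -DisplayName "Allow SQL Server"
```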


The SQLPS module also provides its own pair of cmdlets for this task: Add-SqlFirewallRule and Remove-SqlFirewallRule. Respectively, these cmdlets will create and remove a firewall rule for SQL Server. The help files show only a handful of parameters to use. They also state a dependency on our friend, the Cloud Adapter Service. So let’s see how these work.

The command is easy enough to run:

Add-SqlFirewallRule -ServerInstance PICARD -Credential $cred

All we need to make this happen is an instance name and credential that has enough rights on the host machine to create a firewall rule. We could use an SMO Server object or a provider path to the instance. With any of these options, what we get is a firewall rule to the SQL Server application for that named instance:


This is actually an extremely convenient way to allow firewall access for our instance. Because it allows access specifically for the application executable, it helps minimize port management. However, that’s about all the control you get. The rule will always be for all your profiles, meaning you are opening access to anyone who can reach the machine. This kind of blanket security change could easily expose your data in a way you were not planning on.

Removing a firewall rule does not work quite as expected. Running Remove-SqlFirewallRule will not remove your firewall rule, only disable it. This is not such a hassle, but it is nice to be aware of. If you add it back, the rule just gets re-enabled and there are no naming conflicts.
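Seeing this for yourself is just the mirror image of the add (same instance and credential as before):

```powershell
#Disables, rather than deletes, the rule created by Add-SqlFirewallRule
Remove-SqlFirewallRule -ServerInstance PICARD -Credential $cred

#Running the add again simply re-enables the same rule
Add-SqlFirewallRule -ServerInstance PICARD -Credential $cred
```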

There’s one other cmdlet to look at relating to firewalls and that’s the Set-SqlNetworkConfiguration cmdlet. It is used to change the TCP port that the instance is listening on. The parameters are similar to any other cmdlet using the Cloud Adapter Service: you will need the instance name and a credential with permissions to alter the service:

Set-SqlNetworkConfiguration -ServerInstance PICARD -Protocol TCP -Port 13000 -ForceServiceRestart -Credential $cred


It’s a handy way to change your port. However, doing this has no impact on the firewall rule created by Add-SqlFirewallRule, as that rule maps to the application executable for the instance.

Overall, these cmdlets can be useful, but I am not sure I would recommend them. The lack of fine-grained control over how the rule gets created is a big concern for me. Add to this the dependency on the Cloud Adapter Service, which opens up another vector for accessing the server. We’re trading control for ease, and the benefits do not outweigh the drawbacks. Since we already have a full set of Network Security cmdlets that allow us to manage firewall rules, I would not use the SQLPS module cmdlets.

A Month of SQLPS: Managing Credentials

Security is an important aspect of SQL Server management. It’s important to control permissions within the instance and only allow the minimal level of rights to our users. This is the principle of least privilege, a fundamental concept for computer security. SQL Server provides a variety of tools to manage these permissions within the instance, but what if SQL Server has to interact with either the operating system or another application? For this, we have credentials, a way to manage authentication to other resources.

The next series of cmdlets to review from SQLPS are the ones used to manage these credentials. We’ll go through the usual process and use Get-Command to list the options:

Get-Command -Module SQLPS *SqlCredential

The result is four cmdlets for creating, retrieving, altering, and removing a credential. We’ll start by using New-SqlCredential to create one for a Windows domain login:

$pw = Read-Host 'Enter the password' -AsSecureString
New-SqlCredential -Path 'SQLSERVER:\SQL\PICARD\Default\' -Name SQLTest_Windows -Identity 'SDF\SQLTest' -Secret $pw

This command will create the credential on the PICARD instance. Unlike some of the other cmdlets, though, the -SqlCredential cmdlets do not support a -ServerInstance parameter; the way to specify the instance is to use its provider path. The cmdlet also, unfortunately, doesn’t accept a Powershell credential object to create the SQL credential. It will accept a secure string for the password, meaning that the password stays encrypted as the cmdlet processes it.
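That said, if you already have a PSCredential in hand, nothing stops us from pulling the identity and the secure-string password out of it ourselves. This is a sketch reusing the path and name from above:

```powershell
#A PSCredential can't be passed directly, but we can borrow its parts
$winCred = Get-Credential -Message 'Enter the Windows account for the credential'

#UserName becomes the identity; Password is already a secure string
New-SqlCredential -Path 'SQLSERVER:\SQL\PICARD\Default\' -Name SQLTest_Windows `
    -Identity $winCred.UserName -Secret $winCred.Password
```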

Now that we’ve created our credential, let’s crack it open with Get-Member and see what kind of object we’re working with:

Get-SqlCredential -Path 'SQLSERVER:\SQL\PICARD\Default\' -Name SQLTest_Windows | Get-Member


It’s an Smo.Credential object, which matches with the other object types found within the provider.

The Set-SqlCredential gives us the capability to update the identity or the password for an existing credential. This means if something changes externally with credential information, we can update the credential to reflect that. Remove-SqlCredential rounds out the functionality by giving us a cmdlet to remove a credential using Powershell. These two cmdlets will also support the -Script switch, allowing you to output the resulting T-SQL for these calls.
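As a sketch of what those calls look like (the provider path to the credential itself is an assumption based on the instance path used above):

```powershell
#Update the password stored in an existing credential
$newpw = Read-Host 'Enter the new password' -AsSecureString
Set-SqlCredential -Path 'SQLSERVER:\SQL\PICARD\Default\Credentials\SQLTest_Windows' -Identity 'SDF\SQLTest' -Secret $newpw

#Emit the T-SQL for the removal instead of executing it
Remove-SqlCredential -Path 'SQLSERVER:\SQL\PICARD\Default\Credentials\SQLTest_Windows' -Script
```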

Why would you use these? The functionality they provide does not offer any real savings over doing the same work in T-SQL; in fact, it is about the same amount of code. If you use Powershell for deploying these objects, though, the secure string functionality can be helpful to keep your credentials encrypted. The choice between T-SQL and Powershell is a matter of having different options to create your credentials, letting you choose the appropriate tool for your deployment.

A Month of SQL PS: Backups and Restores

Backups and restores are a key task for any database administrator. Our primary job is safeguarding the data and backups provide us with a redundant, point-in-time state of our data that we can use to recover from in the event of a Bad Thing(tm). Not taking backups is a great way to make sure your resume is up to date.

It’s no surprise that the SQLPS module has two cmdlets specifically designed for these functions, respectively named Backup-SqlDatabase and Restore-SqlDatabase. Go ahead and take a moment to look at the help files for these using Get-Help. The first thing you should see is there’s quite a bit of documentation for these two cmdlets. They have some of the most thorough help files for any of the SQLPS cmdlets.

Backing up a database with Backup-SqlDatabase is easy, much like it would be if we were to use BACKUP DATABASE in T-SQL:

Backup-SqlDatabase -ServerInstance PICARD -Database AdventureWorks2012 -BackupFile 'C:\Backups\AW2012_cmdlettest.bak'

If you run this, you’ll note that you get a nice little progress bar for the backup action. Not too much else to worry about here, because it’s straightforward in how it operates. This cmdlet does run T-SQL behind the scenes, meaning we can use the -Script argument to generate the T-SQL statement the cmdlet will run when it backs up our database:


Looks suspiciously like what you would get if you scripted out a backup action in SQL Server Management Studio, doesn’t it? We’re going back to the concept that the SQLPS components all use the SMO, meaning they work the same way as other tools that use the components, like SSMS. This becomes handy if you want to use the cmdlets to shortcut creating scripts for your backups.

Most of the other arguments for this cmdlet are simply additional options for the T-SQL BACKUP command. This means we can have the same level of customization of our backups as we would if we were writing straight T-SQL. I’ll let you explore those on your own and consider which options make sense and how to use them.
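A couple of those options in action, as a hedged sketch; -BackupAction, -CopyOnly, and -CompressionOption are documented parameters of the cmdlet, and the paths are just examples:

```powershell
#Transaction log backup instead of a full backup
Backup-SqlDatabase -ServerInstance PICARD -Database AdventureWorks2012 `
    -BackupAction Log -BackupFile 'C:\Backups\AW2012.trn'

#Copy-only full backup with compression, leaving the backup chain untouched
Backup-SqlDatabase -ServerInstance PICARD -Database AdventureWorks2012 `
    -BackupFile 'C:\Backups\AW2012_copy.bak' -CopyOnly -CompressionOption On
```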

So why would we use this, if all it’s doing is duplicating T-SQL? Mostly because it becomes easier to integrate with Powershell in this manner, allowing us to use Powershell for what it’s good at. One pattern I like to advocate is using Powershell to create backup directories as part of my backup process, then back up the database, then use Powershell again to remove old backups. Consider this script:

$dbs = dir SQLSERVER:\SQL\localhost\DEFAULT\databases | Where-Object {$_.Status -eq 'Normal' -and $_.IsSystemObject -eq $false}

foreach ($db in $dbs){
    $backuppath = "C:\DBBackups\$($db.Name)"
    if(!(Test-Path $backuppath)){New-Item -ItemType Directory -Path $backuppath}
    $backupfile = Join-Path -Path $backuppath -ChildPath "$($db.Name)-$(Get-Date -Format 'yyyyMMddhhmm').bak"
    Backup-SqlDatabase -ServerInstance LocalHost -Database $db.Name -BackupFile $backupfile -Initialize
    dir $backuppath | Where-Object {$_.FullName -ne $backupfile} | Remove-Item
}



This will dynamically back up all the user databases on the local instance, each with a unique name in its own directory. The script also uses Powershell to handle the file system tasks of creating the folder if it does not exist and cleaning up any backups other than the one we just created. We have a simple, dynamic backup script with a fairly minimal amount of coding.

The counterpart to this is Restore-SqlDatabase. After all, our backups are not all that useful if we can’t restore them. The cmdlet is very similar to Backup-SqlDatabase in its arguments, where most of them are direct translations of the T-SQL RESTORE command options. One gotcha to call out is how relocating database files is handled.

Moving database files with a restore is a pretty common operation. The way the cmdlet handles it makes sense if you remember that everything is working through the SMO. What we need for moving files is a collection of RelocateFile objects, which the cmdlet will then use to establish the restore:

$MoveFiles = @()
$MoveFiles += New-Object Microsoft.SqlServer.Management.Smo.RelocateFile ('AdventureWorks2012_Data','C:\DBFiles\data\AdventureWorks2012New_Data.mdf')
$MoveFiles += New-Object Microsoft.SqlServer.Management.Smo.RelocateFile ('AdventureWorks2012_Log','C:\DBFiles\log\AdventureWorks2012New_Log.ldf')

Restore-SqlDatabase -ServerInstance PICARD -Database AdventureWorks2012New -RelocateFile $MoveFiles -BackupFile 'C:\DBBackups\AdventureWorks2012\AdventureWorks2012-201510101251.bak'

As with our Backup-SqlDatabase cmdlet, we can see the T-SQL that would execute this by using the -Script argument.

Again, this cmdlet is most useful when included within a larger script. I like using it for quick and dirty script generation for point-in-time restores:

$LastFull = Get-ChildItem 'C:\Backups\AdventureWorks2012\*.bak' | Sort-Object LastWriteTime -Descending | Select-Object -First 1
$logs = Get-ChildItem 'C:\Backups\AdventureWorks2012\*.trn' | Where-Object {$_.LastWriteTime -gt $LastFull.LastWriteTime} | Sort-Object LastWriteTime

$MoveFiles = @()
$MoveFiles += New-Object Microsoft.SqlServer.Management.Smo.RelocateFile ('AdventureWorks2012_Data','C:\DBFiles\data\AdventureWorks2012New_Data.mdf')
$MoveFiles += New-Object Microsoft.SqlServer.Management.Smo.RelocateFile ('AdventureWorks2012_Log','C:\DBFiles\log\AdventureWorks2012New_Log.ldf')

$db = 'AdventureWorks2012New'
Restore-SqlDatabase -ServerInstance 'PICARD' -Database $db -RelocateFile $MoveFiles -BackupFile $LastFull.FullName -RestoreAction Database -NoRecovery -Script | Out-File 'C:\Temp\Restore.sql'
foreach($log in $logs){
    if($log -eq $logs[$logs.Length - 1]){
        #The last log gets restored with recovery
        Restore-SqlDatabase -ServerInstance 'PICARD' -Database $db -BackupFile $log.FullName -RestoreAction Log -Script | Out-File 'C:\Temp\Restore.sql' -Append
    }
    else{
        Restore-SqlDatabase -ServerInstance 'PICARD' -Database $db -BackupFile $log.FullName -RestoreAction Log -NoRecovery -Script | Out-File 'C:\Temp\Restore.sql' -Append
    }
}

While it seems like a lot of code, it follows an understandable pattern. The first two Get-ChildItem calls collect our most recent full backup and any log backups that have occurred after it. Next up is building our RelocateFile collection. Then we just run the Restore-SqlDatabase cmdlet with the -Script argument for each file, piping the output to a file. The result is a full restore script with all of our transaction log files sorted for us. This kind of script saves me a lot of time when I have to execute database restores, since I can avoid wading through a lot of transaction log backups.

For my money, the Backup- and Restore-SqlDatabase cmdlets are two of the more useful functions in the module. They make it easy to combine my SQL Server work with the file system tasks that need to happen alongside it. They also reduce the amount of code I need to write within Powershell, reducing the overall complexity of the work. While much of this could be done using straight T-SQL, leveraging the right parts of Powershell makes for robust and reliable processes.

A Month of SQLPS: SqlDatabase cmdlets

In the last post we covered the cmdlets that are used to acquire, start, and stop SQL Instances. Naturally, the next area to talk about are cmdlets designed for SQL databases. Let’s see what we have using Get-Command again:

Get-Command -Module SQLPS *SqlDatabase


Three cmdlets are available, each one easily understandable from their names. We’ll start by looking at Get-SqlDatabase’s help file (using Get-Help Get-SqlDatabase -ShowWindow). Fortunately, this is one of the better documented cmdlets, though I’m still disappointed that the help information doesn’t show the output object type.

I know I sound like a broken record, but it is so vital to remember that everything in Powershell is an object. This means that knowing what our output objects are is key to being able to use our cmdlets. What we need to do now is use Get-Member to investigate the output of the cmdlet:

Get-SqlDatabase -ServerInstance PICARD -Name AdventureWorks2012 | Get-Member


Our output is an Smo.Database object, with all of those methods and properties. As we previously covered, this is the same object that the provider gives you. This means the Get-SqlDatabase cmdlet is functionally equivalent to:

Get-Item SQLSERVER:\SQL\PICARD\DEFAULT\Databases\AdventureWorks2012

This means we have a couple of different paths to the same result, so which technique is used is a matter of preference. Interestingly enough, the Get-SqlDatabase cmdlet takes a -Path argument, but it only works for the instance. We cannot use -Path to define the database, which seems weird to me. Anyway, this command will use the path and produce the same output:

Get-SqlDatabase -Name AdventureWorks2012 -Path SQLSERVER:\SQL\PICARD\DEFAULT

Another little gotcha with this cmdlet is that there’s a -Script switch argument. Typically, when you see the -Script argument, it is a way for the cmdlet to call the .Script() method and produce a T-SQL script for the action. The thing is, Get-SqlDatabase is not actually performing a T-SQL action, just acquiring an SMO object. If you use the -Script argument, nothing will get returned because of this.

One significant difference between getting the object via the cmdlet versus the path and Get-Item is that Get-SqlDatabase will take a PSCredential object for -Credential. This means we can use alternate authentication for getting our objects. While we can do this through the provider as well, the cmdlet saves us a step.
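A quick sketch of that alternate authentication (the login is just whatever you supply to Get-Credential):

```powershell
#Connect with a SQL login instead of the current Windows session
$cred = Get-Credential -Message 'Enter a SQL login with rights to the database'
Get-SqlDatabase -ServerInstance PICARD -Name AdventureWorks2012 -Credential $cred
```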

There are a lot of variations here, and we would use this cmdlet in any situation where we want to get the database object. What I do want to call out is the disconnect between Get-SqlDatabase and Get-SqlInstance. The fact that Get-SqlInstance returns a different type of object than the Smo.Server class, when everything else in the provider works using SMO objects, makes it a little weird. At least Get-SqlDatabase maintains that consistency with the SMO, giving us a handy way to manage and manipulate database objects for our scripts.

A Month of SQLPS: Starting and Stopping

Before you can learn how to drive a car, you have to know how to stop it and start it. This carries over to many things, including SQL Server. Most of us are used to controlling SQL Server through either SQL Server Management Studio or the SQL Server Configuration Manager, stopping and starting the service through the management UI. Others may go into the general Services control panel to manage these services. However we do it, knowing how to stop, start, and restart SQL Server is necessary to managing instances.

Since managing Windows services is an important administrative task, Powershell has supported it from the beginning. Managing services involves Get-Service, Start-Service, and Stop-Service. These cmdlets are easy to use and are my go-to way to manage my services:

#Powershell console must be run as Administrator
Stop-Service MSSQLSERVER -Force
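Bringing things back up works the same way. Note that, as with any method of starting SQL Server, the Agent service has to be started on its own; the service names below are the default instance’s:

```powershell
#Powershell console must be run as Administrator
Start-Service MSSQLSERVER
Start-Service SQLSERVERAGENT

#Or bounce the instance in one call
Restart-Service MSSQLSERVER -Force
```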

Nothing complex, right? So let’s move to the SQLPS module. Using our Get-Command call, let’s look up the cmdlets that relate to SQL Instances:

Get-Command -Module SQLPS *SqlInstance


The resulting three cmdlets seem pretty straightforward from their names. Shouldn’t be too difficult, right? Unfortunately, this is where things start to break down. First, if we look at the help files for each of these cmdlets, we find them woefully lacking in any useful information. There are one or two examples for their use, but no detailed examples and no documentation on the parameters or their use (with an exception for Get-SqlInstance).

Time for some detective work, right? I started with Get-SqlInstance and quickly discovered I needed something called the Cloud Adapter Service, an add-on for SQL Server. This service is apparently something born out of Azure for working with SQL Server on Azure VMs. It needs to be running to support these commands, and it also needs permissions to affect the Windows firewall.

Next up we need a credential. No, it’s not a SQL credential. Instead, it’s a Windows credential that has administrator rights on the target machine. Once this is all in place, we can use it to retrieve our information. For good measure, we’ll pipe the output to Get-Member to investigate what kind of object we’re dealing with:

$cred = Get-Credential
$instance = Get-SqlInstance -MachineName PICARD -Credential $cred
$instance | Get-Member


While this doesn’t have any surprising information, what’s critical here is the object type. Note that it’s an IaaS.SqlInstanceInfo object, specifically designed to manage instances on Azure VMs. I believe the idea was to give users of SQLPS a consistent way to manage their instances, regardless of whether they were on-premises or in the Azure cloud.

Understanding Get-SqlInstance helps us make sense of the Start-SqlInstance and Stop-SqlInstance cmdlets as well. When I ran them they performed the same operation as Start- and Stop-Service, including the gotcha that when you start a SQL instance/service, it will not automatically start the corresponding Agent service. All we get is the ability to retrieve information and manipulate our services through a specialized communication service.

Honestly, I can’t see any benefit to the inclusion of these cmdlets. When working with SQL Instances, I would expect to get and manage SMO server objects, as that is what most of the other interfaces use. When it comes to managing services, WinRM access allows me to use the traditional service management cmdlets included in the core of Powershell. These -SqlInstance cmdlets add an additional level of overhead that simply isn’t needed, even in the case of Azure VMs.

A Month of SQLPS: The SQL cmdlets

There’s more to the SQLPS module than just a fancy way to browse SQL Server components. While the provider gives us an effective way to interact with different sections of our database environment, we need more. That’s why there is a series of cmdlets included in the module to manage things like backups and availability groups, as well as retrieve SMO objects without having to always rely on provider paths or complex SMO objects.

Getting a listing of all the cmdlets is the same as with any Powershell module: Get-Command -Module SQLPS. This will list the 45 cmdlets in the module along with the provider:

Get-Command -Module SQLPS


While I want to spend the next week or so covering these cmdlets in detail, I want to begin by re-emphasizing that the foundational elements of the language will help us figure out these cmdlets on our own. We’ve already seen how we can discover which cmdlets are available to us by using Get-Command, allowing us to narrow down our focus for what we might be searching for.

Next, if we examine the names of the cmdlets, we will have a general idea of what they do based on the verb plus noun naming convention. By using a little pipeline magic, we can see what the different verb options are:

get-command -Module SQLPS | Group-Object verb


We can use the verb list to further narrow down the cmdlets. If we’re looking for a cmdlet to create something, we’d look at cmdlets starting with ‘New-’. If we wanted to retrieve an object, we’d look for a cmdlet starting with ‘Get-’. This continues on, helping us intuitively narrow down the cmdlets by what they do.
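Get-Command will even do this narrowing for us with its -Verb and -Noun parameters:

```powershell
#Only the cmdlets that create something
Get-Command -Module SQLPS -Verb New

#Only the cmdlets that work with databases
Get-Command -Module SQLPS -Noun SqlDatabase
```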

Finally, once we have a cmdlet in mind, we can look at the specific help file to learn how that cmdlet is used with the Get-Help cmdlet. I usually recommend calling Get-Help with the -ShowWindow switch because it gives us a separate window where the help information is displayed:

Get-Help Get-SqlInstance -ShowWindow

Unfortunately, this is where we run into some additional problems with the module. While the cmdlets available with the core installation of Powershell have some pretty detailed help, the help files for the module are thin and incomplete. Many cmdlets have only basic descriptions and no information on outputs. This means we’re going to have to use a little detective work to evaluate these cmdlets and figure out what they do.
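Two tricks help with that detective work, since the cmdlet metadata often knows more than the help file does:

```powershell
#Ask the cmdlet what type it claims to output
(Get-Command Get-SqlDatabase).OutputType

#Dump the parameter sets without opening the help at all
Get-Command Get-SqlDatabase -Syntax
```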

This is the main reason why I wanted to write this series. There’s actually some useful functionality built here, but the lack of documentation is a real struggle and leads to many database folks not using these cmdlets because of the lack of explanation around them. Some will turn to the SMO and write a lot of code to accomplish the same tasks. Others will simply give up on Powershell completely and turn to other tools. Stick around over the next few weeks and we’ll try to fill this gap, showing you what’s available, how the cmdlets work, and building some patterns for their use.

A Month of SQLPS: Collecting

It’s one thing for us to talk about what your options are for a piece of tech, but that rarely shows what you can do. Whenever I talk about Powershell, I want to show people what can actually be done with the language so people can make practical use of it. The provider gives us more than just a nice way to browse SQL Server.

Treating SQL Server like a file system lets us easily gather up objects as collections for processing. The strength of this should be obvious to database folks, who like to work with sets of data. For example, we can use them to quickly gather information across our servers.

Ever wonder what your SQL Server versions are? Wouldn’t it be nice if you could just query the build number of all your instances? Well, using the provider we can do just that. Here’s my pattern for using a Central Management Server to get a list of all our servers:

$servers = @('PICARD')
$servers += (dir "SQLSERVER:\SQLRegistration\Central Management Server Group\$servers").Name

Now, if we combine this with Get-Item and Select-Object, we can have a quick list of our version information across all our servers:

$servers = @('PICARD')
$servers += (dir "SQLSERVER:\SQLRegistration\Central Management Server Group\$servers").Name
foreach($server in $servers){
    Get-Item "SQLSERVER:\SQL\$server\DEFAULT" | Select-Object Name,VersionString
}


We can extend this to retrieving other properties and drilling down into other aspects of our environment. We can use this to gather inventory information quickly. But wait, there’s more! We’re not just limited to displaying properties, we can also change them.

Have you ever connected to an instance to create some database objects and accidentally created them in master because that’s set as your default database? Not only is it annoying, but it can be problematic and difficult to clean up. This is why I like to set user logins to default to tempdb. The provider makes it extremely easy to do this:

$logins = dir SQLSERVER:\SQL\localhost\DEFAULT\logins
foreach($login in $logins){
    $login.DefaultDatabase = 'TempDB'
    $login.Alter()  #Alter() persists the change back to the server
}

This is a pretty simple approach that can be extended easily to suit our purposes. If we wanted to update only SQL Logins, we could add an additional filter to build our collection:

$logins = dir SQLSERVER:\SQL\localhost\DEFAULT\logins | Where-Object {$_.LoginType -eq 'SqlLogin'}

I can hear you saying, “So what? I can do that in T-SQL.” And you can, by writing a T-SQL cursor and some dynamic SQL. The immediate advantage here is that we avoid that dynamic SQL. But there’s another advantage: the ability to easily combine multiple collections. If we go back to the CMS pattern and combine it with this loop, we can now update all the default databases on all our servers:

$servers = @('PICARD')
$servers += (dir "SQLSERVER:\SQLRegistration\Central Management Server Group\$servers").Name

foreach($server in $servers){
    $logins = dir "SQLSERVER:\SQL\$server\DEFAULT\logins"
    foreach($login in $logins){
        $login.DefaultDatabase = 'TempDB'
        $login.Alter()  #Alter() persists the change back to the server
    }
}

This means we are using collections to drive our automation. Also, because our collections are based on the current state of the data, our scripts become dynamic, adapting as the state of the data changes. This is awesome, because we’re used to letting data determine how our work gets done in our database scripts and now we get to do it easily in Powershell.

We’ve spent a lot of time on the provider because it’s the biggest piece of functionality in the module, but there’s more there. Contained within the module are several cmdlets for handling specific actions in and around SQL Server. Tomorrow we’ll start looking at what’s provided and how we can use them.


A Month of SQLPS: Of Drives and Shortcuts

When we think of file systems and drives, we know that we can navigate to almost anything on a network by using a Universal Naming Convention (UNC) path. This usually takes the following form: \\SERVERNAME\SharedFolder. Now, it may be cumbersome to always type in this path, or perhaps you want to always make use of this share, so commonly we will map these shares to a local drive letter. To do this in Powershell, we would use the New-PSDrive cmdlet:

#Mount a remote share to the Z drive 
New-PSDrive -Name Z -PSProvider FileSystem -Root '\\PICARD\C$\Backups'

Now here’s where things get cool. If providers are meant to give us an interface that behaves like a file system, doesn’t it stand to reason that we can use the provider to mount parts of SQL Server as their own drives? This opens up a new area of management of SQL Server using Powershell.

I’ve blogged before about using the SQL Provider and Central Management Server as a driver for running my Powershell scripts. I love this trick as it gives me a lot of flexibility for managing my environments. Unfortunately, it can get a little cumbersome always browsing down to that path. So let’s mount it as a drive and make our lives a little easier:

New-PSDrive -Name CMS -PSProvider SqlServer -Root 'SQLSERVER:\SQLRegistration\Central Management Server Group\PICARD'
dir CMS:\
Get-PSDrive | Format-Table Name,Root -AutoSize

Now we have a handy and easy way to reference the servers in our CMS. As with any other mapped drive, this mapping is only good for as long as we are in the current Powershell session, but there are options in the New-PSDrive cmdlet that allow us to persist the mapping, as well as define other options.
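For example, New-PSDrive’s standard -Scope parameter controls how long the mapping sticks around; a sketch reusing the CMS root from above:

```powershell
#Create the drive in the global scope so it outlives the current script or function
New-PSDrive -Name CMS -PSProvider SqlServer -Scope Global `
    -Root 'SQLSERVER:\SQLRegistration\Central Management Server Group\PICARD'
```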

One of those other options is using a different credential to connect to the mapped target. Remember how yesterday we talked about the security context of the provider and how it passed through your current user? Well, by creating a new mapped PSDrive for a SQL component, we can connect to a database in the provider using a different context, including a SQL login.

Doing this takes a little more work, but it is doable. What we need to do is create a credential object for our SQL login, then create the new PSDrive with that credential. Let’s try it out:

#Create a SQLTest2 login on the server and add it to AdventureWorks2012 db_datareader role
$pw = Read-Host 'Enter sqltest2 password' | ConvertTo-SecureString -AsPlainText -Force 
$cred = New-Object System.Management.Automation.PSCredential ('sqltest2',$pw) 
New-PSDrive -Name SQLTest2 -PSProvider SqlServer -Root 'SQLSERVER:\SQL\PICARD\DEFAULT\' -Credential $cred

Now if we do a lookup on the databases and check our extended events info, we should see exactly how we’re connecting to the information:

dir SQLTest2:\Databases

This clearly shows that any work we do through this new mapped drive is done in the context of the SQL login we created the drive with. While the default behavior is going to be using objects in the provider with our Windows login, it is good to know that you can connect to SQL assets using other logins and contexts. Hat tip to Shawn Melton (@wsmelton) for showing me this technique. His blog covers it in a little more detail, along with some other tips and tricks.

Mapping drives can really help us customize our Powershell environment for how we want to manage SQL Server. Mapping components we use frequently can aid us in scripting and give us shortcuts to the things we use most. It is also good to know that these tools provide different paths for security management while maintaining the integrity of our SQL Servers.

Next up, we’ll look at some practical applications for using the provider. We’ll dip our toe a little into the SMO, but we should be able to keep it pretty basic. There’s a lot of depth to the provider, so it is easy to get lost. The focus here should be understanding the foundation, allowing you to build on it for your own scripts and processes.

A Month of SQLPS: Security

One of the frequent questions I get about the SQL Server provider is “Who is it when it connects to the SQL Server?” Security is important to DBAs and, as we’ve seen, there are a lot of things that can be done through the provider that are dangerous if we’re not careful. Just like with any other tool, we want to make sure that a user in the SQL Server provider can only do what they are allowed to do.

Let’s go back to our extended events session. Since I didn’t share it Friday, here’s the T-SQL for creating the session:

CREATE EVENT SESSION [SQLPS_Tracking] ON SERVER
ADD EVENT sqlserver.sql_statement_completed(
    ACTION(sqlserver.client_app_name,sqlserver.nt_username,sqlserver.sql_text)
    WHERE ([sqlserver].[client_app_name] like N'SQLPS%'))
ADD TARGET package0.event_file(SET filename=N'C:\Temp\SQLPS_Tracking.xel');


This is a simple session that will capture SQL statements that were executed, where they were executed, and by whom. We filter on any application name that starts with ‘SQLPS’, since this is how SQL Provider actions will show up in SQL Server. With that running, let’s perform some actions and see who we are:

dir SQLSERVER:\SQL\localhost\DEFAULT\databases

This will give us a list of our databases on the default instance. Since we know this boils down to a SQL statement, let’s see what our extended events session captured:


It should not be a surprise that the user we're working as is the Windows login running our Powershell session. We're essentially passing our credential through to the SQL Server. This means the reason I've been able to do everything so far is that my login has high enough permissions on the instance.
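If you'd rather query the captured events than open the extended events viewer, the event_file target can be read with a system function; a quick sketch (the path matches the session definition above):

```sql
-- Read the raw events captured by the event_file target
SELECT CAST(event_data AS XML) AS event_xml
FROM sys.fn_xe_file_target_read_file(
    N'C:\Temp\SQLPS_Tracking*.xel', NULL, NULL, NULL);
```

Each row comes back as XML, so you can shred out the statement text and login name with XQuery if you want a tabular view.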

To validate this, I've created a Windows user in my lab domain called SDF\SQLTest and given it the db_datareader role in AdventureWorks2012. Let's see what happens:

#Launch powershell.exe with runas SDF\SQLTest
cd SQLSERVER:\SQL\localhost\DEFAULT\Databases\AdventureWorks2012

#Get only tables in the dbo schema
dir Tables | Where-Object {$_.Schema -eq 'dbo'}


We're able to see the tables with no problem. The db_datareader role grants us SELECT rights on all objects within the AdventureWorks2012 database. Let's try to drop something now:

Remove-Item Tables\dbo.ErrorLog


The red text tells the whole story. Looking at this error, we can see the familiar T-SQL message: 'Cannot drop the table ... because it does not exist or you do not have permission.' This clearly shows that permissions are enforced through the provider, so even if you can connect, you can still only do what your login allows.
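If you're scripting against the provider, you can trap these permission failures instead of letting the red text scroll by; a minimal sketch:

```powershell
try {
    # -ErrorAction Stop turns the provider's error into a terminating one
    # so the catch block actually fires
    Remove-Item Tables\dbo.ErrorLog -ErrorAction Stop
}
catch {
    Write-Warning "Drop failed: $($_.Exception.Message)"
    # Log, skip, or re-throw as your process requires
}
```

This is the same try/catch pattern you'd use anywhere in Powershell; the provider surfaces the underlying SQL Server error text in the exception message.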

This default behavior is great: not only are we restricting the activities of our logins, but by default the only logins that can connect are Windows logins. This adds all the typical domain account protections we get from Active Directory. However, what if we want to connect using a SQL Server credential? There is a way to do this, but it's a little more obscure. Tomorrow, we'll review how to do that, along with some handy shortcuts to SQL Server components.

A Month of SQLPS: Other Common cmdlets

Yesterday we looked at how you can navigate and retrieve items in the SQL Server provider using common Powershell cmdlets like Get-ChildItem and Set-Location. These are part of a set of core cmdlets that are implemented across providers to give a consistent experience to users. The remaining two cmdlets we have not covered handle tasks beyond simply finding and retrieving objects and, as such, require a little more care.
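You can check which of these core capabilities a provider declares by inspecting it directly; a quick sketch (run with the SQLPS module loaded):

```powershell
# Load the provider, then inspect what it declares it supports
Import-Module SQLPS -DisableNameChecking
Get-PSProvider -PSProvider SqlServer |
    Select-Object Name, Capabilities, Drives
```

This is the same Get-PSProvider call you'd use to inspect the FileSystem or Registry providers, which is the point: the provider model gives every data store the same surface area.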

Rename-Item does exactly what the name says: it renames something. With this, we can rename database objects directly from the command line. Nothing fancy, but it gives us an easy way to manage our objects. Let's see how it's used:

#Change to the context of my dummy database
CD SQLSERVER:\SQL\localhost\DEFAULT\Databases\dummy

#Rename a table within dummy
dir tables
Rename-Item tables\dbo.Junk Garbage
dir tables


Because of the intricacies of implementing a cmdlet like Rename-Item, it is specific about the information you pass to it. You’ll note that our Rename-Item call references the table object specifically by schema name, but the new table name is simply the new name within the schema.

This is due to how Rename-Item works behind the scenes. Remember how we talked about using the SMO? The SMO performs all of its actions via T-SQL, meaning that everything we do via the provider can also be boiled down to a T-SQL command. If we start an extended events session to track SQL statements, our rename action is exposed:


As you can see, the rename is being executed by sp_rename. DBAs have been using this to rename objects for decades and SQLPS is no different. While we’re using a different interface to manage our SQL Servers, the underlying action is the same.
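The call the provider issues is roughly equivalent to running the familiar procedure yourself:

```sql
-- Rename dbo.Junk to Garbage, the same way the provider does it
EXEC sp_rename 'dbo.Junk', 'Garbage';
```

Note the same asymmetry as the Rename-Item call: the old name is schema-qualified, the new name is just the name within that schema.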

What does this mean for the Remove-Item cmdlet? We already know that it is meant to delete files in our file system provider, so that means we’re probably going to be dropping objects in the SQL Server provider. Let’s take a look by using Remove-Item to drop our table:

#Drop the Garbage table
dir tables
Remove-Item tables\dbo.Garbage
dir tables


Just like that, our table is gone. Note, we still had to specify the schema in order to remove the correct table. If you had left the schema out, the provider would have thrown an error to warn you that it didn’t have enough information to carry out the action. We can also go back to our extended events session to validate what was done:


Now, while we can rename and remove database objects, what about creating new ones? As of this time, New-Item is not officially implemented. This makes sense given all the variance within SQL Server for how different objects are created and what must be declared for each. Creating a new login, table, or index each requires a vastly different statement, meaning it is simpler to create database objects with T-SQL than through a provider method.
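That T-SQL route is straightforward from the same Powershell session; a sketch using Invoke-Sqlcmd, which ships alongside the provider (instance and database names match the earlier examples):

```powershell
# Create a table with plain T-SQL instead of a (nonexistent) New-Item call
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'dummy' -Query @"
CREATE TABLE dbo.Junk (
    JunkId INT IDENTITY(1,1) PRIMARY KEY,
    Note   VARCHAR(100) NULL
);
"@
```

A `dir tables` afterward will show the new table through the provider, since the provider is just another view of the same instance.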

New-Item can be used in certain spots in the provider, but those spots are currently undocumented and hard to find. I still don't have a good grasp on where this cmdlet has been implemented. We will cover this a little more later, but for the time being assume that New-Item cannot be used.

Wrapping up the first week of this series, you should have a good idea of what the provider is and how to use it. If you're getting started with Powershell to manage SQL Server, this is where to start. Even with just accessing and querying objects through the provider, a lot of scripting and automation options open up. Next week we'll cover some basic scripting and automation patterns that can be used within the provider, helping you build out your use of Powershell in your SQL Server environments.