
A Month of SQLPS: The SQL cmdlets

There’s more to the SQLPS module than just a fancy way to browse SQL Server components. While the provider gives us an effective way to interact with different sections of our database environment, we need more. That’s why the module includes a series of cmdlets to manage things like backups and availability groups, as well as to retrieve SMO objects without always having to rely on provider paths or hand-rolled SMO code.

Getting a listing of all the cmdlets is the same as with any other Powershell module: Get-Command -Module SQLPS. This will list out the 45 cmdlets in the module along with the provider:

Get-Command -Module SQLPS


While I want to spend the next week or so covering these cmdlets in detail, I want to begin by re-emphasizing that the foundational elements of the language will help us figure out these cmdlets on our own. We’ve already seen how we can discover what cmdlets are available to us by using Get-Command, allowing us to narrow down our focus to what we might be searching for.

Next, if we examine the names of the cmdlets, we will have a general idea of what they do based on the verb plus noun naming convention. By using a little pipeline magic, we can see what the different verb options are:

Get-Command -Module SQLPS | Group-Object Verb


We can use the verb list to further narrow down the cmdlets. If we’re looking for a cmdlet to create something, we’d look at cmdlets starting with ‘New-’. If we wanted to retrieve an object, we’d look for one starting with ‘Get-’. This pattern continues on, helping us intuitively narrow down the cmdlets by what they do.
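
As a quick sketch of that approach, we can hand the verb straight to Get-Command and let it do the filtering for us:

#Narrow the list to cmdlets that create something
Get-Command -Module SQLPS -Verb New

#...or to the cmdlets that retrieve objects
Get-Command -Module SQLPS -Verb Get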

Finally, once we have a cmdlet in mind, we can look at the specific help file to learn how that cmdlet is used with the Get-Help cmdlet. I usually recommend calling Get-Help with the -ShowWindow switch because it gives us a separate window where the help information is displayed:

Get-Help Get-SqlInstance -ShowWindow

Unfortunately, this is where we run into some additional problems with the module. While the cmdlets that ship with the core installation of Powershell have pretty detailed help, the help files for this module are thin and incomplete. Many cmdlets have only basic descriptions and no information on outputs. This means we’re going to have to use a little detective work to evaluate these cmdlets and figure out what they do.

This is the main reason why I wanted to write this series. There’s actually some useful functionality built in here, but the lack of documentation is a real struggle and leads to many database folks not using these cmdlets. Some will turn to the SMO and write a lot of code to accomplish the same tasks. Others will simply give up on Powershell completely and turn to other tools. Stick around over the next few weeks and we’ll try to fill this gap, showing you what’s available, how these cmdlets work, and how to build some patterns for their use.

A Month of SQLPS: Collecting

It’s one thing for us to talk about what your options are for a piece of tech, but that rarely shows what you can do. Whenever I talk about Powershell, I want to show people what can actually be done with the language so people can make practical use of it. The provider gives us more than just a nice way to browse SQL Server.

Treating SQL Server like a file system lets us easily gather up objects as collections for processing. The strength of this should be obvious to database folks, who like to work with sets of data. For example, we can use them to quickly gather information across our servers.

Ever wonder what your SQL Server versions are? Wouldn’t it be nice if you could just query the build number of all your instances? Well, using the provider we can do that. Here’s my pattern for using your Central Management Server to get a list of all your servers:

$servers = @('PICARD')
$servers += (dir "SQLSERVER:\SQLRegistration\Central Management Server Group\$servers").Name

Now, if we combine this with Get-Item and Select-Object, we can have a quick list of our version information across all our servers:

$servers = @('PICARD')
$servers += (dir "SQLSERVER:\SQLRegistration\Central Management Server Group\$servers").Name
foreach($server in $servers){
    Get-Item "SQLSERVER:\SQL\$server\DEFAULT" | Select-Object Name,VersionString
}

We can extend this to retrieving other properties and drilling down into other aspects of our environment. We can use this to gather inventory information quickly. But wait, there’s more! We’re not just limited to displaying properties, we can also change them.
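
Before we get to that, here’s a minimal sketch of extending the inventory query; Edition and ProductLevel are standard properties on the SMO Server object:

foreach($server in $servers){
    Get-Item "SQLSERVER:\SQL\$server\DEFAULT" | Select-Object Name,VersionString,Edition,ProductLevel
}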

Have you ever connected to an instance to create some database objects and accidentally created them in master because that’s set as your default? Not only is it annoying, but it can be problematic and difficult to clean up. This is why I like to set user logins to default to tempdb. The provider makes it extremely easy to do this:

$logins = dir SQLSERVER:\SQL\localhost\DEFAULT\logins
foreach($login in $logins){
    $login.DefaultDatabase = 'TempDB'
    $login.Alter() #Push the change back to the server
}

This is a pretty simple approach that can be extended easily to suit our purposes. If we wanted to update only SQL Logins, we could add an additional filter to build our collection:

$logins = dir SQLSERVER:\SQL\localhost\DEFAULT\logins | Where-Object {$_.LoginType -eq 'SqlLogin'}

I can hear you saying “So what, I can do that in T-SQL”. And you can, by writing a T-SQL cursor and some dynamic SQL. The immediate advantage here is we avoid that dynamic SQL. But there’s another advantage: the ability to easily combine multiple collections. If we go back to the CMS pattern and combine it with this loop, we can now update all the default databases on all our servers:

$servers = @('PICARD')
$servers += (dir "SQLSERVER:\SQLRegistration\Central Management Server Group\$servers").Name

foreach($server in $servers){
    $logins = dir "SQLSERVER:\SQL\$server\DEFAULT\logins"
    foreach($login in $logins){
        $login.DefaultDatabase = 'TempDB'
        $login.Alter() #Push the change back to each server
    }
}

This means we are using collections to drive our automation. Also, because our collections are based on the current state of the data, our scripts become dynamic, adapting as the state of the data changes. This is awesome, because we’re used to letting data determine how our work gets done in our database scripts and now we get to do it easily in Powershell.

We’ve spent a lot of time on the provider because it’s the biggest piece of functionality in the module, but there’s more there. Contained within the module are several cmdlets for handling specific actions in and around SQL Server. Tomorrow we’ll start looking at what’s provided and how we can use them.


A Month of SQLPS: Of Drives and Shortcuts

When we think of file systems and drives, we know that we can navigate to almost anything on a network by using a Universal Naming Convention (UNC) path. This usually takes the following form: \\SERVERNAME\SharedFolder. Now, it may be cumbersome to always type in this path, or perhaps you want to make regular use of this share, so commonly we will map it to a local drive letter. To do this in Powershell, we would use the New-PSDrive cmdlet:

#Mount a remote share to the Z drive 
New-PSDrive -Name Z -PSProvider FileSystem -Root '\\PICARD\C$\Backups'

Now here’s where things get cool. If providers are meant to give us an interface that behaves like a file system, doesn’t it stand to reason that we can use the provider to mount parts of SQL Server as their own drives? This opens up a new area of management of SQL Server using Powershell.

I’ve blogged before about using the SQL Provider and Central Management Server as a driver for running my Powershell scripts. I love this trick as it gives me a lot of flexibility for managing my environments. Unfortunately, it can get a little cumbersome always browsing down to that path. So let’s mount it as a drive and make our lives a little easier:

New-PSDrive -Name CMS -PSProvider SqlServer -Root 'SQLSERVER:\SQLRegistration\Central Management Server Group\PICARD'
dir CMS:\
Get-PSDrive | Format-Table Name,Root -AutoSize

Now we have a handy and easy way to reference the servers in our CMS. As with any other mapped drive, this mapping is only good for as long as we are in this current Powershell session, but there are options in the New-PSDrive cmdlet that will allow us to persist this mapping globally, as well as define other options.
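
As a sketch of that, the cmdlet’s -Scope parameter is the piece that keeps the drive visible outside of the current script or function scope:

#Create the CMS drive in the global scope so it survives the current script
New-PSDrive -Name CMS -PSProvider SqlServer -Root 'SQLSERVER:\SQLRegistration\Central Management Server Group\PICARD' -Scope Global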

One of those other options is using a different credential to connect to the mapped target. Remember how yesterday we talked about the security context of the provider and how it passed through your current user? Well, by creating a new mapped PSDrive for a SQL component, we can connect to a database in the provider using a different context, including a SQL login.

Doing this takes a little more work, but it’s doable. What we need to do is create a credential object for our SQL login, then call New-PSDrive with that credential. Let’s try it out:

#Create a SQLTest2 login on the server and add it to AdventureWorks2012 db_datareader role
$pw = Read-Host 'Enter sqltest2 password' | ConvertTo-SecureString -AsPlainText -Force 
$cred = New-Object System.Management.Automation.PSCredential ('sqltest2',$pw) 
New-PSDrive -Name SQLTest2 -PSProvider SqlServer -Root 'SQLSERVER:\SQL\PICARD\DEFAULT\' -Credential $cred

Now if we do a lookup on the databases and check our extended events info, we should see exactly how we’re connecting to the information:

dir SQLTest2:\Databases

This clearly shows that any work we do through this new mapped drive is done in the context of the SQL login we created the drive with. While the default behavior is going to be using objects in the provider with our Windows login, it is good to know that you can connect to SQL assets using other logins and contexts. Hat tip to Shawn Melton (@wsmelton) for showing me this technique. His blog covers it in a little more detail along with some other tips and tricks.

Mapping drives can really help us customize our Powershell environment for how we want to manage SQL Server. Mapping the components we use frequently can aid us in scripting and give us shortcuts to the things we use most. It is also good to know that these tools provide different paths for security management while maintaining the integrity of our SQL Servers.

Next up, we’ll look at some practical applications for using the provider. We’ll dip our toe a little into the SMO, but we should be able to keep it pretty basic. There’s a lot of depth to the provider, so it is easy to get lost. The focus here should be understanding the foundation, allowing you to build on it for your own scripts and processes.

A Month of SQLPS: Security

One of the frequent questions I get about the SQL Server provider is “Who is it when it connects to the SQL Server?” Security is important to DBAs and, as we’ve seen, there are a lot of things that can be done through the provider that can be dangerous if we’re not careful. Just like with any other tool, we want to make sure that a user in the SQL Server provider can only do what they are allowed to do.

Let’s go back to our extended events session. Since I didn’t share it Friday, here’s the T-SQL for creating the session:

--Create the tracking session (name it whatever you like)
CREATE EVENT SESSION [SQLPS_Tracking] ON SERVER
ADD EVENT sqlserver.sql_statement_completed(
    ACTION(sqlserver.client_hostname,sqlserver.username)
    WHERE ([sqlserver].[client_app_name] like N'SQLPS%'))
ADD TARGET package0.event_file(SET filename=N'C:\Temp\SQLPS_Tracking.xel');


This is a simple session that will capture SQL statements that were executed, where they were executed, and by whom. We filter on any application that starts with ‘SQLPS’, since this is how SQL provider actions will show up in SQL Server. With that running, let’s perform an action and see who we are:

dir SQLSERVER:\SQL\localhost\DEFAULT\databases

This will give us a list of our databases on the default instance. Since we know this boils down to a SQL statement, let’s see what our extended events session captured:


It should not be a surprise that the user we’re working as is our Windows login for our Powershell session. We’re essentially passing our credential through to the SQL Server. This means the reason I’ve been able to do all the things I have is because my login has high enough permissions to do them.

To validate this, I’ve created a user in my SQL lab called SDF\SQLTest and given it the db_datareader role in AdventureWorks2012. Let’s see what happens:

#Launch powershell.exe with runas SDF\SQLTest
cd SQL\localhost\DEFAULT\databases\AdventureWorks2012

#Get only tables in dbo schema
dir Tables | Where-Object {$_.Schema -eq 'dbo'}


We’re able to see the tables with no problem. The db_datareader role grants us SELECT rights on all objects within the AdventureWorks2012 database. Let’s try and drop something now:

Remove-Item Tables\dbo.ErrorLog


The red text tells the whole story. Looking at this error, we can see the familiar T-SQL message ‘Cannot drop the table….it does not exist or you do not have permissions.’ This clearly shows that permissions are enforced using the provider, so even if you can connect you can still only do what your login will allow.

This default behavior is great because not only are we restricting the activities of our logins, but by default the only logins that can connect are Windows logins. This adds all the typical domain account protections that we get within our Active Directory. However, what if we want to connect using a SQL credential? There is a way to do this, but it’s a little more obscure. Tomorrow, we’ll review how we can do that, along with providing some handy shortcuts to SQL Server components.

A Month of SQLPS: Other Common cmdlets

Yesterday we looked at how you can navigate and retrieve items in the SQL Server provider using common Powershell cmdlets like Get-ChildItem and Set-Location. These are part of a set of core cmdlets that are implemented across providers to give a consistent experience to users. The remaining two cmdlets that we did not cover handle other tasks than simply finding and getting objects and, as such, require a little more care.

Rename-Item does exactly what the name says: it renames something. With this, we can rename database objects directly from the command line. Nothing fancy, but it gives us an easy way to manage our objects. Let’s see how it’s used:

#Change to the context of my dummy database
CD SQLSERVER:\SQL\localhost\DEFAULT\Databases\dummy

#Rename a table within dummy
dir tables
Rename-Item tables\dbo.Junk Garbage
dir tables


Because of the intricacies of implementing a cmdlet like Rename-Item, it is specific about the information you pass to it. You’ll note that our Rename-Item call references the table object specifically by schema name, but the new table name is simply the new name within the schema.

This is due to how Rename-Item is working behind the scenes. Remember how we talked about using the SMO? The SMO performs all of its actions via T-SQL, meaning that everything we do via the provider also can be boiled down to a T-SQL command. If we start an extended events session to track SQL statements, our rename action becomes exposed:


As you can see, the rename is being executed by sp_rename. DBAs have been using this to rename objects for decades and SQLPS is no different. While we’re using a different interface to manage our SQL Servers, the underlying action is the same.

What does this mean for the Remove-Item cmdlet? We already know that it is meant to delete files in our file system provider, so that means we’re probably going to be dropping objects in the SQL Server provider. Let’s take a look by using Remove-Item to drop our table:

#Drop the Garbage table
dir tables
Remove-Item tables\dbo.Garbage
dir tables


Just like that, our table is gone. Note, we still had to specify the schema in order to remove the correct table. If you had left the schema out, the provider would have thrown an error to warn you that it didn’t have enough information to carry out the action. We can also go back to our extended events session to validate what was done:


Now, while we can rename and remove database objects, what about creating new objects? As of this writing, New-Item is not officially implemented. This makes sense because of all the variance within SQL Server for how different objects are created and what would need to be declared for each. Creating a new login, table, or index requires vastly different statements, meaning it is simpler to create database objects using T-SQL than through a provider method.

New-Item can be used in certain spots in the provider, but those methods are currently undocumented and hard to find. I still don’t have a good grasp on where this cmdlet has been implemented. We will cover this a little more later on, but for the time being assume that New-Item cannot be used.

Wrapping up the first week of this series, you should have a good idea of what the provider is and how to use it. If you’re getting started with Powershell to manage SQL Server, this is where to start. Even with just accessing and querying objects through the provider, a lot of scripting and automation options open up. Next week we’ll cover some basic scripting and automation patterns that can be used within the provider, helping you build out your use of Powershell within your SQL Server environments.

A Month of SQLPS – Common Powershell cmdlets in SQLPS

One of the reasons to create a provider is so administrators have a common interface for using a section of the Windows ecosystem. Whether it’s the file system, the registry, or some other slice of the stack, you should be able to use a provider to browse, explore, and manipulate that part of Windows in the same way as you can in other providers. Because each part of the ecosystem is different, each provider will have its own way of functioning. A file or directory is simply not the same as a registry key. This means that some cmdlets will be implemented and some will not, all depending on how the provider is built.

With the SQL Server provider, there are only a handful of common cmdlets implemented:

  • Get-Location: Gets the current node.
  • Set-Location: Changes the current node.
  • Get-ChildItem: Lists the objects stored at the current node.
  • Get-Item: Returns the properties of the current item.
  • Rename-Item: Renames an object.
  • Remove-Item: Removes an object.

None of these should be a surprise, as they provide most of the basic navigation and retrieval within the provider. You probably know these cmdlets better by their aliases: cd and sl for Set-Location, gl and pwd for Get-Location, dir, ls, and gci for Get-ChildItem, gi for Get-Item, ren for Rename-Item, and del, rm, and ri for Remove-Item. Compare these two commands, which do exactly the same thing:

Get-ChildItem SQLSERVER:\SQL\localhost\DEFAULT\Databases
dir SQLSERVER:\SQL\localhost\DEFAULT\Databases

As you can probably guess, if these cmdlets weren’t included, we’d have a pretty tough time getting around the provider. They are so core to the provider experience that most users don’t even think about them, but without them we’d be stuck. It’s all about that common navigation experience.

Of these cmdlets, Get-ChildItem is the one most commonly used to work with objects in the provider. We’ve already covered how we can combine Get-ChildItem with either Select-Object or Format-Table to view specific properties of our objects. Add to this Get-Item, which covers the same function just one level up, allowing us to work directly with a specific object:

Get-Item SQLSERVER:\SQL\localhost\DEFAULT\Databases\AdventureWorks2012 | Format-List Name,Version,CompatibilityLevel,LastBackupDate,ActiveConnections


This small handful of functions lets us get around. We can use Set-Location to browse to the path we need to work in, then leverage Get-Item and Get-ChildItem to acquire objects and work with them. The benefit here is we can use the same syntax for reviewing database objects, files, or environment variables. This common experience eases the use of Powershell and helps administrators move from one stack to another.

There are two other cmdlets on this list that I want to talk about, along with another one that is conspicuously missing. What we’ve covered here handles navigation, but the remaining cmdlets are more about object manipulation. In my next post, we’ll dive a little into those, why they are there, and what’s missing.

A Month of SQLPS: SMO Building Blocks

So far, we’ve covered how to start up SQLPS and some of how we can navigate within the provider. These are the first few steps along the path of using SQL Server and Powershell together. Next is to understand how it works behind the scenes. SQLPS makes a lot more sense once you understand how it’s constructed.

A key concept of Powershell is that everything in it is an object, specifically a .Net object. Everything. I stress this to new users of the language because so much of Powershell’s strength comes from this foundation. In other scripting languages, such as Korn or Bash, artifacts are simple strings or integers. Objects add an entirely new level of functionality to the language, giving scripters access to properties and methods for their scripts.

To see what the SQL provider is built on is a simple matter of using Get-Member to interrogate the components:

Get-ChildItem 'SQLSERVER:\SQL\localhost\DEFAULT\databases' | Get-Member


Note the object type, a Microsoft.SqlServer.Management.Smo.Database object. This is part of the SQL Server Management Objects (SMO) .Net library, an API that’s been around for over a decade. What does this mean? That the way SQLPS interacts with SQL Server and its components is the same as if we were using almost any other SQL Server tool out there, including our age old standby, SQL Server Management Studio. There’s no strange voodoo magic going on here.
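
If you want to prove that equivalence to yourself, the same object type can be created straight from the SMO library once SQLPS is loaded (the module brings the assemblies along with it); here’s a minimal sketch against the local default instance:

#Instantiate an SMO Server object directly, outside of the provider
$smoServer = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'

#The Databases collection holds the same Smo.Database objects the provider returns
$smoServer.Databases | Get-Member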

Because of this foundation on objects, we can leverage their methods and properties for our specific purposes. For example, let’s do a directory lookup on the path we used above:

dir 'SQLSERVER:\SQL\localhost\DEFAULT\databases'


Unsurprisingly, we get a listing of our databases, but the output columns are predefined. This is baked into formatting definitions within the provider, which display these properties by default. That’s key: all the displayed columns are properties of the SMO database object. So what if we wanted to show different columns? It’s just a matter of selecting the properties we want:

dir 'SQLSERVER:\SQL\localhost\DEFAULT\databases' | Format-Table name,createdate,lastbackupdate


We could then start to leverage the methods to automate many of our tasks. Say, for example, you wanted to script off your instance’s logins. While we could use the script task within management studio, we could also use Powershell and the SMO’s .Script() method to accomplish the same task:

dir 'SQLSERVER:\SQL\localhost\DEFAULT\logins' | ForEach-Object {$_.Script() | Out-file C:\TEMP\logins.sql -Append}


Note that this is exactly the same as if you had scripted these objects out in SSMS. This is because both tools are using the same .Net libraries under the hood. There’s very little difference between these tools, except the interface. This means that, as you make the transition from one tool to the other, you can expect similar behaviors.

Properties and methods are the secret sauce of Powershell and open up many avenues for automation and script writing that we’ll explore as we continue this series. The emphasis here is to understand that using the SQLPS provider isn’t a whole lot different from the other tools you’re used to using. Next up, we’ll start talking about some of the standard Powershell cmdlets available to us in the SQLPS module and how we can use them.

A Month of SQLPS: The Provider

Next up in my series on the SQLPS module is to talk about the fundamentals of the provider. This is the core of the SQL Server module, providing an extension to the shell for managing and working with SQL Server. Providers are intended to give administrators a file system-like interface for a part of the Windows ecosystem, allowing for a more intuitive way of managing parts of their environment.

To list the drives exposed by all of your providers, just use Get-PSDrive:



We have several different types of providers, including FileSystem, Environment, and Registry. SqlServer is listed once you’ve imported the module. Once loaded, we can treat the SQL Server components as a file system, with many of the usual commands. Let’s switch to that drive and see what we have available to us:



Right away we can see many familiar components. Each of these folders is a different part of the SQL Server stack where we can access and manage our environment. We’ll go ahead and browse into the SQL Engine to see what’s there:
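
In commands, that browsing is nothing more than changing location and listing children; a minimal sketch:

#Switch to the SQL Server provider drive and list the top-level folders
Set-Location SQLSERVER:\
Get-ChildItem

#Step into the SQL folder to work with the database engine
Set-Location SQL
Get-ChildItem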




It’s that easy to start browsing around our SQL Servers.

Entering the ‘SQL’ folder means we will be working directly with the SQL Engine. Now, by default the provider will only list the local machine under the SQL folder, but we can access any SQL host from here. Just change your location to that machine name. Note, it’s the machine, not the SQL Server instance.

Once you’re in the target machine, the provider will list all instances on that server. Most of the time you’ll probably see DEFAULT, but if you’re using named instances, then those names will be listed. Next, we’ll move down into the instance itself to get all the components for the instance. It should look pretty familiar, since you probably see those components whenever you open up the object explorer in SSMS.
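
As a sketch, using the PICARD lab server from earlier in this series:

#The SQL folder lists the local machine by default, but we can reference any host by name
Get-ChildItem SQLSERVER:\SQL\PICARD

#Drill into the default instance to see its components
Get-ChildItem SQLSERVER:\SQL\PICARD\DEFAULT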

Powershell providers were written to provide an intuitive interface so that administrators wouldn’t get bogged down trying to get to different parts of their environment, so it’s not surprising that navigating SQL Servers is easy. Take note of this pattern:

SQLSERVER:\SQL\<machine name>\<instance name>\<component>

This is the path for SQL Server components and is key to referencing objects with the SQL Server provider.

However, it’s still a little rough sailing. Since using the provider can involve remote servers, your experience will be bound by the ability to communicate with your SQL instances. It also means that if you’re using tab completion or Intellisense in the Powershell ISE, it will likely not work with provider components because it will time out when trying to communicate. You will need to rely more on your knowledge of your instances and their components than on auto-completion.

Why does this happen? It has to do with the provider’s fundamental building blocks, the SQL Server Management Objects (SMO). This is a standard .Net library and brings with it rich, consistent functionality that is used throughout the Microsoft ecosystem. In the next post, we’ll go into detail on the SMO and how it’s implemented in the provider, why this is an advantage, and what considerations we need to keep in mind as we use it.

A Month of SQLPS: Getting Started

Time for me to get off the bench and start blogging again. What better way to go than to explore the oft-maligned SQL Server Powershell module, SQLPS? Over the course of the next month-ish, I want to explore the SQLPS module, the cmdlets it provides, and the functionality within the provider. We’ll look at the good and the bad within this module, seeing how we can leverage its functionality to our benefit.

Before we can start using SQLPS, we’ll need to get it installed and loaded. The SQL Server Powershell tools are part of the SQL Server setup, included whenever you install the SQL Server components (client or server), but not listed as a separate feature. This is good and bad: while the module will always be available on a machine with SQL Server components, you don’t have the ability to install or manage it on its own.

Once installed, loading it is simple. Just open a Powershell session and use the Import-Module command:
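
#Load the SQL Server module
Import-Module SQLPS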


Uh oh! A warning already? What’s that about? Don’t worry, a little investigation can show us what’s going on. Let’s run that same command with the -Verbose switch:
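
#Re-run the import with verbose output (add -Force if the module is already loaded in the session)
Import-Module SQLPS -Verbose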


Using the -Verbose switch gives us a lot of additional detail and we can see what the problem is. Powershell uses a list of approved verbs, which can be listed using the Get-Verb cmdlet. However, when writing the SQLPS module, the SQL Server team included Encode-SqlName and Decode-SqlName for converting SQL Instance names to the SQLPS path syntax. Encode- and Decode- are not approved verbs, which is why we see that warning.

(Don’t worry about what path syntax is, we’ll cover that in this series.)

What does this mean? Not much, the module still gets loaded just fine. All Powershell is trying to do is warn you that you have some code that doesn’t match the language standards and could be hard for users to find. As we’re already pretty far down the path with the language, this warning probably won’t go away anytime soon. Now, you can have Powershell skip all that verb checking stuff with the -DisableNameChecking switch, but I don’t think it’s really necessary. I just accept the warning message for what it is.
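
If you do want to suppress it, the switch goes right on the import call:

#Skip the approved verb check and its warning
Import-Module SQLPS -DisableNameChecking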

Once the module loads, your Powershell session is put into the context of the SQL Server provider, SQLSERVER:\. This can be a bit of a gotcha because many file lookup patterns change once you are in the SQL Server provider. A notable example is you won’t be able to browse UNC paths until you are back in the context of a file system provider. This is why I recommend this pattern for loading the SQLPS module:

#Capture the current location before the import (avoid reusing the automatic $pwd variable)
$currentLocation = Get-Location
Import-Module SQLPS
Set-Location $currentLocation

This simply captures your current working location and, once the SQLPS module is loaded, will switch you back to it.

It should be noted that the SQLPS module is only available with SQL Server 2012 or later and requires Powershell 2.0. Previous versions will require other logic to load the SQL Server snap-ins instead. In general, I recommend always using the current client tools when possible, but this behavior could limit you if you are on SQL Server 2008 R2 or earlier.
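
For those older environments, the rough sketch below shows the classic snap-in approach; the snap-in names are the ones that shipped with the SQL Server 2008/2008 R2 client tools:

#Pre-2012 client tools load the provider and cmdlets as snap-ins instead of a module
Add-PSSnapin SqlServerProviderSnapin100
Add-PSSnapin SqlServerCmdletSnapin100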

Feels a bit like a rough start, doesn’t it? This initial experience is one of the biggest challenges for DBAs getting into Powershell. It’s another language and since the first experience is filled with gotchas, it’s hard to embrace. The reason I want to call these out early, though, is to help you over that first hill. Over this series of posts, I hope to show you the ins and outs so you can effectively use the SQLPS module and not get tripped up on it.

Helping Your #Powershell With Twitter

Today I had a bit of a Twitter conversation with some SQL folks I know that were looking for Powershell help. It’s a common enough thing, of course. The trick is, we SQL people are so used to having the #sqlhelp hashtag as our first line of defense for figuring out problems that it’s weird for us to not have something like that for our other hurdles. Where most of these hash tags bombard you with spam and noise, this one is almost pure signal, so we often forget how good we have it.

Other communities would like to capture that magic as well. I know those of us using Powershell want our own hash tag for help. Well, now we’re going to try. Some of the notables already use #PoSHHelp, but not a lot of people pay attention. A group of us want to change that. I’ve already added a column for it in my TweetDeck and will keep my eye out for Powershell questions I can help with. Folks like Shawn Melton (@wsmelton), Adam Bertram (@adbertram), Derik Hammer (@SQLHammer), Rob Sewell (@fade2black), Boe Prox (@proxb), and others will be looking for your Powershell problems and trying to assist you over Twitter with the same care and grace as #sqlhelp.

It’s going to take some work, but we’ll try and apply the same ground rules to this hash tag:

  1. Questions should fit into 140 characters.
  2. If they don’t, put your question and information on another site and link to it.
  3. DO NOT SPAM THE HASH TAG. This is important, because in order to make it useful it needs to be kept clean. Don’t use it to advertise your blog posts or articles, but only for Q&A.
  4. Don’t be a dick, a.k.a. Wheaton’s Law. It’s all too easy to let the anonymity of the internet get the better of us. Be polite and respectful to those using and accidentally mis-using the hash tag.

Also, there are some people who also use #PowershellHelp. I’m going to watch this too, but I would encourage you to use #PoSHHelp instead. Mostly because it’s shorter and will give you more room for your question.

So, if you have a problem…

And if no one else can help…

And if you have a Twitter account, maybe you can tweet your question to…