Art of the DBA


Where’d that table go?

The other day I was working with Doug Lane (b|t) over Twitter on an object visibility issue. He was trying to hide objects from the table list of a linked ODBC connection to SQL Server, particularly the system views in his database. I was able to help him out, but it got me wondering about exactly what allows you to see objects. I mean, I always knew I could see something if I had SELECT on it, but was that what really controlled the visibility?

I see you!

Seeing a table boils down to being able to view the metadata in the database, which means we must be given the VIEW DEFINITION privilege. This permission was new in SQL Server 2005; previous versions would allow any user in the public role to view object metadata. You can GRANT, REVOKE, or DENY this privilege at any of the four levels within SQL Server: server, database, schema, and object. Many actions implicitly grant VIEW DEFINITION, such as granting a user SELECT privileges on a table or EXECUTE on a stored procedure.
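For instance, each scope has its own GRANT syntax. A quick sketch, using the demo_login account created in the example below (note that at the server level the permission is actually named VIEW ANY DEFINITION):

grant view any definition to demo_login;             /*server level, granted to the login*/
grant view definition to demo_login;                 /*database level*/
grant view definition on schema::dbo to demo_login;  /*schema level*/
grant view definition on dbo.demo1 to demo_login;    /*object level*/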

Let’s look at a couple examples. I’ll go ahead and create a demo_login login on my test instance and add him to my database, but we won’t give him any permissions yet:

use msf_test
create login demo_login with password='dummy'; /*if your instance enforces password policy, you'll need a stronger password or CHECK_POLICY = OFF*/
create user demo_login for login demo_login;

Now, in my test database, I have a table called demo1. My login, demo_login, hasn’t been granted any privileges to this table, so I can’t see the table in my object browser.

As soon as I grant a permission on it, like SELECT, the table appears:

grant select on demo1 to demo_login;

Again, this is because any GRANT on our table implicitly grants VIEW DEFINITION.

Now, just like any other privilege, we can explicitly DENY that privilege to a user and override (almost) any implicit grant. So, if we want to prevent a user from viewing the metadata on an object, but still allow him to select from it, we simply have to deny him VIEW DEFINITION:

deny view definition on demo1 to demo_login;

And even though I can’t see the object, I can still query it.
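You can verify this behavior yourself with a quick impersonation test. A minimal sketch, using the demo_login user and demo1 table from above:

execute as user = 'demo_login';

select * from demo1;                            /*still returns rows*/
select * from sys.tables where name = 'demo1';  /*returns nothing; the metadata is hidden*/

revert;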

You can’t stop me! I’m invincible!

The caveat to explicitly denying permissions to a user is that three access levels cannot be denied view permissions. These levels are:

  • Server sysadmin role
  • The user owns the object in question
  • The database db_owner role (implied ownership of all database objects)

If any of these levels apply, then the user will always be able to see the object. To boot, the system views are a little tricky. I was unable to explicitly deny VIEW DEFINITION on either the INFORMATION_SCHEMA or sys schemas and prevent their objects from being visible. This one will take a little more research, but in the interim I know that you can prevent them from being seen in a linked ODBC manager by explicitly denying SELECT on individual system views.

To wrap up, the VIEW DEFINITION permission is the pivot point of visibility. If you have it, even by accident, you can see that object in any of the tools that list database objects (SSMS object browser, ODBC link manager, etc.). There are some tricks to it, but it can be a useful tool if you have specific visibility and permission requirements.
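If you ever need to audit where these permissions sit, the catalog views will show you every explicit VIEW DEFINITION grant or deny in a database. A minimal sketch:

select
	pr.name grantee,
	pe.state_desc,
	pe.class_desc,
	object_name(pe.major_id) object_name
from sys.database_permissions pe
	join sys.database_principals pr on (pe.grantee_principal_id = pr.principal_id)
where pe.permission_name = 'VIEW DEFINITION';

Note that object_name() only resolves for object-level entries; database- and schema-level rows will show NULL there.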

T-SQL Tuesday #17: Using APPLY

I confess, I’m one of those guys who really didn’t know much about APPLY. Sure, I saw it in the occasional query I’d find on the ‘tubes, but I never really dug into what it was and why this piece of syntax was so important. Fortunately, this month’s T-SQL Tuesday has rolled around with an opportunity for me to do a little learning, so off to the Google-mobile!

Of course, you can easily find the MSDN article explaining APPLY, but there’s a much more detailed one I found on Simple Talk by Robert Sheldon. Conceptually, it’s pretty simple. APPLY, in both its CROSS APPLY and OUTER APPLY forms, allows you to call a table-valued function for each row in your data set. In a lot of ways it’s like a join, but instead of two tables linked on a common field, you’re linking your query with a computed set of rows that you return with your function.

Now, I wish I could say I had some cool query to share, but I tend to use APPLY in some simple ways to get query information. Here’s a query that makes use of the dynamic management functions dm_exec_query_plan and dm_exec_sql_text and gets a lot of work from me:

select
	er.session_id,
	d.name database_name,
	er.status,
	er.wait_type,
	eqp.query_plan,
	est.text
from sys.dm_exec_requests er
	join sys.databases d on (er.database_id = d.database_id)
	CROSS APPLY sys.dm_exec_query_plan(er.plan_handle) eqp
	CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) est

Using this, you can get your text and your query plan for all queries currently running on your server. I like to use the query plans to troubleshoot long running queries, and the wait types help you identify the queries you might want to look at first.
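One wrinkle worth knowing: CROSS APPLY drops any request where the function returns no rows, just like an inner join would. If you want to keep those requests anyway, swap in OUTER APPLY, the outer-join flavor of APPLY. A sketch:

select
	er.session_id,
	er.status,
	eqp.query_plan
from sys.dm_exec_requests er
	OUTER APPLY sys.dm_exec_query_plan(er.plan_handle) eqp

Here query_plan simply comes back NULL when no plan can be resolved for the handle.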

Enjoy the other T-SQL Tuesday posts out there and huge thanks to Matt Velic (blog|twitter) for hosting this month!

Monday Scripts – Free Space by Drive

Here’s a quick script that will allow you to gather up your free space by drive on your server. You can use this to evaluate your space usage and address immediate concerns. It’s a straightforward query that uses dynamic SQL to query each database’s sysfiles view along with the SpaceUsed value collected by the FILEPROPERTY function.

declare @return table
(drive_letter varchar(100),
 allocated_space decimal(15,2),
 used_space decimal(15,2),
 free_space decimal(15,2))

declare recscan cursor for
select name from sys.databases
where state = 0

declare @db varchar(100)
declare @sql varchar(max)

open recscan
fetch next from recscan into @db

while @@FETCH_STATUS = 0
begin
	set @sql = 	'use ['+@db+'] '+
				'SELECT	UPPER(SUBSTRING(filename,1,2)),'+
				'sum(a.Size)/128.0,'+
				'sum(FILEPROPERTY(a.Name,''SpaceUsed''))/128.0,'+
				'sum(a.Size-FILEPROPERTY(a.Name,''SpaceUsed''))/128.0 '+
				'FROM ['+@db+'].dbo.sysfiles a ' +
				'GROUP BY UPPER(SUBSTRING(filename,1,2))'
	insert @return
	exec(@sql)

	fetch next from recscan into @db
end

close recscan
deallocate recscan

select
	drive_letter,
	sum(allocated_space) total_allocated,
	sum(used_space) total_used,
	sum(free_space) total_free,
	sum(free_space)/sum(allocated_space) perc_free
from @return
group by drive_letter
order by drive_letter

The main trick here, and the reason we can’t simply use sys.master_files, is that FILEPROPERTY only provides data for the current database context. If you try to call FILEPROPERTY for a file not in your current database, it will return a NULL. As a result, we need to dynamically switch database contexts and gather the data for all of our databases.
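You can see this behavior for yourself from any database context. This sketch walks sys.master_files and calls FILEPROPERTY on every logical file name; give or take any logical file names that happen to collide across databases, only the files belonging to your current database come back with a value:

select
	db_name(mf.database_id) database_name,
	mf.name logical_file,
	fileproperty(mf.name,'SpaceUsed') space_used_pages /*NULL for files outside the current database*/
from sys.master_files mf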

Parallelism, SQL Server, and you: Round 2

Previously on Art of the DBA: The basics of parallelism.

Control! You must learn control!

So now that we’ve gotten the basics of parallelism under our belt, it’s time to see how SQL Server uses it. The bad news is that most of the decision making about using parallelism is made by the black box known as the query optimizer. However, there are two parameters in the SQL Engine that we can manipulate to control whether or not a query will be parallelized, both managed in the server configuration.

The first option we can adjust is the cost threshold for parallelism. Generally speaking, this is the value SQL Server compares against a query’s estimated cost. If the query exceeds that cost, it becomes a candidate for parallelization. Otherwise, it will run as a single threaded operation. Query cost is an abstract unit, originally calibrated as the estimated number of seconds a query would run single threaded on a reference machine, and you can get an idea of what your query cost is by checking your query execution plan. The default value for the cost threshold parameter is 5, so any query with an estimated cost above 5 will be considered for parallelism.

The second option is Max Degree of Parallelism, commonly referred to as MAXDOP. This defines the maximum number of threads a parallel query will be given. The default value for MAXDOP is 0, meaning SQL Server will not limit the number of parallel threads it will create for a query, but that’s usually not the best option. You can find a lot of documentation out on the ‘tubes concerning MAXDOP, but the general rule of thumb is to set it to half the number of processor cores you have, up to 8.
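Both of these settings live in sp_configure, so adjusting them is straightforward. A quick sketch; the values here are illustrative, not recommendations, so test against your own workload:

exec sp_configure 'show advanced options', 1;
reconfigure;

/*raise the cost threshold so only genuinely expensive queries go parallel*/
exec sp_configure 'cost threshold for parallelism', 25;

/*cap parallel queries at 8 threads*/
exec sp_configure 'max degree of parallelism', 8;
reconfigure;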

The nice thing about MAXDOP is you can also use it as a query hint.  Using it, you can alter the degree of parallelism for your specific query and not have to worry about mucking with the server setting. Usually you’ll want to make use of this hint to set your max parallelism to 1 if you’re running a long running background task, like maintenance, and you don’t want it to interfere with your normal workload.
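For example, to keep a heavy background query from stealing cores out from under your regular workload (dbo.big_table here is just a hypothetical stand-in):

select count(*)
from dbo.big_table
option (maxdop 1); /*force this query to run single threaded, regardless of the server setting*/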

Decision Time

Knowing all this, how do we adjust our SQL Server to get the best use out of parallelism? The key is understanding which workloads take advantage of parallelism and which don’t. The first step is figuring out whether your server is an Online Transactional Processing (OLTP) server or a Decision Support System (DSS).

A DSS server, also known as a data warehouse, will benefit most from parallelism. Since your server will usually have a limited number of reporting queries with large datasets, you’ll want that workload spread out to get it processed faster. Also, because you’re dealing with data that is relatively static (it really only changes when you’re processing your warehouse loads), you’re not concerned with holding any locks on your databases, nor do you need to worry about preventing or delaying any data changes from happening while your parallel queries are running.  So parallelism is ideal for devoting as much horsepower to each query as possible.

This is also why parallelism is not a good choice for OLTP systems. Remember, when a CPU execution unit is working on a thread for your query, it won’t be working on anything else. This means that you could be queuing up and delaying important transactions waiting on parallel queries to complete. Since your OLTP system will usually be expected to be processing a lot of small transactions very quickly, a couple parallel queries could really bottleneck your system.  Instead, you want to make as many threads available to your processes as possible and not limit the number of processes you can handle at one time.  This is why you will usually see recommendations for low MAXDOP settings for OLTP, so you can make more CPU execution units available to your total transaction load.

When all is said and done, managing your MAXDOP and cost threshold settings is very much part of the DBA art form, where you will need to test and evaluate your workload to find the settings that are appropriate for your server. Here are a couple of steps to get started with on your server:

  1. What kind of queries and how many transactions does my server see? Talk to your developers about the processes and take a look at your Batch Requests/second perfmon counter to help you evaluate this.
  2. Check your current MAXDOP and cost of parallelism settings on your server.  Are they set to the defaults?
  3. What do your CXPACKET waits look like relative to other waits on your server? High CXPACKET waits might be an indication that the MAXDOP on your server is set too high. To find out your waits, Glenn Berry has an excellent DMV query within his SQL Server 2008 Diagnostic Information Queries that will provide this information for you; a minimal version is sketched below.
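A minimal version of that wait-stats check, if you just want a quick look:

select top 10
	wait_type,
	waiting_tasks_count,
	wait_time_ms,
	100.0 * wait_time_ms / sum(wait_time_ms) over() pct_of_total
from sys.dm_os_wait_stats
order by wait_time_ms desc;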

Once you have this information, you can make a plan to adjust your parallelism settings, but you’ll need to test any changes to your settings before implementing them.

Parallelism, while it may not have a tremendous impact on your system, is an important factor nonetheless. Understanding and making proper use of it will help you manage and improve performance on your server. As an editorial note, I would like to see SQL Server give you some more fine-grained control over parallelism, like Oracle does in their query hints. Hopefully we’ll get to see that in a future release.

Thanks for reading and feel free to leave comments or questions below!

Parallelism, SQL Server, and you: Part 1

In today’s computing world, it usually doesn’t take long before you hear the word “parallelism”, certainly if you’re a SQL Server DBA. As servers now support more and more processors, the modern database application needs to be able to take advantage of parallel processing for query processing. This two part post should give you a good foundation on what parallelism is and how SQL Server makes use of it.

What is Parallelism?

To really understand what parallelism is, first we need to talk a bit about processor architecture. The CPU on any machine is a one-man band. It’s constantly working on all the different tasks the computer sends to it. However, that little machine has a big limitation: It can only do one thing at a time. Granted, it can do that one thing very quickly and very well, but the CPU must finish each individual task before it can move on to the next. This is where we get the processor queue. The CPU takes each task that is sent to it and puts it into a list. As each task is completed, it will go to this list to get the next job it needs to do.

Obviously, this can be a pretty big bottleneck on your system. So manufacturers started adding additional processors to servers to spread the workload out. If you had 2 processors in a machine, you now had 2 workers that could get tasks done, 4 processors gave you 4 workers, and so on. Of course, you could quickly run out of physical space on a motherboard for all these processors, so then came the introduction of multi-core processors, expanding this even further and providing even more execution units for our workload.

Parallelism is born out of taking advantage of these multiple workers. Normally, an entire SQL query and the work associated with it would be assigned to a single processing core. On a machine with one or two cores this might not be a huge impact, but if your server has 16 available cores, you’ll have a lot of lost work time as cores sit idle. What SQL Server does is take your query and break it up into smaller workloads, distributing those workloads across the available cores, and then consolidating the results as each workload, or thread, completes. So parallelism is, simply put, doing a task in parallel.

There’s a lot more involved here; for the full detail of CPU architecture and how SQL Server takes advantage of it, you should check out Chapter 5 in the excellent Professional SQL Server 2008 Internals and Troubleshooting.

Seeing Parallelism In Action

So how can we find parallel queries on our server? If you want to use the GUI, you can just pull up the Activity Monitor, open up processes, and look for any process ID that has multiple rows, as each row represents a different thread of work. However, the quickest way is to query the DMVs:

select
	er.session_id,
	er.status,
	er.command,
	er.blocking_session_id,
	er.wait_type,
	ot.exec_context_id,
	ot.task_state,
	st.text
from sys.dm_exec_requests er
	join sys.dm_os_tasks ot on (er.session_id = ot.session_id)
	cross apply sys.dm_exec_sql_text(er.sql_handle) st
where er.session_id in
	(select session_id
	 from sys.dm_os_tasks
	 group by session_id
	 having count(exec_context_id) > 1)

sys.dm_os_tasks is the key view here, as it will show you each individual thread that SQL Server is making use of. I then join to sys.dm_exec_requests and sys.dm_exec_sql_text for additional information on the processes.

And I’m still waiting!

One thing you will probably see a lot of when your server has a lot of parallelism is the infamous CXPACKET wait. When SQL Server breaks up a query for parallel processing, the optimizer tries to balance the workload as best as possible, but you will almost always have a case where some threads will finish ahead of the rest of the job. This is where CXPACKET shows up, because the query is waiting on the remaining threads to complete before it can consolidate the results and resolve the entire process.

The issue with CXPACKET is that it’s not always indicative of a problem. In a lot of ways, it’s very much business as usual, as it will always show up when you have parallelism. The key is trying to identify when it’s happening too much, a sign that you’re over-parallelizing your workload.

We’ll go over this in part two, as we look at how we can control parallelism in SQL Server and how different workloads make use of parallelism. Hope to see you then!

Meme Monday: SQL Server in 11 words

Stopped SQL during rollback, waited on recovery, and watched rollback again.

This is how I learned a fair amount about SQL Server and how it handles rollback.  The team I was on was working on a database with a huge table.  We’re talking billions of rows here huge.  Unfortunately, due to design and the growth of this table, it was all in a single partition.  And we were kind of stuck with it.

The biggest problem with this table was maintenance.  Sure, we could update statistics, but we couldn’t rebuild the index.  There simply wasn’t enough space on disk for the rebuild to happen.  How did we find this out?  Because the disk filled up in the middle of the job, causing it to fail and go into rollback.

There we were, looking at a 14-hour rollback.  While the database was still functional, this table and all related tasks were blocked.  The smart thing to do was to wait it out and let SQL sort through the transactions.  Still, it’s easy to get a little antsy looking at that long of a rollback.  We decided that we could go ahead and bounce the SQL Service, causing SQL to go into recovery of all the committed work (i.e., BEFORE the reindex job died), but not process any of the uncommitted transactions.  Sure, we’d have to wait on recovery, but that couldn’t be as long as the rollback, right?

Well, not really.

The instance went into recovery, as expected, and didn’t take quite as long as the 14 hour rollback, but it still took a while.  After all, we were in full recovery mode and the transaction logs were still pretty sizable.  However, once it got done, we saw that the table was still locked.  It confused the heck out of us, because we didn’t see any processes locking the object when we ran sp_who2.  That is, we didn’t see any user processes.

You guessed it.  When we looked at the system processes, there were the locking processes.  SQL Server was still dealing with the rollback, it was just doing it under the covers as part of the recovery process.  That’s the other thing: this was all part of fast recovery, which is SQL Server allowing access to the database while it’s still finishing up its recovery processes after a crash.  Because even while some of those transactions may not have been committed, SQL Server still needs to sort through the chain and resolve the transactions in the log.  Including the one it was rolling back.

So, moral of the story?  Let SQL Server do its thing when handling transactions.  Yeah, things may suck, but many times there’s not much you can do but wait it out.  There’s a reason these mechanisms are in place, so let the engine work.

Sorry there’s no script today, but thanks for joining me for Meme Monday.  It’s the brainchild of Tom Larock, the SQL Rockstar (blog|twitter), but I’m not tagging him.  Yet.  :)  However, I am going to tag three other bloggers I know:

Stop by their blogs and (hopefully) see their contributions.  Or just read their blogs and enjoy the knowledge!

Getting Started with Dynamic SQL

Since two of the scripts I’ve posted so far on Mondays involve the use of dynamic SQL code, I figured it would be worthwhile to post on using dynamic SQL in SQL Server. After all, understanding and using dynamic SQL is one of the fundamental tools for a DBA to automate management tasks, helping you save time and eliminate “fat fingers”.

What is dynamic SQL?

Before we get started, let’s define what we’re talking about. A quick Googling gives us a bunch of links that all describe the same process: “Dynamic SQL is a term used to mean SQL code that is generated programmatically (in part or fully) by your program before it is executed”. What this means to us is that we can take SQL code we want to run repeatedly with some changing parts, and write it in a way where those changes can be handled by other logic, keeping the rest of the query unchanged and saving us a whole lot of time and effort.
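In its simplest form, it looks something like this: the table name is the moving part and everything else stays fixed (demo1 is just a stand-in):

declare @tablename sysname
declare @sql varchar(max)

set @tablename = 'demo1'
set @sql = 'select count(*) from ' + quotename(@tablename)

exec(@sql)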

Parametrization and code reuse

We’ll use my sp_spaceused script to show dynamic SQL in action. First off, we should review what the objective of that script is: running sp_spaceused to give us information about the size of the tables in our database in context with the other tables. Since the stored procedure only gives us information one table at a time, we need a way to gather all that information without having to run each query separately. We know that because the stored procedure accepts the table name as a parameter, the core code will remain the same no matter what we pass to it, so we only need to handle the changing table name.

Let’s look at the code:

/*@return and @curr need to be declared for this excerpt to run on its own;
  the full version appears in the sp_spaceused post later on.*/
declare @return table
(name varchar(100), rows bigint, reserved varchar(30),
 data varchar(30), index_size varchar(30), unused varchar(30))
declare @curr varchar(100)

declare recscan cursor for
select name from sysobjects where type = 'U'

open recscan
fetch next from recscan into @curr
while @@fetch_status = 0
begin
insert into @return
exec sp_spaceused @curr

fetch next from recscan into @curr
end

close recscan
deallocate recscan

The DECLARE CURSOR statement gives us a SQL cursor. If you need more information on cursors and how they work, hit up MSDN, but the short version is they give us a way to create a data set that we can use programmatically for processing. In this specific case, I’m building a data set of all the user tables in my database.

The next part is to open our list of table names and grab the first item in it. I’ll take that table name (the next available value in my cursor) and stuff it into a variable. Now I can use the stored procedure, with my variable as a parameter, to execute the code and get my return set. The rest of the code is a simple loop that will continue until the SQL engine can no longer fetch a value from the cursor.

At the end of the day, we have two pieces that make this work:

  • A static set of code that performs the same task, but with a parametrized value that can alter the target of that task.
  • A programmatically changing data set that we can use to provide the parameter for our code statement, so we can apply the same work to different targets.

But is this really dynamic SQL, since all we’re doing is using a stored procedure driven by a changing parameter? Sure thing, because while the stored procedure code remains the same, we are changing the parameter programmatically. We aren’t limited to just using stored procedures, though. While there are a lot of advantages to using stored procedures, such as pre-compiled SQL statements and making use of the procedure cache, sometimes you need something more fine grained. Fortunately, SQL Server provides us with a couple other methods to execute dynamic SQL.

EXECution time

We can also use the EXECUTE function for more than just calling a stored procedure. It will also accept a SQL statement passed to it as a string. Let’s see how this works by modifying our script to get the number of tables in each database instead of the space used:

declare @sql varchar(max)
declare @db varchar(100)
declare @return table
(db_name varchar(100),
table_count int)

declare recscan cursor for
select name from sysdatabases where dbid >4

open recscan
fetch next from recscan into @db
while @@fetch_status = 0
begin
set @sql = 'select '''+@db+''',count(*) from ['+@db+'].sys.objects where type = ''U'''

insert into @return
exec(@sql)

/*advance the cursor; without this fetch the loop never ends*/
fetch next from recscan into @db
end

close recscan
deallocate recscan

select * from @return

Now we can take our SQL statement and change the database context programmatically, allowing us to reuse the same query over and over again.

SQL Server also provides the sp_executesql stored procedure as an alternative to using EXECUTE, with two major differences. The first is that sp_executesql allows you to use parameters within your dynamic statement, giving you a cleaner option for parametrization. The second is that you can use it to capture an output parameter, giving you some other options for using the data beyond inserting the statement’s result set into a table.
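A minimal sketch of both differences at work, using an output parameter to pull a value back out of the dynamic statement (note that sp_executesql requires nvarchar strings):

declare @sql nvarchar(max)
declare @table_count int

set @sql = N'select @cnt = count(*) from sys.objects where type = @type'

exec sp_executesql @sql,
	N'@type char(2), @cnt int output',
	@type = 'U',
	@cnt = @table_count output

select @table_count user_table_count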

Finally, we can also make use of two undocumented stored procedures in SQL Server, sp_MSforeachtable and sp_MSforeachdb. Just like their names imply, you can use these stored procedures to execute a SQL statement against every table in a database or every database on a server. While they don’t offer you as much control over your execution, they are pretty handy.
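Since they’re undocumented, their behavior could change between versions, but here’s a sketch of sp_MSforeachdb, which replaces the ? placeholder with each database name in turn:

exec sp_MSforeachdb 'use [?]; select db_name() database_name, count(*) user_tables from sys.objects where type = ''U''';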

I’ve found dynamic SQL to be a huge time saver, since it is so flexible and powerful. With it, I can build out scripts to perform any number of tasks that remain the same, giving me the security in knowing that the task will run the same way, no matter how my parameters change.  Keep in mind that, while we only used SELECTs for the examples, you can use any SQL statement you normally would write ad hoc. For additional information and examples, I highly recommend Introduction to Dynamic SQL on SQLTeam.com, which will provide you more detailed information on the use of the different methods, as well as permissions and scope.


Monday Scripts – Adding database roles in bulk

As DBAs, we’re in charge of granting access for our databases. Typically we can handle this using Active Directory groups and SQL Server Management Studio, but this can turn into a pretty cumbersome and tedious task as the number of databases we have to manage grows. Currently, it’s not unusual for a user to ask me for read permissions on a server that has 20+ databases. Since it doesn’t really make sense for me to add that user manually to each database on the server (both for time and the likelihood that it will create an error), I’ve created this script for adding a user to the same roles in every user database across a server.

/*Get Database Cursor*/
declare recscan cursor for
select name
from sys.databases
where name not in ('master','tempdb','model','msdb')

/*declare variables*/
declare @db varchar(100)
declare @sql varchar(4000)
declare @user varchar(100)
declare @roles varchar(max)
declare @role varchar(50)
declare @winuser bit
declare @message varchar(max)

/*string parsing variables*/
declare @strpos int
declare @strlen int
set @strpos = 0

/*Set user name and roles to be assigned, comma separated string.
@winuser defines if user is windows(1) or not(0).*/

set @winuser = 0
set @user = <INSERT USER NAME>/*User name, such as DOMAIN\msfal for windows or report_reader for direct db login*/
set @roles =<Comma list of roles>/*i.e.'db_datareader,db_datawriter'*/

/*create server login*/
Print 'Creating user ['+@user+'] on server...'
if @winuser = 1
begin
	set @sql= 'USE [master] CREATE LOGIN ['+@user+'] FROM WINDOWS WITH DEFAULT_DATABASE=[master]'
end
else
	set @sql= 'USE [master] CREATE LOGIN ['+@user+'] WITH PASSWORD = '''+@user+'!123'''
exec(@sql)

/*Cycle through DBs to assign roles.*/
open recscan
fetch next from recscan into @db
while @@fetch_status = 0
begin
	/*Add user if user does not exist in database.*/
	print 'Adding ['+@user+'] to ['+@db+']...'
	set @sql = 'USE ['+@db+'] CREATE USER ['+@user+'] FOR LOGIN ['+@user+']'
	exec(@sql)

	/*Parse role string and assign each role to the user.*/
	Print @db + char(10) + '****************************'
	while @strpos < LEN(@roles)
	begin
		select @strlen = case when charindex(',',@roles,@strpos) = 0 then LEN(@roles)-@strpos+1 else charindex(',',@roles,@strpos) -@strpos end
		select @role = SUBSTRING(@roles,@strpos,@strlen)

		print char(9)+'Adding '+@user+' to '+@role+' role in '+@db+'...'
		set @sql = 'USE ['+@db+'] EXEC sp_addrolemember N'''+@role+''', N'''+@user+''''
		exec(@sql)
		select @strpos = case when charindex(',',@roles,@strpos) = 0 then LEN(@roles) else charindex(',',@roles,@strpos) + 1 end
	end

print char(10)
fetch next from recscan into @db
set @strpos = 0
end

/*clean up cursor*/
close recscan
deallocate recscan

The script itself may look intimidating, but really it breaks down into 3 steps:

1 – Create the user as a server login
Then, for each database
2 – Create the user within the database
3 – Assign the user to the each role within the passed list of roles.

To use this script, all you need to do is set @user to the name of your user, set @winuser to 1 or 0 depending on whether your user is an Active Directory login or not, then insert a comma separated list of roles you want to add the user to. Note, you’re not restricted to the default roles; you can use custom created roles in the script. Just make sure the role exists.

The only other item of interest here is the code section that parses through the role list. Take a look at it if you’re looking for a way to split a string into its component parts using T-SQL; it’s pulled out as a standalone sketch below.
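Here’s that parsing loop as a standalone sketch, printing each item in a comma separated list:

declare @list varchar(max)
declare @item varchar(50)
declare @pos int
declare @next int

set @list = 'db_datareader,db_datawriter,db_backupoperator'
set @pos = 1

while @pos <= len(@list)
begin
	set @next = charindex(',',@list,@pos)
	if @next = 0 set @next = len(@list) + 1

	set @item = substring(@list,@pos,@next - @pos)
	print @item

	set @pos = @next + 1
end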

Feel free to download the script here.

Monday Scripts – Making use of sp_spaceused

Happy Monday! One thing I’m planning to do with this blog is post useful scripts that I’ve written to hopefully make your life easier as a DBA. Nothing complex and scary, plus I hope to explain any of the particular tricks within the script so you can build upon them to fit your specific situation.

As DBAs, we’re commonly called upon to manage the space on our servers. Usually this is just a matter of managing data files, the free space within those files, and the total space on the drives. However, there are times we need more detail. Today’s script is one I put together to use sp_spaceused, a stored procedure provided with SQL Server that gives you space information on specific objects (databases and tables, typically). It’s very handy for analyzing your tables for consumption.

The problem I ran into with the stored procedure is that it would just show one table at a time, and often I need to see all the tables in a database relative to each other. So this script is a simple cursor that executes dynamic SQL and gets all the space details for all your user tables for the current database context.

/*Create return table.*/
declare @return table
(name varchar(100),
rows bigint,
reserved varchar(30),
data varchar(30),
index_size varchar(30),
unused varchar(30))

/*Store current table.*/
declare @curr varchar(100)

/*Create cursor for all tables.*/
declare recscan cursor for
select name from sysobjects where type = 'U'

/*Use cursor to run sp_spaceused on all tables.*/
open recscan
fetch next from recscan into @curr
while @@fetch_status = 0
begin
insert into @return exec sp_spaceused @curr
fetch next from recscan into @curr
end

/*Clean up cursor.*/
close recscan
deallocate recscan

/*return data*/
select
name,
rows,
reserved,
data,
index_size,
unused
from @return
order by convert(bigint,left(reserved,len(reserved)-3)) desc

(You can download the script here.)

The only real gotchas to keep in mind are that the size values are returned as strings and that the values are in kilobytes. If you need to modify them for readability/sorting, you’ll need a bit of string manipulation. See the ORDER BY clause for how I handle that.

You can also get this information from the DMVs in SQL Server. I plan on covering that in a later post. Until then, this script should take care of most of your needs and give you some additional insight into your databases.

First post and first presentation!

Last Thursday, I got a chance to present to the Denver SQL Users Group on Managing and Auditing SQL Security. I crammed a lot in for 30 minutes and got some good immediate feedback. I think the one thing that made me nervous was no one really asked any questions, but since it was my first technical presentation in a long time, I’m not sweating it.

Anyway, for those of you who attended, thanks for listening. If you have a chance, please leave me some feedback at Speaker Rate. I’d really appreciate it. For those who didn’t make it, the presentation was an overview of creating accounts in SQL Server, granting access via GRANT, server roles, and database roles, as well as queries that you can use to audit that access.

Here’s the meat for all those interested:

Powerpoint Presentation – SQL Server Security
(Please note the presentation is released under the Creative Commons Attribution-NonCommercial license)

Scripts:
server_logins.sql
server_roles.sql
server_role_audit.sql – Uses xp_logininfo
database_roles.sql
db_roles_audit.sql – Uses xp_logininfo
object_grants.sql

Enjoy!
