space management

Monday Scripts – Getting Partition Information

I’m going to follow up my post on partitioning with a query I wrote while implementing the partitions and cleaning up the empty ones. One of the issues I ran into was clearly seeing each partition’s range and the number of rows within it. These two pieces of information live in different catalog views: sys.partition_range_values and sys.partitions. To build this out, I started with a nice little query from Derek Dieter, adding in a couple of joins to get me to sys.partitions. Here’s the result!

select 
	t.name as TableName
	, ps.name as PartitionScheme
	, pf.name as PartitionFunction
	, p.partition_number
	, p.rows
	, case 
		when pf.boundary_value_on_right=1 then 'RIGHT' 
		else 'LEFT' 
	  end [range_type]
	, prv.value [boundary]  --Upper boundary of the partition (inclusive for LEFT, exclusive for RIGHT)
from sys.tables t
    join sys.indexes i on t.object_id = i.object_id
    join sys.partition_schemes ps on i.data_space_id = ps.data_space_id
    join sys.partition_functions pf on ps.function_id = pf.function_id
    join sys.partitions p on i.object_id = p.object_id and i.index_id = p.index_id
    --Left join so the final partition, which has no upper boundary value, still appears
    left join sys.partition_range_values prv on pf.function_id = prv.function_id and p.partition_number = prv.boundary_id
where i.index_id < 2  --So we're only looking at a clustered index or heap, which the table is partitioned on
order by p.partition_number

My Partitioning Odyssey

Recently I had my first real experience with SQL Server partitioning. I had known about it, sure, but most of that was a general understanding of the concepts with no real practical experience. Well, all of that changed last month when I successfully partitioned a table with several billion (with a “B”) rows to help get a grip on an application that was slipping out of control. I’d like to walk you through the process to try and help you with any partitioning you intend to do.

That’s no moon!

First off, let’s look at the problem. Like I said, we had a big table, but what does that mean? Well, here are the hard stats from sp_spaceused:

19,600,710,213 rows
~759 GB reserved space
~432 GB data space
~326 GB index space
~120 MB free space

Yeah, pretty big, right? I mean, at 759 GB, this one table was bigger than most databases that I’ve worked with. Structurally, it wasn’t all that complex, though: just 3 fields, a composite clustered primary key, and a non-clustered index. However, even though the table was very narrow, its sheer size prevented us from doing any maintenance. It was also almost impossible to properly identify and roll off any data, so our problem was only going to get worse.

How do you eat an elephant?

Based on some of my reading, I was pretty sure partitioning was our way out, but I needed more information. A couple weeks of research later, I came up with the following links that really helped me “get it”:

Partitioned Tables and Indexes in SQL Server 2005
Simple Talk-Partitioned Tables in SQL Server 2005
Kimberly Tripp Web Cast
SQLServerpedia Partitioning article

The first thing I got out of all these articles was why you would want to do partitioning on a table this size:

  • The ability to rebuild/manage indexes, as each partition can be rebuilt separately.
  • Increased read performance through better data targeting by the query engine, as it should only retrieve the partitions it needs.
  • Increased performance on all queries, since we would finally be able to maintain the indexes on the table.
  • A possible performance increase on inserts.
  • The ability to archive off data as the table grows, making it easier to manage space on the server overall.

So what partitioning gives you is a way to manage the table in small chunks instead of as one large object. A better way to think of it is the old cliché: how do you eat an elephant?

One bite at a time.

Now, there were also some key ground rules I learned about partitioning:

  • Table partitioning defines how the table will be physically stored.
    • If you’re partitioning a heap, the partition field can be any field in the table, since no field is involved in defining how the table is physically stored on disk.
    • If you’re partitioning a table with a clustered index (like a primary key), the partitioning field must be one of the fields included in that index, as the clustered index defines how the table is physically stored on disk.
  • You are always partitioning by RANGE. If you want to partition on specific values, your range boundaries will simply be those values.
  • Each partition is a physically separate entity, meaning you can allocate partitions to different filegroups within SQL Server. This is how you explicitly assign parts of a table to specific disk locations (see the sketch after this list).
  • SQL 2005 and 2008 are limited to 1,000 partitions.
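
To make the RANGE direction and the filegroup mapping concrete, here’s a minimal sketch. The function, scheme, and filegroup names are all hypothetical, and the filegroups would have to exist in the database already:

--RANGE RIGHT: each boundary value is the first value of the partition to its right,
--so these two boundaries produce three partitions: (-inf..0), [0..100), [100..+inf)
create partition function pf_example (int)
as range right for values (0, 100);

--Map each of the three partitions to its own filegroup
create partition scheme ps_example
as partition pf_example
to ([FG_OLD], [FG_CURRENT], [FG_NEW]);

With RANGE LEFT, the boundary values would instead belong to the partitions on their left, so the same two boundaries would produce (-inf..0], (0..100], (100..+inf).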

The first point was pivotal for me, as I had seen a bunch of examples where some guys had partitioned on a table field, and some on a clustered index field. Understanding that table partitioning revolves around how the table is physically stored on disk really made it all clear for me.

Planning time!

Armed with this knowledge, it was time to build out my partition structure. The first step was to figure out what I was going to partition on. The table was designed to store a value based on a combination of an individual id, representing a person, and a field id, detailing what type of data was being stored for that individual. Reviewing the ground rules, I knew I could only partition on one field and that it had to be part of the clustered index, so I was limited to one of those two fields. Your partitioning field doesn’t have to be the entire clustered index, though, so using just one of them was fine. Since the individual id was pretty granular (about 1.6 billion distinct values), I went with that as my partitioning field.

Next up, I needed to decide on my ranges. The individual id was a numeric, auto-incrementing value, so this made it easy to establish a numeric range. The real question was how large to make each partition: I wanted them small for manageability, but not so small that I would go over the 1,000 partition limit. I ended up making each partition cover 5 million distinct individual id values, which meant roughly 50 million rows per partition (the ratio of individual ids to assigned fields was about 1 to 10).

With these two decisions made, I was able to write the following SQL code to create my function, scheme, and table (names have been changed to protect the troublesome):

--Create the function with a single partition, then split it in a loop below
create partition function pf_bigtable (int)
as range left for values();

declare @boundary int
declare @increment int

set @boundary = 5000000
set @increment = 5000000

--Add a boundary every 5 million values, up to 2 billion
while @boundary < 2000000000
begin
	alter partition function pf_bigtable() split range (@boundary)
	set @boundary = @boundary+@increment
end

--Map every partition to the PRIMARY filegroup
create partition scheme ps_bigtable
as partition pf_bigtable
all to ([PRIMARY]);

--Create new table on the partition scheme
CREATE TABLE [dbo].[bigtable_p](
	[individualid] [int] NOT NULL,
	[fieldid] [int] NOT NULL,
	[valueid] [int] NOT NULL,
 CONSTRAINT [pk_bigtable_p] PRIMARY KEY CLUSTERED
(
	[individualid] ASC,
	[fieldid] ASC
)
) ON ps_bigtable(individualid)

--NC index (created later); built on the partitioned table, it is partition-aligned automatically
CREATE NONCLUSTERED INDEX [ix] ON [dbo].[bigtable_p]
(
	[fieldid] ASC,
	[valueid] ASC
)

Notice that I used a looping statement to add in all my ranges to save myself the effort of explicitly declaring several hundred partitions.
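
If you want to sanity-check the loop before moving on, a quick count against the catalog views should come back with 399 boundary values, which means 400 partitions. This is just a verification sketch:

--Count the boundary values behind the new function; expect 399 here (400 partitions)
select count(*) as boundary_count
from sys.partition_range_values prv
	join sys.partition_functions pf on prv.function_id = pf.function_id
where pf.name = 'pf_bigtable';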

Getting it in place

The biggest hurdle was partitioning out the table itself. This table was part of an OLTP system that needed to be up almost 24/7. We had a maintenance window of 3 hours one night a week, but that wasn’t nearly enough time to apply this operation to the live table. Fortunately, we did have plenty of disk space.

What I did was pretty straightforward. I went ahead and created the table with the fully created partition scheme as described above, but empty of data. I also held off on creating the non-clustered index. At this point I had an empty copy of my live table.

With the help of one of the other guys on my team, we built a batch insert process to step through the live table and insert small batches of records (~6 million at a time) into the partitioned table, building up about 99% of the data from the live table in our partitioned copy. Once we had gotten the bulk of the data inserted, I created the non-clustered index. When this was completed, we took the application offline to stop writes to the table, ran a final insert process to pick up all the new records we had missed since creating the non-clustered index, and then changed the table and index names, swapping the unpartitioned table with the partitioned one.
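
Our actual job had more progress tracking and error handling around it, but the core of the batch copy and the final name swap looked roughly like this sketch (the batch size and object names are illustrative):

declare @batchstart int, @batchsize int, @maxid int

set @batchstart = 1
set @batchsize = 600000  --~600k ids per batch; at ~10 rows per id, that's roughly 6 million rows

select @maxid = max(individualid) from dbo.bigtable

--Walk the live table in ranges of the leading clustered key, filling the partitioned copy
while @batchstart <= @maxid
begin
	insert into dbo.bigtable_p (individualid, fieldid, valueid)
	select individualid, fieldid, valueid
	from dbo.bigtable
	where individualid >= @batchstart
	  and individualid < @batchstart + @batchsize

	set @batchstart = @batchstart + @batchsize
end

--After the final catch-up run (application offline), swap the names;
--index and constraint names get renamed the same way
exec sp_rename 'dbo.bigtable', 'bigtable_old'
exec sp_rename 'dbo.bigtable_p', 'bigtable'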

The initial copy actually took several days to complete, but we were able to have uninterrupted application service during the whole time. When the non-clustered index build was finished, we ended up having only 30 minutes of application downtime to catch everything up and swap the objects.

The finish line

So now that we had all of this in place, it was time to see what the partitioning bought us. The first item on my list was seeing if I could rebuild the index on the table. By making use of the sys.dm_db_index_physical_stats dynamic management function, I was able to write a script to rebuild each individual partition as necessary:

--Find the partitions that have fragmented past the rebuild threshold;
--distinct, since the function returns one row per index per partition
declare recscan cursor for
SELECT distinct partition_number
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(N'dbo.bigtable_p'), NULL, NULL, NULL)
where avg_fragmentation_in_percent >= 30;

declare @sql varchar(max)
declare @partnum varchar(10)

open recscan
fetch next from recscan into @partnum
while @@fetch_status = 0
begin
	--Rebuild all the indexes on just this partition, not the whole table
	set @sql = 'alter index all on bigtable_p rebuild partition='+@partnum
	exec(@sql)
	fetch next from recscan into @partnum
end

close recscan
deallocate recscan

The initial run of this still took 8+ hours to complete, so no real time savings there. However, since most of the table wasn’t getting touched (it was mostly inserts, very few updates), subsequent maintenance runs only took 5-15 minutes each, because only a handful of partitions actually needed to be addressed.

We also saw dramatic performance improvements in the application. SELECTs against the data were much faster, and the load process that did most of the inserting into the table saw upwards of a 150% speed increase.

Overall, this was a huge win for me and a fantastic learning experience. You can never really put something in your toolbag until you’ve actually been able to do it, and it’s not every day a challenge like this comes your way. However, this experience should also stress the necessity of planning, because we had to jump through a lot of hoops to get the partitions on this table after it had already grown to an unmanageable size. A lot of that effort could have been saved if someone had given some more thought to the amount of data flowing into this system.

Anyway, there’s my partitioning odyssey. I hope this was beneficial to you and if there’s anything that wasn’t clear or may have been missed, don’t hesitate to drop me a note and let me know!

Monday Scripts – Free Space by Drive

Here’s a quick script that will allow you to gather up your free space by drive on your server. You can use this to evaluate your space usage and address immediate concerns. It’s a straightforward query that uses dynamic SQL to query each database’s sys.database_files view, along with the SpaceUsed value collected by the FILEPROPERTY function.

--Table variable to collect per-database results
declare @return table
(drive_letter varchar(100),
 allocated_space decimal(15,2),
 used_space decimal(15,2),
 free_space decimal(15,2))

--Cursor over all online databases
declare recscan cursor for
select name from sys.databases
where state = 0

declare @db varchar(100)
declare @sql varchar(max)

open recscan
fetch next from recscan into @db

while @@FETCH_STATUS = 0
begin
	--Switch context so FILEPROPERTY can see the database's files;
	--size values are in 8 KB pages, so /128.0 converts them to MB
	set @sql = 	'use ['+@db+'] '+
				'SELECT	UPPER(SUBSTRING(physical_name,1,2)),'+
				'sum(size)/128.0,'+
				'sum(FILEPROPERTY(name,''SpaceUsed''))/128.0,'+
				'sum(size-FILEPROPERTY(name,''SpaceUsed''))/128.0 '+
				'FROM sys.database_files '+
				'GROUP BY UPPER(SUBSTRING(physical_name,1,2))'
	insert @return
	exec(@sql)

	fetch next from recscan into @db
end

close recscan
deallocate recscan

--Roll the per-database rows up to one row per drive
select
	drive_letter,
	sum(allocated_space) total_allocated,
	sum(used_space) total_used,
	sum(free_space) total_free,
	sum(free_space)/sum(allocated_space) perc_free
from @return
group by drive_letter
order by drive_letter

The main trick here, and the reason we can’t simply use sys.master_files, is that FILEPROPERTY only returns data for files in the current database context. If you call FILEPROPERTY for a file that isn’t in your current database, it returns NULL. As a result, we need to dynamically switch database contexts and gather the data one database at a time.
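
You can see the behavior for yourself. Assuming tempdb still has its default logical file name of tempdev, the first query below returns NULL and the second returns a page count:

use master;
select fileproperty('tempdev', 'SpaceUsed') as used_pages;  --NULL: tempdev belongs to tempdb, not master

use tempdb;
select fileproperty('tempdev', 'SpaceUsed') as used_pages;  --Used pages for tempdb's data file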

Monday Scripts – Making use of sp_spaceused

Happy Monday! One thing I’m planning to do with this blog is post useful scripts that I’ve written to hopefully make your life easier as a DBA. Nothing complex or scary, and I’ll explain the particular tricks within each script so you can build upon them to fit your specific situation.

As DBAs, we’re commonly called upon to manage the space on our servers. Usually this is just a matter of managing data files, the free space within those files, and the total space on the drives. However, there are times we need more detail. Today’s script is one I put together around sp_spaceused, a stored procedure provided with SQL Server that gives you space information on specific objects (databases and tables, typically). It’s very handy for analyzing the space your tables consume.
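
On its own, the procedure is simple to call: with no arguments it reports on the current database, and passing an object name reports on that table (dbo.MyTable here is just a placeholder):

--Space summary for the current database
exec sp_spaceused;

--Space summary for a single table
exec sp_spaceused 'dbo.MyTable';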

The problem I ran into with the stored procedure is that it only shows one table at a time, and often I need to see all the tables in a database relative to each other. So this script is a simple cursor that runs sp_spaceused against every user table in the current database context and collects the results.

/*Create return table.*/
declare @return table
(name varchar(100),
rows bigint,
reserved varchar(30),
data varchar(30),
index_size varchar(30),
unused varchar(30))

/*Store current table.*/
declare @curr varchar(100)

/*Create cursor for all user tables, schema-qualified
  so sp_spaceused resolves tables outside dbo.*/
declare recscan cursor for
select schema_name(schema_id)+'.'+name from sys.tables

/*Use cursor to run sp_spaceused on all tables.*/
open recscan
fetch next from recscan into @curr
while @@fetch_status = 0
begin
insert into @return exec sp_spaceused @curr
fetch next from recscan into @curr
end

/*Clean up cursor.*/
close recscan
deallocate recscan

/*return data*/
select
name,
rows,
reserved,
data,
index_size,
unused
from @return
order by convert(bigint,left(reserved,len(reserved)-3)) desc


The only real gotchas to keep in mind are that the size values are returned as strings and that they’re in kilobytes. If you need to modify them for readability or sorting, you’ll need a bit of string manipulation; see the ORDER BY clause for how I handle that.
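
If you need the values back as actual numbers, the same trick from the ORDER BY works anywhere. Here’s a standalone snippet (the literal is just an example of the format sp_spaceused returns) that strips the trailing ‘ KB’ and converts the value to megabytes:

declare @reserved varchar(30)
set @reserved = '1234567 KB'  --Example value in the 'nnn KB' format sp_spaceused returns

--Drop the trailing ' KB', convert the kilobytes to bigint, then divide out to MB
select convert(bigint, left(@reserved, len(@reserved)-3)) / 1024.0 as reserved_mb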

You can also get this information from the DMVs in SQL Server. I plan on covering that in a later post. Until then, this script should take care of most of your needs and give you some additional insight into your databases.