One of the most commonly referenced blog posts out there is Kimberly Tripp’s (@KimberlyLTripp) seminal post on Virtual Log Files. If you haven’t read it yet, I highly recommend it. It provides a good description of one of those quirks of log files and how you should manage them.
Of course, while folks are familiar with this advice, it’s often not a practice that gets followed. Why? Because it’s hard to manage. Usually it’s just easier to set the autogrowth settings and let the log find its own size.
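That “set it and forget it” approach usually amounts to a one-liner along these lines (the database and logical file names here are just placeholders for illustration):

--typical autogrowth setting; 'test_log' is a placeholder logical file name
alter database test
modify file (name = test_log, filegrowth = 512MB);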
This is where I hope I can help you out. Just like anything we do, if we need to execute it more than once, it should be scripted. As such, I figured I’d share a T-SQL script I’ve put together for growing my log files out in 8GB chunks. I’ve used this quite a bit when managing my own systems, whether I need to expand a log file or resize it after an accidental blowout.
--Set variables
declare @dbname sysname = 'test'
    ,@limit int = 32000
    ,@filename sysname
    ,@currsize int
    ,@growth int
    ,@v_sql nvarchar(1000)

--Get initial settings
select @currsize = convert(int,floor(size/128.0))
    ,@filename = name
    ,@growth = 8000
from sys.master_files
where database_id = db_id(@dbname)
    and file_id = 2

--Grow file
while @currsize < @limit
begin
    select @growth = CASE WHEN @limit - @currsize < 8000 THEN @limit - @currsize ELSE 8000 END
    select @currsize += @growth
    select @v_sql = N'alter database '+@dbname+' modify file (name='+@filename+',size='+convert(nvarchar(10),@currsize)+'MB);'
    exec sp_executesql @v_sql
end
This is a fairly handy script and pretty easy to follow. I set the database name and limit, then let the loop keep adding 8000 MB chunks until I get to the size I want. No strange black voodoo magic, right? That part is next, right after a quick sanity check.
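If you want to confirm where the log landed and how many VLFs you ended up with, a quick check like this does the trick (using the same ‘test’ database from the script above; size and growth in sys.master_files are in 8KB pages, so dividing by 128 gives MB):

--Confirm the new log size and growth setting
select name
    ,size/128 as size_mb
    ,growth/128 as growth_mb --assumes is_percent_growth = 0
from sys.master_files
where database_id = db_id('test')
    and file_id = 2

--One row per VLF; fewer, larger VLFs is the goal
dbcc loginfo('test')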
Hopefully everyone reading this blog knows that I’m a big fan of Powershell. If not, well…I’m a big fan of Powershell. So I gave myself a little challenge to re-write that T-SQL script as a Powershell function to see how it would work. Here’s the end result:
#load assemblies
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
$ErrorActionPreference = 'Inquire'

function Expand-SqlLogFile{
    param(
        [string]$InstanceName = 'localhost',
        [parameter(Mandatory=$true)][string] $DatabaseName,
        [parameter(Mandatory=$true)][int] $LogSizeMB)
    #Convert MB to KB (SMO works in KB)
    [int]$LogFileSize = $LogSizeMB*1024

    #Set base information
    $srv = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server $InstanceName
    $logfile = $srv.Databases[$DatabaseName].LogFiles[0]
    $CurrSize = $logfile.Size

    #grow file in 8000 MB (8192000 KB) chunks until we hit the target
    while($CurrSize -lt $LogFileSize){
        if(($LogFileSize - $CurrSize) -lt 8192000){$CurrSize = $LogFileSize}
        else{$CurrSize += 8192000}
        $logfile.Size = $CurrSize
        $logfile.Alter()
    }
}
#Call the function
Expand-SqlLogFile -DatabaseName 'test' -LogSizeMB 35000
You’ll note that it’s about the same number of lines and does more or less the same thing. For those of you familiar with SMO, you’ll also know that behind the scenes all it’s doing is running T-SQL.
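If you run a trace while the function executes, what hits the server should be more or less the same ALTER DATABASE statement the T-SQL version builds by hand, just with the size expressed in KB (the unit SMO works in). Something roughly like this, where the logical file name is just a stand-in:

--roughly what SMO sends on the final pass of the 35000 MB example above
alter database [test]
modify file (name = N'test_log', size = 35840000KB)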
So why write it this way? First, it’s an interesting challenge just to help understand Powershell and how to write it. It’s these kinds of challenges that help me learn and expand my knowledge. The second reason, though, is to explore the differences between the methods. We all know that in the IT field you can solve a problem in any number of ways. While I love Powershell, I wouldn’t use it everywhere. The only way to know where to draw the line, though, is to experiment and solve the same problem in different ways.