Art of the DBA

June 6th, 2011:

Meme Monday – Dumb Questions

Here I am, jumping in again for this month’s Meme Monday.  For June, Tom LaRock(t|b) is asking for our dumb SQL questions.

Can you disable the transaction log?

The answer is, emphatically, “NO”.  Most people I’ve run into ask this because they struggle with delays caused by locking and deadlocks, or they’re looking for ways to speed up inserts.  The fact is that without the transaction log and everything that comes with it, you lose the ACIDity (Atomicity, Consistency, Isolation, Durability) of the database.  In other words, you lose all the elements that control and maintain the integrity of your data while thousands of different statements are trying to get at it.

So the next time someone comes to you asking about just disabling this key feature, send ’em to this great article by Steve Jones.

Database Oil Changes – Part 3

Captain’s Log

The last bit of maintenance is one I think most folks know about, and that is transaction log backups. The reason I consider this maintenance is because it’s how you manage the overall size and transactional state of your database when it’s not in SIMPLE recovery mode. Without the proper jobs set up, you could be impacting your overall space usage and recovery time on your servers.

SQL Server supports three recovery models:

  • Simple – The most basic recovery model; SQL Server regularly cleans up the transaction log. Transactional history is not maintained, so no point-in-time recovery is available. NOTE: SQL Server still uses the transaction log in the same way as in the FULL model.
  • Full – Transactional history is maintained in the log, as SQL Server only performs clean up on the transaction log when a transaction log backup occurs. Since transactional history is maintained, point-in-time recovery is possible.
  • Bulk logged – Similar to FULL, transactional history is retained in the log, but certain actions within the database are minimally logged, meaning that you don’t get the transactional detail, but must recover the entire action that occurred.
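You can see which model each of your databases is using, and change it, through `sys.databases` and `ALTER DATABASE`. A quick sketch, using AdventureWorks from the backup example as a stand-in for your own database:

```sql
-- Check the current recovery model for every database on the instance
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch a database to the FULL recovery model
ALTER DATABASE AdventureWorks SET RECOVERY FULL;
```

One caveat: after switching from SIMPLE to FULL, take a full database backup right away. Until you do, the log chain hasn’t started and the database behaves as if it were still in SIMPLE mode.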

What all this means is that in FULL and BULK-LOGGED modes, our transaction logs are going to fill up and grow unless we back up our transactions and signal to SQL Server that it can clean up the log. If you neglect your log backups, you could easily fill up whatever drives you have your log files on. Also, when you have to bring a database online, SQL Server will have to go through your log files to reconcile the data in your database with all of the transactions.  The larger your log file, the longer this recovery phase will take.

Backing up your log is pretty simple:

BACKUP LOG AdventureWorks TO DISK='X:\Backups\logs.bak'
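In practice you’ll usually want each log backup written to its own timestamped file rather than overwriting a single logs.bak, so you keep the whole chain around for recovery. A sketch of one way to do it (the path and naming convention are just examples):

```sql
-- Build a timestamped file name so each log backup is kept separately
DECLARE @file NVARCHAR(260) =
    N'X:\Backups\AdventureWorks_log_'
    + CONVERT(NVARCHAR(8), GETDATE(), 112) + N'_'          -- yyyymmdd
    + REPLACE(CONVERT(NVARCHAR(8), GETDATE(), 108), N':', N'')  -- hhmmss
    + N'.trn';

BACKUP LOG AdventureWorks TO DISK = @file;
```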

Another option is to use the Maintenance Plans. Whichever method you use, you should be making transaction log backups on a regular basis in your databases, dependent (again) on your particular situation. For example, we have a pretty active OLTP system and I have regular log backups scheduled to occur every 15 minutes. Your server might be active enough for only every hour or so. The factors to keep in mind when setting your schedule are:

  • How active is your database? The more transactions you have, the more information goes into the log, meaning it’s apt to grow more under a heavy transaction load.
  • How much time are you willing to lose? You’ll only be able to restore back to the point of your last transaction log backup, so keep that in mind. If your service level agreements (SLAs) allow you to lose up to an hour, then that’s the longest you should go between log backups.
  • Performance considerations. Remember, this IS a backup job, so your server will take a hit whenever you run it. Not a tremendous one, but if you run your backup jobs too frequently you could be causing unnecessary contention.
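If the log keeps growing even with backups scheduled, SQL Server will tell you what’s holding it up. The `log_reuse_wait_desc` column in `sys.databases` shows why log space can’t be reused:

```sql
-- See why SQL Server can't reuse log space. LOG_BACKUP means it's
-- waiting on a transaction log backup; NOTHING means the log is fine.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'AdventureWorks';
```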

Also remember that log backups are dependent on your full backups. Transaction log backups need to be done in conjunction with full database backups to maintain your disaster recovery information.
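To see why the log chain depends on the full backup, here’s the shape of a point-in-time restore: restore the full backup first with NORECOVERY, then roll the log backups forward with STOPAT. The file names and recovery point below are just examples:

```sql
-- Restore the most recent full backup, leaving the database
-- in a restoring state so log backups can be applied
RESTORE DATABASE AdventureWorks
FROM DISK = 'X:\Backups\AdventureWorks_full.bak'
WITH NORECOVERY;

-- Roll forward through the log backup(s), stopping at the moment
-- just before the mistake you're recovering from
RESTORE LOG AdventureWorks
FROM DISK = 'X:\Backups\AdventureWorks_log.trn'
WITH STOPAT = '2011-06-06 09:45:00', RECOVERY;
```

Without that full backup as a starting point, the log backups have nothing to roll forward from, which is why the two have to be managed together.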


As you’ve probably figured out, our maintenance revolves around managing changes in our databases. This, above all, should be the defining rule of thumb when creating and scheduling maintenance tasks. Again, if we think of it in terms of taking care of a car, you (hopefully) change your oil after a certain amount of use. If your car sits in the garage a lot, you probably won’t have to do much to keep it in shape, but if you’re out on the road every day, you definitely won’t want to let it go too long between oil changes.

It’s not at all different with your databases, so when you sit down to set up your jobs and schedules, ask yourself the following questions:

  • What’s my environment like? Is it an OLTP system with many, many, many small changes to the data? Is it a data warehouse where changes are few and far between, but significant when they occur?
  • What kind of activity does my server see? Are there periods of “down time” when I can run these kind of jobs uninterrupted or is that time minimal or non-existent?

Just as a database needs proper planning and design when you build out your tables and indexes, the upkeep of a database requires a similar amount of planning. If you do it right, though, you drastically reduce the factors that could impede your server’s performance. Hopefully, this also reduces the number of phone calls you get from users asking why the database is so slow!