I had an odd one yesterday. I was sorting out internal fragmentation in our transaction logs (see Kim Tripp’s blog http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx) when I noticed that on our core production server the model database’s transaction log was over 200MB and had 163 virtual log files (VLFs). Clearly something was logging transactions in there and causing repeated log file growth. But what? Stumped, I asked on Ask SQL Server Central and found out, thanks to ‘meltondba’. (I suspect that’s not his real name.) The model database defaults to the full recovery model, and we back up the system databases every hour on this server. As the CSS team explains here http://blogs.msdn.com/b/psssql/archive/2009/11/24/doctor-this-sql-server-appears-to-be-sick.aspx, a full backup is recorded in the transaction log. And of course, if a database stays in full recovery and you never clear out its transaction log, the log just keeps growing.
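If you want to check your own servers for this, a quick sketch (assuming SQL Server 2005 or later; DBCC LOGINFO is undocumented but widely used for exactly this):

```sql
-- Confirm model's recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'model';

-- One row per virtual log file, so the row count is the VLF count
DBCC LOGINFO ('model');
```

If the first query says FULL and the second returns a long list of rows, you’re likely seeing the same thing I did.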
What’s the problem with this? Well, there are several. The model database is the template from which new databases are created. I created a new database on this server (create database [dw]) and it arrived with a 3MB data file and a 200MB log file with 3 VLFs. That oversized log costs you disk space for the log file itself, disk space for every backup of it, and extra time for those backups. OK, it’s not major, but it all adds up. If enough ‘small stuff’ like this accumulates it can really dent the performance of your system.
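You can see the inheritance for yourself. A sketch (the database name [dw] is just the one I used; note that size in sys.database_files is in 8KB pages, hence the arithmetic):

```sql
-- New databases copy their file sizes and recovery model from model
CREATE DATABASE [dw];

USE [dw];

-- Convert size from 8KB pages to MB to compare against model's files
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;
```

On my server that showed the 200MB log straight away, despite the database holding no data at all.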
What should you do? I’d suggest switching the model database to the simple recovery model. You don’t want new databases to be in full recovery unless you’re going to actively manage their logs, and you don’t want to bother with that unless you need point-in-time recovery. Then back up model’s log, shrink it, and set it to a small fixed size, say 3MB. Kim Tripp’s blog post shows all the code for this. When you create a new database you’ll want to set the file sizes and growth characteristics explicitly anyway.
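For completeness, the fix sketched out (this assumes model’s log still has its default logical name, modellog — check sys.master_files if you’ve renamed it; switching to simple recovery also means a log backup is no longer needed before the shrink):

```sql
USE [master];

-- Stop model (and every future database) defaulting to full recovery
ALTER DATABASE model SET RECOVERY SIMPLE;

USE model;

-- Shrink the bloated log, then fix it at a sensible small size
DBCC SHRINKFILE (modellog, 1);
ALTER DATABASE model
MODIFY FILE (NAME = modellog, SIZE = 3MB);
```

After this, new databases start with a tiny log in simple recovery, and you opt individual databases into full recovery only when you actually plan to take log backups.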