Tag Archives: sql 2005

2008 R2 Collation Gotcha

On my dev machine I installed R2 alongside my existing 2005 instance and it did a funny thing with the new instance’s collation. Our servers are all on Latin1_General_CI_AS, including my 2005 dev instance. But my R2 instance defaulted to SQL_Latin1_General_CI_AS and I’ve only just noticed, which is a damn nuisance. I thought I checked the collation during the setup but I guess I did a mental LIKE ‘%Latin1_General_CI_AS’. I think this article (http://msdn.microsoft.com/en-us/library/ms143508.aspx) explains the reasoning:
SQL Server Collations (SQL_*) 
Select this option to match settings with English-language versions of SQL Server 2005 or earlier versions.
Presumably it detected the 2005 instance and was trying to be helpful. Close but no cigar. Just as well my machine is long overdue for a rebuild anyway.
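For the record, it only takes a second to check what you actually got (both queries work on 2005 and R2):

```sql
-- Instance default collation
SELECT SERVERPROPERTY('Collation') AS server_collation;

-- Per-database collations, in case anything was created before you noticed
SELECT name, collation_name
FROM sys.databases;
```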

Upgrade to 2008 R2

I don’t like it, Carruthers. It’s just too quiet.

Well, I’ve done the pre-production server, the main live server and the Reporting/BI server with remarkably little trouble. Pre-production and live were rebuilds. I failed live over to our log shipping standby for the duration, which has a gotcha I blogged about before. When I failed back to the primary live server again, it was very quick to bring the databases online. I understand the databases don’t actually get upgraded until you recover them but there was no noticeable delay. It’s gone from 2005 Workgroup – limited to 4GB of memory – to 2008 R2 Standard so it can now use nearly all of the 30GB in the server. It’s so much faster.

The reporting/BI server I upgraded in situ. This took a while but, again, went smoothly. Just watch out, because the master database was left at compatibility level 90. Also, the upgrade decided to use the Reporting Services service account’s credentials for database access when running reports. It didn’t preserve the existing credentials and I had to go into the Reporting Services Configuration Manager to put them back. Make sure you know what credentials your server is using before you upgrade.
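Worth a quick check after any in-place upgrade; something along these lines will show which databases got left behind (whether your build lets you change master’s level is something to test on a non-production copy first):

```sql
-- 100 = 2008 / 2008 R2, 90 = 2005
SELECT name, compatibility_level
FROM sys.databases;

-- Hypothetical fix for a database left at the old level;
-- verify nothing depends on level-90 behaviour before running it
ALTER DATABASE [master] SET COMPATIBILITY_LEVEL = 100;
```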

All things considered, a fairly painless experience. Now I just have to upgrade and reset our log shipping standby server again!

Upgrading log shipping from 2005 to 2008 or 2008R2

If you’re using log shipping you need to be aware of some small print. The general idea is to upgrade the secondary server first and then the primary server because you can continue to log ship from 2005 to 2008R2. But this won’t work if you’re keeping your secondary databases in STANDBY mode rather than IN RECOVERY. If you’re using native log shipping you’ll have some work to do. If you’ve rolled your own log shipping (ahem) you can convert a STANDBY database to IN RECOVERY like this:

RESTORE DATABASE [dw] WITH NORECOVERY;

and then change your restore code to use WITH NORECOVERY instead of WITH STANDBY. (Finally all that aggravation pays off!)
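In a roll-your-own setup the change to the restore step is a one-liner; a sketch (database name and backup path are placeholders):

```sql
-- Each shipped log now restores with NORECOVERY instead of STANDBY
RESTORE LOG [dw]
FROM DISK = N'\\standby\ship\dw.trn'  -- placeholder path
WITH NORECOVERY;
```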

You can either upgrade the secondary server in place or rebuild it. A secondary database doesn’t actually get upgraded until you recover it so the log sequence chain is not broken and you can continue shipping from the primary. Just remember that it can take quite some time to upgrade a database so you need to factor that into the expectations you give people about how long it will take to fail over.

For more details, check this out: http://msdn.microsoft.com/en-us/library/cc645954(SQL.105).aspx.

Upgrading from 2005 to 2008 R2

We’re about to take the plunge and upgrade our servers from SQL 2005 to SQL 2008 R2. Real world accounts of people upgrading to R2 are a bit hard to find so I thought it might be useful to blog what happens. (I don’t count marketing ‘case studies’ that just say stuff like “The process was effortless and the upgrade will pay for itself by the end of the week.”)
We’re using the database engine, Analysis Services and Reporting Services so upgrading by a major version number was looking a bit daunting. I wasn’t expecting much trouble on the engine side of things but, as most of the action in 2008 and R2 appears to have been on the Reporting and BI front, I expected to have quite a bit of work to do. But our testing so far has been one nice surprise after another:
  • The 2005 backups restore cleanly onto R2.
  • R2’s BI Studio upgraded the Reporting and Analysis Services solutions without any issues.
  • The cubes all deployed and processed just fine.
  • R2 BI Studio interacts fine with TFS 2008 version control.
 I’ll blog some more as things develop.

Runaway version store in tempdb

Today was really a new one. I got back from a week off and found our main production server’s tempdb data file had gone from its usual 200MB to 36GB. Ironically I spent Friday at the most excellent SQLBits VI and one of the sessions I attended was Christian Bolton talking about tempdb issues – including runaway tempdb databases. How just-in-time was that?!
I looked into the file growth history and it looks like the problem started when my index maintenance job was chosen as the deadlock victim. (Funny how they almost make it sound like you’ve won something.) That left tempdb pretty big, but for some reason it grew several more times. And since I’d left the file growth at the default 10% (aaargh!), the bigger it got, the bigger each growth increment got. The last growth event was 2.6GB. Good job I’ve got instant file initialization on. Since the Disk Usage report showed it was 99% unallocated, I went into the Shrink Files dialogue, which helpfully informed me the data file was 250MB.
I’m afraid I’ve got a life (allegedly) so I restarted the SQL Server service and then immediately ran a script to make the initial size bigger and change the file growth to a fixed number of MB. The script complained that the requested size was smaller than the current size. Within seconds of a restart! WTF? Now I had to find out what was using so much of it. The DMV sys.dm_db_file_space_usage showed the problem was in the version store, and sys.dm_db_task_space_usage together with the Top Transactions by Age report pointed to a third-party database where I had turned on read_committed_snapshot and then not bothered to monitor things properly.
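The diagnosis boils down to a couple of DMV queries; a sketch of the sort of thing involved (the resize values are illustrative, and the logical file name is whatever sp_helpfile reports for your tempdb):

```sql
-- Set a sane initial size and a fixed growth instead of the default 10%
-- (logical name and sizes are placeholders)
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 500MB, FILEGROWTH = 256MB);

-- Where has tempdb's space gone?
SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
       SUM(unallocated_extent_page_count)       * 8 AS free_kb
FROM tempdb.sys.dm_db_file_space_usage;

-- Which sessions are allocating it?
SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM sys.dm_db_task_space_usage
WHERE user_objects_alloc_page_count
    + internal_objects_alloc_page_count > 0;
```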
Just because something has always worked before doesn’t mean it will work in every future case. This application had an implicit transaction that had been running for over 2 hours.
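Tracking down a long-running transaction like that can be done a couple of ways; a sketch, with the database name as a placeholder:

```sql
-- Oldest active transaction in the offending database
DBCC OPENTRAN ('ThirdPartyDb');  -- placeholder name

-- Or, specifically for transactions generating row versions
-- under read_committed_snapshot / snapshot isolation:
SELECT transaction_id,
       transaction_sequence_num,
       elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;
```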

Have you really fixed that problem?

The day before yesterday I saw our main live server’s CPU climb to a constant 100%, with just the occasional short drop to a lower level. The exact opposite of what you’d want to see.

We’re log shipping every 15 minutes and part of that involves calling WinRAR to compress the log backups before copying them over. (We’re on SQL2005 so there’s no native compression and we have bandwidth issues with the connection to our remote site.) I realised the log shipping jobs were taking about 10 minutes and that most of that was spent shipping a ‘live’ reporting database that is completely rebuilt every 20 minutes. (I’m just trying to keep this stuff alive until I can improve it.) We can rebuild this database in minutes if we have to fail over so I disabled log shipping of that database. The log shipping went down to less than 2 minutes and I went off to the SQL Social evening in London feeling quite pleased with myself. It was a great evening – fun, educational and thought-provoking. Thanks to Simon Sabin & co for laying that on, and thanks too to the guests for making the effort when they must have been pretty worn out after doing DevWeek all day first.

The next morning I came down to earth with a bump: CPU still at 100%. WTF? I looked in Activity Monitor but it was confusing because some sessions had been running for a long time, so it’s not a good guide to what’s using the CPU right now. I tried the standard reports showing queries by CPU (average and total) but they only show the top 10, so they just showed my big overnight archiving and data-cleaning jobs. But Profiler showed it was four queries used by our new website usage tracking system. Four simple indexes later the CPU was back where it should be: about 20% with occasional short spikes.
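The standard reports cap out at ten rows, but the DMV underneath them doesn’t; something like this works on 2005 and up (the column selection and TOP 20 are my choices, not gospel):

```sql
-- Top CPU consumers since their plans entered the cache
SELECT TOP 20
       qs.total_worker_time / 1000                      AS total_cpu_ms,
       qs.execution_count,
       qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
       -- first 100 characters of the statement, for identification
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1, 100) AS query_start
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```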

So the moral is: even when you’re convinced you’ve found the cause and fixed the problem, you HAVE to go back and confirm that the problem has gone.

And, yes, I have checked the CPU again today and it’s still looking sweet.

Log shipping and shrinking transaction logs

I just solved a problem that had me worried for a bit. I’m log shipping from three primary servers to a single secondary server, and the transaction log disk on the secondary server was getting very full. I established that several primary databases had unused space that resulted from big, one-off updates so I could shrink their logs. But would this action be log shipped and applied to the secondary database too? I thought probably not. And, more importantly, would it break log shipping? My secondary databases are in a Standby / Read Only state so I didn’t think I could shrink their logs. I RTFMd, Googled, and asked on a Q&A site (not the evil one) but was none the wiser.

So I was facing a monumental round of shrink, full backup, full secondary restore and re-start log shipping (which would leave us without a disaster recovery facility for the duration). Then I thought it might be worthwhile to take a non-essential database and just make absolutely sure a log shrink on the primary wouldn’t ship over and occur on the secondary as well. So I did a DBCC SHRINKFILE and kept an eye on the secondary. Bingo! Log shipping didn’t blink and the log on the secondary shrank too.
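For reference, the experiment amounts to something like this on the primary (the logical log file name and target size are placeholders; check yours with sp_helpfile):

```sql
-- How much of each database's log is actually in use?
DBCC SQLPERF (LOGSPACE);

-- Shrink the primary's log file; the shrink is a logged operation,
-- so it ships and replays on the secondary with everything else
DBCC SHRINKFILE (dw_log, 512);  -- placeholder name, target size in MB
```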

I just love it when something turns out even better than I dared to hope. (And I guess this highlights something I need to learn about what activities are logged.)