Tag Archives: Tip

Handling service failures

There are times when a SQL-related service has a problem and just stops. Windows services are like that sometimes. Unless you do some additional configuration, you’re not going to know about it until users start shouting at you. Here’s what you can do to make that less likely.

Open up Services and double click on the SQL Server service. By default the Recovery tab looks like this:


If this service stops unexpectedly you won’t know. Not until your phone starts ringing off the hook!

Here’s how I suggest you improve things:


The first time the service fails, the system will wait a minute and then try to restart it. A second failure gets the same treatment. A third failure is something you need to know about, so it runs a PowerShell script that sends an alert email.

This won’t kick in if the service is stopped by hand or by something like a SQL Server service pack. It’s only for when things go wrong.
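If you’d rather script these settings than click through the GUI, sc.exe can apply the same failure actions. This is just a sketch: it assumes the default instance’s service name, MSSQLSERVER, and a fail count reset of one day, so adjust the names and path to suit (and note that the space after each = sign is required):

```
sc failure "MSSQLSERVER" reset= 86400 actions= restart/60000/restart/60000/run/60000 command= "powershell.exe -file \\server_name\share_name\email_service_crash.ps1 -ServerName server_name -ServiceName \"SQL Server\""
```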

The full contents of the Command line parameters box are as follows:

-file "\\server_name\share_name\email_service_crash.ps1" -ServerName "server_name" -ServiceName "SQL Server"

Clearly you’ll need to edit the file path, server name and service name to suit your environment.

The PowerShell script, email_service_crash.ps1, looks like this:

Param (
    [string]$ServerName,
    [string]$ServiceName
)

$from = "from_address"
$to = "to_address"
$subject = $ServiceName + " service is down on " + $ServerName
$body = "Find out why and get it started again."

# Send the alert via an authenticated SMTP server
$creds = new-object System.Net.NetworkCredential("smtp_user_name", "smtp_user_password")
$smtp = new-object System.Net.Mail.SmtpClient("smtp_server_name")
$smtp.UseDefaultCredentials = $false   # beware: the string "False" would convert to $true
$smtp.Credentials = $creds
$smtp.Send($from, $to, $subject, $body)

Again, you’ll need to edit some of the variables and parameters: $from, $to, and the SMTP server, user and password.

Set the script up and test it in a cmd window like this:

powershell.exe -file "\\server_name\share_name\email_service_crash.ps1" -ServerName "server_name" -ServiceName "SQL Server"

Once you’ve got it working, you can take the string from ‘-file’ onwards and use it in the service Recovery tab.

You might be asking what the checkbox Enable actions for stops with errors means. Good question. The documentation is no help, but it turns out there are situations where none of this will kick in unless that box is ticked. For more details, have a look at this question on Server Fault: http://serverfault.com/questions/72318/set-up-recovery-actions-to-take-place-when-a-service-fails
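For what it’s worth, that checkbox can also be set from the command line with sc.exe (again assuming the default instance’s service name):

```
sc failureflag "MSSQLSERVER" 1
```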

You can re-use this in the Recovery tab for each service you want to cover. Just remember to change the -ServiceName parameter you pass to the PowerShell script.


In praise of StatisticsParser.com

Here’s a great free addition to your query tuning toolbox that you may well not have heard about: StatisticsParser.com. It hasn’t been mentioned nearly enough in my opinion.

Here’s how the author, Richie Rump, puts it:

Sometimes reading SQL Server output from Statistics IO and Statistics Time can be a total drag. This page will help with that. Just paste in the output of Statistics IO and/or Statistics Time and press Parse. Your output will be formatted and totaled. Enjoy.

Here’s a quick look at what it does.

If you want to see how long each part of a query takes or what logical reads are involved you can set a couple of options on: statistics io and statistics time.

set statistics io on;
set statistics time on;

-- List all indexes in the current DB
select	t.name as TableName
	, i.name as IndexName
	, i.*
from	sys.indexes i
	inner join sys.objects t on t.object_id = i.object_id
where	i.type > 0
	and t.type = 'U'
	and t.is_ms_shipped = 0
	and t.name not like 'sys%'
order by t.name asc
	, i.type desc
	, i.name asc;

This adds extra output to the Messages tab when you run your query:

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 14 ms.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

(1161 row(s) affected)
Table 'syspalvalues'. Scan count 0, logical reads 2322, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'syssingleobjrefs'. Scan count 0, logical reads 2403, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysschobjs'. Scan count 1, logical reads 2566, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysidxstats'. Scan count 634, logical reads 1472, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 47 ms,  elapsed time = 190 ms.

It’s not that readable, even for a simple example like this. Here’s where StatisticsParser comes in. Paste in this output and hit the Parse button to see the information laid out nice and clearly:


I think this is brilliant.

Thank you, Richie.

URLs to reports with non-default parameters

You can give somebody a hyperlink to get to a Reporting Services report and that’s all they need if there are no parameters or if the default values are what they want. But how do you specify values for parameters? It turns out to be pretty simple but the documentation is a little vague on some of the details.

We have a Sales League report where you can look at a sales team’s figures and see how the team members are doing. In Report Manager the user has to choose their team and run the report. I was asked to supply links on the intranet home page for each team to go straight to their figures. It turns out all I had to do was grab the URL (NB: the ReportServer URL, not the Report Manager URL) and add the parameters as name-value pairs. There are a few things, though, that the documentation doesn’t make clear:
  • The name is the internal name, not the display name. From the example given it could have been either.
  • The value is the actual Value value, not the Label value (i.e. again, it’s the internal value, not the display value).
  • You only need to supply name-value pairs for the parameter(s) you want to change. Our report has four parameters and I only needed to change the third one, so I just added the name-value for that one and the rest stuck with their default values.
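Putting that together, a link for one team might look something like this (the server, folder, report and parameter names here are made up, so substitute your own):

```
http://server_name/ReportServer?/Sales/SalesLeague&SalesTeamID=3
```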

Finding rows that intersect with a date period

This one is mainly a personal reminder but I hope it helps somebody else too. Let’s say you have a table that covers something like currency exchange rates with columns for the start and end dates of the period each rate was applicable. Now you need to list the rates that applied during the year 2009. For some reason this always fazes me and I have to work it out with a diagram. So here’s the recipe so I never have to do that again:

select  *
from    ExchangeRate
where   StartDate <= '31-DEC-2009'
        and EndDate >= '01-JAN-2009'

The logic is that two periods overlap exactly when each one starts before (or when) the other ends: the rate’s period must start on or before the last day of 2009 and end on or after the first day of 2009.

Does this folder exist?

Today I tripped over a problem that was new to me. I wanted to find out, in SQL, whether a folder existed in the file system. I knew about using xp_fileexist to check for existence of a file but what about a folder? Well, it doesn’t work on a folder name and there’s no sign of xp_folderexist.

I finally found in a forum that you use xp_fileexist and pass in the imaginary file nul in that folder (eg. to check if the folder ‘C:\test’ exists, check if file ‘C:\test\nul’ exists). There was no explanation but, according to Raymond Chen who knows a few things, NUL is a special, ‘magic’ file name that goes back to DOS 1.0. So some days you learn an old new thing.
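To make that concrete, here’s a sketch of the folder check (xp_fileexist is undocumented, so treat it accordingly):

```
-- Does the folder C:\test exist? Check for the magic file 'nul' inside it.
declare @exists int;
exec master.dbo.xp_fileexist 'C:\test\nul', @exists output;
select case @exists when 1 then 'Folder exists' else 'Folder not found' end as Result;
```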

Report Manager folder & file security

Just found the answer to something that has been irritating me slightly for ages: how to give a user access to a report without giving them access to all the other reports in the folder.

We have folders that are based on the roles people perform in the company. We granted the appropriate domain groups access to their folders in the Browser role. That gives those people access to the reports in those folders because, by default, a report inherits its security from its parent folder. This works great until the Sales Manager says, “Can you give Student Joe access to the Sales Stats report so he can do a special job for me?” This report is in the Management folder which also contains a load of sensitive reports that Student Joe cannot be allowed to see, so you can’t give him the Browser role on the folder. You can override the inherited security on the Sales Stats report by putting Student Joe in the Browser role at that level but he still can’t get to it in the Report Manager because he has no rights on the Management folder. His only access is via a URL direct to the report.

We thought we were stuck with this and had got used to it. After all, people could get to what they needed so there wasn’t much of a business case for sorting it out properly. But today I got another such request and the red mist seized me. (Perhaps it’s the weather?) I couldn’t believe this was impossible. Surely they thought of this? And, thanks to the excellent Hitchhiker’s Guide to SQL Server Reporting Services, I discovered that they did. They just didn’t go out of their way to mention it.

Here are the steps:

  • On the Home page, click on Site Settings.
  • Click on Configure item-level role definitions.
  • Create a new role (say, View Folders Role) and assign it the View folders task.
  • Now you can assign a user or group the View Folders Role on a folder and they will be able to navigate to the folder and there they will see any reports you have given them access to.

Easy when you know how!