Category Archives: SQL Agent

Filtered index gotcha

We added a filtered index to a table today and a SQL Agent job that updates that table started to fail with this error message:

Msg 1934, Sev 16, State 1, Line 290 : UPDATE failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations. [SQLSTATE 42000]

It turns out that certain SET options must be set correctly in your session before you can update a table that has a filtered index (or before the optimizer will use the index at all); a quick way to check your session is shown after the lists.

All these must be ON:

  • ANSI_NULLS
  • ANSI_PADDING
  • ANSI_WARNINGS
  • ARITHABORT
  • CONCAT_NULL_YIELDS_NULL
  • QUOTED_IDENTIFIER

And this one must be OFF:

  • NUMERIC_ROUNDABORT
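
The SESSIONPROPERTY function covers exactly these seven settings, so you can check what your current session is using (1 = ON, 0 = OFF):

-- Check the current session's SET options: 1 = ON, 0 = OFF.
select	sessionproperty('ANSI_NULLS') as ANSI_NULLS
	, sessionproperty('ANSI_PADDING') as ANSI_PADDING
	, sessionproperty('ANSI_WARNINGS') as ANSI_WARNINGS
	, sessionproperty('ARITHABORT') as ARITHABORT
	, sessionproperty('CONCAT_NULL_YIELDS_NULL') as CONCAT_NULL_YIELDS_NULL
	, sessionproperty('NUMERIC_ROUNDABORT') as NUMERIC_ROUNDABORT
	, sessionproperty('QUOTED_IDENTIFIER') as QUOTED_IDENTIFIER;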

See Books Online for details: https://msdn.microsoft.com/en-us/library/ms188783.aspx (it’s about halfway down the page, in the Remarks section).

So what happened here? It looks like SQL Agent does not, by default, set QUOTED_IDENTIFIER on. I couldn’t find any clear official information on this, but that’s what I’m seeing in SQL Profiler, and I couldn’t find any way to override it in SQL Agent’s configuration settings.

Anyway, I added SET QUOTED_IDENTIFIER ON at the start of the job step and it worked fine.
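
For illustration, the fixed job step looked something like this (the table and column names here are made up):

set quoted_identifier on;

update dbo.SomeTable
set SomeColumn = 1
where FilteredColumn is not null;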

SQL Agent CmdExec gotcha

I was writing a new SQL Agent job for somebody else, so I thought I should comment it a bit more thoroughly than I usually do. There’s an Operating System (CmdExec) step that calls sqlcmd to run a script file:

sqlcmd -S localhost -E -d master -i "\\127.0.0.1\joblogs\somescript.sql" -b

This pattern works fine elsewhere. Here, though, I added a comment to point something out:

rem A comment
sqlcmd -S localhost -E -d master -i "\\127.0.0.1\joblogs\somescript.sql" -b

The job ran successfully but it was suspiciously fast. It turns out that’s because it didn’t do anything. Apparently SQL Agent interprets this differently. I know the DOS shell language is a bit arcane sometimes, but rem just means ‘ignore the rest of this line’ – the following line runs as normal. That’s certainly how it works in a .bat file; I tested to make sure I wasn’t going mad.

Time to dig deeper. Books Online (https://msdn.microsoft.com/en-GB/library/ms190264.aspx) says:

This topic describes how to create and define a Microsoft SQL Server Agent job step in SQL Server 2016 that uses an executable program or operating system command

That ‘an’ (singular) is a subtle clue. I had always assumed this worked like a batch file, but it seems only the first command gets executed. In my case the first line was the comment, which effectively means ‘do nothing’. I tested with multiple commands and, again, only the first one executed.

So don’t confuse it with batch files. Put one command per step and – a bit of lateral thinking here – put your comments in the line(s) underneath:

sqlcmd -S localhost -E -d master -i "\\127.0.0.1\joblogs\somescript.sql" -b
rem A comment
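
If you really do need two commands in one step, my guess – untested here – is that chaining them on a single line with cmd’s && operator would work, since that makes them one command line. The script names below are made up:

sqlcmd -S localhost -E -d master -i "\\127.0.0.1\joblogs\script1.sql" -b && sqlcmd -S localhost -E -d master -i "\\127.0.0.1\joblogs\script2.sql" -b
rem && runs the second command only if the first succeeds; plain & runs it regardless.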

PowerShell in SQL Agent: not what I’d hoped for

Like many DBAs, I suspect, I just haven’t found much need for PowerShell so far. There are lots of articles about how you can back up a database, run a query, get a list of databases and so on, but I just wasn’t seeing anything I couldn’t already do just fine in T-SQL. Still, in many Microsoft server products PowerShell is taking over as the main management tool, with the GUI serving as just a thin skin over the PowerShell API. Could this be coming our way?

So I’ve been meaning to give it a try, and I just had a request to set up a daily FTP upload of some data to a partner organisation. I don’t know the command-line stuff for FTP, so this looked like a good excuse to learn a bit of PowerShell. After a while I had a script that worked nicely – until I put it into a PowerShell step in a SQL Agent job. Then it failed, complaining that I was specifying non-existent arguments for a cmdlet. WTF?

After a lot of digging, I discovered that SQL Agent shells your script out to PowerShell 2.0. This is SQL Server 2014, and PowerShell 2.0 was RTM’d back in 2009. We’re up to PowerShell 5.0 now.
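
If you want to see what your own server does, a PowerShell job step containing just this line will print the version it runs under ($PSVersionTable has been there since 2.0):

# Show the PowerShell version this job step is actually running under.
$PSVersionTable.PSVersion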

I imagine there are sound technical reasons for this but it’s a bit disappointing after all the hype.

Does this folder exist?

Today I tripped over a problem that was new to me. I wanted to find out, in SQL, whether a folder existed in the file system. I knew about using xp_fileexist to check for existence of a file but what about a folder? Well, it doesn’t work on a folder name and there’s no sign of xp_folderexist.

I finally found in a forum that you use xp_fileexist and pass in the imaginary file nul in that folder (e.g. to check whether the folder C:\test exists, check whether the file C:\test\nul exists). There was no explanation but, according to Raymond Chen, who knows a few things, NUL is a special, ‘magic’ file name that goes back to DOS 1.0. So some days you learn an old new thing.
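
Here’s a minimal sketch of the trick (xp_fileexist is undocumented, but its optional output parameter is well known):

-- Does C:\test exist? Check for the magic 'nul' file inside it.
declare @Exists int;
exec master.dbo.xp_fileexist 'C:\test\nul', @Exists output;

if @Exists = 1
	print 'Folder exists.';
else
	print 'Folder does not exist.';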

How to process a cube in a SQL Agent job

It can be done but it’s not well documented, and it’s complicated by the fact that you can’t easily get the result of the processing. Unless you actively check the results and raise an error yourself when there’s been a problem, the job will report success no matter how screwed up the cube processing is. That check works by examining the XML results that the processing step writes out.

Here’s how I do it (and I’d be very interested if you have any suggestions to improve it):

Connect to the Analysis Services server in SQL Server Management Studio.

Right click on the database and select Process.

Configure all the options and then use the Script button to grab the XML that defines the processing job. It will look something like this:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <ErrorConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2">
        <KeyErrorLimit>-1</KeyErrorLimit>
        <KeyErrorLogFile>\\rep01\joblogs\ProcessASOnBI1KeyErrors.log</KeyErrorLogFile>
        <KeyNotFound>ReportAndContinue</KeyNotFound>
        <KeyErrorAction>ConvertToUnknown</KeyErrorAction>
        <KeyErrorLimitAction>StopProcessing</KeyErrorLimitAction>
        <KeyDuplicate>IgnoreError</KeyDuplicate>
        <NullKeyConvertedToUnknown>IgnoreError</NullKeyConvertedToUnknown>
        <NullKeyNotAllowed>ReportAndContinue</NullKeyNotAllowed>
    </ErrorConfiguration>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2">
        <Object>
            <DatabaseID>BI1</DatabaseID>
        </Object>
        <Type>ProcessFull</Type>
        <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
</Batch>

Create a new SQL Agent job.

Add a step to get rid of any old output file(s). Change the step type to CmdExec and paste in this code, modifying the file name:

if exist "\\rep01\joblogs\ProcessASOnBI1Log.xml" (del "\\rep01\joblogs\ProcessASOnBI1Log.xml")

Add a step for the processing. Change the step type to SQL Server Analysis Services Command and paste in the XML. Go to the Advanced page and give it an XML output file – with the same name you used in the previous step – to write its results to.

Add a step to check the output XML file to see whether the process was successful. Change the step type to T-SQL and paste in this code:

-- Check the result files from the cubes
 
set nocount on;
 
create table #File
(
        ResultFileName        varchar(255)
);

insert #File (ResultFileName) values ('\\rep01\joblogs\ProcessASOnBI1Log.xml');
insert #File (ResultFileName) values ('\\rep01\joblogs\ProcessASOnBI2Log.xml');

create table #FileContents 
(
        LineNumber        int identity
        , LineContents        nvarchar(4000)
);
 
declare @FileContents        nvarchar(max) -- max: a long result file would be truncated at nvarchar(4000)
        , @NewLine        char(2)
        , @FileName        varchar(255)
        , @CubesFailed        bit
        , @CmdLine        varchar(300)
 
set @NewLine = char(13) + char(10);
 
set @CubesFailed = 0;
 
-- Loop through result files
declare file_cursor cursor
local forward_only 
for 
select        ResultFileName
from        #File;
 
open file_cursor
 
        fetch next from file_cursor 
        into         @FileName
 
 
        while @@fetch_status = 0
        begin
 
                set @CmdLine = 'type ' + @FileName;
 
                insert #FileContents
                exec master.dbo.xp_cmdshell @CmdLine;
 
                -- Concatenate the file's lines into one string.
                -- Reset first so the previous file's contents don't carry over.
                set @FileContents = NULL;
 
                select        @FileContents = isnull(@FileContents, '') + @NewLine + isnull(LineContents, '')
                from        #FileContents;
 
                -- Echo the file name and its contents into the job output for diagnosis.
                select @FileName;
 
                select @FileContents;
 
                set nocount off;
 
                if @FileContents like '%error%'
                begin
                        set @CubesFailed = 1
                end
 
                delete #FileContents;
 
 
                -- Get the next cursor row
                fetch next from file_cursor 
                into         @FileName
 
        end
 
close file_cursor
deallocate file_cursor
 
 
drop table #FileContents;
drop table #File;
 
 
if @CubesFailed = 1
begin
        raiserror('Cubes failed to build.', 16, 1)
end

Modify it to add your own file name(s) into #File at the top. This step will raise an error if the processing failed. If you don’t do this, you won’t know whether the job has worked.

Start a SQL Agent Job in SQL

How do you start a SQL Agent job in SQL? I recently found myself wanting to be able to do this and it turns out it’s really easy:

exec msdb.dbo.sp_start_job 'Check Integrity'

It returns as soon as the job has started, without waiting for the job to finish. You could use it to run a job from a script or a stored procedure, but the way I use it is in the last step of a job, to run jobs in sequence without having to guess how to time their schedules. No wasted time, and no danger of the second job starting before the first one has finished.
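
For example, the final step of one job can kick off the next job in the chain (the job name here is made up):

-- Last step of the first job: start the next job in the sequence.
exec msdb.dbo.sp_start_job @job_name = N'Rebuild Indexes';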

 

Monitoring SQL Agent Jobs

In my last post I showed how to reliably get your SQL Agent jobs to only bother you if they fail. But how can you be confident that they are in fact running? What I did was set up a report in Reporting Services that reads the job history tables on a list of servers and shows which jobs have run lately and whether they succeeded. I have the report sent to me every morning. Successful jobs are green and failed jobs are red, so I get a clear, consolidated picture of what’s been going on.

Here’s the query the report uses:

if exists (	select	1 
		from	dbo.sysobjects 
		where	id = object_id(N'dbo.rpt_ITSQLAgentJobs') 
			and objectproperty(id, N'IsProcedure') = 1  )
	drop procedure dbo.rpt_ITSQLAgentJobs
go
 
set quoted_identifier on
go
set ansi_nulls on
go
 
create proc dbo.rpt_ITSQLAgentJobs
(
	@From   datetime
	, @To   datetime
)
as
 
/*
----------------------------------------------------------------------
Version:    1.0
Date:       13/05/2009
----------------------------------------------------------------------
*/
 
set nocount on
 
declare @FromInt int
	, @ToInt int
 
-- In sysjobhistory Run Date is an integer in the form YYYYMMDD.
set @FromInt = cast((convert(varchar(8),@From,112)) as int)
set @ToInt = cast((convert(varchar(8),@To,112)) as int)
 
-- I tried a union but the servers aren't all on the same collation so that doesn't work.
-- The temp table approach gets round that.
 
----------------------------------------------------------------------
-- server1
----------------------------------------------------------------------
  
-- Create the temp table from the first server's results.
select	sjh.run_date
	, sjh.run_time
	, sjh.server
	, sj.name as job_name
	, sjh.step_name
	, sjh.message
	, sjh.run_status
into	#t
from	server1.msdb.dbo.sysjobhistory sjh with (nolock)
	inner join server1.msdb.dbo.sysjobs_view sj with (nolock) on sj.job_id = sjh.job_id
where	sjh.step_id = 0 --(Job outcome)
	and sjh.run_date >= @FromInt
	and sjh.run_date <= @ToInt

---------------------------------------------------------------------- 
-- server2 
---------------------------------------------------------------------- 

begin try 

	insert	#t 
	select	sjh.run_date 
		, sjh.run_time 
		, sjh.server collate Latin1_General_CI_AS 
		, sj.name as job_name 
		, sjh.step_name 
		, sjh.message 
		, sjh.run_status 
	from	server2.msdb.dbo.sysjobhistory sjh with (nolock) 
		inner join server2.msdb.dbo.sysjobs_view sj with (nolock) on sj.job_id = sjh.job_id 
	where	sjh.step_id = 0 --(Job outcome) 
		and sjh.run_date >= @FromInt
		and sjh.run_date <= @ToInt
 
end try
 
begin catch
 
	insert	#t
		(run_date
		, run_time
		, [server]
		, job_name
		, step_name
		, [message]
		, run_status)
	select	99999999 -- sentinel run_date so the error row sorts to the top
		, 0
		, 'server2'
		, ''
		, ''
		, error_message()
		, 0
 
end catch
 
-- Repeat the server2 bit for every other server you want to monitor
 
--Return results
select	*
	, case run_status
		when 0 then 'Failed'
		when 1 then 'Succeeded'
		when 2 then 'Retry'
		when 3 then 'Canceled'
		when 4 then 'In progress'
		else 'Unknown (' + cast(run_status as varchar(3)) + ')'
	  end as status_desc
from	#t
order by run_date desc
	, run_time desc
	, [server]
 
go
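
One possible refinement to the query: msdb has an undocumented helper function, agent_datetime, which combines the run_date/run_time integer pair into a real datetime and makes the sorting and filtering less fiddly. A sketch, run locally:

-- Undocumented but handy: combine run_date (YYYYMMDD) and run_time (HHMMSS)
-- into a proper datetime.
select	msdb.dbo.agent_datetime(sjh.run_date, sjh.run_time) as run_datetime
	, sj.name as job_name
from	msdb.dbo.sysjobhistory sjh
	inner join msdb.dbo.sysjobs_view sj on sj.job_id = sjh.job_id
where	sjh.step_id = 0;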

Better Notification from SQL Agent Jobs

I manage about 10 SQL Servers and I rely heavily on SQL Agent jobs to do this. My aim is to have everything run smoothly with minimal intervention from me, but I want to know straight away when something goes wrong. SQL Agent has an excellent scheduler, and what you can do in a job step is flexible and powerful, but it’s not great at coping when a step fails partway through a job. Here’s my simple solution.

Let’s look at an example: a job to backup a number of databases, one in each step. I want it to backup as many as it can, so if there’s an error I want it to keep going and then tell me at the end that the job wasn’t successful.

You can’t pass data directly between job steps. MSDN says you can do it via a permanent table or global temporary table – the permanent table worked for me but not the global temporary table – but this doesn’t feel right either. Depending on how you configure what a step does if it fails, you can control things to a degree:

  • If each step quits the job I don’t get all my backups done.
  • If each step goes to the next step the job won’t notify me there was a problem because it got successfully to the last step and that succeeded too. (I raised a request on Connect about this. If the job history viewer can show an asterisk when one of the steps failed, the notification system should be able to make this distinction too.)
  • I suppose I could use the ‘Go to step X’ to create a structure with an optional ‘Step X-1 failed’ step after each step which is skipped when there’s no error. But that’s just ugly and fragile.

None of these work for me. So what I do is have each step just go to the next step, and make the last step a stored proc that checks the job history for the current run and raises an error if anything failed. That last step quits the job reporting failure or success, and the job is set to notify me when it fails. If any backup fails, the job just moves on to the next one and, at the end, the stored proc sees the failed step and raises a new error, which triggers the notification email to me.
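
Setting each step to carry on like this can be done in the job step GUI (‘On failure action: Go to the next step’) or in T-SQL. A sketch with a made-up job name (action 3 means ‘go to the next step’):

-- Make step 2 of the backup job continue to the next step whether it
-- succeeds or fails (3 = go to the next step).
exec msdb.dbo.sp_update_jobstep
	@job_name = N'Nightly Backups'
	, @step_id = 2
	, @on_success_action = 3
	, @on_fail_action = 3;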

Here’s the stored proc:

if exists (	select	1 
		from	dbo.sysobjects 	
		where	id = object_id(N'dbo.dba_CheckJobSuccess') 
			and objectproperty(id, N'IsProcedure') = 1	)
	drop proc dbo.dba_CheckJobSuccess;
go

set ansi_nulls on
go
set quoted_identifier on
go
 
 
create proc [dbo].[dba_CheckJobSuccess]
(
	@JobID		uniqueidentifier 
)
as
 
/*
------------------------------------------------------------------
Version : 1.0
Date    : 07/05/2009
------------------------------------------------------------------
*/
 
set nocount on
 
 
--Check for job success
 
declare @PrevInstance	int
	, @ErrCount	int
 
--Find the end of the history for the previous run
select	top 1 @PrevInstance = sjh.instance_id
from	msdb.dbo.sysjobhistory sjh with (nolock)
where	sjh.job_id = @JobID
	and sjh.step_id = 0 --0=(Job outcome)
order by sjh.instance_id desc
 
select	@PrevInstance = isnull(@PrevInstance,0)
 
--Look for failed steps since then
select	@ErrCount = count(*)
from	msdb.dbo.sysjobhistory sjh with (nolock)
where	sjh.job_id = @JobID
	and sjh.instance_id > @PrevInstance
	and sjh.step_id <> 0 --0=(Job outcome)
	and sjh.run_status = 0
 
select @ErrCount = isnull(@ErrCount,0)
 
--Raise an error if any step(s) failed
if @ErrCount > 0
begin
	raiserror('One or more job steps failed.', 16, 1)
end
 
go

And here’s how the last step calls it:

--Check for job success
exec master.dbo.dba_CheckJobSuccess @JobID = $(ESCAPE_SQUOTE(JOBID))

The value for @JobID uses token replacement to get the ID of the job in which the code is executing so I can paste this into any job and it will just work.