Monthly Archives: November 2009

A better way to arrange your query windows

The developers of Management Studio and, before that, Query Analyzer have made a bit of a habit of building killer features and then hiding them in the small print of Books Online. Now you may already know that SQL 2000 introduced a new way to arrange your query and result windows, but I only just found out about it. I think it’s great so, in case you blinked and missed it too, here’s how it works.

Since the dawn of time you edited your query in a full window and, when you ran it, the results covered the bottom half of the query. You could use Ctrl+R to toggle the visibility of the results. When you think about it, it’s actually pretty clumsy. These are the only combinations you can see easily:

  • half of the query and half the results
  • the whole query
  • er, that’s it

I was so used to it that I didn’t really notice. Until somebody showed me a much better way to do it.

There’s an option under Tools | Options | Query Results | SQL Server | Results to Grid called Display results in a separate tab. If you tick this and its sub-option, the behaviour gets much more usable. When you run the query, the results fill the screen. You get a second row of tabs below the row of tabs for the query windows you’ve got open. This second row has tabs for the query, the results, the messages and (optionally) things like the query plan.

You can tab backwards and forwards between these tabs using F6 and Shift+F6 respectively. You can quickly toggle between the query and results and you can see both of them properly.

You really should try this. Your muscle memory will protest but stick with it. I think you’ll find it was worth it.

How to process a cube in a SQL Agent job

It can be done, but it’s not well documented and it’s complicated by the fact that you can’t easily get the result of the processing. Unless you actively check and manually raise an error if there’s been a problem, the job will report success no matter how screwed up the cube processing is. The checking works by examining the XML results that the processing step writes out.

Here’s how I do it (and I’d be very interested if you have any suggestions to improve it):

Connect to the Analysis Services server in SQL Server Management Studio.

Right click on the database and select Process.

Configure all the options and then use the Script button to grab the XML that defines the processing job. It will look something like this:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <ErrorConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2">
        <KeyErrorLimit>-1</KeyErrorLimit>
        <KeyErrorLogFile>\\rep01\joblogs\ProcessASOnBI1KeyErrors.log</KeyErrorLogFile>
        <KeyNotFound>ReportAndContinue</KeyNotFound>
        <KeyErrorAction>ConvertToUnknown</KeyErrorAction>
        <KeyErrorLimitAction>StopProcessing</KeyErrorLimitAction>
        <KeyDuplicate>IgnoreError</KeyDuplicate>
        <NullKeyConvertedToUnknown>IgnoreError</NullKeyConvertedToUnknown>
        <NullKeyNotAllowed>ReportAndContinue</NullKeyNotAllowed>
    </ErrorConfiguration>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2">
        <Object>
            <DatabaseID>BI1</DatabaseID>
        </Object>
        <Type>ProcessFull</Type>
        <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
</Batch>
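
As an aside, if you start the Process dialog from a single cube rather than the whole database, the Script button produces the same shape of XMLA with a more specific object reference. A sketch of what the <Object> element might look like in that case (the CubeID here is made up):

<Object>
    <DatabaseID>BI1</DatabaseID>
    <CubeID>Sales</CubeID>
</Object>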

Create a new SQL Agent job.

Add a step to get rid of any old output file(s). Change the step type to CmdExec and paste in this code, modifying the file name:

if exist "\\rep01\joblogs\ProcessASOnBI1Log.xml" (del "\\rep01\joblogs\ProcessASOnBI1Log.xml")
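
If your job ends up producing more than one result file (the checking step further down looks at two, for example), clear each of them in this step. The second line here is just an illustration that follows the same naming pattern:

if exist "\\rep01\joblogs\ProcessASOnBI1Log.xml" (del "\\rep01\joblogs\ProcessASOnBI1Log.xml")
if exist "\\rep01\joblogs\ProcessASOnBI2Log.xml" (del "\\rep01\joblogs\ProcessASOnBI2Log.xml")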

Add a step for the processing. Change the step type to SQL Server Analysis Services Command and paste in the XML. Go to the Advanced page and give it an XML output file – with the same name you used in the previous step – to write its results to.

Add a step to check the output XML file to see whether the process was successful. Change the step type to T-SQL and paste in this code:

-- Check the result files from the cubes
 
set nocount on;
 
create table #File
(
        ResultFileName        varchar(255)
);

insert #File (ResultFileName) values ('\\rep01\joblogs\ProcessASOnBI1Log.xml');
insert #File (ResultFileName) values ('\\rep01\joblogs\ProcessASOnBI2Log.xml');

create table #FileContents 
(
        LineNumber        int identity
        , LineContents        nvarchar(4000)
);
 
declare @FileContents        nvarchar(4000)
        , @NewLine        char(2)
        , @FileName        varchar(255)
        , @CubesFailed        bit
        , @CmdLine        varchar(300)
 
set @NewLine = char(13) + char(10);
 
set @CubesFailed = 0;
 
-- Loop through result files
declare file_cursor cursor
local forward_only 
for 
select        ResultFileName
from        #File;
 
open file_cursor
 
        fetch next from file_cursor 
        into         @FileName
 
 
        while @@fetch_status = 0
        begin
 
                -- Read the XML result file into the temp table via xp_cmdshell
                set @CmdLine = 'type ' + @FileName;

                insert #FileContents
                exec master.dbo.xp_cmdshell @CmdLine;

                -- Start from an empty string so the previous file's contents don't carry over
                set @FileContents = '';

                -- Concatenate the lines of the file into a single string
                select        @FileContents = isnull(@FileContents, '') + @NewLine + isnull(LineContents, '')
                from        #FileContents;

                -- Echo the file name and its contents into the job step output
                select @FileName;

                select @FileContents;
 
                set nocount off;
 
                -- Treat any mention of 'error' in the results as a processing failure
                if @FileContents like '%error%'
                begin
                        set @CubesFailed = 1
                end

                -- Clear the table ready for the next file
                delete #FileContents;
 
 
                -- Get the next cursor row
                fetch next from file_cursor 
                into         @FileName
 
        end
 
close file_cursor
deallocate file_cursor
 
 
drop table #FileContents;
drop table #File;
 
 
if @CubesFailed = 1
begin
        raiserror('Cubes failed to build.', 16, 1)
end

Modify it to add your own file name(s) into #File at the top. This step will raise an error if the processing failed; without it, the job will report success whether or not the cubes processed.

Is this log backup file the next one to restore?

When you restore from a transaction log backup, SQL Server checks the file against the database and gives you an error if the file is not the next one in the sequence. I wanted to know how this works, so I did some digging. There wasn’t much in Books Online that I could see, so I did some practical investigation with some of my log shipping files.

It looks like it compares the last LSN in the backup file’s header with the last LSN in the database’s restore history. If the last LSN in the file is less than the last LSN in the restore history, it indicates that the database already contains those transactions. Here’s some code that does it:

declare    @LastDBLSN    numeric(25,0)
        , @LastFileLSN    numeric(25,0)
        , @DBName    sysname        -- the database you are restoring to
        , @FileName    varchar(255)    -- full path of the log backup file to check
 
--Get last LSN from database
select    top 1 @LastDBLSN = bs.last_lsn
from    msdb.dbo.restorehistory rh with (nolock)
        left join msdb.dbo.backupset bs with (nolock) on bs.backup_set_id = rh.backup_set_id
where    rh.destination_database_name = @DBName
order by rh.restore_history_id desc
 
--Get last LSN from backup file
declare @logbak table
(
    BackupName nvarchar(128)
    , BackupDescription nvarchar(255)
    , BackupType smallint
    , ExpirationDate datetime
    , Compressed tinyint
    , Position smallint
    , DeviceType tinyint
    , UserName nvarchar(128)
    , ServerName nvarchar(128)
    , DatabaseName nvarchar(128)
    , DatabaseVersion int
    , DatabaseCreationDate datetime
    , BackupSize numeric(20, 0)
    , FirstLSN numeric(25, 0)
    , LastLSN numeric(25,0)
    , CheckpointLSN numeric(25,0)
    , DatabaseBackupLSN numeric(25, 0)
    , BackupStartDate datetime
    , BackupFinishDate datetime
    , SortOrder smallint
    , CodePage smallint
    , UnicodeLocaleId int
    , UnicodeComparisonStyle int
    , CompatibilityLevel tinyint
    , SoftwareVendorId int
    , SoftwareVersionMajor int
    , SoftwareVersionMinor int
    , SoftwareVersionBuild int
    , MachineName nvarchar(128)
    , Flags int
    , BindingID uniqueidentifier
    , RecoveryForkID uniqueidentifier
    , Collation nvarchar(128)
    , FamilyGUID uniqueidentifier
    , HasBulkLoggedData bit
    , IsSnapshot bit
    , IsReadOnly bit
    , IsSingleUser bit
    , HasBackupChecksums bit
    , IsDamaged bit
    , BeginsLogChain bit
    , HasIncompleteMetaData bit
    , IsForceOffline bit
    , IsCopyOnly bit
    , FirstRecoveryForkID uniqueidentifier
    , ForkPointLSN numeric(25, 0) null
    , RecoveryModel nvarchar(60)
    , DifferentialBaseLSN numeric(25, 0) null
    , DifferentialBaseGUID uniqueidentifier
    , BackupTypeDescription nvarchar(60)
    , BackupSetGUID uniqueidentifier null
);
 
insert @logbak exec ('restore headeronly from disk = ''' + @FileName + '''');
 
select     @LastFileLSN = LastLSN
from    @logbak;
 
-- Compare them
if @LastFileLSN < @LastDBLSN
begin
        -- Already got these transactions
end
else
begin
        -- Restore log
end
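
For completeness, here’s a minimal sketch of how you might drive the check above from a log shipping restore script. The database and file names are placeholders I’ve made up; RESTORE LOG ... WITH NORECOVERY is the standard way to apply the file and leave the database ready for the next one:

-- Placeholder values: substitute your own database and log backup file
set @DBName = 'MyShippedDB';
set @FileName = '\\rep01\logship\MyShippedDB_tlog.trn';

-- ...run the two queries above to populate @LastDBLSN and @LastFileLSN...

if @LastFileLSN < @LastDBLSN
begin
        -- Already got these transactions so skip the file
        print 'Skipping ' + @FileName;
end
else
begin
        -- Apply the log and stay in the restoring state, ready for the next file
        restore log @DBName from disk = @FileName with norecovery;
end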

Idera discontinues SQLSafe Freeware Edition

No longer available for download. Check it out here: http://www.idera.com/Products/Free-Tools/SQL-safe-Freeware-Edition/

Bummer! I’m using this extensively. Now you have to choose between the Enterprise edition and the Lite Edition. The Lite Edition is $295 per instance, which is not quite as attractive a price as free. I suppose it was too good to last. Luckily, I just got the budget approved to upgrade to 2008 R2 Standard, which has native backup compression. Can’t wait!

Report Manager folder & file security

Just found the answer to something that has been irritating me slightly for ages: how to give a user access to a report without giving them access to all the other reports in the folder.

We have folders that are based on the roles people perform in the company. We granted the appropriate domain groups access to their folders in the Browser role. That gives those people access to the reports in those folders because, by default, a report inherits its security from its parent folder. This works great until the Sales Manager says, “Can you give Student Joe access to the Sales Stats report so he can do a special job for me?” This report is in the Management folder, which also contains a load of sensitive reports that Student Joe cannot be allowed to see, so you can’t give him the Browser role on the folder. You can override the inherited security on the Sales Stats report by putting Student Joe in the Browser role at that level, but he still can’t get to it in Report Manager because he has no rights on the Management folder. His only access is via a URL direct to the report.

We thought we were stuck with this and had got used to it. After all, people could get to what they needed so there wasn’t much of a business case for sorting it out properly. But today I got another such request and the red mist seized me. (Perhaps it’s the weather?) I couldn’t believe this was impossible. Surely they thought of this? And, thanks to the excellent Hitchhiker’s Guide to SQL Server Reporting Services, I discovered that they did. They just didn’t go out of their way to mention it.

Here are the steps:

  • On the Home page, click on Site Settings.
  • Click on Configure item-level role definitions.
  • Create a new role (say, View Folders Role) and assign it the View folders task.
  • Now you can assign a user or group the View Folders Role on a folder. They will be able to navigate to the folder and will see any reports you have given them access to.

Easy when you know how!