Trade Durability for a Performance Boost: SQL Server 2014 Delayed Durability Feature

Another interesting feature that made it into SQL Server 2014, and into my bag of performance tuning tricks, is Delayed Durability. What is delayed durability, and why might we use it?

Traditionally, every transaction in an RDBMS should follow the ACID principles. In computer science, ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties that guarantee that database transactions are processed reliably. In the context of databases, a single logical operation on the data is called a transaction. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction. SQL Server guarantees transactional ACID properties by implementing write-ahead transaction logging, which is explained in SQL Server Books Online. Essentially, the idea is that when a transaction is committed, SQL Server performs a physical write to disk in the transaction log. The commit command does not return until that write is successful. That way, data is guaranteed not to be lost without the front-end system knowing about it, even in case of a memory or power failure.

However, that approach has its own drawbacks when it comes to performance. Imagine an application that has a huge number of small concurrent transactions and its transaction log write overhead. I used to call such applications, which I frequently had to tune against all odds, “death by a thousand cuts”. Yes, you can ask for faster storage or SSDs, or ask for the application to be optimized by changing “chatty” transactions into “chunky” transactions, but with Delayed Durability we have another option as well.
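
Before reaching for any of these options, it is worth confirming that log flushes really are the bottleneck. A minimal sketch, checking the cumulative WRITELOG waits via sys.dm_os_wait_stats (how much wait time is “too much” depends entirely on your workload):

-- Cumulative WRITELOG waits since the instance was restarted (or since the wait stats were cleared).
-- If this wait type dominates, the workload is likely bound by transaction log flushes.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';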

Essentially, in a system where you have a significant bottleneck on the transaction log and can tolerate a moderate chance of data loss, you now have the option to temporarily suspend the requirement to wait for the log flush and acknowledgement via this new feature. With Delayed Durability, transactions are logged to transaction log buffers in memory and control is returned to the application without writing directly to disk. The log buffers are flushed and written to disk later. According to BOL, such a flush happens when any of the following occurs:

  • The in-memory transaction log buffer fills up.
  • sp_flush_log is executed (see the example right after this list).
  • A “normal” fully durable transaction makes any change in the database and commits.
  • A SQL Server shutdown is requested.
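
If you want to force a flush yourself, for example right before a planned shutdown or at the end of a batch of delayed durable work, you can call the system procedure sp_flush_log. A minimal sketch:

-- Flush all outstanding delayed durable transaction log records in the current database to disk.
EXEC sys.sp_flush_log;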

You can set this option at the database level, at the individual transaction level, or, in the case of natively compiled procedures in In-Memory OLTP, at the procedure level. The database level setting overrides lower level settings; if the database is set to DISABLED, trying to commit a transaction using the delayed option will simply be ignored, with no error message. Some transactions are always durable and the option will be ignored for them: system transactions, cross-database transactions, distributed transactions controlled by MSDTC, and operations involving CDC, change tracking, or FileTable.

At the database level you can use ALTER DATABASE statement to set this option:

ALTER DATABASE dbname SET DELAYED_DURABILITY = DISABLED | ALLOWED | FORCED;

There are three options here:

  • ALLOWED – individual transactions can use Delayed Durability at the transaction level.
  • FORCED – all transactions that can use Delayed Durability will do so.
  • DISABLED – disables the feature; this is the default.

You can also set this option in the database properties screen of SSMS:

[Screenshot: Delayed Durability setting in the SSMS database properties dialog]

Once you set Delayed Durability to ALLOWED at the database level, you can use the following to enable it at the transaction level:

 COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

With In-Memory OLTP natively compiled procedures (see my previous post), you can use Delayed Durability in the procedure’s atomic block like this:

BEGIN ATOMIC WITH (DELAYED_DURABILITY = ON, ...)
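
Here the atomic block of the natively compiled procedure takes the option in its WITH clause. A minimal sketch of such a procedure, assuming a memory-optimized table dbo.InsertTestInMem with matching ID and VALUE columns already exists (the table and procedure names are illustrative only):

-- Hypothetical natively compiled procedure; dbo.InsertTestInMem is assumed to be memory-optimized.
CREATE PROCEDURE dbo.TestInsertInMem @c1 int, @c2 varchar(255)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
(
    DELAYED_DURABILITY = ON,
    TRANSACTION ISOLATION LEVEL = SNAPSHOT,
    LANGUAGE = N'English'
)
    -- The atomic block itself is the transaction; it commits with delayed durability.
    INSERT INTO dbo.InsertTestInMem (ID, VALUE) VALUES (@c1, @c2);
END
GO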

Well, enough theory. Let’s see the performance difference when inserting 100,000 rows in a loop with Delayed Durability vs. the traditional fully durable way.

Let’s first create a test database and set Delayed Durability to ALLOWED at the database level:

use master
go
if exists (select 1 from sys.databases where name = 'my_dd_test')
begin
    DROP DATABASE my_dd_test
end
go
CREATE DATABASE my_dd_test
go
use my_dd_test
go
ALTER DATABASE my_dd_test SET DELAYED_DURABILITY = ALLOWED
go

Now let’s create a table to insert data into:

CREATE TABLE dbo.InsertTest (ID int primary key clustered, VALUE varchar(255))

Let’s create a “traditional” insert procedure for this table:

CREATE PROCEDURE TestInsertTraditional @c1 int, @c2 varchar(255) as
begin tran
insert into dbo.InsertTest values (@c1, @c2)
commit
go

Let’s create a Delayed Durability version of the procedure:

create procedure TestInsertDelayed @c1 int, @c2 varchar(255) as
begin tran
insert into dbo.InsertTest values (@c1, @c2)
commit with (DELAYED_DURABILITY=ON)
go

Now let’s commence with the test. First we will insert 100,000 rows into our table with full durability via a traditional insert…commit:

declare @cnt int = 1, @maxcnt int = 100000, @start datetime
truncate table dbo.InsertTest
select @start = getdate()
while @cnt <= @maxcnt
begin
    exec TestInsertTraditional @cnt, 'This is a test';
    select @cnt = @cnt + 1;
end
select datediff(s, @start, getdate())
truncate table dbo.InsertTest

This took 16 seconds on my laptop.


Now let’s do the same with Delayed Durability in action:

set nocount on
declare @cnt int = 1, @maxcnt int = 100000, @start datetime
select @start = getdate()
while @cnt <= @maxcnt
begin
    exec [dbo].[TestInsertDelayed] @cnt, 'This is a test';
    select @cnt = @cnt + 1;
end
select datediff(s, @start, getdate())
truncate table dbo.InsertTest
exec sp_flush_log

This only took three seconds on my laptop.


Obviously this is a great performance boost for certain workloads, as I stated above. However, this isn’t a feature to be taken lightly and turned on by default on every instance of SQL Server you manage; after all, it comes with the cost of possible data loss. With a well-performing disk, the maximum you should expect to lose in a catastrophe, or even a planned and graceful shutdown, is up to one full log block (60KB). However, in the case where your I/O subsystem can’t keep up, it is possible that you could lose as much as the entire log buffer (~7MB). So although this is another great feature and very useful in some circumstances, please consider the above before you turn it on and use it. It’s analogous to traction control on a high-performance sports car: it’s frequently turned off while racing to get an extra performance boost, yet it will not pull you out of trouble on a wet corner while you are going 120 mph.
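
If you do decide to enable it, it is easy to keep track of where it is in effect. A small sketch, checking the database-level setting across all databases via sys.databases (the delayed_durability_desc column is available starting with SQL Server 2014):

-- List the Delayed Durability setting (DISABLED, ALLOWED, or FORCED) for every database.
SELECT name, delayed_durability_desc
FROM sys.databases
ORDER BY name;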
