Starting Down In-Memory Lane – SQL Server 2014 In-Memory OLTP Beginnings Part 1

Starting with the announcement some time ago and the related blog post by David Campbell – The coming in-memory database tipping point – I was really excited to get my hands on the SQL Server 2014 CTP and experiment with the new In-Memory OLTP feature, aka Hekaton. I had spoken about it at a high level to a number of customers, but only recently, after downloading SQL Server 2014 CTP2, did I actually start playing with it. So how do you create memory-optimized tables and query them?
First, let's touch on what memory-optimized tables are in SQL Server 2014:
Memory-optimized tables are fully ACID-compliant data storage structures that keep their data primarily in the system's memory, allowing rapid access to that data in a transactional (OLTP) environment; users can read and write to these structures without running into transactional consistency issues.
This feature has been achieved by SQL Server 2014 through taking the following approach:

  • In-Memory Row-Versioning: Memory-optimized tables maintain multiple versions of the same row, dispersed over the table's in-memory data structure, which allows rows to be read and written concurrently without transactional inconsistency issues. That said, as with the SNAPSHOT isolation level, row-versioning handles conflicts by throwing an error to the application, which is expected to implement some sort of retry logic for failed transactions.
  • Data durability is guaranteed through transaction logging.
  • Checkpoint and delta files: placed on a special filegroup for memory-optimized storage, these files are written to sequentially (and so are optimized for spinning disk), and are used mainly during SQL Server start-up to load the memory-optimized tables in a database.
  • More on the architecture of In-Memory OLTP can be found here as well – Architectural Overview of SQL Server 2014's In-Memory OLTP Technology

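Because conflicts surface to the caller as errors, that retry logic can live right in T-SQL. Here is a minimal sketch of the pattern, not a production implementation – it assumes a memory-optimized dbo.Orders table like the one created later in this post, and uses error 41302 (raised on a write-write conflict) as the retryable case:

DECLARE @retry int = 3;
WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Orders WITH (SNAPSHOT)
           SET OrderDate_Time = GETUTCDATE()
         WHERE Order_Id = 1;
        COMMIT TRANSACTION;
        SET @retry = 0;  -- success, exit the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 41302  -- write-write conflict: retry
            SET @retry -= 1;
        ELSE
            THROW;  -- anything else is a real error
    END CATCH
END;

In a real application you would typically also add a short back-off delay and handle the other validation-failure errors the engine can raise.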
So let's fire up SQL Server 2014 SSMS and create a database called Orders. To create memory-optimized tables in that database, we will have to add a memory-optimized filegroup to the database and a file to that filegroup, essentially enabling In-Memory OLTP there. We will then create memory-optimized tables, insert some data, and query them with T-SQL. In the upcoming part 2 of this post I will also create a simple natively compiled stored procedure and get the data that way.

  • Create Database and enable In-Memory OLTP:

    CREATE DATABASE orders;

    ALTER DATABASE orders
    ADD FILEGROUP orders_in_memory_optimized_filegroup CONTAINS MEMORY_OPTIMIZED_DATA;

    ALTER DATABASE orders
    ADD FILE (NAME = 'orders_in_memory_oltp_1', FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL12.GENNADYKSQL2K14\MSSQL\DATA\Orders_InMemory_OLTP_1')
    TO FILEGROUP orders_in_memory_optimized_filegroup;


    IF NOT EXISTS (SELECT name FROM sys.filegroups
                   WHERE is_default = 1 AND name = N'orders_in_memory_optimized_filegroup')
        ALTER DATABASE orders MODIFY FILEGROUP orders_in_memory_optimized_filegroup DEFAULT;

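To double-check that the filegroup was set up correctly, the catalog can be queried directly; a quick sketch (type 'FX' is the catalog code for a memory-optimized data filegroup):

    SELECT name, type, type_desc
    FROM orders.sys.filegroups
    WHERE type = 'FX';  -- 'FX' = memory-optimized data filegroup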

  • Now let's create some memory-optimized tables:

    CREATE TABLE Orders
    (
        Order_Id int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 20000),
        OrderDate_Time datetime2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)


    CREATE TABLE Items
    (
        Item_Id int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 20000),
        ItemDesc varchar(200) NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)


    CREATE TABLE OrderItems
    (
        Order_Item_Id int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 20000),
        Item_Id int NOT NULL,
        Order_Id int NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)

A couple of interesting items above are worth stopping and looking at more deeply.

  • Note the durability option I used creating these tables – SCHEMA_AND_DATA. This is actually the default setting. It ensures that data is recovered to the memory-optimized table when SQL Server is restarted or recovering from a crash. Had I picked the other setting – SCHEMA_ONLY – the memory-optimized table would be truncated if/when SQL Server restarts or recovers from a crash; the table would be re-created as a blank table at the end of the restart/recovery operation, and my data in it wouldn't persist.
  • That new, odd-looking bucket_count index parameter. The bucket_count parameter on a nonclustered hash index dictates the size of the hash table allocated for the index. In the real world, unlike my demo, there is a case for careful capacity planning here: a bucket count that is too high leads to larger memory utilization and longer scans, while one that is too low leads to performance degradation on lookups and inserts. Microsoft recommends that bucket_count be twice the maximum number of unique index keys.
  • Indexing. Only 8 indexes are allowed on a memory-optimized table. They also cannot be altered, only dropped and recreated. All indexes are covering indexes, i.e. they include all columns of the table.
  • Constraints. A primary key is required on a memory-optimized table. Foreign key constraints are not supported.

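Once a hash-indexed table is in use, the bucket_count sizing can be sanity-checked with the sys.dm_db_xtp_hash_index_stats DMV; a sketch, run in the orders database:

SELECT OBJECT_NAME(hs.object_id) AS table_name,
       i.name                    AS index_name,
       hs.total_bucket_count,
       hs.empty_bucket_count,
       hs.avg_chain_length,
       hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
INNER JOIN sys.indexes AS i
    ON hs.object_id = i.object_id AND hs.index_id = i.index_id;

Roughly speaking, a high avg_chain_length suggests the bucket count is too low, while a very large share of empty buckets suggests it is oversized.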
Now let's load some data into our tables:

INSERT INTO Orders VALUES (1, GETUTCDATE())
INSERT INTO Orders VALUES (2, GETUTCDATE())
INSERT INTO Orders VALUES (3, GETUTCDATE())
INSERT INTO Orders VALUES (4, GETUTCDATE())


INSERT INTO Items VALUES (1, 'Bike')
INSERT INTO Items VALUES (2, 'Chair')


INSERT INTO OrderItems VALUES (1, 1, 2)
INSERT INTO OrderItems VALUES (2, 2, 1)
INSERT INTO OrderItems VALUES (3, 1, 3)
INSERT INTO OrderItems VALUES (4, 2, 4)


Now let's get some data from these tables via T-SQL; for example, let's get all of the orders for chairs:

SELECT o.Order_Id, o.OrderDate_Time
FROM Orders o
INNER JOIN OrderItems oi ON o.Order_Id = oi.Order_Id
INNER JOIN Items i ON oi.Item_Id = i.Item_Id
WHERE i.ItemDesc = 'Chair'

Here is the result, as expected:


And the execution plan:


There is also a new report in SSMS that does a great job of reporting memory usage by memory-optimized objects from the DMVs. Note that the tables we created are there:


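The numbers behind that SSMS report can also be pulled straight from a DMV; a sketch using sys.dm_db_xtp_table_memory_stats, run in the orders database:

SELECT OBJECT_NAME(object_id) AS table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb,
       memory_allocated_for_indexes_kb,
       memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id > 0;  -- positive ids are user tables; negative ids are system allocations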
Next, in parts 2 and 3, I plan to move on to natively compiled stored procedures, In-Memory OLTP DMVs, perfmon counters, and troubleshooting.

