Design principles of the in-memory OLTP engine – Avoid CPU and memory overhead for interpreted T-SQL

In this blog, I explain why the in-memory OLTP engine is significantly faster than the traditional on-disk engine. Interpreted T-SQL carries high CPU and memory overhead because SQL Server has to compile the command before the engine can execute it. In-memory OLTP reduces the number of recompilations, which can lead to much faster query execution with less impact on CPU and memory.
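As a minimal sketch of how that overhead is avoided (assuming a memory-optimized table dbo.OrdersInMemory already exists; the procedure, table and column names here are illustrative only), a natively compiled stored procedure is compiled to machine code once, at creation time, instead of being interpreted at every execution:

CREATE PROCEDURE dbo.usp_GetCustomerOrders
    @CustomerId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- The body is compiled to machine code when the procedure is created,
    -- so no T-SQL interpretation happens at execution time.
    SELECT OrderId, OrderDate, Amount
    FROM dbo.OrdersInMemory
    WHERE CustomerId = @CustomerId;
END;

Such a procedure can only reference memory-optimized tables, but it is called like any other stored procedure.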

By | April 6th, 2020 | Microsoft SQL Server | 0 Comments | Reading Time: 10 minutes

Design principles of the in-memory OLTP engine – Avoid physical reads

Microsoft introduced in-memory OLTP in SQL Server 2014, advertising that queries can be up to 100 times faster. In this blog series I explain the design principles of the in-memory OLTP engine and why it has the potential to be significantly faster than the traditional on-disk engine: it skips the complexity of handling data pages in the buffer pool and completely eliminates logical and physical reads, which can result in much faster query execution and less CPU impact.
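To illustrate (a minimal sketch only; it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup, and the table and column names are made up), a memory-optimized table is declared like this and lives entirely in memory, so reading it never goes through buffer pool pages:

CREATE TABLE dbo.OrdersInMemory
(
    OrderId    INT            NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT            NOT NULL INDEX ix_CustomerId NONCLUSTERED,
    OrderDate  DATETIME2      NOT NULL,
    Amount     DECIMAL(10, 2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
-- Rows are stored in memory-optimized structures, not on 8 KB data pages,
-- so queries against this table report no logical or physical reads.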

By | December 30th, 2019 | Microsoft SQL Server | 2 Comments | Reading Time: 8 minutes

The limitations of the missing index feature – It does not suggest filtered indexes

This will be my last blog about the limitations of the missing index feature. This time I will demonstrate that the missing index feature does not suggest filtered indexes. Filtered indexes reduce the index size and therefore the storage allocated in the data file, and rebuilding or reorganizing them requires fewer resources, so it is worth creating filtered indexes where possible. The following example will prove that the missing index feature does not consider filtered indexes.
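For reference (a sketch only; the table name and the filter predicate are illustrative, not the example used in the post), a filtered index is an ordinary nonclustered index restricted by a WHERE clause, so it contains only the rows that match the predicate:

CREATE NONCLUSTERED INDEX ix_Orders_Open
ON dbo.Orders (CustomerId, OrderDate)
WHERE Status = 'Open';
-- Only rows with Status = 'Open' are stored in the index,
-- which keeps it much smaller than an index over the whole table.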

By | July 7th, 2019 | Microsoft SQL Server | 0 Comments | Reading Time: 6 minutes