The SQL Dude!

On-Disk & In-Memory Pages that teach you the SQL way of life!

Posts Tagged ‘optimizer’

Scan, Find & Understand Missing Indexes in your SQL Database

Posted by Sudarshan Narasimhan on June 17, 2016


This post is Part 4 of my series of posts on Index Maintenance for a DBA. So here goes.

Enough has been said about missing indexes in SQL Server. Before you read any further, it is very important to understand that the “missing indexes” your colleagues & friends talk about are really missing index “recommendations“. The keyword is recommendations. Take the information in this post as a whole and don’t go crazy creating every missing index recommended when you run the query given below. Refer to my earlier posts in the indexing series (duplicate and unused indexes) to see why it is a bad idea to go “index-crazy”. You need to carefully evaluate the benefit of having an index vs. the cost of maintaining it.

Some Background

SQL Server exposes index recommendations via DMVs (sys.dm_db_missing_index_details, sys.dm_db_missing_index_groups and sys.dm_db_missing_index_group_stats). These dynamic management views contain information collected by the SQL Server query optimizer as your DB workloads and queries execute on the database engine. Since this is “dynamic” information, it is cleared on every restart and is only as relevant as your last SQL Server service restart. So checking it periodically is something that needs to get into your blood as a DBA; it is NOT a one-off activity.
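
Because the DMV contents are wiped on every restart, it is worth checking how much workload history the recommendations actually represent before acting on them. A minimal sketch (sys.dm_os_sys_info exposes the last service start time on SQL Server 2008 and later):

-- How long has this instance been collecting missing index data?
-- The missing index DMVs are reset at every SQL Server service restart.
SELECT  sqlserver_start_time,
        DATEDIFF(DAY, sqlserver_start_time, GETDATE()) AS [Days_Of_Data_Collected]
FROM sys.dm_os_sys_info;
GO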

 

Things to Consider & Understand before implementing Missing Index Recommendations

Your boss asks you if this index thingy will improve the performance of the database. The favourite DBA response “It Depends”  😀

MrBeanItDepends

Well, it really does! I’ll explain why… Like any other recommendation, you need to take additional information into account along with this data before coming to a conclusion. Consider, for example, the following:

  1. Has your DB workload changed since the time you collected the missing index recommendations? (i.e. Has there been an application release or DB schema upgrade since)
  2. Are there similar existing indexes on the table where it’s recommending you create another index? (Maybe you just need to add a few additional columns to the existing index as included columns to “cover” it. See the helper query after this list to check what already exists…)
  3. Is the query batch or workload very frequently run, enough that the impact is very high if you create this index? (Refer to the last_user_seek and last_user_scan columns to find out when a particular query last used these columns for seek/scan operations)
  4. Do you have enough space in the DB/disk to afford yet another index? Especially important, if it’s a large table. (DBA 101 question, I know!)
  5. If you create this index and it’s rarely used, you are increasing the maintenance overhead and also slowing down WRITE operations on the table.
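
To help with point #2 above, here is a small helper sketch (dbo.YourTable is just a placeholder; replace it with your own table name) that lists the key and included columns of every existing index on a table, so you can compare them against the recommendation:

-- List existing indexes and their key/included columns for a given table,
-- so you can compare them against a missing index recommendation.
SELECT  i.[name] AS [IndexName],
        i.[type_desc] AS [IndexType],
        c.[name] AS [ColumnName],
        ic.key_ordinal,
        ic.is_included_column
FROM sys.indexes i
    INNER JOIN sys.index_columns ic ON ic.[object_id] = i.[object_id] AND ic.index_id = i.index_id
    INNER JOIN sys.columns c ON c.[object_id] = ic.[object_id] AND c.column_id = ic.column_id
WHERE i.[object_id] = OBJECT_ID('dbo.YourTable')   /* Replace with your table name */
ORDER BY i.[name], ic.is_included_column, ic.key_ordinal;
GO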

 

IT DEPENDS, really it does!

 

Missing Index – Improvement Measure Calculation Logic

To help answer some of these questions, I calculate an arbitrary number to determine the “usefulness” of a new index, called the improvement measure. The higher this number, the more useful it is to create the index. The improvement measure is calculated based on 4 factors (a worked example follows this list):

  1. Average cost of the user queries that could be reduced by the new index. This is the current cost of the query.
  2. Average impact on the query if this index were created, i.e. the query cost would on average drop by this percentage value.
  3. Number of seek operations in user queries that this index could have been used for.
  4. Number of scan operations in user queries that this index could have been used for.
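
Multiplying these factors together gives the improvement measure used in the script below. A quick worked example with made-up numbers:

-- Hypothetical example: average query cost of 8.2, average impact of 90%,
-- 1500 seeks and 0 scans  =>  8.2 * 90 * (1500 + 0) = 1,107,000
SELECT CONVERT (decimal (28,1), 8.2 * 90 * (1500 + 0)) AS [Improvement_Measure];
GO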

 

Now that you understand what missing indexes are, the factors to consider and the logic behind the recommendations, go ahead and run the below query on your databases. The output provides you the “CREATE INDEX” T-SQL statement that you can use to create the index, if you see fit after considering all the factors mentioned so far. This saves you the time of having to write the T-SQL yourself. The CREATE INDEX statement will include the index key columns in order and suggest included columns where applicable.

 



Script – Find Missing Indexes Recommendations in SQL Server

USE [msdb]  /*Replace with your Database Name */
GO
SELECT TOP 50 
    GETDATE() AS [RunTime],
    DB_NAME(mid.database_id) AS [DBNAME], 
    OBJECT_NAME(mid.[object_id]) AS [ObjectName], mid.[object_id] AS [ObjectID],
    CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)) AS [Improvement_Measure],
    'CREATE INDEX missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle) 
    + ' ON ' + mid.statement 
    + ' (' + ISNULL (mid.equality_columns,'') 
    + CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END + ISNULL (mid.inequality_columns, '')
    + ')' 
    + ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS [CREATE_INDEX_Statement],
    migs.user_seeks, migs.user_scans, migs.last_user_seek, migs.last_user_scan, migs.avg_total_user_cost, migs.avg_user_impact, migs.avg_system_impact,
    mig.index_group_handle, mid.index_handle
FROM sys.dm_db_missing_index_groups mig
    INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
    INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)) > 10
    AND mid.database_id = DB_ID()
ORDER BY [Improvement_Measure] DESC
GO

 

Script Result
Here is a sample output from my test system showing all missing indexes recommended on the MSDB database.

MissingIndexes

Test, Test, Test!

Did I mention that you must test these index recommendations on your Test system? If you know your application workload/query, you must test to gauge the impact of having this index.

Keep in mind, your testing must not be to run just a single batch/query, as 95% of the time this will yield a positive result. You must include other sample workloads along with your specific query. Why? Because indexes are structures on the table, not specific to just your query. Other queries can use them too, and SQL Server has to keep the index up to date when data changes happen to the table. Always test any index change with as close to a real-world, production-like workload as possible. You don’t want any surprises on Production now, do you?  🙂
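
A simple way to gauge the before/after effect on a test system is to wrap your representative workload with I/O and timing statistics (just a sketch; use whatever test harness you normally rely on):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
GO
-- Run your representative queries/batches here, once before and once after
-- creating the recommended index, and compare logical reads, CPU and elapsed time.
GO
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
GO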

Well SQL folks, this concludes my indexing series of posts for DBA’s. Leave your comments down below if you find this useful and would like more material on these lines.

 

-Sudarshan (TheSQLDude)

Posted in DB Maintenance, Indexing, Performance, T-SQL | 1 Comment »

A scenario on Query Tuning–Included Columns are a good thing!

Posted by Sudarshan Narasimhan on December 20, 2012


Well, this post has been due for quite some time. I had opened up a survey/opinion poll on https://thesqldude.com/2012/07/10/user-opinion-poll-what-topics-would-you-like-to-see-posted-on-thesqldude-weblog/ sometime back and as expected most of you out there want to see more on performance tuning and query optimization.

image

  Figure 1: Opinion Poll Results

Without much ado and by popular demand here goes a simple scenario on query tuning.

Tuning


One of my engineers came to me with the following scenario…

Problem Statement

We have this Update query that is taking much longer to run than before. No schema changes have been made, though it’s quite possible data changes have been happening. The update query goes like this,

update msgboard set mb_status =10, mb_changed_by ='SATCOM'  
from msgboard 
where mb_type = 210 

Table Schema

I was easily able to reproduce this issue on my instance; here is how my table setup looks.

/****** Object:  Table [dbo].[msgboard]    Script Date: 12/20/2012 01:52:07 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[msgboard](
    [mb_id] [int] NOT NULL,
    [mb_type] [int] NOT NULL,
    [mb_status] [int] NOT NULL,
    [mb_changed_by] [varchar](50) NULL,
    [mb_changed_dt]  AS (getdate())
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
CREATE UNIQUE CLUSTERED INDEX [pk_msgboard] ON [dbo].[msgboard] 
(
    [mb_id] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON, FILLFACTOR = 100) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX [IX_msgboard_type] ON [dbo].[msgboard] 
(
    [mb_type] ASC,
    [mb_status] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
GO

I populated this table with some sample data using the following script: 1000 rows in a loop, plus one extra row (mb_id = 1001) with mb_type = 210.

declare @counter int
set @counter=0
declare @mbid int
set @mbid=1
declare @mbtype int
set @mbtype=200
declare @mbstatus int
set @mbstatus=10
while(@counter<1000)
begin
INSERT INTO [dbo].[msgboard]
           ([mb_id]
           ,[mb_type]
           ,[mb_status]
           ,[mb_changed_by])
     VALUES
           (@mbid,@mbtype,@mbstatus,'THESQLDUDE')
set @mbid+=1
set @counter+=1
end
GO
INSERT INTO [test].[dbo].[msgboard]
           ([mb_id]
           ,[mb_type]
           ,[mb_status]
           ,[mb_changed_by])
     VALUES
           (1001,210,10,'MANWITHNONAME')

So, I have 2 indexes on this table:-

1) A unique Clustered Index pk_msgboard on the mb_id column

2) A composite non-clustered index IX_msgboard_type on the columns mb_type and mb_status (in this order).

Query Execution Details

Here is what I observed when I ran the following update statement.

update msgboard set mb_status =10, mb_changed_by ='SATCOM'  
from msgboard 
where mb_type = 210 

The actual execution plan looks like this,

image

Total Cost: 0.03124485

As you can see, this query is using the Clustered Index pk_msgboard and it’s performing a Scan. At this point, two questions should pop into your head:

1. Why is the query using the clustered index?

2. Why is it performing a Scan operation?

Answer to question #1

Let’s look at the update query carefully. The query is updating the columns mb_status and mb_changed_by, and there is a filter on the column mb_type. You might think: well, I have an index IX_msgboard_type on the mb_type column, so why is SQL Server not using this non-clustered index?

To prove I was smarter than the SQL Server optimizer, I forced it to use the non-clustered index by adding an index hint. The query looks like this,

update msgboard set mb_status =10, mb_changed_by ='SATCOM'  
from msgboard WITH(INDEX(IX_msgboard_type))
where mb_type = 210 

image

Total Cost: 0.03651106

  • Notice that the cost for the query with the index hint is higher than the one without! If the table had more rows, we would have seen a bigger difference in cost 🙂
  • Additionally, a Table Spool operator is now part of the execution plan. The Table Spool operator caches the row to be updated (in tempdb) and then feeds it to the Clustered Index Update operator.
  • This proves that the SQL Server optimizer was actually choosing the cheapest and best plan it could come up with.

The reason SQL Server picked the clustered index is that the query is updating the columns mb_status and mb_changed_by. The column mb_status is not the leading column in the NC index IX_msgboard_type, but the 2nd column. Why does this matter? It matters because the statistics for this NC index are built on the leading column, which is mb_type. Here is how the histogram looks.

dbcc show_statistics ('dbo.msgboard','IX_msgboard_type')

image

To avoid a bookmark lookup (the Key Lookup operator from SQL 2005 onwards), SQL Server decided to use the clustered index since it covers all the columns in the table. If SQL Server were to use the NC index, it would have to do a lookup or cache the results (ahem, the Table Spool operator!) and then find the matching row in the clustered index to actually update the 2 columns. This is an additional cost, and this is why SQL Server decided not to go with that plan.

Coming to the 2nd question: why a Scan? It might surprise some of you, but a Seek does not always imply good performance. My favourite answer: “It depends”. In this case, the table only had 1001 rows, of which only 1 row (mb_id=1001) qualified for the update. When using the CI Scan, SQL Server was applying the filter on mb_type=210 and that returned 1 row back. A Clustered Index Seek in this case would not make any positive difference in query execution time.

OBJECT:([test].[dbo].[msgboard].[pk_msgboard]), WHERE:([test].[dbo].[msgboard].[mb_type]=(210)) ORDERED

To test this, I added the FORCESEEK table hint to this update query and as expected the query did not run faster.

update msgboard set mb_status =10, mb_changed_by ='SATCOM'  
from msgboard WITH (FORCESEEK) where mb_type = 210 

In fact, it has the same cost as the query with the index hint: 0.03651106.
 

Solution

Instead of trying to second-guess the query optimizer and tweak the execution plan it is producing, this issue needs a different approach: back to basics. Based on the query design, it would help if the column being updated, mb_status, was “covered” by the non-clustered index. That can be taken care of by having IX_msgboard_type carry mb_status as a non-key column rather than as a key column. This is an important point, and it is exactly why the “included columns” feature was introduced.

So, I modified the index IX_msgboard_type to remove the column mb_status from the index key and instead added it back as an included column:

DROP INDEX [IX_msgboard_type] ON [dbo].[msgboard] WITH ( ONLINE = OFF )
GO
CREATE NONCLUSTERED INDEX [IX_msgboard_type] ON [dbo].[msgboard] 
(
    [mb_type] ASC
)
INCLUDE ( [mb_status]) WITH (STATISTICS_NORECOMPUTE  = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
GO

The biggest advantage of having an included column in a non-clustered index is that the query optimizer can locate all the column values within the index. The base table or clustered index data is not accessed, resulting in fewer disk I/O operations and hence faster query execution. Also, the index is smaller, since non-key columns are stored only at the leaf level. Many advantages indeed.
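
If you want to see the size difference for yourself, here is a quick check (a sketch; run it in the database that holds the table, the test DB in this example) showing page counts per index level, where included columns live only at the leaf level (index_level = 0):

-- Page counts per level for each index on dbo.msgboard.
-- Non-key (included) columns are stored only at the leaf level (index_level = 0).
SELECT  i.[name] AS [IndexName],
        ps.index_level,
        ps.page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('dbo.msgboard'), NULL, NULL, 'DETAILED') ps
    INNER JOIN sys.indexes i ON i.[object_id] = ps.[object_id] AND i.index_id = ps.index_id
ORDER BY i.[name], ps.index_level;
GO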

Here is how the update query plan looked after adding mb_status as an included column.

image

My oh my, this plan looks so simple, clean and, yeah, faster. The actual cost is now 0.0132843, which is a big improvement over the original plan’s cost of 0.03124485 (and the hinted plan’s 0.03651106).

So we have achieved our original goals:

a) Getting SQL Server to use the non-clustered index IX_msgboard_type, instead of the clustered index.

b) Getting a Seek operation instead of a Scan.

c) A cheaper query cost.



So, I hope everyone understood how beneficial included columns are on non-clustered indexes. For more information on guidelines and restrictions with included columns, refer to this MSDN article.

That’s all for now folks. Keep tuned to this site for more.

-TheSQLDude (Sudarshan)

Posted in Performance, Query Optimization | 6 Comments »