The SQL Dude!

On-Disk & In-Memory Pages that teach you the SQL way of life!

Archive for June, 2016

Scan, Find & Understand Missing Indexes in your SQL Database

Posted by Sudarshan Narasimhan on June 17, 2016


This post is Part 4 of my series of posts on Index Maintenance for a DBA. So here goes.

Enough has been said & written about missing indexes in SQL Server. Before you read any further, it is very important to understand that the “missing indexes” your colleagues & friends talk about are really missing index “recommendations“. The keyword is the recommendations part. Take the information in this post as a whole and don’t go crazy creating every missing index recommended when you run the query given below. Refer to my earlier posts in the indexing series (duplicate indexes, unused indexes) to know why it is a bad idea to go “index-crazy”. You need to carefully evaluate the benefit of having an index vs. the cost of maintaining it.

Some Background

SQL Server exposes missing index recommendations via DMVs (sys.dm_db_missing_index_details, sys.dm_db_missing_index_groups and sys.dm_db_missing_index_group_stats). These dynamic views contain information collected by the SQL Server query optimizer as your DB workloads and queries execute on the database engine. Since this is “dynamic” information, it is only as relevant as your last SQL service restart. So checking this periodically is something that needs to get into your blood as a DBA; it is NOT a one-off activity.
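
Since this data is wiped on every restart, it helps to know exactly how long the DMVs have been accumulating information before you trust the numbers. A minimal sketch, using sys.dm_os_sys_info (available on SQL Server 2008 R2 and later):

SELECT sqlserver_start_time,
       DATEDIFF(DAY, sqlserver_start_time, GETDATE()) AS [Days_Since_Restart]   -- how long the missing index DMVs have been collecting data
FROM sys.dm_os_sys_info;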

 

Things to Consider & Understand before implementing Missing Index Recommendations

Your boss asks you if this index thingy will improve the performance of the database. The favourite DBA response: “It Depends”  😀

[Image: MrBeanItDepends]

Well, it really does! I’ll explain why… Like any other recommendation, you need to take additional information into account along with this data before coming to a conclusion. Consider, for example, the following:

  1. Has your DB workload changed since the time you collected the missing index recommendations? (i.e. Has there been an application release or DB schema upgrade since then?)
  2. Are there similar existing indexes on the table where it’s recommending you create another index? (Maybe you just need to add a few additional columns to the existing index as included columns to “cover” it… A quick way to check this is sketched after this list.)
  3. Is the query batch or workload run frequently enough that the impact of creating this index would be high? (Refer to the last_user_seek and last_user_scan columns to find out when queries last needed this index for seek/scan operations.)
  4. Do you have enough space in the DB/disk to afford yet another index? Especially important if it’s a large table. (DBA 101 question, I know!)
  5. If you create this index and it’s rarely used, you increase the maintenance overhead and also slow down WRITE operations on the table.
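
As promised in point 2, here is a minimal sketch to compare a recommendation against the indexes that already exist on the table. ‘dbo.YourTable’ is a placeholder name; replace it with the table named in the recommendation.

USE [msdb]  /*Replace with your Database Name */
GO
SELECT i.name AS [IndexName],
       i.type_desc AS [IndexType],
       c.name AS [ColumnName],
       ic.key_ordinal,             -- 0 for included columns, otherwise position in the index key
       ic.is_included_column
FROM sys.indexes AS i
INNER JOIN sys.index_columns AS ic
    ON i.[object_id] = ic.[object_id] AND i.index_id = ic.index_id
INNER JOIN sys.columns AS c
    ON ic.[object_id] = c.[object_id] AND ic.column_id = c.column_id
WHERE i.[object_id] = OBJECT_ID('dbo.YourTable')   /* placeholder table name */
ORDER BY i.name, ic.is_included_column, ic.key_ordinal;
GO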

 

IT DEPENDS, really it does!

 

Missing Index – Improvement Measure Calculation Logic

To help answer some of these questions, I calculate an arbitrary number to determine the “usefulness” of a new index, called the improvement measure. The higher this number, the more useful it is to create the index. The improvement measure is calculated from 4 factors (the exact expression is shown after this list and is the same one used to rank results in the script further below):

  1. Average cost of the user queries that could be reduced by the new index. This is the current cost of the query.
  2. Average impact on the query if this index were created. i.e. The query cost would on average drop by this percentage value.
  3. Number of Seek operations by user queries for which this index could have been used.
  4. Number of Scan operations by user queries for which this index could have been used.
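
For reference, here is the ranking expression used in the script below. It is not an official SQL Server metric, just an arbitrary measure combining the 4 factors (runnable on its own if you want to eyeball the raw numbers):

SELECT CONVERT (decimal (28,1),
           migs.avg_total_user_cost                -- factor 1: current average query cost
         * migs.avg_user_impact                    -- factor 2: expected % drop in that cost
         * (migs.user_seeks + migs.user_scans))    -- factors 3 & 4: how often the index would have been used
       AS [Improvement_Measure]
FROM sys.dm_db_missing_index_group_stats AS migs;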

 

Now that you understand what missing indexes are, the factors to consider and the logic behind the recommendations, go ahead and run the query below on your databases. The output provides the “CREATE INDEX” T-SQL statement that you can use to create each index, if you see fit after considering all the factors mentioned so far. This saves you the time of writing the T-SQL yourself. The CREATE INDEX statement includes the index key columns in order and suggests included columns where applicable.

 



Script – Find Missing Index Recommendations in SQL Server

USE [msdb]  /*Replace with your Database Name */
GO
SELECT TOP 50 
    GETDATE() AS [RunTime],
    DB_NAME(mid.database_id) AS [DBNAME], 
    OBJECT_NAME(mid.[object_id]) AS [ObjectName], mid.[object_id] AS [ObjectID],
    CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)) AS [Improvement_Measure],
    'CREATE INDEX missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle) 
    + ' ON ' + mid.statement 
    + ' (' + ISNULL (mid.equality_columns,'') 
    + CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END + ISNULL (mid.inequality_columns, '')
    + ')' 
    + ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS [CREATE_INDEX_Statement],
    migs.user_seeks, migs.user_scans, migs.last_user_seek, migs.last_user_scan, migs.avg_total_user_cost, migs.avg_user_impact, migs.avg_system_impact,
    mig.index_group_handle, mid.index_handle
FROM sys.dm_db_missing_index_groups mig
    INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
    INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)) > 10  -- only keep recommendations above an arbitrary improvement threshold
    AND mid.database_id = DB_ID()
ORDER BY [Improvement_Measure] DESC
GO

 

Script Result
Here is a sample output from my test system showing all missing indexes recommended on the MSDB database.

[Image: MissingIndexes]

Test, Test, Test!

Did I mention that you must test these index recommendations on your Test system? If you know your application workload/query, you must test to gauge the impact of having this index.

Keep in mind, your testing must not be limited to running just a single batch/query, as 95% of the time this will yield a positive result. You must include other sample workloads along with your specific query. Why? Because indexes are structures on the table, not specific to just your query. Other queries can use them too, and SQL Server has to keep the index up to date when data changes happen on the table. Always test any index changes with as close to a real-world, production-like workload as possible. You don’t want any surprises on Production now, do you?  🙂

Well SQL folks, this concludes my indexing series of posts for DBAs. Leave your comments down below if you find this useful and would like more material along these lines.

 

-Sudarshan (TheSQLDude)


Posted in DB Maintenance, Indexing, Performance, T-SQL | 1 Comment »

Seek & Evaluate Unused Indexes in your SQL database

Posted by Sudarshan Narasimhan on June 16, 2016


This post is Part 3 of my series of posts on Index Maintenance for a DBA. So here goes.

As a regular part of your DBA work, it is very important to keep evaluating the need for indexes in your database environment. Any DBA worth his salt will always look to “maintain” indexes by keeping fragmentation low, and most of you probably have maintenance tasks/scripts to take care of this. An often overlooked aspect of index maintenance is to also consider whether some indexes are overkill, a.k.a. unnecessary, and potentially drop them if they are not really utilized. This is especially important when dealing with 3rd party vendor databases, where the DBA does not have full control over what comes in and what goes out in terms of index changes.

I like to think of the database as my “house” and take care of it the same way one would care for their own home. You always consider who/what comes into your house, and the same care needs to be applied to the databases you maintain. As gatekeepers to the database, even if you don’t have complete control over what comes in, it is good to be aware and have the stats handy so that you can work with your application teams or vendors to keep the house tidy.

If you link this data back to my previous posts Part 1 – Seek & Destroy Duplicate Indexes in your SQL Database and Part 2 – Seek & Understand Top Indexes by Size in your SQL database , you will be on your way to hopefully free up some space and realize space cost and performance gains.

Here are two handy scripts that I use to:

1) List the indexes in the database that are completely unused.

This script lists those indexes that not even a single Read operation has used. Keep in mind, write operations still need to update these indexes as data is updated/inserted, which adds to the overall time of Write operations. When no one is using an index for Reads, it is pure overhead.

2) List the indexes in the database that have more Writes than Reads.

The maintenance cost of these indexes (from insert, update and delete operations on the underlying table) outweighs the benefit they provide to Read operations. These indexes need to be investigated to see if they are still required; a cost-benefit analysis needs to be performed.
Note: I’m not advocating that you remove these indexes, just that you carefully consider the benefits vs. the cost before making any decision.



Script #1 – TOTALLY UNUSED INDEXES

USE [DBNAME] /* Replace with your Database Name */
GO
--TOTALLY UN-USED INDEXES
SELECT DB_NAME(s.database_id) as [DB Name], OBJECT_NAME(s.[object_id]) AS [Table Name], i.name AS [Index Name], i.index_id,
    i.is_disabled, i.is_hypothetical, i.has_filter, i.fill_factor, i.is_unique,
    s.user_updates AS [Total Writes],
    (s.user_seeks + s.user_scans + s.user_lookups) AS [Total Reads],
    s.user_updates - (s.user_seeks + s.user_scans + s.user_lookups) AS [Difference],
    (partstats.used_page_count / 128.0) AS [IndexSizeinMB]
FROM sys.dm_db_index_usage_stats AS s WITH (NOLOCK)
INNER JOIN sys.indexes AS i WITH (NOLOCK)
    ON s.[object_id] = i.[object_id]
    AND s.index_id = i.index_id
    AND s.database_id = DB_ID()
INNER JOIN sys.dm_db_partition_stats AS partstats
    ON i.object_id = partstats.object_id AND i.index_id = partstats.index_id
WHERE OBJECTPROPERTY(s.[object_id],'IsUserTable') = 1
    AND user_updates > (user_seeks + user_scans + user_lookups)
    AND (s.user_lookups=0 AND s.user_scans=0 AND s.user_seeks=0)
    AND i.index_id > 1   -- exclude heaps (index_id 0) and clustered indexes (index_id 1)
ORDER BY [Difference] DESC, [Total Writes] DESC, [Total Reads] ASC OPTION (RECOMPILE);
GO

Script #1 Result
Here is a sample output from my test system showing all unused indexes on the MSDB database.

[Image: UnusedIndexes]

Script #2 – INDEXES WITH MORE WRITES THAN READS

USE [DBNAME] /* Replace with your Database Name */
GO
--INDEXES WITH WRITES > READS
SELECT DB_NAME(s.database_id) as [DB Name], OBJECT_NAME(s.[object_id]) AS [Table Name], i.name AS [Index Name], i.index_id,
    i.is_disabled, i.is_hypothetical, i.has_filter, i.fill_factor, i.is_unique,
    s.user_updates AS [Total Writes],
    (s.user_seeks + s.user_scans + s.user_lookups) AS [Total Reads],
    s.user_updates - (s.user_seeks + s.user_scans + s.user_lookups) AS [Difference],
    (partstats.used_page_count / 128.0) AS [IndexSizeinMB]
FROM sys.dm_db_index_usage_stats AS s WITH (NOLOCK)
INNER JOIN sys.indexes AS i WITH (NOLOCK)
    ON s.[object_id] = i.[object_id]
    AND s.index_id = i.index_id
    AND s.database_id = DB_ID()
INNER JOIN sys.dm_db_partition_stats AS partstats
    ON i.object_id = partstats.object_id AND i.index_id = partstats.index_id
WHERE OBJECTPROPERTY(s.[object_id],'IsUserTable') = 1
    AND (s.user_lookups<>0 OR s.user_scans<>0 OR s.user_seeks<>0)
    AND s.user_updates > (s.user_seeks + s.user_scans + s.user_lookups)
    AND i.index_id > 1   -- exclude heaps (index_id 0) and clustered indexes (index_id 1)
ORDER BY [Difference] DESC, [Total Writes] DESC, [Total Reads] ASC OPTION (RECOMPILE);
GO

Script #2 Result
Here is a sample output from my test system showing all indexes having more Writes compared to Reads on the MSDB database.

[Image: WritesMoreThanReadsIndexes]

Well, that’s all folks.

-Sudarshan (TheSQLDude)

Posted in DB Maintenance, Indexing, T-SQL | 1 Comment »

Checklist for SQL Server Database Audit & Security Hardening

Posted by Sudarshan Narasimhan on June 8, 2016


[Image: TheAuditIsComing]
I’ve been involved in a lot of IT security audits, many of them due to legislative requirements. A few good examples of these are PCI, SOX, HIPAA and GLBA audits. Each of these US legislative acts has a set of defined security standards & configurations that organizations have to adhere to. Audits for these systems happen both internally and externally to ensure the company is compliant with these standards and that any remediations are acted upon.

Being a DBA, you’re usually involved in these IT audits. The scope of these audits should be very well defined per your internal IT security policies. As a database administrator, you are responsible for the secure management of the company’s data, vis-à-vis compliance data. The database systems in scope are usually (but not limited to) ones that contain the following data:

  1. Payment Card Data or database systems used for Payment processing (Online payment web systems, retail POS etc)
  2. Financial Data (like Bank, Insurance, Stocks etc)
  3. Health Information (like medical, patient records etc)
  4. Personally Identifiable Information (PII) data (like client/member data including DOB, Tax Number, SSN etc)

While each of these compliance regulations is very well documented (especially PCI DSS), you must follow certain best practices or standards for any DBMS, irrespective of the specific requirements. In today’s world, data leaks are becoming increasingly prevalent, and within the DB world it is essential to adhere to strict & enforceable data security practices.

All that said, coming to SQL Server security good practices, I’ve listed what I feel are the bare-minimum security checks that need to be performed. You can add to this, but in my view these are essential and non-negotiable settings that you must check as a DBA. I’ll probably share scripts & code to monitor these settings centrally in an automated way in another post; in the meantime, a few quick-check sketches are included under some of the sections below. This post lists these settings so that you are aware of the various things to consider when looking at SQL Server security hardening & audits.

 

CHECKLIST: Security Audit of a SQL Server Database Instance

 

Instance Level:-
1. Server Role Membership
2. Server Permissions e.g. CONTROL SERVER, ALTER ANY DATABASE
3. Public Role Permissions (harden this as much as possible on user databases, leaving aside system databases)
4. BUILTIN Admins present as a login?
5. Default Port changed?
6. Orphaned AD Users & Groups must be removed
7. [NT AUTHORITY\ANONYMOUS LOGON] must be removed.
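
As mentioned above, here is a minimal quick-check sketch for instance-level items 1, 2 and 4. It is a starting point, not a full audit script:

-- 1 & 2. Server role membership and elevated server-level permissions
SELECT r.name AS [ServerRole], m.name AS [Member]
FROM sys.server_role_members AS srm
INNER JOIN sys.server_principals AS r ON srm.role_principal_id = r.principal_id
INNER JOIN sys.server_principals AS m ON srm.member_principal_id = m.principal_id
ORDER BY r.name, m.name;

SELECT p.name AS [Grantee], sp.permission_name, sp.state_desc
FROM sys.server_permissions AS sp
INNER JOIN sys.server_principals AS p ON sp.grantee_principal_id = p.principal_id
WHERE sp.permission_name IN ('CONTROL SERVER', 'ALTER ANY DATABASE')
ORDER BY p.name;

-- 4. Is BUILTIN\Administrators still present as a login?
SELECT name, type_desc, is_disabled
FROM sys.server_principals
WHERE name = 'BUILTIN\Administrators';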

Database Level:-
1. Database Role Membership (Elevated)
2. Database Role Membership (Standard)
3. Database Permissions e.g. DELETE, ALTER, SCHEMA
4. Guest User Disabled?
5. DBO User ownership (you can read my previous post on DBO ownership HERE (point #2) to understand why this is important)
6. No Orphaned Users should exist
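
A minimal sketch for database-level items 1/2, 4 and 6, to be run in the user database being audited. Note the orphaned-user check may also flag contained users or users without login, so review the results before acting:

-- 1 & 2. Database role membership
SELECT r.name AS [DatabaseRole], m.name AS [Member]
FROM sys.database_role_members AS drm
INNER JOIN sys.database_principals AS r ON drm.role_principal_id = r.principal_id
INNER JOIN sys.database_principals AS m ON drm.member_principal_id = m.principal_id
ORDER BY r.name, m.name;

-- 4. Guest user: CONNECT should normally be revoked in user databases
SELECT dp.name AS [User], p.permission_name, p.state_desc
FROM sys.database_principals AS dp
LEFT JOIN sys.database_permissions AS p
    ON p.grantee_principal_id = dp.principal_id AND p.permission_name = 'CONNECT'
WHERE dp.name = 'guest';

-- 6. Users whose SID has no matching server login (potential orphans)
SELECT dp.name AS [OrphanedUser], dp.type_desc
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp ON dp.sid = sp.sid
WHERE sp.sid IS NULL
  AND dp.type IN ('S','U','G')      -- SQL users, Windows users, Windows groups
  AND dp.principal_id > 4;          -- skip dbo, guest, INFORMATION_SCHEMA, sys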

Windows OS Level:-
1. Local Administrators Group Members
2. Local Security Policy Privileges (follow this MSDN doc to grant the SQL service accounts only those privileges required to run SQL services)
3. Guest User enabled?
4. Windows Firewall configured to allow connections via a specific Port Only (I hate systems set up to allow all incoming connections to sqlservr.exe. Duh!)
5. SQL Browser Service Disabled?

Configuration:-
1. XP_CMDSHELL Disabled?
2. Password Complexity & Lockout for SQL Logins (SQL logins inherit the account & password policies of the server on which the instance is running. Work with your IT admins to implement a stronger policy using GPO if required. Otherwise make sure CHECK_POLICY setting is turned ON for all SQL logins [including SA])
3. SA password rotated frequently (if you can disable the SA account, even better)
4. Cross database ownership chaining Disabled?
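
A minimal sketch for configuration items 1 through 4, based on sys.configurations, sys.sql_logins and LOGINPROPERTY:

-- 1 & 4. xp_cmdshell and cross db ownership chaining should normally both be 0
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('xp_cmdshell', 'cross db ownership chaining');

-- 2. SQL logins that do not enforce the Windows password policy
SELECT name, is_policy_checked, is_expiration_checked, is_disabled
FROM sys.sql_logins
WHERE is_policy_checked = 0;

-- 3. When was the SA password last changed, and is the account disabled?
SELECT name, is_disabled,
       LOGINPROPERTY(name, 'PasswordLastSetTime') AS [PasswordLastSetTime]
FROM sys.sql_logins
WHERE name = 'sa';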

Patch Management:-
1. Are all the security bulletin patches for known vulnerabilities applied on your SQL Server instance? (Refer to https://technet.microsoft.com/en-us/security/bulletins.aspx and filter on your specific version & patch level to get the list of known security patches available. A quick way to grab your current build is sketched below.)
2. Are all known Windows security patches applied? (Depending on your company, this might be handled by another team, but it’s still a DB server, so better be prepared if someone else isn’t doing their job properly 😉 )
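
A minimal sketch to capture the current build and patch level, which you can then compare against the published bulletins for your version:

SELECT SERVERPROPERTY('ProductVersion') AS [Build],          -- e.g. 12.0.xxxx for SQL Server 2014
       SERVERPROPERTY('ProductLevel')   AS [ServicePack],
       SERVERPROPERTY('Edition')        AS [Edition];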

Compliance:-
1. Access Control mechanisms in place for Sensitive/PII Data?
2. Access Control mechanisms in place for Regulatory Data?
3. SQL Server Audit enabled to track permissions/role/users changes to compliance data?

Penetration Tests:-
1. SA Login checks. Is anyone trying to log in by brute force using the SA account?
2. Main App login. Is anyone trying to log in by brute force using the application login (especially if it’s a SQL login)? A quick error-log check for both is sketched below.
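
A minimal sketch for both checks, scanning the current SQL error log for failed login attempts. xp_readerrorlog is undocumented but widely used; this assumes failed-login auditing is enabled (the default), and ‘AppLogin’ is a placeholder name:

-- 1. Failed logins against the SA account
EXEC master.dbo.xp_readerrorlog 0, 1, N'Login failed', N'sa';

-- 2. Failed logins against your main application login (replace the placeholder name)
EXEC master.dbo.xp_readerrorlog 0, 1, N'Login failed', N'AppLogin';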

 

I was almost about to add managed service accounts (MSA/GMSA) to this list, but considering SQL 2016 just released a week ago and most of you are probably running SQL 2012/2014 with either Clustering/AlwaysON, I’m not going to make it a must-do yet. (Damn you Microsoft, you promised GMSA support for SQL 2014 but didn’t follow through 😦 )

That’s all for now. I’ll continue this in another post with some sample code to help you through all these items.

Until then, stay safe and may the force be with you.

-TheSQLDude (Sudarshan)

Posted in Auditing, General Thoughts, Security | Leave a Comment »

Seek & Understand Top Indexes by Size in your SQL database

Posted by Sudarshan Narasimhan on June 3, 2016


This post is Part 2 of my series of posts on Index Maintenance for a DBA. So here goes.

Often, when you are performing capacity planning or upgrades, or working with a 3rd party vendor on database schema upgrades, you need to look into the existing indexes in the database. One of the key things you need to consider is disk space. Aside from the actual “data” in the tables, indexes also occupy space, depending on the type of index. Here is a handy script that I use to quickly understand how many indexes are present for each table and the size of each index, in any given SQL database.

If you link this data back to my post “Seek & Destroy Duplicate Indexes in your SQL Database“, you will be on your way to hopefully free up some space and realize space cost savings.

This query only reports on user objects (object_id > 99) and filters out HEAP tables. There are some exceptions to the user-objects filter (like Service Broker tables, CDC tables etc.) which will still be counted as “user objects” even though they support system functionality.

USE [msdb]  /*Replace with your Database Name */
GO
SELECT 
    DB_NAME() DatabaseName,
    OBJECT_SCHEMA_NAME(i.OBJECT_ID) SchemaName,
    OBJECT_NAME(i.OBJECT_ID ) TableName,
    i.name IndexName,
    i.type_desc as [IndexType],
    i.is_primary_key as [PrimaryKeyIndex],
    i.is_unique as [UniqueIndex],
    i.is_disabled as [Disabled],
    SUM(s.used_page_count) / 128.0 IndexSizeinMB,
    CAST((SUM(s.used_page_count)/128.0)/1024.0 as decimal(10,6))  IndexSizeinGB
FROM sys.indexes AS i
INNER JOIN sys.dm_db_partition_stats AS S
    ON i.OBJECT_ID = S.OBJECT_ID AND I.index_id = S.index_id
WHERE i.object_id > 99 AND i.type_desc <> 'HEAP'   -- user objects only, exclude heaps
GROUP BY i.OBJECT_ID, i.name, i.type_desc, i.is_primary_key, i.is_unique, i.is_disabled   -- aggregate page counts across partitions per index
ORDER BY SUM(s.used_page_count) DESC
GO

 

Script Result
Here is a sample output from my test system showing all the indexes on MSDB database.

[Image: Index Size]

Stay tuned for more index related scripts in this series.

-Sudarshan (TheSQLDude)

Posted in DB Maintenance, Indexing, T-SQL | 1 Comment »