The SQL Dude!

On-Disk & In-Memory Pages that teach you the SQL way of life!

Data Collector (MDW) slow purge issue also affects SQL Server 2008

Posted by Sudarshan Narasimhan on June 5, 2012


I had posted on this topic before in my previous blog. SQL Server 2008 is also affected by the slow purge procedure core.sp_purge_data, as well as deadlocks in the purge procedure. This issue was fixed in SQL Server 2008 R2 SP1; for SQL Server 2008, the fix to the purge procedure is included in Service Pack 3. Those of you using MDW to collect performance data should apply SP3 as soon as possible. This will greatly reduce the purge procedure's run time and your collection jobs will run faster.

KB 2584903 has been updated to include SQL 2008 as an affected product. I’ve posted this on the Connect bug as well here –> https://connect.microsoft.com/SQLServer/feedback/details/657148/management-data-warehouse-mdw-purge-job-proc-performance

Just applying SP1 for R2 or SP3 for SQL 2008 will not be enough to get the MDW purge procedures updated. Once the service pack has been successfully installed, you will have to run through the Configure Management Data Warehouse Wizard from SSMS to get the purge procedures updated with the new T-SQL code.

[Screenshot: Configure Management Data Warehouse wizard in SSMS]



Auditing T-SQL Script to find out who made changes to data in a table

Posted by Sudarshan Narasimhan on June 1, 2012


Having worked with multiple customers and applications where the data in a table has been changed and we needed to find out when/where/who made the changes, I used the T-SQL code below to set up a trigger on the table and track the changes into an auditing table. There are obviously other ways to do this, like SQL Profiler, XEvents etc., but I find T-SQL simpler and lower overhead. So, sharing the script for others to use.

-- Note: This trigger tracks Updates and Deletes happening on a table. 
-- Please delete this trigger once the source has been identified and corrective actions have been taken.

-- 1. Creating the audit table to store information on Update/Delete
CREATE TABLE AuditTable
(
AuditID [int] IDENTITY(1,1) NOT NULL,
Timestamp datetime not null CONSTRAINT AuditTable_Timestamp DEFAULT (getdate()),
OperationType char(1),
OperationDate datetime DEFAULT (GetDate()), 
PrimaryKeyValue varchar(1000), 
OldColValue varchar(200), 
NewColValue varchar(200), 
UserName varchar(128),
AppName varchar(128),
ClientName varchar(128)
)
go

--2. Creating  the audit trigger
-- Replace PrimaryKeyValue with the PK Column Name
-- Replace NewColValue with the column name in the IF BLOCK
-- Replace OldColValue with the column name in the final SELECT statement
-- Replace TBLNAME with the name of your table which you want to track the changes for.

Create trigger TBLNAME_Audit on TBLNAME for update, delete
AS
declare @OperationType char(1),
@OperationDate datetime,
@NewColValue varchar(200),
@OldColValue varchar(200),
@UserName varchar(128),
@AppName varchar(128),
@ClientName varchar(128)

select @UserName = system_user
select @OperationDate = CURRENT_TIMESTAMP
select @ClientName = HOST_NAME()
select @AppName = APP_NAME()

if exists (select * from deleted)
      if exists (select * from inserted)
      begin
            select @OperationType = 'U'
            select @NewColValue = NewColValue from inserted
      end
      else
      begin
            select @OperationType = 'D'
            select @NewColValue = null
      end
      
Insert AuditTable (OperationType, OperationDate, PrimaryKeyValue, OldColValue, NewColValue, UserName, AppName, ClientName)
select @OperationType, @OperationDate, PrimaryKeyValue, OldColValue, @NewColValue, @UserName, @AppName, @ClientName
from deleted
go

--3. Query the audit table once the values in the base table has changed
select * from AuditTable
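
Once the trigger is in place, you can slice the audit rows as needed. For example (the application name here is a placeholder):

-- Example: deletes performed through a given application in the last 24 hours
SELECT * FROM AuditTable
WHERE OperationType = 'D'
  AND AppName = 'MyApp'                        -- placeholder application name
  AND OperationDate > DATEADD(day, -1, GETDATE())
ORDER BY OperationDate DESC;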


Monitoring tempdb space usage and scripts for finding queries which are using excessive tempdb space

Posted by Sudarshan Narasimhan on May 15, 2012


Many times during the life of a DBA, you might notice the tempdb database growing excessively, even though no changes have been made recently. It’s often the case that, as data volumes grow, the application’s T-SQL queries don’t scale and end up doing excessive sorting/hashing operations, which consume space in the tempdb database. Here are some T-SQL scripts that you can use to monitor who/what is consuming space in tempdb, and plan accordingly.

Before we get into identifying queries that use tempdb, it is very important to understand which activities in SQL Server (both internal & user activities) use the tempdb database. Broadly you can classify these into 3 categories:-

  1. Internal Objects
  2. Version Stores
  3. User Objects

From a feature perspective, here are the features in SQL Server that use space from tempdb.

  1. Query
  2. Triggers
  3. Snapshot isolation and read committed snapshot (RCSI)
  4. MARS
  5. Online index creation
  6. Temporary tables, table variables, and table-valued functions
  7. DBCC CHECKDB (and other DBCC CHECK commands)
  8. LOB parameters
  9. Cursors
  10. Service Broker and event notification
  11. XML and LOB variables
  12. Query notifications
  13. Database mail
  14. Index creation
  15. User-defined functions

From a query performance standpoint, here are some operators that use tempdb space.

1. Sort Operator: The sort operator needs tempdb space to sort the full rowset of incoming rows. This is usually seen when the query uses ORDER BY, and also for DISTINCT with ORDER BY.

        [Execution plan snippet: Sort operator]

2. Hash Match Operator: Depending on the size of the rowset, the hash table built by this operator can spill into tempdb.

      [Execution plan snippet: Hash Match operator]

3. Spool Operator: This operator is used to save the intermediate set of rows for re-use and uses the tempdb database to save the query result set.

         [Execution plan snippet: Spool operator]

Tempdb out of space error

Error: 1105, Severity 17, State 2
Could not allocate space for object dbo.TBL1  in database ‘tempdb’ because the ‘PRIMARY’ filegroup is full.


Identify which type of tempdb objects are consuming space

The following query helps you understand whether user objects, the version store, or internal objects are using the space in tempdb. Based on its output, you can focus on the relevant section below.

SELECT
SUM (user_object_reserved_page_count)*8 as user_obj_kb,
SUM (internal_object_reserved_page_count)*8 as internal_obj_kb,
SUM (version_store_reserved_page_count)*8  as version_store_kb,
SUM (unallocated_extent_page_count)*8 as freespace_kb,
SUM (mixed_extent_page_count)*8 as mixedextent_kb
FROM sys.dm_db_file_space_usage

If user_obj_kb is the highest consumer, then you know that objects are being created by user queries, like local or global temp tables or table variables. Also don’t forget to check if there are any permanent tables created in tempdb. Very rare, but I’ve seen this happen.

If version_store_kb is the highest consumer, then it means that the version store is growing faster than the cleanup. Most likely there are long-running transactions or open transactions (in a sleeping state) which are preventing the cleanup, and hence tempdb space is not released back.



Query that identifies the currently active T-SQL query, its text and the application that is consuming a lot of tempdb space

SELECT es.host_name, es.login_name, es.program_name,
       st.dbid AS QueryExecContextDBID, DB_NAME(st.dbid) AS QueryExecContextDBNAME, st.objectid AS ModuleObjectId,
       SUBSTRING(st.text, er.statement_start_offset/2 + 1,
           (CASE WHEN er.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), st.text)) * 2
                 ELSE er.statement_end_offset END - er.statement_start_offset)/2) AS Query_Text,
       tsu.session_id, tsu.request_id, tsu.exec_context_id,
       (tsu.user_objects_alloc_page_count - tsu.user_objects_dealloc_page_count) AS OutStanding_user_objects_page_counts,
       (tsu.internal_objects_alloc_page_count - tsu.internal_objects_dealloc_page_count) AS OutStanding_internal_objects_page_counts,
       er.start_time, er.command, er.open_transaction_count, er.percent_complete, er.estimated_completion_time,
       er.cpu_time, er.total_elapsed_time, er.reads, er.writes, er.logical_reads, er.granted_query_memory
FROM sys.dm_db_task_space_usage tsu
INNER JOIN sys.dm_exec_requests er ON tsu.session_id = er.session_id AND tsu.request_id = er.request_id
INNER JOIN sys.dm_exec_sessions es ON tsu.session_id = es.session_id
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) st
WHERE (tsu.internal_objects_alloc_page_count + tsu.user_objects_alloc_page_count) > 0
ORDER BY (tsu.user_objects_alloc_page_count - tsu.user_objects_dealloc_page_count) + (tsu.internal_objects_alloc_page_count - tsu.internal_objects_dealloc_page_count) DESC



Tempdb and the Version Store

The version store (SQL 2005 onwards) is a collection of objects that are used when Snapshot Isolation, Read-Committed Snapshot Isolation (RCSI), online index rebuilds etc. are in use in a database.

The version store contains committed row versions, which is how a SELECT operation does not get blocked when another UPDATE/DELETE is operating on the same row: the SELECT reads the row from the version store instead of the actual base table. When you enable this, the row version has to be stored somewhere, and tempdb happens to be the place. A row is maintained in the version store while there are transactions operating on the row in question. When the transaction is committed, the row is cleaned up from the version store tables.

You can check the version store using the DMV sys.dm_tran_version_store
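
For a quick size check, you can aggregate the row-version lengths from this DMV; a minimal sketch (note that the DMV materializes every version, so this can be expensive on a busy system):

-- Approximate number of row versions and space used by the version store (KB)
SELECT COUNT(*) AS version_rows,
       SUM(record_length_first_part_in_bytes + record_length_second_part_in_bytes) / 1024 AS version_store_kb
FROM sys.dm_tran_version_store;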

At times, when there are long running transactions or orphaned transactions, you might notice tempdb growth due to the version store.

You can use the following query to find the oldest transactions that are active and using row versioning.

SELECT top 5 a.session_id, a.transaction_id, a.transaction_sequence_num, a.elapsed_time_seconds,
b.program_name, b.open_tran, b.status
FROM sys.dm_tran_active_snapshot_database_transactions a
join sys.sysprocesses b
on a.session_id = b.spid
ORDER BY elapsed_time_seconds DESC

Trace Flag 1118

This trace flag has been available since SQL 2000 SP3 to reduce tempdb contention by forcing uniform extent allocations as opposed to mixed extent allocations. This trace flag should only be used if you are seeing contention (in wait stats) on the PFS/GAM pages, like 2:1:1 etc. More internal details on this trace flag are available in Paul Randal’s blog post here.
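
To enable the flag, you can typically either add -T1118 as a SQL Server startup parameter or turn it on globally with DBCC TRACEON; a sketch:

-- Enable trace flag 1118 globally (lasts until the next service restart;
-- use the -T1118 startup parameter to make it permanent)
DBCC TRACEON (1118, -1);
DBCC TRACESTATUS (1118);   -- verify the flag is active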

Enabling the trace flag alone doesn’t help; you also need to create multiple tempdb data files. The old recommendation was to have as many tempdb files as logical processors, so if you have 4 CPUs you would create 4 tempdb data files. Now, what if you have 16 or 32 processors, do you still need to create that many tempdb files?

The answer is NO, you don’t have to. The above recommendation has been stated in many KB articles like http://support.microsoft.com/default.aspx?scid=kb;EN-US;328551

If the number of logical processors on your server is greater than or equal to 8, then use 8 data files for tempdb. If the number of logical processors is less than 8, then use as many data files as your processor count.

You can use the following query against any of the SQL Servers you manage to find out if any change is required to the tempdb data files to reduce contention and improve general performance.

Declare @tempdbfilecount as int;
select @tempdbfilecount = (select count(*) from sys.master_files where database_id=2 and type=0);
WITH Processor_CTE ([cpu_count], [hyperthread_ratio])
AS
(
      SELECT  cpu_count, hyperthread_ratio
      FROM sys.dm_os_sys_info sysinfo
)
select Processor_CTE.cpu_count as [# of Logical Processors], @tempdbfilecount as [Current_Tempdb_DataFileCount], 
(case 
      when (cpu_count<8 and @tempdbfilecount=cpu_count)  then 'No' 
      when (cpu_count<8 and @tempdbfilecount<>cpu_count and @tempdbfilecount<cpu_count) then 'Yes' 
      when (cpu_count<8 and @tempdbfilecount<>cpu_count and @tempdbfilecount>cpu_count) then 'No'
      when (cpu_count>=8 and @tempdbfilecount=cpu_count)  then 'No (Depends on continued Contention)' 
      when (cpu_count>=8 and @tempdbfilecount<>cpu_count and @tempdbfilecount<cpu_count) then 'Yes'
      when (cpu_count>=8 and @tempdbfilecount<>cpu_count and @tempdbfilecount>cpu_count) then 'No (Depends on continued Contention)'
end) AS [TempDB_DataFileCount_ChangeRequired]
from Processor_CTE;


Here is a sample output. As you can see, I have 8 processors and only 1 tempdb file, so I need to add 7 more files.

[Sample output: 8 logical processors, 1 tempdb data file, change required = Yes]


One last point before I wrap up this post. Once your tempdb database or log file is full, you have these options:-

1. Either you have to rollback any transactions consuming tempdb space or kill the transactions (not a good idea).

2. Create additional tempdb files on other drives which have free space, while you dig around to find the culprit that is growing tempdb (see the sketch after this list).

3. Restart your SQL Server service.
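
For option 2, here is a minimal sketch of adding a tempdb data file; the logical name, path and sizes are placeholders to adjust for your environment:

-- Assumes E:\SQLData exists on a drive with free space
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'E:\SQLData\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);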

Have fun working with tempdb. Here are some good references

Working with tempdb – http://technet.microsoft.com/hi-in/library/cc966545(en-us).aspx

Storage Engine Blog – http://blogs.msdn.com/b/sqlserverstorageengine/archive/2009/01/12/tempdb-monitoring-and-troubleshooting-out-of-space.aspx


www.TheSQLDude.com is now officially released. Rush to your nearest web browser, to catch the action!

Posted by Sudarshan Narasimhan on May 12, 2012


Folks,

TheSQLDude.com is now live and all the older blog posts have been migrated to the new domain name. Existing/Older links will be re-directed to the new site name. As always, I leave you with another of Dilbert’s very best…

[Dilbert comic strip]


IO Cost in an execution plan – what does it actually mean?

Posted by Sudarshan Narasimhan on May 11, 2012


I/O is probably one of the slowest of the resources. The information below gives a quick meaning of what these operators mean when you look at them in an execution plan. This post specifically concentrates on the “Estimated I/O Cost” value in an execution plan. I’ll post in detail some other day on how to decipher a SQL Server query execution plan.

Thanks to Niraj Mehta for putting together this content.

Table Scan

  • The total number of data pages in the table

Clustered Index Scan

  • The number of levels in the index plus the number of data pages to scan (data pages = #rows / #rows per page)

Non-Clustered Index Seek on a Heap (Bookmark Lookup)

  • The number of levels in the index plus the number of leaf pages to read for qualifying rows plus the number of qualifying rows (1 I/O for each row on the heap)

Non-Clustered Index (I1) Seek on a Clustered index (I2) (Bookmark Lookup)

  • The number of levels in the I1 index plus the number of leaf pages to read for qualifying rows plus the number of qualifying rows times the cost of searching for a clustered index (I2) key

Covering Non-Clustered index

  • The number of levels in the index plus the number of leaf index pages to read for qualifying rows (#qualifying rows / # rows per leaf page).
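
To make the arithmetic concrete with made-up numbers: a covering seek on a 3-level index whose leaf pages hold 400 rows each, qualifying 10,000 rows, costs roughly 3 + (10,000 / 400) = 28 page reads.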


Shedding light on SQL Server Statistics and Auto Update Statistics

Posted by Sudarshan Narasimhan on May 4, 2012


SQL Server collects statistical information about indexes and column data in the database. These statistics play a very important role in coming up with an efficient query plan. Statistical information is used by the Query Optimizer when it decides to choose Seek vs. Scan, or to use Index A vs. Index B, etc. So it’s important as a DBA to pay careful attention to statistics and to make sure they are updated and maintained well.

Just like Chandler Bing’s job is “Statistical analysis and data reconfiguration”, it’s a good DBA’s job to take care of maintaining the statistics :).


SQL Server 2005 statistics features allow you to:

  • Implicitly create and update statistics with the default sampling rate (in the SELECT, INSERT, DELETE, and UPDATE commands, use of a column in a query condition such as a WHERE or JOIN clause causes statistics to be created or updated if necessary when automatic statistics creation and update is enabled)
  • Manually create and update statistics with any desired sampling rate, and drop statistics (CREATE STATISTICS, UPDATE STATISTICS, DROP STATISTICS, CREATE INDEX, DROP INDEX)
  • Manually create statistics in bulk for all columns of all tables in a database (sp_createstats)
  • Manually update all existing statistics in the database (sp_updatestats)
  • List statistics objects that exist for a table or database (sp_helpstats, catalog views sys.stats, sys.stats_columns)
  • Display descriptive information about statistics objects (DBCC SHOW_STATISTICS)
  • Enable and disable automatic creation and update of statistics database-wide or for a specific table or statistics object (ALTER DATABASE options AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS, sp_autostats, and the NORECOMPUTE option on CREATE STATISTICS and UPDATE STATISTICS)
  • Enable and disable asynchronous automatic update of statistics (ALTER DATABASE option AUTO_UPDATE_STATISTICS_ASYNC)

Behaviour change from SQL 2005 onwards

In SQL Server 2000, statistics update was determined by the number of row changes. From SQL Server 2005 onwards, changes are tracked at the column level, and auto update of statistics can be avoided on columns which have not changed enough to warrant a statistics update. SQL Server 2005 and onwards determines whether to update statistics based on the colmodctr (column modification counter) values.

A statistics object is considered out of date in the following cases:

1. The table size has gone from 0 to > 0 rows.
2. The number of rows in the table when the statistics were gathered was 500 or less, and the colmodctr of the leading column of the statistics object has changed by more than 500 since then.
3. The table had more than 500 rows when the statistics were gathered, and the colmodctr of the leading column of the statistics object has changed by more than 500 + 20% of the number of rows in the table when the statistics were gathered (for example, a table with 1,000,000 rows needs 500 + 200,000 = 200,500 modifications to the leading column before the statistics are considered out of date).
4. If the statistics object is defined on a temporary table, it is out of date as above, except that there is an additional threshold: recomputation after the insertion of 6 rows.


Q: How is the COLMODCTR maintained by SQL Server?
Ans: The colmodctr values that SQL Server keeps track of are continually modified as the data in the table changes. Depending on the number of INSERT/UPDATE/DELETE/TRUNCATE/BULK operations occurring on the table, this counter value keeps changing. The table below describes the effect of each operation on the colmodctr value.

[Table: effect of each operation on the colmodctr value – see reference below]
Reference: http://technet.microsoft.com/en-us/library/cc293623.aspx


Controlling STATISTICS Behaviour

You have options like AUTO_UPDATE_STATISTICS, AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS_ASYNC, which are all database-level options, so you need to configure them per database. You can find out whether your user databases have these options set by using sp_helpdb and looking at the status column, or by using a query like the one below.

select name as [DB_NAME], is_auto_create_stats_on, is_auto_update_stats_on, is_auto_update_stats_async_on 
from sys.databases
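
To change these options, you use ALTER DATABASE; a sketch for a hypothetical database MyDB:

-- MyDB is a placeholder database name
ALTER DATABASE MyDB SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE MyDB SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE MyDB SET AUTO_UPDATE_STATISTICS_ASYNC ON;  -- optional: update stats in the background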

How to find out which indexes or statistics need to be updated?

You can use the following query on any SQL 2005+ instance to find out the % of rows modified and based on this decide if any indexes need to be rebuilt or statistics on the indexes need to be updated.

select
schemas.name as table_schema,
tbls.name as Object_name,
i.id as Object_id,
i.name as index_name,
i.indid as index_id,
i.rowmodctr as modifiedRows,
(select max(rowcnt) from sysindexes i2 where i.id = i2.id and i2.indid < 2) as rowcnt,
convert(DECIMAL(18,8), convert(DECIMAL(18,8),i.rowmodctr) / convert(DECIMAL(18,8),(select max(rowcnt) from sysindexes i2 where i.id = i2.id and i2.indid < 2))) as ModifiedPercent,
stats_date( i.id, i.indid ) as lastStatsUpdateTime
from sysindexes i
inner join sysobjects tbls on i.id = tbls.id
inner join sysusers schemas on tbls.uid = schemas.uid
inner join information_schema.tables tl
on tbls.name = tl.table_name
and schemas.name = tl.table_schema
and tl.table_type='BASE TABLE'
where 0 < i.indid and i.indid < 255
and table_schema <> 'sys'
and i.rowmodctr <> 0
and i.status not in (8388704,8388672)
and (select max(rowcnt) from sysindexes i2 where i.id = i2.id and i2.indid < 2) > 0
order by modifiedRows desc

Updating statistics on all the tables in any given database

I often get asked how to update statistics on all the tables in a database. You can use a script like the one below to achieve this.

Disclaimer: Do not run this unless you know its implications on a production server. Updating statistics on all tables will use CPU resources and, depending on the size of the tables, will take its own time.

use <YOUR_DB_NAME>
go
-- Build a work list of all base tables (schema included, so non-dbo tables are handled too)
select identity(int,1,1) as rownum, table_schema, table_name into table_count from information_schema.tables where table_type='BASE TABLE'
declare @count int, @stmt varchar(512), @maxcount int, @tblname sysname, @schemaname sysname
set @count=1
select @maxcount=count(*) from table_count
while @count < @maxcount+1
begin 
      select @schemaname=table_schema, @tblname=table_name from table_count where rownum=@count
      -- QUOTENAME guards against table names containing spaces or special characters
      set @stmt = 'UPDATE STATISTICS ' + quotename(@schemaname) + '.' + quotename(@tblname) + ' WITH FULLSCAN'
      PRINT ('Updating statistics for table: ' + @schemaname + '.' + @tblname)
      EXEC(@stmt)
      PRINT ('Finished updating statistics for table: ' + @schemaname + '.' + @tblname)
      print ''
      set @count=@count+1
End
drop table table_count

Some tips on Statistics

1. Table variables do not have statistics at all.

Table variables are meant for operations on a small number of rows, a few thousand rows at max. If you are working with more rows than that, think about temporary tables (#tbl) instead, because unlike table variables, temp tables can have indexes created on them, which means they can have statistics (see the sketch below).
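
A minimal sketch of the difference (all names here are made up):

-- A table variable carries no statistics, even with a PRIMARY KEY declared
DECLARE @t TABLE (id int PRIMARY KEY, val varchar(50));
-- A temp table supports indexes, and indexes bring statistics with them
CREATE TABLE #t (id int PRIMARY KEY, val varchar(50));
CREATE INDEX IX_t_val ON #t (val);   -- also creates statistics on #t.val
DROP TABLE #t;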

2. Multi-statement Table-Valued Functions (TVFs) also do not have any statistics

So if you have complex query logic implemented in a function in SQL Server, think again! A function like this has no statistical information present, so the SQL optimizer must guess the size of the results returned. The reason is that a multi-statement TVF returns a table variable as its output, and as noted above, table variables do not have statistics.

3. You can find out from the execution plan (via the SET STATISTICS PROFILE statement) if any missing statistics would help a particular query

When you enable STATISTICS PROFILE ON and execute any query/batch, it displays the execution plan. In this output, look for the column called “Warnings”. During the course of compiling the plan, if the SQL Server optimizer felt that statistics on some column A would have helped the query, it displays this warning in the execution plan as “NO STATS”. If you see any such warning, consider creating column statistics or indexes on the particular object in that row.

4. Avoid creating indexes on very frequently updated columns, as the statistics will also have to keep up with the amount of data modification.

5. Viewing Statistics

You can use DBCC SHOW_STATISTICS (‘tablename’, ‘index name’) to view the statistics on any given index or column statistics object, along with the histogram. The system catalog view sys.stats stores information on each statistics object available in a particular database.

Any statistics object having a name prefixed with _WA_Sys_ is an auto-created statistic, which means SQL Server itself created it. User-created statistics will have a given name or take the index name, e.g. PK_TBL1.
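
A quick sketch to list the statistics in the current database and flag the auto-created ones:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS stats_name,
       s.auto_created,                                -- 1 for the _WA_Sys_* statistics
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats s
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY table_name, stats_name;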


Setting up SSL encryption for SQL Server using certificates – Issues, tips & tricks

Posted by Sudarshan Narasimhan on April 21, 2012


I had posted quite a lengthy post on setting up SQL Server for SSL encryption back in October. This post has frequently ranked high on page views; thanks all for the overwhelming response. Since then, I’ve been pinged so many times on this that I really wanted to post another blog covering what I’ve learnt in the meantime, with answers to some common problems faced when setting up SSL encryption for SQL Server. So, here goes…


Basics

If a certificate is provided to SQL Server and for some reason it is not valid or SQL cannot find the certificate in the store, then it generates a self-signed certificate to encrypt communication between the server and the client. This method ensures that the login packet is always encrypted. Whether the data communication is also encrypted depends on both the server and the client. In order for the client to force encryption, the certificate used by the server should be signed by a trusted certificate authority.

When loading a certificate on the SQL Server machine, you have to keep in mind what the SQL startup account is.

1. If SQL Server is running under a specific domain account, then you need to be logged in to the machine as that same domain account, and when opening MMC, choose the “My user account” option when loading the Certificates snap-in, before doing the import. This makes sure that the certificate goes into the Personal store of the user that is also the SQL service account.

[Screenshot: choosing the account type when adding the Certificates snap-in in MMC]

2. If the SQL Server is running under any machine account like LocalSystem, NetworkService or LocalService, then you need to choose the option “Computer Account” in the above screenshot and then import the certificate. This will ensure that the certificate is placed under the Personal store of the machine account.

Needless to say for doing both of the above, the logged-in account must be an administrator on the machine.


Issues

1. SQL Server Configuration Manager (SSCM) – How does it pull up the certificates, and why doesn’t it show my valid certificate in the list?

Usually, SQL DBAs don’t create certificates. This is usually done by someone else in your organization, who will either install Certificate Services on a server and make it a trusted authority, or buy a certificate from one of the certificate providers like VeriSign, GoDaddy.com etc. So when this certificate is not shown in SQL Server Configuration Manager, you are stuck. As I mentioned in the earlier blog post, you can manually load the certificate by adding the thumbprint of the certificate into the SQL registry value “Certificate” in the location HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.<INSTANCENAME>\MSSQLServer\SuperSocketNetLib

I always wanted to find out how SSCM builds the certificate list and why some certificates, even if valid, are not shown there. SSCM uses the WMI infrastructure to find the certificates which are valid for use by SQL Server. There are some additional checks done here which are not done by sqlservr.exe when actually loading the certificate on service startup.

You can use the following WMI query to see a list of certificates. This will be the same output you see in the SSCM certificate drop-down. See the example below.

From SSCM drop-down certificate picker

[Screenshot: SSCM certificate drop-down listing two certificates]

  • As you can see above it listed 2 certificates on my machine.

Using the WMI Query

Save the below code as sqlcerts.vbs

strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\Microsoft\SqlServer\ComputerManagement10")
Set colItems = objWMIService.ExecQuery( _
"SELECT * FROM SecurityCertificate",,48)
For Each objItem in colItems
Wscript.Echo "———————————–"
Wscript.Echo "SecurityCertificate instance"
Wscript.Echo "———————————–"
Wscript.Echo "ExpirationDate: " & objItem.ExpirationDate
Wscript.Echo "FriendlyName: " & objItem.FriendlyName
Wscript.Echo "IssuedBy: " & objItem.IssuedBy
Wscript.Echo "IssuedTo: " & objItem.IssuedTo
Wscript.Echo "Name: " & objItem.Name
Wscript.Echo "SHA: " & objItem.SHA
Wscript.Echo "StartDate: " & objItem.StartDate
Wscript.Echo "SystemStore: " & objItem.SystemStore
Next

Run this from command prompt cscript sqlcerts.vbs and you will see the output as shown below. I see the same 2 certificates shown by SSCM. So now we know that the certificates are shown by pulling the information from the WMI class SecurityCertificate.

Note: If you are using SQL 2005, replace ComputerManagement10 with ComputerManagement in the above VBScript.

C:\Users\sudarn\Desktop>cscript sqlcerts.vbs
Microsoft (R) Windows Script Host Version 5.8
Copyright (C) Microsoft Corporation. All rights reserved.

-----------------------------------
SecurityCertificate instance
-----------------------------------
ExpirationDate: 30363701
FriendlyName: SUDARN2 (SQLCert)
IssuedBy: US, VA, Somewhere, MyOrg, MyOU, SUDARN2.org.company.com
IssuedTo: US, VA, Somewhere, MyOrg, MyOU, SUDARN2.org.company.com
Name: SUDARN2 (SQLCert)
SHA: 876753a93eb4310e4415a563219a41dda732c260
StartDate: 30216851
SystemStore: MY
-----------------------------------
SecurityCertificate instance
-----------------------------------
ExpirationDate: 31674726
FriendlyName: SCVMM_CERTIFICATE_KEY_CONTAINERSUDARN2.org.company.com
IssuedBy: SUDARN2.org.company.com
IssuedTo: SUDARN2.org.company.com
Name: SCVMM_CERTIFICATE_KEY_CONTAINERSUDARN2.org.company.com
SHA: 4ca85862a6ed93e329f68283c6d2ad1710e2c23f
StartDate: 30205010
SystemStore: MY

Part 2 of this question is why SSCM doesn’t show your certificate. Here are some common reasons :-

  1. If yours is a clustered instance of SQL Server, then the CN in your certificate will contain the FQDN of the instance’s virtual server name, not the node names. SQL Server Configuration Manager does a direct match between the current machine name and the CN name in the certificate [i.e. only certificates that match the machine name are displayed], which will never match in the case of a clustered instance. So you can ignore this and use the registry method to load the certificate.
  2. The certificate has expired.
  3. The Server Authentication property or the AT_KEYEXCHANGE property is not set.
  4. The Certificate is using Subject Alternate Names (SAN), which is not understood by SSCM.


2. Subject Alternate Names (SAN’s) and whether a certificate having SAN’s can be used by SQL Server.

Subject Alternative Names allow you to specify a list of host names to be protected by a single SSL certificate. This is useful when you want to buy 1 single certificate to secure multiple SQL Servers. A cheaper option, I guess :). But anyway, certificates having SAN names are supported and SQL Server can use them for SSL encryption. If you provide the thumbprint value in the registry, SQL Server will load this certificate; SQL Server doesn’t examine the SAN while choosing a certificate to load. I confess I hadn’t heard about SAN names before, so I tested this by creating a certificate using openssl.exe with 2 alternate names. You can download openssl.exe for testing purposes from HERE.

1. Create a configuration file as follows. I called it cert_config.txt

distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
C = US
ST = VA
L = Somewhere
O = MyOrg
OU = MyOU
CN = SQLSERVER.SUDARN.COM
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = alternatename1
DNS.2 = alternatename2
IP.1 = 10.191.84.22
IP.2 = 10.191.84.23

The alternate names I have used are alternatename1 and alternatename2. Replace this with actual server names as applicable.

2. Once you install the tool, go to the ..\bin\openssl.exe location from command prompt and run the following command to create the certificate.

openssl.exe req -x509 -nodes -days 730 -newkey rsa:2048 -keyout C:\Users\sudarn\Desktop\cert.pem -out C:\Users\sudarn\Desktop\cert.pem -config C:\Users\sudarn\Desktop\cert_config.txt

3. Run the following command to create the PFX file

openssl.exe pkcs12 -export -out C:\Users\sudarn\Desktop\cert.pfx -in C:\Users\sudarn\Desktop\cert.pem -name "SUDARN2 (SQLCert)" -passout pass:mypassword

This will create a file called cert.pfx which can be used to install the certificate or imported using the MMC certificates snap-in.

Now that you have a certificate with 2 alternate names, you can see for yourself that SQL Server loads this certificate fine. Again, you will see that SQL Server Configuration Manager doesn’t show this certificate. In most cases the main CN name will not match the machine name for SAN certificates, and that’s why SSCM doesn’t show you the certificate.

Important: The SAN field in the SSL certificate is not examined by all SQL Server client drivers when they are validating the server certificate. Only fairly recent drivers examine the SAN when validating a certificate, like the following.

– SQL Server Native Client 10.5 (and higher)
– ADO.Net SqlClient 4.5
– JDBC 4.0

So, if the client application is using .Net 4.0, for example, and you try to rely on the SAN, the client application will not accept it. Keep this factor in mind when deciding to use SAN certificates or go with individual certificates for each machine.

3. I’ve set up SSL encryption, but when I connect to my SQL Server, I get this error

Cannot connect to thematrix.CORP.COMPANY.COM
——————————
ADDITIONAL INFORMATION:

A connection was successfully established with the server, but then an error occurred during the pre-login handshake.
(provider: SSL Provider, error: 0 – The certificate’s CN name does not match the passed value.)
(Microsoft SQL Server, Error: -2146762481)

I was able to reproduce this issue quite easily. As you can see in the above error message, the reason the connection didn’t go through was because the certificate’s CN Name did not match the SQL Server name.

As documented in Books Online, there are 5 properties of a certificate that are mandatory for it to be used by SQL Server; they are documented here: http://msdn.microsoft.com/en-us/library/ms189067.aspx

Here is the catch!

If your SQL Server machine is called “THEMATRIX” and you create a certificate called “MyServerName” and you provide the thumbprint of this certificate directly in the SQL server registry, SQL will load this certificate just fine. Surprised? Have a look at my repro.

1. I created a certificate using openssl.exe with the following properties.
CN = MyServerName
extendedKeyUsage = serverAuth
Valid From: 29 March 2012 21:27:39
Valid To:    29 March 2014 21:27:39

2. I loaded this certificate using the SHA1 hash value directly in the registry key "Certificate" in the following location,
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQLServer\SuperSocketNetLib

3. SQL Server successfully loaded this on restart of the service.
2012-04-03 02:52:57.100 Server       The certificate [Cert Hash(sha1) "A1A3DB2BD029B39FF9329E46B730CF8DF4BE2383"] was successfully loaded for encryption.

4. When I connected from SSMS, I get the same error as reported by my customer.
provider: SSL Provider, error: 0 – The certificate’s CN name does not match the passed value.) (Microsoft SQL Server, Error: -2146762481)

So now read the documentation again and it will make sense. What Books Online is telling you is that for a successful SSL encryption and connection, the subject Name in the certificate has to match the FQDN/Server name. In the above scenario, I deleted the certificate “MyServerName” and created a new certificate with the CN as “TheMatrix” and was able to connect successfully from SSMS.

4. Again with a new SSL encryption setup, you are faced with this error when connecting from SSMS

(provider: SSL Provider, error: 0 – The certificate chain was issued by an authority that is not trusted.)

This message tells us that a server-side certificate is used but the client could not validate the server-side certificate. If your certificate is from a trusted authority but the client connection is failing with the above error, do this:

1) Go to SQL Configuration Manager
2) Go to SQL Native Client 10.0 Configuration > Right Click and Properties > Flags tab >
3) Set the property "Trust Server Certificate" to Yes
4) Restart the SQL Service

When the above property is set to Yes, SSL is used to encrypt the channel while bypassing walking the certificate chain to validate trust. If TrustServerCertificate is set to true and Encrypt is set to false, the channel is not encrypted. So you need to have "Force Protocol Encryption"=Yes and "Trust Server Certificate"=Yes.
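
The same behaviour can also be requested from the client side through connection-string keywords; for example, an ADO.NET SqlClient connection string (the server name is a placeholder):

Server=ProdServer1;Database=master;Integrated Security=SSPI;Encrypt=True;TrustServerCertificate=True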

5. The certificate thumbprint value in the registry is getting automatically changed whenever a failover of SQL Server is done

I must confess this was a strange issue to encounter and I saw this happening with my own eyes. Every time the SQL instance was failed over, even though the certificate was present on both nodes, SQL was using a different self-signed certificate when starting up. I initially suspected this to be a cluster registry checkpoint issue, but after making sure that the registry value was correct on both nodes, on the next failover a new thumbprint was still getting loaded, and this thumbprint value was also being placed in the registry.

There is a rule of thumb you can follow: when in doubt, use Process Monitor. So I did :). I collected a Process Monitor trace without any filters and saw this happening.

00:57:03.7155401 wmiprvse.exe 10752 RegQueryValue HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.A1XX02\MSSQLServer\SuperSocketNetLib\Certificate 6.0.6002.18005 SUCCESS Type: REG_SZ, Length: 82, Data: 7697654a3be60f1931ec04b37eae21af98aea1bd

00:57:03.7263934 wmiprvse.exe 10752 RegSetValue HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.A1XX02\MSSQLServer\SuperSocketNetLib\Certificate 6.0.6002.18005 SUCCESS Type: REG_SZ, Length: 82, Data: b8c0cf223b0efb4ca8ede15dd910400488a06631

From the above snippet you can see that wmiprvse.exe (WMI) is setting the value of the certificate to a new value. The thumbprint value is getting changed to a new value from what was there before. On further research, it was identified that this environment had System Center 2012 Configuration Manager installed and these SQL Server nodes were being managed by System Center. My colleague from the System Center team confirmed that there is a component there called “Certificate Manager” that frequently polls all the systems (ones with any roles installed) to check whether the certificate information there matches with what is stored in the System Center database (for consistency reasons). Obviously, since this is a clustered instance of SQL Server, we have edited the registry to provide the certificate value. System Center is not aware of this change, and it thinks that the certificate has been changed and puts in a request to generate a new self-signed certificate and updates the registry with the thumbprint of the new certificate. This was what was happening here. I am told System Center 2012 Service Pack 1 will have a fix for this problem. If you want confirmation, contact System Center MS Support.

6. You have configured SSL encryption for your SQL Server by manually entering the Thumbprint of a certificate, and now the SQL Service fails to start

There are many reasons for this, which you need to find out from the SQL error log. One common problem happens when inputting the thumbprint into the registry: if you Ctrl+C the value from MMC and paste it into the registry, some extra spaces/junk/invisible characters can get copied over. Have a look at the following KB article.

2010-04-16 18:56:31.48 Server      Error: 17182, Severity: 16, State: 1.
2010-04-16 18:56:31.48 Server      TDSSNIClient initialization failed with error 0xd, status code 0x38

SQL Server fails to start with error 17182 "TDSSNIClient initialization failed with error 0xd, status code 0x38" when server is configured to use SSL
http://support.microsoft.com/kb/2023869


7. You don’t see the option to export the Private Key when trying to export the certificate from one machine to another machine

This is definitely outside the realms of SQL Server. When you try to export the certificate, you get to the screen where you have to choose the option to export the private key of the certificate, but the option is greyed out, as shown below.

[Screenshot: Certificate Export Wizard with the private key option greyed out]

The reason the option to export the private key is greyed out is that the certificate was created without the option to export keys. There is an option during certificate creation for Server Authentication to “Mark keys as exportable”, and you should choose this option. If you don’t, you can still export the certificate and install it on another node, but it won’t have the private key, and without the private key SQL Server will not be able to load the certificate and hence cannot start.


Well folks, I’ve covered a lot of topics here. Believe me, if you follow these steps, I’m pretty confident you shouldn’t have any issues before, during or after configuring SQL Server for SSL encryption. If you do happen to run into any issues, please drop a comment on this post and I will respond. Cheers!


How to prevent users from accessing SQL Server from any application or any login except your main application & its login

Posted by Sudarshan Narasimhan on April 7, 2012


I heard of these interesting requirements when working with one of my customers. If I were to break it down, there are 2 types of scenarios where we want to restrict access to SQL Server.

  1. Only the main application, which uses a SQL login, should be able to connect to the SQL Server. None of the Windows logins or any other login should be able to connect to the SQL database. In other words, the SQL database should be accessed only by the application login and no one else.
  2. Access to SQL Server and the database should be only through a custom application and no other application including SQL Server Management Studio, SQLCMD etc. should allow users to access the database.

Solution for Scenario #1

The easiest and most secure way of preventing other logins from accessing SQL Server is through logon triggers. You might wonder why this is an issue when only the required logins are added in SQL Server. Think of a scenario where the database is deployed on client machines, where the user is typically an administrator of his own machine. A perfect example of this is SQL Server Express databases.

You can use logon triggers to prevent Windows logins or other logins from accessing SQL Server. Logon triggers are fired every time a new connection is established to SQL Server. Just like regular triggers, we can perform a ROLLBACK, which rolls back the connection if the login is not your application login.

For this example assume that your application has a login called “MyApplicationUser” and this is the only login that should have access (even the local Administrator must not be able to access SQL Server).

Here is the T-SQL code to implement such a requirement

--Step1: Create Audit Table
USE master
GO
CREATE TABLE dbo.loginAuditTable (
    id INT IDENTITY PRIMARY KEY,
    data XML,
    program_name nvarchar(128)
)
GO

--Step2: Create VIEW To read XML Audit Data in user-friendly format
use master
go
CREATE VIEW dbo.loginAuditView
AS
SELECT id
      ,data.value('(/EVENT_INSTANCE/EventType)[1]', 'sysname') AS EventType
      ,data.value('(/EVENT_INSTANCE/PostTime)[1]', 'datetime') AS PostTime
      ,data.value('(/EVENT_INSTANCE/SPID)[1]', 'int') AS SPID
      ,data.value('(/EVENT_INSTANCE/ServerName)[1]', 'nvarchar(257)') AS ServerName
      ,data.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname') AS LoginName
      ,data.value('(/EVENT_INSTANCE/LoginType)[1]', 'sysname') AS LoginType
      ,data.value('(/EVENT_INSTANCE/ClientHost)[1]', 'sysname') AS ClientHostName
      ,data.value('(/EVENT_INSTANCE/IsPooled)[1]', 'bit') AS IsPooled
      ,program_name
      ,data.value('(/EVENT_INSTANCE/SID)[1]', 'nvarchar(85)') AS SID
FROM master.dbo.loginAuditTable
GO

--Step3: Create Logon Trigger to block all users except SA & MyApplicationUser
IF EXISTS(
    SELECT * FROM master.sys.server_triggers
    WHERE parent_class_desc = 'SERVER' AND name = N'Allow_only_Application_Login_Trigger')
DROP TRIGGER [Allow_only_Application_Login_Trigger] ON ALL SERVER
GO
 
CREATE TRIGGER Allow_only_Application_Login_Trigger
ON ALL SERVER WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN

DECLARE @data XML
SET @data = EVENTDATA()
 
DECLARE @AppName sysname
       ,@LoginName sysname
       ,@LoginType sysname
       ,@LoginDomain sysname
       ,@HostName sysname
SELECT @AppName = [program_name]
FROM sys.dm_exec_sessions
WHERE session_id = @data.value('(/EVENT_INSTANCE/SPID)[1]', 'int')

SELECT @LoginName = @data.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname')
      --,@LoginDomain = @data.value('(/EVENT_INSTANCE/LoginDomain)[1]', 'sysname')
      ,@LoginType = @data.value('(/EVENT_INSTANCE/LoginType)[1]', 'sysname')
      ,@HostName = @data.value('(/EVENT_INSTANCE/ClientHost)[1]', 'sysname')

IF @LoginName not in ('MyApplicationUser','sa') 
    BEGIN
        ROLLBACK; --Disconnect the session
        --Log the exception to our Auditing table
        INSERT INTO master.dbo.loginAuditTable(data, program_name)
        VALUES(@data, @AppName)
    END 
END;


When a user who isn’t in the logon trigger exception list above tries to connect, they will get this error.

TITLE: Connect to Database Engine

——————————

Cannot connect to SERVERNAME.

——————————

ADDITIONAL INFORMATION:

Logon failed for login ‘Domain\TestUser1’ due to trigger execution.

Changed database context to ‘master’.

Changed language setting to us_english.

(Microsoft SQL Server, Error: 17892)

Every time a user who isn’t SA or MyApplicationUser tries to connect to SQL Server and fails with the error given above, you will see the following information in the audit table, including the login name, client machine name, application name etc.

[Audit table output: login name, client host name and application name of blocked connections]


Note:

Be careful when creating logon triggers, as any mistake/bug in your code logic will lock you out and you won’t be able to connect back in. If you find yourself in such a situation, you can use the Dedicated Admin Connection (DAC) to make a connection and then disable/drop the logon trigger. Connections via the DAC don’t fire logon triggers, and there can be at most 1 DAC connection for an entire instance. DAC can be enabled from sp_configure.

EXEC sp_configure 'remote admin connections', 1;
GO
RECONFIGURE;

As a best practice, make sure you have DAC enabled on all your SQL instances, because it is a life-saver in situations like this :). To make a DAC connection, use the -A switch (sqlcmd -A -d master) or connect to admin:ServerName from SSMS.

Solution for Scenario #2

Scenario #2 is a little more interesting and more complicated. Basically, the ask is to prevent all database access except from 1 specific client application. So even tools like SSMS, SQLCMD, Profiler etc. should not be able to connect to SQL Server.

We can still implement a reasonable amount of security using Logon Triggers, but it is not 100% fool-proof. With the same logon trigger code given above, we need to use the program_name from sys.dm_exec_sessions to check if this program is allowed to establish a connection to SQL Server.

--Step3: Create Logon Trigger to block all applications except the allowed list
IF EXISTS(
    SELECT * FROM master.sys.server_triggers
    WHERE parent_class_desc = 'SERVER' AND name = N'Allow_only_Application_Login_Trigger')
DROP TRIGGER [Allow_only_Application_Login_Trigger] ON ALL SERVER
GO
 
CREATE TRIGGER Allow_only_Application_Login_Trigger
ON ALL SERVER WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN

DECLARE @data XML
SET @data = EVENTDATA()
 
DECLARE @AppName sysname
       ,@LoginName sysname
       ,@LoginType sysname
       ,@HostName sysname

SELECT @AppName = [program_name]
FROM sys.dm_exec_sessions
WHERE session_id = @data.value('(/EVENT_INSTANCE/SPID)[1]', 'int')

SELECT @LoginName = @data.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname')
      ,@LoginType = @data.value('(/EVENT_INSTANCE/LoginType)[1]', 'sysname')
      ,@HostName = @data.value('(/EVENT_INSTANCE/ClientHost)[1]', 'sysname')

IF @AppName not in ('MyProductionApplicationName1', 'MyReportingApplicationName2', 'Microsoft SQL Server Management Studio', 'Microsoft SQL Server Management Studio - Query') 
    BEGIN
        ROLLBACK; --Disconnect the session
        
        --Log the exception to our Auditing table
        INSERT INTO master.dbo.loginAuditTable(data, program_name)
        VALUES(@data, @AppName)
    END 
END;


This is the output from the Auditing table to track the applications and users who tried to connect to SQL Server.

[Audit table output: applications and users that attempted to connect]


The catch with this method is that the program_name or application name is not really fool-proof, since it is very much possible for someone to provide the correct application name in the connection string and get connected to SQL Server. There is a property called Application Name which can be set in the connection string. So if the application name is known, anyone can write a program that provides that name in the connection string and gains access to SQL Server.



E.g.

OLEDB Connection String

Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=ProdServer1;Application Name=MyProductionApplicationName1

SQL Native Client Connection String

Integrated Security=SSPI;Persist Security Info=False;User ID="";Initial Catalog=master;Data Source=ProdServer1;Application Name=MyReportingApplicationName2

If we use the [Application Name] property in the connection string, then even if we have logon triggers, it will allow the user to connect to SQL Server, since we are just matching the application name which is specified in the connection string.

A more secure approach you could use for this is called "Application Roles". When connecting from an application, you assume a particular role, and only that role is granted the privileges required in the database. All apps connect via this mechanism, and you don’t give out SQL or NT logins for any unauthorised use. You will have to call sp_setapprole in your application, with a password, once the connection is established. Once this call succeeds, the connection gets the privileges of the application role and loses the privileges of the actual user, which is what we want. So if someone tries to connect to the database from SSMS or SQLCMD, they will access the DB using their own credentials, which won’t have the required permissions on the tables, since only the application role has rights on the tables. This is a more secure & reliable approach, but one that requires application code changes, and as a DBA you will have to create the application role in SQL Server.

The steps to create application roles are given in Books Online.
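
As a minimal sketch (the role name, password and table are placeholders):

-- Create the application role and grant it only the rights the app needs
CREATE APPLICATION ROLE MyAppRole WITH PASSWORD = 'Str0ngP@ssw0rd!';
GRANT SELECT, INSERT, UPDATE ON dbo.Orders TO MyAppRole;   -- dbo.Orders is hypothetical

-- From the application, right after opening the connection:
EXEC sp_setapprole 'MyAppRole', 'Str0ngP@ssw0rd!';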


SQL Server 2012 – Released and available for public download

Posted by Sudarshan Narasimhan on April 3, 2012


SQL Server 2012 codenamed “Denali” is now ready for download and available for public consumption. Please check out the following blog post that talks about the general release.

http://blogs.technet.com/b/dataplatforminsider/archive/2012/04/02/sql-server-2012-is-generally-available.aspx

SQL 2012 specifically targets Big Data and High Availability, and includes a powerful set of feature-rich BI tools like Power View. Some of the new engine features are :-

  1. SQL Server AlwaysOn
  2. Contained Databases
  3. Column Store Indexes
  4. User-Defined Server Roles
  5. Big Data (Hadoop)


Again, Books Online is your best resource to read up on the new release and to understand the new set of features.

You can download the evaluation/trial version of SQL 2012 here,

http://www.microsoft.com/sqlserver/en/us/get-sql-server/try-it.aspx

http://www.microsoft.com/download/en/details.aspx?id=29066


Programmatically find and set the Process Affinity mask for Windows Applications

Posted by Sudarshan Narasimhan on March 30, 2012


Well, here comes my first Windows programming post. I often dabble in Win32 coding and have lots of small tools and programs written. I’m still sifting through my old code to see how many of them are useful for others. This one is quite handy for understanding how the Windows affinity mask can be read for any process, and also how to set the affinity mask for running processes.


Program Details

Screenshot

[Screenshot: ProcessAffinity tool main window]

You can download the visual studio solution called “ProcessAffinity.zip” from SkyDrive below,

https://skydrive.live.com/?cid=2c5b6bdee139d310&sc=documents&uc=2&id=2C5B6BDEE139D310%21135#
