Somehow I ended up working on Data Collector issues quite a bit over the last six months or so. It's certainly a useful feature, but not an easy one to work with and definitely not an easy one for you DBAs to troubleshoot. I'm listing some of the issues I faced and their solutions. If you come up with or face any new issues in the Data Collector, please drop a mail to sudarn.
1. Data Collector Upload Job Timeouts
The Data Collector jobs were getting timeout errors on the data upload job ONLY WHILE THE PURGE JOB WAS RUNNING. You notice that once the purge job completes, the data upload job starts succeeding again, but until then it simply keeps failing. Here is what you would see in the job history for these upload jobs.
02/08/2011 05:25:00,collection_set_3_upload,Error,0,SERVERXYZ\INSTANCEXYZ,collection_set_3_upload,(Job outcome),,The job failed.
The Job was invoked by Schedule 2 (CollectorSchedule_Every_5min). The last step to run was step 2 (collection_set_3_upload_upload).,01:00:01,0,0,,,,0
02/08/2011 05:25:00,collection_set_3_upload,Error,2,SERVERXYZ\INSTANCEXYZ,collection_set_3_upload,collection_set_3_upload_upload,,Executed as user: STARWARS\Yoda
The thread "ExecMasterPackage" has timed out 3600 seconds after being signaled to stop. Process Exit Code 259. The step failed.,01:00:01,0,0,,,,0
Since we know the uploads were failing only while the purge job was running, we have a simple solution:
Schedule the purge and upload jobs to run on different schedules. You can use the SSMS UI to define a new schedule for the collection set; just make sure it doesn't overlap with the purge job's schedule.
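Before picking a new schedule, it helps to see which schedules are currently in play. A minimal sketch follows; the purge job name pattern mdw_purge_data% is an assumption based on the default naming convention:

```sql
-- Sketch: list each collection set's upload schedule, plus the purge job's
-- schedule, so you can spot overlaps. Assumes the default purge job naming
-- (mdw_purge_data_[<MDW database name>]).
USE msdb;

SELECT cs.name AS collection_set,
       s.name  AS upload_schedule,
       s.active_start_time
FROM dbo.syscollector_collection_sets AS cs
JOIN dbo.sysschedules_localserver_view AS s
    ON s.schedule_uid = cs.schedule_uid;

SELECT j.name AS purge_job,
       s.name AS purge_schedule,
       s.active_start_time
FROM dbo.sysjobs AS j
JOIN dbo.sysjobschedules AS js ON js.job_id = j.job_id
JOIN dbo.sysschedules    AS s  ON s.schedule_id = js.schedule_id
WHERE j.name LIKE 'mdw_purge_data%';
```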
2. Data Collector Upload Job Deadlocks intermittently
The collection set upload job runs into deadlocks every now and then (aka intermittently). This is again related to the purge jobs. Why?
There have been multiple reports of this issue on Connect and the MSDN forums, and I've had the "pleasure" of talking to customers about it. Here are a couple:
Deadlock in MDW Upload Purge Logs Job Step
Management Data Warehouse Data Collector upload job deadlocks
Here is a sample output of a failed Upload job that reported the deadlock.
Log Job History (collection_set_3_upload)
Step ID 1
Job Name collection_set_3_upload
Step Name collection_set_3_upload_purge_logs
Executed as user: STARWARS\Yoda. Transaction (Process ID 457) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction. [SQLSTATE 40001] (Error 1205). The step failed.
Stagger the two collector upload jobs, e.g. collection_set_2_upload (Server Activity) and collection_set_3_upload (Query Statistics), to run a couple of minutes apart. Now, here is a catch! Don't change the schedule of the SQL Server Agent job directly; you need to change the schedule using the collector "pick schedule" option in SSMS.
To do this, right-click the collection set (not the SQL Agent job), select Properties and then Uploads in the left-hand pane. Click "New" to create a new schedule. If you do it this way, you can create two new schedules that are independent of each other.
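If you prefer scripting over the SSMS UI, the same staggering can be sketched in T-SQL with the collector procedures. The schedule name below is a hypothetical example (not a default), and stopping/starting the set around the change is a precaution:

```sql
-- Sketch: give the Server Activity upload its own schedule so it no longer
-- fires at the same minute as Query Statistics. The schedule name is a
-- hypothetical example.
USE msdb;

DECLARE @schedule_id int;

-- Create a new Agent schedule: daily, every 15 minutes
EXEC dbo.sp_add_schedule
     @schedule_name        = N'CollectorSchedule_ServerActivity',
     @freq_type            = 4,   -- daily
     @freq_interval        = 1,
     @freq_subday_type     = 4,   -- unit: minutes
     @freq_subday_interval = 15,
     @schedule_id          = @schedule_id OUTPUT;

-- Stop the collection set, point it at the new schedule, start it again
EXEC dbo.sp_syscollector_stop_collection_set  @collection_set_id = 2;  -- Server Activity on this instance
EXEC dbo.sp_syscollector_update_collection_set
     @collection_set_id = 2,
     @schedule_name     = N'CollectorSchedule_ServerActivity';
EXEC dbo.sp_syscollector_start_collection_set @collection_set_id = 2;
```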
3. Unable to change/define Schedules for MDW Collection Sets
I ran into another quirky issue when attempting to define a new schedule for the collection sets, i.e. I was not able to define a new schedule and kept getting an error.
These are the collection sets that are present by default,
- Disk Usage
- Server Activity
- Query Statistics
- Utility Information
Of these, Utility Information is disabled and does not have a schedule defined by default. The reason for the above error was that the "Query Statistics" collection set mentioned above did not have a valid schedule UID stored. You can confirm this by running the following queries.
1. In the context of the msdb database, run the following query and note down the schedule_uid value for the Query Statistics collection set:
select * from dbo.syscollector_collection_sets where collection_set_id=3
Example output: A575FFD0-98A0-4D0E-B43C-B63482FB5B00
2. Again in the msdb context, run the following:
SELECT schedule_id FROM sysschedules_localserver_view WHERE schedule_uid = 'XYZ'
-- XYZ is the value obtained from step #1. You will see that NO row is returned for step #2.
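The two steps above can also be collapsed into a single check. A sketch that flags any collection set whose stored schedule_uid has no matching schedule:

```sql
-- Sketch: find collection sets whose schedule_uid does not resolve to any row
-- in sysschedules_localserver_view (i.e., an orphaned schedule reference).
USE msdb;

SELECT cs.collection_set_id,
       cs.name,
       cs.schedule_uid
FROM dbo.syscollector_collection_sets AS cs
LEFT JOIN dbo.sysschedules_localserver_view AS s
    ON s.schedule_uid = cs.schedule_uid
WHERE cs.schedule_uid IS NOT NULL
  AND s.schedule_uid IS NULL;
```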
3. As I mentioned, Utility Information has no schedule by default, so we can use it to get out of this situation. I took the Utility Information collection set and defined a new schedule for it to run every 10 minutes.
4. Next, I queried the schedule_uid for it in dbo.syscollector_collection_sets and used this schedule_uid to map to the collection set that was failing.
declare @schedule_uid uniqueidentifier
select @schedule_uid = schedule_uid from dbo.syscollector_collection_sets where collection_set_id = 5 -- whichever is the Utility Information collection set ID
exec dbo.sp_syscollector_update_collection_set @collection_set_id = 3, @schedule_uid = @schedule_uid -- whichever is the Query Statistics collection set ID (change according to the one failing at your end)
5. This fixed the issue with Query Statistics, and I was able to change/define a schedule for it. But since we had created a schedule for Utility Information, and you don't want that to run, I tried to disable it by setting it to "On Demand". This failed. Oops!
6. So I enabled the Utility Information collection set, and only then did it create a valid job_id for it, but I got another error when trying to remove the schedule.
7. I stopped the collection set and then deleted the job manually. To get things back to the old state, I updated the metadata like this:
UPDATE dbo.syscollector_collection_sets_internal
SET [collection_job_id] = NULL, [upload_job_id] = NULL
WHERE collection_set_id = 7 -- whichever is the Utility Information collection set ID
8. Now you can stop the Utility Information collection set and also change the other collection sets' schedules to fix issue #2 mentioned above.
4. Data Collector Purge Job (Clean-up job) takes a long time to complete
This is actually the root cause of issues #1 and #2 listed above. The purge procedure is complicated and is responsible for cleaning old entries out of the MDW tables. This work is done by the core.sp_purge_data stored procedure. As a troubleshooting step, I captured the execution plan of the procedure and noticed a missing index recommendation in the XML Showplan.
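If you want to capture the same plan yourself, a minimal sketch follows; the parameter values are examples only (core.sp_purge_data takes optional retention and instance-name parameters):

```sql
-- Sketch: capture the actual execution plan of the purge procedure.
-- Parameter values are examples; adjust retention to your own policy.
USE [MDW];  -- your management data warehouse database

SET STATISTICS XML ON;
EXEC core.sp_purge_data
     @retention_days = 14,
     @instance_name  = @@SERVERNAME;
SET STATISTICS XML OFF;
```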
<MissingIndex Database="[MDW]" Schema="[snapshots]" Table="[query_stats]">
  <ColumnGroup Usage="EQUALITY">
    <Column Name="[sql_handle]" ColumnId="1" />
  </ColumnGroup>
</MissingIndex>
If you were to translate this into a CREATE INDEX statement this is how it would look,
CREATE NONCLUSTERED INDEX [Ix_query_stats_sql_handle]
ON [snapshots].[query_stats] ([sql_handle] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF,
DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
Now, creating this index requires modifying an MDW system table, and of course that isn't supported! The same point is made in this blog post by the SQL Server Development Team as well. Don't do it! Patience, I will explain.
Here are some facts:
1. Purge job can get slow on large MDW databases (40+ GB).
2. The DELETE TOP statement on snapshots.notable_query_plan is where most of the execution time is spent.
As I mentioned earlier, don't modify system stored procedure code unless guided by Microsoft Support. Luckily, a fix for the slow purge procedure has been released in SQL Server 2008 R2 Service Pack 1, which can be downloaded here.
This fix updates the purge procedure's T-SQL code; the purge logic has been broken down and rewritten in an optimized way, and the runtime will come down drastically once you update to SP1. The new procedures doing the purge are called [core].[sp_purge_orphaned_notable_query_plan] and [core].[sp_purge_orphaned_notable_query_text].
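One way to confirm the rewritten purge procedures are in place after applying SP1 is to look for them in the MDW database (the database name [MDW] is an assumption; substitute your own):

```sql
-- Sketch: check that the SP1 purge helper procedures exist in the MDW database.
USE [MDW];  -- replace with your management data warehouse database name

SELECT SCHEMA_NAME(schema_id) AS [schema], name
FROM sys.procedures
WHERE name IN (N'sp_purge_orphaned_notable_query_plan',
               N'sp_purge_orphaned_notable_query_text');
```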
Hang on, it’s not over yet!
AFTER you apply SP1, you will need to modify the stored procedure sp_purge_orphaned_notable_query_text as shown below. This change is required because, even after Service Pack 1, the delete statement in sp_purge_orphaned_notable_query_text incorrectly references the snapshots.notable_query_plan table.
-- Deleting TOP N orphaned rows in query plan table by joining info from temp table variable
-- This is done to speed up delete query.
DELETE TOP (@delete_batch_size) snapshots.notable_query_plan
FROM snapshots.notable_query_plan AS qp, #tmp_notable_query_plan AS tmp
WHERE tmp.[sql_handle] = qp.[sql_handle]
Change this to the following once you apply SP1:
-- Deleting TOP N orphaned rows in query text table by joining info from temp table
-- This is done to speed up delete query.
DELETE TOP (@delete_batch_size) snapshots.notable_query_text
FROM snapshots.notable_query_text AS qt, #tmp_notable_query_text AS tmp
WHERE tmp.[sql_handle] = qt.[sql_handle]
Hopefully, this code change will be included in a future cumulative update post-SP1, so that you don't have to change the code manually. The same applies to SQL Server 2008, where I am hopeful these changes will be included in a future service pack. The fix mentioned above is at present only valid for SQL Server 2008 R2 (as of August 2, 2011, when I wrote this). With these, the slow purge issues should be put to bed, once and for all!
UPDATE (August 3rd, 2011)
After working with our KB team, we have published an official KB article that talks about this issue. For all those running into slow purge issues, please follow the resolution given in this KB article,
FIX: Data Collector job takes a long time to clear data from a MDW database in SQL Server 2008 R2
Other Useful Links
FIX: The Management Data Warehouse database grows very large after you enable the Data Collector feature in SQL Server 2008