Q I had to recover a SQL Server 2005 database from tape. After the restore, most of the permissions users had to the database were lost. Can you tell me what I did wrong during the restore? The data was fine, but the permissions were a mess.
A Most likely you didn't restore the master database to the same point in time, so the SIDs of the logins in master no longer match the SIDs of the users in the restored database — the classic "orphaned users" problem. You need to sync them up. The sidebar "Where to Get Help When Moving a Database" lists the most useful resources for getting to the bottom of the problems you experience when restoring or moving a database.
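As a minimal sketch of the re-sync, assuming a hypothetical database and user name: sp_change_users_login can report and repair orphaned users, and on SQL Server 2005 SP2 and later ALTER USER ... WITH LOGIN does the same job.

```sql
USE YourDatabase;  -- hypothetical restored database

-- List users whose SIDs no longer match any login in master.
EXEC sp_change_users_login 'Report';

-- Re-map one orphaned user to the existing login of the same name.
EXEC sp_change_users_login 'Auto_Fix', 'SomeUser';

-- Equivalent on SQL Server 2005 SP2 and later:
ALTER USER SomeUser WITH LOGIN = SomeUser;
```

Run the report first; only users it lists need remapping.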
Q I developed an application in which I use a stored procedure that shreds XML data into relational tables using OpenXML in SQL Server 2005 SP1. The XML document is at most 5KB (the average is 2.5KB). The stored procedure is called many times in parallel (up to 50 times).
I'm facing serious lock contention problems and I think they might be due to OpenXML. What do you think?
A While OpenXML may be faster for shredding or converting data on a single thread, the xml data type's nodes() method normally scales better, especially under parallel load. However, if you're going to use OpenXML at all, you should follow these guidelines to improve your overall OpenXML performance.
Instead of calling OpenXML five times with the same row pattern (as you noted you were doing in your solution), extract all the data for that row pattern into a temp table once and then run your selects against the temp table. Release the document's memory with sp_xml_removedocument as early as possible. Also, avoid wildcards such as * and // whenever possible; providing an explicit path results in a better-performing query.
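The pattern above can be sketched as follows, using a hypothetical Orders document (the element names, attributes, and temp table are illustrative, not from the original question):

```sql
DECLARE @xml nvarchar(max), @hdoc int;
SET @xml = N'<Orders><Order id="1" amount="9.99"/><Order id="2" amount="4.50"/></Orders>';

EXEC sp_xml_preparedocument @hdoc OUTPUT, @xml;

-- Shred once, with a single explicit row pattern (no // wildcards),
-- into a temp table.
SELECT id, amount
INTO   #Orders
FROM   OPENXML(@hdoc, '/Orders/Order', 1)
       WITH (id int '@id', amount money '@amount');

-- Release the parsed document's memory as early as possible.
EXEC sp_xml_removedocument @hdoc;

-- All subsequent queries read from the temp table instead of
-- calling OPENXML again with the same row pattern.
SELECT id, amount FROM #Orders WHERE amount > 5.00;

DROP TABLE #Orders;
```

Calling sp_xml_removedocument immediately after the single shred, rather than after the last query, is what frees the in-memory DOM while the rest of the procedure runs.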
Q DBCC SHRINKFILE is running really slowly on my server. Would I get better performance on a multi-processor machine? What can I do to improve the situation?
A DBCC SHRINKFILE is a single-threaded operation, which means it does not take advantage of multiple CPUs. It moves pages from the back end of the file to the front end of the file, one page at a time. And shrink, as it is often called, does not perform defragmentation; as a matter of fact, in a lot of situations, shrink increases logical fragmentation.
Shrink moves pages fastest when the data lives in clustered indexes. If you have heaps, and those heaps have many non-clustered indexes, every page move forces the row pointers (RIDs) in each non-clustered index to be updated, so shrink will be noticeably slower than in a clustered index situation.
Note also that moving pages for large object (LOB) data is slow, because the in-row data has to be read to find the root of the LOB data.
If most of the content of an index or table resides at the end of the file, you can rebuild the indexes to move them toward the front of the file. Rebuilding indexes takes advantage of multiple CPUs and can use less log space under the BULK_LOGGED recovery model. When you then run shrink, it will finish faster.
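A sketch of that sequence, with hypothetical database, table, and file names:

```sql
-- Index rebuilds are minimally logged under BULK_LOGGED.
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

-- The rebuild allocates fresh pages for the indexes and can
-- run on multiple CPUs, unlike the single-threaded shrink.
ALTER INDEX ALL ON dbo.BigTable REBUILD;

ALTER DATABASE MyDB SET RECOVERY FULL;

-- The shrink now has far fewer pages to move one at a time.
DBCC SHRINKFILE (MyDB_Data, 10240);  -- target size in MB
```

If the database normally runs in FULL recovery, take a log backup after switching back, since the BULK_LOGGED interval affects point-in-time restore.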
Thanks to the following Microsoft IT pros for answering this month's questions: Chad Boyd, Cindy Gross, John Hadden, Saleem Hakani, Stephen Jiang, Mahesh Nayak, Paul Randal, and Wayne Yu.
© 2008 Microsoft Corporation and CMP Media, LLC. All rights reserved; reproduction in part or in whole without permission is prohibited.