
Programmability

Find here the resources that will help you program SQL Server using data access technologies such as the SQL CLR, T-SQL, and XML.

Overview of SQL Server Programming

Application programming interfaces (APIs) are the mechanisms applications use to access resources on the local computer or over a network. Microsoft SQL Server 2005 supports many classes of APIs that applications can use to access SQL Server resources.

A Guided Tour of SQL Server 2005 Developer Productivity
Take the tour to see how the integration between Visual Studio 2005 and SQL Server 2005 helps developers get results faster when building data-driven applications.

Programmability Features

Programming SQL Server 2005 with the .NET Framework

Microsoft SQL Server 2005 hosts the common language runtime (CLR) component of the .NET Framework for Microsoft Windows. This means you can now write stored procedures, triggers, user-defined types, functions, and aggregates, as well as streaming table-valued functions, in a .NET Framework language, including Microsoft Visual Basic .NET and Microsoft Visual C#.

Data Access and Storage Developer Center
Learn what's new in data technologies, including SQL Server, ADO.NET, MDAC, and more, along with end-to-end guidance on using these technologies to build effective solutions.
Transact-SQL
Transact-SQL has long been the foundation of all SQL Server programmability. SQL Server 2005 adds many language capabilities for developing scalable database applications, including error handling, new recursive query capabilities, and support for new features of the SQL Server Database Engine.
XML in SQL Server 2005
Advances such as the native XML data type and XQuery let you connect internal and external systems seamlessly. SQL Server 2005 supports both relational and XML data natively, so you can store, manage, and analyze data in the format that best fits your requirements.
SQL Programmability and API Team Blog (RSS feed)

SQL Server 2005 - Database Programmability

Interesting issue with Filtered indexes.
Recently, an ISV I work with ran into an interesting problem with filtered indexes. The application performs all SQL INSERT, DELETE, and UPDATE operations through individual stored procedures. To improve performance, they created a filtered index (new in SQL Server 2008) that excludes NULL values, in this case drastically reducing the number of rows in the index. See the documentation here: http://msdn.microsoft.com/en-us/library/ms175049.aspx

This all worked fine until, months later, the 'UPDATE' stored procedure was replaced with a newer version, via a T-SQL script, as part of a routine application upgrade. After the 'new' stored procedure was added, the application was tested and the following was observed:

· the SELECT statements accessing the table continued to use the filtered index
· the 'INSERT' and 'DELETE' stored procedures continued to work
· the modified 'UPDATE' stored procedure, however, returned the following error:

InnerException: System.Data.SqlClient.SqlException: UPDATE failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.

Note that the error message is very explicit and actually points us to the source of the problem. Can you guess what it is? You are correct! The T-SQL script created the new 'UPDATE' stored procedure with SET options that differed from the original ones and that do not conform to the rules required to use filtered indexes. In this case, SET QUOTED_IDENTIFIER had an invalid setting.
The solution was to recreate the stored procedure with the 'correct' settings required for filtered indexes, documented here: http://msdn.microsoft.com/en-us/library/ms188783.aspx

The following simplified T-SQL example shows the problem and the solution. NOTE: the fact that stored procedures were used is important, because they 'inherit' the SET option values in effect when they were created, NOT the values in effect when they are executed.

USE master
GO
CREATE DATABASE FI_Test
GO
USE FI_Test
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [FactSalesQuota](
    [SalesQuotaKey] [int] IDENTITY(1,1) NOT NULL,
    [EmployeeKey] [int] NOT NULL,
    [DateKey] [int] NOT NULL,
    [CalendarYear] [smallint] NOT NULL,
    [CalendarQuarter] [tinyint] NOT NULL,
    [SalesAmountQuota] [money] NOT NULL,
    CONSTRAINT [PK_FactSalesQuota_SalesQuotaKey] PRIMARY KEY CLUSTERED ([SalesQuotaKey] ASC) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX FI_FactSalesQuota
    ON FactSalesQuota(EmployeeKey, CalendarQuarter)
    WHERE CalendarYear = 2009  -- <---- this makes it a filtered index
GO
INSERT FactSalesQuota VALUES (53, 20090101, 2009, 4, 37000.00)
GO
SELECT * FROM FactSalesQuota
GO
CREATE PROCEDURE UPDATE_FactSalesQuota
    @EmployeeKey int, @CalendarYear smallint, @CalendarQuarter tinyint, @SalesAmountQuota money
AS
UPDATE FactSalesQuota
SET SalesAmountQuota = @SalesAmountQuota
WHERE EmployeeKey = @EmployeeKey and CalendarYear = @CalendarYear and CalendarQuarter = @CalendarQuarter
GO
EXECUTE UPDATE_FactSalesQuota 53, 2009, 4, 50000.00
GO
SELECT * FROM FactSalesQuota
GO
--- this was the script to update the stored procedure
USE FI_Test
GO
sp_rename UPDATE_FactSalesQuota, UPDATE_FactSalesQuota_V1
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF  -- <------ HERE is what caused the error
--SET QUOTED_IDENTIFIER ON -- <------ it will work if this is set correctly
GO
CREATE PROCEDURE UPDATE_FactSalesQuota
    @EmployeeKey int, @CalendarYear smallint, @CalendarQuarter tinyint, @SalesAmountQuota money
AS
UPDATE FactSalesQuota
SET SalesAmountQuota = @SalesAmountQuota,
    DateKey = cast((CONVERT(char(8), getdate(), 112)) as int)
WHERE EmployeeKey = @EmployeeKey and CalendarYear = @CalendarYear and CalendarQuarter = @CalendarQuarter
GO
-- this fails
EXECUTE UPDATE_FactSalesQuota 53, 2009, 4, 52000.00
GO
SELECT * FROM FactSalesQuota
GO
-- Now go back, fix the script, and retry it; it will work.

Cross Posted from http://blogs.microsoft.com/mssqlisv
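The create-time capture of SET options can be modeled outside of T-SQL. The Python sketch below is a conceptual analogy only (all names are hypothetical, and this is not how SQL Server is implemented): the session's QUOTED_IDENTIFIER value is snapshotted when the procedure is created and replayed when it is executed, which is why changing the upgrade script's SET statements silently changed the procedure's behavior.

```python
# Conceptual model: a stored procedure "inherits" the SET option values in
# effect when it is CREATEd, not those in effect when it is EXECUTEd.
session_settings = {"QUOTED_IDENTIFIER": True}

procedures = {}  # proc name -> (body, settings snapshotted at CREATE time)

def create_procedure(name, body):
    procedures[name] = (body, dict(session_settings))  # snapshot, not a reference

def execute_procedure(name):
    body, captured = procedures[name]
    if not captured["QUOTED_IDENTIFIER"]:
        # Mirrors: "UPDATE failed because the following SET options have
        # incorrect settings: 'QUOTED_IDENTIFIER'."
        raise RuntimeError("SET options incorrect for use with filtered indexes")
    return body()

create_procedure("UPDATE_FactSalesQuota", lambda: "1 row affected")
session_settings["QUOTED_IDENTIFIER"] = False  # the upgrade script flips the option...
create_procedure("UPDATE_FactSalesQuota_V2", lambda: "1 row affected")

print(execute_procedure("UPDATE_FactSalesQuota"))  # original proc still works
# execute_procedure("UPDATE_FactSalesQuota_V2") raises, like the blog's error
```

Note that flipping the session setting back before executing the V2 procedure would not help in this model, just as running SET QUOTED_IDENTIFIER ON before EXECUTE does not help in SQL Server: only recreating the procedure under the correct settings does.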
Why did the size of my indexes expand when I rebuilt my indexes?
Recently I worked with a partner who was seeing some interesting behavior: upon rebuilding their indexes they noticed that the total space used by all indexes increased significantly. The table has no clustered index but has a total of nine non-clustered indexes. The sequence of events was as follows:

· Step 1: approximately 12 million rows are inserted into an existing table via batch loading.
· Step 2: all the indexes on the table are rebuilt using: "ALTER INDEX MyIndex ON MyTable REBUILD WITH (SORT_IN_TEMPDB = ON, ONLINE = ON)"

The sp_spaceused procedure was run before and after each of the steps above to measure the amount of space used by the table and indexes. Here are the results:

BEFORE STEP 1: sp_spaceused MyTable
Name     Rows        Reserved      Data         Index Size    Unused
-------  ----------  ------------  -----------  ------------  ---------
MyTable  1156563588  324009704 KB  88318384 KB  235511080 KB  180240 KB

AFTER STEP 1, BEFORE STEP 2 (data added, nothing done to indexes): sp_spaceused MyTable
Name     Rows        Reserved      Data         Index Size    Unused
-------  ----------  ------------  -----------  ------------  ---------
MyTable  1169556034  329729960 KB  89645944 KB  240051312 KB  32704 KB

AFTER STEP 2 (all indexes rebuilt): sp_spaceused MyTable
Name     Rows        Reserved      Data         Index Size    Unused
-------  ----------  ------------  -----------  ------------  ----------
MyTable  1169595370  459848840 KB  89649160 KB  363548216 KB  6651464 KB

Notice the index size after adding the rows to the table and then after rebuilding the indexes. Before the rebuild the index size was approximately 240 GB, but after the rebuild it was nearly 365 GB, an increase of roughly 50%. It is also worth noting that there were concurrent inserts against this table while the 12 million rows were added, as well as during the index rebuilds; this is why the indexes were rebuilt online, and it explains the differences in row count between the steps. So why did the index sizes increase so much?
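Before digging into the cause, the magnitude of the growth can be confirmed with a little arithmetic on the sp_spaceused figures above (index sizes in KB, taken directly from the output):

```python
# Index size before and after the online rebuild, from sp_spaceused (in KB).
index_before_kb = 240_051_312
index_after_kb = 363_548_216

growth_pct = (index_after_kb - index_before_kb) / index_before_kb * 100
print(f"index growth: {growth_pct:.1f}%")  # roughly 51%
```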
The answer is related to two facts: 1) RCSI was enabled on the database, and 2) the indexes were rebuilt ONLINE. When RCSI is enabled on a database, an additional 14 bytes are appended to each row as it is inserted, updated, or deleted. This applies to the table as well as to any index modified by the operation. The purpose of this extra space is to maintain the row-version information needed by RCSI. This is described in greater depth in the following blog post: http://blogs.msdn.com/sqlserverstorageengine/archive/2008/03/30/overhead-of-row-versioning.aspx

When an index is rebuilt with ONLINE = ON, SQL Server appends these 14 bytes to every row during the rebuild. When the index is rebuilt with ONLINE = OFF, however, the 14 bytes are not appended but instead removed from any existing rows, unless the row is currently part of an active transaction. The difference in behavior is by design. When RCSI is not enabled, none of the above applies, since there is no need to maintain row-version information.

An existing index can be expected to increase in size after an online rebuild when either of the following is true:

1. An index with existing data was rebuilt offline prior to the online rebuild. In this case the index grows because the 14 bytes removed during the offline rebuild are added back to each row during the online rebuild.
2. The database had existing data before RCSI was enabled. In this case any online rebuild adds the additional 14 bytes to each row.

For this specific scenario, the database had existing data before RCSI was enabled, and these indexes had also been rebuilt offline at some point before the online index rebuilds were performed.

How much can I expect my indexes to grow? The size increase as a percentage of the original index size depends on the size of the index keys.
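Since the overhead is a fixed 14 bytes per row, the growth fraction is simply 14 divided by the original row size. The quick check below uses the key sizes of two of this table's indexes (71 and 22 bytes, including the 8-byte RID, as detailed in the index definitions that follow); note the blog rounds the second figure to roughly 60%.

```python
# RCSI appends 14 bytes of versioning information to each row, so an index
# grows by about 14 / (original row size) after an online rebuild.
RCSI_OVERHEAD_BYTES = 14

def growth_pct(row_bytes):
    """Size increase as a percentage of the original row size."""
    return RCSI_OVERHEAD_BYTES / row_bytes * 100

nc_idx_1 = growth_pct(9 + 4 + 20 + 20 + 10 + 8)  # wide key + 8-byte RID = 71 bytes
nc_idx_2 = growth_pct(4 + 10 + 8)                # narrow key + 8-byte RID = 22 bytes

print(f"NC_Idx_1 grows ~{nc_idx_1:.0f}%, NC_Idx_2 grows ~{nc_idx_2:.0f}%")
```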
The percentage can be much higher when the key columns are relatively small in bytes. The following illustrates the approximate size of the index keys and the increase in size as a percentage of the key size for two of the nine indexes. The row identifier (RID) is added to each index row because this table is a heap (no clustered index); it identifies the FILEID:PAGEID:ROWID of each row.

CREATE UNIQUE NONCLUSTERED INDEX [NC_Idx_1] ON [MyTable] (
    [Column1] ASC,  --[PersonID] [numeric](16, 0) (9 bytes)
    [Column2] ASC,  --[int] (4 bytes)
    [Column3] ASC,  --[varchar](20) (20 bytes max)
    [Column4] ASC,  --[varchar](20) (20 bytes max)
    [Column5] ASC   --[char](10) (10 bytes)
    --9+4+20+20+10+8(RID) = 71 (maximum original key size)
    -- +14 (RCSI versioning information)
    -- = 85
    -- (~20% increase in size; could be more if the varchar columns hold fewer than 20 bytes)
)

CREATE UNIQUE NONCLUSTERED INDEX [NC_Idx_2] ON [MyTable] (
    [Column2] ASC,  --[int] (4 bytes)
    [Column5] ASC   --[char](10) (10 bytes)
    --4+10+8(RID) = 22 (approximate original key size)
    -- +14 (RCSI versioning information)
    -- = 36
    -- (~60% increase in size)
)

As shown above, the additional 14 bytes introduced by the rebuild are a significantly larger fraction of the total row size for NC_Idx_2 than for NC_Idx_1. In this particular case, five of the table's nine indexes had relatively small key sizes (similar to NC_Idx_2), which explains why there was such a large increase as a percentage of the original size. In addition to the sp_spaceused procedure, the DMV sys.dm_index_physical_stats exposes the column avg_record_size_in_bytes, which can be used to measure the average row size within an index before and after rebuilds; avg_record_size_in_bytes includes the 14 bytes added by RCSI, if present. Related to this behavior, there are some other interesting considerations:
1. After an index is rebuilt offline, workloads that perform many updates or deletes may introduce fragmentation to the index. An offline rebuild removes any existing versioning information from the rows, but any subsequent update or delete adds the 14 bytes back to the affected row. When data pages are nearly full, as they are right after an index rebuild, the increase in row size caused by the additional 14 bytes may result in page splits. This can be avoided by explicitly specifying a FILLFACTOR of less than 100 when rebuilding an index, which leaves free space on the data/index pages and reduces the likelihood of splits. This is not a consideration when indexes are rebuilt online, since the row-versioning information already exists on each row.
2. Data compression (in SQL Server 2008) applied to an index or table (either ROW or PAGE compression) with the ONLINE = ON option may yield less space saving than expected when RCSI is enabled, because of the additional 14 bytes. These 14 bytes are initialized with a timestamp and a placeholder for the version record pointer, and this information is not compressed by either ROW or PAGE compression. The stored procedure sp_estimate_data_compression_savings can be used to estimate the expected impact of data compression on the index. This behavior may change in a future release of SQL Server.

Cross Posted from http://blogs.microsoft.com/mssqlisv
SQL Server 2005 / 2008 table partitioning: important things to consider when switching out partitions.
An ISV recently found an anomaly in their implementation of a "drop table partition" function that could lead to unexpected partitions being switched out. Typically, this "drop partition" logic manipulates SQL Server partitions using the following operations:

· switch out the partition that is no longer needed (or is being archived) into an empty target table
· merge/reset the partition function ranges of the partitioned source table (for example, move some of the remaining data into a new partition/filegroup)
· possibly drop the target table

The simple, standard way to switch out a partition is to specify a partition number. The ISV implemented a SELECT statement that used a boundary value of a partition range to return the corresponding partition number, and then used the T-SQL SWITCH command to switch out that partition. This works fine as long as there is only one process doing the switching. But now assume that two processes try this at the same time. Both processes run their SELECTs to get the partition numbers. Then the first one performs the switch-out and the merge of the partition function. Unfortunately, the merge changes the partition numbers, because they are maintained dynamically by SQL Server: a merge or split of partitions triggers a re-enumeration of all or some of the partitions of the table. Afterwards, the partition number the second process obtained before the re-enumeration (triggered by the first process) may no longer be accurate, and a switch-out using the old partition number can switch out the wrong partition. The solution is simple: we recommend using the $PARTITION function for the switch-out, which lets you specify a partition boundary value instead of a partition number. Boundary values are not dynamic, so this issue does not arise.
Repro: The repro script below does the following:

· create a partitioned test table with 5 boundary values: 1960, 1970, 1980, 1990, 2000
· insert 1 row into the 1970 range, 2 rows into the 1980 range, and 5 rows into the 1990 range
· switch out the 1970 and 1980 ranges by specifying the boundary values
· as expected, three ranges remain: 1960, 1990 (with 5 rows), and 2000
· repeat the same test, specifying a partition number instead of the boundary value
· the output after creating the test table shows partition number 2 for boundary 1970 and 3 for boundary 1980
· using "hard-coded" partition numbers for the switch simulates the SELECT mentioned above
· this time, however, the results look different: as before, the boundaries 1960, 1990, and 2000 remain, but the row counts are not at all what we would expect
· instead of 5 rows for boundary 1990, we see only 2!

What happened? The first process merged the partition function. This changed the partition number of boundary 1990 from 4 to 3 and that of boundary value 1980 from 3 to 2. The switch with partition number 3 therefore switched out the rows of boundary value 1990: the 5 rows are gone and the 2 rows of 1980 are kept. The merge still uses the correct boundary value, so the list of boundary values looks fine, but the contents are wrong! Using the $PARTITION function also works when two processes interleave between the switch-out and the partition function merge.
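The race can also be simulated outside SQL Server. In the Python sketch below (a simplified conceptual model of the repro, not SQL Server's metadata handling; all names are hypothetical), partition numbers are just positions in the boundary list, so a merge performed by the first process silently renumbers the partition the second process is about to switch out:

```python
# Boundary values are stable; partition numbers are positions that shift on merge.
boundaries = [1960, 1970, 1980, 1990, 2000]
rows = {1970: 1, 1980: 2, 1990: 5}  # rows per boundary range, as in the repro

def partition_number(boundary):
    # RANGE LEFT: boundary N bounds partition N (1-based after the first range)
    return boundaries.index(boundary) + 1

def switch_out_by_number(partno):
    boundary = boundaries[partno - 1]
    return rows.pop(boundary, 0)  # rows leave with the switched-out partition

def merge(boundary):
    boundaries.remove(boundary)  # triggers renumbering of later partitions

# Process A reads the partition number for 1970; process B reads 1980's number
# *before* A merges.
pa = partition_number(1970)  # 2
pb = partition_number(1980)  # 3
switch_out_by_number(pa)
merge(1970)                  # 1990 is now partition 3, 1980 is partition 2

# B switches using its stale number 3 -- which now maps to the 1990 range.
lost = boundaries[pb - 1]
switch_out_by_number(pb)
merge(1980)
print(f"stale number {pb} switched out the {lost} range")  # 1990, not 1980!
print(rows)  # the 5 rows of 1990 are gone; 1980's 2 rows remain in the table
```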
TSQL Repro:

if exists ( select * from sys.procedures where name = 'dp_reset_test' )
    drop procedure dp_reset_test
if exists ( select * from sys.procedures where name = 'dp_list_partitions' )
    drop procedure dp_list_partitions
if exists ( select * from sys.procedures where name = 'dp_switch_partition_via_boundary' )
    drop procedure dp_switch_partition_via_boundary
if exists ( select * from sys.procedures where name = 'dp_switch_partition_via_partno' )
    drop procedure dp_switch_partition_via_partno
go
set nocount on
go
-- stored procedure to create the partitioned test table
create procedure dp_reset_test
as
begin
    if exists ( select * from sys.objects where name = 'dp_test1' and type = 'U' )
        drop table dp_test1
    if exists ( select * from sys.objects where name = 'dp_test1_clone' and type = 'U' )
        drop table dp_test1_clone
    if exists ( select * from sys.partition_schemes where name = 'ps_year' )
        drop partition scheme ps_year
    if exists ( select * from sys.partition_functions where name = 'pf_year' )
        drop partition function pf_year
    CREATE PARTITION FUNCTION pf_year (int) AS RANGE LEFT FOR VALUES ( 1960, 1970, 1980, 1990, 2000 )
    CREATE PARTITION SCHEME ps_year AS PARTITION pf_year ALL TO ([PRIMARY])
    create table dp_test1 ( col1 int default 99, col2 int, col3 int primary key ) on ps_year(col3)
    create table dp_test1_clone ( col1 int default 99, col2 int, col3 int primary key )
    insert into dp_test1 values ( 1966,1966,1966 )
    insert into dp_test1 values ( 1971,1971,1971 )
    insert into dp_test1 values ( 1972,1972,1972 )
    insert into dp_test1 values ( 1984,1984,1984 )
    insert into dp_test1 values ( 1985,1985,1985 )
    insert into dp_test1 values ( 1986,1986,1986 )
    insert into dp_test1 values ( 1987,1987,1987 )
    insert into dp_test1 values ( 1988,1988,1988 )
    print ''
end
go
-- stored procedure to print partition info
create procedure dp_list_partitions ( @tabname char(20) )
as
begin
    declare @p_number int
    declare @p_rows int
    declare @boundary_value int
    declare p_details cursor for
        select partition_number, rows, convert(int, sprv.value)
        from sys.partitions sp, sys.partition_functions spf, sys.partition_range_values sprv
        where object_id = OBJECT_ID(@tabname)
          and spf.function_id = sprv.function_id
          and sprv.boundary_id = sp.partition_number
          and ( sp.index_id = 1 or sp.index_id = 0 )
          and spf.name = 'pf_year'
        order by partition_number
    open p_details
    FETCH NEXT FROM p_details INTO @p_number, @p_rows, @boundary_value
    print 'part no    # rows     boundary'
    WHILE @@FETCH_STATUS = 0
    begin
        print convert(char(10), @p_number) + ' ' + convert(char(10), @p_rows) + ' ' + convert(char(10), @boundary_value)
        FETCH NEXT FROM p_details INTO @p_number, @p_rows, @boundary_value
    end
    close p_details
    deallocate p_details
    print ''
end
go
-- stored procedure to get rid of a partition by specifying the partition number
create procedure dp_switch_partition_via_partno ( @partno int, @boundary int )
as
begin
    truncate table dp_test1_clone
    alter table dp_test1 switch partition @partno to dp_test1_clone
    alter partition function [pf_year]() merge range (@boundary)
end
go
-- stored procedure to get rid of a partition by specifying the boundary value
create procedure dp_switch_partition_via_boundary ( @boundary int, @merge_flag int )
as
begin
    if ( @merge_flag = 1 )
    begin
        truncate table dp_test1_clone
        alter table dp_test1 switch partition $PARTITION.pf_year(@boundary) to dp_test1_clone
        alter partition function [pf_year]() merge range (@boundary)
    end
    if ( @merge_flag = 2 )
    begin
        truncate table dp_test1_clone
        alter table dp_test1 switch partition $PARTITION.pf_year(@boundary) to dp_test1_clone
    end
    if ( @merge_flag = 3 )
    begin
        alter partition function [pf_year]() merge range (@boundary)
    end
end
go
-- test sample
-- create the partitioned test table
execute dp_reset_test
print 'Test table with 5 boundary values : '
print ''
execute dp_list_partitions 'dp_test1'
print 'Switch partitions with boundary 1970,1980 via boundary value : '
print ''
execute dp_switch_partition_via_boundary 1970, 1
execute dp_switch_partition_via_boundary 1980, 1
execute dp_list_partitions 'dp_test1'
print 'as expected boundary values 1970 and 1980 are gone and 1990 remains with 5 rows'
print ''
print ''
-- reset test table
execute dp_reset_test
print 'Switch partitions with boundary 1970,1980 via partno : '
print ''
execute dp_switch_partition_via_partno 2, 1970
execute dp_switch_partition_via_partno 3, 1980
execute dp_list_partitions 'dp_test1'
print 'boundary values 1970 and 1980 are gone but 1990 remains with 2 rows which is unexpected !'
print ''
print ''
-- reset test table
execute dp_reset_test
print 'Switch partitions with boundary 1970,1980 via boundary value with '
print '"deferred merge of the partition function" : '
print ''
-- switch out only
execute dp_switch_partition_via_boundary 1970, 2
execute dp_switch_partition_via_boundary 1980, 2
-- merge partition function only
execute dp_switch_partition_via_boundary 1970, 3
execute dp_switch_partition_via_boundary 1980, 3
execute dp_list_partitions 'dp_test1'
print 'as expected boundary values 1970 and 1980 are gone and 1990 remains with 5 rows'
print ''

Cross Posted from http://blogs.microsoft.com/mssqlisv
Avoid using JDK Date APIs to handle timezone sensitive date and time
The JDK APIs on the classes "java.util.Date" and "java.sql.Timestamp" (a subclass of Date), including getHours(), getMinutes(), getSeconds(), and getTimestamp(), allow you to retrieve date/time information. However, the JVM (Java Virtual Machine) does not handle timezone-sensitive data properly through these APIs. In fact, these APIs have been deprecated since JDK 1.1 (http://java.sun.com/j2se/1.5.0/docs/api/java/util/Date.html), but I still see ISV developers use them in their Java applications, leading to incorrect results or behavior. What happens is that when date/time data is inserted into the database, it is stored correctly in SQL Server. However, when the date/time is read using the APIs mentioned above, the retrieved value is implicitly converted to local time depending on where the JVM host is located. For example, when a Java application reads "1/8/2009 3:30:00 AM" from a SQL Server database, you get different results depending on the location of the application.

SQL Server (in the Pacific time zone):

create table datetime_tbl (id INTEGER unique not null, dateTime_v DATETIME not null)
insert into datetime_tbl (id, dateTime_v) values (1, '2009-01-08 03:30:00')
go

Java application:

Statement s = connection.createStatement();
query = "select id, dateTime_v from datetime_tbl where id = " + 1;
s.execute(query);
ResultSet rs = s.getResultSet();
rs.next();
String localApptimezone = Calendar.getInstance().getTimeZone().getID();
// -8 is the offset from GMT to read the time as Pacific time.
java.util.TimeZone timeZoneP = new java.util.SimpleTimeZone(-8*3600000, "GMT-8 (Pacific)");
java.util.Calendar cal = java.util.Calendar.getInstance(timeZoneP);
Timestamp tStamp = rs.getTimestamp(2, cal);
System.out.println("local application timezone: " + localApptimezone);
System.out.println("time: " + tStamp.toString() + " in " + timeZoneP.getID());

Results (note the 1-hour difference for the Arizona test):

Location of app/JVM                Result
Washington (Pacific time zone)     local application timezone: America/Los_Angeles
                                   time: 2009-01-08 03:30:00.0 in GMT-8 (Pacific)
Arizona (Mountain time zone)       local application timezone: America/Phoenix
                                   time: 2009-01-08 04:30:00.0 in GMT-8 (Pacific)

The recommended way to handle this type of scenario is to use DateFormat with an explicit time zone and avoid relying on toString() and the other APIs mentioned above. Alternatively, you can convert the date/time to a character string directly.

Statement s = connection.createStatement();
query = "select id, dateTime_v from datetime_tbl where id = " + 1;
s.execute(query);
ResultSet rs = s.getResultSet();
rs.next();
String localApptimezone = Calendar.getInstance().getTimeZone().getID();
// -8 is the offset from GMT to read the time as Pacific time.
java.util.TimeZone timeZoneP = new java.util.SimpleTimeZone(-8*3600000, "GMT-8 (Pacific)");
java.util.Calendar cal = java.util.Calendar.getInstance(timeZoneP);
Timestamp tStamp = rs.getTimestamp(2, cal);
java.text.DateFormat dateFormat = java.text.DateFormat.getInstance();
dateFormat.setTimeZone(timeZoneP);
System.out.println("local application timezone: " + localApptimezone);
System.out.println("time: " + dateFormat.format(tStamp) + " in " + timeZoneP.getID());

Results (consistent for both region tests, as expected):

Location of app/JVM                Result
Washington (Pacific time zone)     local application timezone: America/Los_Angeles
                                   time: 1/8/09 3:30 AM in GMT-8 (Pacific)
Arizona (Mountain time zone)       local application timezone: America/Phoenix
                                   time: 1/8/09 3:30 AM in GMT-8 (Pacific)

Cross Posted from http://blogs.microsoft.com/mssqlisv
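The underlying arithmetic is the same in any language. The Python sketch below (illustrative only; it models the timezone math, not the JDBC driver's behavior) shows why the Arizona host saw 04:30: rendering the stored instant in the host's local zone shifts the wall-clock time, while formatting with an explicitly chosen zone is host-independent. (Arizona does not observe daylight saving time, hence the fixed -7 offset.)

```python
from datetime import datetime, timezone, timedelta

pacific = timezone(timedelta(hours=-8), "GMT-8 (Pacific)")
mountain = timezone(timedelta(hours=-7), "GMT-7 (Mountain/Arizona)")

# The instant the database row represents: 2009-01-08 03:30:00 Pacific time.
instant = datetime(2009, 1, 8, 3, 30, tzinfo=pacific)

# Rendering the same instant in the host's local zone shifts the wall clock --
# this is the surprise the Arizona test exposed:
as_mountain = instant.astimezone(mountain)
print(as_mountain.strftime("%Y-%m-%d %H:%M:%S"))  # 2009-01-08 04:30:00

# Formatting with an explicit target zone gives the same string on every host:
print(instant.astimezone(pacific).strftime("%Y-%m-%d %H:%M:%S"))  # 2009-01-08 03:30:00
```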
Zeroing in on blocking on seemingly unrelated tables
In one of our recent lab tests we were surprised to see blocking occur on a table that did not participate in the transaction reported as the cause of the blocking. From the sp_lock output we noticed an 'X' lock being held on the table, yet we could guarantee that there was no insert, delete, or update activity against it. Needless to say, this was odd and baffled us for a while; only when we analyzed the table definitions could we determine the source of the problem. Let's walk through this scenario with a simplified example and explain the cause of the blocking. Consider the case where we have two tables, ORDERS and ORDER_LINE, with a foreign-key relationship between them.

[Figure: ORDERS and ORDER_LINE table definitions, with ORDER_LINE.ORDER_ID referencing ORDERS.ORDER_ID]

These tables are populated with the following 4 rows of data:

INSERT INTO ORDERS (ORDER_ID, CREATED_BY, DUE_DATE)
VALUES (100, 'Burzin', '11/24/2003'), (101, 'Burzin', '11/28/2003');
INSERT INTO ORDER_LINE (ORDER_ID, ORDER_LINE_ID, ITEM, QUANTITY)
VALUES (100, 1, 1028, 12), (101, 2, 1029, 24);

Furthermore, Read Committed Snapshot Isolation (RCSI) was enabled on the database. In our scenario, two transactions executed the following T-SQL statements via separate database connections (different SPIDs):

SPID-56
BEGIN TRAN
UPDATE ORDER_LINE SET ORDER_ID = 101 WHERE ORDER_ID = 100;
...

SPID-57
BEGIN TRAN
DELETE ORDERS WHERE ORDER_ID = 100;
...

(NOTE: these two transactions operate on different tables.)

When these transactions were executed, we observed blocking on the ORDERS table. This was a bit non-intuitive and baffling at first. To get to the root of the problem we started by investigating the common causes, but couldn't find any reason for two transactions operating on different tables to block each other. On further analysis, however, we noticed that the ORDER_LINE table has a foreign-key relationship to the ORDERS table, and because of this, when the ORDER_LINE table was updated, the ORDERS table was referenced to ensure that the foreign-key relationship was preserved. This is also why the DELETE statement held a shared ('S') lock on rows of the child table, ORDER_LINE, even though it was only deleting from the parent table, ORDERS. This solved part of the mystery. Looking at the output of sp_lock, we observed two exclusive ('X') key locks acquired on the ORDERS table, which were causing other transactions operating on the ORDERS table to block.

[Screenshot: sp_lock output showing two exclusive (X) KEY locks held on the ORDERS table]

This once again was a bit confusing. If the ORDERS table was being accessed solely to verify the referential integrity of the data and preserve the foreign-key relationship, why were exclusive locks being acquired? Furthermore, since RCSI was enabled on the database, we expected the read and write operations not to block each other. Digging deeper, we determined that the database engine had to acquire an 'X' lock on the ORDERS table as soon as the second transaction tried to modify a row, to prevent the possibility of the referential integrity being broken. One can imagine a pathological case where the first transaction updates the ORDER_LINE table with a value that satisfies the foreign-key relationship, and the second transaction then DELETEs that value from the ORDERS table, leaving the referential integrity broken and the foreign key pointing to a phantom value. In fact, this is exactly what occurs in the example presented above, and it is depicted in the screenshot where the 'S' lock on the ORDER_LINE table requested by the DELETE statement is blocked by the UPDATE statement. The blocking chain can therefore be viewed as: queries accessing the ORDERS table → blocked by the DELETE statement → blocked by the UPDATE statement. To prevent this situation, the database engine acquires an 'X' lock on a referenced table (ORDERS) as soon as it determines that a data-modification transaction is operating on it. This is by design and is expected behavior; without it, the referential integrity between the tables could be broken. If your application encounters a similar issue, you may want to try to work around it by:

1. Modifying your application so that two transactions do not operate on the same or referenced objects concurrently; in the example above, the update and the delete could have been performed in a single transaction.
2. Removing the referential integrity constraint. This assumes that your application does not need it, or can enforce it via some other means, e.g. within the business logic layer of the application.

Cross Posted from http://blogs.microsoft.com/mssqlisv
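A toy lock-compatibility model (purely illustrative; SQL Server's lock manager is far richer, and the names here are hypothetical) captures the essence of the chain: the exclusive key lock taken on the parent table for the foreign-key check is what blocks other sessions touching ORDERS.

```python
# S (shared) locks are compatible with each other; X (exclusive) conflicts with both.
COMPATIBLE = {("S", "S"): True, ("S", "X"): False, ("X", "S"): False, ("X", "X"): False}

held = {}  # resource -> list of (spid, mode)

def request(spid, resource, mode):
    """Grant the lock unless an incompatible lock is held by another session."""
    for owner, held_mode in held.get(resource, []):
        if owner != spid and not COMPATIBLE[(held_mode, mode)]:
            return "BLOCKED"
    held.setdefault(resource, []).append((spid, mode))
    return "GRANTED"

# SPID-56's UPDATE of ORDER_LINE takes an X key lock on the referenced ORDERS row
# so the foreign key cannot be invalidated mid-flight.
print(request(56, "ORDERS:key", "X"))  # GRANTED

# SPID-57's DELETE needs its own lock on that ORDERS row and is blocked,
# even though it never modified ORDER_LINE.
print(request(57, "ORDERS:key", "X"))  # BLOCKED
```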