
Evolution of the SQL Server Data Access Model

SQL Server 2000
 

Dino Esposito
Wintellect

April 2004

Applies to:
   Microsoft® SQL Server™
   Microsoft® ADO.NET

Summary: Get an annotated overview of the data access layers available today that work with SQL Server, including ADO, the XML interface of SQL Server 2000 (SQLXML), ADO.NET, and the upcoming ADO.NET 2.0. (14 printed pages)

Contents

The Road to OLE DB and ADO
With Further ADO
ADO.NET
Working with SQL Server Through XML
The SQLXML Managed Library
What Will Be New (and Hot) With Visual Studio 2005
Conclusion

In the past few years, several programming models for Microsoft® SQL Server™ databases have followed one another in rapid succession. The features provided by the underlying database management system (DBMS) changed over time, keeping pace with users' requirements and with the overall goal of designing a more powerful and successful product. Likewise, the associated data access models evolved to incorporate newly released features, while still providing an easy and increasingly abstract way for developers and architects to build an effective data access layer in end applications.

I worked on my first Windows® project in the early 1990s. Our application needed some sort of database support. In itself, the database wasn't a critical part of the application, and there was no reason for choosing, say, Microsoft® Access over Paradox, or SQL Server™ instead of Oracle. We just needed a common programming interface and the ability to read, write, and filter a few records. We ended up adopting Open Database Connectivity (ODBC), the emerging standard for database interoperability. It wasn't an obvious choice, though; it suited us because we could afford it. The ODBC API, in fact, was designed for C/C++ programmers. It was great for us, just not for everybody.

ODBC was the first attempt to normalize access to a variety of DBMS systems through a common API. Before that, programmers were unfairly expected to have an intimate knowledge of the database's nuts and bolts. Programmers had to be in sync with the specialties and idiosyncrasies of the particular SQL dialect and the underlying architecture design. Although not perfect on its own, ODBC showed the right way to go.

In this article, I'll provide an annotated overview of the data access layers available today to work with SQL Server. I'll focus on ActiveX Data Objects (ADO), the XML interface of SQL Server 2000, and ADO.NET, but I won't forget to add a word or two on the future, specifically ADO.NET 2.0.

The Road to OLE DB and ADO

ODBC was designed as a C-oriented application programming interface (API), but the advent of the Component Object Model (COM) and of rapid application development (RAD) tools like Microsoft® Visual Basic® urgently posed the problem of finding a COM-oriented alternative. More or less at the same time, Access and Visual Basic developers were enjoying the Data Access Objects (DAO) object model bundled with Microsoft Access and leveraging the Jet database engine. DAO was first enhanced to call into DBMS systems such as SQL Server and Oracle through ODBC. Basically, DAO acted as a proxy between code and the ODBC API. This model introduced some overhead, though, because the resultset had to be converted to the DAO object model to really become usable.

To work around the issue, Microsoft first introduced RDO and then ODBCDirect. Remote Data Objects (RDO) is a COM object model that exposes the functionality of the ODBC model. It is not richer or more powerful than ODBC; it is simply a COM wrapper for ODBC designed to make ODBC accessible from within script, Web, and Visual Basic applications. ODBCDirect, instead, was created to speed up existing DAO applications calling into SQL Server. ODBCDirect is a sort of DAO working mode; when enabled, it uses RDO and ODBC instead of Jet to get and set data on SQL Server databases.

However, the first significant innovation in data access after ODBC was OLE DB, introduced in 1998 as part of the Visual Studio® 6.0 platform. OLE DB is the programming side of what was then called the Universal Data Access (UDA) vision. The inspiring principle of UDA was making any data storage accessible through a common set of COM interfaces. Basically, it is the same abstract idea behind ODBC, but reworked and refined to fit into the most popular and successful programming platform of that time, the Component Object Model.

Once again, the model came with a few imperfections at first. Fully COM-based, the consumer/provider model of OLE DB was relatively easy to implement in C++ applications, but proved impossible to follow in Visual Basic and Active Server Pages (ASP) applications. Both Visual Basic and ASP components are unable to work with low-level COM interfaces, not to mention that the overall programming model is extremely sophisticated and requires a good deal of work even for simple operations.

ADO is the COM library created to make the OLE DB model accessible to all programming environments. Like RDO for ODBC, it sits between the application and the low-level data access API and mediates operations, ensuring that the stream of data retrieved from the DBMS is exposed to callers in an appropriate and convenient format.

With Further ADO

The OLE DB programming model consists of two interacting entities, the consumer and the provider. Both are implemented as COM objects and are required to expose a fixed number of interfaces to guarantee interoperability. The provider wraps the functionality of a given data storage and makes that functionality externally available through the suite of interfaces designed for OLE DB providers. The consumer must know and be able to invoke the provider's interfaces; it is also responsible for managing the data in input and output.

When the client reads data out of the data source, the consumer receives a pointer to an object that implements the IRowset interface. The data is available as a stream of bytes, and mapping it to more manageable containers like arrays or Recordset objects is up to the consumer.

ADO deals with the underlying OLE DB machinery for you. When used, ADO is the real OLE DB consumer, not your application. In terms of raw performance, this inevitably adds some overhead due to both data packaging and some housekeeping code. ADO, in fact, cross-checks the values of object properties to ensure they do not contradict one another. You might be surprised to see that ADO is smart enough to fix some incompatible settings, such as the cursor type and location. If the two are in direct disagreement, ADO transparently adjusts their values. Here's a quick example.

rs.CursorLocation = adUseClient
rs.Open cmd, connString, adOpenKeyset

First, the Recordset object is configured to use a client-side cursor location; then, when the Open method is invoked, the cursor type parameter is set to a server-side cursor. No exception is thrown, but the cursor type parameter is silently adjusted to adOpenStatic to reflect the client-side location.

Table 1 lists the key objects that form the ADO object model.

Table 1. Main objects in the ADO object model

Type        Description
Connection  Represents a connection to the specified data source
Command     Represents a command to execute
Field       Represents a column in a returned record
Parameter   Represents a parameter to add to a command
Recordset   Represents a block of records generated by a query

In order to execute a command with ADO, first create a Command object, then bind it to an active connection, and run it. If the command is expected to return a block of records, you can opt for a Recordset object. The Open method of the Recordset object does everything—opens the connection, executes the command, and fills the Recordset with the resultset. Here's a quick example.

Dim rs As New Recordset
rs.Open "SELECT * FROM employees", _
    "PROVIDER=sqloledb;DATABASE=northwind;SERVER=(local);" & _
    "Integrated Security=SSPI", _
    adOpenForwardOnly, adLockReadOnly

ADO is the primary choice (often the only reasonable choice) for building a SQL Server data access layer in COM, ASP, and Visual Basic 6.0 applications. It has also been incorporated in, and is easy to call from, non-Microsoft RAD tools such as Delphi and PowerBuilder.

What if You're Building .NET Applications Instead?

ADO is a COM library, so there's no architectural or syntactical barrier to calling it from within a .NET application through COM interop. That said, using ADO in .NET is generally not a good idea.

In the .NET Framework, a large part of the ADO functionality has been incorporated into the ADO.NET framework. The OLE DB provider is no longer the preferred way to access data; it has been replaced by the managed data provider. In addition, in the .NET Framework you can call a fair number of OLE DB providers directly, without the intermediation of ADO. In summary, ADO is not a good or recommended choice for .NET applications, except in a couple of situations.

ADO.NET doesn't support server cursors and doesn't supply a rich object model for schema manipulation like ADOX. If your application needs these features, ADO is the preferred choice, even if you're building a .NET-managed system.

ADO.NET

ADO.NET is a set of classes that expose data access services to .NET applications. Several syntax differences exist between the object models of ADO and ADO.NET. In spite of this, the functionalities of ADO and ADO.NET look much the same, largely because Microsoft strove to align as many programming aspects of the ADO.NET object model with ADO as possible. In this way, data developers don't need to get familiar with too many new concepts in order to use ADO.NET, and can migrate with a relatively short learning curve. With ADO.NET, you probably won't be able to reuse much of your existing code; you will certainly be able to reuse all of your ADO skills, though.

ADO.NET consists of two high-level blocks, data containers and data providers. Data container classes form a sort of in-memory database model. Classes like DataSet, DataTable, and DataView are array-like classes and can be filled with any data, including data retrieved from a database. In addition, these classes provide a disconnected and memory-based database model that supports advanced features such as tables, relations, constraints, and primary keys.
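
To make the in-memory model concrete, here is a minimal C# sketch (the table and column names are invented for illustration) that builds a DataTable with a typed column and a primary key, and adds it to a DataSet:

using System;
using System.Data;

class ContainerDemo
{
    static void Main()
    {
        // An in-memory table with typed columns and a primary key
        DataTable employees = new DataTable("Employees");
        DataColumn id = employees.Columns.Add("EmployeeID", typeof(int));
        employees.Columns.Add("LastName", typeof(string));
        employees.PrimaryKey = new DataColumn[] { id };

        // Rows can come from anywhere, not just a database
        employees.Rows.Add(new object[] { 1, "Davolio" });

        // The DataSet groups tables and can host relations and constraints
        DataSet ds = new DataSet("Northwind");
        ds.Tables.Add(employees);
        Console.WriteLine(ds.Tables["Employees"].Rows.Count);   // prints 1
    }
}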

Managed providers are the second group of logical components in the ADO.NET architecture. They are the .NET counterpart to OLE DB providers. Managed providers wrap DBMS systems, and in general data stores, and expose their functionalities through common programming interfaces and data containers.

Managed providers differ from OLE DB providers in at least two key areas; they're managed objects (as opposed to COM objects), and they're simpler objects (implementing a more compact suite of interfaces). Another big benefit of managed providers is that they return data using high-level, framework-specific objects, making any conversion to manageable containers (like the Recordset in ADO) totally unnecessary.

A .NET application that needs to work with SQL Server will use the classes of the SQL Server managed provider to open and close a connection, prepare and run commands, and parse the results of a query. At the same time, ADO.NET container classes (specifically, the DataSet class) will be used in all those situations in which the results of a query must be cached and used in a disconnected manner. The DataSet is also a fundamental tool to update a database in batch mode.
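
As a minimal C# sketch of this typical usage (assuming the standard Northwind sample database), the data adapter fills a DataSet while connected, and the DataSet is then consumed offline:

using System;
using System.Data;
using System.Data.SqlClient;

class AdapterDemo
{
    static void Main()
    {
        string connString = "SERVER=(local);DATABASE=northwind;Integrated Security=SSPI";
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT employeeid, lastname FROM employees", connString);

        // Fill opens the connection, runs the query, and closes the connection
        DataSet ds = new DataSet();
        adapter.Fill(ds, "Employees");

        // Work on the cached data in a disconnected manner
        foreach (DataRow row in ds.Tables["Employees"].Rows)
            Console.WriteLine(row["lastname"]);
    }
}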

The ADO.NET object model revolves around the objects listed in Table 2.

Table 2. Logical Components of a Managed Provider

Component    Description
Connection   Creates a connection with the specified data source, be it SQL Server, Oracle, or any data source for which you can indicate either an OLE DB provider or an ODBC driver.
Transaction  Represents a transaction in the underlying database server.
Command      Represents a command that hits the underlying database server.
Parameter    Represents a parameter for the command object.
DataAdapter  Represents a database command based on a disconnected set of records. The DataAdapter can be used to get a collection of records or to batch-update the database with the values in the current collection. The collection of records is represented with a DataSet or DataTable class.
DataReader   Represents a read-only, forward-only cursor created on the underlying database server.

The .NET Framework supplies a managed provider for SQL Server 7.0 and newer versions, which is by far the most effective way to access SQL Server from within .NET applications. In general, a .NET application can access a SQL Server database in two ways. It can use the SQL Server managed provider (recommended), or the OLE DB managed provider. In the latter case, the OLE DB provider passes through the COM-based OLE DB provider used by ADO. The rub is that the COM-based OLE DB provider component (called SQLOLEDB) passes through the COM Interop layer, which seamlessly provides for data and type marshaling with the accompanying overhead that this requires. (Using the SQLOLEDB component is necessary if, for some reason, you have to connect to SQL Server 6.5.)
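
As a quick C# illustration of the two routes (connection strings abbreviated to the essentials), compare the SQL Server managed provider with the OLE DB managed provider going through SQLOLEDB:

using System.Data.OleDb;
using System.Data.SqlClient;

class TwoRoutes
{
    static void Main()
    {
        // Recommended: the SQL Server managed provider talks to SQL Server directly
        SqlConnection direct = new SqlConnection(
            "SERVER=(local);DATABASE=northwind;Integrated Security=SSPI");
        direct.Open();
        direct.Close();

        // Possible, but slower: the OLE DB managed provider goes through the
        // COM-based SQLOLEDB provider and pays the COM interop marshaling cost
        OleDbConnection viaOleDb = new OleDbConnection(
            "PROVIDER=sqloledb;SERVER=(local);DATABASE=northwind;Integrated Security=SSPI");
        viaOleDb.Open();
        viaOleDb.Close();
    }
}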

The ADO.NET programming model is based on a relatively standard sequence of steps that first create a connection, then prepare and execute a command, and finally process the data retrieved. This simple model is broadly equivalent to ADO's (the names of the objects involved are also similar), and doesn't significantly change if you switch to another database.

Dim conn As New SqlConnection(connString)
Dim cmd As New SqlCommand(cmdText, conn)
cmd.Connection.Open()
Dim cursor As SqlDataReader = cmd.ExecuteReader()
' Process the data
cmd.Connection.Close()

The code above shows how to run a query. To execute a non-query statement like an UPDATE, change it as follows.

Dim conn As New SqlConnection(connString)
Dim cmd As New SqlCommand(cmdText, conn)
cmd.Connection.Open()
cmd.ExecuteNonQuery()
cmd.Connection.Close()

If you're going to execute a stored procedure, set the command text to the stored procedure's name and add the following declaration to the command object.

cmd.CommandType = CommandType.StoredProcedure
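
To show how the Parameter object from Table 2 plugs into this model, here is a minimal C# sketch that calls CustOrderHist, a stored procedure that ships with the Northwind sample database:

using System;
using System.Data;
using System.Data.SqlClient;

class SprocDemo
{
    static void Main()
    {
        string connString = "SERVER=(local);DATABASE=northwind;Integrated Security=SSPI";
        using (SqlConnection conn = new SqlConnection(connString))
        {
            SqlCommand cmd = new SqlCommand("CustOrderHist", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@CustomerID", SqlDbType.NChar, 5).Value = "ALFKI";

            conn.Open();
            SqlDataReader reader = cmd.ExecuteReader();
            while (reader.Read())
                Console.WriteLine(reader["ProductName"]);
            reader.Close();
        }
    }
}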

A key difference between ADO and ADO.NET is that ADO is a general-purpose object model and a direct result of the OLE DB and UDA vision. Simply put, it is one programming model that fits all data stores (as long as an OLE DB provider exists for the data store). The same code works unchanged (or requires limited changes) if you switch, say, from Access to SQL Server.

All .NET managed providers are optimized for the needs and the features of a particular data source. Unlike OLE DB providers, they have a smaller number of interfaces to implement and can offer a mechanism closer to the real working of the DBMS, with no abstraction or unnecessary complexity. This increased speed comes at a price: writing database-agnostic code is a bit harder, although perfectly possible.

Each target database has its own working set of classes to carry out basic functions—connection, transaction, commands, and the like. To open a connection with SQL Server, you use the SqlConnection class. To run a command, the SqlCommand class is needed. To process a read-only cursor, the SqlDataReader object is necessary. Different classes are required to target Oracle or Access. For example, you use OracleConnection and OracleCommand classes to work with the Oracle database. The naming convention in use guarantees that names are similar, but the actual behavior may differ quite a bit. Differences don't show up that much with basic operations like executing a query or inserting a new record; it is likely to happen, instead, with more specific tasks like managing BLOB fields or server cursors.

Note   Good news is expected from ADO.NET Whidbey on the subject of database-independent code. The next version of ADO.NET features a brand-new factory model for .NET data providers. Stay tuned to this Developer Center to learn more in the upcoming months.

Working with SQL Server Through XML

In the past few years, the evolution of the computer industry has raised the need for total software integration and communication. As a result, any available data must be transformable into another model in order to be consumed as required by the context. The capability to accept and return data through XML streams is a key factor for a modern and up-to-date database.

Microsoft SQL Server 2000 comes with an embedded engine capable of manipulating data as XML. The syntax of some T-SQL commands has been modified to incorporate these new features, and a new set of commands has been added. All in all, a new XML-based API exists for SQL Server 2000 which lets users send and receive XML streams. The low-level changes in the SQL Server programming interface also have effects on the higher-level interface modules like ADO and ADO.NET. The SQLXML managed library in particular provides many new .NET classes specifically designed to support the XML capabilities of SQL Server. Let's review these capabilities first and the SQLXML API next.

In SQL Server 2000 there are two basic ways to retrieve XML data. You can use the XML extensions to the SELECT statement or, alternatively, execute a query on a particular text or BLOB field that contains text formatted as XML data. SQL Server 2000 doesn't mark those fields with a special attribute or data type to indicate they contain XML data, though. Only with SQL Server 2005 will the XML data type allow you to create table columns made of XML data.

If the SELECT statement contains a trailing FOR XML clause, the resultset is transformed into a string of XML text. Let's consider the following statement.

SELECT TOP 3 customerid, companyname FROM Customers FOR XML AUTO

The generated output is an XML fragment, as shown below.

<Customers customerid="ALFKI" companyname="Alfreds Futterkiste"/>
<Customers customerid="ANATR" companyname="Ana Trujillo Emp ..."/>
<Customers customerid="ANTON" companyname="Antonio Moreno Taquería"/>

Within the FOR XML clause, you can specify a working mode. Feasible values are listed in Table 3.

Table 3. Modes of the FOR XML extension

Mode      Description
AUTO      Returns query results as a sequence of XML nodes named after the table. Columns are rendered as attributes. If the additional ELEMENTS clause is specified, columns are rendered as child elements instead.
RAW       Returns query results as a sequence of generic <row> nodes with as many attributes as the selected fields.
EXPLICIT  Lets you explicitly define the schema of the XML document being returned.
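
For instance, appending the ELEMENTS clause to the earlier query switches the rendering from attributes to child elements. Here is a sketch of the expected output against Northwind:

SELECT TOP 1 customerid, companyname FROM Customers FOR XML AUTO, ELEMENTS

<Customers>
  <customerid>ALFKI</customerid>
  <companyname>Alfreds Futterkiste</companyname>
</Customers>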

Note that the XML data may optionally include schema information if you append the XMLDATA attribute to the FOR XML mode of choice, as shown below.

SELECT * FROM Employees FOR XML AUTO, XMLDATA

Schema information is incorporated in a <schema> node prepended to the rest of the XML fragment. As you may have noticed already, what SQL Server returns is not a whole, well-formed XML document. It fulfills the other syntax requirements of a well-formed document, but lacks a single root node. For this reason, it is called an XML fragment.

If the query joins two tables on the value of a column, then the resulting XML schema provides nested elements. For example, consider the following query.

SELECT Customers.CustomerID, Customers.ContactName, Orders.OrderID
FROM Customers 
INNER JOIN Orders ON Customers.CustomerID = Orders.CustomerID
FOR XML AUTO

Here's the XML output. As you can see, it automatically groups child records below the parent.

<Customers CustomerID="ALFKI" ContactName="Maria Anders">
  <Orders OrderID="10643"/>
  <Orders OrderID="10692"/>
  <Orders OrderID="10783"/>
   :
</Customers>
<Customers CustomerID="ALFKI" ContactName="Ana Trujillo">
  <Orders OrderID="11459"/>
  <Orders OrderID="10987"/>
:
</Customers>

SQL Server 2000 also supports XML as an input format for write operations. In particular, it accepts XML input through the following channels: the OPENXML statement, XML bulk loading, and updategrams.

OPENXML is a T-SQL function that takes care of inserting data represented as an XML document into a table. OPENXML parses the contents of the XML document and exposes it as a rowset. Bear in mind that OPENXML has been designed and optimized to handle relatively small documents, up to about 50 KB. For documents over that threshold, you should constantly monitor the response time to decide whether to stick with OPENXML or move to something different, like XML bulk loading.
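
Here is a minimal T-SQL sketch (the XML payload and its values are invented for illustration) that parses an XML document and inserts its rows into the Northwind Customers table:

DECLARE @hdoc int
DECLARE @xml nvarchar(1000)
SET @xml = N'<ROOT>
  <Customer CustomerID="AAAAA" CompanyName="Acme" ContactName="Joe Smith"/>
  <Customer CustomerID="BBBBB" CompanyName="Bubble" ContactName="Ann Jones"/>
</ROOT>'

-- Parse the document and get a handle to its in-memory representation
EXEC sp_xml_preparedocument @hdoc OUTPUT, @xml

-- Expose the XML as a rowset (flag 1 = attribute-centric) and insert it
INSERT INTO Customers (CustomerID, CompanyName, ContactName)
SELECT CustomerID, CompanyName, ContactName
FROM OPENXML(@hdoc, '/ROOT/Customer', 1)
     WITH (CustomerID nchar(5), CompanyName nvarchar(40), ContactName nvarchar(30))

-- Release the memory used by the parsed document
EXEC sp_xml_removedocument @hdoc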

XML bulk loading is another technique that lets you load XML data into SQL Server tables. Functionally similar to OPENXML, bulk loading is implemented through a COM object and provides higher performance when large amounts of XML data are processed. The bulk loader reads the XML data and tries to use SQL Server's BULK INSERT statement, meaning it attempts to batch all the records in a single shot. By doing this, it achieves a higher throughput. In other situations—for example, when you have an identity column that needs to propagate an ID from a parent to a child row—the bulk loader processes records individually. It identifies the database tables and columns involved, and then prepares and executes SQL statements. When the bulk loader encounters an XML element, it uses schema information to associate it with a record in a table. The record is actually written when the closing tag for that element is found. This algorithm ensures that, in cases of parent-child relationships, all the children are always processed before the parent row.

As mentioned earlier, XML Bulk Loading is implemented through a COM object. The object's progID is SQLXMLBulkLoad. The following Visual Basic 6.0 code shows how to use the object.

conn = "PROVIDER=sqloledb;SERVER=(local);DATABASE=" & _
    "northwind;Integrated Security=SSPI;"
Set bulk = CreateObject("SQLXMLBulkLoad.SQLXMLBulkload.3.0")
bulk.ConnectionString = conn 
bulk.Execute "schema.xml", "data.xml"

If run within a transaction, the XML bulk loader uses a temporary file for each table involved in the operation. Each file gathers all the changes for the specified table. When commit occurs, the contents of the various files are flushed into the corresponding SQL Server table using the BULK INSERT statement.

Finally, updategrams are an XML description of the changes (insertions, deletions, and updates) that must be applied to the database. Updategrams can be executed in various ways. For example, you can send the updategram text to SQL Server over HTTP; you can write the XML content out to a file and then point the browser (or any other HTTP-enabled software) to that URL so that the content gets executed; or you can send the updategram out using an ADO command stream.
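
For reference, here is a minimal updategram against the Northwind Customers table (the new contact name is invented for illustration); the before block identifies the existing row, and the after block describes its new state:

<ROOT xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
  <updg:sync>
    <updg:before>
      <Customers CustomerID="ALFKI" ContactName="Maria Anders"/>
    </updg:before>
    <updg:after>
      <Customers CustomerID="ALFKI" ContactName="Maria Anders-Smith"/>
    </updg:after>
  </updg:sync>
</ROOT>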

The SQLXML Managed Library

SQLXML 3.0 (Service Pack 2 now available) is an extension to SQL Server 2000 aimed at keeping it current with evolving W3C standards for XML. Available as a free download, SQLXML 3.0 also includes many managed classes that expose some of the XML-based functionalities to .NET applications. SQLXML 3.0 gives SQL Server 2000 the ability to expose stored procedures as a Web Service via SOAP, and also adds support for ADO.NET diffgrams and client-side XML transformations. To learn more about using SQLXML 3.0, check out the Data Access and Storage Developer Center's SQLXML content.

When you install SQLXML 3.0, you also get a few managed classes for .NET programming. The set of SQLXML managed classes consists of two main classes—SqlXmlCommand and SqlXmlAdapter—plus a few ancillary classes. SqlXmlCommand is the fundamental class used to execute an XML-driven command against SQL Server. The SqlXmlAdapter class is actually a wrapper for the command that simply exposes the results through a DataSet object. A good introduction to the SQLXML managed classes can be found here.
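
As a minimal C# sketch of the SqlXmlCommand class (assuming SQLXML 3.0 is installed and its Microsoft.Data.SqlXml assembly is referenced), the following code runs a FOR XML query and prints the resulting XML:

using System;
using System.IO;
using Microsoft.Data.SqlXml;

class SqlXmlDemo
{
    static void Main()
    {
        // SqlXmlCommand reaches SQL Server through the SQLOLEDB provider
        string conn = "Provider=SQLOLEDB;Server=(local);" +
            "Database=northwind;Integrated Security=SSPI";
        SqlXmlCommand cmd = new SqlXmlCommand(conn);
        cmd.CommandText =
            "SELECT TOP 3 customerid, companyname FROM Customers FOR XML AUTO";
        cmd.RootTag = "Customers";   // wraps the fragment in a single root node

        Stream stm = cmd.ExecuteStream();
        StreamReader reader = new StreamReader(stm);
        Console.WriteLine(reader.ReadToEnd());
        reader.Close();
    }
}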

The key thing to note about SQLXML 3.0 is that any operations directed at SQL Server pass through the OLE DB provider for SQL Server (SQLOLEDB). This situation will change in Visual Studio 2005, where all the XML-oriented features of SQL Server (e.g., bulk loading, XML data readers) will be fully integrated with the .NET Framework.

What Will Be New (and Hot) With Visual Studio 2005

A lot of new features are ready for SQL Server programmers in ADO.NET 2.0. First and foremost, the excellent provider factory model makes it possible to create a connection object in an indirect way, that is, using a factory object instead of the usual new operator. Batch update enhancements are also available for all data providers. Specific to the SQL Server managed provider are other features, like bulk copy and asynchronous execution of commands.

In ADO.NET 2.0, each .NET data provider defines a factory class. Among other things, this class is responsible for returning a connection object for the provider. You pass the factory the name of the provider you want to use, and it returns a connection object of the correct type, as shown by the following code snippet.

DbProviderFactory fact;
fact = DbProviderFactories.GetFactory("System.Data.SqlClient");
DbConnection conn = fact.CreateConnection();

The static GetFactory method takes the name of the provider with which you want to work, and returns the corresponding factory class. Once you've got it, you call the CreateConnection method to obtain the connection object in an indirect way. The introduction of this model makes generic database programming much easier to code.

Batch update is a feature introduced with ADO 2.1 and significantly enhanced in ADO.NET 1.x. The biggest limitation of ADO.NET batch update is that records are always submitted one at a time. This means that, for example, if one hundred rows have been updated, inserted, or deleted, then one hundred roundtrips to SQL Server occur to complete the operation. ADO.NET 2.0 introduces a new property on the data adapter object that lets you control the number of records grouped together and sent to the DBMS in a single shot. The property is named BatchUpdateSize and is set to 1 by default. You can increase that number at will, but note that an overly large value may clog the network and result in a loss of performance instead.
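
As a hedged C# sketch of batched updates (note: in the .NET Framework 2.0 as it finally shipped, this data adapter property is named UpdateBatchSize, which is what the code below uses; the query targets Northwind):

using System.Data;
using System.Data.SqlClient;

class BatchDemo
{
    static void Main()
    {
        string connString = "SERVER=(local);DATABASE=northwind;Integrated Security=SSPI";
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT employeeid, lastname FROM employees", connString);

        // The command builder supplies the INSERT/UPDATE/DELETE commands
        SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

        DataSet ds = new DataSet();
        adapter.Fill(ds, "Employees");
        foreach (DataRow row in ds.Tables["Employees"].Rows)
            row["lastname"] = row["lastname"].ToString().ToUpper();

        // Send rows in groups of ten rather than one roundtrip per row
        adapter.UpdateBatchSize = 10;
        adapter.Update(ds, "Employees");
    }
}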

The bulk copy functionality provides a much faster way to transfer large amounts of data into a SQL Server table. The performance you get with a specialized operation, such as a bulk copy, is typically remarkably better than using INSERT statements. In SQL Server 7.0 and newer, the BULK INSERT statement is used to copy formatted data, stored in an ASCII file, into a SQL Server table. You can use this statement from within any .NET Framework 1.1 application through an appropriate SqlCommand object. In ADO.NET 2.0, a new class named SqlBulkCopyOperation provides bulk copy facilities directly at the application level, without the need to know about SQL Server internal utilities and T-SQL commands.
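
For the record, this class shipped as SqlBulkCopy in the final .NET Framework 2.0; here is a minimal C# sketch with that final API (the destination table name is invented for illustration):

using System.Data;
using System.Data.SqlClient;

class BulkDemo
{
    static void CopyRows(DataTable sourceTable, string connString)
    {
        using (SqlConnection conn = new SqlConnection(connString))
        {
            conn.Open();
            using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
            {
                // Hypothetical destination table with a schema
                // matching that of sourceTable
                bulk.DestinationTableName = "Customers_Archive";
                bulk.WriteToServer(sourceTable);
            }
        }
    }
}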

A database operation is normally a synchronous operation, meaning that the caller regains control of the application only when the interaction with the database has completed. This way of working may pose performance and scalability issues in cases of lengthy operations. ADO.NET 2.0 provides true asynchronous support for two specific scenarios: opening connections and executing commands. Leveraging these features, you can open a connection and populate the command object while the connection is being physically established. This is a clear performance advantage, because it gives you a bit of real parallelism if SQL Server lives on a remote machine.

In ADO.NET 2.0, you find an additional pair of methods to open a connection, BeginOpen and EndOpen. A connection can only be closed synchronously, though.

string connStr = "SERVER=...;DATABASE=...;... ";
SqlConnection conn = new SqlConnection(connStr); 
SqlCommand cmd = new SqlCommand(query, conn);

// Begin connecting
IAsyncResult ar = conn.BeginOpen(null, null);
        
// Poll until connection is opened
while(!ar.IsCompleted) {
     // Do some work in the mean time
}
        
// End connecting 
conn.EndOpen(ar);

The asynchronous pattern is even more interesting when applied to the execution of commands. The support for asynchronous operations is built into the SqlCommand class and is limited to executing non-query commands, getting a data reader, and getting an XML reader. Let's briefly review the case of readers.

The first step is calling the BeginExecuteReader method, to which you pass a callback function and an object that represents the state of the particular call. The state object is any object that contains information useful to the callback. In this case, I simply pass a reference to the command object.

IAsyncResult ar = cmd.BeginExecuteReader(MyCallback, cmd);

After initiating the asynchronous operation, you can forget about it and do some other work. If there's a place in your code from which you can't move away without the results of the query, you place a synchronizer so that the code will automatically be stopped until the other thread invokes the callback.

ar.AsyncWaitHandle.WaitOne(); 

The callback follows this scheme.

public void MyCallback(IAsyncResult ar) 
{
   // Retrieve the context of the call (the command object)
   SqlCommand cmd = (SqlCommand) ar.AsyncState; 

   // Terminate the async operation
   SqlDataReader reader = cmd.EndExecuteReader(ar);

   // Process the results
   // ...
}

The context of the call that you specified as the second argument to BeginExecuteReader is packed in the AsyncState property of the IAsyncResult object. Typically, the callback will perform any user interface refresh that is needed after completing the operation.

Note   Asynchronous calls are implemented only within the Whidbey .NET data provider for SQL Server. Asynchronous calls require a network library with true asynchronous support, which is supplied with MDAC 9.0; MDAC 9.0 ships with SQL Server 2005 Beta 1 and with the more recent Community Technology Preview of Visual Studio 2005.

Conclusion

ADO (along with some made-to-measure tools like Remote Data Services) and ADO.NET are the two main data access technologies for SQL Server. Choosing one depends chiefly on the Windows platform you're targeting: Win32® and COM, or .NET. In addition to ADO and ADO.NET, and spanning the world of Win32 and the bright lights of .NET, is SQLXML, an object model that fully exploits the XML capabilities of SQL Server 2000.

In this article, I've discussed the main data access layers available to work with SQL Server. I've tried to put each into perspective to give the sense of the history, the current situation, and the future evolution.

In the end, ADO is the most reasonable (sometimes, the only) choice if you have to write COM, ASP, or Win32 applications. If you are instead writing an application based on the .NET Framework, then ADO.NET is a no-brainer. If you need to perform particular operations (e.g., server cursors, schema manipulation, bulk copy), ADO.NET might not yet offer the support you expect, but the trend is highly encouraging. ADO.NET 2.0, slated for Beta 1 in mid-2004, promises to integrate the missing functions and facilities into the existing Framework, thus delivering a powerful, consistent, and self-sufficient data access layer.

Dino Esposito is a trainer and consultant based in Rome, Italy. A member of the Wintellect and VB2TheMax teams, Dino specializes in ASP.NET and manages the ADO.NET and .NET Framework courseware for Wintellect. Dino spends most of his time teaching and consulting across Europe and the United States, but this didn't prevent him from writing the Microsoft Press guide to Programming Microsoft ASP.NET and the monthly "Cutting Edge" column for MSDN Magazine.
