
Migrating Informix Databases to Microsoft SQL Server 2000

Updated: July 26, 2002

On This Page

Introduction
Overview
Architecture and Terminology
Installing and Configuring Microsoft SQL Server
Defining Database Objects
Enforcing Data Integrity and Business Rules
Transactions, Locking, and Concurrency
SQL Language Support
Implementing Cursors
Tuning SQL Statements
Using ODBC
Developing and Administering Database Replication
Migrating Your Data and Applications
Conclusion

Introduction

This paper is for developers of Informix applications who want to convert their applications to Microsoft® SQL Server. The tools, processes, and techniques required for a successful conversion are described. Also highlighted are the essential design points that allow you to create high-performance, high-concurrency SQL Server–based applications.

Target Audience

The target audience can be new to Microsoft SQL Server and its operation but should have a solid foundation in the Informix RDBMS and general database concepts. The target audience should possess:

  • A strong background in Informix RDBMS fundamentals.

  • General database management knowledge.

  • Familiarity with the Informix SQL and SPL languages.

  • A working knowledge of the C/C++ programming language.

  • Membership in the SQL Server sysadmin fixed server role.

For clarity and ease of presentation, the reference development and application platform is assumed to be the Microsoft Visual Studio® development system version 6.0, the Microsoft Windows® 2000 operating system with Service Pack 2, SQL Server 2000 with Service Pack 2, and Informix 7.3. The Intersolv ODBC driver is used with Informix, and the Microsoft ODBC driver (version 3.70, MDAC 2.6 SP1) is used with SQL Server 2000. (Note that Visual Studio 6.0 Service Pack 4 allows you to alter database diagrams, stored procedures, table designs, or view designs, but not to save the changes; a future Visual Studio Service Pack will add a limited ability to save them. Also, the SQL Server 2000 tools cannot access database diagrams saved using the design tools in Visual Studio 6.0 until you have modified the dtproperties table in the database.) Microsoft Windows NT® Server 4.0 with Service Pack 5 (SP5) or later is the minimum operating system requirement for all SQL Server 2000 editions.

Overview

The application migration process can appear complicated. There are many architectural differences between the two RDBMSs, and the terminology used to describe Informix architecture often has a completely different meaning in Microsoft SQL Server. Additionally, both Informix and SQL Server have made many proprietary extensions to the SQL-92 standard.

From an application developer's perspective, Informix and SQL Server manage data in similar ways. The internal differences between Informix and SQL Server are significant, but if managed properly, have minimal impact on a migrated application.

SQL Language Extensions

The most significant migration issue that confronts the developer is the implementation of the SQL-92 SQL language standard and the extensions that each RDBMS has to offer. Some developers use only standard SQL language statements, preferring to keep their program code as generic as possible. Generally, this means restricting program code to the entry-level SQL-92 standard, which is implemented consistently across many database products, including Informix and SQL Server.

This approach can produce unneeded complexity in the program code and can substantially affect program performance. For example, the CASE expression in Microsoft SQL Server and Informix is a SQL-92 extension beyond entry level and is not implemented in all database products. Informix also supports the DECODE function, which is functionally equivalent to a simple CASE expression.
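As a sketch of this kind of conversion, the following shows how an Informix DECODE call might be rewritten as a Transact-SQL CASE expression (the orders table and its columns are hypothetical):

```sql
-- Informix (DECODE maps a value through match/result pairs):
--   SELECT order_num,
--          DECODE(order_status, 'O', 'Open', 'C', 'Closed', 'Unknown')
--   FROM orders;

-- Transact-SQL equivalent, using a simple CASE expression:
SELECT order_num,
       CASE order_status
           WHEN 'O' THEN 'Open'
           WHEN 'C' THEN 'Closed'
           ELSE 'Unknown'
       END AS status_text
FROM orders
```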

Also, procedural extensions to the SQL language can cause difficulties. The Informix SPL and SQL Server Transact-SQL languages are similar in function, but different in syntax. There is no exact symmetry between each RDBMS and its procedural extensions. Consequently, you might decide not to use stored programs such as procedures and triggers. This is unfortunate because they can offer substantial performance and security benefits that cannot be duplicated in any other way.

The use of proprietary development interfaces introduces additional issues. The conversion of a program using the Informix CLI (Call Level Interface) often requires a significant investment in resources. When developing an application that may use multiple RDBMSs, consider using the Open Database Connectivity (ODBC) interface.

ODBC

ODBC is designed to work with numerous database management systems. ODBC provides a consistent application programming interface (API) that works with different databases through the services of a database-specific driver.

A consistent API means that the functions a program calls to make a connection, execute a command, and retrieve results are identical whether the program is talking to Informix or SQL Server.

ODBC also defines a standardized call-level interface and uses standard escape sequences to specify SQL functions that perform common tasks but have different syntax in different databases. The ODBC drivers can automatically convert this ODBC syntax to either Informix native or Microsoft SQL Server native SQL syntax without requiring the revision of any program code. In some situations, the best approach is to write one program and allow ODBC to perform the conversion process at run time.
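For illustration, a few of the standard ODBC escape sequences are shown below; the driver rewrites each of them into native Informix or Transact-SQL syntax at run time (the table and procedure names are hypothetical):

```sql
-- Scalar function escape: translated to the native current-date expression
SELECT {fn CURDATE()}

-- Date literal escape:
SELECT * FROM orders WHERE order_date > {d '2002-01-01'}

-- Outer join escape:
SELECT c.name, o.order_num
FROM {oj customers c LEFT OUTER JOIN orders o ON c.cust_id = o.cust_id}

-- Stored procedure call escape:
{call update_inventory(?, ?)}
```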

ODBC is not a magical solution for achieving complete database independence, full functionality, and high performance from all databases. Different databases and third-party vendors offer varying levels of ODBC support. Some drivers just implement core API functions mapped on top of other interface libraries. Other drivers, such as the Microsoft SQL Server driver, offer full Level 2 support in a native, high-performance driver.

If a program uses only the core ODBC API, it will likely forego features and performance capabilities with some databases. Furthermore, not all native SQL extensions can be represented in ODBC escape sequences (such as SQL Server CASE expressions).

Additionally, it is common practice to write SQL statements to take advantage of the database's optimizer. The techniques and methods that enhance performance within Informix are not necessarily optimal within Microsoft SQL Server 2000. The ODBC interface cannot translate techniques from one RDBMS to another.

ODBC does not prevent an application from using database-specific features and tuning for performance, but the application needs some database-specific sections of code. ODBC makes it easy to keep the program structure and the majority of the program code consistent across multiple databases.

OLE DB

OLE DB is the next generation of data access technology. Microsoft SQL Server 2000 takes advantage of OLE DB within the components of SQL Server itself. Additionally, application developers should consider OLE DB for new development with SQL Server 2000. Microsoft does not include an OLE DB provider for Informix 7.3 with SQL Server 2000; however, third-party providers are available.

OLE DB is Microsoft's strategic system-level programming interface to manage data across the organization. OLE DB is an open specification designed to build on the features of ODBC. ODBC was created to access relational databases, and OLE DB is designed to access relational and non-relational information sources, such as mainframe ISAM/VSAM and hierarchical databases, e-mail and file system stores, text, graphical and geographical data, and custom business objects.

OLE DB defines a collection of COM interfaces that encapsulate various database management system services and allows the creation of software components that implement such services. OLE DB components consist of data providers (that contain and expose data), data consumers (that use data), and service components (that process and transport data, for example, query processors and cursor engines).

OLE DB interfaces are designed to help components integrate smoothly so that OLE DB component vendors can bring high quality OLE DB components to the market quickly. In addition, OLE DB includes a bridge to ODBC to allow continued support for the broad range of ODBC relational database drivers available today.

Organization of this Paper

To assist you in implementing a step-by-step migration from Informix to SQL Server, each section includes an overview of the relevant differences between Informix 7.3 and Microsoft SQL Server 2000. It also includes conversion considerations, SQL Server 2000 advantages, and multiple examples.

Architecture and Terminology

To start a successful migration, you should understand the basic architecture and terminology associated with Microsoft SQL Server 2000. Many of the examples in this section have been drawn from the sample Informix- and SQL Server–based applications included as part of this paper.

Definition of Database

In both Informix and Microsoft SQL Server, a database provides a logical separation of data, applications, and security mechanisms, and both products support multiple databases. In either system, applications can use databases to logically divide business functionality. There can be multiple instances of SQL Server on a single computer, and each instance can have multiple databases.

Informix tablespaces are in some ways equivalent to SQL Server filegroups. Each SQL Server database can contain filegroups, which physically distribute the placement of data in much the same way as an Informix tablespace. A filegroup categorizes the operating system files containing data from a single SQL Server database, which simplifies database administration tasks such as backup. The key difference is scope: a filegroup is a property of one SQL Server database and cannot contain the operating system files of more than one database (though a single database can contain more than one filegroup), whereas Informix allows multiple databases in one tablespace as well as multiple tablespaces for one database. In both products, filegroups and tablespaces can be added after database creation.
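As a sketch, the following creates a database with a user-defined filegroup in addition to the default PRIMARY filegroup, and then places a table on it (all names and file paths are illustrative):

```sql
CREATE DATABASE Sales
ON PRIMARY
   ( NAME = Sales_data1,
     FILENAME = 'C:\mssql\data\Sales_data1.mdf',
     SIZE = 100MB ),
FILEGROUP SalesHistory
   ( NAME = Sales_hist1,
     FILENAME = 'D:\mssql\data\Sales_hist1.ndf',
     SIZE = 200MB )
LOG ON
   ( NAME = Sales_log1,
     FILENAME = 'E:\mssql\log\Sales_log1.ldf',
     SIZE = 50MB )
GO

-- Place a table on the user-defined filegroup:
CREATE TABLE SalesArchive
(
    sale_id   int,
    sale_date datetime
) ON SalesHistory
```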

Figure 1: Informix DBMS Architecture

Figure 2: SQL Server 2000 DBMS Architecture

Microsoft SQL Server also installs the following databases by default:

  • The model database is a template for all newly created user databases.

  • The tempdb database is similar to an Informix temporary tablespace in that it is used for temporary working storage and sort operations. As with an Informix temporary tablespace, users can create temporary tables in tempdb that are automatically dropped when the user disconnects from the database.

  • The msdb database supports the SQL Server Agent and its scheduled jobs, alerts, and replication information.

  • The pubs and Northwind databases are provided as sample databases for training.
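As a brief sketch of the tempdb behavior described above, an Informix temporary table and its Transact-SQL counterpart might look like this (table and column names are hypothetical):

```sql
-- Informix:
--   CREATE TEMP TABLE temp_results (cust_id INTEGER, balance MONEY(12,2));

-- Transact-SQL: the # prefix creates a local temporary table in tempdb,
-- dropped automatically when the creating session disconnects.
CREATE TABLE #temp_results
(
    cust_id int,
    balance money
)
```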

For more information about the default databases, see SQL Server Books Online.

Database System Catalogs

Each Informix and Microsoft SQL Server 2000 database maintains its own system catalog, which contains information about:

  • Database objects (tables, indexes, stored procedures, views, triggers, and so on).

  • Constraints.

  • Users and permissions.

  • User-defined data types.

  • Replication definitions.

  • Files used by the database.

SQL Server also contains a centralized system catalog in the master database, which contains system catalogs as well as some information about the individual databases:

  • Database names and the primary file location for each database.

  • SQL Server login accounts.

  • System messages.

  • Database configuration values.

  • Remote and/or linked servers.

  • Current activity information.

  • System stored procedures.

The SQL Server master database must be available to access any other database. As such, it is important to protect against failures by backing up the master database after any significant changes are made in the database. Database administrators can also mirror the files that make up the master database.

In SQL Server you should not code Transact-SQL statements that directly query the system tables unless that is the only way to obtain the information required by the application. In most cases applications should obtain catalog and system information from:

  • The SQL-92 Information Schema Views.

  • SQL-DMO.

  • The catalog functions, methods, attributes, or properties of the data API used in the application, such as ADO, OLE DB, or ODBC.

  • Transact-SQL system stored procedures, catalog statements, and built-in functions.
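For example, rather than querying the sysobjects system table directly, an application can use the Information Schema Views or a system stored procedure (the authors table is from the pubs sample database):

```sql
-- SQL-92 Information Schema View: list the user tables in the current database
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'

-- System stored procedure: column, index, and constraint details for one table
EXEC sp_help 'authors'
```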

There is no SQL Server equivalent for the Informix sysmaster and sysutils databases.

Physical and Logical Storage Structures

The Informix RDBMS is comprised of dbspaces, which in turn are comprised of one or more chunks. A tablespace resides in a dbspace. Tablespaces are formatted into internal units termed pages; the page size is normally 2 KB. When an object is created in an Informix tablespace, the user can specify its space in units called extents (initial extent, next extent). An extent is a sequence of contiguous pages allocated to a table as a unit; as the table grows, Informix allocates additional extents of the specified next-extent size.

Microsoft SQL Server uses filegroups at the database level to control the physical placement of tables and indexes. Filegroups are logical containers of one or more files, and data contained within a filegroup is proportionally filled across all files belonging to the filegroup.

If filegroups are not defined and used, database objects are placed in a default filegroup that is implicitly defined during the creation of a database. Filegroups allow you to:

  • Distribute large tables across multiple files to improve I/O throughput.

  • Store indexes on different files than their respective tables, again to improve I/O throughput and disk concurrency.

  • Store text, ntext, and image columns (large objects) on separate files from the table.

  • Place database objects on specific disk spindles.

  • Back up and restore individual tables or sets of tables within a filegroup.

Like Informix, SQL Server formats files into internal units called pages. The page size is fixed at 8192 bytes (8 KB). Pages are organized into extents, which are fixed in size at 8 contiguous pages (64 KB). When a table or index is created in a SQL Server database, it is automatically allocated one page within an extent. As the table or index grows, it is automatically allocated space by SQL Server.

Striping Data

Informix-type tablespaces are not needed for most Microsoft SQL Server installations. Instead, SQL Server can distribute, or stripe, data more efficiently with hardware-based RAID or with software-based RAID available through the Windows NT Disk Administrator utility or from third parties. With RAID, you can set up stripe sets consisting of multiple disk drives that appear as one logical drive. If database files are created on this stripe set, the disk subsystem assumes responsibility for distributing I/O load across multiple disks. It is recommended that administrators spread out the data over multiple physical disks using RAID.

The recommended RAID configurations for SQL Server are RAID 1 (mirroring) and RAID 5 (striping with distributed parity, for redundancy). RAID 10 (a stripe of mirrored sets) offers the best combination of performance and redundancy, but is more expensive than the first two options. Stripe sets work very well to spread out the usually random I/O done on database files.

If RAID is not an option, filegroups are an attractive alternative and provide some of the same benefits available with RAID. Additionally, for very large databases that might span multiple physical RAID arrays, filegroups may be an attractive way to further distribute your I/O across multiple RAID arrays in a controlled fashion.

Transaction log files must be optimized for sequential I/O and must be secured against a single point of failure. Accordingly, RAID 1 (mirroring) is recommended for transaction logs. When migrating, the size of this drive should be at least as large as the sum of the size of the Informix logical logs. Create one or more log files that take up all the space defined on the logical drive. Unlike data stored in filegroups, transaction log entries are always written sequentially and are not proportionally filled.

For more information about RAID, see SQL Server Books Online, your Windows NT Server documentation, and the Microsoft Windows NT Resource Kit.

Transaction Logs and Automatic Recovery

The Informix RDBMS performs automatic recovery each time it is started. It verifies that the contents of the tablespace files are coordinated with the contents of the logical and physical logs. If they are not, Informix applies the contents of the logical and physical logs to the tablespace files (roll forward), and then removes any uncommitted transactions that are found. Informix will not archive a logical log until all transactions in the log have been committed.

Microsoft SQL Server 2000 also performs automatic data recovery by checking each database in the system each time it is started, when a database is attached, or as the final step in restoring a database from backups. Recovery performed by SQL Server when it starts is called restart (or startup) recovery. During recovery, SQL Server first checks the master database and then launches threads to recover all of the other databases in the system. For each SQL Server database, the automatic recovery mechanism checks the transaction log. If the transaction log contains any uncommitted transactions, the transactions are rolled back. The recovery mechanism then checks the transaction log for committed transactions that have not yet been written out to the database. If it finds any, it performs those transactions again, rolling forward.

The functionality of the SQL Server transaction log is the same as the Informix logical and physical logs. Unlike Informix, which has one set of logical and physical logs for all databases in the instance, each SQL Server database has its own transaction log that records all changes to the database and is shared by all users of that database. When a transaction begins and a data modification occurs, a BEGIN TRANSACTION event (as well as the modification event) is recorded in the log. This event is used during automatic recovery to determine the starting point of a transaction. As each data modification statement is received, the changes are written to the transaction log prior to being written to the database itself. For more information, see the "Transactions, Locking, and Concurrency" section later in this paper.

Like Informix, SQL Server has an automatic checkpoint mechanism that regularly writes modified pages from the SQL Server disk cache to the database files. A checkpoint writes to the database any cached page that has been modified since the last checkpoint. Checkpointing these cached pages, known as dirty pages, ensures that all completed transactions are written out to disk and shortens the time it takes to recover from a system failure, such as a power outage. The checkpoint frequency can be tuned by modifying the recovery interval setting, using either SQL Server Enterprise Manager or Transact-SQL (the sp_configure system stored procedure).
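The recovery interval setting can be changed through sp_configure; a minimal sketch follows (recovery interval is an advanced option, so advanced options must be made visible first):

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Target roughly five minutes of recovery work per database:
EXEC sp_configure 'recovery interval', 5
RECONFIGURE
```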

Backing Up and Restoring Data

Microsoft SQL Server offers several options for backing up data:

Full database backup

A database backup makes a copy of the full database. Not all pages are copied to the backup set, only those actually containing data. Both data pages and transaction log pages are copied to the backup set.

A database backup set is used to re-create the database as it was at the time the BACKUP statement completed. If only database backups exist for a database, it can be recovered only to the time of the last database backup taken before the failure of the server or database. To make a full database backup, use the BACKUP DATABASE statement or the Backup Wizard.

Differential backup

After a full database backup, regularly back up just the changed data and index pages using the BACKUP DATABASE WITH DIFFERENTIAL statement or the Backup Wizard. Differential backups are quicker in SQL Server 2000, due to an enhancement that tracks database changes at the extent level.

Transaction log backup

Transaction logs in Microsoft SQL Server are associated with individual databases. The transaction log fills until it is backed up or truncated. In the default configuration of SQL Server 2000, the transaction log grows automatically until it uses all available disk space or reaches its maximum configured size. When a transaction log becomes full, SQL Server raises an error and prevents further data modifications in that database until the log is backed up or truncated. Other databases are not affected. Transaction logs can be backed up using the BACKUP LOG statement or the Backup Wizard.
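A typical backup sequence combining the three options might look like the following (the database name and file paths are illustrative):

```sql
-- Weekly full database backup:
BACKUP DATABASE Sales TO DISK = 'E:\backup\Sales_full.bak'

-- Nightly differential backup (extents changed since the last full backup):
BACKUP DATABASE Sales TO DISK = 'E:\backup\Sales_diff.bak' WITH DIFFERENTIAL

-- Hourly transaction log backup:
BACKUP LOG Sales TO DISK = 'E:\backup\Sales_log.bak'
```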

Another threat to a database is the possibility that invalid data may be entered or that valid data may be destroyed by a user's action. In this case, you need to determine when the problem transaction began. In SQL Server 2000, you can mark transactions in the log. Later, if you need to restore, you can reference the mark that was used at the time of execution, rather than using wall-clock time. To do this, use a named BEGIN TRANSACTION statement and the WITH MARK [description] clause. The marks are stored in msdb. Recovery can include or stop right before a transaction that contains the mark.
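A sketch of a marked transaction and a restore that stops just before it (the names and paths are hypothetical):

```sql
-- Mark the log before a risky batch update:
BEGIN TRANSACTION PriceUpdate WITH MARK 'Before 10% price increase'
    UPDATE items SET unit_price = unit_price * 1.10
COMMIT TRANSACTION PriceUpdate

-- If the update proves to be wrong, restore to just before the mark:
RESTORE DATABASE Sales FROM DISK = 'E:\backup\Sales_full.bak' WITH NORECOVERY
RESTORE LOG Sales FROM DISK = 'E:\backup\Sales_log.bak'
    WITH STOPBEFOREMARK = 'PriceUpdate', RECOVERY
```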

When you shrink a log file, SQL Server 2000 shrinks the log as much as it can immediately and then indicates whether further shrinking will be possible after a log backup.

File backup, filegroup backup

A file or filegroup backup copies one or more files of a specified database, allowing a database to be backed up in smaller units: at the file or filegroup level. For more information, see SQL Server Books Online.

Log Shipping

In Microsoft® SQL Server™ 2000 Enterprise Edition, you can use log shipping to feed transaction logs from one database to another on a constant basis. Continually backing up the transaction logs from a source database and then copying and restoring the logs to a destination database keeps the destination database synchronized with the source database. This allows you to have a backup server and also provides a way to offload query processing from the main computer (the source server) to read-only destination servers. SQL Server 2000 Enterprise Edition must be installed on all servers used in log shipping.

Recovery Models

Recovery models were added to SQL Server 2000 to facilitate data protection planning. They clarify the tradeoffs between performance, log space requirements, and protection from media (disk) failure. There are three models: Simple Recovery, Full Recovery, and Bulk-Logged Recovery.

The choice of recovery model is based on database usage and availability requirements and helps determine appropriate backup and restore procedures. Recovery models only apply to media recovery, that is, recovery from backups. Restart recovery recovers all committed work. For more information, see "Selecting a Recovery Model" in SQL Server 2000 Books Online.

You can easily transition between recovery models. For example, on a very large database, you can use the Full Recovery model, the Bulk-Logged Recovery model, or both: Full Recovery during the day and Bulk-Logged Recovery at night, during a data load process that consists of bulk inserts and index rebuilds. You can also switch to the Bulk-Logged Recovery model while you run a data load, switch back to the Full Recovery model, run a transaction log backup, and still be able to restore to that point in time without having to run a full database backup. This makes bulk processing more efficient; all you need to do is make a transaction log backup afterward.
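The workflow described above might be scripted as follows (the database name and path are illustrative):

```sql
-- Before the nightly load, switch to minimal logging for bulk operations:
ALTER DATABASE Sales SET RECOVERY BULK_LOGGED

-- ... run BULK INSERT, SELECT INTO, or index rebuilds here ...

-- Switch back and re-establish point-in-time recovery with a log backup:
ALTER DATABASE Sales SET RECOVERY FULL
BACKUP LOG Sales TO DISK = 'E:\backup\Sales_log.bak'
```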

To change recovery models, use the following syntax:

ALTER DATABASE database_name SET RECOVERY { FULL | BULK_LOGGED | SIMPLE }

For more information, see "Switching Recovery Models" in SQL Server 2000 Books Online.

Simple recovery model

The Simple Recovery model typically requires less log space, but it incurs the greatest potential work loss if data or log files are damaged. Only events needed for basic recovery are logged. Using the Simple Recovery Model, only full database and differential database backups are available. In the event of a failure, all committed work since the last backup must be redone. This model is the simplest to administer, but it is not a good choice for a mission-critical application where loss of committed work cannot be tolerated.

Full recovery model

In the Full Recovery model, everything is logged. Full Recovery model provides complete protection against work loss from a damaged data file. If the transaction log is damaged, work committed since the most recent log backup is lost and must be redone manually.

Even when you use the Full Recovery model, it is important to use fault-tolerant disks for the transaction log to prevent data loss. The Full Recovery model also allows recovery to any specific point in time.

Bulk-logged recovery model

The Bulk-Logged Recovery model provides the highest performance for bulk operations. These operations also consume less log space than they do under the Full Recovery model. For example, the allocation of a new page is logged, but the data inserted onto the page is not. In SQL Server 2000, bulk operations consist of bulk load (BCP and BULK INSERT, including when they run within a DTS package), SELECT INTO, CREATE INDEX, WRITETEXT, and UPDATETEXT.

Compared with the Full Recovery model, the Bulk-Logged Recovery model minimizes logging for bulk operations. Keep in mind that in the event that recovery becomes necessary, if the log is damaged or if bulk operations have occurred since the most recent log backup, changes made in the database since the last log backup are lost.

This model does not support recovery to a specific point in time, but it will allow recovery to the end of a transaction log backup containing bulk changes. Transaction log backups made using the Bulk-Logged Recovery model contain the extents modified by bulk operations. This feature improves support for log shipping, because you no longer need to worry that a bulk operation will invalidate your backups. SQL Server maintains a bitmap to track the data extents modified, which optimizes the process by which SQL Server identifies changes.

Improved backup functionality

In addition to the introduction of recovery models to simplify data protection in general, SQL Server 2000 has improved manageability: snapshot technology, differential backups, and security have been enhanced.

Backup operations do not conflict with applications or other administrative actions. For example, backups can occur concurrently with bulk operations such as create index and bulk load.

Log and file backups can occur concurrently.

Unattended backup operations, regardless of system activity, are also well supported in SQL Server 2000.

SQL Server supports snapshot backup and restore technologies in conjunction with independent hardware and software vendors. Snapshot backups minimize or eliminate the use of server resources to accomplish the backup. This is especially beneficial for moderate to very large databases in which availability is extremely important. The primary benefits of this technology are:

  • A backup can be created in a very short time, usually measured in seconds, with little or no impact on the server.

  • A disk backup can be used to restore a database just as quickly.

  • Another host can create a backup with no impact on the production system.

  • A copy of a production database can be created instantly for reporting or testing.

Snapshot backups and restores are accomplished in cooperation with third-party hardware and/or software vendors who use features of SQL Server 2000 designed for this purpose. The backup technology creates an instantaneous copy of the data being backed up, usually by splitting a mirrored set of disks. At restore time, the original is immediately available. The underlying disks are synchronized in the background, resulting in almost instantaneous restores.

Differential database backups can be completed in a time that is proportional to the amount of data changed since the last full backup. The less your data has changed, the quicker the backup. SQL Server 2000 uses a bitmap to track data extents modified since the most recent database or file backup to enable them to be located efficiently. In addition, SQL Server 2000 supports file differential backups.

Differential backups still accumulate all changes made to the database since the most recent full backup, and they function the same way in the event of recovery. They are significantly faster, however, because they record only the information that has changed; the savings are greatest for very large databases that contain only a small amount of changed data.

For added security, you can implement password protection for your backup media and backup sets. This helps prevent unauthorized users from adding to your backups or restoring to your database.

Both Informix and SQL Server require a specific format for backup files. In SQL Server, these files, called backup devices, are created using SQL Server Enterprise Manager, the Transact-SQL sp_addumpdevice stored procedure, or the equivalent SQL-DMO method.
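A named backup device can be registered once and then referenced in BACKUP statements; for example (the device name and path are illustrative):

```sql
-- Register a permanent disk backup device:
EXEC sp_addumpdevice 'disk', 'SalesBackup', 'E:\backup\Sales.bak'

-- Back up to the device by its logical name:
BACKUP DATABASE Sales TO SalesBackup
```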

Just as you can turn off Informix backups by setting the log archive tape device to /dev/null, in Microsoft SQL Server, members of the db_owner fixed database role can force the transaction log to erase its contents every time a checkpoint occurs. This can be accomplished by using SQL Server Enterprise Manager (the trunc. log on chkpt. option), Transact-SQL (the sp_dboption stored procedure), or SQL-DMO.

Networks

Informix Connect supports networked connections between Informix database servers and their clients. It communicates with the Informix instance and allows clients to use either TLI or socket connections over TCP/IP.

With Microsoft SQL Server, Net-Libraries (network libraries) support the networked connections between the clients and the server by using the Tabular Data Stream (TDS) protocol. They enable simultaneous connections from clients running Named Pipes, TCP/IP Sockets, or other Inter-Process Communication (IPC) mechanisms. The SQL Server CD-ROM includes all client Net-Libraries so that there is no need to acquire them separately.

Figure 3: Informix Network Overview

Figure 4: SQL Server 2000 Network Overview

Most of the capabilities of the Client Network Utility are designed for advanced users who want to create and edit client configuration entries. In most cases, the client default settings will work.

Configuring Net-Libraries has been greatly simplified for applications using the SQL Server 2000 client connectivity components to connect to SQL Server 2000 named instances. You only have to:

  • Select the client protocols installed on the application computer during setup.

  • Select the server protocols enabled during setup for the instance of SQL Server.

No other configuration is required. You can then connect to any instance of SQL Server 2000 by specifying the network name of the database computer and the instance name.

For more information about Net-Libraries, see SQL Server Books Online.

Database Security and Roles

To adequately migrate your Informix applications to Microsoft SQL Server 2000, you must understand how SQL Server implements database security and roles.

Login Accounts

A login account allows a user to access SQL Server data or administrative options. By itself, a login account allows users only to log in to SQL Server and to view databases that allow guest access. (The guest account is not set up by default and must be created.)

SQL Server offers two types of login security: Windows NT Authentication Mode (also known as integrated security) and SQL Server Authentication Mode (also known as standard security). SQL Server 2000 also supports a combination of standard and integrated security, known as mixed security.

The Windows NT Authentication Mode uses the security mechanisms within Windows 2000 when validating login connections, and relies on the user's Windows NT security credentials. Users do not need to enter login IDs or passwords for SQL Server; their login information is taken directly from the network connection. An entry for the login is recorded in the syslogins table, and the credentials are verified between Windows 2000 and SQL Server. This is known as a trusted connection and works like a trust relationship between two servers running Windows 2000. In SQL Server 2000, a user must specify "Trusted_Connection=yes" in the connection string for ADO, OLE DB, or ODBC to obtain a trusted connection.
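As a sketch, ODBC connection strings for the two modes might look like the following (the server, database, and login names are illustrative, not taken from a real deployment):

```
Trusted connection (Windows NT Authentication Mode); no UID or PWD is sent:
Driver={SQL Server};Server=MYSERVER;Database=USER_DB;Trusted_Connection=yes

Nontrusted connection (SQL Server Authentication Mode):
Driver={SQL Server};Server=MYSERVER;Database=USER_DB;UID=ENDUSER1;PWD=ENDUSER1
```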

The SQL Server Authentication Mode requires that a user enter a login ID and password when requesting access to SQL Server. This is known as a nontrusted connection. With the use of the standard security model, the login provides access to the SQL Server database engine only; it does not provide access to the user databases.

By default, Informix uses the network login ID to authenticate users. The login ID and password must be valid on both the client and the database server (or domain, under Windows NT). The user can connect with a login ID and password different from the network login by using the CONNECT statement with the USER clause; in that case, the login ID and password must still be valid on the host (or domain) where the database resides.

For more information about these security mechanisms, see SQL Server Books Online.

Groups, Roles, and Permissions

Microsoft SQL Server and Informix use permissions to enforce database security. SQL Server statement-level permissions are used to restrict the ability to create new database objects (similar to Informix database-level privileges).

SQL Server also offers object-level permissions. As in Informix, object-level ownership is assigned to the creator of the object and cannot be transferred. Object-level permissions must be granted to other database users before they can access the object. Members of the sysadmin fixed server role, db_owner fixed database role, or db_securityadmin fixed database role can also grant permissions on one user's objects to other users.

SQL Server statement and object-level permissions can be granted directly to database user accounts. However, it is often simpler to administer permissions to database roles. SQL Server roles are used for granting and revoking privileges to groups of database users (much like Informix roles). Roles are database objects associated with a specific database. There are a few specific fixed server roles associated with each installation, which work across databases. An example of a fixed server role is sysadmin. Windows NT groups can also be added as SQL Server logins, as well as database users. Permissions can be granted to a Windows NT group or a Windows NT user.

A database can have any number of roles or Windows NT groups. The default role public is always found in every database and cannot be removed. The public role functions much like the PUBLIC account in Informix. Each database user is always a member of the public role. A database user can be a member of any number of roles in addition to the public role. A Windows NT user or group can also be a member of any number of roles, and is also always in the public role.
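As an illustrative sketch (the role name, group name, and GRANT list below are hypothetical, not part of the sample application), a role is created, populated, and granted permissions like this:

```sql
USE USER_DB

-- Create a database role and add members to it. The Windows NT group
-- is assumed to have already been granted access to the database.
EXEC sp_addrole 'registrars'
EXEC sp_addrolemember 'registrars', 'ENDUSER1'
EXEC sp_addrolemember 'registrars', 'MYDOMAIN\RegistrarGroup'

-- Grant object-level permissions once, to the role,
-- rather than separately to each user.
GRANT SELECT, INSERT, UPDATE ON STUDENT TO registrars
```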

Database Users and the guest Account

In Microsoft SQL Server, a user login account must be authorized to use a database and its objects. One of the following methods can be used by a login account to access a database:

  • The login account can be specified as a database user.

  • The login account can use a guest account in the database.

  • A Windows NT group login can be mapped to a database role. Individual Windows NT accounts that are members of that group can then connect to the database.

Members of the db_owner or db_accessadmin roles, or the sysadmin fixed server role, create the database user accounts. An account can include several parameters: the SQL Server login ID, the database username (optional), and up to one role name (optional). The database username does not have to be the same as the user's login ID. If a database username is not provided, the user's login ID and database username are identical. If a role name is not provided, the database user is only a member of the public role. After the database user is created, the user can be assigned to as many roles as necessary.

Members of the db_owner or db_accessadmin roles can also create a guest account. The guest account allows any valid SQL Server login account to access a database even without a database user account. By default, the guest account inherits any privileges that have been assigned to the public role; however, these privileges can be changed to be greater or less than that of the public role. Informix does not have the equivalent of a guest account.
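A minimal sketch of enabling and tailoring the guest account (the GRANT shown is illustrative):

```sql
USE USER_DB

-- Create the guest user; any valid SQL Server login
-- can now access USER_DB without its own user account.
EXEC sp_grantdbaccess 'guest'

-- Optionally give guest more than the public role allows.
GRANT SELECT ON STUDENT TO guest

-- To remove the account later:
-- EXEC sp_revokedbaccess 'guest'
```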

With both SQL Server and Informix, a Windows NT user account or group account can be granted access to a database, just as a SQL Server login can. When a Windows NT user who is a member of a group connects to the database, the user receives the permissions assigned to the Windows NT group. If the user is a member of more than one Windows NT group that has been granted access to the database, the user receives the combined rights of all of those groups.

The sysadmin Role

Members of the Microsoft SQL Server sysadmin fixed server role have similar permissions to that of an Informix DBA. In SQL Server 2000, the sa SQL Server Authentication Mode login account is a member of this role by default, as are members of the local Administrators group if SQL Server is installed on a Windows NT–based computer. A member of the sysadmin role can add or remove Windows NT users and groups, as well as SQL Server logins. Members of this role typically have the following responsibilities:

  • Installing SQL Server.

  • Configuring servers and clients.

  • Creating databases.*

  • Establishing login rights and user permissions.*

  • Transferring data in and out of SQL Server databases.*

  • Backing up and restoring databases.*

  • Implementing and maintaining replication.

  • Scheduling unattended operations.*

  • Monitoring and tuning SQL Server performance.*

  • Diagnosing system problems.

*These items can be delegated to other security roles or users.

A member of the sysadmin fixed server role can access any database and all of the objects (including data) on a particular instance of SQL Server. As with an Informix DBA, there are several commands and system procedures that only members of the sysadmin role can issue.

For more information see SQL Server Books Online.

The db_owner Role

Both Informix and SQL Server treat the database as a self-contained administrative domain. Each database is assigned a database owner (dbo). This user is always a member of the db_owner fixed database role. Other users can also be members of the db_owner role. Any user who is a member of this role has the ability to manage the administrative tasks related to her database. These tasks include:

  • Managing database access.

  • Changing database options (read-only, single user, and so on).

  • Backing up and restoring the database contents.

  • Granting and revoking database permissions.

  • Creating and dropping database objects.

Members of the db_owner role have permissions to do anything within their database. Most rights assigned to this role are separated into several fixed database roles, or can be granted to database users. It is not necessary to have sysadmin server-wide privileges to have db_owner privileges in a database.

Installing and Configuring Microsoft SQL Server

With an understanding of the basic structural differences between Informix and SQL Server, you are ready to perform the first step in the migration process. Use SQL Server Query Analyzer to run the Transact-SQL statements shown in steps 4 through 7:

  1. Use software-based RAID or hardware-based RAID level 5 to create a logical drive large enough to hold all of your data. An estimate of the space needed can be calculated by adding the total file space used by the Informix system, temporary, and application dbspaces.

  2. Create a second logical drive for holding transaction logs by using software-based RAID or hardware-based RAID level 1. The size of this drive should be at least as large as the combined size of the Informix logical-log files and the physical log.

  3. Use SQL Server Enterprise Manager to create a database with the same name as the Informix application's database. (The sample application uses the database name USER_DB.) Specify the file locations to coincide with the disks you created in steps 1 and 2 for the data and transaction logs, respectively. If the Informix application uses multiple dbspaces, it is not necessary, or even recommended, to create multiple SQL Server databases; RAID distributes the data for you.

  4. Create the SQL Server login accounts:

    USE MASTER 
    EXEC SP_ADDLOGIN STUDENT_ADMIN, STUDENT_ADMIN 
    EXEC SP_ADDLOGIN DEPT_ADMIN, DEPT_ADMIN 
    EXEC SP_ADDLOGIN ENDUSER1, ENDUSER1 
    GO
    
  5. Add the roles to the database:

    USE USER_DB 
    EXEC SP_ADDROLE DATA_ADMIN 
    EXEC SP_ADDROLE USER_LOGON 
    GO
    
  6. Grant permissions to the roles:

    GRANT CREATE TABLE, CREATE TRIGGER, CREATE VIEW, 
    CREATE PROCEDURE TO DATA_ADMIN 
    GO
    
  7. Add the login accounts as database user accounts:

    EXEC SP_ADDUSER ENDUSER1, ENDUSER1, USER_LOGON 
    EXEC SP_ADDUSER DEPT_ADMIN, DEPT_ADMIN, DATA_ADMIN 
    EXEC SP_ADDUSER STUDENT_ADMIN, STUDENT_ADMIN, DATA_ADMIN 
    GO
    

The illustration shows the SQL Server and Informix environments after this process is completed. As shown, the two environments are equivalent.


Figure .05 SQL Server and Informix environments

Defining Database Objects

Setting a Backward Compatibility Level

When running at its default settings, Microsoft SQL Server 2000 implements SQL-92 behaviors for some Transact-SQL statements whose behaviors differed from the standard in earlier versions of SQL Server. SQL Server 2000 also enforces reserved keywords that were not keywords in earlier versions. If you are upgrading an existing system with existing applications, you can use the database compatibility level settings to retain the earlier behaviors on which those applications depend. This gives you time to upgrade applications in an orderly fashion. Most applications, however, are not affected by the changes in behavior and work at the SQL Server 2000 compatibility level.

The compatibility level is specified for each database using the sp_dbcmptlevel system stored procedure. The database compatibility level can be set to 60 (version 6.0 compatibility), 65 (version 6.5 compatibility), 70 (version 7.0 compatibility), and the default 80 (SQL Server 2000 compatibility). The effects of the compatibility level settings are generally limited to the behaviors of a small number of Transact-SQL statements that also existed in earlier versions of SQL Server. Even when the database compatibility level is set to 60 or 65, applications gain almost all of the benefits of the new performance enhancements of SQL Server 2000. Applications still benefit from features such as the improved query processor.
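For example, the level can be checked and changed as follows (USER_DB is the sample application's database and is only illustrative):

```sql
-- Report the database's current compatibility level.
EXEC sp_dbcmptlevel 'USER_DB'

-- Run the database with SQL Server 6.5 behaviors during the migration...
EXEC sp_dbcmptlevel 'USER_DB', 65

-- ...and move to full SQL Server 2000 behavior
-- once the application has been verified.
EXEC sp_dbcmptlevel 'USER_DB', 80
```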

Important: The compatibility level for the master database is 80 and cannot be changed. If you have added any user-defined objects to master, you must ensure they work correctly at the 80 compatibility level.

Defining Database Objects

Informix database objects (tables, views, and indexes) can be migrated to Microsoft SQL Server easily because each RDBMS closely follows the SQL-92 standard with regard to object definitions. Converting Informix table, index, and view definitions to SQL Server definitions requires relatively simple syntax changes. The following comparison highlights some differences in database objects between Informix and Microsoft SQL Server.

  • Number of columns: SQL Server, 1,024; Informix, 32,767.

  • Row size: SQL Server, 8,060 bytes, plus a 16-byte pointer for each text or image column; Informix, 32,767 bytes.

  • Maximum number of rows: unlimited in both (an Informix table must remain under 64 terabytes).

  • Blob storage: SQL Server stores a 16-byte pointer with the row and the blob data on separate data pages; Informix can store blobs in the table or in a blobspace.

  • Clustered table indexes: one per table in both (in Informix, a cluster index that reorders the table).

  • Nonclustered table indexes: SQL Server, 249 per table; Informix, 77 per table (assuming one extent in the table and single-part keys).

  • Maximum number of columns in a single index: 16 in both.

  • Maximum length of column values within an index: SQL Server, 900 bytes; Informix, 255 bytes.

  • Table naming convention: SQL Server, [[[server.]database.]owner.]table_name; Informix, [database[@server]:][owner.]table_name.

  • View naming convention: SQL Server, [[[server.]database.]owner.]view_name; Informix, [database[@server]:][owner.]view_name.

  • Index naming convention: SQL Server, [[owner.]table_name.]index_name; Informix, index_name (unique within the database).

It is assumed that you are starting with an Informix SQL script or program that is used to create your database objects. Simply copy this script or program and make the following modifications. Each change is discussed throughout the rest of this section. The examples have been taken from the sample application scripts Oratable.sql and Sstable.sql:

  1. Ensure database object identifiers comply with Microsoft SQL Server naming conventions. You may need to change only the names of indexes.

  2. Consider the data storage parameters your SQL Server database will require. If you are using RAID, no storage parameters are required.

  3. Modify Informix constraint definitions to work in SQL Server. If tables cross databases, use triggers to enforce foreign key relationships.

  4. Modify the CREATE INDEX statements to take advantage of clustered indexes.

  5. Use Data Transformation Services to create new CREATE TABLE statements. Review the statements, taking note of how Informix data types are mapped to SQL Server data types.

  6. Remove any SERIAL statements. Replace the use of serial data types with identity columns in CREATE TABLE or ALTER TABLE statements.

  7. Modify CREATE VIEW statements if necessary.

  8. Remove any reference to synonyms.

  9. Evaluate the use of Microsoft SQL Server temporary tables and their usefulness in your application.

  10. Evaluate the potential use of user-defined rules, data types, and defaults.

Database Object Identifiers

The following chart compares how Informix and Microsoft SQL Server handle object identifiers. In most cases, you do not need to change the names of objects when migrating to SQL Server.

  • Identifier length: Informix identifier names can be up to 128 bytes; SQL Server identifiers are 1 through 128 Unicode characters (temporary table names, up to 116 characters).

  • Allowed characters: Informix identifier names must begin with an alphabetic character or underscore and can contain alphanumeric characters, _, and $. SQL Server identifier names can begin with an alphabetic character, _, @, or #, and can contain virtually any character; if an identifier begins with a space or contains characters other than alphanumerics, _, @, #, or $, you must enclose it in [ ] (delimiters). In SQL Server, an identifier beginning with @ is a local variable, one beginning with # is a local temporary object, and one beginning with ## is a global temporary object.

  • Database names: Informix database names must be unique within the instance (unless running in ANSI mode, in which case the qualified name is owner.database); SQL Server database names must be unique within the instance.

  • Object names: Informix identifier names must be unique within a database; SQL Server identifier names must be unique within a database user account (owner).

  • Column names: in both products, column names must be unique within tables and views.

  • Index names: Informix index names must be unique within a database; SQL Server index names must be unique only within a table.

Qualifying Table Names

When accessing tables that exist in your Informix user account, the table can be selected simply by its unqualified name. Accessing tables in other Informix databases requires that the database name be prefixed to the table name with a colon (database:table). Informix synonyms can provide additional location transparency. Informix also allows the owner of the table to be used to qualify the table name.

Microsoft SQL Server uses a different convention when it references tables. Because one SQL Server login account can create a table by the same name in multiple databases, the following convention is used to access tables and views: [[database_name.]owner_name.]table_name

Accessing a table in your user account:

Informix:
SELECT * FROM STUDENT

Microsoft SQL Server:
SELECT * FROM USER_DB.STUDENT_ADMIN.STUDENT

Accessing a table in another schema:

Informix:
SELECT * FROM STUDENT_ADMIN.STUDENT

Microsoft SQL Server:
SELECT * FROM OTHER_DB.STUDENT_ADMIN.STUDENT

Here are guidelines for naming Microsoft SQL Server tables and views:

  • Using the database name and username is optional. When a table is referenced only by name (for example, STUDENT), SQL Server searches for that table in the current user's account in the current database. If it does not find one, it looks for an object of the same name owned by the reserved username of dbo in the database. Table names must be unique within a user's account within a database.

  • The same SQL Server login account can own tables with the same name in multiple databases. For example, the ENDUSER1 account owns the following database objects: USER_DB.ENDUSER1.STUDENT and OTHER_DB.ENDUSER1.STUDENT. The qualifier is the database username, not the SQL Server login name, because they do not have to be the same.

  • At the same time, other users in these databases may own objects by the same name:

    • USER_DB.DBO.STUDENT

    • USER_DB.DEPT_ADMIN.STUDENT

    • USER_DB.STUDENT_ADMIN.STUDENT

    • OTHER_DB.DBO.STUDENT

Therefore, it is recommended that you include the owner name as part of the reference to a database object. If the application uses multiple databases, it is recommended that the database name also be included. If the query spans multiple servers, include the server name.

  • Every connection to SQL Server has a current database context, set to the login's default database at connection time and changed with the USE statement. For example, assume the following scenario:

    A user, using the ENDUSER1 account, is logged in to the USER_DB database. The user requests the STUDENT table. SQL Server searches for the table ENDUSER1.STUDENT. If the table is found, SQL Server performs the requested database operation on USER_DB.ENDUSER1.STUDENT. If the table is not found in the ENDUSER1 database account, SQL Server searches for USER_DB.DBO.STUDENT in the dbo account for that database. If the table is still not found, SQL Server returns an error message indicating the table does not exist.

  • If another user, for example DEPT_ADMIN, owns the table, the table name must be prefixed with that database user's name (DEPT_ADMIN.STUDENT). The database name still defaults to the database that is currently in context.

  • If the referenced table exists in another database, the database name must be used as part of the reference. For example, to access the STUDENT table owned by ENDUSER1 in the OTHERDB database, use OTHER_DB.ENDUSER1.STUDENT.

The object's owner can be omitted by separating the database and table name by two periods. For example, if an application references STUDENT_DB..STUDENT, SQL Server searches as follows:

  1. STUDENT_DB.current_user.STUDENT

  2. STUDENT_DB.DBO.STUDENT

If the application uses only a single database at a time, omitting the database name from an object reference makes it easy to use the application with another database. All object references implicitly access the database that is currently being used. This is useful when you want to maintain a test database and a production database on the same server.
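The resolution rules above can be sketched with the sample objects:

```sql
USE USER_DB

-- Unqualified: searches ENDUSER1.STUDENT first, then dbo.STUDENT,
-- in the current database.
SELECT * FROM STUDENT

-- Owner omitted (two periods): current user, then dbo,
-- in the named database.
SELECT * FROM OTHER_DB..STUDENT

-- Fully qualified: unambiguous regardless of the current
-- database context.
SELECT * FROM USER_DB.STUDENT_ADMIN.STUDENT
```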

Creating Tables

Because Informix and SQL Server support SQL-92 entry-level conventions for identifying RDBMS objects, the CREATE TABLE syntax is similar.

Informix

Microsoft SQL Server

CREATE TABLE
table_name
(
{col_name column_properties
[default_expression] [constraint [constraint
[...constraint]]]| [[,] constraint]}
[[,] {next_col_name | next_constraint}...]
)
[Informix Specific Data Storage Parameters]

CREATE TABLE [Server.][database.][owner.] table_name
(
{col_name column_properties[constraint
[constraint [...constraint]]]| [[,] constraint]}
[[,] {next_col_name | next_constraint}...]
)
[ON file group_name]

Informix database object names are optionally case-sensitive (set DELIMIDENT environment variable and enclose the object name in double quotes). In Microsoft SQL Server, database object names can be case-sensitive, depending on the installation options selected.

When SQL Server is first set up, the default sort order is dictionary order, case-insensitive. (This can be configured differently using SQL Server Setup.) Because Informix object names are always unique, you should not have any problems migrating the database objects to SQL Server. It is recommended that all table and column names in both Informix and SQL Server be lowercase to avoid problems if a user installs on a case-sensitive SQL Server.

Table and Index Storage Parameters

With Microsoft SQL Server, using RAID usually simplifies the placement of database objects. A SQL Server clustered index is integrated into the structure of the table, much as an Informix cluster index physically orders its table.

Informix

Microsoft SQL Server

CREATE TABLE DEPT_ADMIN.DEPT (
DEPT VARCHAR(4) NOT NULL PRIMARY KEY CONSTRAINT DEPT_DEPT_PK,
DNAME VARCHAR(30) NOT NULL UNIQUE CONSTRAINT DEPT_DNAME_UNIQUE
)
IN USER_DATA
EXTENT SIZE 10 NEXT SIZE 10

CREATE TABLE USER_DB.DEPT_ADMIN.DEPT (
DEPT VARCHAR(4) NOT NULL,
DNAME VARCHAR(30) NOT NULL,
CONSTRAINT DEPT_DEPT_PK
PRIMARY KEY CLUSTERED (DEPT),
CONSTRAINT DEPT_DNAME_UNIQUE
UNIQUE NONCLUSTERED (DNAME)
)

Creating Tables With SELECT Statements

Using Informix, only a temporary table can be created with a SELECT statement. Microsoft SQL Server can also create a new permanent table with SELECT INTO.

Informix

Microsoft SQL Server

SELECT * FROM STUDENT INTO TEMP STUDENTBACKUP [WITH NO LOG]

SELECT * INTO STUDENTBACKUP
FROM STUDENT

SELECT…INTO does not work unless the database to which this is applied has the database configuration option select into/bulkcopy set to true. (The database owner can set this option using SQL Server Enterprise Manager or the Transact-SQL sp_dboption system stored procedure.) Use the sp_helpdb system stored procedure to check the status of the database. If select into/bulkcopy is not set to true, you can still use a SELECT statement to copy into a temporary table:

SELECT * INTO #student_backup FROM user_db.student_admin.student

When new tables are created using SELECT…INTO statements, referential integrity definitions are not transferred to the new table.

The need to have the select into/bulkcopy option set to true may complicate the migration process. If you must copy data into tables by using a SELECT statement, create the table first, and then use the INSERT INTO…SELECT statement to load the table. The syntax is the same for Informix and SQL Server, and does not require that any database option be set.
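Both approaches can be sketched as follows; STUDENTBACKUP and the three columns shown are illustrative, not the sample application's full table definition:

```sql
-- Option 1: enable SELECT INTO for the database (db_owner only),
-- then copy in a single statement.
EXEC sp_dboption 'USER_DB', 'select into/bulkcopy', 'true'
SELECT * INTO STUDENTBACKUP FROM USER_DB.STUDENT_ADMIN.STUDENT

-- Option 2: no database option required.
-- Create the table first, then load it.
CREATE TABLE STUDENTBACKUP (
    SSN   CHAR(9)     NOT NULL,
    FNAME VARCHAR(20) NOT NULL,
    LNAME VARCHAR(30) NOT NULL
)
INSERT INTO STUDENTBACKUP
SELECT SSN, FNAME, LNAME FROM USER_DB.STUDENT_ADMIN.STUDENT
```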

Views

The syntax used to create views in Microsoft SQL Server is similar to that of Informix.

Informix

Microsoft SQL Server

CREATE VIEW view_name
[(column_name [, column_name]...)]
AS select_statement
[WITH CHECK OPTION]

CREATE VIEW [owner.]view_name
[(column_name [, column_name]...)]
[WITH ENCRYPTION]
AS select_statement [WITH CHECK OPTION]

Informix and SQL Server views require that the underlying tables exist and that the view owner has privileges to access the table(s) specified in the SELECT statement.

By default, data modification statements on views are not checked to determine whether the rows affected are within the scope of the view. To check all modifications, use the WITH CHECK OPTION.

Both SQL Server and Informix views support derived columns, using arithmetic expressions, functions, and constant expressions. Some of the specific SQL Server differences are:

  • Data modification statements (INSERT or UPDATE) are allowed on multitable views if the data modification statement affects only one base table. Data modification statements cannot be used on more than one table in a single statement.

  • READTEXT or WRITETEXT cannot be used on text or image columns in views.

  • ORDER BY, COMPUTE, FOR BROWSE, or COMPUTE BY clauses cannot be used.

  • The INTO keyword cannot be used in a view.

When a view is defined with an outer join and is queried with a qualification on a column from the inner table of the outer join, the results from SQL Server and Informix can differ. In most cases, Informix views are easily translated into SQL Server views.

Informix

Informix and Microsoft SQL Server

CREATE VIEW STUDENT_ADMIN.STUDENT_GPA
(SSN, GPA)
AS SELECT SSN, ROUND(AVG(DECODE(grade
,'A', 4
,'A+', 4.3
,'A-', 3.7
,'B', 3
,'B+', 3.3
,'B-', 2.7
,'C', 2
,'C+', 2.3
,'C-', 1.7
,'D', 1
,'D+', 1.3
,'D-', 0.7
,0)),2)
FROM STUDENT_ADMIN.GRADE
GROUP BY SSN

CREATE VIEW STUDENT_ADMIN.STUDENT_GPA
(SSN, GPA)
AS SELECT SSN, ROUND(AVG(CASE grade
WHEN 'A' THEN 4
WHEN 'A+' THEN 4.3
WHEN 'A-' THEN 3.7
WHEN 'B' THEN 3
WHEN 'B+' THEN 3.3
WHEN 'B-' THEN 2.7
WHEN 'C' THEN 2
WHEN 'C+' THEN 2.3
WHEN 'C-' THEN 1.7
WHEN 'D' THEN 1
WHEN 'D+' THEN 1.3
WHEN 'D-' THEN 0.7
ELSE 0
END),2)
FROM STUDENT_ADMIN.GRADE
GROUP BY SSN

Indexes

Microsoft SQL Server offers clustered and nonclustered index structures. These indexes are made up of pages that form a branching structure known as a B-tree (similar to the Informix B-tree index structure). The starting page (root level) specifies ranges of values within the table. Each range on the root-level page points to another page (decision node), which contains a more limited range of values for the table. In turn, these decision nodes can point to other decision nodes, further narrowing the search range. The final level in the branching structure is called the leaf level.


Figure .06 B-Tree Architecture

Clustered Indexes

In Informix, the closest equivalent of a clustered index is a table reordered by a cluster index (CREATE CLUSTER INDEX). A clustered index is an index that has been physically merged with a table: the table and index share the same storage area, and the clustered index physically arranges the rows of data in indexed order, forming the intermediate decision nodes. The leaf pages of the index contain the actual table data. This architecture permits only one clustered index per table. Microsoft SQL Server automatically creates a clustered index for a table whenever a PRIMARY KEY constraint is defined (unless NONCLUSTERED is specified); a UNIQUE constraint creates a nonclustered index by default. Clustered indexes are useful for:

  • Primary keys.

  • Columns that are not updated.

  • Queries that return a range of values, using operators such as BETWEEN, >, >=, <, and <=, for example:

    SELECT * FROM STUDENT WHERE GRAD_DATE 
    BETWEEN '1/1/97' AND '12/31/97'
    
  • Queries that return large result sets:

    SELECT * FROM STUDENT WHERE LNAME = 'SMITH'
    
  • Columns that are used in sort operations (ORDER BY, GROUP BY).

    For example, on the STUDENT table, it might be helpful to include a nonclustered index on the primary key ssn, and to create a clustered index on lname, fname (last name, first name), because this is the way students are often grouped.

  • Distributing update activity in a table to avoid hot spots. Hot spots are often caused by multiple users inserting into a table with an ascending key. This application scenario is usually addressed by row-level locking.

Dropping and re-creating a clustered index is a common technique for reorganizing a table in SQL Server. It is an easy way to ensure that data pages are contiguous on disk and to reestablish some free space in the table. This is similar to exporting, dropping, and importing a table in Informix, as well as dropping and re-creating a cluster index.
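The reorganization can also be done in place. A sketch, assuming a clustered index named STUDENT_LNAME_IDX already exists on the hypothetical lname, fname key suggested earlier:

```sql
-- Rebuild the clustered index without dropping it first.
-- DROP_EXISTING avoids rebuilding the table's nonclustered
-- indexes twice; FILLFACTOR reestablishes free space.
CREATE CLUSTERED INDEX STUDENT_LNAME_IDX
ON STUDENT_ADMIN.STUDENT (LNAME, FNAME)
WITH DROP_EXISTING, FILLFACTOR = 80
```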

A SQL Server clustered index is like an Informix cluster index: the table data is reordered to match the index order.

As a general rule, defining a clustered index on a table improves SQL Server performance and space management. If you do not know the query or update patterns for a given table, you can create the clustered index on the primary key.

The table shows an excerpt from the sample application source code. Note the use of the SQL Server clustered index.

Informix

Microsoft SQL Server

CREATE TABLE GRADE (
SSN CHAR(9) NOT NULL,
CCODE VARCHAR(4) NOT NULL,
GRADE VARCHAR(2) NOT NULL,
PRIMARY KEY (SSN, CCODE)
CONSTRAINT GRADE_SSN_CCODE_PK, FOREIGN KEY (SSN) REFERENCES
STUDENT (SSN)
CONSTRAINT GRADE_SSN_FK,
FOREIGN KEY (CCODE) REFERENCES
CLASS (CCODE)
CONSTRAINT GRADE_CCODE_FK
)

CREATE TABLE STUDENT_ADMIN.GRADE (
SSN CHAR(9) NOT NULL,
CCODE VARCHAR(4) NOT NULL,
GRADE VARCHAR(2) NULL,
CONSTRAINT GRADE_SSN_CCODE_PK
PRIMARY KEY CLUSTERED (SSN, CCODE),
CONSTRAINT GRADE_SSN_FK
FOREIGN KEY (SSN) REFERENCES
STUDENT_ADMIN.STUDENT (SSN),
CONSTRAINT GRADE_CCODE_FK
FOREIGN KEY (CCODE) REFERENCES
DEPT_ADMIN.CLASS (CCODE)
)

Nonclustered Indexes

In nonclustered indexes, the index data and the table data are physically separate, and the rows in the table are not stored in the order of the index. You can move Informix index definitions to Microsoft SQL Server nonclustered index definitions (as shown in the following example). For performance reasons, however, you might want to choose one of the indexes of a given table and create it as a clustered index.

Informix

Microsoft SQL Server

CREATE INDEX
STUDENT_MAJOR_IDX
ON STUDENT (MAJOR)
IN USER_DATA

CREATE NONCLUSTERED INDEX
STUDENT_MAJOR_IDX
ON USER_DB.STUDENT_ADMIN.STUDENT (MAJOR)

Index Syntax and Naming

In Informix, an index name is unique within a database. In Microsoft SQL Server, an index name must be unique within a table name, but it does not have to be unique within a user account or database. Therefore, when creating or dropping an index in SQL Server, you must specify both the table name and the index name. Additionally, the SQL Server DROP INDEX statement can drop multiple indexes at one time.

Informix

Microsoft SQL Server

CREATE [UNIQUE | DISTINCT] [CLUSTER] INDEX index_name
ON table_name (column_name [, column_name]...)
[ASC | DESC]
[FILLFACTOR n]
[IN tablespace_name]
DROP INDEX ABC;

CREATE [ UNIQUE ] [ CLUSTERED | NONCLUSTERED ] INDEX index_name
ON { table | view } ( column [ ASC | DESC ] [ ,...n ] )
[ WITH < index_option > [ ,...n] ]
[ ON filegroup ]
< index_option > :: =
{ PAD_INDEX |
FILLFACTOR = fillfactor |
IGNORE_DUP_KEY |
DROP_EXISTING |
STATISTICS_NORECOMPUTE |
SORT_IN_TEMPDB
}
DROP INDEX USER_DB.STUDENT.DEMO_IDX,
USER_DB.GRADE.DEMO_IDX

Index Data Storage Parameters

The FILLFACTOR option in Microsoft SQL Server functions in much the same way as the FILLFACTOR variable does in Informix. As tables grow in size, index pages split and reorganize to accommodate new data values. The fill factor percentage is used only when the index is created, and it is not maintained afterward.

The FILLFACTOR option (values 0 through 100 in SQL Server, 1 through 100 in Informix) controls how much space is left on an index page when the index is initially created. If no fill factor is specified, SQL Server uses the default of 0, which completely fills index leaf pages and leaves space on each decision node page for at least one entry (two for nonunique clustered indexes). The default fill factor for Informix is 90.

Figure .07 Index Data Storage Parameters

A lower fill factor value initially reduces the splitting of index pages and increases the number of levels in the B-tree index structure. A higher fill factor value uses index page space more efficiently, requires fewer disk I/Os to access index data, and reduces the number of levels in the B-tree index structure.

The PAD_INDEX option specifies that the fill factor setting be applied to the decision node pages as well as to the data pages in the index.

It is seldom necessary to include the FILLFACTOR option in a CREATE INDEX statement. The fill factor is provided for fine-tuning performance. It is useful only when creating a new index on a table with existing data, and then it is useful only when you can accurately predict future changes in that data.

A fill factor of 100 is used when there will be no inserts or updates occurring in the table (a read-only table). When fill factor is set to 100, SQL Server creates indexes with each page 100 percent full.
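For example, an index on a frequently updated table might be created with a lower fill factor to leave room on each leaf page for future inserts. This is a minimal sketch; the table and index names are illustrative, not from the sample schema:

```sql
-- Leave 30 percent free space on each leaf page at creation time.
-- STUDENT and STUDENT_LNAME_IDX are illustrative names.
CREATE NONCLUSTERED INDEX STUDENT_LNAME_IDX
ON STUDENT_ADMIN.STUDENT (LNAME)
WITH FILLFACTOR = 70
```

Because the fill factor is not maintained after creation, rebuilding the index (for example, with DBCC DBREINDEX) reapplies it.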

Ignoring Duplicate Keys

With both Informix and Microsoft SQL Server, users cannot insert duplicate values for a uniquely indexed column or columns. An attempt to do so generates an error message. Nevertheless, SQL Server lets the developer choose how the INSERT or UPDATE statement will respond to the error.

If IGNORE_DUP_KEY was specified in the CREATE INDEX statement, and an INSERT or UPDATE statement that creates a duplicate key is executed, SQL Server issues a warning message and ignores (does not insert) the duplicate row. If IGNORE_DUP_KEY was not specified for the index, SQL Server issues an error message and rolls back the entire INSERT statement. For more information about these options, see SQL Server Books Online.
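As a sketch of the first behavior, a unique index created with IGNORE_DUP_KEY causes a duplicate row in a multirow INSERT to be skipped with a warning while the remaining rows are inserted (index and table names here are illustrative):

```sql
-- With IGNORE_DUP_KEY, inserting a duplicate SSN produces a warning and
-- skips the duplicate row instead of rolling back the entire INSERT.
CREATE UNIQUE NONCLUSTERED INDEX STUDENT_SSN_IDX
ON STUDENT_ADMIN.STUDENT (SSN)
WITH IGNORE_DUP_KEY
```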

Using Temporary Tables

An Informix application might have to create tables that exist for short periods. The application does not have to ensure that all tables created for this purpose are dropped. The database will drop them when the application disconnects from the database.

Microsoft SQL Server provides temporary table database objects, which are created for just such a purpose. These tables are always created in the tempdb database. The table name determines how long they reside within the tempdb database.

Table name

Description

#table_name

This local temporary table only exists for the duration of a user session or the procedure that created it. It is automatically dropped when the user logs off or the procedure that created the table completes. These tables cannot be shared between multiple users. No other database users can access this table. Permissions cannot be granted or revoked on this table.

##table_name

This global temporary table also typically exists for the duration of a user session or procedure that created it. This table can be shared among multiple users. It is automatically dropped when the last user session referencing it disconnects. All other database users can access this table. Permissions cannot be granted or revoked on this table.

Indexes can be defined for temporary tables. Views can be defined only on tables explicitly created in tempdb without the # or ## prefix. The following example shows the creation of a temporary table and its associated index. When the user exits, the table and index are automatically dropped.

SELECT SUM(ISNULL(TUITION_PAID,0)) SUM_PAID, MAJOR INTO #SUM_STUDENT 
FROM USER_DB.STUDENT_ADMIN.STUDENT GROUP BY MAJOR 
CREATE UNIQUE INDEX SUM_STUDENT_IDX ON #SUM_STUDENT (MAJOR)

You may find that the benefits associated with using temporary tables justify a revision in your program code.

Table Variables

Microsoft SQL Server 2000 provides a special data type, table, that can be used to store a result set for later processing. Its primary use is for temporary storage of a set of rows that are to be returned as the result set of a table-valued function.

Syntax

Note: Use DECLARE @local_variable to declare variables of type table.

table_type_definition ::= 
    TABLE ( { column_definition | table_constraint } [ ,...n ] ) 
column_definition ::= 
    column_name scalar_data_type 
    [ COLLATE collation_definition ] 
    [ [ DEFAULT constant_expression ] | IDENTITY 
      [ ( seed , increment ) ] ] 
    [ ROWGUIDCOL ] 
    [ column_constraint ] [ ...n ] 
column_constraint ::= 
    { [ NULL | NOT NULL ] 
    | [ PRIMARY KEY | UNIQUE ] 
    | CHECK ( logical_expression ) 
    } 
table_constraint ::= 
    { { PRIMARY KEY | UNIQUE } ( column_name [ ,...n ] ) 
    | CHECK ( search_condition ) 
    }

Functions and variables can be declared to be of type table. table variables can be used in functions, stored procedures, and batches.

Use table variables instead of temporary tables, whenever possible. table variables provide the following benefits:

  • A table variable behaves like a local variable. It has a well-defined scope, which is the function, stored procedure, or batch in which it is declared.

  • Within its scope, a table variable may be used like a regular table. It may be applied anywhere a table or table expression is used in SELECT, INSERT, UPDATE, and DELETE statements. However, table may not be used in the following statements:

    INSERT INTO table_variable EXEC stored_procedure 
    SELECT select_list INTO table_variable statements.
    
  • table variables are cleaned up automatically at the end of the function, stored procedure, or batch in which they are defined.

  • table variables used in stored procedures result in fewer recompilations of the stored procedures than when temporary tables are used.

  • Transactions involving table variables last only for the duration of an update on the table variable. Thus, table variables require less locking and logging resources.
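As a minimal sketch, the temporary-table example shown earlier can be rewritten with a table variable (the column sizes mirror the sample STUDENT schema; the variable name is illustrative):

```sql
-- Declare a table variable in place of a #temp table.
DECLARE @SumStudent TABLE
    (MAJOR    varchar(4)    NOT NULL,
     SUM_PAID numeric(12,2) NULL)

-- SELECT ... INTO is not allowed for table variables, so use INSERT ... SELECT.
INSERT @SumStudent (MAJOR, SUM_PAID)
SELECT MAJOR, SUM(ISNULL(TUITION_PAID, 0))
FROM STUDENT_ADMIN.STUDENT
GROUP BY MAJOR

SELECT * FROM @SumStudent   -- scoped to this batch; no explicit cleanup needed
```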

Data Types

While some data type conversions from Informix to SQL Server are straightforward, other data type conversions will require evaluating a few options. It is recommended that you use the DTS Wizard to automate the creation of the new CREATE TABLE statements. These statements will provide you with the recommended conversion of the data types. You can then modify these statements as necessary.

Informix

Microsoft SQL Server

CHAR

char is recommended. char type columns are accessed somewhat faster than varchar columns because they use a fixed storage length.

VARCHAR
and TEXT,
NTEXT and
IMAGE

varchar or text. (If the length of the data values in your Informix column is 8000 bytes or less, use varchar; otherwise, you must use text.)
NTEXT, TEXT, and IMAGE can hold up to 2 GB in a single value.
NTEXT is the Unicode version of TEXT

NCHAR
NVARCHAR

Unicode character data versions of CHAR and VARCHAR

BLOB

varbinary or image. (If the length of the data values in your Informix column is 8000 bytes or less, use varbinary; otherwise, you must use image.)

INTEGER
SMALLINT
TINYINT
BIGINT
FLOAT
SMALLFLOAT
MONEY
SMALLMONEY
DECIMAL
NUMERIC

If the integer is between 0 and 255, use tinyint.
If the integer is between -32,768 and 32,767, use smallint.
If the integer is between -2,147,483,648 and 2,147,483,647, use int.
If the integer is outside the range of int, use bigint.
If you require a float type number, use numeric (has precision and scale).
Note: Do not use float or real, because rounding may occur (Informix INTEGER and SQL Server numeric do not round).
Money values from -922,337,203,685,477.5808 through +922,337,203,685,477.5807
smallmoney values from -214,748.3648 through +214,748.3647

BINARY,
VARBINARY

Binary data types are either fixed-length (binary) or variable-length (varbinary).
binary holds 1 through 8,000 bytes; the storage size is n + 4 bytes.
varbinary holds variable-length binary data; the storage size is the actual length of the data + 4 bytes.

DATE
DATETIME
SMALLDATETIME

datetime.
smalldatetime displays date ranges between January 1, 1900 and June 6, 2079

BIT

An integer data type that can hold a value of 1, 0, or NULL.

ROWID
SERIAL

Use the identity column type.

USER

USER

Using Unicode Data

The Unicode specification defines a single encoding scheme for practically all characters widely used in businesses around the world. All computers consistently translate the bit patterns in Unicode data into characters using the single Unicode specification. This ensures that the same bit pattern is always converted to the same character on all computers. Data can be freely transferred from one database or computer to another without concern that the receiving system will correctly translate the bit patterns into characters.

One problem with data types that use 1 byte to encode each character is that the data type can represent only 256 different characters. This forces multiple encoding specifications (or code pages) for different alphabets. It is also impossible to handle systems such as the Japanese Kanji or Korean Hangul alphabets that have thousands of characters.

Microsoft SQL Server translates the bit patterns in char, varchar, and text columns to characters using the definitions in the code page installed with SQL Server. Client computers use the code page installed with the operating system to interpret character bit patterns. There are many different code pages. Some characters appear on some code pages, but not on others. Some characters are defined with one bit pattern on some code pages, and with a different bit pattern on other code pages. When you build international systems that must handle different languages, it becomes difficult to pick code pages for all the computers that meet the language requirements of multiple countries. It is also difficult to ensure that every computer performs the correct translations when interfacing with a system that uses a different code page.

The Unicode specification addresses this problem by using 2 bytes to encode each character. There are enough different patterns (65,536) in 2 bytes for a single specification covering the most common business languages. Because all Unicode systems consistently use the same bit patterns to represent all characters, there is no problem with characters being converted incorrectly when moving from one system to another.

In SQL Server, nchar, nvarchar, and ntext data types support Unicode data. For more information about SQL Server data types, see SQL Server Books Online.

User-defined Data Types

User-defined data types can be created for the model database or for a single user database. If the user-defined data type is defined for model, that data type is available to all new user databases created from that point forward. The user-defined data type is defined with the sp_addtype system stored procedure. For more information, see SQL Server Books Online.

You can use a user-defined data type in the CREATE TABLE and ALTER TABLE statements, and bind defaults and rules to it. If nullability is explicitly defined when the user-defined data type is used during table creation, it takes precedence over the nullability defined when the data type was created.

This example shows how to create a user-defined data type. The arguments are the user-type name, data type, and nullability:

sp_addtype gender_type, 'varchar(1)', 'not null' 
go

This capability might initially appear to solve the problem of migrating Informix table creation scripts to SQL Server. For example, it is quite easy to add the Informix DATE data type:

sp_addtype date, datetime

This does not work with data types that require variable sizes, such as the Informix data type DECIMAL. An error message is returned indicating that a length must also be specified:

sp_addtype varchar, varchar 
Go 
Msg 15091, Level 16, State 1 
You must specify a length with this physical type.

Microsoft timestamp Columns

The timestamp columns enable BROWSE-mode updates and make cursor update operations more efficient. The timestamp is a data type that is automatically updated every time a row containing a timestamp column is inserted or updated.

Values in timestamp columns are not stored as an actual date or time, but are stored as binary(8) or varbinary(8), which indicates the sequence of events on rows in the table. A table can have only one timestamp column.
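A minimal sketch of a table with a timestamp column follows; the table and column names are illustrative:

```sql
-- SQL Server assigns a new database-wide binary sequence value to CURR_TS
-- automatically on every INSERT or UPDATE of a row. Only one timestamp
-- column is allowed per table.
CREATE TABLE STUDENT_ADMIN.AUDIT_DEMO
    (SSN     CHAR(9) NOT NULL PRIMARY KEY,
     CURR_TS timestamp)
```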

For more information, see SQL Server Books Online.

Object-level Permissions

Microsoft SQL Server object privileges can be granted to, denied from, and revoked from other database users, database groups, and the public role. SQL Server does not allow an object owner to grant ALTER TABLE and CREATE INDEX privileges for the object. Informix will allow these privileges if RESOURCE has also been granted. Those privileges must remain with the object owner in SQL Server.

The GRANT statement creates an entry in the security system that allows a user in the current database to work with data in the current database or to execute specific Transact-SQL statements. The syntax of the GRANT statement is identical in Informix and SQL Server.

The Transact-SQL DENY statement creates an entry in the security system that denies a permission from a security account in the current database and prevents the security account from inheriting the permission through its group or role memberships. Informix does not have a DENY statement.

The Transact-SQL REVOKE statement removes a previously granted or denied permission from a user in the current database.

Informix

Microsoft SQL Server

GRANT {ALL [PRIVILEGES][column_list] | permission_list [column_list]}
ON {table_name [(column_list)]
| view_name [(column_list)]
| synonym_name [(column_list)]
| stored_procedure_name}
TO {PUBLIC | name_list | role_name }
[WITH GRANT OPTION]
[AS grantor]
REVOKE {ALL [PRIVILEGES]}
ON {table | view | synonym }
FROM { user | user list | role }
[CASCADE | RESTRICT]

GRANT
{ALL [PRIVILEGES] | permission[,…n]}
{
[(column[,…n])] ON {table | view}
| ON {table | view}[(column[,…n])]
| ON {stored_procedure | extended_procedure}
| ON {user_defined_function}
}
TO security_account[,…n]
[WITH GRANT OPTION]
[AS {group | role}]
REVOKE [GRANT OPTION FOR]
{ALL [PRIVILEGES] | permission[,…n]}
{
[(column[,…n])] ON {table | view}
| ON {table | view}[(column[,…n])]
| ON {stored_procedure | extended_procedure}
| ON {user_defined_function}
}
{TO | FROM}
security_account[,…n]
[CASCADE]
[AS {group | role}]
DENY
{ALL [PRIVILEGES] | permission[,…n]}
{
[(column[,…n])] ON {table | view}
| ON {table | view}[(column[,…n])]
| ON {stored_procedure | extended_procedure}
| ON {user_defined_function}
}
TO security_account[,…n]
[CASCADE]

For more information on object-level permissions, see SQL Server Books Online.

The REFERENCES, INSERT, UPDATE, DELETE, and SELECT privileges are granted in the same way in both Informix and SQL Server.

Enforcing Data Integrity and Business Rules

Enforcing data integrity ensures the quality of the data in the database. Two important steps when planning tables are identifying valid values for a column and deciding how to enforce the integrity of the data in the column. Data integrity falls into four categories, and is enforced in various ways.

Integrity Type

How Enforced

Entity integrity

PRIMARY KEY constraint
UNIQUE constraint
IDENTITY property
indexes

Domain integrity

Domain DEFAULT definition
Range of possible values (FOREIGN KEY constraint, CHECK constraints, DEFAULT definitions, NOT NULL definitions, and rules)
The format (CHECK constraint and rules)
Nullability
Restricting type (datatypes)

Referential integrity

FOREIGN KEY constraint
CHECK constraint
Triggers to implement CASCADE and DELETE functionality

User-defined integrity

All column- and table-level constraints in CREATE TABLE
Stored procedures
Triggers

Entity Integrity

Entity integrity defines a row as a unique entity for a particular table. Entity integrity enforces the integrity of the identifier column(s) or the primary key of a table through indexes, UNIQUE constraints, PRIMARY KEY constraints, or IDENTITY properties.

Naming Constraints

You should always name your constraints explicitly. If you do not, Informix and Microsoft SQL Server will use different naming conventions to name the constraint implicitly. These differences in naming can complicate your migration process unnecessarily. The discrepancy appears when dropping or disabling constraints, because constraints must be dropped by name. The syntax for explicitly naming constraints is the same for Informix and SQL Server:

CONSTRAINT constraint_name

Primary Keys and Unique Columns

The SQL-92 standard requires that all values in a primary key be unique and that the column not allow null values. Both Informix and Microsoft SQL Server enforce uniqueness by automatically creating unique indexes whenever a PRIMARY KEY or UNIQUE constraint is defined. Additionally, primary key columns are automatically defined as NOT NULL. Only one primary key is allowed per table.

A SQL Server clustered index is created by default for a primary key, though a nonclustered index can be requested. The Informix index on primary keys can be removed by either dropping or disabling the constraint, whereas the SQL Server index can be removed only by dropping the constraint.

In either RDBMS, alternate keys can be defined with a UNIQUE constraint. Multiple UNIQUE constraints can be defined on any table. UNIQUE constraint columns are nullable. In SQL Server, a non-clustered index is created by default, unless otherwise specified.

When migrating your application, it is important to note that SQL Server allows only one row to contain the value NULL for the complete unique key (single or multiple column index), and Informix does not allow any row to contain the value NULL in any column that makes up the unique key.

Informix

Microsoft SQL Server

CREATE TABLE DEPT
(DEPT VARCHAR(4) NOT NULL,
DNAME VARCHAR(30) NOT NULL,
PRIMARY KEY (DEPT)
CONSTRAINT DEPT_DEPT_PK,
UNIQUE (DNAME)
CONSTRAINT DEPT_DNAME_UNIQUE
) EXTENT SIZE 32 NEXT SIZE 32

CREATE TABLE USER_DB.DEPT_ADMIN.DEPT
(DEPT VARCHAR(4) NOT NULL,
DNAME VARCHAR(30) NOT NULL,
CONSTRAINT DEPT_DEPT_PK
PRIMARY KEY CLUSTERED (DEPT),
CONSTRAINT DEPT_DNAME_UNIQUE
UNIQUE NONCLUSTERED (DNAME)
)

Adding and Removing Constraints

Disabling constraints can improve database performance and streamline the data replication process. For example, when you rebuild or replicate table data at a remote site, you do not have to repeat constraint checks, because the integrity of the data was checked when it was originally entered into the table. You can program an Informix application to disable and enable constraints. It is recommended that you use the NOT FOR REPLICATION clause to suspend column-level, foreign key, and CHECK constraints during replication.

In cases where you are not replicating data, and need to remove a constraint, you can accomplish this in Microsoft SQL Server using the CHECK and WITH NOCHECK options with the ALTER TABLE statement.

This illustration shows a comparison of this process.

Figure .08 Constraint Process in Informix vs. SQL 2000

With SQL Server, you can defer all of the table constraints by using the ALL keyword with the NOCHECK clause. Informix requires you to name all of the constraints you wish to enable or disable.
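As a minimal sketch, a bulk load might suspend and then re-enable all of a table's CHECK and FOREIGN KEY constraints (GRADE is an illustrative table name):

```sql
-- Disable every CHECK and FOREIGN KEY constraint on the table.
ALTER TABLE STUDENT_ADMIN.GRADE NOCHECK CONSTRAINT ALL

-- ... bulk load or replicate pre-validated data here ...

-- Re-enable all constraints when the load completes.
ALTER TABLE STUDENT_ADMIN.GRADE CHECK CONSTRAINT ALL
```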

Generating Sequential Numeric Values

If your Informix application uses SERIAL, it can be altered easily to take advantage of the Microsoft SQL Server IDENTITY property.

Category

Microsoft SQL Server IDENTITY

Syntax

CREATE TABLE new_employees
( Empid int IDENTITY (1,1), Employee_Name varchar(60),
CONSTRAINT Emp_PK PRIMARY KEY (Empid)
)
If increment interval is 5:
CREATE TABLE new_employees
( Empid int IDENTITY (1,5), Employee_Name varchar(60),
CONSTRAINT Emp_PK PRIMARY KEY (Empid)
)

Identity columns per table

One

Null values allowed

No

Use of default constraints, values

Cannot be used.

Enforcing uniqueness

Yes

Querying for maximum current identity number after an INSERT, SELECT INTO, or bulk copy statement completes

@@IDENTITY (function)

Returns the seed value specified during the creation of an identity column

IDENT_SEED('table_name')

Returns the increment value specified during the creation of an identity column

IDENT_INCR('table_name')

SELECT syntax

The keyword IDENTITYCOL can be used in place of a column name when you reference a column that has the IDENTITY property, in SELECT, INSERT, UPDATE, and DELETE statements.

Although the IDENTITY property automates row numbering within one table, separate tables, each with its own identifier column, can generate the same values. This is because the IDENTITY property is guaranteed to be unique only for the table on which it is used. If an application must generate an identifier column that is unique across the entire database, or every database on every networked computer in the world, use the ROWGUIDCOL property, the uniqueidentifier data type, and the NEWID function. SQL Server uses globally unique identifier columns for merge replication to ensure that rows are uniquely identified across multiple copies of the table.
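A minimal sketch of a globally unique identifier column follows; the table and column names are illustrative:

```sql
-- NEWID() supplies a uniqueidentifier value that is unique across tables,
-- databases, and servers; ROWGUIDCOL marks the column as the row GUID.
CREATE TABLE STUDENT_ADMIN.TRANSCRIPT_REQUEST
    (REQUEST_ID uniqueidentifier ROWGUIDCOL NOT NULL
                DEFAULT NEWID() PRIMARY KEY,
     SSN        CHAR(9) NOT NULL)

-- The default generates the identifier; only SSN needs to be supplied.
INSERT STUDENT_ADMIN.TRANSCRIPT_REQUEST (SSN) VALUES ('111111111')
```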

For more information about creating and modifying identifier columns, see SQL Server Books Online.

Domain Integrity

Domain integrity enforces valid entries for a given column. Domain integrity is enforced by restricting the type (through data types), the format (through CHECK constraints), or the range of possible values (through REFERENCE and CHECK constraints).

DEFAULT and CHECK Constraints

Informix treats a default as a column property, and Microsoft SQL Server treats a default as a constraint. The SQL Server DEFAULT constraint can contain constant values, built-in functions that do not take arguments (niladic functions), or NULL.

To easily migrate the Informix DEFAULT column property, you should define DEFAULT constraints at the column level in SQL Server without applying constraint names. SQL Server generates a unique name for each DEFAULT constraint.

The syntax used to define CHECK constraints is the same in Informix and SQL Server. The search condition must evaluate to a Boolean expression and cannot contain subqueries. A column-level CHECK constraint can reference only the constrained column, and a table-level check constraint can reference only columns of the constrained table. Multiple CHECK constraints can be defined for a table. Both Informix and SQL Server syntax allow only one column-level CHECK constraint to be created on a column in a CREATE TABLE statement, and the constraint can have multiple conditions.

The best way to test your modified CREATE TABLE statements is to use SQL Query Analyzer to parse the syntax only. The results pane indicates any errors. For more information about constraint syntax, see SQL Server Books Online.

Informix

Microsoft SQL Server

CREATE TABLE STUDENT (
SSN CHAR(9) NOT NULL,
FNAME VARCHAR(12),
LNAME VARCHAR(20) NOT NULL,
GENDER CHAR(1) NOT NULL
CONSTRAINT STUDENT_GENDER_CK
CHECK (GENDER IN ('M','F')),
MAJOR VARCHAR(4)
DEFAULT 'Undc' NOT NULL,
BIRTH_DATE DATE,
TUITION_PAID DECIMAL(12,2),
TUITION_TOTAL DECIMAL(12,2),
START_DATE DATE,
GRAD_DATE DATE,
LOAN_AMOUNT DECIMAL(12,2),
DEGREE_PROGRAM CHAR(1)
DEFAULT 'U' NOT NULL
CONSTRAINT STUDENT_DEGREE_CK CHECK
(DEGREE_PROGRAM IN ('U', 'M', 'P', 'D')),
...

CREATE TABLE USER_DB.STUDENT_ADMIN.STUDENT (
SSN CHAR(9) NOT NULL,
FNAME VARCHAR(12) NULL,
LNAME VARCHAR(20) NOT NULL,
GENDER CHAR(1) NOT NULL
CONSTRAINT STUDENT_GENDER_CK
CHECK (GENDER IN ('M','F')),
MAJOR VARCHAR(4)
DEFAULT 'Undc' NOT NULL,
BIRTH_DATE DATETIME NULL,
TUITION_PAID NUMERIC(12,2) NULL,
TUITION_TOTAL NUMERIC(12,2) NULL,
START_DATE DATETIME NULL,
GRAD_DATE DATETIME NULL,
LOAN_AMOUNT NUMERIC(12,2) NULL,
DEGREE_PROGRAM CHAR(1)
DEFAULT 'U' NOT NULL
CONSTRAINT STUDENT_DEGREE_CK
CHECK
(DEGREE_PROGRAM IN ('U', 'M', 'P', 'D')),
...

A note about user-defined rules and defaults: The syntax for Microsoft SQL Server rules and defaults remains for backward compatibility purposes, but CHECK constraints and DEFAULT constraints are recommended for new application development. For more information, see SQL Server Books Online.

Nullability

Microsoft SQL Server and Informix create column constraints to enforce nullability. An Informix column defaults to NULL, unless NOT NULL is specified in the CREATE TABLE or ALTER TABLE statements. In Microsoft SQL Server, database and session settings can override the nullability of the data type used in a column definition.

All of your SQL Server scripts should explicitly define both NULL and NOT NULL for each column. Informix assumes NULL unless NOT NULL is defined; the NULL keyword is not supported in a column definition. To see how this strategy is implemented, see Ifxtable.sql and Sstable.sql, the sample table creation scripts. When not explicitly specified, column nullability follows these rules.

Null Setting

Description

Column is defined with a user-defined data type

Microsoft SQL Server uses the nullability specified when the data type was created. Use the sp_help system stored procedure to get the data type's default nullability.

Column is defined with a system-supplied data type

If the system-supplied data type has only one option, it takes precedence. Currently, the bit data type can be defined only as NOT NULL.
If any session settings are ON (turned on with the SET), then:
If ANSI_NULL_DFLT_ON is ON, NULL is assigned.
If ANSI_NULL_DFLT_OFF is ON, NOT NULL is assigned.
If any database settings are configured (changed with the sp_dboption system stored procedure), then:
If ANSI null default is true, NULL is assigned.
If ANSI null default is false, NOT NULL is assigned.

NULL/NOT NULL
Not Defined

When nullability is not explicitly defined, neither ANSI_NULL_DFLT option is set for the session, and the database uses the default setting (ANSI null default is false), SQL Server assigns NOT NULL.

Referential Integrity

The table provides a comparison of the syntax used to define referential integrity constraints.

Constraint

Informix

Microsoft SQL Server

PRIMARY KEY

[CONSTRAINT constraint_name]
PRIMARY KEY (col_name [, col_name2 [..., col_name16]])

[CONSTRAINT constraint_name]
PRIMARY KEY [CLUSTERED | NONCLUSTERED] (col_name [, col_name2 [..., col_name16]])
[ON segment_name]
[NOT FOR REPLICATION]

UNIQUE

[CONSTRAINT constraint_name]
UNIQUE (col_name [, col_name2 [..., col_name16]])

[CONSTRAINT constraint_name]
UNIQUE [CLUSTERED | NONCLUSTERED](col_name [, col_name2 [..., col_name16]])
[ON segment_name]
[NOT FOR REPLICATION]

FOREIGN KEY

[CONSTRAINT constraint_name]
[FOREIGN KEY (col_name [, col_name2 [..., col_name16]])]
REFERENCES [owner.]ref_table [(ref_col [, ref_col2 [..., ref_col16]])]
[ON DELETE CASCADE]

[CONSTRAINT constraint_name]
[FOREIGN KEY (col_name [, col_name2 [..., col_name16]])]
REFERENCES [owner.]ref_table [(ref_col [, ref_col2 [..., ref_col16]])]
[NOT FOR REPLICATION]

DEFAULT

Column property, not a constraint
DEFAULT (constant_expression)

[CONSTRAINT constraint_name]
DEFAULT {constant_expression | niladic-function | NULL}
[FOR col_name]
[NOT FOR REPLICATION]

CHECK

[CONSTRAINT constraint_name]
CHECK (expression)

[CONSTRAINT constraint_name]
CHECK [NOT FOR REPLICATION] (expression)

The NOT FOR REPLICATION clause is used to suspend column-level, FOREIGN KEY, and CHECK constraints during replication.

Foreign Keys

The rules for defining foreign keys are similar in each RDBMS. The number of columns and data type of each column specified in the foreign key clause must match the REFERENCES clause. A nonnull value entered in this column(s) must exist in the table and column(s) defined in the REFERENCES clause, and the referenced table's columns must have a PRIMARY KEY or UNIQUE constraint.

Microsoft SQL Server constraints provide the ability to reference tables within the same database. To implement referential integrity across databases, use table-based triggers.

Both Informix and SQL Server support self-referenced tables, tables in which a reference (foreign key) can be placed against one or more columns on the same table. For example, the column prereq in the CLASS table can reference the column ccode in the CLASS table to ensure that a valid course code is entered as a course prerequisite.

Whereas cascading deletes and updates are implemented in Informix with the CASCADE DELETE clause, SQL Server provides the same functionality with table triggers. For more information, see "SQL Language Support" later in this paper.
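A minimal sketch of a trigger providing the cascade behavior that Informix gets from ON DELETE CASCADE follows; it assumes a GRADE table whose SSN column references STUDENT, and the trigger name is illustrative:

```sql
-- When rows are deleted from STUDENT, remove the dependent GRADE rows.
-- The deleted logical table holds the rows removed by the triggering DELETE.
CREATE TRIGGER STUDENT_DELETE_CASCADE
ON STUDENT_ADMIN.STUDENT
FOR DELETE
AS
DELETE G
FROM STUDENT_ADMIN.GRADE G
JOIN deleted D ON G.SSN = D.SSN
GO
```

Note that this approach requires the trigger, rather than a declarative FOREIGN KEY constraint, to enforce the relationship, because a constraint would block the delete before the trigger fires.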

User-Defined Integrity

User-defined integrity allows you to define specific business rules that do not fall into one of the other integrity categories.

Stored Procedures

Microsoft SQL Server stored procedures use the CREATE PROCEDURE statement to accept and return user-supplied parameters. With the exception of temporary stored procedures, stored procedures are created in the current database. The table shows the syntax for Informix and SQL Server.

Informix

Microsoft SQL Server

CREATE PROCEDURE procedure
[(argument datatype [, argument datatype]...)]
block
END PROCEDURE
OR
CREATE FUNCTION function
[(argument datatype [, argument datatype]...)]
RETURNING datatype;
block
END FUNCTION

CREATE PROC[EDURE] procedure_name [;number]
[
{@parameter data_type} [VARYING] [= default] [OUTPUT]
]
[,…n]
[WITH
{ RECOMPILE | ENCRYPTION | RECOMPILE, ENCRYPTION} ]
[FOR REPLICATION]
AS
sql_statement […n]

In SQL Server, temporary procedures are created in the tempdb database by prefacing procedure_name with a single number sign (#procedure_name) for local temporary procedures and a double number sign (##procedure_name) for global temporary procedures.

A local temporary procedure can be used only by the user who created it. Permission to execute a local temporary procedure cannot be granted to other users. Local temporary procedures are automatically dropped at the end of the user session.

A global temporary procedure is available to all SQL Server users. If a global temporary procedure is created, all users can access it, and permissions cannot be explicitly revoked. Global temporary procedures are dropped at the end of the last user session using the procedure.

SQL Server stored procedures can be nested up to 32 levels. The nesting level is incremented when the called procedure starts execution, and it is decremented when the called procedure finishes execution.
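The current nesting depth can be inspected with the @@NESTLEVEL function. This is a minimal sketch; the procedure names are illustrative:

```sql
-- @@NESTLEVEL is 1 in a directly executed procedure and increments by one
-- for each nested call, up to the limit of 32.
CREATE PROCEDURE inner_proc AS
    SELECT @@NESTLEVEL AS nest_level   -- 2 when called from outer_proc
GO
CREATE PROCEDURE outer_proc AS
    SELECT @@NESTLEVEL AS nest_level   -- 1
    EXEC inner_proc
GO
EXEC outer_proc
```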

The following example demonstrates how a Transact-SQL stored procedure can be used to return result sets directly from SELECT statements in a stored procedure. Informix cannot return result sets from a stored procedure.

Microsoft SQL Server

CREATE PROCEDURE
STUDENT_ADMIN.SHOW_RELUCTANT_STUDENTS
AS SELECT FNAME+' '+LNAME+', social security
number '+SSN+' is not enrolled in any classes!'
FROM STUDENT_ADMIN.STUDENT S
WHERE NOT EXISTS
(SELECT 'X' FROM STUDENT_ADMIN.GRADE G
WHERE G.SSN=S.SSN)
ORDER BY SSN
RETURN @@ROWCOUNT
GO

Delaying the Execution of a Stored Procedure

Microsoft SQL Server provides WAITFOR, which allows developers to specify a time, time interval, or event that triggers the execution of a statement block, stored procedure, or transaction. There is no Informix equivalent to the WAITFOR statement.

WAITFOR {DELAY ' time ' | TIME ' time '}

where

DELAY:

Instructs Microsoft SQL Server to wait until the specified amount of time has passed, up to a maximum of 24 hours.

' time ':

The amount of time to wait. time can be specified in one of the acceptable formats for datetime data, or it can be specified as a local variable. Dates cannot be specified; therefore, the date portion of the datetime value is not allowed.

TIME:

Instructs SQL Server to wait until the specified time.

For example:

BEGIN 
    WAITFOR TIME '22:20' 
    EXECUTE update_all_stats 
END
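The DELAY form can be sketched the same way; this batch (reusing the procedure name from the example above) pauses for 30 seconds before executing:

```sql
BEGIN
    WAITFOR DELAY '00:00:30'   -- wait 30 seconds (maximum is 24 hours)
    EXECUTE update_all_stats
END
```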

Specifying Parameters in a Stored Procedure

To specify a parameter within a stored procedure, use this syntax.

Informix

Microsoft SQL Server

Varname datatype
DEFAULT <value>;

{@parameter data_type} [VARYING] [= default] [OUTPUT]
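As an illustration of the SQL Server syntax, the following sketch uses the GRADE table from this paper; the procedure name, default value, and OUTPUT parameter are illustrative assumptions:

```sql
CREATE PROCEDURE STUDENT_ADMIN.GET_GRADE_COUNT
    @ssn   char(9),
    @grade varchar(2) = 'A',   -- default used when the caller omits @grade
    @total int OUTPUT          -- value returned to the caller
AS
SELECT @total = COUNT(*)
FROM STUDENT_ADMIN.GRADE
WHERE SSN = @ssn AND GRADE = @grade
GO

-- The caller retrieves the OUTPUT value like this:
DECLARE @cnt int
EXECUTE STUDENT_ADMIN.GET_GRADE_COUNT '111111111', @total = @cnt OUTPUT
```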

Triggers

Both Informix and Microsoft SQL Server have triggers, which have some differences in their implementations.

Description

Informix

Microsoft SQL Server

Number of triggers per table

Unlimited

Unlimited

Triggers executed before INSERT, UPDATE, DELETE

Yes

No

Triggers executed after INSERT, UPDATE, DELETE

Yes

Yes

Triggers executed instead of INSERT, UPDATE, DELETE

No

Yes

Allow nested triggers

No

Yes (server configurable option)

Statement Level Triggers

Yes

Yes

Row Level Triggers

Yes

No

Constraints checked prior to execution

When logging is enabled, constraints are deferred until the trigger completes.
When logging is not enabled, constraints are not deferred.

Yes. In addition, this is an option in Data Transformation Services

Referring to old or previous values in an UPDATE or DELETE trigger

User defined

DELETED.column

Referring to new values in an INSERT trigger

User defined

INSERTED.column

Disabling Triggers

SET TRIGGER trigger DISABLED

ALTER TABLE…DISABLE TRIGGER trigger

DELETED and INSERTED are logical (conceptual) tables created by SQL Server for trigger statements. They are structurally similar to the table on which the trigger is defined and hold the old values or new values of the rows that might be changed by the user action. The tables track row-level changes in Transact-SQL. These tables provide the same functionality as Informix row-level triggers. When an INSERT, UPDATE, or DELETE statement is executed in SQL Server, rows are added to the trigger table and to the INSERTED and DELETED table(s) simultaneously.

The INSERTED and DELETED tables are identical to the trigger table. They have the same column names and the same data types. For example, if a trigger is placed on the GRADE table, the INSERTED and DELETED tables have this structure.

GRADE

INSERTED

DELETED

SSN CHAR(9)
CCODE VARCHAR(4)
GRADE VARCHAR(2)

SSN CHAR(9)
CCODE VARCHAR(4)
GRADE VARCHAR(2)

SSN CHAR(9)
CCODE VARCHAR(4)
GRADE VARCHAR(2)

The INSERTED and DELETED tables can be examined by the trigger to determine what types of trigger actions should be carried out. The INSERTED table is used with the INSERT and UPDATE statements. The DELETED table is used with DELETE and UPDATE statements.

The UPDATE statement uses both the INSERTED and DELETED tables because SQL Server always deletes the old row and inserts a new row whenever an UPDATE operation is performed. Consequently, when an UPDATE is performed, the rows in the INSERTED table are always duplicates of the rows in the DELETED table.

The following example uses the INSERTED and DELETED tables to replace a SPL row-level trigger. A full outer join is used to query all rows from either table. Note that Informix would require five triggers (one for each of the three update columns, one for insert, and one for delete) because it does not support multiple trigger types in one CREATE TRIGGER statement.

Informix

Microsoft SQL Server

CREATE TRIGGER TRACK_GRADES
[UPDATE OF SSN ON GRADE |
UPDATE OF CCODE ON GRADE |
UPDATE OF GRADE ON GRADE |
INSERT ON GRADE |
DELETE ON GRADE]
REFERENCING OLD AS OLD
NEW AS NEW
FOR EACH ROW
(INSERT INTO GRADE_HISTORY(
TABLE_USER, ACTION_DATE,
OLD_SSN, OLD_CCODE, OLD_GRADE,
NEW_SSN, NEW_CCODE, NEW_GRADE)
VALUES (USER, CURRENT,
OLD.SSN, OLD.CCODE, OLD.GRADE,
NEW.SSN, NEW.CCODE, NEW.GRADE)
);

CREATE TRIGGER STUDENT_ADMIN.TRACK_GRADES
ON STUDENT_ADMIN.GRADE
FOR INSERT, UPDATE, DELETE
AS
INSERT INTO GRADE_HISTORY(
TABLE_USER, ACTION_DATE,
OLD_SSN, OLD_CCODE, OLD_GRADE,
NEW_SSN, NEW_CCODE, NEW_GRADE)
SELECT USER, GETDATE(),
OLD.SSN, OLD.CCODE, OLD.GRADE,
NEW.SSN, NEW.CCODE, NEW.GRADE
FROM INSERTED NEW FULL OUTER JOIN
DELETED OLD ON NEW.SSN = OLD.SSN

You can create a trigger only in the current database, though you can reference objects outside the current database. If you use an owner name to qualify a trigger, qualify the table name the same way.

Triggers can be nested 32 levels deep in SQL Server and 61 levels in Informix. If a trigger changes a table on which there is another trigger, the second trigger is activated and can then call a third trigger, and so on. If any trigger in the chain sets off an infinite loop, the nesting level is exceeded and the trigger is canceled. Additionally, if an update trigger on one column of a table results in an update to another column, the update trigger is activated only once.

Microsoft SQL Server declarative referential integrity (DRI) does not provide cross-database referential integrity. If cross-database referential integrity is required, use triggers. The following statements are not allowed in a Transact-SQL trigger:

CREATE statements (DATABASE, TABLE, INDEX, PROCEDURE, DEFAULT, RULE, TRIGGER, SCHEMA, and VIEW)

DROP statements (TRIGGER, INDEX, TABLE, PROCEDURE, DATABASE, VIEW, DEFAULT, RULE)

ALTER statements (DATABASE, TABLE, VIEW, PROCEDURE, TRIGGER)

TRUNCATE TABLE

GRANT, REVOKE, DENY

UPDATE STATISTICS

RECONFIGURE

RESTORE DATABASE, RESTORE LOG

LOAD DATABASE, LOAD LOG

DISK statements

SELECT INTO (because it creates a table)

For more information about triggers, see SQL Server Books Online.

Transactions, Locking, and Concurrency

This section explains how transactions are executed in both Informix and Microsoft SQL Server and presents the differences between the locking processes and concurrency issues in both database types.

Transactions

In Informix, a database can either support transactions or not. If the database does not support transactions (no logging), every insert, update, or delete is performed immediately and cannot be rolled back. In a database that supports transactions, an explicit BEGIN WORK must be performed. After all statements are processed, an explicit COMMIT WORK or ROLLBACK WORK must be issued. If the connection to the database is lost after a BEGIN WORK is issued and a COMMIT WORK has not been issued, a ROLLBACK WORK is issued by the database.

By default, Microsoft SQL Server automatically performs a COMMIT statement after every insert, update, or delete operation. Because the data is automatically saved, you are unable to roll back any changes.

You can start transactions in Microsoft® SQL Server™ as explicit, autocommit, or implicit transactions. Autocommit is the default behavior; you can use implicit or explicit transaction modes to change this default behavior.

  • Autocommit transactions

    This is the default mode for SQL Server. Each individual Transact-SQL statement is committed when it completes. You do not have to specify any statements to control transactions.

  • Implicit transactions

    As in an Informix ANSI-mode database, an implicit transaction is started whenever an INSERT, UPDATE, DELETE, or other data manipulation statement is performed. To allow implicit transactions, use the SET IMPLICIT_TRANSACTIONS ON statement.

    If this option is ON and there are no outstanding transactions, every SQL statement automatically starts a transaction. If there is an open transaction, no new transaction is started. Unlike Informix, SQL Server requires the open transaction to be committed by the user explicitly with the COMMIT TRANSACTION statement for the changes to take effect and for all locks to be released.

  • Explicit transactions

    An explicit transaction is a grouping of SQL statements surrounded by the following transaction delimiters. Note that BEGIN TRANSACTION and COMMIT TRANSACTION are required:

BEGIN TRANSACTION [ transaction_name ]

COMMIT TRANSACTION [ transaction_name ]

ROLLBACK TRANSACTION [transaction_name | savepoint_name]

SAVE TRAN[SACTION] {savepoint_name | @savepoint_variable}

The SQL Server SAVE TRANSACTION statement sets a savepoint in the transaction that allows partial rollbacks. Informix has no equivalent statement.
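A minimal sketch of the savepoint mechanism, using tables from this paper (the savepoint name is illustrative):

```sql
BEGIN TRANSACTION
UPDATE STUDENT_ADMIN.STUDENT
SET TUITION_TOTAL = 1500
WHERE SSN = '111111111'
SAVE TRANSACTION before_delete      -- set a savepoint
DELETE FROM STUDENT_ADMIN.GRADE
WHERE SSN = '111111111'
ROLLBACK TRANSACTION before_delete  -- undoes only the DELETE
COMMIT TRANSACTION                  -- the UPDATE is committed
```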

In the following example, the English department is changed to the Literature department. Note the use of the BEGIN TRANSACTION and COMMIT TRANSACTION statements.

Informix

Microsoft SQL Server

BEGIN WORK
INSERT INTO DEPT_ADMIN.DEPT (DEPT, DNAME)
VALUES ('LIT', 'Literature')
;
UPDATE DEPT_ADMIN.CLASS
SET MAJOR = 'LIT'
WHERE MAJOR = 'ENG'
;
UPDATE STUDENT_ADMIN.STUDENT
SET MAJOR = 'LIT'
WHERE MAJOR = 'ENG'
;
DELETE FROM DEPT_ADMIN.DEPT
WHERE DEPT = 'ENG'
;
COMMIT WORK
;

BEGIN TRANSACTION
INSERT INTO DEPT_ADMIN.DEPT (DEPT, DNAME)
VALUES ('LIT', 'Literature')
UPDATE DEPT_ADMIN.CLASS
SET DEPT = 'LIT'
WHERE DEPT = 'ENG'
UPDATE STUDENT_ADMIN.STUDENT
SET MAJOR = 'LIT'
WHERE MAJOR = 'ENG'
DELETE FROM DEPT_ADMIN.DEPT
WHERE DEPT = 'ENG'
COMMIT TRANSACTION
GO

Informix will give an error if a new BEGIN WORK is issued without issuing a COMMIT WORK or ROLLBACK WORK for the current transaction. In SQL Server, transactions can be nested one within another. If this occurs, the outermost pair creates and commits the transaction, and the inner pairs track the nesting level. When a nested transaction is encountered, the @@TRANCOUNT function is incremented. Usually, this apparent transaction nesting occurs as stored procedures or triggers with BEGIN…COMMIT pairs calling each other. Although transactions can be nested, they have little effect on the behavior of ROLLBACK TRANSACTION statements.
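The nesting behavior can be observed with the @@TRANCOUNT function:

```sql
BEGIN TRANSACTION        -- @@TRANCOUNT becomes 1
BEGIN TRANSACTION        -- nested: @@TRANCOUNT becomes 2
COMMIT TRANSACTION       -- inner commit only decrements: @@TRANCOUNT is 1
COMMIT TRANSACTION       -- outermost commit makes the work durable
```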

In stored procedures and triggers, the number of BEGIN TRANSACTION statements must match the number of COMMIT TRANSACTION statements. A stored procedure or trigger that contains unpaired BEGIN TRANSACTION and COMMIT TRANSACTION statements produces an error message when executed. The syntax allows stored procedures and triggers to be called from within transactions if they contain BEGIN TRANSACTION and COMMIT TRANSACTION statements.

Wherever possible, break large transactions into smaller transactions. Make sure each transaction is well defined within a single batch. To minimize possible concurrency conflicts, transactions should not span multiple batches nor wait for user input. Grouping many Transact-SQL statements into one long-running transaction can negatively affect recovery time and cause concurrency problems.

When programming with ODBC, you can select the transaction mode by using the SQLSetConnectOption function with the SQL_AUTOCOMMIT connect option. If AUTOCOMMIT is ON (the default), each statement is committed as it completes (autocommit mode). If AUTOCOMMIT is OFF, you are in implicit (manual-commit) mode.

If you are issuing a script through SQL Server Query Analyzer or other query tools, you can either include the explicit BEGIN TRANSACTION statement shown previously, or start the script with the SET IMPLICIT_TRANSACTIONS ON statement. The BEGIN TRANSACTION approach is more flexible, and the implicit approach is more compatible with Informix.
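A minimal sketch of the implicit approach, using a table from this paper:

```sql
SET IMPLICIT_TRANSACTIONS ON
-- The UPDATE below implicitly opens a transaction.
UPDATE STUDENT_ADMIN.STUDENT
SET MAJOR = 'LIT'
WHERE MAJOR = 'ENG'
-- Nothing is permanent, and locks are held, until an explicit commit.
COMMIT TRANSACTION
SET IMPLICIT_TRANSACTIONS OFF
```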

Locking and Transaction Isolation

One of the key functions of a database management system (DBMS) is to ensure that multiple users can read and write records in the database without reading inconsistent sets of records due to in-progress changes and without overwriting each other's changes inadvertently. Informix and Microsoft® SQL Server™ approach this task with different locking and isolation strategies. You must consider these differences when you convert an application from Informix to SQL Server or the resulting application may scale poorly to high numbers of users.

Informix uses an isolation level model for all SQL statements that read data, either explicitly or implicitly. On a database that supports logging there are four isolation levels available: DIRTY READ, COMMITTED READ, CURSOR STABILITY, and REPEATABLE READ. The DIRTY READ isolation level is the default and the only isolation level available to a database set to no logging. Based on the isolation level, a shared or exclusive lock may be placed on the row(s).

Under the DIRTY READ isolation level, data readers neither acquire locks nor wait for other locks to be released before reading rows of data. When a reader requests data that has been changed but not yet committed by other writers, the uncommitted change is returned.

The COMMITTED READ isolation level guarantees that every row read has been committed at the time it was read. It places no locks on the rows being read and is the default isolation level for databases that support logging.

The CURSOR STABILITY isolation level places a shared lock on each row that is read and releases that lock when either the next row is read or the cursor is closed. Other processes can also place a shared lock on the same row, but no process can acquire an exclusive lock on the row. Using this isolation level outside of a transaction (BEGIN WORK) mimics the COMMITTED READ isolation level.

The REPEATABLE READ isolation level can be used only inside a transaction. It places a shared lock on every row that is read and holds that lock until the transaction is either committed or rolled back. As with CURSOR STABILITY, other processes may also place a shared lock on any of the rows, but no process can place an exclusive lock on any of them. This can result in a large number of locks being held and cause concurrency problems with large numbers of users.

Data writers in Informix request exclusive locks on data that is updated, deleted, or inserted. Exclusive locks are also placed on rows when the row is retrieved using a FOR UPDATE cursor. These locks are held until the end of a transaction, and they prevent other users from overwriting uncommitted changes.

SQL Server also uses shared locks to ensure that data readers see only committed data. These readers take and release shared locks as they read data. These shared locks do not affect other readers. A reader waits for a writer to commit its changes before reading a record, the same as the Informix COMMITTED READ isolation level. A reader holding shared locks also blocks a writer trying to update the same data.

Releasing locks quickly for applications that support high numbers of users is as important in SQL Server as it is in Informix. Releasing locks quickly is usually a matter of keeping transactions short. If possible, a transaction should neither span multiple round-trips to the server nor wait for the user to respond. You also need to code your application to fetch data as quickly as possible, because unfetched data scans can hold shared locks at the server and thus block updaters.

Dynamic Locking

SQL Server uses a dynamic locking strategy to determine the most cost-effective locks. SQL Server automatically determines what locks are most appropriate when the query is executed, based on the characteristics of the schema and query. For example, to reduce the overhead of locking, the optimizer may choose page-level locks in an index when performing an index scan. Dynamic locking has the following advantages:

  • Simplified database administration, because database administrators no longer have to be concerned with adjusting lock escalation thresholds.

  • Increased performance, because SQL Server minimizes system overhead by using locks appropriate to the task.

  • Application developers can concentrate on development, because SQL Server automatically adjusts locking.

Informix's inability to escalate row-level locks can cause problems in queries that include the FOR UPDATE clause and in UPDATEs that request many rows. For example, assume that the STUDENT table has 100,000 rows, and an Informix user issues the following statement – note that 100,000 rows are affected:

UPDATE STUDENT set (col) = (value);

The Informix RDBMS locks every row in the STUDENT table, one row at a time; this can take quite a while, and can require many system resources. Informix does not escalate the request to lock the entire table.

The same statement in SQL Server will cause the (default) row-level locks to escalate to a table-level lock, which is both efficient and fast.

Changing Default Locking Behavior

Both Microsoft SQL Server and Informix use the same default transaction isolation level, READ COMMITTED. Both databases also allow the developer to request other locking and isolation behavior. In Informix, the most common mechanisms for this are the FOR UPDATE clause on a SELECT statement, the explicit LOCK TABLE statement, and the LOCK DATABASE statement.

Because their locking and isolation strategies are so different, it is difficult to map these locking options directly between Informix and SQL Server. To obtain a better understanding of this process, it is important to understand the options that SQL Server provides for changing its default locking behavior.

In SQL Server, the most common mechanisms for changing default locking behavior are the SET TRANSACTION ISOLATION LEVEL statement and the locking hints that are supported in the SELECT and UPDATE statements. The SET TRANSACTION ISOLATION LEVEL statement sets transaction isolation levels for the duration of a user's session. This becomes the default behavior for the session unless a locking hint is specified at the table level in the FROM clause of an SQL statement. The transaction isolation is set like this:

SET TRANSACTION ISOLATION LEVEL 
    { 
        READ COMMITTED 
        | READ UNCOMMITTED 
        | REPEATABLE READ 
        | SERIALIZABLE 
    }

READ COMMITTED

This option is the SQL Server default. Shared locks are held while the data is being read to avoid dirty reads, but the data can be changed before the end of the transaction, resulting in nonrepeatable reads or phantom data.

READ UNCOMMITTED

Implements dirty read, or isolation level 0 locking, which means that no shared locks are issued and no exclusive locks are honored. When this option is set, it is possible to read uncommitted or dirty data; values in the data can be changed and rows can appear or disappear in the data set before the end of the transaction. This option has the same effect as setting NOLOCK on all tables in all SELECT statements in a transaction. This is the least restrictive of the four isolation levels.

REPEATABLE READ

Locks are placed on all data that is used in a query, preventing other users from updating the data, but new phantom rows can be inserted into the data set by another user and are included in later reads in the current transaction. Because concurrency is lower than the default isolation level, use this option only when necessary.

SERIALIZABLE

A range lock is placed on the data set preventing other users from updating or inserting rows into the data set until the transaction is complete. This is the most restrictive of the four isolation levels. Because concurrency is lower, use this option only when necessary. This option has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction.

SQL Server implements all four ANSI standard transaction isolation levels; Informix implements only "READ COMMITTED" (its default) and "SERIALIZABLE" (as REPEATABLE READ).

If a transaction in an application requires repeatable read behavior, you may need to use the SERIALIZABLE isolation level offered by SQL Server. If all of the database access is read only, you can improve performance by setting the SQL Server database option to READ ONLY.

SELECT…FOR UPDATE

The SELECT…FOR UPDATE statement in Informix is used when an application needs to issue a positioned update or delete on a cursor using the WHERE CURRENT OF syntax. In this case, optionally remove the FOR UPDATE clause; SQL Server cursors are updateable by default.

SQL Server cursors usually do not hold locks on the fetched row. Rather, they use an optimistic concurrency strategy to prevent updates from overwriting each other. If one user attempts to update or delete a row that has been changed since it was read into the cursor, SQL Server detects the problem and issues an error message. The application can trap this error message and retry the update or delete as appropriate.

The optimistic technique supports higher concurrency in the normal case where conflicts between updaters are rare. If your application really needs to ensure that a row cannot be changed after it is fetched, you can use the UPDLOCK hint in your SELECT statement to achieve this effect.

This hint does not block other readers, but it prevents any other potential writers from also obtaining an update lock on the data. When using ODBC, you can achieve a similar effect by calling SQLSetStmtOption(…, SQL_CONCURRENCY, SQL_CONCUR_LOCK). Either of these options reduces concurrency.
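A minimal sketch of the UPDLOCK hint, using a table from this paper:

```sql
BEGIN TRANSACTION
-- UPDLOCK keeps other writers away from this row until the
-- transaction ends, while still allowing other readers.
SELECT TUITION_TOTAL
FROM STUDENT_ADMIN.STUDENT (UPDLOCK)
WHERE SSN = '111111111'
UPDATE STUDENT_ADMIN.STUDENT
SET TUITION_TOTAL = TUITION_TOTAL + 100
WHERE SSN = '111111111'
COMMIT TRANSACTION
```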

Explicitly Requesting Table-level Locks

Microsoft SQL Server can provide the same table-locking functionality as Informix.

Functionality

Informix

Microsoft SQL Server

Lock an entire table - allows others to read the table, but prevents them from updating it. By default, the lock is held until the end of the statement.

LOCK TABLE…IN SHARE MODE

SELECT…table_name (TABLOCK)

Lock the table until the end of the transaction

 

SELECT…table_name (TABLOCK REPEATABLEREAD)

Exclusive lock - prevents others from reading or updating the table; held until the end of the command or transaction

LOCK TABLE…IN EXCLUSIVE MODE

SELECT…table_name (TABLOCKX)

Specify how long a statement waits for a lock to be released (seconds in Informix, milliseconds in SQL Server).

SET LOCK MODE TO
[WAIT seconds | NOT WAIT]

SET LOCK_TIMEOUT
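A minimal sketch of the SQL Server timeout setting (the 5-second value is illustrative):

```sql
SET LOCK_TIMEOUT 5000        -- wait at most 5000 ms for any lock
SELECT * FROM STUDENT_ADMIN.STUDENT (TABLOCKX)
-- If the lock cannot be acquired in time, error 1222
-- ("Lock request time out period exceeded") is returned.
```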

Handling Deadlocks

A deadlock occurs when one process locks a resource needed by another process, and the second process locks a resource that the first process needs. A deadlock is also known as a deadly embrace. SQL Server automatically detects and resolves deadlocks: if a deadlock is found, the server chooses one of the deadlocked user processes as the victim, terminates it, and rolls back its transaction.

Figure 9. Deadlock Illustration

After every data modification, your program code should check for message number 1205, which indicates a deadlock. If this message number is returned, a deadlock has occurred and the transaction was rolled back. In this situation, your application must restart the transaction.
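A minimal Transact-SQL sketch of the check described above (note that in SQL Server 2000 a deadlock victim's batch is often aborted before this test runs, so production applications usually perform the retry in client code):

```sql
UPDATE STUDENT_ADMIN.STUDENT
SET TUITION_TOTAL = TUITION_TOTAL + 100
WHERE SSN = '111111111'
-- Error 1205 means this session was chosen as the deadlock
-- victim and its transaction was rolled back.
IF @@ERROR = 1205
    PRINT 'Deadlock victim: the transaction was rolled back; restart it.'
```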

Deadlocks can usually be avoided by using a few simple techniques:

  • Access tables in the same order in all parts of your application.

  • Use a clustered index on every table to force an explicit row ordering.

  • Keep transactions short.

For more information, see the Microsoft Knowledge Base article, "Detecting and Avoiding Deadlocks in Microsoft SQL Server."

Remote Transactions

To perform remote transactions in Informix, you must have access to a remote database server node. In SQL Server, you must have access to a remote server. A remote server is a server running SQL Server on the network that users can access by using their local server. When a server is set up as a remote server, users can use the system procedures and the stored procedures on it without explicitly logging in to it.

Remote servers are set up in pairs. You must configure both servers to recognize each other as remote servers. The name of each server must be added to its partner with the sp_addlinkedserver system stored procedure or SQL Server Enterprise Manager.

After you set up a remote server, use the sp_addremotelogin system stored procedure or SQL Server Enterprise Manager to set up remote login IDs for the users who must access that remote server. After this step is completed, you must grant permissions to execute the stored procedures.

The EXECUTE statement is then used to run procedures on the remote server. This example executes the validate_student stored procedure on the remote server STUDSVR1 and stores the return status indicating success or failure in @retvalue1:

DECLARE @retvalue1 int 
EXECUTE @retvalue1 = 
     STUDSVR1.student_db.student_admin.validate_student '111111111'

For more information, see SQL Server Books Online.

Distributed Transactions

Informix automatically initiates a distributed transaction if changes are made to tables in two or more networked database nodes. SQL Server distributed transactions use the two-phase commit services of the Microsoft Distributed Transaction Coordinator (MS DTC) included with SQL Server.

By default, SQL Server must be instructed to participate in a distributed transaction. SQL Server participation in an MS DTC transaction can be started by either of the following:

  • The BEGIN DISTRIBUTED TRANSACTION statement. This statement begins a new MS DTC transaction.

  • A client application calling DTC transaction interfaces directly.

In the example, notice the distributed update to both the local table GRADE and the remote table CLASS (using a class_name procedure):

BEGIN DISTRIBUTED TRANSACTION 
UPDATE STUDENT_ADMIN.GRADE 
   SET GRADE = 'B+' WHERE SSN = '111111111' AND CCODE = '1234' 
DECLARE @retvalue1 int 
EXECUTE @retvalue1 = 
     CLASS_SVR1.dept_db.dept_admin.class_name '1234', 'Basketweaving' 
COMMIT TRANSACTION 
GO

If the application cannot complete the transaction, the application program cancels it by using the ROLLBACK TRANSACTION statement. If the application fails or a participating resource manager fails, MS DTC cancels the transaction. MS DTC does not support distributed savepoints or the SAVE TRANSACTION statement. If an MS DTC transaction is aborted or rolled back, the entire transaction is rolled back to the beginning of the distributed transaction, regardless of any savepoints.

Two-phase Commit Processing

The Informix and MS DTC two-phase commit mechanisms are similar in operation. In the first phase of a SQL Server two-phase commit, the transaction manager requests each enlisted resource manager to prepare to commit. If any resource manager cannot prepare, the transaction manager broadcasts an abort decision to everyone involved in the transaction.

If all resource managers can successfully prepare, the transaction manager broadcasts the commit decision. This is the second phase of the commit process. While a resource manager is prepared, it is in doubt about whether the transaction is committed or aborted. MS DTC keeps a sequential log so that its commit or abort decisions are durable. If a resource manager or transaction manager fails, they reconcile in-doubt transactions when they reconnect.

SQL Language Support

This section outlines the similarities and differences between Transact-SQL and SPL language syntax and presents conversion strategies.

SELECT and Data Manipulation Statements

Use the following when migrating your Informix SQL statements and SPL programs to SQL Server.

  1. Verify that the syntax of all SELECT, INSERT, UPDATE, and DELETE statements is valid. Make any required modifications.

  2. Change all outer joins to SQL-92 standard outer join syntax.

  3. Replace Informix functions with the appropriate SQL Server functions.

  4. Check all comparison operators.

  5. Replace the "||" string concatenation operator with the "+" string concatenation operator.

  6. Replace SPL programs with Transact-SQL programs.

  7. Change all SPL cursors to either noncursor SELECT statements or Transact-SQL cursors.

  8. Replace SPL procedures, functions, and packages with Transact-SQL procedures.

  9. Convert SPL triggers to Transact-SQL triggers.

  10. Use the SET SHOWPLAN_TEXT or SET SHOWPLAN_ALL statement to tune your queries for performance.

SELECT Statements

The SELECT statement syntax used by Informix and Microsoft SQL Server is similar.

Informix

Microsoft SQL Server

SELECT [ALL | DISTINCT]
[optimizer directives]
select_list
[FROM
{table_name | view_name | select_statement}]
[WHERE clause]
[GROUP BY group_by_expression]
[HAVING search_condition]
[{UNION | UNION ALL } SELECT …]
[ORDER BY clause]
[FOR UPDATE { OF column } |
FOR READ ONLY]
[INTO {TEMP | SCRATCH} table_name]

SELECT select_list
[INTO new_table]
FROM table_source
[WHERE search_condition]
[ GROUP BY [ALL] group_by_expression [,…n]
[ WITH { CUBE | ROLLUP } ]
[HAVING search_condition]
[ORDER BY order_expression [ASC | DESC] ]
In addition:
UNION Operator
COMPUTE Clause
FOR BROWSE Clause
OPTION Clause

Informix-specific cost-based optimizer directives are not supported by SQL Server, and must be removed. The recommended technique is to use SQL Server cost-based optimization. For more information, see "Tuning SQL Statements" later in this chapter.

INSERT Statements

The INSERT statement syntax used by Informix and Microsoft SQL Server is similar.

Informix

Microsoft SQL Server

INSERT INTO
{table_name | view_name }
[(column_list)]
VALUES
{ values_list | select_statement}

INSERT [INTO]
{
table_name [ [AS] table_alias] WITH ( <table_hint_limited> […n])
| view_name [ [AS] table_alias]
| rowset_function_limited
}
{ [(column_list)]
{ VALUES ( { DEFAULT
| NULL
| expression
}[,…n]
)
| derived_table
| execute_statement
}
}
| DEFAULT VALUES

The Transact-SQL language does not support the Informix PUT statement. Any occurrences will have to be replaced with INSERT statements.

Informix

Microsoft SQL Server

INSERT INTO (SELECT SSN, CCODE, GRADE FROM GRADE)
VALUES ('111111111', '1111',NULL)

INSERT INTO GRADE (SSN, CCODE, GRADE)
VALUES ('111111111', '1111',NULL)

The Transact-SQL values_list parameter offers the SQL-92 standard keyword DEFAULT, which is not supported by Informix. This keyword specifies that the default value for the column be used when an insert is performed. If a default value does not exist for the specified column, a NULL is inserted. If the column does not allow NULLs, an error message is returned. If the column is defined as a timestamp data type, the next sequential value is inserted.

The DEFAULT keyword cannot be used with an identity column. To generate the next sequential number, columns with the IDENTITY property must not be listed in the column_list or values_clause. You do not have to use the DEFAULT keyword to obtain the default value for a column. As in Informix, if the column is not referenced in the column_list and it has a default value, the default value is placed in the column. This is the most compatible approach to use when performing the migration.
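A minimal sketch of the DEFAULT keyword in an INSERT; the column list is illustrative, and the example assumes a DEFAULT has been defined for TUITION_TOTAL:

```sql
-- Hypothetical: assumes STUDENT defines a DEFAULT for TUITION_TOTAL.
INSERT INTO STUDENT_ADMIN.STUDENT (SSN, FNAME, LNAME, TUITION_TOTAL)
VALUES ('222222222', 'Jane', 'Doe', DEFAULT)   -- column default is applied
```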

One useful Transact-SQL option (EXECute procedure_name) is to execute a procedure and pipe its output into a target table or view. Informix does not allow you to do this.
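A sketch of this INSERT…EXECUTE option, piping the result set of the procedure shown earlier into a table (RELUCTANT_LOG is a hypothetical target table):

```sql
-- Hypothetical single-column target table for the procedure's output.
CREATE TABLE RELUCTANT_LOG (MESSAGE varchar(255))

-- Pipe the procedure's result set into the table.
INSERT INTO RELUCTANT_LOG
EXECUTE STUDENT_ADMIN.SHOW_RELUCTANT_STUDENTS
```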

UPDATE Statements

Because Transact-SQL supports most of the syntax used by the Informix UPDATE command, minimal revision is required.

Informix

Microsoft SQL Server

UPDATE
[optimizer directives]
{table_name | view_name | synonym_name }
SET [column_name(s) = {constant_value | expression | select_statement | column_list |
variable_list ]
[ {where_statement} |
{WHERE CURRENT OF cursor_id}]

UPDATE
{
table_name [ [AS] table_alias] WITH ( <table_hint_limited> […n])
| view_name [ [AS] table_alias]
| rowset_function_limited
}
SET
{column_name = {expression | DEFAULT | NULL}
| @variable = expression
| @variable = column = expression } [,…n]
{{[FROM {<table_source>} [, …n] ]
[WHERE
<search_condition>] }
|
[WHERE CURRENT OF
{ { [GLOBAL] cursor_name } | cursor_variable_name}
] }
[OPTION (<query_hint> [,…n] )]

Informix-specific cost-based optimizer directives are not supported by SQL Server, and must be removed. The recommended technique is to use SQL Server cost-based optimization. For more information, see "Tuning SQL Statements" later in this chapter.

The Informix UPDATE command can use only program variables from within a SPL block. The Transact-SQL language does not require the use of blocks to use variables.

Informix

Microsoft SQL Server

DEFINE VAR1 DECIMAL(10,2);
BEGIN
LET VAR1 = 2500;
UPDATE STUDENT_ADMIN.STUDENT
SET TUITION_TOTAL = VAR1;
END;

DECLARE
@VAR1 NUMERIC(10,2)
SELECT @VAR1 = 2500
UPDATE STUDENT_ADMIN.STUDENT
SET TUITION_TOTAL=@VAR1

The keyword DEFAULT can be used to set a column to its default value in SQL Server. You cannot set a column to a default value with the Informix UPDATE command.

Both Transact-SQL and Informix SQL support the use of subqueries in an UPDATE statement. However, the Transact-SQL FROM clause can be used to create an UPDATE based on a join. This capability makes your UPDATE syntax more readable and in some cases can improve performance.

Informix

Microsoft SQL Server

UPDATE STUDENT_ADMIN.STUDENT S
SET TUITION_TOTAL = 1500
WHERE SSN IN (SELECT SSN
FROM GRADE G
WHERE G.SSN = S.SSN
AND G.CCODE = '1234')

Subquery:
UPDATE STUDENT_ADMIN.STUDENT
SET TUITION_TOTAL = 1500
WHERE SSN IN (SELECT SSN
FROM GRADE
WHERE CCODE = '1234')
FROM clause:
UPDATE S
SET TUITION_TOTAL = 1500
FROM STUDENT_ADMIN.STUDENT S, GRADE G
WHERE S.SSN = G.SSN
AND G.CCODE = '1234'

DELETE Statements

In most cases, you do not need to modify DELETE statements.

Transact-SQL supports the use of subqueries in the WHERE clause, as well as joins in the FROM clause. The latter can produce more efficient statements. See the example shown previously in "UPDATE Statements."

Informix-specific cost-based optimizer directives are not supported by SQL Server, and must be removed. The recommended technique is to use SQL Server cost-based optimization. For more information, see "Tuning SQL Statements" later in this chapter.

Informix

Microsoft SQL Server

DELETE [optimizer directives] [FROM]
{table_name | view_name | synonym_name}
[ {WHERE clause} |
{ WHERE CURRENT OF cursor_id} ]

DELETE
[FROM ]
{
table_name [ [AS] table_alias] WITH ( <table_hint_limited> […n])
| view_name [ [AS] table_alias]
| rowset_function_limited
}
[ FROM {<table_source>} [, …n] ]
[WHERE
{ <search_condition>
| { [ CURRENT OF
{
{ [ GLOBAL ] cursor_name }
| cursor_variable_name
}
]
}
]
[OPTION (<query_hint> [,…n])]

TRUNCATE TABLE Statement

The TRUNCATE TABLE syntax used by Informix and Microsoft SQL Server is the same. TRUNCATE TABLE is used to remove all of the rows from a table and cannot be rolled back. The table structure and all of its indexes continue to exist. DELETE triggers are not executed. If a table is referenced by a FOREIGN KEY constraint, it cannot be truncated.

Informix

Microsoft SQL Server

TRUNCATE TABLE table_name

TRUNCATE TABLE table_name

In SQL Server, only the table owner can issue this statement. In Informix, this command can be issued if you are the table owner or have DBA system privilege.

The Informix TRUNCATE TABLE command does not release the storage space occupied by the rows in the table. The SQL Server TRUNCATE TABLE statement always reclaims space occupied by the table data and its associated indexes.

Manipulating Data in Identity and timestamp Columns

The Informix SERIAL data type is similar to the SQL Server IDENTITY column. If a value of 0 (zero) is inserted into an Informix SERIAL column, the next serial value is used. The most recently inserted serial value is returned as part of the SQLCA record. Informix allows direct insert and update of the serial value as long as it does not violate the unique constraint on serial values. Informix sets the serial seed during table creation. There is no Informix equivalent for the SQL Server timestamp column. In Microsoft SQL Server, identity column values cannot be updated, and the DEFAULT keyword cannot be used.

By default, data cannot be inserted directly into an identity column. The identity column automatically generates a unique, sequential number for each new row inserted in the table. This default can be overridden using the following SET statement:

SET IDENTITY_INSERT table_name ON

With IDENTITY_INSERT set to ON, the user can insert any value into the identity column of a new row. To prevent the entry of duplicate numbers, a unique index must be created against the column. The purpose of this statement is to allow a user to re-create a value for a row that has been deleted accidentally. The @@IDENTITY function can be used to obtain the last identity value.
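For example, to restore an accidentally deleted row in a table with an identity column (STUDENT_ID is an assumed column name; the sample schema in this paper does not define one):

```sql
-- STUDENT_ID is an assumed identity column on the sample STUDENT table
SET IDENTITY_INSERT STUDENT_ADMIN.STUDENT ON
INSERT INTO STUDENT_ADMIN.STUDENT (STUDENT_ID, SSN, LNAME, FNAME)
VALUES (1045, '123448887', 'Smith', 'Ann')   -- re-create the deleted row
SET IDENTITY_INSERT STUDENT_ADMIN.STUDENT OFF
SELECT @@IDENTITY   -- last identity value generated in this session
```

Note that an explicit column list is required while IDENTITY_INSERT is ON.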

The TRUNCATE TABLE statement resets an identity column to its original SEED value. If you do not want to reset the identity value for a column, use the DELETE statement without a WHERE clause instead of the TRUNCATE TABLE statement. You will have to evaluate how this affects your Informix migration, because Informix SERIAL seed is not reset following the TRUNCATE TABLE command.
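As a sketch, assuming the GRADE table has an identity column:

```sql
DELETE FROM STUDENT_ADMIN.GRADE      -- removes all rows; the identity value is preserved
TRUNCATE TABLE STUDENT_ADMIN.GRADE   -- removes all rows; the identity is reset to its seed
```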

You can perform only inserts or deletes when working with timestamp columns. If you attempt to update a timestamp column, you receive this error message:

Msg 272, Level 16, State 1 Can't update a TIMESTAMP column.

Locking Requested Rows

Informix uses the FOR UPDATE clause to lock rows specified in the SELECT command. You do not need to use the equivalent clause in Microsoft SQL Server because this is the default behavior.

Row Aggregates and the Compute Clause

The SQL Server COMPUTE clause is used to generate row aggregate functions (SUM, AVG, MIN, MAX, and COUNT), which appear as additional rows in the query results. It allows you to see detail and summary rows in one set of results. You can calculate summary values for subgroups, and you can calculate more than one aggregate function for the same group.

The Informix SELECT command syntax does not support the COMPUTE clause.
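For example, using the sample student schema, detail rows and per-major subtotals can be returned by a single query (COMPUTE BY requires a matching ORDER BY clause):

```sql
SELECT MAJOR, SSN, TUITION_PAID
FROM STUDENT_ADMIN.STUDENT
ORDER BY MAJOR
COMPUTE SUM(TUITION_PAID) BY MAJOR   -- subtotal after each major
COMPUTE SUM(TUITION_PAID)            -- grand total at the end
```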

Join Clauses

Microsoft SQL Server 2000 allows up to 256 tables to be joined in a join clause, including both temporary and permanent tables. There is no join limit in Informix but there is a limit of 33,000 open tables for all users, which effectively limits the number of tables in a join.

When using outer joins in Informix, the keyword OUTER is typically placed next to the table whose rows need not match the join condition (often referred to as the subservient table). With SQL Server, you can use the *= and =* outer join operators. The * is placed on the side of the equal sign next to the table whose rows are all preserved, so in a parent/child join it typically appears on the parent (PRIMARY KEY or UNIQUE constraint) side. This placement is essentially the reverse of the Informix OUTER keyword, which marks the subservient table. You cannot place the * on both sides of the equal sign (=).

The *= and =* are considered legacy join operators. SQL Server also supports the SQL-92 standard join operators listed below. It is recommended that you use this syntax. The SQL-92 standard syntax is more powerful and has fewer restrictions than the * operators.

Join operation

Description

CROSS JOIN

This is the cross product of two tables. It returns the same rows as if no WHERE clause were specified in an old-style join. This type of join is called a Cartesian join in Informix.

INNER

This join specifies that all inner rows be returned. Any unmatched rows are discarded. This is identical to a standard Informix table join.

LEFT [OUTER]

This type of join specifies that all of the left table outer rows be returned, even if no column matches are found. This operates just like an Informix outer join.

RIGHT [OUTER]

This type of join specifies that all of the right table outer rows be returned, even if no column matches are found. Informix has no direct equivalent to a right outer join.

FULL [OUTER]

This type of join specifies that if a row from either table does not match the selection criteria, the row is included in the result set, and its output columns that correspond to the other table are set to NULL. Informix has no direct equivalent to a full outer join.

The following code examples return lists of classes taken by all students. Outer joins are defined between the student and grade tables so that all students appear, even those who are not enrolled in any classes. Outer joins are also used with the class table in order to return the class names. Without the outer join to the class table, students who are not enrolled in any classes would not be returned, because they have null course codes (CCODE).

Informix

Microsoft SQL Server

SELECT S.SSN AS SSN,
FNAME, LNAME
FROM STUDENT S,
OUTER CLASS C, OUTER GRADE G
WHERE S.SSN = G.SSN
AND G.CCODE = C.CCODE

SELECT S.SSN AS SSN,
FNAME, LNAME
FROM STUDENT_ADMIN.GRADE G
RIGHT OUTER JOIN
STUDENT_ADMIN.STUDENT S
ON G.SSN = S.SSN
LEFT OUTER JOIN
DEPT_ADMIN.CLASS C
ON G.CCODE = C.CCODE

Using SELECT Statements as Table Names

Microsoft SQL Server and Informix support the use of SELECT statements as the source of tables when performing queries. SQL Server requires an alias; the use of an alias is optional with Informix.

Informix

Microsoft SQL Server

SELECT SSN, LNAME, FNAME,
TUITION_PAID, SUM_PAID
FROM STUDENT,
(SELECT SUM(TUITION_PAID) SUM_PAID
FROM STUDENT)

SELECT SSN, LNAME, FNAME,
TUITION_PAID, SUM_PAID
FROM STUDENT_ADMIN.STUDENT,
(SELECT SUM(TUITION_PAID) SUM_PAID
FROM STUDENT_ADMIN.STUDENT) SUM_STUDENT

Reading and Modifying BLOBs

Microsoft SQL Server implements binary large objects (BLOBs) with text and image columns. Informix implements BLOBs with BLOB or BYTE columns for binary data and CLOB or TEXT columns for text data. In Informix, a SELECT command can query the values in BLOB and CLOB columns.

In SQL Server, you can use a standard Transact-SQL statement or the specialized READTEXT statement to read data in text and image columns. The READTEXT statement allows you to read partial sections of a text or image column. Informix does not provide an equivalent statement. SQL Server does not have an equivalent for the Informix FILETOBLOB, LOTOFILE, and LOCOPY functions used to manipulate BLOBs.

The READTEXT statement makes use of a text_pointer, which can be obtained using the TEXTPTR function. The TEXTPTR function returns a pointer to the text or image column in the specified row or to the text or image column in the last row returned by the query if more than one row is returned. Because the TEXTPTR function returns a 16-byte binary string, it is best to declare a local variable to hold the text pointer, and then use the variable with READTEXT.

The READTEXT statement specifies how many bytes to return. The value in the @@TEXTSIZE function, which is the limit on the number of characters or bytes to be returned, supersedes the size specified by the READTEXT statement if it is less than the specified size for READTEXT.

The SET statement can be used with the TEXTSIZE parameter to specify the size, in bytes, of text data to be returned with a SELECT statement. If you specify a TEXTSIZE of 0, the size is reset to the default (4 KB). Setting the TEXTSIZE parameter affects the @@TEXTSIZE function. The SQL Server ODBC driver automatically sets the TEXTSIZE parameter when the SQL_MAX_LENGTH statement option is changed.
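A minimal sketch of the READTEXT pattern, assuming a text column named NOTES on the sample STUDENT table (the column is hypothetical):

```sql
DECLARE @ptr varbinary(16)
SELECT @ptr = TEXTPTR(NOTES)                 -- obtain the text pointer
FROM STUDENT_ADMIN.STUDENT
WHERE SSN = '123448887'
READTEXT STUDENT_ADMIN.STUDENT.NOTES @ptr 0 100   -- read the first 100 bytes
```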

In Informix, UPDATE and INSERT commands are used to change values in BLOB, CLOB, TEXT, and BYTE columns. In SQL Server, you can use standard UPDATE and INSERT statements, or you can use the UPDATETEXT and WRITETEXT statements. Both UPDATETEXT and WRITETEXT allow a nonlogged option, and UPDATETEXT allows for partial updating of a text or image column.

The UPDATETEXT statement can be used to replace existing data, delete existing data, or insert new data. Newly inserted data can be a constant value, table name, column name, or text pointer.

The WRITETEXT statement completely overwrites any existing data in the column it affects. Use WRITETEXT to replace text data and UPDATETEXT to modify text data. The UPDATETEXT statement is more flexible because it changes only a portion of a text or image value rather than the entire value.
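Continuing the hypothetical NOTES column example, the two statements might be used as follows:

```sql
DECLARE @ptr varbinary(16)
SELECT @ptr = TEXTPTR(NOTES)
FROM STUDENT_ADMIN.STUDENT
WHERE SSN = '123448887'
WRITETEXT STUDENT_ADMIN.STUDENT.NOTES @ptr 'Entire note replaced.'
UPDATETEXT STUDENT_ADMIN.STUDENT.NOTES @ptr 0 6 'Whole'   -- replace the first 6 bytes
```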

For more information, see SQL Server Books Online.

Functions

The tables in this section show the relationship between Informix and SQL Server scalar-valued and aggregate functions. Although many of the names are the same, it is important to note that the functions can vary in the number and types of their arguments. Functions supplied only by Microsoft SQL Server are not listed, because this paper is limited to easing migration of existing Informix applications. Examples of SQL Server functions with no Informix counterpart are DEGREES, PI, and RAND.

Number/Mathematical Functions

The following are number/mathematical functions supported by Informix and their Microsoft SQL Server equivalents.

Function

Informix

Microsoft SQL Server

Absolute value

ABS

Same

Arc cosine

ACOS

Same

Arc sine

ASIN

Same

Arc tangent of n

ATAN

Same

Arc tangent of n and m

ATAN2

ATN2

Cosine

COS

Same

Exponential value

EXP

Same

Hex value

HEX

N/A

Natural logarithm

LOGN

LOG

Logarithm, base 10

LOG10

Same

Modulus (remainder)

MOD

USE MODULO (%) OPERATOR

Power

POW

POWER

Root value

ROOT

N/A

Round

ROUND

Same

Sign of number

N/A

SIGN

Sine

SIN

Same

Square root

SQRT

Same

Tangent

TAN

Same

Truncate

TRUNC

N/A

Character Functions

The following are character functions supported by Informix and their Microsoft SQL Server equivalents.

Function

Informix

Microsoft SQL Server

Convert characters to lowercase

LOWER

Same

Convert characters to uppercase

UPPER

Same

Pad left side of character string

LPAD

N/A

Remove leading blanks

TRIM

LTRIM

Remove trailing blanks

TRIM

RTRIM

Repeat character string
multiple times

RPAD

REPLICATE

String of repeated spaces

RPAD

SPACE

Substring

SUBSTR
SUBSTRING

SUBSTRING

Replace characters

REPLACE

STUFF

Capitalize first letter of each word in string

INITCAP

N/A

Length of character string

LENGTH
CHAR_LENGTH
CHARACTER_LENGTH

DATALENGTH or LEN

Bytes in a character column including spaces

OCTET_LENGTH

N/A

Date Functions

The following are date functions supported by Informix and their Microsoft SQL Server equivalents.

Function

Informix

Microsoft SQL Server

Date addition

date column +/- INTERVAL(value)
or
date column +/- DATETIME(value)
or
date column +/- value UNITS datetime unit

DATEADD

Difference between dates

date column +/- DATETIME(value)
or
date column +/- DATE(value)

DATEDIFF

Current date and time

CURRENT
TODAY

GETDATE()

Character string representation
of date

DATETIME(value)

DATENAME

Integer representation of date

N/A

DATEPART

Date round

DATETIME(value) datetime unit TO datetime unit

CAST

Date truncate

DATETIME(value) datetime unit TO datetime unit

CAST

Character string to date

DATETIME(value)
or
DATE(value)

CAST

Convert date if NULL

N/A

ISNULL

Conversion Functions

The following are conversion functions supported by Informix and their Microsoft SQL Server equivalents.

Function

Informix

Microsoft SQL Server

Number to character

Implied

CONVERT

Character to number

Implied (error if cast cannot be performed)

CONVERT

Date to character

Implied

CONVERT

Character to date

Implied (error if cast cannot be performed)

CONVERT

Hex to binary

N/A

CONVERT

Binary to hex

N/A

CONVERT

Other Row-level Functions

The following are other row-level functions supported by Informix and their Microsoft SQL Server equivalents.

Function

Informix

Microsoft SQL Server

Return first non-null expression

DECODE

COALESCE

If exp1 = exp2, return null

DECODE

NULLIF

User's login ID number

N/A

SUSER_ID

User's login name

USER

SUSER_NAME

User's database ID number

N/A

USER_ID

User's database name

USER

USER_NAME

Current User

USER

CURRENT_USER

Aggregate Functions

The following are aggregate functions supported by Informix and their SQL Server equivalents.

Function

Informix

Microsoft SQL Server

Average

AVG

Same

Count

COUNT

Same

Maximum

MAX

Same

Minimum

MIN

Same

Standard deviation

STDDEV

STDEV or STDEVP

Summation

SUM

Same

Variance

VARIANCE

VAR or VARP

Range

RANGE

N/A

Conditional Tests

The Informix DECODE and CASE statements and the Microsoft SQL Server CASE expression perform conditional tests. When the value in test_value matches any following expression, the related value is returned. If no match is found, the default_value is returned. If no default_value is specified, both DECODE and CASE return NULL if there is no match. The table shows the syntax as well as an example of a converted DECODE command.

Informix

Microsoft SQL Server

DECODE (test_value,
expression1, value1
[[,expression2, value2] […]]
[,default_value]
)
CASE test_value
WHEN expression1 THEN value1
[[WHEN expression2 THEN value2] [...]]
[ELSE default_value]
END
CREATE VIEW STUDENT_GPA
(SSN, GPA)
AS SELECT SSN, ROUND(AVG(DECODE(grade
,'A', 4
,'A+', 4.3
,'A-', 3.7
,'B', 3
,'B+', 3.3
,'B-', 2.7
,'C', 2
,'C+', 2.3
,'C-', 1.7
,'D', 1
,'D+', 1.3
,'D-', 0.7
,0)),2)
FROM GRADE
GROUP BY SSN
or
CREATE VIEW STUDENT_GPA
(SSN, GPA)
AS SELECT SSN,
ROUND(AVG(CASE grade
WHEN 'A' THEN 4
WHEN 'A+' THEN 4.3
WHEN 'A-' THEN 3.7
WHEN 'B' THEN 3
WHEN 'B+' THEN 3.3
WHEN 'B-' THEN 2.7
WHEN 'C' THEN 2
WHEN 'C+' THEN 2.3
WHEN 'C-' THEN 1.7
WHEN 'D' THEN 1
WHEN 'D+' THEN 1.3
WHEN 'D-' THEN 0.7
ELSE 0
END),2)
FROM GRADE
GROUP BY SSN

CASE test_value
WHEN expression1 THEN value1
[[WHEN expression2 THEN value2] [...]]
[ELSE default_value]
END
CREATE VIEW STUDENT_ADMIN.STUDENT_GPA
(SSN, GPA)
AS SELECT SSN, ROUND(AVG(CASE grade
WHEN 'A' THEN 4
WHEN 'A+' THEN 4.3
WHEN 'A-' THEN 3.7
WHEN 'B' THEN 3
WHEN 'B+' THEN 3.3
WHEN 'B-' THEN 2.7
WHEN 'C' THEN 2
WHEN 'C+' THEN 2.3
WHEN 'C-' THEN 1.7
WHEN 'D' THEN 1
WHEN 'D+' THEN 1.3
WHEN 'D-' THEN 0.7
ELSE 0
END),2)
FROM STUDENT_ADMIN.GRADE
GROUP BY SSN

The SQL Server CASE expression can support the use of SELECT statements for performing Boolean tests, something the Informix DECODE and CASE commands do not allow. For more information about the CASE expression, see SQL Server Books Online.
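For example, a searched CASE expression can test an EXISTS subquery; the query below is illustrative, using the sample schema:

```sql
SELECT SSN,
    CASE WHEN EXISTS (SELECT * FROM STUDENT_ADMIN.GRADE G
                      WHERE G.SSN = S.SSN)
         THEN 'Enrolled'
         ELSE 'Not enrolled'
    END AS STATUS
FROM STUDENT_ADMIN.STUDENT S
```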

Converting Values to Different Datatypes

The Microsoft SQL Server CONVERT and CAST functions are multiple purpose conversion functions. They provide similar functionality, converting an expression of one data type to another data type, and supporting a variety of special date formats:

CAST(expression AS data_type)

CONVERT (data type[(length)], expression [, style])

CAST is a SQL-92 standard function.

With SQL Server, the data type is any system data type into which the expression is to be converted; user-defined data types cannot be used as the target, although Informix allows this. The length parameter is optional and is used with char, varchar, binary, and varbinary data types. The maximum allowable length is 8000.

Conversion

Informix

Microsoft SQL Server

Character to number

CAST('10' AS number)
'10'::number

CONVERT(numeric, '10')

Number to character

CAST(10 as char)
10::char

CONVERT(char, 10)

Character to date

CAST('97-JUL-04' AS datetime)
'97-JUL-04'::datetime
CAST('07-04-97' AS date)
'07-04-97'::date

CONVERT(datetime, '04-JUL-97')
CONVERT (datetime, '04-JUL-1997')
CONVERT (datetime, 'July 4, 1997')

Date to character

CAST(CURRENT AS char)
CURRENT::char

CONVERT(char, getdate())
CONVERT(char, getdate(), 106)
CONVERT(char, getdate(), 101)

Hex to binary

N/A

CONVERT(binary, '1F')

Binary to hex

N/A

CONVERT(char,binary_column)

Notice how character strings are converted to dates. In Informix, the default date format model is "YY-MM-DD" for datetime and "MM-DD-YY" for date. If you use any other format, you must provide an appropriate date format model. The CONVERT function automatically converts standard date formats without the need for a format model.

When converting from a date to a character string, the default output for the CONVERT function is "dd mon yyyy hh:mm:ss:mmm(24h)". A numeric style code is used to format the output to other types of date format models. For more information about the CONVERT function, see SQL Server Books Online.

The following table shows the default output for Microsoft SQL Server dates.

Without century

With century

Standard

Output

-

0 or 100 (*)

Default

mon dd yyyy hh:miAM (or PM)

1

101

USA

mm/dd/yy

2

102

ANSI

yy.mm.dd

3

103

British/French

dd/mm/yy

4

104

German

dd.mm.yy

5

105

Italian

dd-mm-yy

6

106

-

dd mon yy

7

107

-

mon dd, yy

8

108

-

hh:mm:ss

-

9 or 109 (*)

Default milliseconds

mon dd yyyy hh:mi:ss:mmm (AM or PM)

10

110

USA

mm-dd-yy

11

111

JAPAN

yy/mm/dd

12

112

ISO

yymmdd

-

13 or 113 (*)

Europe default

dd mon yyyy hh:mm:ss:mmm(24h)

14

114

-

hh:mi:ss:mmm(24h)
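For example, the style codes in the table can be applied as follows:

```sql
SELECT CONVERT(char(8),  GETDATE(), 112)   -- ISO with century: yyyymmdd
SELECT CONVERT(char(10), GETDATE(), 101)   -- USA with century: mm/dd/yyyy
SELECT CONVERT(char(8),  GETDATE(), 108)   -- time only: hh:mm:ss
```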

User-defined Functions

Informix SPL functions can be used in Informix SQL statements. This functionality can often be achieved in other ways with Microsoft SQL Server.

In the following example, the Informix user-defined function GET_SUM_MAJOR is used to obtain a sum of tuition paid by major. It can be replaced in SQL Server by using a query as a table.

Informix

Microsoft SQL Server

SELECT ssn, fname, lname, tuition_paid,
tuition_paid/get_sum_major(major) as percent_major
FROM student

SELECT ssn, fname, lname, tuition_paid, tuition_paid/sum_major as percent_major
FROM student_admin.student,
(SELECT major, sum(tuition_paid) sum_major
FROM student_admin.student
GROUP BY major) sum_student
WHERE student.major = sum_student.major

CREATE FUNCTION get_sum_major
(inmajor VARCHAR(4)) RETURNING DECIMAL;
DEFINE sum_paid DECIMAL;
SELECT sum(tuition_paid) INTO sum_paid
FROM student
WHERE major = inmajor;
RETURN sum_paid;
END FUNCTION;

No CREATE FUNCTION syntax is required;
use CREATE PROCEDURE syntax.

Comparison Operators

Informix and Microsoft SQL Server comparison operators are nearly identical.

Operator

Informix

Microsoft SQL Server

Equal to

(=)

Same

Greater than

(>)

Same

Less than

(<)

Same

Greater than or equal to

(>=)

Same

Less than or equal to

(<=)

Same

Not equal to

(!=, <>,^=)

Same

Not greater than, not less than

N/A

!> , !<

In any member in set

IN

Same

Not in any member in set

NOT IN

Same

Any value in set

ANY, SOME

Same

Referring to all values in set.

!= ALL, <> ALL, < ALL, > ALL, <= ALL, >= ALL

Same

Like pattern

LIKE

Same

Not like pattern

NOT LIKE

Same

Value between x and y

BETWEEN x AND y

Same

Value not between

NOT BETWEEN

Same

Value exists

EXISTS

Same

Value does not exist

NOT EXISTS

Same

Value {is | is not} NULL

IS NULL, IS NOT NULL

Same. Also supports = NULL, != NULL for backward compatibility (not recommended).

Pattern Matches

The SQL Server LIKE keyword offers useful wildcard search options that overlap with the Informix LIKE and MATCHES keywords. In addition to the % and _ wildcard characters common to both RDBMSs, the SQL Server LIKE keyword supports the [ ] and [^] character ranges, which the Informix MATCHES keyword also supports. Any occurrence of the Informix MATCHES keyword should be replaced with the SQL Server LIKE keyword. Note that MATCHES uses the '?' character to match a single character, where SQL Server LIKE uses '_', and the '*' character to match zero or more characters, where LIKE uses '%'.

The [ ] character set is used to search for any single character within a specified range. For example, if you search for the characters a through f in a single character position, you can specify this with LIKE '[a-f]' or LIKE '[abcdef]'. The usefulness of these additional wildcard characters is shown in this table.

Informix

Microsoft SQL Server

SELECT * FROM STUDENT
WHERE LNAME MATCHES '[ABC]?'

SELECT * FROM STUDENT_ADMIN.STUDENT
WHERE LNAME LIKE '[ABC]%'

The [^] wildcard character set is used to specify characters NOT in the specified range. For example, if any character except a through f is acceptable, you use LIKE '[^a - f]' or LIKE '[^abcdef]'. This syntax is common to the SQL Server LIKE keyword and the Informix MATCHES keyword.

For more information about the LIKE keyword, see SQL Server Books Online.

Using NULL in Comparisons

Microsoft SQL Server traditionally has supported both the SQL-92 standard NULL behavior and some nonstandard NULL behaviors, so NULL comparisons can be handled in a manner compatible with Informix.

SET ANSI_NULLS should be set to ON for executing distributed queries.

The SQL Server ODBC driver and OLE DB Provider for SQL Server automatically SET ANSI_NULLS to ON when connecting. This setting can be configured in ODBC data sources, in ODBC connection attributes, or in OLE DB connection properties that are set in the application before connecting to SQL Server. SET ANSI_NULLS defaults to OFF for connections from DB-Library applications.

When SET ANSI_DEFAULTS is ON, SET ANSI_NULLS is enabled.

For more information about the use of NULL, see SQL Server Books Online.

String Concatenation

Informix uses two pipe symbols (||) as the string concatenation operator, and SQL Server uses the plus sign (+). This difference requires minor revision in your application program code.

Informix

Microsoft SQL Server

SELECT FNAME||' '||LNAME AS NAME
FROM STUDENT

SELECT FNAME +' '+ LNAME AS NAME
FROM STUDENT_ADMIN.STUDENT

Control-of-Flow Language

The control-of-flow language controls the flow of execution of SQL statements, statement blocks, and stored procedures. SPL and Transact-SQL provide many of the same constructs, although there are some syntax differences.

Keywords

These are the keywords supported by each RDBMS.

Statement

Informix SPL

Microsoft SQL Server Transact-SQL

Declare variables

DEFINE
DEFINE GLOBAL <variable>
DEFINE GLOBAL <variable>
DEFAULT <value>

DECLARE

Statement block

BEGIN...END;

BEGIN...END

Conditional processing

IF…THEN,
ELIF…THEN,
ELSE
END IF;

IF…[BEGIN…END]
ELSE <condition>
[BEGIN…END]
ELSE IF <condition>
CASE expression

Unconditional exit

RETURN

RETURN

Unconditional exit to the statement following the end of the current program block

EXIT FOR
EXIT FOREACH
EXIT WHILE

BREAK

Restarts a WHILE loop

CONTINUE

CONTINUE

Wait for a specified interval

N/A

WAITFOR

Loop control

WHILE <condition>
END WHILE

FOR <condition> … END FOR;

WHILE <condition>
BEGIN… END
LABEL…GOTO LABEL

Program comments

/* … */, --

/* … */, --

Print output

N/A

PRINT

Raise program error

RAISE EXCEPTION

RAISERROR

Execute program

EXECUTE

EXECUTE

Statement terminator

Semicolon (;)

Semicolon (;)

Declaring Variables

Transact-SQL variables are created with the DECLARE keyword and SPL variables are created with the DEFINE keyword. Transact-SQL variables are identified with the @ symbol and, like SPL variables, are initialized to a null value when they are first created. Transact-SQL does not support GLOBAL variables.

Informix

Microsoft SQL Server

DEFINE
VSSN CHAR(9);
VFNAME VARCHAR(12);
VLNAME VARCHAR(20);
VBIRTH_DATE DATE;
VLOAN_AMOUNT NUMBER(12,2);

DECLARE
@VSSN CHAR(9),
@VFNAME VARCHAR(12),
@VLNAME VARCHAR(20),
@VBIRTH_DATE DATETIME,
@VLOAN_AMOUNT NUMERIC(12,2)

Transact-SQL does not support the SET, MULTISET, and LIST variable data type definitions. A Transact-SQL variable cannot be initialized in the DECLARE statement, whereas in SPL a GLOBAL variable can be given a DEFAULT value. The Informix NOT NULL cannot be used in Microsoft SQL Server data type definitions.

Unlike the Informix BLOB, CLOB, TEXT, and BYTE data types, the text and image data types cannot be used for variable declarations. Additionally, the SPL style record and table definitions are not supported.

Assigning Variables

Informix and Microsoft SQL Server offer these ways to assign values to local variables.

Informix

Microsoft SQL Server

LET variable = value

SET @variable = value

SELECT...INTO syntax for selecting column values from a single row

SELECT @var=<expression> [FROM…] for assigning a literal value, an expression involving other local variables, or a column value from a single row

N/A

FETCH…INTO syntax

Here are some syntax examples.

Informix

Microsoft SQL Server

DEFINE VSSN CHAR(9);
DEFINE VFNAME VARCHAR(12);
DEFINE VLNAME VARCHAR(20);
BEGIN
LET VSSN = '123448887';
SELECT FNAME, LNAME INTO VFNAME, VLNAME FROM STUDENTS WHERE SSN=VSSN;
END;

DECLARE @VSSN CHAR(9),
@VFNAME VARCHAR(12),
@VLNAME VARCHAR(20)
SET @VSSN = '123448887'
SELECT @VFNAME=FNAME, @VLNAME=LNAME FROM STUDENTS WHERE SSN = @VSSN

Statement Blocks

Informix SPL and Microsoft SQL Server Transact-SQL support the use of BEGIN…END terminology to specify statement blocks. Transact-SQL does not require the use of a statement block following the DECLARE statement. The BEGIN…END statement blocks are required in Microsoft SQL Server for IF statements and WHILE loops if more than one statement is executed.

Informix

Microsoft SQL Server

DEFINE
DEFINE VARIABLES ...
BEGIN -- THIS IS REQUIRED SYNTAX
PROGRAM_STATEMENTS ...
IF ...THEN
STATEMENT1;
STATEMENT2;
STATEMENTN;
END IF;
WHILE ... LOOP
STATEMENT1;
STATEMENT2;
STATEMENTN;
END WHILE;
END; -- THIS IS REQUIRED SYNTAX

DECLARE
DECLARE VARIABLES ...
BEGIN -- THIS IS OPTIONAL SYNTAX
PROGRAM_STATEMENTS ...
IF ...
BEGIN
STATEMENT1
STATEMENT2
STATEMENTN
END
WHILE ...
BEGIN
STATEMENT1
STATEMENT2
STATEMENTN
END
END -- THIS IS REQUIRED SYNTAX

Conditional Processing

The Microsoft SQL Server Transact-SQL conditional statement includes IF and ELSE rather than the ELIF statement in Informix SPL. Multiple IF statements can be nested to achieve the same effect. For extensive conditional tests, the CASE expression may be easier to read.

Informix

Microsoft SQL Server

DEFINE
VDEGREE_PROGRAM CHAR(1);
VDEGREE_PROGRAM_NAME VARCHAR(20);
BEGIN
LET VDEGREE_PROGRAM = 'U';
IF VDEGREE_PROGRAM = 'U' THEN
LET VDEGREE_PROGRAM_NAME =
'Undergraduate';
ELIF VDEGREE_PROGRAM = 'M' THEN
LET VDEGREE_PROGRAM_NAME =
'Masters';
ELIF VDEGREE_PROGRAM = 'P' THEN
LET VDEGREE_PROGRAM_NAME = 'PhD';
ELSE
LET VDEGREE_PROGRAM_NAME =
'Unknown';
END IF;
END;

DECLARE
@VDEGREE_PROGRAM CHAR(1),
@VDEGREE_PROGRAM_NAME VARCHAR(20)
SELECT @VDEGREE_PROGRAM = 'U'
SELECT @VDEGREE_PROGRAM_NAME =
CASE @VDEGREE_PROGRAM
WHEN 'U' THEN 'Undergraduate'
WHEN 'M' THEN 'Masters'
WHEN 'P' THEN 'PhD'
ELSE 'Unknown'
END

Repeated Statement Execution (Looping)

Informix SPL provides the conditional WHILE and FOR loops. Transact-SQL offers the WHILE loop and the GOTO statement for looping purposes.

WHILE Boolean_expression 
   {sql_statement | statement_block} 
   [BREAK] [CONTINUE]

The WHILE loop tests a Boolean expression for the repeated execution of one or more statements. The statement(s) are executed repeatedly as long as the specified expression evaluates to TRUE. If multiple statements are to be executed, they must be placed within a BEGIN…END block.

Informix

Microsoft SQL Server

DEFINE
COUNTER INTEGER;
BEGIN
LET COUNTER = 0;
WHILE COUNTER < 5
LET COUNTER = COUNTER + 1;
END WHILE;
END;

DECLARE
@COUNTER NUMERIC
SELECT @COUNTER = 0
WHILE (@COUNTER < 5)
BEGIN
SELECT @COUNTER = @COUNTER + 1
END

Statement execution can be controlled from inside the loop with the BREAK and CONTINUE keywords. The BREAK keyword causes an unconditional exit from the WHILE loop, and the CONTINUE keyword causes the WHILE loop to restart, skipping any statements that follow. The BREAK keyword is equivalent to the Informix SPL EXIT keyword. The Transact-SQL and Informix SPL CONTINUE keywords are equivalent.
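As a sketch, the BREAK keyword can replace the Informix SPL EXIT WHILE construct:

```sql
DECLARE @COUNTER INT
SELECT @COUNTER = 0
WHILE (1 = 1)
BEGIN
    SELECT @COUNTER = @COUNTER + 1
    IF @COUNTER >= 5
        BREAK          -- unconditional exit, like EXIT WHILE in SPL
END
```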

GOTO Statement

Microsoft SQL Server has a GOTO statement, while Informix only supports the GOTO as an argument of the WHENEVER keyword. The GOTO statement causes the execution of a Transact-SQL batch to jump to a label. None of the statements between the GOTO statement and the label are executed.

Informix

Microsoft SQL Server

WHENEVER <condition> GOTO label;
label:

GOTO label
label:

PRINT Statement

Informix does not have an equivalent to the Transact-SQL PRINT statement, which is used to return user-specified messages.

The message limit for the PRINT statement is 8,000 characters. Variables that are defined using the char or varchar data type can be embedded in the printed statement. If any other data type is used, the CONVERT or CAST function must be used. Local variables, global variables, and text can be printed. Both single and double quotation marks can be used to enclose text.
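For example, a non-character value must be converted before it is embedded in the message:

```sql
DECLARE @COUNT INT
SELECT @COUNT = COUNT(*) FROM STUDENT_ADMIN.STUDENT
PRINT 'Student rows: ' + CONVERT(VARCHAR(10), @COUNT)
```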

Returning from Stored Procedures

Both Microsoft SQL Server and Informix have RETURN statements. RETURN lets your program exit unconditionally from a query or procedure. RETURN is immediate and complete and can be used at any point to exit from a procedure, batch, or statement block. Statements following RETURN are not executed.

Informix

Microsoft SQL Server

RETURN <expression>;

RETURN <integer>

Raising Program Errors

The Transact-SQL RAISERROR statement returns a user-defined error message and sets a system flag to record that an error has occurred. It is similar in function to the SPL RAISE EXCEPTION exception handler.

The RAISERROR statement allows the client to retrieve an entry from the sysmessages table or build a message dynamically with user-specified severity and state information. When defined, this message is sent back to the client as a server error message.

RAISERROR ({msg_id | msg_str}, severity, state 
   [, argument1 [, argument2]]) 
   [WITH options]

When converting your SPL programs, it may not be necessary to use the RAISERROR statement. In the following code example, the SPL program uses the RAISE EXCEPTION exception handler, while the Transact-SQL program uses nothing. The RAISE EXCEPTION exception handler has been included to prevent the SPL program from possibly returning an ambiguous unhandled exception error message. Instead, it always returns the Informix error message (SQLERRM) when an unanticipated problem occurs.

When a Transact-SQL program fails, it always returns a detailed error message to the client program. Therefore, unless some specialized error handling is required, the RAISERROR statement is not always needed.
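Where specialized handling is required, a message can be built dynamically and raised as a server error (the message text, severity, and state below are hypothetical):

DECLARE @VDEPT VARCHAR(4)
SELECT @VDEPT = 'MATH'
IF NOT EXISTS (SELECT * FROM DEPT WHERE DEPT = @VDEPT)
    RAISERROR ('Department %s does not exist.', 16, 1, @VDEPT)

The %s placeholder is replaced by the @VDEPT argument, and the client receives the formatted text as a severity 16 error message.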

Informix

Microsoft SQL Server

CREATE FUNCTION DELETE_DEPT
(VDEPT VARCHAR(4)) RETURNING INTEGER
BEGIN
ON EXCEPTION
RAISE EXCEPTION -20001;
END EXCEPTION;
DELETE FROM DEPT
WHERE DEPT = VDEPT;
RETURN DBINFO('sqlca.sqlerrd2');
END;

CREATE PROCEDURE
DEPT_ADMIN.DELETE_DEPT
(@VDEPT VARCHAR(4))
AS
BEGIN
DELETE FROM DEPT_DB.DBO.DEPT
WHERE DEPT = @VDEPT
RETURN @@ROWCOUNT
END

Implementing Cursors

Informix requires that cursors be used with SELECT statements if the number of rows returned from the database can be greater than one. In Microsoft SQL Server, a SELECT statement that is not enclosed within a cursor returns rows to the client as a default result set. This is an efficient way to return data to a client application.

SQL Server provides two interfaces for cursor functions. When cursors are used in Transact-SQL batches or stored procedures, SQL statements can be used to declare, open, and fetch from cursors, as well as to perform positioned updates and deletes. When cursors are used from a DB-Library, ODBC, or OLE DB program, the SQL Server client libraries transparently call built-in server functions to handle cursors more efficiently.

When porting an SPL procedure from Informix, first determine whether cursors are needed to perform the same function in Transact-SQL. If the cursor only returns a set of rows to the client application, use a noncursor SELECT statement in Transact-SQL to return a default result set. If the cursor is used to load data a row at a time into local procedure variables, you must use cursors in Transact-SQL.

Syntax

The table shows the syntax for using cursors.

Operation

Informix

Microsoft SQL Server

Declaring a cursor

FOREACH cursor_name
[WITH HOLD]
FOR select_statement
[INTO variables]
END FOREACH;

DECLARE cursor_name CURSOR
[LOCAL | GLOBAL]
[FORWARD_ONLY | SCROLL]
[STATIC | KEYSET | DYNAMIC | FAST_FORWARD]
[READ_ONLY | SCROLL_LOCKS | OPTIMISTIC]
[TYPE_WARNING]
FOR select_statement
[FOR UPDATE [OF column_name [,n]]]

Opening a cursor

N/A (opened by FOREACH)

OPEN cursor_name

Fetching from cursor

N/A (fetched by FOREACH)

FETCH [[NEXT | PRIOR | FIRST | LAST | ABSOLUTE { n | @nvar} | RELATIVE {n | @nvar}]
FROM] cursor_name
[INTO @variable(s)]

Update fetched row

UPDATE table_name
SET statement(s)…
WHERE CURRENT OF cursor_name;

UPDATE table_name
SET statement(s)…
WHERE CURRENT OF cursor_name

Delete fetched row

DELETE FROM table_name
WHERE CURRENT OF cursor_name;

DELETE FROM table_name
WHERE CURRENT OF cursor_name

Closing cursor

N/A (closed by END FOREACH)

CLOSE cursor_name

Remove cursor data structures

N/A

DEALLOCATE cursor_name

Declaring a Cursor

Although the Transact-SQL DECLARE CURSOR statement does not support the use of cursor arguments, it does support local variables. The value of these local variables is used in the cursor when it is opened. Microsoft SQL Server offers many additional capabilities in its DECLARE CURSOR statement.
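For example, a local variable can take the place of a cursor argument; the value the variable holds when the cursor is opened is the value the cursor uses (the variable name and value below are hypothetical):

DECLARE @VMAJOR VARCHAR(4)
SELECT @VMAJOR = 'MATH'
DECLARE CUR1 CURSOR FOR
    SELECT SSN, FNAME, LNAME
    FROM STUDENT
    WHERE MAJOR = @VMAJOR    -- evaluated when OPEN CUR1 executes
OPEN CUR1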

The INSENSITIVE option is used to define a cursor that makes a temporary copy of the data to be used by that cursor. All of the requests to the cursor are answered by this temporary table. Consequently, modifications made to base tables are not reflected in the data returned by fetches made to this cursor. Data accessed by this type of cursor cannot be modified.

Applications can request a cursor type and then execute a Transact-SQL statement that is not supported by server cursors of the type requested. SQL Server returns an error that indicates the cursor type has changed, or given a set of factors, implicitly converts a cursor. For a complete list of factors that trigger SQL Server 2000 to implicitly convert a cursor from one type to another, see SQL Server Books Online.

The SCROLL option allows backward, absolute, and relative fetches in addition to forward fetches. A scroll cursor uses a keyset cursor model in which committed deletes and updates made to the underlying tables by any user are reflected in subsequent fetches. This is true only if the cursor is not declared with the INSENSITIVE option.

If the READ ONLY option is chosen, updates are prevented from occurring against any row within the cursor. This option overrides the default capability of a cursor to be updated.

The UPDATE [OF column_list] statement is used to define updatable columns within the cursor. If [OF column_list] is supplied, only the columns listed allow modifications. If no list is supplied, all of the columns can be updated unless the cursor has been defined as READ ONLY.

It is important to note that the name scope for a SQL Server cursor is the connection itself. This is different from the name scope of a local variable. A second cursor with the same name as an existing cursor on the same user connection cannot be declared until the first cursor is deallocated.

Opening a Cursor

Transact-SQL does not support the passing of arguments to a cursor when it is opened. When a Transact-SQL cursor is opened, the result set membership and ordering is fixed. Updates and deletes that have been committed against the base tables of the cursor by other users are reflected in fetches made against all cursors defined without the INSENSITIVE option. In the case of an INSENSITIVE cursor, a temporary table is generated.

Fetching Data

Informix cursors can move in a forward direction only—there is no backward or relative scrolling capability. SQL Server cursors can scroll forward and backward with the fetch options shown in the following table. These fetch options can be used only when the cursor is declared with the SCROLL option.

Scroll option

Description

NEXT

Returns the first row of the result set if this is the first fetch against the cursor; otherwise, it moves the cursor one row within the result set. NEXT is the primary method used to move through a result set. NEXT is the default cursor fetch.

PRIOR

Returns the previous row within the result set.

FIRST

Moves the cursor to the first row within the result set and returns the first row.

LAST

Moves the cursor to the last row within the result set and returns the last row.

ABSOLUTE n

Returns the nth row within the result set. If n is a negative value, the returned row is the nth row counting backward from the last row of the result set.

RELATIVE n

Returns the nth row after the currently fetched row. If n is a negative value, the returned row is the nth row counting backward from the relative position of the cursor.

The Transact-SQL FETCH statement does not require the INTO clause. If return variables are not specified, the row is automatically returned to the client as a single-row result set. However, if your procedure must get the rows to the client, a noncursor SELECT statement is much more efficient.

The @@FETCH_STATUS function is updated following each FETCH. It is similar in use to the SQLCA variables used in SPL. The @@FETCH_STATUS function is set to the value of 0 following a successful fetch. If the fetch tries to read beyond the end of the cursor, a value of –1 is returned. If the requested row has been deleted from the table after the cursor was opened, the @@FETCH_STATUS function returns –2. The value of –2 usually occurs only in a cursor declared with the SCROLL option. This variable must be checked following each fetch to ensure the validity of the data.

SQL Server cursors usually do not hold locks on the fetched row. Rather, they use an optimistic concurrency strategy to prevent updates from overwriting each other. If one user attempts to update or delete a row that has been changed since it was read into the cursor, SQL Server detects the problem and issues an error message. The application can trap this error message and retry the update or delete as appropriate.

The optimistic technique supports higher concurrency in the normal case where conflicts between updaters are rare. If your application really needs to ensure that a row cannot be changed after it is fetched, you can use the UPDLOCK hint in your SELECT statement to achieve this effect.
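A sketch of this approach (the cursor and table are hypothetical, reusing the sample GRADE table):

DECLARE CUR1 CURSOR FOR
    SELECT SSN, GRADE
    FROM GRADE WITH (UPDLOCK)    -- update locks are held until the transaction ends
    FOR UPDATE OF GRADE

With UPDLOCK, other transactions can still read the rows, but no one else can acquire an update lock on them until the cursor's transaction commits or rolls back.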

CURRENT OF Clause

The CURRENT OF clause syntax and function for updates and deletes is the same in both SPL and Transact-SQL. A positioned UPDATE or DELETE is performed against the current row within the specified cursor.

Closing a Cursor

The Transact-SQL CLOSE statement closes the cursor but leaves the data structures accessible for reopening. SPL has no equivalent statement; an Informix cursor is closed automatically when its FOREACH loop ends.

Transact-SQL requires the DEALLOCATE statement to remove the cursor data structures. DEALLOCATE differs from CLOSE in that a closed cursor can be reopened, while a deallocated cursor cannot: DEALLOCATE releases all data structures associated with the cursor and removes the definition of the cursor.
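The difference can be sketched as follows (the cursor itself is hypothetical):

DECLARE CUR1 CURSOR FOR
    SELECT SSN FROM STUDENT
OPEN CUR1
FETCH NEXT FROM CUR1
CLOSE CUR1         -- result set released; cursor definition kept
OPEN CUR1          -- legal: a closed cursor can be reopened
CLOSE CUR1
DEALLOCATE CUR1    -- definition removed; the name CUR1 can now be redeclared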

Cursor Example

The example below shows equivalent cursor statements in SPL and Transact-SQL.

Informix

Microsoft SQL Server

DEFINE
VSSN CHAR(9);
VFNAME VARCHAR(12);
VLNAME VARCHAR(20);
FOREACH cur1 FOR
SELECT SSN, FNAME, LNAME
INTO VSSN, VFNAME, VLNAME
FROM STUDENT ORDER BY LNAME
END FOREACH;

DECLARE
@VSSN CHAR(9),
@VFNAME VARCHAR(12),
@VLNAME VARCHAR(20)
DECLARE CUR1 CURSOR FOR
SELECT SSN, FNAME, LNAME
FROM STUDENT ORDER BY LNAME
OPEN CUR1
FETCH NEXT FROM CUR1
INTO @VSSN, @VFNAME, @VLNAME
WHILE (@@FETCH_STATUS <> -1)
BEGIN
FETCH NEXT FROM CUR1
INTO @VSSN, @VFNAME, @VLNAME
END
CLOSE CUR1
DEALLOCATE CUR1

Tuning SQL Statements

Microsoft SQL Server 2000 is a largely auto-configuring and self-tuning database server, dramatically reducing the burden of server configuration on the database administrator. In most cases, SQL Server runs best when these autotuning parameters are left at their default settings and when administrators allow SQL Server to handle the performance tuning. For the latest information about tuning your SQL Server database, see http://msdn2.microsoft.com/sqlserver/default.aspx.

SQL Server Query Analyzer

You can use the graphical showplan feature of SQL Server Query Analyzer to learn more about how the optimizer will process your statement.

SQL Server Profiler

This graphical tool captures a continuous record of server activity in real-time. SQL Server Profiler monitors many different server events and event categories, filters these events with user-specified criteria, and outputs a trace to the screen, a file, or another server running SQL Server.

SQL Server Profiler can be used to:

  • Monitor the performance of SQL Server.

  • Debug Transact-SQL statements and stored procedures.

  • Identify slow-executing queries.

  • Troubleshoot problems in SQL Server by capturing all the events that lead up to a particular problem, and then replaying the events on a test system to replicate and isolate the problem.

  • Test SQL statements and stored procedures in the development phase of a project by single-stepping through statements, one line at a time, to confirm that the code works as expected.

  • Capture events on a production system and replay those captured events on a test system, thereby re-creating what happened in the production environment for testing or debugging purposes. Replaying captured events on a separate system allows the users to continue using the production system without interference.

SQL Profiler provides a graphical user interface to a set of extended stored procedures. You can also use these extended stored procedures directly. Therefore, it is possible to create your own application that uses the SQL Profiler extended stored procedures to monitor SQL Server.

SET Statement

The SET statement can set SQL Server query-processing options for the duration of your work session, or for the duration of a running trigger or a stored procedure.

The SET FORCEPLAN ON statement forces the optimizer to process joins in the same order as the tables appear in the FROM clause.

The SET SHOWPLAN_ALL and SET SHOWPLAN_TEXT statements return only query or statement execution plan information and do not execute the query or statement. To execute the query or statement, set the appropriate showplan statement OFF. The query or statement will then execute.

With SET STATISTICS PROFILE ON, each executed query returns its regular result set, followed by an additional result set that shows a profile of the query execution. Other options include SET STATISTICS IO and SET STATISTICS TIME.
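For example, the following batch reports I/O and timing statistics alongside the query results (a hypothetical query against the sample STUDENT table):

SET STATISTICS IO ON
SET STATISTICS TIME ON
go
SELECT COUNT(*) FROM STUDENT
go
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
go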

Transact-SQL statement processing consists of two phases, compilation and execution. The NOEXEC option compiles each query but does not execute it. After NOEXEC is set ON, no subsequent statements are executed (including other SET statements) until NOEXEC is set OFF.

SET SHOWPLAN_TEXT ON 
go 
SELECT * FROM DEPT_ADMIN.DEPT, 
STUDENT_ADMIN.STUDENT 
WHERE MAJOR = DEPT 
go 
StmtText 
--------------------------------------------------- 
  |--Nested Loops(Inner Join, WHERE:([STUDENT].[MAJOR]=[DEPT].[DEPT])) 
       |--Table Scan(OBJECT:([STUDENT_ADMIN].[STUDENT])) 
       |--Table Scan(OBJECT:([DEPT_ADMIN].[DEPT]))

Query Optimization

Informix requires the use of hints to influence the operation and performance of its cost-based optimizer. The Microsoft SQL Server cost-based optimizer does not require the use of hints to assist in its query evaluation process. They are offered, however, as some situations do warrant their use.

The INDEX = {index_name | index_id} hint specifies the index name or ID to use for that table. An index_id of 0 forces a table scan, while an index_id of 1 forces the use of a clustered index, if it exists. Informix does not support index_id and uses the keyword FULL to force a table scan.

The SQL Server FASTFIRSTROW hint directs the optimizer to use a nonclustered index if its column order matches the ORDER BY clause. This hint operates in a similar fashion to the Informix FIRST_ROWS hint.
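These hints might be applied as follows (the index name and filter values are hypothetical):

SELECT * FROM STUDENT (INDEX = 0)
WHERE LNAME = 'SMITH'          -- index_id 0 forces a table scan

SELECT * FROM STUDENT (INDEX = STUDENT_LNAME_IDX)
WHERE LNAME = 'SMITH'          -- use the named index

SELECT * FROM STUDENT (FASTFIRSTROW)
ORDER BY LNAME                 -- favor a plan that returns the first row quickly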

Using XML

Microsoft SQL Server 2000 introduces new features to support XML functionality. Using XML, you can:

  • SELECT, INSERT, and UPDATE data in a SQL Server database.

  • Use XPath queries against XDR (XML-Data Reduced) schemas.

  • Format the results of Transact-SQL statements as XML using the FOR XML clause.
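As a brief sketch, the FOR XML clause turns an ordinary result set into XML (reusing the sample STUDENT table):

SELECT SSN, FNAME, LNAME
FROM STUDENT
FOR XML AUTO, ELEMENTS    -- return each row as a nested XML element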

For more information about SQL Server XML support, see Chapter 31, "Exposing SQL Server Data to the Web with XML" in this book and "XML and Internet Support Overview" in SQL Server Books Online. You can also search the MSDN® Library for the articles "Duwamish Online SQL Server XML Catalog Browsing" and "SQL Server XML and Web Application Architecture" at http://msdn2.microsoft.com/xml/default.aspx.

Using ODBC

This section provides information about the ways Informix and SQL Server use ODBC, and about developing or migrating applications with ODBC. Although ODBC remains fully supported for migrating applications written to this older API, Microsoft's preferred data access API is OLE DB.

Recommended Conversion Strategy

Use the following process when you convert your application code from Informix to SQL Server:

  1. Consider converting your application to ODBC if it is written using the Informix Call Level Interface (CLI).

  2. Understand SQL Server default result sets and cursor options, and choose the fetching strategy that is most efficient for your application.

  3. Remap Informix ODBC SQL data types to SQL Server ODBC SQL data types where appropriate.

  4. Use the ODBC Extended SQL extensions to create generic SQL statements.

  5. Determine if manual commit mode is required for the SQL Server–based application.

  6. Test the performance of your application(s) and modify the program(s) as necessary.

    Figure .10 Informix ODBC vs SQL 2000 ODBC

ODBC Architecture

Microsoft provides both 16-bit and 32-bit versions of its ODBC SQL Server driver. The 32-bit ODBC SQL Server driver is thread-safe. The driver serializes shared access by multiple threads to shared statement handles (hstmt), connection handles (hdbc), and environment handles (henv). However, the ODBC program is still responsible for keeping operations within statements and connection spaces in the proper sequence, even when the program uses multiple threads.

Because the ODBC driver for Informix can be supplied by one of many possible vendors, there are many possible scenarios regarding architecture and operation. You must contact the vendor to ensure that the ODBC driver meets your application's requirements.

In most cases, the ODBC driver for Informix uses CLI to connect to the Informix RDBMS.

The illustration shows the application/driver architecture for 32-bit environments.

Figure .11 Informix ODBC vs SQL 2000 ODBC Architecture

The term thunking refers to intercepting a function call, performing special processing to translate between 16-bit and 32-bit code, and then transferring control to a target function. Note how the ODBC Cursor Library optionally resides between the driver manager and its driver. This library provides scrollable cursor services on top of drivers that support only forward-only cursors.

Forward-only Cursors

Informix and SQL Server treat result sets and cursors differently. Understanding these differences is essential for successfully moving a client application from Informix to SQL Server and having it perform optimally.

In Informix, any result set from a SELECT command is treated as a forward-only cursor when fetched in the client application. This is true whether you are using ODBC, the Informix CLI, or embedded SQL (ESQL/C) as your development tool.

By default, each Informix FETCH command issued by the client program (for example, SQLFetch in ODBC) causes a round-trip across the network to the server to return one row. If a client application wants to fetch more than one row at a time across the network, it must set up an array in its program and perform an array fetch.

Between fetches, no locks are held at the server for a read-only cursor running at the default Committed Read isolation level. When the program specifies an updatable cursor with the FOR UPDATE clause, rows from the SELECT command are locked as they are fetched. These row-level locks remain in place until the program issues a COMMIT or ROLLBACK request.

In SQL Server, a SELECT statement is not always associated with a cursor at the server. By default, SQL Server simply streams all the result set rows from a SELECT statement back to the client. This streaming starts as soon as the SELECT is executed. Result set streams can also be returned by SELECT statements within stored procedures. Additionally, a single stored procedure or batch of commands can stream back multiple result sets in response to a single EXECUTE statement.

The SQL Server client is responsible for fetching these default result sets as soon as they are available. For default result sets, fetches at the client do not result in round-trips to the server. Instead, fetches from a default result set pull data from local network buffers into program variables. This default result set model creates an efficient mechanism to return multiple rows of data to the client in a single round-trip across this network. Minimizing network round-trips is usually the most important factor in client/server application performance.

Compared to Informix's cursors, default result sets put some additional responsibilities on the SQL Server client application. The SQL Server client application must immediately fetch all the result set rows returned by an EXECUTE statement. If the application needs to present rows incrementally to other parts of the program, it must buffer the rows to an internal array. If it fails to fetch all result set rows, the connection to SQL Server remains busy.

If this occurs, no other work (such as UPDATE statements) can be executed on that connection until all of the result set rows are fetched or the client cancels the request. Moreover, the server continues to hold share locks on table data pages until the fetch has completed. Because these share locks are held until the fetch is complete, you should fetch all rows as quickly as possible. This technique is in direct contrast to the incremental style of fetching commonly found in Informix applications.

Server Cursors

Microsoft SQL Server offers server cursors to address the need for incremental fetching of result sets across the network. Server cursors can be requested in an application by simply calling SQLSetStmtOption to set the SQL_CURSOR_TYPE option.

When a SELECT statement is executed as a server cursor, only a cursor identifier is returned by the EXECUTE statement. Subsequent fetch requests pass the cursor identifier back to the server along with a parameter specifying the number of rows to fetch at once. The server returns the number of rows requested.

Between fetch requests, the connection remains free to issue other commands, including other cursor OPEN or FETCH requests. In ODBC terms, this means that server cursors allow the SQL Server driver to support multiple active statements on a single connection.

Furthermore, server cursors do not usually hold locks between fetch requests, so you are free to pause between fetches for user input without affecting other users. Server cursors can be updated in place using either optimistic conflict detection or pessimistic scroll locking concurrency options.

While these features make programming with server cursors more familiar to Informix developers than using default result sets, they are not free. Compared to default result sets:

  • Server cursors are more expensive in terms of server resources, because temporary storage space is used to maintain cursor state information at the server.

  • Server cursors are more expensive for retrieving a given result set, because the EXECUTE statement and each fetch request against a server cursor require a separate round-trip to the server.

  • Server cursors are less flexible in terms of the kind of batches and stored procedures they support. This is because a server cursor can execute only one SELECT statement at a time, whereas default result sets can be used for batches and stored procedures that return multiple result sets or include statements other than SELECT statements.

For these reasons, it is wise to limit the use of server cursors to those parts of your application that need their features. An example that illustrates the use of server cursors can be found in the LIST_STUDENTS function in the Ssdemo.cpp sample SQL Server ODBC program file.

Scrollable Cursors

The Informix RDBMS CLI and SPL support only forward-scrolling cursors. Each row is fetched to the application in the order specified by the query. Informix does not accept requests to move backward to a previously fetched row. The only way to move backward is to close the cursor and reopen it, which repositions you at the first row of the active query set.

Because SQL Server supports scrollable cursors, you can position a SQL Server cursor at any row location. You can scroll both forward and backward. For many applications involving a user interface, scrollability is a useful feature. With scrollable cursors, your application can fetch a screen full of rows at a time, and only fetch additional rows as the user asks for them.

Although Informix does not directly support scrollable cursors, using one of several ODBC options can minimize this limitation. For example, some Informix ODBC drivers, such as the one that ships with the Informix Client SDK development system, offer client-based scrollable cursors in the driver itself.

Alternatively, the ODBC Cursor Library supports block scrollable cursors for any ODBC driver that complies with the Level One conformance level. Both of these client cursor options support scrolling by using the RDBMS for forward-only fetching, and by caching result set data in memory or on disk. When data is requested, the driver retrieves it from the RDBMS or its local cache as needed.

Client-based cursors also support positioned UPDATE and DELETE statements for the result sets generated by SELECT statements. The cursor library constructs an UPDATE or DELETE statement with a WHERE clause that specifies the cached value for each column in a row.

If you need scrollable cursors and are trying to maintain the same source code for both Informix and SQL Server implementations, the ODBC Cursor Library is a useful option. For more information about the ODBC Cursor Library, see your ODBC documentation.

Strategies for Using SQL Server Default Result Sets and Server Cursors

With all of the options that SQL Server offers for fetching data, it is sometimes difficult to decide what to use and when. Here are some useful guidelines:

  • Default result sets are always the fastest way to get an entire set of data from SQL Server to the client. Look for opportunities in your application where you can use this to your advantage. Batch report generation, for example, generally processes an entire result set to completion, with no user interaction and no updates in the middle of processing.

  • If your program requires updatable cursors, use server cursors. Default result sets are never updatable when using positioned UPDATE or DELETE statements. Additionally, server cursors are better at updating than client-based cursors, which have to simulate a positioned UPDATE or DELETE by constructing an equivalent searched UPDATE or DELETE statement.

  • If your program needs scrollable, read-only cursors, both the ODBC Cursor Library and server cursors are good choices. The ODBC Cursor Library gives you compatible behavior across SQL Server and Informix, and server cursors give you more flexibility as to how much data to fetch across the network at one time.

  • When you use default result sets or ODBC Cursor Library cursors built on top of default result sets, be sure to fetch to the end of a result set as quickly as possible to avoid holding share locks at the server.

  • When you use server cursors, be sure to use SQLExtendedFetch to fetch in blocks of rows rather than a single row at a time. This is the same as array-type fetching in Informix applications. Every fetch request on a server cursor requires a round-trip from the application to the RDBMS on the network.

  • Grocery shopping provides an analogy. Assume you purchase 10 bags of groceries at the grocery store, load one bag into your car, drive home, drop it off, and return to the grocery store for the next bag. This is an unlikely scenario, but this is what you do to SQL Server and your program by making single-row fetches from a server cursor.

  • If your program requires only forward-only, read-only cursors but depends on multiple open cursors on the same connection, use default result sets when you know you can fetch the entire result set immediately into program variables. Use server cursors when you do not know if you can fetch all of the rows immediately.

This strategy is not as difficult as it sounds. Most programmers know when they are issuing a singleton select that can return a maximum of one row. For singleton fetches, using a default result set is more efficient than using a server cursor.

For an example of this technique, see the LIST_STUDENTS function in the Ssdemo.cpp sample SQL Server ODBC program file. Note how a server cursor is requested only if the SELECT statement may return more than one row. Following the execute step, the rowset size is set to a reasonable batch size. This allows the same SQLExtendedFetch loop to work efficiently in either the default result set or the server cursor case.

For more information about cursor implementations, see SQL Server Books Online.

Multiple Active Statements (hstmt) per Connection

The ODBC driver uses a statement handle (hstmt) to track each active SQL statement within the program. The statement handle is always associated with an RDBMS connection handle (hdbc). The ODBC driver manager uses the connection handle to send the requested SQL statement to the specified RDBMS. Most ODBC drivers for Informix allow multiple statement handles per connection. However, the SQL Server ODBC driver allows only one active statement handle per connection when using default result sets. The SQLGetInfo function of this SQL Server driver returns the value 1 when queried with the SQL_ACTIVE_STATEMENTS option. When statement options are set in a way that uses server cursors, multiple active statements per connection handle are supported.

For more information about setting statement options to request server cursors, see SQL Server Books Online.

Data Type Mappings

The SQL Server ODBC driver offers a rich set of data type mappings.

Microsoft SQL Server data type

ODBC SQL data type

binary

SQL_BINARY

bit

SQL_BIT

char, character

SQL_CHAR

datetime

SQL_TIMESTAMP

decimal, dec

SQL_DECIMAL

float, double precision, float(n) for n = 8-15

SQL_FLOAT

image

SQL_LONGVARBINARY

int, integer

SQL_INTEGER

money

SQL_DECIMAL

nchar

SQL_WCHAR

ntext

SQL_WLONGVARCHAR

numeric

SQL_NUMERIC

nvarchar

SQL_WVARCHAR

real, float(n) for n = 1-7

SQL_REAL

smalldatetime

SQL_TIMESTAMP

smallint

SQL_SMALLINT

smallmoney

SQL_DECIMAL

sysname

SQL_VARCHAR

text

SQL_LONGVARCHAR

timestamp

SQL_BINARY

tinyint

SQL_TINYINT

uniqueidentifier

SQL_GUID

varbinary

SQL_VARBINARY

varchar

SQL_VARCHAR

The timestamp data type is converted to the SQL_BINARY data type. This is because the values in timestamp columns are not datetime data, but rather binary(8) data. They are used to indicate the sequence of SQL Server activity on the row.

The Informix data type mappings for the Intersolv DataDirect ODBC driver (shipped with the Informix Client SDK) are shown in the following table.

Informix data type                     ODBC SQL data type
BYTE                                   SQL_LONGVARBINARY
CHAR                                   SQL_CHAR
DATE                                   SQL_TIMESTAMP
DATETIME YEAR TO FRACTION(F)           SQL_TYPE_TIMESTAMP
DATETIME YEAR TO DAY                   SQL_TYPE_DATE
DATETIME HOUR TO SECOND                SQL_TYPE_TIME
DECIMAL                                SQL_DECIMAL
FLOAT                                  SQL_DOUBLE
INTERVAL YEAR(P) TO YEAR               SQL_INTERVAL_YEAR
INTERVAL YEAR(P) TO MONTH              SQL_INTERVAL_YEAR_TO_MONTH
INTERVAL MONTH(P) TO MONTH             SQL_INTERVAL_MONTH
INTERVAL DAY(P) TO DAY                 SQL_INTERVAL_DAY
INTERVAL DAY(P) TO HOUR                SQL_INTERVAL_DAY_TO_HOUR
INTERVAL DAY(P) TO MINUTE              SQL_INTERVAL_DAY_TO_MINUTE
INTERVAL DAY(P) TO SECOND              SQL_INTERVAL_DAY_TO_SECOND
INTERVAL DAY(P) TO FRACTION(F)         SQL_INTERVAL_DAY_TO_SECOND
INTERVAL HOUR(P) TO HOUR               SQL_INTERVAL_HOUR
INTERVAL HOUR(P) TO MINUTE             SQL_INTERVAL_HOUR_TO_MINUTE
INTERVAL HOUR(P) TO SECOND             SQL_INTERVAL_HOUR_TO_SECOND
INTERVAL HOUR(P) TO FRACTION(F)        SQL_INTERVAL_HOUR_TO_SECOND
INTERVAL MINUTE(P) TO MINUTE           SQL_INTERVAL_MINUTE
INTERVAL MINUTE(P) TO SECOND           SQL_INTERVAL_MINUTE_TO_SECOND
INTERVAL MINUTE(P) TO FRACTION(F)      SQL_INTERVAL_MINUTE_TO_SECOND
INTERVAL SECOND(P) TO SECOND           SQL_INTERVAL_SECOND
INTERVAL SECOND(P) TO FRACTION(F)      SQL_INTERVAL_SECOND
INTERVAL FRACTION TO FRACTION(F)       SQL_VARCHAR
MONEY                                  SQL_DECIMAL
SERIAL                                 SQL_INTEGER
SMALLFLOAT                             SQL_REAL
SMALLINT                               SQL_SMALLINT
TEXT                                   SQL_LONGVARCHAR
VARCHAR                                SQL_VARCHAR

Informix ODBC drivers from other vendors can have alternative data type mappings.

ODBC Extended SQL

The ODBC Extended SQL standard provides SQL extensions to ODBC that support the advanced nonstandard SQL feature set offered in both Informix and SQL Server. This standard allows the ODBC driver to convert generic SQL statements to native Informix and SQL Server SQL syntax.

This standard addresses features such as outer joins, predicate escape characters, scalar functions, date/time/timestamp literals, and stored procedure calls. The following syntax is used to identify these extensions:

--(*vendor(Microsoft), product(ODBC) extension *)--
or
{extension}

The conversion takes place at run time and does not require the revision of any program code. In most application development scenarios, the best approach is to write one program and allow ODBC to perform the RDBMS conversion process when the program is run.

Outer Joins

Informix and SQL Server do not have compatible outer join syntax. This incompatibility can be resolved by using the ODBC Extended SQL outer join syntax. The Microsoft SQL Server syntax is the same as the ODBC Extended SQL/SQL-92 syntax; the only difference is the {oj} container.

ODBC Extended SQL and SQL-92:

SELECT STUDENT.SSN, FNAME, LNAME, CCODE, GRADE
FROM {oj STUDENT LEFT OUTER JOIN GRADE ON STUDENT.SSN = GRADE.SSN}

Informix:

SELECT STUDENT.SSN, FNAME, LNAME, CCODE, GRADE
FROM STUDENT, OUTER GRADE
WHERE STUDENT.SSN = GRADE.SSN

Microsoft SQL Server:

SELECT STUDENT.SSN, FNAME, LNAME, CCODE, GRADE
FROM STUDENT LEFT OUTER JOIN GRADE
ON STUDENT.SSN = GRADE.SSN

Date, Time, and Timestamp Values

ODBC provides three escape clauses for date, time, and timestamp values.

Category     Shorthand syntax    Format
Date         {d 'value'}         "yyyy-mm-dd"
Time         {t 'value'}         "hh:mm:ss"
Timestamp    {ts 'value'}        "yyyy-mm-dd hh:mm:ss"

The format of dates has more of an impact on Informix applications than on SQL Server-based applications. Informix expects the date format "DD-MON-YY"; in any other case, the TO_CHAR or TO_DATE functions are used with a date format model to perform a format conversion.

Microsoft SQL Server automatically converts most common date formats, and also provides the CONVERT function when an automatic conversion cannot be performed.

As shown in the table, ODBC Extended SQL works with both databases. SQL Server does not require a conversion function. Nevertheless, the ODBC shorthand syntax can be generically applied to both Informix and SQL Server.

ODBC Extended SQL:

SELECT SSN, FNAME, LNAME, BIRTH_DATE
FROM STUDENT
WHERE BIRTH_DATE < {d '1970-07-04'}

Informix:

SELECT SSN, FNAME, LNAME, BIRTH_DATE
FROM STUDENT
WHERE BIRTH_DATE < '1970-07-04'

Microsoft SQL Server:

SELECT SSN, FNAME, LNAME, BIRTH_DATE
FROM STUDENT
WHERE BIRTH_DATE < '1970-07-04'

Calling Stored Procedures

The ODBC shorthand syntax for calling stored programs supports Microsoft SQL Server stored procedures, and Informix stored procedures and functions. The optional "?=" captures the return value of an Informix function or a SQL Server procedure. The parameter syntax is used to pass values to, and return values from, the called program. In most situations, the same syntax can be applied generically to Informix-based and SQL Server-based applications.

In the following example, the SHOW_RELUCTANT_STUDENTS function is part of the Informix package P1. This function must exist in a package because it returns multiple rows from an SPL cursor. When you call a function or procedure that exists in a package, the package name must be placed in front of the program name.

The SHOW_RELUCTANT_STUDENTS function in the package P1 uses a package cursor to retrieve multiple rows of data. Each row must be requested with a separate call to this function; when no rows remain, the function returns 0. The resulting performance of this sample Informix package and its function might be less than satisfactory. In this example, the SQL Server procedure is more efficient.

Generic ODBC Extended SQL:

{? = call procedure_name[(parameter(s))]}

SQLExecDirect(hstmt1, (SQLCHAR *)"{? = call owner.procedure(?)}", SQL_NTS);

Informix:

SQLExecDirect(hstmt1, (SQLCHAR *)
    "{? = call STUDENT_ADMIN.P1.SHOW_RELUCTANT_STUDENTS(?)}", SQL_NTS);

Microsoft SQL Server:

SQLExecDirect(hstmt1, (SQLCHAR *)
    "{? = call STUDENT_ADMIN.SHOW_RELUCTANT_STUDENTS}", SQL_NTS);

Native SQL Translation

Because of the variety of ODBC drivers for both Informix and SQL Server, you may not always get the same conversion string for the extended SQL functions. To assist with application debugging issues, you might want to consider using the SQLNativeSql function. This function returns the SQL string as translated by the driver.

The following are possible results for an input SQL string that contains the scalar function CONVERT. The column SSN is defined as the type CHAR(9), and is converted to a numeric value.

Original statement:

SELECT {fn CONVERT(SSN, SQL_INTEGER)}
FROM STUDENT

Converted Informix statement:

SELECT TO_NUMBER(SSN)
FROM STUDENT

Converted SQL Server statement:

SELECT CONVERT(INT,SSN)
FROM STUDENT

The Common.cpp Sample Program

The Common.cpp sample program does not take advantage of the ODBC Extended SQL syntax. Rather, it employs a series of views and procedures to hide statements and functions that are not common to both Informix and SQL Server. This program, although written using ODBC, is intended to show how an application programmer can easily overcome any apparent hurdles when trying to write one common program.

These techniques and strategies are best employed in a non-ODBC development environment. If you are using ODBC, consider using the ODBC Extended SQL syntax to overcome any syntactical differences between Informix and SQL Server.

Manual Commit Mode

By default, SQL Server and Informix automatically commit each change as it occurs. This is called autocommit mode in ODBC. If you do not want this to occur, you can use the BEGIN TRANSACTION statement to signal the start of a block of statements comprising a transaction. After this statement is issued, it is followed by an explicit COMMIT TRANSACTION or ROLLBACK TRANSACTION statement.

You can use the SQLSetConnectOption function to place your SQL Server-based application in implicit transaction mode. The SQL_AUTOCOMMIT option must be set to SQL_AUTOCOMMIT_OFF to accomplish this. This code excerpt from the sample programs demonstrates this concept:

SQLSetConnectOption(hdbc1, SQL_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF);

The SQL_AUTOCOMMIT_OFF option instructs the driver to use implicit transactions. The default option SQL_AUTOCOMMIT_ON instructs the driver to use autocommit mode, in which each statement is committed immediately after it is executed. Changing from manual commit mode to autocommit mode commits any open transactions on the connection.

If the SQL_AUTOCOMMIT_OFF option is set, the application must commit or roll back transactions explicitly with the SQLTransact function. This function requests a commit or rollback operation for all active operations on all statement handles associated with a connection handle. It can also request that a commit or rollback operation be performed for all connections associated with the environment handle.

SQLTransact(henv1, hdbc1, SQL_ROLLBACK);
SQLTransact(henv1, hdbc1, SQL_COMMIT);

When autocommit mode is off, the driver issues a SET IMPLICIT_TRANSACTIONS ON statement to the server. Starting with SQL Server 6.5, DDL statements are supported in this mode.

To commit or roll back a transaction in manual commit mode, the application must call SQLTransact. The SQL Server driver sends a COMMIT TRANSACTION statement to commit a transaction, and a ROLLBACK TRANSACTION statement to roll back a transaction.

Be aware that manual commit mode can adversely affect the performance of your SQL Server-based application. Every commit request requires a separate round trip to the server to send the COMMIT TRANSACTION string.

If you have single atomic transactions (a single INSERT, UPDATE, or DELETE immediately followed by a COMMIT), use the autocommit mode.

In the sample programs, manual commit mode has been turned on, even for atomic transactions, to demonstrate how easily a SQL Server-based application can be developed that closely mimics the operation of a similar application designed for the Informix RDBMS.

Developing and Administering Database Replication

This section explains the differences between Informix and Microsoft SQL Server replication support.

Informix                         Microsoft SQL Server
N/A                              Snapshot replication with immediate updating Subscribers
Log-based transaction capture    Transactional replication with immediate updating
                                 Subscribers (better for well-connected Subscribers that
                                 are online when updating); merge replication (better for
                                 mobile, disconnected Subscribers; supports default and
                                 custom conflict resolution)

As its name implies, SQL Server snapshot replication takes a picture, or snapshot, of the published data in the database at a moment in time. Snapshot replication requires less constant processor overhead than transactional replication because it does not require continuous monitoring of data changes on source servers. Instead of copying INSERT, UPDATE, and DELETE statements (characteristic of transactional replication), or data modifications (characteristic of merge replication), Subscribers are updated by a total refresh of the data set. Hence, snapshot replication sends all the data to the Subscriber instead of sending only the changes.

With snapshot or transactional replication, rows in immediate updating articles in SQL Server use a uniqueidentifier column to identify versions. In addition, the triggers generated for immediate updating have been changed from past SQL Server versions, and the trigger generation code has been modified to accommodate queued updating.

Informix and SQL Server both offer transactional replication, a type of replication that marks selected transactions in the Publisher's database transaction log for replication and then distributes them asynchronously to Subscribers as incremental changes, while maintaining transactional consistency.

Informix and SQL Server merge replication allows sites to make autonomous changes to replicated data and, at a later time, merge the changes made at all sites. Unlike Informix, which supports only row-level conflict detection (without SPL support), SQL Server merge replication supports both column-level and row-level conflict detection. That is, you can define a "conflict" to be any change to the same row at two locations, or only a change to the same column or columns at two locations.

SQL Server offers heterogeneous replication. The simplest way to publish data to a heterogeneous Subscriber is to use ODBC or OLE DB and create a push subscription from the Publisher to the ODBC Subscriber. As an alternative, you can create a publication and then create an application with an embedded distribution control; the embedded control implements the pull subscription from the Subscriber to the Publisher. For ODBC Subscribers, the subscribing database has no administrative capabilities regarding the replication being performed.

The chart below compares conflict resolution mechanisms for Informix and SQL Server:

Informix                                           Microsoft SQL Server
Stored-procedure conflict-resolution rule          Priority-based resolution using COM or Transact-SQL
Conflict resolution for "column groups"            Same, using Transact-SQL
(supported through SPL)
Row-level conflict resolution                      Both column-level and row-level conflict resolution
Primary and secondary conflict-resolution rules    N/A
Transaction conflict-resolution rules              N/A

ODBC, OLE DB, and Replication

With Microsoft SQL Server, a distribution server connects to all subscription servers as an ODBC or OLE DB client. Replication requires that the 32-bit ODBC driver be installed on all distribution servers. The SQL Server Setup program automatically installs the necessary driver on Windows NT-based computers.

You do not have to pre-configure ODBC Data Sources for SQL Server subscription servers because the distribution process simply uses the subscriber's network name to establish the connection.

SQL Server also includes an ODBC driver that supports Informix subscriptions to SQL Server. The driver exists only for Intel-based computers. To replicate to Informix ODBC subscribers, you must also obtain the appropriate Informix CLI driver from Informix.

If a password is provided in the Windows NT registry, the Informix ODBC driver connects to Informix without requesting a password. If a password is not provided in the Windows NT registry, you must enter a username and a password for the Informix ODBC data source when specifying the DSN in the New ODBC Subscriber dialog box of SQL Server Enterprise Manager.

The following restrictions apply when replicating to an Informix ODBC subscriber:

  • The SQL Server ranges for float and real data types are different from the Informix ranges.

Drivers for other ODBC subscriber types must conform to the SQL Server replication requirements for generic ODBC subscribers. The ODBC driver:

  • Must be ODBC Level 1 compliant.

  • Must be 32-bit and thread-safe for the processor architecture that the distribution process runs on.

  • Must be transaction capable.

  • Must support the data definition language (DDL).

  • Cannot be read-only.

Migrating Your Data and Applications

This section presents various methods for migrating data from an Informix database to a Microsoft SQL Server database.

Data Migration Using Data Transformation Services

The simplest method of migrating between Informix and SQL Server is to use the Data Transformation Services (DTS) feature in Microsoft SQL Server 2000. The DTS Wizard guides you through moving the data to SQL Server.

The Informix ODBC driver is not supported for use with DTS. The Merant Informix OLE DB provider is supported for DTS imports from Informix, but not DTS exports to Informix. This driver also cannot be used to import meta data. The Intersolv Informix ODBC driver is supported, but with the following restrictions:

  • BLOBs cannot be exported to Informix.

  • When creating new tables on Informix, the DTS Import/Export Wizard will incorrectly map the SQL Server 2000 datetime columns to the Informix 'Datetime year to fraction' data type. Manually change this to the Informix Date type.

  • The DTS meta data import will not import Informix catalog or table information.

Informix Call Level Interface (CLI)

If you have applications that were written by using the Informix Call Level Interface (CLI), you may want to consider rewriting them by using ODBC. The CLI is specific to the Informix RDBMS and cannot be used with Microsoft SQL Server or any other database.

In most cases, the CLI functions map directly to the appropriate ODBC functions, so the program code should require minimal modification. The following example compares the CLI and ODBC statements required to establish a connection to an Informix database.

Informix Call Level Interface:

rcl = SQLConnect(hdbc1,
    (SQLCHAR*) ODBC_dsn, (SQLSMALLINT) SQL_NTS,
    (SQLCHAR*) user_name, (SQLSMALLINT) SQL_NTS,
    (SQLCHAR*) user_password, (SQLSMALLINT) SQL_NTS);

Informix ODBC (identical):

rcl = SQLConnect(hdbc1,
    (SQLCHAR*) ODBC_dsn, (SQLSMALLINT) SQL_NTS,
    (SQLCHAR*) user_name, (SQLSMALLINT) SQL_NTS,
    (SQLCHAR*) user_password, (SQLSMALLINT) SQL_NTS);

Embedded SQL

Many applications are written using the Informix Programmatic Interfaces (ESQL/C, 4GL, and so on). These interfaces support the use of SQL-92 standard embedded SQL. They also include nonstandard Informix programmatic extensions.

Informix embedded SQL applications can be migrated to SQL Server by using the Microsoft Embedded SQL (ESQL) for C development environment. This environment provides adequate but less than optimal control over the performance and the use of SQL Server features compared to an ODBC application.

Some of the Informix ESQL/C features are not supported in Microsoft's ESQL precompiler. If your Informix application makes extensive use of these features, a rewrite to ODBC is probably a better migration choice. These features include:

  • Host array variables.

  • VAR and TYPE statements for data type equivalencing.

  • Support for embedded SQL in C++ modules.

  • Support for embedded SPL or Transact-SQL blocks.

  • Cursor variables.

  • Multithreaded application support.

You can convert your Informix embedded SQL application to the ODBC environment. This migration process is quite easy and offers many advantages. ODBC does not require the use of a precompiler, as does embedded SQL. Consequently, much of the overhead associated with program development is eliminated.

The table shows the approximate relationship between Embedded SQL statements and ODBC functions.

Embedded SQL statement                ODBC function
CONNECT                               SQLConnect
PREPARE                               SQLPrepare
EXECUTE                               SQLExecute
DECLARE CURSOR and OPEN CURSOR        SQLExecute
FOREACH                               SQLFetch
SQLCA.SQLERRD[2]                      SQLRowCount
CLOSE                                 SQLFreeStmt
COMMIT WORK, ROLLBACK WORK            SQLTransact
SQLCA, SQLSTATE                       SQLError
ALTER, CREATE, DROP, GRANT, REVOKE    SQLExecute, SQLExecDirect

The most significant change when converting embedded SQL programs to ODBC involves the handling of SQL statement errors. The MODE = INFORMIX option is often used when developing embedded SQL programs. When this option is used, the SQL Communications Area (SQLCA) is typically used for error handling operations.

The SQLCA structure provides:

  • Informix error codes.

  • Informix error messages.

  • Warning flags.

  • Information regarding program events.

  • The number of rows processed by the most recent SQL statement.

In most cases, you should check the value in the sqlca.sqlcode variable following the execution of each SQL statement. If the value is less than zero, an error has occurred. If the value is greater than zero, the requested statement executed with warnings. The Informix error message text can be retrieved from the sqlca.sqlerrm.sqlerrmc variable.

In ODBC, a function returns a numeric status code that indicates its success or failure. The status codes are defined as symbolic constants, and include SQL_SUCCESS, SQL_SUCCESS_WITH_INFO, SQL_NEED_DATA, SQL_ERROR, and others. It is your responsibility to check these return values after each function call.

An associated SQLSTATE value can be obtained by calling the SQLError function. This function returns the SQLSTATE error code, the native error code (specific to the data source), and the error message text.

An application typically calls this function when a previous call to an ODBC function returns SQL_ERROR or SQL_SUCCESS_WITH_INFO. However, any ODBC function can post zero or more errors each time it is called, so an application may call SQLError after every ODBC function call.

Here are examples of error handling for each environment.

Informix ESQL/C:

EXEC SQL DECLARE C1 CURSOR FOR
    SELECT SSN, FNAME, LNAME FROM STUDENT ORDER BY SSN;
EXEC SQL OPEN C1;
if (sqlca.sqlcode != 0) {
    /* handle error condition; look at
       sqlca.sqlerrm.sqlerrmc for the error description */
}

Informix ODBC:

if (SQLExecDirect(hstmt1,
        (SQLCHAR*)"SELECT SSN, FNAME, LNAME FROM STUDENT ORDER BY SSN",
        SQL_NTS) != SQL_SUCCESS) {
    /* handle error condition; use SQLError for
       SQLSTATE details regarding the error */
}

Informix 4GL and Third-party Applications

If you have developed an application using Informix 4GL and want to use it with SQL Server, consider converting it to Microsoft Visual Basic®. Visual Basic is a powerful development system that works well with both databases. You might also consider other development tools in the Microsoft Visual Studio development system, or PowerBuilder, SQL Windows, and others.

Internet Applications

Microsoft SQL Server includes the Web Assistant Wizard, which generates standard HTML files from SQL Server data. The wizard can configure your Web page so that it is static, updated periodically, or updated when the data is updated. A wizard walks you through the process of creating the Web page.

Conclusion

Microsoft SQL Server 2000 delivers business value measured by a low total cost of ownership, ease of use (wizards, tuning tools, and automated tuning features), as well as integrated data warehousing capabilities. It has garnered some of the highest commendations and awards available from industry reviewers, and has demonstrated its ability to meet the demands of leading enterprises and electronic commerce sites.

This whitepaper has reviewed many of the architectural differences between Informix and Microsoft SQL Server, and has uncovered some of the differences in the terminology used to describe Informix and SQL Server. With proper planning and familiarization with SQL Server, you can migrate your Informix application and start to enjoy the benefits of SQL Server 2000.
