
Appendices

Published: June 21, 2004 | Updated: October 29, 2004
On This Page

Appendix A: Development-Related Topics
Appendix B: Sybase and SQL Server Architectural Differences
Appendix C: Sybase Stored Procedure and Trigger Migration to Microsoft SQL Server
Appendix D: Transact-SQL Behavioral Differences
Appendix E: Transact-SQL Transformation Checklist
Appendix F: Data Migration
Appendix G: Cutover and Fallback Strategies
Appendix H: Sybase Migration Toolkit Installation Guide
Appendix I: Sybase Migration Toolkit User Guide
Appendix J: FreeTDS and unixODBC Installation
Appendix K: ActiveState Perl Installation
Appendix L: Team Roles and Responsibilities
Appendix M: Familiarizing with SQL Server Interfaces and Connectivity

Appendix A: Development-Related Topics

This appendix summarizes the features of the various APIs provided by Sybase and Microsoft. This information is useful for developers needing to migrate applications built using the Sybase APIs to Microsoft® SQL Server™.

Libraries

Sybase and Microsoft originally shared a common implementation of the DB-Library API. In recent years, however, Microsoft and Sybase database technologies have diverged; Sybase has developed new APIs and Microsoft has modified the DB-Library API with SQL Server-specific extensions. This section provides an overview of the different libraries now in use and describes the issues you are likely to encounter when migrating applications from Sybase to Microsoft SQL Server.

Sybase Open Server

The Sybase Open Server technology allows developers to build event-driven server applications that look like a Sybase data server to the client applications that connect to it. The Open Server technology is an extension to the Sybase data server. It is usually a custom program written in the C language. You have to code the Open Server to provide services such as e-mail, interfacing with the mainframe, interfacing with other database systems, making remote procedure calls (RPCs), and processing Transact-SQL statements and other language events. Classic examples of Open Server implementations are found in Sybase’s Backup and Omni servers.

Microsoft SQL Server includes Open Data Services and other application programming interfaces (APIs) that can replace many of the services that the Sybase Open Server provides. When Open Servers are involved in a Sybase migration, a detailed analysis must be performed to determine the exact services that the Open Server provides. After you understand the types of services that the Sybase Open Server provides, you can begin to mimic those services in the appropriate SQL Server technologies. Usually, Microsoft Windows® COM objects as extended stored procedures and linked servers can be used to replace the functionality of the Sybase Open Server technology.
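For example, a linked server definition can stand in for an Open Server gateway to another data source. The following Transact-SQL is a minimal sketch only; the server name, ODBC data source, and login details are hypothetical and would need to match your environment:

```sql
-- Define a linked server that reaches a Sybase instance through the
-- MSDASQL (OLE DB Provider for ODBC) provider; 'SYBASE1' and the
-- 'SybaseDSN' system data source are illustrative names only.
EXEC sp_addlinkedserver
    @server     = 'SYBASE1',
    @srvproduct = 'Sybase',
    @provider   = 'MSDASQL',
    @datasrc    = 'SybaseDSN'

-- Map local logins to a (hypothetical) remote Sybase account.
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = 'SYBASE1',
    @useself     = 'false',
    @rmtuser     = 'sa',
    @rmtpassword = 'password'

-- Queries can then use four-part names or OPENQUERY:
SELECT * FROM OPENQUERY(SYBASE1, 'SELECT name FROM sysobjects')
```

Whether a linked server, an extended stored procedure, or a COM component is the right replacement depends on the analysis of what the original Open Server actually provides.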

For more information about some of the issues that might arise when you replace Sybase Open Server technology, see:

Remote Procedure Calls with Sybase SQL Servers, at http://support.microsoft.com/?id=169283.

Microsoft SQL Server Network Integration Architecture, at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsqlsg/html/msdn_sqlintgr.asp.

Sybase Open Client/CT-Library

Sybase CT-Library was designed to take advantage of the features released in Sybase System 10, and it introduced advanced programming features such as asynchronous programming and a callback event model. To help take advantage of the new features of System 10, Sybase developed a new version of Tabular Data Stream (TDS) called TDS 5.0. This version of TDS is incompatible with Microsoft SQL Server; client applications using TDS 5.0 will not connect to SQL Server or function correctly.

You have a number of choices when migrating CT-Library client applications from UNIX to Windows:

  • You can leave them running under UNIX. In this case you can either rewrite the application and replace the CT-Library calls with the appropriate Open Database Connectivity (ODBC) calls, or make use of FreeTDS to map CT-Library calls to a version of TDS understood by Microsoft SQL Server (FreeTDS is described later in this appendix).

    With the ODBC option, you have to exercise extreme caution because ODBC does not provide the same functionality as CT-Library. This strategy may work with simple applications, but those applications that use the advanced features of CT-Library, such as asynchronous callbacks, may present a difficult challenge for ODBC. You should also note that an increasing number of open source implementations of ADO.NET are becoming available, providing an alternative path for UNIX client applications to access SQL Server running on Windows.

    The FreeTDS option offers a simpler approach, in theory, as applications can continue using the same API. However, FreeTDS is currently incomplete and a number of CT-Library functions are not yet implemented. You will either have to implement any missing functionality yourself or modify the application to avoid using these functions.

  • You can migrate the application to Microsoft® Windows® Services for UNIX (SFU).

    SFU provides a UNIX-like environment that enables customers to take advantage of the many benefits of the Windows platform while still maintaining their UNIX-based legacy applications. FreeTDS and ODBC can be used under SFU in much the same way as under UNIX. The same caveats mentioned earlier regarding FreeTDS and ODBC still apply.

  • You can rewrite these client applications to use the Microsoft Win32® API.

    If you have the resources available, this is the recommended option. The result is an application that is highly tuned for the Windows environment and can take full advantage of the facilities provided by Windows. Rewriting an application should be treated as a new development project; it must be designed, implemented, and tested as thoroughly as any brand new application would be.

Sybase DB-Library

Sybase DB-Library is an older API that is used to write client applications that connect to Sybase data servers. It includes routines to manage the login and database connection process, as well as the submission of SQL statements, result sets, and the handling of application and server errors.

Although recoding the application to use more recent APIs is the recommended approach, the Microsoft implementation of DB-Library provides one possible path for client applications ported from UNIX to Windows. Although some developers consider DB-Library migrations straightforward, there is always the possibility that some Sybase DB-Library calls are not supported by SQL Server. As such, you should exercise extreme caution when considering keeping Sybase DB-Library clients on UNIX. Furthermore, you should consider that Microsoft plans to deprecate the DB-Library API in SQL Server, so client applications that are ported in this manner should not be expected to have a long lifespan.

If the motivation for performing a migration is based solely on cost, it might not be practical to migrate the application; it is preferable to leave the DB-Library applications on UNIX and redirect them to SQL Server. UNIX client applications can be forced to use TDS 4.2 (the most recent version of TDS that is supported by both Microsoft and Sybase) provided that these applications do not use any of the newer features of Sybase. In many cases, this approach is worthwhile, especially because DB-Library applications tend to be existing applications that were written before Sybase System 10, and hence do not rely on the most recent features of DB-Library or TDS 5.0.

To force an application to use TDS 4.2, you must perform the following tasks:

  1. Add the Microsoft SQL Server 2000 server to the Sybase interfaces file.

  2. Add the dbsetversion call to the source file and specify the constant DBVERSION_46, as in:

    dbsetversion(DBVERSION_46);

    For more information about how to use the dbsetversion call, see the Sybase Open Client DB-Library/C Reference Manual at http://manuals.sybase.com/onlinebooks/group-cnarc/cng1110e/dblib/@Generic__BookView/39614.
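The interfaces file entry added in step 1 simply directs the Sybase client libraries to the SQL Server host and port. As an illustration only, a Solaris-style entry might look like the following; the server name, host name, and port are hypothetical, and the exact layout varies by platform:

```
MSSQLPROD
        query tcp ether sqlhost01 1433
        master tcp ether sqlhost01 1433
```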

Applications that use Sybase’s version of Embedded SQL must be ported to the Win32 platform because support for Microsoft Embedded SQL for C (ESQL/C) will also soon be deprecated. You should replace the embedded SQL portions of the application with stored procedure calls, and you should replace the data access portion of these applications with a database connectivity technology such as ADO.NET, Microsoft ActiveX® Data Objects (ADO), or ODBC.

Microsoft DB-Library

After SQL Server version 4.2, Sybase and Microsoft continued to develop and extend DB-Library independently. Microsoft SQL Server 7.0 and Microsoft SQL Server 2000 both include DB-Library (Msdblib3.dll and Msdblib3.lib) so you can still create applications that use DB-Library and have them work against a SQL Server 2000 database; however, all DB-Library functionality was frozen at the Microsoft SQL Server 6.5 level. This means that you cannot use any of the newer features of Microsoft SQL Server (that is, after SQL Server 6.5) within a DB-Library application.

The following list itemizes the limitations of DB-Library applications in Microsoft SQL Server 7.0 and Microsoft SQL Server 2000 that you need to consider when porting UNIX DB-Library applications to Microsoft DB-Library:

  • Microsoft DB-Library does not support the sql_variant data type.

  • Microsoft DB-Library does not support bigint data. bigint data values are returned as float values to TDS 4.2 clients.

  • In Microsoft DB-Library, char, varchar, nchar, nvarchar, binary, and varbinary values longer than 255 bytes are truncated to 255 bytes.

  • In Microsoft DB-Library, uniqueidentifier data types are converted to varbinary(16).

  • Microsoft DB-Library applications cannot use the ntext data type.

  • NULL values in bit columns are retrieved as 0 values in Microsoft DB-Library. Catalog and meta data functions report all bit columns as NOT NULL because NULL was not allowed for bit columns in SQL Server version 6.5 and earlier.

  • In Microsoft DB-Library, varchar empty strings are returned as one blank.

  • Microsoft DB-Library does not support Unicode. nchar and nvarchar values are converted to char and varchar using the non-Unicode Windows code page of the server, with possible loss of extended characters. ntext values cannot be retrieved.

  • Catalog and meta data functions do not report the SQL Server 2000 data types (nchar, nvarchar, ntext, or uniqueidentifier).

  • Microsoft DB-Library does not support column-level collations. SQL Server 2000 always reports the default collation of the instance back to DB-Library clients.

  • Microsoft DB-Library does not support Extensible Markup Language (XML) document processing. Applications attempting to execute SELECT statements with a FOR XML clause will receive an error.

  • The two-phase commit in DB-Library is no longer supported in Microsoft DB-Library. Use Microsoft Distributed Transaction Coordinator (MS DTC) to accomplish simultaneous updates on two servers.

  • Lack of TDS 5.0 support in Microsoft DB-Library means that Microsoft DB-Library does not support the DECIMAL and NUMERIC types (all numeric data types are converted to FLOAT).

  • Microsoft SQL Server 2000 named instances are not supported.

For more information, see "Programming DB-Library for C" at:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dblibc/dbc_pdc00b_589v.asp?frame=true.

FreeTDS

FreeTDS is an open source project, independent of Microsoft and Sybase. FreeTDS implements the various versions of TDS used by Sybase and SQL Server, enabling client applications to link to Sybase and SQL Server databases from platforms where commercial or proprietary drivers and libraries may not be available. As well as the TDS protocol, FreeTDS provides implementations of the DB-Library and CT-Library APIs; FreeTDS allows CT-Library client applications to connect to SQL Server 2000 using TDS 8.0. FreeTDS additionally includes an ODBC driver (built for the unixODBC libraries) that can be used by ODBC client applications to connect to SQL Server and Sybase.

FreeTDS is available from the Web site at http://www.freetds.org. You should note that FreeTDS is currently still under development; although much of the core functionality required by many applications is present, parts of the DB-Library and CT-Library APIs have not yet been implemented — see the Web site at http://www.freetds.org/api_status.txt for more details.
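For reference, FreeTDS reads its server definitions from a freetds.conf file. A minimal entry for a SQL Server 2000 instance might look like the following sketch; the section name and host are hypothetical:

```
[mssql2000]
        host = sqlhost01
        port = 1433
        tds version = 8.0
```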

UNIX Shell Scripts

Many UNIX client applications use Bourne, Korn, and C shell scripts to support a database application. Shell scripts performing regular maintenance tasks are often executed automatically according to a schedule using the UNIX cron utility. Scripts such as these can be transported to Windows running SFU. The benefits of using SFU include the capability to:

  • Run UNIX-based applications and scripts on Windows. SFU provides a powerful, high-performance environment in which to easily run UNIX-based applications and scripts on the Microsoft® Windows NT®, Windows® 2000, and Windows Server™ 2003 operating systems.

  • Use a single enterprise platform. When combined with Windows, SFU provides customers with a single enterprise platform to run all of their Windows, UNIX, and Internet-based applications.

  • Take advantage of existing UNIX expertise. SFU provides UNIX users and system administrators with a familiar environment and set of tools that allow them to take advantage of their existing UNIX expertise.

SFU implements the features of a traditional UNIX operating system, including pipes, hard file links, UNIX networking, and UNIX graphical support through the X Window System. It also provides UNIX and POSIX.2 utilities such as Korn Shell, awk, and vi. The SFU environment is not an emulation; it is a native environment subsystem that integrates with the Windows NT kernel, just as the Win32 subsystem does. Shell scripts and other scripted applications that use UNIX and POSIX.2 utilities will run under SFU.

The main features of SFU include:

  • Korn Shell and C Shell

  • BSD sockets and UNIX networking

  • Memory mapped files

  • SVID IPC mechanisms: message queues, semaphores, and shared memory

  • Pseudo-terminal support

  • Color curses support

  • Full integration with Windows NT security model

  • Full integration with Windows NT file systems

  • More than 300 utilities, including scripting tools such as awk, sed, and Perl5

  • POSIX.2 shell and utilities

  • X11R6 runtime libraries and tools, including X clients such as xterm, twm, xrdb and xlsclients

  • Support for running X Windows applications on remote displays

  • Support for UNIX-style daemons running as Windows NT services

  • The cron job scheduler

  • A multiuser telnetd daemon supports up to 25 simultaneous users logged into the system. You must have the appropriate client license pack for Windows NT Server to comply with licensing requirements.

Perl

Practical Extraction and Reporting Language (Perl) is a common scripting language that UNIX developers frequently use as an alternative to shell scripts. Perl is available on many other platforms, including Microsoft Windows. Perl is also extensible; developers can create Perl modules that provide additional functionality to Perl scripts. Perl scripts can be extremely portable provided the appropriate Perl modules are also installed.

Microsoft recommends using ActivePerl, which can be downloaded from http://www.activestate.com.

Sybperl Module

Sybperl is a Perl module that provides Sybase Open Client extensions, including DB-Library and CT-Library interfaces as well as a simplified interface. However, because it is highly database-specific, production systems generally favor the more generic DBI interface; in general, you should use DBI when developing new Perl applications. Sybperl remains necessary for supporting legacy applications, however. The Sybase Migration Toolkit (SMT) supports both the DBI and Sybperl interfaces.

The Sybperl module allows Perl scripts to incorporate function calls that are similar to those available to developers using the DB-Library and CT-Library APIs. Sybperl can be configured to support different versions of TDS; switching Sybperl to use TDS 4.2 allows Sybperl scripts to access Microsoft SQL Server as well as Sybase, providing a possible migration path for scripts that do not depend upon more recent Sybase features.

The Sybase Migration Toolkit makes extensive use of Perl and the Sybperl module for connecting to Sybase and SQL Server.

The source code for Sybperl can be freely downloaded from the Web site at http://www.peppler.org/downloads/. Version 2.15 is recommended. Documentation is available at http://knight.fcu.edu.tw/perl/sybperl-faq.html.

You can build and install Sybperl for your development environment using one of the following procedures, depending on your development environment:

To install the Sybperl module under UNIX and SFU:

  1. Run the Comprehensive Perl Archive Network tool, cpan.

  2. If this is the first time you have run cpan, it will be necessary to answer some configuration questions, including selecting an appropriate download site.

  3. Run the command install Sybperl. This will download, extract, build, test, and install the Sybperl module.

To install the Sybperl module under Windows:

  1. Under ActivePerl, run the Perl Package Manager, ppm.

  2. Run the command install Sybperl. This will download a prebuilt binary and install it in a default location. It does not attempt to test the package.

Appendix B: Sybase and SQL Server Architectural Differences

This appendix summarizes the key architectural differences between Sybase and Microsoft SQL Server. Originally, Sybase and Microsoft SQL Server shared a common source, and thus have many fundamental architectural similarities.

Structural Differences

At a high level, Sybase and Microsoft SQL Server appear very similar: both servers can manage a collection of databases; both servers hold critical server-wide and security data in the master database; both servers use a model database to provide a default structure for new databases; and both servers provide an additional database called tempdb, which is used to hold temporary results and tables created by the servers themselves as part of ordinary query processing. Both servers expect administrators to create additional user-defined databases containing application-specific tables, views, indexes, stored procedures, triggers, and other objects. Furthermore, both servers use a similar dialect of SQL (Transact-SQL), and support a similar protocol allowing client applications to connect and submit queries. However, below this level the architectures of both products are somewhat different. The following sections discuss these differences.

Multiprocessor Support

Microsoft SQL Server and Sybase both support Symmetric Multiprocessor (SMP) configurations. SQL Server can take advantage of up to 32 processors on a single computer. SQL Server will distribute its tasks to threads running on the available processors. You can control how many processors SQL Server will use through the Processor tab in the SQL Server Properties page in SQL Server Enterprise Manager, as shown in Figure B.1.

Figure B.1 The Processor tab in the SQL Server Properties page


Sybase supports SMP using its Virtual Server Architecture (VSA). VSA comprises multiple database server processes (instead of threads), each running in a cooperative manner on a different processor using a shared everything architecture. Each database server process is referred to as an engine. Client requests are queued and are executed by whichever database engine becomes available to process the request. By default, Sybase is configured to use a single database engine. You can use the sp_configure "max online engines" command to add additional engines.
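For example, to bring additional engines online, a Sybase administrator might run the following; the engine count of 4 is hypothetical and should reflect the number of available processors, and because this parameter is static, the server must be restarted for it to take effect:

```sql
-- Allow Sybase to start up to four engines (illustrative value).
sp_configure "max online engines", 4
```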

System Databases

Apart from the master, model, tempdb, and the user databases, a typical SQL Server installation also includes the following databases:

  • msdb. This database is used to record information about scheduled tasks and jobs that are executed by the SQL Server Agent (the SQL Server job scheduling process).

  • distribution. This database acts as a holding point for data being transferred between publishing and subscribing databases for a SQL Server database that is involved in replication.

A Sybase installation does not have the msdb or distribution databases. However, a typical Sybase installation will include the following system databases:

  • sybsyntax. This database holds help information that is used by the sp_syntax stored procedure to report the syntax of any SQL command or function. (Before SQL Server 2000, SQL Server provided similar functionality through the sp_helpsql stored procedure, although the information it used was stored in the master database.) The sybsyntax database and the sp_syntax stored procedure were removed in Sybase release 12.0.

  • sybsystemdb. This database holds information about transactions and is used during the recovery process. This database is also used by applications performing distributed transactions using the Two-Phase Commit protocol.

  • sybsystemprocs. This database holds system and global stored procedures. Stored procedures created in this database can be accessed from any other database. SQL Server uses the master database for this purpose.

  • sybsecurity. This database is created if auditing is enabled with Sybase. The database contains tables and stored procedures that can trace users’ attempts to access databases and objects.

Files and Devices

The storage systems used by Sybase and SQL Server databases differ significantly.

A SQL Server database comprises one or more physical database files, and one or more transaction log files. For performance and reliability reasons, the transaction log files and database files for a database should ideally be placed on different physical disks. An administrator can use the CREATE DATABASE, ALTER DATABASE, and DROP DATABASE Transact-SQL statements to create, change, and delete a database. SQL Server databases can be configured to grow automatically as required — the files used to hold the database are extended as necessary.
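As a sketch of this model, the following Transact-SQL creates a database with separate, automatically growing data and log files; all names, paths, and sizes here are hypothetical:

```sql
-- Data and log files on different drives, each growing as required.
CREATE DATABASE Sales
ON
    (NAME = Sales_dat,
     FILENAME = 'C:\data\Sales_dat.mdf',
     SIZE = 100MB,
     FILEGROWTH = 10MB)
LOG ON
    (NAME = Sales_log,
     FILENAME = 'D:\log\Sales_log.ldf',
     SIZE = 25MB,
     FILEGROWTH = 10%)
```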

Note  This feature is especially useful for the tempdb database. This database is critical for performing many SQL operations because it is used by SQL Server to hold temporary results. In previous versions of SQL Server, increasing the size of databases, including tempdb, often required manual intervention by the database administrator. If tempdb was too small to hold the intermediate results generated by processing SQL statements, those SQL statements would fail. Using dynamic databases removes this problem.

Sybase databases and transaction logs are created on devices. A device is created using the Sybase DISK INIT command. This command preallocates space on a disk and associates it with a device name. Multiple Sybase databases can share the same Sybase device, and a single device can hold databases and transaction logs (although this is not recommended). As with SQL Server, the CREATE DATABASE, ALTER DATABASE, and DROP DATABASE commands are used to manage databases, although the syntax and options available are different. Extending a Sybase database requires allocating more space on a device, and it can require creating additional devices if all current devices are full (this is a manual process).

Note  The DISK INIT command is still supported by SQL Server 2000 for backward compatibility, although it is recommended that you do not use it for creating new databases.
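By way of comparison, the equivalent Sybase sequence preallocates a device and then creates the database on it. The following is a sketch only; the device names, paths, virtual device number, and sizes are hypothetical, and DISK INIT sizes are traditionally given in 2-KB pages:

```sql
-- Preallocate a 100-MB device (51200 two-kilobyte pages).
DISK INIT
    name = 'data_dev1',
    physname = '/sybase/devices/data_dev1.dat',
    vdevno = 2,
    size = 51200

-- Create a database on that device, with its log on a separate device.
CREATE DATABASE sales
    ON data_dev1 = 100
    LOG ON log_dev1 = 25
```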

System Tables

Sybase and SQL Server both use system tables to hold meta-data for the other objects in the database. Table B.1 shows a comparison between the system tables used by Sybase and SQL Server.

Table B.1: Sybase and SQL Server System Tables

| Sybase Table | SQL Server Table | Description |
| --- | --- | --- |
| sysalternates | N/A | Contains one row for each user alias in the database. |
| sysattributes | N/A | Contains information about the attributes of objects in the database. |
| syscolumns | syscolumns | Contains one row for every column in every table and view, and one row for each parameter for every stored procedure in the database. |
| syscomments | syscomments | Contains user-supplied comments and source code (which can be encrypted) for all views, rules, defaults, triggers, constraints, and stored procedures in the database. |
| sysconstraints | sysconstraints | Contains information about each constraint in the database. |
| sysdepends | sysdepends | Contains information about the dependencies between objects in the database. You can use this information to find out which tables are referenced by a specified stored procedure, for example. |
| sysgams | N/A | Contains information about how space is allocated to objects in the database. |
| N/A | sysfilegroups | Contains information about each filegroup used by the database. Filegroups provide a way to manage the physical files used to hold the database. |
| N/A | sysfiles | Contains information about each file used to store the database. |
| N/A | sysforeignkeys | Contains information about foreign key constraints used to link tables together in the database. |
| N/A | sysfulltextcatalogs | Contains information about any full-text catalogs held in the database. |
| sysindexes | sysindexes | Contains one row for each table and index in the database. |
| N/A | sysindexkeys | Contains information about the key columns for each index in the database. |
| sysjars | N/A | Contains one row for each Java archive file held in the database. |
| syskeys | N/A | Contains one row for each primary, foreign, or common key defined in the database. |
| syslogs | N/A | Contains information recorded in the database transaction log. |
| N/A | sysmembers | Contains information about each user that is a member of a database role. |
| sysobjects | sysobjects | Contains one row for each object in the database. |
| syspartitions | N/A | Contains one row for each page chain of each partitioned table in the database. |
| N/A | syspermissions | Contains information about the permissions granted and denied to users, groups, and roles in the database. |
| sysprocedures | N/A | Contains compiled binary information for each view, rule, trigger, stored procedure, default, and check constraint in the database. |
| sysprotects | sysprotects | Contains information about the privileges that have been granted or denied to users in the database. |
| sysqueryplans | N/A | Contains information about compiled plans used to execute queries. |
| sysreferences | sysreferences | Contains information mapping foreign keys and the columns that they reference in the database. |
| sysroles | N/A | Contains information mapping server roles to local roles in the database. |
| syssegments | N/A | Contains information about how the database uses space allocated on devices. |
| sysstatistics | N/A | Contains statistical information about each indexed column on each user-defined table in the database. |
| systabstats | N/A | Contains statistical information for each table and index in the database. |
| systhresholds | N/A | Contains information about each free-space threshold defined in the database. |
| systypes | systypes | Contains one row for each system and user-defined data type in the database. |
| sysusermessages | N/A | Contains one row for each user-defined message in the database. |
| sysusers | sysusers | Contains one row for each user, group, or role in the database. |
| sysxtypes | N/A | Contains one row for each extended Java-SQL data type defined in the database. |

In Sybase and SQL Server, the master database contains additional system tables as summarized in Table B.2:

Table B.2: Sybase and SQL Server Master System Tables

| Sybase Table | SQL Server Table | Description |
| --- | --- | --- |
| N/A | sysaltfiles | Contains information about the files holding each database. |
| N/A | syscacheobjects | Contains information about how objects are being cached in memory. |
| syscertificates | N/A | Contains one row for each security certificate loaded. |
| syscharsets | syscharsets | Contains one row for each character set and sort order installed on the server. |
| sysconfigures | sysconfigures | Contains one row for each configuration option (some may be out of date if they have been changed since the server was last started). |
| syscurconfigs | syscurconfigs | Contains the current values for the server configuration options (including any that have changed since the server was last started). |
| sysdatabases | sysdatabases | Contains one row for each database created on the server. |
| sysdevices | sysdevices | Contains one row for each database device and backup device created on the server. (On SQL Server, this table is provided for backward compatibility only; use the sysfiles table in each database to obtain this information.) |
| sysengines | N/A | Contains information about each database engine currently online. |
| syslanguages | syslanguages | Contains one row for each language installed with the server (U.S. English is always available, although it never appears in this table). |
| syslisteners | N/A | Contains information about each network protocol that client applications can use to connect to the server. |
| syslocks | syslockinfo | Contains information on each current lock request in the server. |
| sysloginroles | N/A | Contains one row for each login that has been granted a Sybase system role. |
| syslogins | syslogins | Contains one row for each server login account. |
| syslogshold | N/A | Contains information about the oldest active transaction for each database. |
| sysmessages | sysmessages | Contains one row for each system error or warning message that can be returned by the server. |
| sysmonitors | sysperfinfo | Contains performance information. |
| N/A | sysoledbusers | Contains information about user mappings with linked servers. |
| sysprocesses | sysprocesses | Contains one row for each current connection and system process. |
| sysremotelogins | sysremotelogins | Contains information about remote users allowed to invoke stored procedures on the server. |
| sysresourcelimits | N/A | Contains one row for each resource limit defined in the server. |
| syssecmechs | N/A | Contains information about the security services and mechanisms available to the server. |
| sysservers | sysservers | Contains information about the other servers that can be accessed by this server. |
| syssessions | N/A | Contains information about clients that can failover if the server should fail (clients without an entry in this table will be disconnected). |
| syssrvroles | N/A | Contains information about system and user-defined roles. |
| systestlog | N/A | ? |
| systimeranges | N/A | Contains information about timed ranges, used when resource limits are active. |
| systransactions | N/A | Contains information about active transactions. |
| sysusages | N/A | Contains information about how databases use device storage. |

Configuration

SQL Server configuration information is available in the sysconfigures and syscurconfigs tables in the master database. A database administrator can modify SQL Server configuration information using the sp_configure stored procedure, followed by executing the RECONFIGURE command to make any changes operational (do not edit the sysconfigures or syscurconfigs tables directly). Some SQL Server configuration options are dynamic and come into effect immediately, while others, referred to as static options, require SQL Server to be restarted. Furthermore, SQL Server itself modifies many configuration options to tune itself according to the resources available. Table B.3 lists the SQL Server configuration options, identifying those that are dynamic, static, or managed by SQL Server.
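The sp_configure/RECONFIGURE procedure can be sketched as follows; the option and value shown are illustrative, and because max degree of parallelism is a dynamic option, no restart is needed:

```sql
-- Advanced options must be visible before some settings can be changed.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- Change a dynamic option; it takes effect immediately after RECONFIGURE.
EXEC sp_configure 'max degree of parallelism', 4
RECONFIGURE
```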

Table B.3: SQL Server 2000 Configuration Options

Option

Static or Dynamic

Managed by SQL Server

affinity mask

Static

No

allow updates

Dynamic

No

awe enabled

Static

No

c2 audit mode

Static

No

cost threshold for parallelism

Dynamic

No

cursor threshold

Dynamic

No

default full-text language

Dynamic

No

default language

Dynamic

No

fill factor

Static

No

index create memory

Dynamic

Yes

lightweight pooling

Static

No

locks

Static

Yes

max degree of parallelism

Dynamic

No

max server memory

Dynamic

Yes

max text repl size

Dynamic

No

max worker threads

Static

No

media retention

Static

No

min memory per query

Dynamic

No

min server memory

Dynamic

Yes

network packet size

Dynamic

No

open objects

Static

Yes

priority boost

Static

No

query governor cost limit

Dynamic

No

query wait

Dynamic

No

recovery interval

Dynamic

Yes

remote access

Static

No

remote login timeout

Dynamic

No

remote proc trans

Dynamic

No

remote query timeout

Dynamic

No

scan for startup procs

Static

No

set working set size

Static

No

show advanced options

Dynamic

No

two digit year cutoff

Dynamic

No

user connections

Static

Yes

user options

Dynamic

No

nested triggers

Dynamic

No

For detailed information about SQL Server configuration options, see "Setting Configuration Options" in SQL Server Books Online.

Sybase holds server configuration information in a text file, by default called servername.cfg where servername is the name of the Sybase server. The Sybase database engine reads this file whenever it starts up, and then configures itself accordingly. A database administrator can edit this file, although any changes made will not go into effect until Sybase is restarted. Alternately, a database administrator can use the sp_configure stored procedure in conjunction with the RECONFIGURE command. Many of the configuration options are dynamic, and using this mechanism has the advantage of allowing dynamic options to go into effect immediately. Although Sybase and SQL Server share a number of configuration options, the divergence in architecture of the two database engines means that most Sybase configuration options are significantly different from those of SQL Server. For information about Sybase configuration parameters, see the Starting, Stopping, and Configuring SQL Server entry in the Sybase SQL Server Reference Manual, available online at http://manuals.sybase.com/onlinebooks/group-asarc/srg1100e/sqlref/@Generic__BookView/675.

Memory Use

SQL Server and Sybase make extensive use of memory for caching data read from disk. Caching data effectively will reduce the number of physical disk accesses required and improve performance.

SQL Server dynamically acquires and frees memory according to the resources available and the current workload. It is not normally necessary for an administrator to change any settings. However, it is still possible for an administrator to manually change the way in which memory is allocated using the Memory tab in the SQL Server Properties page in SQL Server Enterprise Manager, as shown in Figure B.2.

Figure B.2 The Memory tab in the SQL Server Properties page


Sybase requires an administrator to manually configure how memory is used. Tuning memory use for Sybase is a highly skilled practice. You can invoke the sp_configure command to set parameters such as total cache data size to indicate how much memory Sybase should take for caching data. The data cache can be divided into named caches using the sp_cacheconfig command, and tables can be assigned to specific caches. A named cache can be further divided into buffer pools, supporting I/O operations of specific sizes, using the sp_poolconfig command.
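As a sketch of the Sybase side (the cache, pool, database, and table names here are invented for illustration; consult the Sybase reference manual for the exact syntax of these procedures):

-- Sybase only: create a 100 MB named cache
sp_cacheconfig 'order_cache', '100M'
go
-- Bind a table to the named cache
sp_bindcache 'order_cache', 'salesdb', 'orders'
go
-- Carve a 16K buffer pool out of the named cache for large I/O
sp_poolconfig 'order_cache', '20M', '16K'
go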

SQL Processing

Sybase and SQL Server both use SQL and the procedural extensions usually referred to as Transact-SQL. However, SQL has undergone a number of changes since Sybase first appeared. For reasons of backward compatibility, Sybase retains the capability to use older style SQL statements and behavior that are not always available in SQL Server. Older Sybase databases and applications still use this syntax. This section describes the common differences you are likely to encounter.

Join Syntax

Sybase joins use an older style of syntax in which the join takes place in the WHERE statement. This syntax is supported in SQL Server 2000 for backward compatibility only. It will not be supported in future releases. When migrating a database from Sybase to SQL Server 2000, you should update the joins to an SQL-92–compliant syntax, as shown in Table B.4.

Table B.4: Sybase and SQL Server Join Syntax

Sybase

SQL Server 2000

SELECT a.col1, b.col1
FROM table1 a, table2 b
WHERE a.col1 *= b.col2

SELECT a.col1, b.col1
FROM table1 a
LEFT OUTER JOIN table2 b
ON a.col1 = b.col2

These two queries can return different results because the FROM clause is evaluated before the WHERE clause. For example, if a search condition were added to the WHERE clause, it would be evaluated at different times in the query. In the Sybase syntax, it would be evaluated along with the join. In the SQL-92-standard syntax, it would be evaluated after the join and reduce the result set.
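For example, extending the queries from Table B.4 with a filter on the inner table (col3 is a hypothetical column) shows the difference:

-- Old-style outer join: the filter is evaluated as part of the join,
-- so unmatched rows from table1 are still preserved
SELECT a.col1, b.col1
FROM table1 a, table2 b
WHERE a.col1 *= b.col2
  AND b.col3 = 5

-- SQL-92 syntax: the same condition in the WHERE clause is applied
-- after the join and removes the unmatched (NULL-extended) rows
SELECT a.col1, b.col1
FROM table1 a
LEFT OUTER JOIN table2 b ON a.col1 = b.col2
WHERE b.col3 = 5

-- To reproduce the old behavior, move the filter into the ON clause
SELECT a.col1, b.col1
FROM table1 a
LEFT OUTER JOIN table2 b ON a.col1 = b.col2 AND b.col3 = 5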

Note  For more information, see the "Specifying Joins in FROM or WHERE Clauses" chapter in SQL Server Books Online.

Subquery Behavior

Sybase 4.9.2 and SQL Server 4.2x subqueries exhibit non-SQL-92-standard behavior; subqueries can return duplicate rows.

Sybase System 10 defaults to SQL-92-standard behavior, but the earlier subquery behavior can be set to ON to ease migration. Sybase 10 is backward-compatible to the non-SQL-92 behavior if the SET DUP_IN_SUBQUERY option is set to ON. Sybase 11 and SQL Server 2000 only support SQL-92 subquery behavior. If you are migrating an application from Sybase 10 and the system uses SET DUP_IN_SUBQUERY ON, you must review the Sybase queries so they do not cause errors. If you are migrating from a Sybase version that is earlier than version 10, you also must review queries that have subqueries.
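The difference shows up in queries of the following shape (table names are hypothetical):

-- Under SQL-92 behavior this returns each matching order once.
-- Under the pre-System 10 behavior (or SET DUP_IN_SUBQUERY ON in
-- Sybase 10), a row could be returned once per matching row in
-- order_items, producing duplicates.
SELECT order_id
FROM orders
WHERE order_id IN (SELECT order_id FROM order_items)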

Grouping Results

SQL Server requires that all nonaggregate columns in the select list be named in the GROUP BY clause if it is used. Sybase allows the nonstandard method of not explicitly naming all columns in the GROUP BY statement. The same functionality can be achieved using derived tables in SQL Server, as shown in Table B.5.

Table B.5: Sybase GROUP BY versus SQL Server Derived Table

Sybase

SQL Server 2000

SELECT au_lname, city, count(*) FROM authors GROUP BY city

SELECT au_lname, city, (select count(*) FROM authors WHERE city = a.city) FROM authors a

Server Roles

Sybase and SQL Server both provide roles, but in a different manner. SQL Server has a number of fixed server roles that can be assigned to users. Table B.6 describes these roles:

Table B.6: Fixed Server Roles in SQL Server

Fixed Server Role

Permission

Sysadmin

Can perform any activity in SQL Server

Serveradmin

Can set server-wide configuration options and shut down the server

Setupadmin

Can manage linked servers and startup procedures

Securityadmin

Can manage logins and CREATE DATABASE permissions and read error logs

Processadmin

Can manage processes running in SQL Server

Dbcreator

Can create and alter databases

Diskadmin

Can manage disk files

Bulkadmin

Can perform bulk insert operations

You can see a list of the fixed server roles by executing sp_helpsrvrole, and you can view the specific permissions for each role by executing sp_srvrolepermission.
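For example (the login name is hypothetical):

-- List the fixed server roles
EXEC sp_helpsrvrole
-- Show the permissions granted by a specific role
EXEC sp_srvrolepermission 'securityadmin'
-- Add a login to a fixed server role
EXEC sp_addsrvrolemember 'SomeLogin', 'dbcreator'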

Each database has a set of fixed database roles. Although roles that have the same names exist in each database, the scope of an individual role is only within a specific database. For example, if Database1 and Database2 both have user IDs named UserX, adding UserX in Database1 to the db_owner fixed database role for Database1 has no effect on whether UserX in Database2 is a member of the db_owner role for Database2.

Table B.7 describes the fixed database roles in SQL Server.

Table B.7: Fixed Database Roles in SQL Server

Fixed Database Role

Permission

db_owner

Has all permissions in the database

db_accessadmin

Can add or remove user IDs

db_securityadmin

Can manage all permissions, object ownerships, roles, and role memberships

db_ddladmin

Can issue all DDL statements, but cannot issue GRANT, REVOKE, or DENY statements

db_backupoperator

Can issue DBCC, CHECKPOINT, and BACKUP statements

db_datareader

Can select all data from any user table in the database

db_datawriter

Can modify any data in any user table in the database

Sybase has its own set of server roles: sa_role, sso_role, and oper_role. The sa_role gives system administrator authority to a user; the sso_role allows a user to act as a system security officer, changing passwords and managing the audit system; and the oper_role allows a user to perform operations tasks such as backing up or restoring a database. Sybase also allows developers to create application roles; applications can check that a user request has an appropriate application role before processing any requests.

The Sybase roles are not recognized by SQL Server. In SQL Server, the sysadmin fixed server role has functions that are equivalent to the Sybase sa_role and sso_role. By using the GRANT statement, you can give individual users permissions to perform the operator actions of dumping databases and transactions, but you cannot give them permissions to load databases and transactions.
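For example, assuming a login mapped to a database user named operator_user, the dump-equivalent permissions can be granted as statement permissions in each relevant database:

-- Grant only the backup ("dump") statement permissions.
-- There is no corresponding grantable permission for restoring
-- ("loading") databases or transaction logs.
GRANT BACKUP DATABASE, BACKUP LOG TO operator_user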

The Sybase proc_role function (which verifies that the current user has a specified role) is not supported by SQL Server.

System Stored Procedures

Sybase and SQL Server both make extensive use of system stored procedures for performing common tasks in the server and databases. There are a large number of system stored procedures that vary quite widely between Sybase and SQL Server. For details of Sybase system stored procedures, see the Sybase Reference Manual available online at http://sybooks.sybase.com/onlinebooks/group-as/asg1250e/refman. For information about SQL Server system stored procedures see "System Stored Procedures" in the Transact-SQL Reference in SQL Server Books Online.

The sp_addmessage, sp_dboption, and sp_configure stored procedures are used frequently by database administrators and are both available in Sybase and SQL Server. However, the SQL Server 2000 and Sybase implementations of the sp_addmessage, sp_dboption, and sp_configure system stored procedures have some differences. The information under the following subheadings describes these differences.

sp_addmessage

In Sybase systems, the range for user-defined message numbers starts at 20,000. In SQL Server 2000, the range starts at 50,001, and a severity (from 1 through 25) must also be specified to support alerts.

SQL Server 2000 stores user messages in master.dbo.sysmessages, and Sybase stores them in master.dbo.sysusermessages.
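A side-by-side sketch (the message numbers and text are illustrative):

-- Sybase: message number only, no severity required
-- exec sp_addmessage 20001, 'Customer %1! not found.'

-- SQL Server 2000: user-defined range with a required severity
EXEC sp_addmessage @msgnum = 50001,
                   @severity = 16,
                   @msgtext = 'Customer %s not found.'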

sp_dboption

Table B.8 summarizes the parameters for sp_dboption that are different for Sybase and SQL Server 2000 implementations.

Table B.8: sp_dboption Parameter Differences Between Sybase and SQL Server 2000

Sybase

SQL Server 2000

ABORT TRAN ON LOG FULL

No equivalent

ALLOW NULLS BY DEFAULT

ANSI NULL DEFAULT

AUTO IDENTITY

No equivalent

DDL IN TRAN

SQL Server 2000 allows DDL in transactions

IDENTITY IN NONUNIQUE INDEX

No equivalent

Sybase requires a checkpoint in the affected database after sp_dboption completes, and SQL Server 2000 automatically checkpoints the affected database. Also, SQL Server 2000 allows DDL in transactions without requiring the system administrator to set any server or database options.

sp_configure

You can manage and optimize SQL Server 2000 resources through configuration options by using SQL Server Enterprise Manager or the sp_configure system stored procedure. The most commonly used server configuration options are available through SQL Server Enterprise Manager; all configuration options are accessible through sp_configure.

SQL Server 2000 has internal features for self-tuning and reconfiguring. These features reduce the need to set server configuration options manually. You should consider the effects on your system carefully before setting these options.

Note  The options for sp_configure are quite different in Sybase and SQL Server 2000. Detailing all of the differences is beyond the scope of this guide. For more information about the options for sp_configure, see SQL Server BOL.

sp_configure allows members of the sysadmin fixed server role to set defaults for user options, such as SQL-92 options, although individual connections can change the settings later. The current state of a connection's settings is made visible to it through the @@OPTIONS statement, which returns a numeric value that records the current option settings.
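For example, a connection can test individual bits of @@OPTIONS directly (the bit values below are taken from the user options documentation in SQL Server Books Online):

-- 2 = IMPLICIT_TRANSACTIONS, 32 = ANSI_NULLS, 256 = QUOTED_IDENTIFIER
IF (@@OPTIONS & 32) <> 0
    PRINT 'ANSI_NULLS is ON for this connection'
ELSE
    PRINT 'ANSI_NULLS is OFF for this connection'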

Note  For more information about a stored procedure that returns a character list of the options recorded by @@OPTIONS, read the Microsoft Knowledge Base Article Q156498 at http://support.microsoft.com/default.aspx?scid=kb;en-us;156498.

Other Differences

Table B.9 summarizes the other key differences between Sybase and SQL Server.

Table B.9: Key Differences Between Sybase and SQL Server

Sybase Feature, Function, or Task

SQL Server Equivalent

Comments

Transaction log

Microsoft SQL Server 4.x, 6.x, 7.0, and 2000 all use a transaction log for database recoverability

The transaction log was moved out of the syslogs table in versions 7.0 and 2000 to a separate file structure that can easily be placed on a separate disk subsystem. The transaction log is now a linear file structure, which is more in line with the way transactions are logged, and which provides better performance and greater manageability.

Declaring transactions

SQL Server supports declared transactions, including rollback, savepoints, and the final commit.

The syntax is slightly different; check SQL Server Books Online (BOL) for specifics. SQL Server can be configured to support implicit transactions, in which the server automatically wraps each data modification statement in a begin transaction and commit transaction without requiring them in your Transact-SQL code. If there is a system failure, any uncommitted transactions are automatically rolled back upon system recovery.

Declaring distributed transactions

SQL Server supports distributed transactions through a two-phase commit process that is managed by a service called Distributed Transaction Coordinator (DTC)

DTC is installed as a part of the Windows Server 2003 operating system. SQL Server engages DTC whenever a distributed transaction is declared. Implicit distributed transactions are supported as well.

Remote servers

SQL Server 4.x, 6.x, 7.0, and 2000 all support defining other SQL Servers as remote servers

This feature is included with SQL Server 7.0 and 2000 for backward compatibility. SQL Server 7.0 and 2000 introduced the concept of linked servers to make it easier to query remote servers by using a four-part naming convention through any standard OLE-DB provider or ODBC data source.

Remote stored procedures

SQL Server 4.x, 6.x, 7.0, and 2000 all support remote stored procedure execution.

In SQL Server 7.0 and 2000, a linked server provides an easier way to execute queries on remote databases. Linked servers also support heterogeneous data sources, not just SQL Server databases.

Specifying character sets

SQL Server supports a wide variety of character sets for international and backward compatibility.

SQL Server supports Unicode character sets. With SQL Server 8.0, character sets can be specified at the server, database, and column level.

Specifying sort orders

SQL Server supports a wide variety of sort orders for international and backward compatibility.

With SQL Server 8.0, the sort order can be set at the server, database, and column level.

Data types

Support for a wide range of data types, including char, varchar, float, numeric, money, datetime, timestamp, text, binary, uniqueidentifier (GUID), and so on.

Check with SQL Server BOL for specific information regarding data type definition and use.

Lock types

SQL Server 2000 supports dynamic lock escalation across row-level, page-level, extent-level, table-level, and schema-level locking.

No need to specify lock types. SQL Server requests locks based upon balance between optimal performance and efficient use of resources.

Page sizes

With SQL Server 7.0 and 8.0, data page sizes have been extended from the original size of 2 KB to a newer, more efficient size of 8 KB.

Extent sizes have also been increased in accordance with the new increases in page size.

Indexes

SQL Server 8.0 supports primary keys, clustered indexes, and nonclustered indexes, each of which can be unique or nonunique.

SQL Server 8.0 introduced a new index type called an indexed view, in which the view is materialized with an underlying index. It is ideal for Microsoft SQL Server™ OLAP Services and reporting. In OLTP applications, use indexed views cautiously; they incur index-maintenance overhead for updates, deletes, and inserts.

Primary keys

By default, a primary key column creates a unique clustered index to support the primary key constraint

The clustered value can be overridden or changed.

Altering tables

SQL Server 2000 supports altering tables by adding or removing columns.

Using ALTER TABLE to remove columns discards the data stored in those columns. Use cautiously.

Altering databases

SQL Server 2000 allows the system administrator or database owner to alter the structure of the database by changing the database size, changing the database’s autogrow characteristics, or by adding additional files or filegroups.

SQL Server 2000 uses the concept of files or filegroups in place of Sybase’s use of devices and segments. Database files and filegroups can be grown, or shrunk, and database files and filegroups can be added or removed while the database is in use. See SQL Server BOL for more detailed discussion of these topics.

User-defined data types

SQL Server 2000 supports user-defined data types.

 

User-defined functions

SQL Server 2000 supports user-defined functions.

Check SQL Server BOL for details on implementing User-Defined Functions.

Text and binary data

Text, image, and binary data can be stored in the database. Additional data types are available for defining text, image, and binary data.

There are certain limitations and precautions associated with working with Text, Image, and Binary data. Check SQL Server BOL and Microsoft MSDN® white papers for guidelines regarding implementation issues.

Job scheduling

SQL Server provides an integrated scheduling agent called SQL Server Agent. Windows Server 2003 also provides a scheduling service similar to cron, but SQL Server Agent should be preferred for scheduling database-related tasks.

Provides an integrated environment for scheduling jobs and reviewing job history through SQL Server Enterprise Manager. All job information is maintained in system tables in the msdb system database. A number of system stored procedures are available to view, schedule, enable, disable, and manually run jobs. See SQL Server BOL for additional details regarding SQL Server Agent and job scheduling.

Built-in support for XML

SQL Server 2000 supports XML input and XML output formats.

See SQL Server BOL for information regarding the use of XML data in your applications.

Full-text search capabilities

SQL Server 7.0 and 2000 both include a Full-Text Search Service that fully integrates with the SQL Server service. Full-Text Search is capable of indexing text data maintained in the database.

The Windows Server 2003 operating system provides its own Indexing service, which is available to the database clients through a linked server and OLE-DB technologies for text search capabilities on data that is not maintained in the database.

Alerts and notifications

SQL Server can provide alert and notification services through the SQL Server Agent service. Notification procedures can include e-mail, pager, network message, posting messages to the Application Event Log, or other means as made available by external applications.

Many third-party applications exist that can provide a comprehensive framework for alert and notification services. The SQL Server Agent Alert and Notification services are intended to provide a base set of services that can be utilized out of the box.

Appendix C: Sybase Stored Procedure and Trigger Migration to Microsoft SQL Server

Although Sybase Transact-SQL and Microsoft® Transact-SQL are very similar, they contain subtle differences that must be addressed to port a stored procedure or trigger from Sybase to Microsoft SQL Server™. This appendix describes the procedures you can perform for translating Sybase stored procedures and triggers to make them suitable for SQL Server. Each procedure includes a description of the problem, an example of how the Sybase Transact-SQL code should be translated to the corresponding SQL Server Transact-SQL, and a generalized rule indicating what you should search for in the original Sybase code and what you should replace it with to make the code work with SQL Server.

Although you can perform these procedures manually, in many cases it may make more sense to script the actions that you choose to take to aid repeatability.

Stored Procedure Creation Changes

This section describes the changes you should make to scripts that create stored procedures migrated from Sybase to SQL Server.

Remove Creation of Temporary Tables

In order for Sybase to compile a stored procedure successfully, all of the stored procedure’s dependencies must already be defined in the database. SQL Server does not require dependencies to be present to compile a stored procedure.

Making this change will also make code maintenance easier because all temporary table definitions will be in one place. In Sybase, temporary table definitions can appear in multiple places and are often out of synchronization with each other.

Example:

CREATE TABLE #custom_attributes (
    PL_Nom_Attribute_Id    numeric(20)    NOT NULL /*dmm 10/13/98*/,
    Sequence_Number        int            NOT NULL
)
GO
CREATE PROCEDURE sp_proc @arg1 int, @arg2 int
AS
...
create table #temp ( ... )
...
SELECT Sequence_Number FROM #custom_attributes
...
GO

Change to:

CREATE PROCEDURE sp_proc @arg1 int, @arg2 int
AS
...
create table #temp ( ... )
...
SELECT Sequence_Number FROM #custom_attributes
...
GO

Search for:

CREATE TABLE #* ( *  ) GO

Replace with:

If they are outside of a stored procedure, remove any temporary table definitions (and related indexes) from the script and copy them to another file containing only the definitions of temporary tables. When complete, you will have two scripts; one defining stored procedures and the other defining the temporary tables.

Change Ownership of Stored Procedures to dbo

To make sure that all stored procedures are accessible to all users, the stored procedures should be owned by the database owner (dbo).

Example:

CREATE PROCEDURE sp_test

Change to:

CREATE PROCEDURE dbo.sp_test

Search for:

CREATE PROCEDURE or CREATE PROC

Replace with:

If the procedure name is not already prefixed with dbo, add dbo.

Transactional Changes

This section describes the changes you must make to transaction-handling statements in Transact-SQL code migrated from Sybase to SQL Server.

Remove sp_procxmode

The sp_procxmode stored procedure displays or changes the transaction mode that is associated with stored procedures in a Sybase database. SQL Server does not implement this feature.

Example:

exec sp_procxmode 'dbo.Balancing_Accounting_Load_sp', 'unchained'

Search for:

exec sp_procxmode

Replace with:

Remove the entire line that has the call to sp_procxmode.

Change SET CHAINED to SET IMPLICIT_TRANSACTIONS

Sybase SQL Server and Microsoft SQL Server both implement chained transactions that have implicit starting points.

If chained mode is turned on, Sybase will automatically start a transaction on the following statements:

  • DELETE

  • INSERT

  • OPEN

  • FETCH

  • SELECT

  • UPDATE

SQL Server will use the same statements to begin transactions.

Example:

set chained on
insert into publishers values ('9999', null, null, null)
begin transaction
delete from publishers where pub_id = '9999'
rollback transaction

Change to:

set implicit_transactions on
insert into publishers values ('9999', null, null, null)
begin transaction
delete from publishers where pub_id = '9999'
rollback transaction

Search for:

set chained [on|off]

Replace with:

set implicit_transactions [on|off]
Match all COMMIT TRANSACTION Statements with a BEGIN TRANSACTION Statement

Sybase allows a transaction to be committed without being started, whereas SQL Server requires that a transaction have an explicit starting point. If a stored procedure contains a commit transaction statement, the stored procedure must be scanned to determine where the transaction started and a begin transaction statement must be added.

Example:

update titleauthor
set royaltyper = 65
from titleauthor, titles
where royaltyper = 75
  and titleauthor.title_id = titles.title_id
  and title = 'The Gourmet Microwave'
commit transaction

Change to:

begin transaction
update titleauthor
set royaltyper = 65
from titleauthor, titles
where royaltyper = 75
  and titleauthor.title_id = titles.title_id
  and title = 'The Gourmet Microwave'
commit transaction

Search for:

commit [transaction]

Replace with:

Search backward through the code to find the point at which the transaction started. If the stored procedure uses implicit transactions, find any of the statements that start a transaction. For example: INSERT, UPDATE, and DELETE. At the point at which the transaction should start, add a BEGIN TRANSACTION statement.

Change SET TRANSACTION ISOLATION LEVEL Statements to their SQL Server Equivalents

Sybase uses different identifiers to set the transaction mode. Table C.1 lists Sybase transaction levels and their equivalents in SQL Server:

Table C.1: Sybase Transaction Level and SQL Server Equivalents

Sybase Transaction Level

SQL Server Transaction Level

0

read uncommitted

1

read committed

2

Repeatable read

3

Serializable

Example:

set transaction isolation level 1

Change to:

set transaction isolation level read committed

Search for:

set transaction isolation level

Replace with:

Replace the Sybase number with the equivalent SQL Server value from the table.

Change @@TRANCHAINED to @@OPTIONS

Sybase uses @@TRANCHAINED to determine the type of transaction it is in, whereas SQL Server uses @@OPTIONS, as shown in Table C.2:

Table C.2: Sybase and SQL Server Transaction Options

Sybase

SQL Server

0 (unchained)

(@@OPTIONS & 2 ) = 0

1 (chained)

(@@OPTIONS & 2) = 2

Example:

if @@tranchained = 0
begin
    ...
end

Change to:

if (@@options & 2) = 0
begin
    ...
end

Search for:

@@tranchained

Replace with:

Replace @@tranchained = 0 with (@@options & 2) = 0, and @@tranchained = 1 with (@@options & 2) = 2.
Change @@TRANSTATE to SQL Server Equivalent

Sybase uses @@TRANSTATE to determine whether a transaction is in progress, successful, or cancelled. SQL Server does not have an equivalent value. Replace logic that uses @@TRANSTATE with equivalent @@ERROR checks. The SET XACT_ABORT ON statement can be used to allow SQL Server to roll back transactions automatically when an error occurs.

Table C.3 lists the values that @@TRANSTATE can have.

Table C.3: Sybase @@TRANSTATE and SQL Server @@ERROR Values

Sybase @@TRANSTATE Values

Description

SQL Server Equivalent

0

Transaction in progress: an explicit or implicit transaction is in effect; the previous statement executed successfully.

@@ERROR = 0

1

Transaction succeeded: the transaction completed and committed its changes.

@@ERROR = 0

2

Statement aborted: the previous statement was aborted; no effect on the transaction.

@@ERROR <> 0

3

Transaction aborted: the transaction aborted and rolled back any changes.

@@ERROR <> 0

The SET XACT_ABORT ON statement can be used to cause SQL Server to automatically roll back an entire transaction if there are any errors. For more information, see SQL Server BOL.

Example:

if @@transtate = 1
begin
    ...
end

Change to:

if @@ERROR = 0
begin
    ...
end

Search for:

@@transtate

Replace with:

Examine the usage of @@TRANSTATE and replace it with the SQL Server equivalent.
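Where @@TRANSTATE logic existed only to detect failure and roll back, SET XACT_ABORT ON can often replace it entirely, as in this sketch (the tables are from the pubs sample database and the updates are illustrative):

SET XACT_ABORT ON
BEGIN TRANSACTION
    UPDATE titles SET price = price * 1.1 WHERE type = 'business'
    -- If any statement in this transaction raises a run-time error,
    -- the entire transaction is rolled back automatically and the
    -- batch is terminated; no explicit @@ERROR check is needed here.
    UPDATE sales SET qty = qty + 1 WHERE stor_id = '7066'
COMMIT TRANSACTION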

Language Changes

This section describes the syntax and semantics differences of Transact-SQL between Sybase and SQL Server.

Remove Index Hints

Sybase and SQL Server both implement index hints, which are intended to help the query optimizer execute queries efficiently. However, it is better to let SQL Server make its own optimization choices; index hints are usually unnecessary and can often be detrimental to performance.

Note  Do not remove locking hints.

Example:

select * from publishers (index ndxPub)

Change to:

select * from publishers

Search for:

(index

Replace with:

Remove all index hints from the FROM clauses.

Remove SET FORCEPLAN Statements

The SQL Server query optimizer does not require SET FORCEPLAN, and it performs better if it is not present.

Example:

set forceplan

Search for:

set forceplan

Replace with:

Remove all SET FORCEPLAN statements.

Change CHAR_LENGTH to LEN

Sybase uses CHAR_LENGTH to get the length of a string, whereas SQL Server uses the LEN function.

Example:

@someval = char_length('hello world!')

Change to:

@someval = len('hello world!')

Search for:

char_length

Replace with:

len
Change SUSER_NAME to SUSER_SNAME

Sybase uses SUSER_NAME to get a user's login ID, whereas SQL Server uses SUSER_SNAME.

Example:

update ar_on_account_application
set last_mod_user = suser_name(), last_mod_date = getdate()
from ar_on_account_application a, inserted
where a.ar_on_account_application_id = inserted.ar_on_account_application_id

Change to:

update ar_on_account_application
set last_mod_user = suser_sname(), last_mod_date = getdate()
from ar_on_account_application a, inserted
where a.ar_on_account_application_id = inserted.ar_on_account_application_id

Search for:

suser_name

Replace with:

suser_sname
Change SYB_IDENTITY to IDENTITYCOL

SQL Server and Sybase use the same syntax for defining identity columns. The SQL Server default name for an identity column is IDENTITYCOL, whereas the Sybase default name is SYB_IDENTITY. All references to SYB_IDENTITY must be changed to IDENTITYCOL.

Example:

update authors
set au_lname = 'Sharp'
where syb_identity = 99

Change to:

update authors
set au_lname = 'Sharp'
where identitycol = 99

Search for:

SYB_IDENTITY

Replace with:

IDENTITYCOL
Change @@SQLSTATUS to @@FETCH_STATUS

The values that Sybase uses to determine whether a cursor fetch was successful are different from those used by SQL Server. The use of @@SQLSTATUS also needs to be examined and changed to match the usage of @@FETCH_STATUS in SQL Server. Table C.4 shows the values used by Sybase and SQL Server.

Table C.4: Sybase @@SQLSTATUS and SQL Server @@FETCH_STATUS Values

Sybase @@SQLSTATUS

SQL Server @@FETCH_STATUS

Meaning

0

0

Success

1

-

Fetch caused an error (for example, a type mismatch)

2

-1

End of result set

-

-2

Row deleted from result set

The primary cases to watch for when using @@SQLSTATUS are 0 and 2.

Example:

if @@sqlstatus = 2
begin
    ...
end

Change to:

if @@fetch_status = -1
begin
    ...
end

Search for:

@@sqlstatus [0|2]

Replace with:

@@fetch_status [0|-1]
Change DEALLOCATE CURSOR to DEALLOCATE

Sybase uses DEALLOCATE CURSOR to release a cursor, whereas SQL Server uses DEALLOCATE to perform the same action.

Example:

deallocate cursor @mycursor

Change to:

deallocate @mycursor

Search for:

deallocate cursor

Replace with:

deallocate
Check Usage of Cursors

Sybase considers the scope of a cursor to be within a stored procedure and, as such, allows different stored procedures to open multiple cursors that have the same name. SQL Server considers the scope of a cursor to be the connection.

Make sure that stored procedures close their cursors before exiting.

Example:

declare mycursor cursor for select * from publishers
open mycursor
fetch next from mycursor into @pub_id, @pub_name
while @@sqlstatus = 0
begin
    ...
end
close mycursor

Change to:

declare mycursor cursor for select * from publishers
open mycursor
fetch next from mycursor into @pub_id, @pub_name
while @@fetch_status = 0
begin
    ...
end
close mycursor
deallocate mycursor

Search for:

declare ... cursor

Replace with:

Make sure the cursor is closed by the end of the stored procedure.

Change PRINT Statements to the SQL Server Equivalent

SQL Server does not support parameter substitution in PRINT statements. Replace each PRINT statement that uses parameter substitution with a SELECT statement that builds the message string, followed by an ordinary PRINT statement.

Example:

print 'value %1, %2', @val1, @val2

Change to:

declare @temp_msg varchar( 255 )
select @temp_msg = 'value ' + @val1 + ', ' + @val2
print @temp_msg

Search for:

Any PRINT statement that contains percent (%) characters.

Replace with:

Build the string using a separate SELECT statement and then call PRINT.
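For simple cases, this rewrite can be automated. The sketch below is illustrative only; it assumes a single-quoted format string containing %1, %2, ... placeholders with no embedded quotes, and emits the DECLARE/SELECT/PRINT form shown above:

```python
import re

def convert_print(stmt):
    """Rewrite "print 'fmt %1', @arg" into a SELECT-then-PRINT form."""
    m = re.match(r"print\s+'([^']*)'\s*,\s*(.+)", stmt, re.IGNORECASE)
    if m is None:
        return stmt  # no substitution arguments: leave the statement as-is
    fmt, args = m.group(1), [a.strip() for a in m.group(2).split(",")]
    for i, arg in enumerate(args, start=1):
        fmt = fmt.replace("%%%d" % i, "' + %s + '" % arg)
    expr = "'%s'" % fmt
    # Tidy the empty-string fragments left behind when a placeholder
    # sits at either end of the format string.
    expr = expr.replace(" + ''", "").replace("'' + ", "")
    return ("declare @temp_msg varchar( 255 )\n"
            "select @temp_msg = {}\n"
            "print @temp_msg").format(expr)
```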

Change SET Statements to the SQL Server Equivalents

The SET statement options that are implemented by Sybase and SQL Server have diverged since their respective 4.2 versions. Although the new options may have the same or similar names, their defined characteristics can be different. For example, the full effect of the Sybase ANSINULLS option combines behavior defined by the SQL Server ANSI_NULLS and ANSI_WARNINGS options.

Table C.5 lists the Sybase-specific options that do not have exact SQL Server equivalents, and it lists the SQL Server options that most closely match the Sybase behavior that you may want. This table does not list the options that Sybase and SQL Server share and, therefore, require no changes, or SET statements that have been described earlier in this appendix. The entries that have no equivalent value in SQL Server are the result of the architectural differences between Sybase and SQL Server; these SET statements should be discarded when migrating to SQL Server.

Table C.5: SET Statement Option Differences Between Sybase and SQL Server

  Sybase Option                          SQL Server Option
  ANSINULLS                              ANSI_NULLS and ANSI_WARNINGS
  ANSI_PERMISSIONS                       No equivalent
  ARITHABORT can take overflow or        ARITHABORT does not support options
    truncated options
  ARITHIGNORE can take overflow option   ARITHIGNORE does not support options
  CHAINED                                IMPLICIT_TRANSACTIONS
  CLOSE ON ENDTRAN                       CURSOR_CLOSE_ON_COMMIT
  CHAR_CONVERT                           Set with either ODBC or DB-Library
                                           connect options
  CURSOR ROWS                            No equivalent
  DUP_IN_SUBQUERY (System 10 only)       No equivalent
  FIPSFLAGGER takes ON/OFF               FIPSFLAGGER takes a FIPS level
                                           identifier
  FLUSHMESSAGE                           No equivalent
  PREFETCH                               SQL Server performs read-ahead
                                           automatically when appropriate. Use
                                           the sp_configure stored procedure
                                           with the RA options to tailor the
                                           policy that SQL Server uses. See
                                           SQL Server BOL for details
  ROLE                                   No equivalent
  SELF_RECURSION                         No equivalent
  STATISTICS SUBQUERYCACHE               No equivalent
  STRING_RTRUNCATION                     No equivalent
  TABLE COUNT                            No equivalent

Example:

set ansinull on

Change to:

set ansi_nulls on
set ansi_warnings on

Search for:

set 

Replace with:

If there is no equivalent SQL Server option, delete the statement; otherwise, replace using the appropriate value from Table C.5.

Change NULL Usage to the Equivalent SQL Server Usage

The NULL value in ANSI-standard SQL is defined to be unknown, and not zero or the empty string. Sybase occasionally will coerce a NULL into a value such as zero or the empty string. SQL Server closely follows the ANSI specification when dealing with nulls and in most cases will not automatically coerce the value. This means that if NULL appears in an expression, the value of the expression becomes NULL.

In SQL Server, the expression NULL=NULL yields the value NULL, whereas in Sybase the expression NULL=NULL yields the value TRUE. This can cause result sets to differ between Sybase and SQL Server.

To demonstrate this behavior, use OSQL to execute the following commands in the SQL Server pubs database:

set ansi_nulls on
select count( * ) from pubs..authors where NULL = NULL
set ansi_nulls off
select count( * ) from pubs..authors where NULL = NULL

The first SELECT statement returns a count of 0, because NULL = NULL evaluates to unknown, whereas the second SELECT statement counts every row in the table.

Example:

select ...
where Company_Id = isnull( @v_company_id, Company_Id )

Change to:

select ...
where isnull( Company_Id, -100 ) =
      isnull( @v_company_id, isnull( Company_Id, -100 ) )

Making this change will make the result set on SQL Server match that of Sybase.

Search for:

isnull

Replace with:

Make sure that expressions that use ISNULL are written so that NULL cannot appear anywhere in the statement.

As you use ISNULL, be aware that ISNULL requires that the replacement value be of the same type as the value being checked.

Differences in the Operation of the SUBSTRING Function

The function SUBSTRING behaves differently in Sybase and SQL Server.

Example:

In Sybase, if the starting index for the substring is less than 1, the following expression returns NULL:

substring('xyz', 0, 2)

In SQL Server, the same expression returns 'x'.

Note  It is not common to need substrings that start before the beginning of the main string; such a starting index frequently indicates a programming error.

Search for:

substring

Replace with:

Verify that the starting point for the substring is not less than 1.
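A simple scan can surface the suspect calls for review. The following Python sketch (illustrative; it only checks literal numeric start positions, and the first argument must not itself contain commas or parentheses) flags SUBSTRING calls whose start index is less than 1:

```python
import re

def flag_suspect_substrings(sql):
    """Return SUBSTRING calls whose literal start index is less than 1.

    Calls with computed start positions or nested function calls in the
    first argument are not matched and need manual review.
    """
    suspects = []
    pattern = r"substring\s*\(\s*([^,()]+),\s*(-?\d+)\s*,"
    for match in re.finditer(pattern, sql, flags=re.IGNORECASE):
        if int(match.group(2)) < 1:
            suspects.append(match.group(0))
    return suspects
```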

Using Table Aliases

The rules governing the use of table aliases differ between Sybase and SQL Server. If a table is aliased in a FROM clause, the alias must be used; in a SELECT statement, all of the columns must be prefixed by the table alias. In a DELETE statement, if the table in the first FROM clause is aliased in the second FROM clause, the alias must also be used in the first FROM clause. INSERT and UPDATE statements behave like DELETE statements.

Example:

delete from #r1_result_total
from #r1_result_total r, #r1_result_total r2
where r.Deal_Name = r2.Deal_Name
and r.Pipeline_Id = r2.Pipeline_Id
and r.Pipeline_Point_Id = r2.Pipeline_Point_Id

Change to:

delete from r
from #r1_result_total r, #r1_result_total r2
where r.Deal_Name = r2.Deal_Name
and r.Pipeline_Id = r2.Pipeline_Id
and r.Pipeline_Point_Id = r2.Pipeline_Point_Id

Search for:

INSERT, UPDATE, DELETE, and SELECT statements

Replace with:

Check to see if table aliases are being used and, if they are, make sure all columns are prefixed with the correct alias.

SELECT Statements with GROUP BY Clauses

Sybase does not require columns in the SELECT clause to be in the GROUP BY column list, whereas SQL Server does. This only affects statements using the GROUP BY clause.

Example:

use tempdb
go
create table #groupby ( A int, B int )
go
insert into #groupby values( 0, 1 )
insert into #groupby values( 0, 2 )
insert into #groupby values( 1, 1 )
insert into #groupby values( 1, 2 )
select A, B from #groupby
select A, B from #groupby group by A   -- works on Sybase but not on SQL Server

Search for:

group by 

Replace with:

SQL Server requires that the nonaggregate columns in the SELECT clause also appear in the GROUP BY clause. Rewrite the statement so that it works on SQL Server.

Change != to <>

Sybase accepts both != and <> to mean "not equal," whereas SQL Server uses only <>.

Example:

if @@error != 0
begin
    ...
end

Change to:

if @@error <> 0
begin
    ...
end

Search for:

!=

Replace with:

<>
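This is a straight textual substitution, so it is trivially scriptable. A minimal Python sketch (deliberately naive: it also rewrites != inside string literals and comments, so diff the output before applying it):

```python
def convert_not_equal(sql):
    """Replace the Sybase != operator with the portable <> form."""
    # Naive string replacement; does not skip quoted literals or comments.
    return sql.replace("!=", "<>")
```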
Change Double Quotes to Single Quotes

SQL Server uses single quotes to wrap strings, whereas Sybase accepts both double and single quotes.

Example:

select Col1 from Table1 where Table1.Col1 = "ColName"

Change to:

select Col1 from Table1 where Table1.Col1 =  'ColName'

Search for:

"

Replace with:

'
Differences in the Behavior of the CHARINDEX Function

Description:

There have been cases observed in which CHARINDEX uses a column value to search a constant string. This apparently works on Sybase, but fails on SQL Server.

Part of the issue appears to be the difference in handling the char type. SQL Server will pad values (using spaces on the right side) in a column that has been declared as char not null. The Sybase documentation indicates that Sybase handles char values the same way; however, CHARINDEX does not work in the same manner.

Generally, switching the expression to use the following form works on both platforms:

charindex(constant string, column value[, starting point] )

Search for:

charindex

Replace with:

If the first argument to CHARINDEX is a column, and the second is a constant expression or variable (for example, @val), swap the arguments so the column is the second argument.

If both arguments are columns, constants, or variables, leave them alone.

User-defined Errors

Sybase allows user-defined errors to start at 20000, whereas SQL Server requires that user-defined errors begin at 50000.

Example:

RAISERROR 22359 'Error Creating Accounting Arrangement Quantity Set Records'

Change to:

RAISERROR 122359 'Error Creating Accounting Arrangement Quantity Set Records'

Search for:

RAISERROR

Replace with:

Change the error value by prefixing it with a 1. For example, 22359 will become 122359.
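The renumbering rule above can be applied mechanically. The Python sketch below (illustrative; it only handles the numeric `RAISERROR nnnnn` form shown in this appendix) prefixes any error number below 50000 with a 1 and leaves already-valid numbers alone:

```python
import re

def renumber_raiserror(sql):
    """Prefix user-defined RAISERROR numbers below 50000 with a 1."""
    def repl(match):
        number = match.group(2)
        if int(number) < 50000:
            number = "1" + number   # e.g. 22359 -> 122359
        return match.group(1) + number
    return re.sub(r"(raiserror\s+)(\d+)", repl, sql, flags=re.IGNORECASE)
```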

Replace the noholdlock Lock Hint

Sybase has a locking hint called noholdlock. This hint tells the database server to ignore the current isolation level for the current query, revert to isolation level 0, and not to get any type of lock on the table. This allows the query to do dirty reads.

SQL Server uses the locking hint NOLOCK to achieve the same effect.

Example:

SELECT ...
FROM Statement_Group SG noholdlock
    ,Service_Statement_Detail SSD noholdlock
    ,Service_Statement SS noholdlock
    ,Period P noholdlock
    ,Price_Component PC noholdlock

Change to:

SELECT ...
FROM Statement_Group SG with (nolock)
    ,Service_Statement_Detail SSD with (nolock)
    ,Service_Statement SS with (nolock)
    ,Period P with (nolock)
    ,Price_Component PC with (nolock)

Search for:

noholdlock

Replace with:

with (nolock)
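Because the hint is a standalone keyword, the substitution scripts cleanly. A minimal Python sketch (illustrative function name):

```python
import re

def convert_noholdlock(sql):
    """Swap the Sybase noholdlock hint for SQL Server's with (nolock)."""
    return re.sub(r"\bnoholdlock\b", "with (nolock)", sql,
                  flags=re.IGNORECASE)
```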
Inserting Oversize Values

Sybase allows oversized values to be inserted into columns without generating an error. It quietly truncates the insert value to match the column definition. SQL Server 2000 does not allow oversized values to be inserted into a column, and it will generate an error if you attempt to do so.

Example:

Assume that the au_fname column in the table named authors is 20 characters long:

insert into authors( au_id, au_lname, au_fname )
values( '123', 'Sharp', 'CuddlesTheMaskedAvenger' )

Change to:

insert into authors( au_id, au_lname, au_fname )
values( '123', convert( varchar( 40 ), 'Sharp' ),
        convert( varchar( 20 ), 'CuddlesTheMaskedAvenger' ) )

Search for:

insert 

Replace with:

Make sure that the values being inserted into the column are not larger than the column’s definition. If necessary, use the convert function to truncate the data.

Remove Constraint Names on Temporary Tables

Constraint names must be unique in a SQL Server database. Specifying an explicit constraint name on a temporary table can lead to name conflicts when multiple sessions create the same table. If you remove the constraint name, SQL Server assigns a unique name automatically.

Example:

CREATE TABLE #splyr301_1a
( Supply_Statement_Id  int       constraint pk_unq_ss_id primary key
 ,Statement_Group_Id   int       NULL
 ,Currency_Id          char(4)   NULL
 ,Buyer_Contact_Id     int       NULL
 ,Buyer_Phone          char(10)  NULL
)

Change to:

CREATE TABLE #splyr301_1a
( Supply_Statement_Id  int       primary key
 ,Statement_Group_Id   int       NULL
 ,Currency_Id          char(4)   NULL
 ,Buyer_Contact_Id     int       NULL
 ,Buyer_Phone          char(10)  NULL
)

Search for:

constraint

Replace with:

Remove the constraint name.
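Stripping the names can be scripted with a simple pattern. The Python sketch below is illustrative and intended for temporary-table DDL only; it drops the "constraint <name>" prefix while leaving the constraint definition itself in place:

```python
import re

def strip_constraint_names(ddl):
    """Drop "constraint <name>" prefixes so SQL Server auto-names them.

    The constraint definition (primary key, unique, and so on) is left
    untouched; review the output before applying it to permanent tables.
    """
    return re.sub(r"\bconstraint\s+\w+\s+", "", ddl, flags=re.IGNORECASE)
```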

Appendix D: Transact-SQL Behavioral Differences

Under the conditions used in the following examples, the Transact-SQL functions behave differently on Sybase than they do on Microsoft® SQL Server™. Each table represents a Transact-SQL function, a statement, and that statement's behavior in Sybase and SQL Server.

Table D.1: CHARINDEX

  Statement                    Sybase   SQL Server
  CHARINDEX('ABC', NULL)       0        NULL
  CHARINDEX('ABC', 'DEFABC')   4        4
  CHARINDEX(NULL, 'DEFABC')    0        NULL

Table D.2: REPLICATE

  Statement                Sybase   SQL Server
  REPLICATE('ABC', 0)      NULL     ''
  REPLICATE('ABC', NULL)   NULL     NULL

Table D.3: RTRIM

  Statement        Sybase   SQL Server
  RTRIM('     ')   NULL     ''
  RTRIM('')        NULL     ''
  RTRIM(NULL)      NULL     NULL

Table D.4: CHAR_LENGTH versus LEN and DATALENGTH

  Statement                     Sybase   SQL Server
  CHAR_LENGTH('     ')          5        -
  CHAR_LENGTH('')               1        -
  CHAR_LENGTH(RTRIM('     '))   NULL     -
  LEN('     ')                  -        0
  LEN('')                       -        0
  LEN(RTRIM('     '))           -        0
  DATALENGTH('     ')           5        5
  DATALENGTH('')                1        0
  DATALENGTH(RTRIM('     '))    NULL     0

A dash indicates that the function does not exist on that platform.

Table D.5: SUBSTRING

  Statement                   Sybase   SQL Server
  SUBSTRING('ABCDE', 10, 1)   NULL     ''
  SUBSTRING('ABCDE', 0, 3)    NULL     'AB'
  SUBSTRING('ABCDE', 1, 0)    NULL     ''

Table D.6: NULL String Concatenated with a Value

  Statement      Sybase   SQL Server
  NULL + 'ABC'   'ABC'    NULL
  NULL + NULL    NULL     NULL

Appendix E: Transact-SQL Transformation Checklist

You must make the following changes to your Sybase database and applications before migrating to Microsoft® SQL Server™ 2000.

  1. Change references to chained transaction mode to either unchained transactions or SQL Server 2000 implicit transactions. Change @@TRANCHAIN references to @@OPTIONS. Change @@TRANSTATE references to @@ERROR handling logic.

  2. Convert ROLLBACK TRIGGER statements to SQL Server savepoints.

  3. Change transaction isolation levels from Sybase numeric-level identifiers to SQL Server 2000 string-based identifiers.

  4. Move user-supplied stored procedures from the sybsystemprocs database to the master database.

  5. Delete (preferably) or change index and locking optimizer hints to SQL Server 2000 format.

  6. Change permanent temporary tables to global tables.

  7. Change range and add severity to user-defined messages.

  8. Remove:

    • Arguments and argument substitution from the PRINT statement.

    • Sybase server roles.

    • User-defined data cache references.

    • Thresholds.

    • Table partitioning.

  9. Make required syntax changes:

    • Change DUMP and LOAD statements to BACKUP and RESTORE statements.

    • Change @@SQLSTATUS to @@FETCH_STATUS when processing cursors.

    • Change the default name of identity columns in a table to IDENTITYCOL.

    • Modify or delete Database Consistency Checker (DBCC) statements.

    • Update any RAISERROR statements to use SQL Server syntax and change the value specified for messages to values greater than 50,000.

  10. Ensure that the tempdb database is of sufficient size to hold the temporary objects created by stored procedures and queries.

Optionally, you can choose to make the following changes to your Sybase database and applications before migrating to SQL Server 2000:

  • Change the tuning options for read-ahead cursors to values that are suitable for SQL Server.

  • Change scrollable server cursors.

  • Encrypt stored procedures in syscomments.

  • Replace nested IF statements with the CASE statement.

  • Use remote procedure calls (RPCs) with result sets in INSERTS.

  • Use extended stored procedures for mail notification, paging, and scheduling.

  • Log user messages to the Windows application log or the SQL Server error log by using xp_logevent.

  • Make use of Microsoft Windows-integrated security and trusted connections.

  • Use SQL Server Agent to schedule automatic maintenance tasks, alerts, and replication.

  • Change the strings used by EXECUTE statements.

  • Make use of stored procedures that can run automatically when SQL Server starts up, to perform initialization tasks.

  • Change the SELECT statement to use SQL-92-style joins (INNER JOIN, CROSS JOIN, LEFT OUTER JOIN, RIGHT OUTER JOIN, FULL OUTER JOIN).

Appendix F: Data Migration

Data transfer can be a complex task. The technique that you choose should depend on the amount of data you have to move and the window of time available to complete the task. The following sections outline some of the tools that you can use.

Use Sybase BCP and Microsoft Data Transformation Services (DTS)

To move large amounts of data from a UNIX Sybase data server, you could employ the following strategy:

  1. Use the Sybase BCP utility to extract the table data in character mode, using CRLF as the row delimiter.

  2. Configure Sybase BCP to split each table into a number of equally sized files. For example, you can configure BCP to start and end at a certain row and set the number of rows to extract. At the end of this process, you should have multiple files that together represent the entire extracted table.

  3. Use File Transfer Protocol (FTP), NFS, or Samba to transfer these files to a designated SQL Server host.

  4. Create a DTS package that will use BULK INSERT in parallel on the multiple data files.
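The file-splitting in step 2 can also be performed after extraction. The following Python sketch is illustrative (the chunk file names and layout are assumptions, not part of any toolkit); it splits a character-mode BCP extract, with CRLF row delimiters, into roughly equal files suitable for parallel BULK INSERT:

```python
import os

def split_bcp_file(path, parts, out_dir="."):
    """Split a CRLF-delimited BCP extract into `parts` roughly equal files."""
    with open(path, "rb") as source:
        rows = source.read().split(b"\r\n")
    if rows and rows[-1] == b"":
        rows.pop()                                # drop empty trailing record
    per_file = max(1, -(-len(rows) // parts))     # ceiling division
    chunk_paths = []
    for start in range(0, len(rows), per_file):
        chunk = os.path.join(out_dir, "chunk_%03d.dat" % (start // per_file))
        with open(chunk, "wb") as out:
            out.write(b"\r\n".join(rows[start:start + per_file]) + b"\r\n")
        chunk_paths.append(chunk)
    return chunk_paths
```

Each resulting file can then be loaded by its own BULK INSERT task inside the DTS package.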

Use the DB-Library BCP API

This strategy involves writing a C program that uses the BCP API of DB-Library. This approach is very flexible and allows you to transform the data as it is retrieved from the database and massage it into a form suitable for inserting into SQL Server. In various tests, this has proven to be the fastest way to move data from a Sybase database to SQL Server. In fact, it is possible to develop a generalized BCP-based copy program that copies data from one server to another by reading the schema from the source server when the target has an identical schema. See the assessment recommendations in the Planning chapter.

Use DTS to Migrate Tables and Data

DTS enables importing and exporting data between multiple heterogeneous sources that use an OLE DB-based architecture (such as Microsoft Excel spreadsheets and flat text files), and it enables transferring databases and database objects (for example, indexes and stored procedures) between multiple computers running SQL Server 2000. You can also use DTS to transform data so it can be used more easily to build data warehouses and data marts from an online transaction processing (OLTP) system.

Using the DTS Import and DTS Export Wizards, you can interactively create DTS packages that use OLE DB and ODBC to import, export, validate, and transform heterogeneous data. Using the wizards, you can also copy schema and data between relational databases. To use these wizards, you must have an ODBC or OLE DB driver for your Sybase server on the computer on which you are running DTS.

To transfer your Sybase data into SQL Server using the DTS Import and DTS Export Wizards:

  1. Create an ODBC data source for the Sybase database.

  2. Start SQL Server Enterprise Manager. On the Tools menu, point to Data Transformation Services, and then click Import Data.

  3. In the Choose a Data Source dialog box, select Other (ODBC Data Source) as the Source, and then select the DSN that corresponds to your Sybase data source.

  4. In the Choose a Destination dialog box, select Microsoft SQL Server OLE DB Provider, select the database server, and then select the required authentication mode for the selected database server.

  5. In the Specify Table Copy or Query dialog box, select Copy tables and views from the source database.

  6. In the Select Source Tables dialog box, click Select All. Click the gray box found in the Transform column of the Select Source Tables dialog box to change column names, data types, nullability, size, precision, and even to write code to make unique transformations to your data before importing data into SQL Server.

  7. Run the data migration package immediately or at a later time. The DTS Import and DTS Export Wizards show you the progress and status of the data migration.

General Guidelines

When trying to determine the most efficient method for loading a database, take into account the volume of data to be loaded. The bcp utility and BULK INSERT statement provide the most efficient method of loading large data sets. In situations in which a small to moderate amount of data is involved, consider using the DTS Import and DTS Export Wizards.

Load Operation Performance

Enhancing the performance of the load process is important. The faster the load process runs, the shorter the time required to bring the server online. Several options control how data is bulk copied into SQL Server using the bcp utility or the BULK INSERT statement, including the following:

  • Using logged and nonlogged bulk copies.

  • Using the bcp utility for parallel data loading.

  • Controlling the locking behavior.

  • Using batches.

  • Ordering the data in the data files.

Two factors determine which of these options can or should be used to increase the performance of the load process:

  • The volume of existing data in the table compared to the amount of data to be copied into the table.

  • The number and type of indexes on the table.

Additionally, these factors depend on whether data is bulk copied into the table from a single client or in parallel from multiple clients. Loading data in parallel gives the best performance on a multiprocessor server.

Loading Data in Parallel

If SQL Server is running on a server that has more than one processor, and the data to be bulk copied into the table can be partitioned into separate data files, it is recommended that the data be loaded into the same table from multiple clients in parallel, thereby improving the performance of the bulk-copy operation. When copying data into a table from multiple clients, consider the following:

  • All indexes on the table must be dropped first and then recreated on the table. Consider recreating the secondary indexes in parallel by creating each secondary index from a separate client at the same time.

  • Using ordered data and the ORDER hint will not affect performance because the clustered index is not present during loading.

  • Data must be partitioned into multiple input files.

  • Use the TABLOCK hint, which causes a table-level lock to be taken for the duration of the bulk-copy operation.

  • Specify a large batch size using the ROWS_PER_BATCH hint. A single batch representing the size of the entire client file is recommended.

  • Set the database option select into/bulkcopy to TRUE to enable nonlogged operations.

    Note  For more information about the bulk copy operations and on improving its performance, see "Optimizing Bulk Copy Performance" at http://msdn.microsoft.com/library/en-us/optimsql/odp_tun_1a_5gyt.asp?frame=true, and "Bulk Copy Performance Considerations" at http://msdn.microsoft.com/library/en-us/adminsql/ad_impt_bcp_5zqd.asp.

Appendix G: Cutover and Fallback Strategies

This appendix describes three common strategies that you can use in a production environment for "cutting over" to the new, migrated SQL Server database from the original Sybase database. The process for "falling back" to the Sybase database is also described, should events necessitate such action.

The strategy that you use for transferring production to the new database will depend on a number of factors, the most important of which is the amount of downtime (if any) available to perform the operation. If the need to fallback to the original Sybase database arises, you must have appropriate plans in place for synchronizing the contents of the Sybase database with any changes applied to the SQL Server database while it was operational. The two databases become out of synchronization as soon as the first business transaction flows through the system. Therefore, to perform a complete fallback, you must have a plan that can bring the Sybase database up to the same state as the SQL Server database.

The following tools and techniques can be used to synchronize the databases:

  • DTS package. A DTS package (or packages) can be developed to move data from Microsoft SQL Server 2000 to the Sybase production server.

  • Replication. Replicate the data from the SQL Server to the Sybase Server.

  • Backups. Back up SQL Server 2000 and load the results into the Sybase database. This technique requires that you use a backup mechanism in SQL Server that is compatible with Sybase.

  • bcp. Data from the migrated production environment can be extracted from the SQL Server database using the bcp utility. That data can then be copied back into the existing Sybase environment.

Parallel Cutover with Replication Strategy

Microsoft® SQL Server™ 2000 offers the capability to replicate data to any heterogeneous data source that provides a 32-bit ODBC or OLE DB driver for Microsoft® Windows® Server 2003. Additionally, SQL Server 2000 can subscribe to transactional data that is replicated from Sybase.

Implementing transactional replication using data published by a Sybase data source requires third-party software (such as Sybase Replication Server), or applications built using the SQL-DMO API and the Replication Distributor Interface. For more information about configuring SQL Server 2000 for heterogeneous replication, see the "Replication and Heterogeneous Data Sources" chapter in SQL Server BOL.

Performing a parallel cutover with replication is an appropriate approach for large production databases which have to support a high volume of transactions and complex Transact-SQL queries 24 hours a day, seven days a week.

The common tasks associated with performing parallel cutover with replication include:

  1. Perform premigration tasks:

    1. Identify physical drive configuration for new SQL Server 2000 production environment.

    2. Configure drives.

    3. Configure network settings.

  2. Notify the user community of the execution of the migration process.

  3. Prepare the Sybase server for migration:

    1. Shut down replication.

    2. Lock out logins except for the sa login.

    3. Back up the Sybase database.

  4. Set up SQL Server 2000 in the target production environment:

    1. Shut down SQL Server database in staging/development environment.

    2. Lock all developers out of the environment except sa.

    3. Back up staging/development SQL Server database.

    4. Create the production database.

    5. Restore the production database from the staging/development SQL Server database.

    6. Connect interfaces to the production environment.

    7. Update the interfaces.

    8. Run DBCC commands (checktable, checkalloc, and so on).

  5. Run user acceptance tests on the new target production server.

  6. Set up replication to the existing Sybase environment to maintain both datasets.

  7. Shift production users to the new SQL Server 2000.

  8. Monitor the new production environment for stability.

Problems

As mentioned earlier, the parallel cutover with replication strategy provides an easy way to fall back to the Sybase server, but it can be the most complex method of performing a migration cutover. The problems that should be tested and taken into consideration include:

  • The configuration of transactional replication between Sybase and SQL Server 2000.

  • The performance overhead and maintenance of replication between the two servers.

  • The capability to ensure the data set being validated in the new production environment is equivalent to the data set found on the Sybase production server.

Fallback Strategy

The parallel cutover with replication strategy consists of establishing heterogeneous replication between Sybase and SQL Server 2000. Considering that the synchronization of data between the systems is dependent upon the replication process, the following should be taken into consideration when developing a fallback strategy for a parallel cutover migration:

  • The amount of time for all users and clients to fall back to Sybase after you take the target SQL Server offline.

  • Set up heterogeneous replication back to the Sybase environment for data synchronization.

  • Consider making daily or frequent backups of Sybase and SQL Server environments. (This should include the operating system level as well as the database backup.)

  • Consider the amount of time it takes to restore the system and database from backup.

  • Backup storage location — backup material should be easily and quickly accessible.

Cutover Without Replication Strategy

Performing a cutover without replication migration with SQL Server is an appropriate approach for small to medium-sized production databases in which the organization has appropriate downtime to perform a new build and migration of a new database environment.

The common tasks associated with performing a cutover without replication include:

  1. Perform premigration tasks:

    1. Identify physical drive configuration for new SQL Server 2000 production environment.

    2. Configure drives.

    3. Configure network settings.

  2. Notify the user community of the execution of the migration process.

  3. Prepare the Sybase server for migration:

    1. Lock out logins except for the sa login.

    2. Back up the Sybase database.

  4. Set up SQL Server 2000 target production environment:

    1. Shut down SQL Server database in staging environment.

    2. Lock all developers out of environment except sa.

    3. Back up staging/development SQL Server database.

    4. Create the production database.

    5. Load transformed database tables into new SQL Server 2000 target server.

    6. Load the extracted data into new SQL Server 2000 production environment.

    7. Add new messages and other environment variables.

    8. Add logins.

    9. Install SQL Server stored procedures and triggers.

    10. Apply indexes.

    11. Update the interfaces.

    12. Run DBCC commands (checktable, checkalloc, and so on).

  5. Run user acceptance tests on the new target production server.

  6. Configure DTS packages or bcp scripts to port data back to the existing Sybase system for the fallback strategy, or configure heterogeneous replication to the existing Sybase system.

  7. Move production users to the new SQL Server 2000.

  8. Monitor the new environment for stability.

Problems

This strategy is an effective method of performing a migration when resources are limited and downtime is ample. The problems that should be taken into consideration when performing a cutover migration include:

  • Performance expectations are immediate.

  • Downtime is critical for the execution of migration tasks.

  • Data integrity — the capability to ensure the data set being validated in the new production environment is equivalent to the data set found on the Sybase production server.

  • Recovery can become time-consuming.

Fallback Strategy

When performing a cutover without replication, you expect the new environment to meet at least the performance of the existing production environment. If the migrated system does not meet this performance requirement, the fallback strategy plays a very important role.

When developing a fallback strategy for a cutover migration, the key factor is the amount of time needed to restore the Sybase system to its original production state prior to the next business day. When calculating the time required for a restoration, take all aspects of the environment into consideration (for example, include the restoration of client applications in the calculation). The following list describes additional ways to prepare for executing this form of fallback strategy:

  • Perform a database dump or use bcp to copy data out of the Sybase database before final migration.

  • Perform a database backup of the transformed SQL Server database.

  • Capture transactions to be applied back to the Sybase environment after a fallback scenario has been satisfied.

  • Backup storage location: backup material should be easily and quickly accessible.

Phased Cutover Strategy

Performing a phased cutover migration with SQL Server is an appropriate approach for small to medium-sized production databases. It is suitable for situations in which an organization has a segmented duration of downtime to perform a migration of selected Sybase databases.

The common tasks associated with performing a phased cutover include:

  1. Perform premigration tasks:

    1. Identify the physical drive configuration for the new SQL Server 2000 production environment.

    2. Configure the drives.

    3. Configure network settings.

  2. Notify the user community of the execution of the migration process.

  3. Prepare the Sybase server for migration:

    1. Lock out logins for the selected database except the sa login.

    2. Back up the selected Sybase database.

  4. Set up the SQL Server 2000 target production environment:

    1. Shut down the SQL Server database in the staging environment.

    2. Lock all developers out of the environment except the sa login.

    3. Back up the staging/development SQL Server database.

    4. Create the production database.

    5. Load transformed database tables into new SQL Server 2000 target server.

    6. Load the extracted data into new SQL Server 2000 production environment.

    7. Add new messages and other environment variables.

    8. Add login names.

    9. Install SQL Server stored procedures and triggers.

    10. Apply indexes.

    11. Connect interfaces to production environment.

    12. Update the interfaces.

    13. Run DBCC commands (checktable, checkalloc, and so on).

  5. Run user acceptance tests on new target production server.

  6. Configure DTS packages or bcp scripts to port data back to the existing Sybase system for the fallback strategy, or configure heterogeneous replication to the existing Sybase system.

  7. Shift production users to the new SQL Server 2000.

  8. Monitor the new environment for stability.

Problems

As mentioned earlier, phased cutover deployments are an effective method of performing a "selected" database migration when resources are limited and downtime is ample. The criteria that should be taken into consideration when performing a cutover migration include:

  • Performance expectations are immediate.

  • Downtime is critical for the execution of migration tasks.

  • The capability to ensure the data set being validated in the new production environment is equivalent to the data set found on the Sybase production server.

  • Recovery can become time-consuming.

  • Close coordination with application groups and database owners is required.

Fallback Strategy

As before, the amount of time needed to restore the Sybase production environment plays a very important role. This time should include the time required to restore client applications, as well as to back up data and transactions, before the next business day. You should select one of the following two options, depending on the type of failure that has occurred:

  • If the failure is performance degradation, the SQL Server database can be shut down at a specific time and the Sybase database can be re-created from it.

  • If the failure is a software functional failure (for example, mission-critical data corruption), you need to determine a point in time at which the data was not corrupted and transfer the data up to that point back to the Sybase database.

Appendix H: Sybase Migration Toolkit Installation Guide

The Sybase Migration Toolkit (SMT) is a Microsoft® Visual C#® application for Microsoft Windows® that calls a combination of Perl and Sybperl scripts to perform various tasks. Operation of the SMT is predicated on the installation and configuration of a number of software components, as described in this appendix. These components include:

  • Sybase Client Connectivity Tools for Windows or Sybase ASE for Windows. This software is supplied by Sybase.

  • Sybase Migration Toolkit

  • ActiveState Perl (modules and scripts), including:

    • The TermReadKey Perl module (which is delivered with ppe.exe)

    • The digest-md5 module

    • Sybperl

      Note  Sybperl is downloaded as source code, so you need a C compiler to build it.

  • Configuration of the Sybase SQL.ini file

  • Creation of the perfdb database

  • Testing of database connectivity

Each of the tasks described in the following sections is dependent on the tasks that precede it, so it is important that you perform these tasks in order, and that you read through the rest of this appendix before beginning the procedure.

Install Sybase Client Connectivity Tools

The SMT requires a connection to a Sybase server. You must install SMT on a Windows host that has one of the following Sybase products installed:

  • Sybase Client Connectivity Tools for Windows

  • Sybase ASE Server for Windows

    Note  The client connectivity tools do not include the header files and libraries required to build sybperl, only the dynamic link libraries to run it when built. The system that will be used to build the Sybperl module must have at least the client SDK installed from the server package. This is included in the freely available developer edition of Sybase ASE for Windows, downloadable from http://www.sybase.com/ase_1251devel.

Install the Sybase Migration Toolkit

Complete the following procedure to install the Sybase Migration Toolkit:

  1. Run the setup.exe program that is provided with the Sybase Migration Toolkit software.

  2. Read the license agreement. Click I Agree, and then click Next.

  3. Specify the folder for the SMT installation. The default directory is C:\SMT.

  4. Specify whether you want the SMT to be available to other users of the installation computer. Click Everyone or Just me, and then click Next.

    The installation process places the code for the SMT application under the installation directory and creates two subdirectories: DTSSyb and Scripts, as shown in the following directory structure:

    Figure H.1 Sybase Migration Toolkit directory tree


    A DTS package and its supporting code are placed in the DTSSyb directory. This is the part of the SMT that generates the HTML Assessment report. For more information, see Appendix I, "Sybase Migration Toolkit User Guide."

    The Perl scripts that perform the SMT functionality are installed in the Scripts directory along with the SQL script that creates the perfdb database. For more information, see Appendix I, "Sybase Migration Toolkit User Guide."

  5. In the Confirm Installation dialog box, click Next to indicate that you are ready to begin the installation.

    A progress dialog box appears while the software is installed.

  6. The Installation Complete dialog box appears when the files are installed. Click Close.

Install ActiveState Perl

The SMT uses Perl scripts to perform its various functions. If Perl is not currently installed, you can download a copy of the latest version of ActivePerl from the ActiveState Web site at http://www.activestate.com. You should download the MSI version of the package and install the software by selecting all the default options. See Appendix K for further details on installing Perl.

The targeted host server must have, at the very least, the Sybase Client Connectivity Tools for Windows installed, regardless of whether the Sybase ASE server for Windows is installed. This means that the targeted host server should have a copy of Sybase DB-Library for Windows, the Sybase SQL.INI file, and the appropriate Sybase Windows Net-Library for the protocol stack being used.

The SMT has been tested to run with Perl 5.5.x or later. You can verify the version of Perl installed by executing the following command at a Windows command prompt:

C:\SMT\scripts> perl -v

The following example shows that the host is running Perl version 5.8.3, built for Microsoft Windows:

This is perl, v5.8.3 built for MSWin32-x86-multi-thread
(with 8 registered patches, see perl -V for more detail)

Copyright 1987-2003, Larry Wall

Binary build 809 provided by ActiveState Corp. http://www.ActiveState.com
ActiveState is a division of Sophos.
Built Feb  3 2004 00:28:51

Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.

Complete documentation for Perl, including FAQ lists, should be found on this
system using `man perl' or `perldoc perl'.  If you have access to the Internet,
point your browser at http://www.perl.com/, the Perl Home Page.

Install Perl Modules and Script

For SMT to work properly, the Perl modules and a Sybperl script described in Table H.1 must also be installed.

Table H.1: Perl Modules Required by SMT

Perl Module or Script | Version | Description
TermReadKey | 2.14 | SMT uses this module to obtain input from the keyboard and to help provide password protection.
Digest-MD5 | 2.20 | SMT uses this module to perform a checksum on an entire SQL result set.
Sybperl | 2.15 | Provides a Perl wrapper around the Sybase database connectivity libraries CT-Library and DB-Library.
Dbschema.pl | N/A | The Sybperl script that connects to a Sybase database server and extracts the DDL for the schema.
DBI (optional) | 1.42 | The generic Perl database interface.
DBD::Sybase (optional) | 1.02 | The Sybase driver for the Perl DBI.
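
The role of Digest-MD5 in SMT is to reduce an entire SQL result set to a single checksum so that Sybase and SQL Server output can be compared quickly. The idea can be sketched in Python, with hashlib standing in for the Perl module; the tuple-based row format is an assumption made for illustration only:

```python
import hashlib

def resultset_checksum(rows):
    """Fold every row of a result set into one running MD5 digest.

    Two result sets produce the same digest only if all rows (and
    their order) match, which is how a migrated query can be checked
    against its Sybase original.
    """
    digest = hashlib.md5()
    for row in rows:
        # Normalize each column to text, using a separator that is
        # unlikely to occur in the data.
        digest.update("|".join(str(col) for col in row).encode("utf-8"))
        digest.update(b"\n")
    return digest.hexdigest()

# Identical result sets yield identical checksums ...
a = [(1, "Smith", "2004-06-21"), (2, "Jones", "2004-06-22")]
b = [(1, "Smith", "2004-06-21"), (2, "Jones", "2004-06-22")]
# ... while any difference in the data changes the digest.
c = [(1, "Smith", "2004-06-21"), (2, "JONES", "2004-06-22")]
```

Because only the two digests need to be compared, this approach avoids shipping full result sets between the Sybase and SQL Server machines.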

Follow the procedures outlined in the following sections to install the Perl modules and the Sybperl script.

To install the TermReadKey module:

  1. Open a Windows command prompt, execute the ppm program, and then press ENTER:

    C:\> ppm

    Note  The Programmer’s Package Manager (PPM) is a command line utility for managing modules and extensions to Perl. PPM is an interactive shell in which you execute commands that will search for Perl modules in known repositories, list installed modules, and install or remove Perl modules. PPM is supplied with Perl.

  2. At the PPM shell prompt, execute the query TermReadKey command, and then press ENTER:

    PPM> query TermReadKey

    This command searches the list of installed modules to establish whether the TermReadKey module is already installed. The following screen shot shows the results of the query command. Notice that PPM found version 2.21 of the TermReadKey package already installed. If the package had not been installed, a message stating that there were no matches for the given name would be displayed.

    Figure H.2 The results of the PPM help and search commands

  3. If the TermReadKey module is not installed, type the install TermReadKey command, and then press ENTER:

    PPM> install TermReadKey

To install the Digest-MD5 module:

Note  By default, the ActiveState Perl install script installs the Digest-MD5 module, so no extra work is needed. You can verify that the module is installed by executing the following steps.

  1. At the PPM shell prompt, execute the query command, and then press ENTER:

    PPM> query

  2. If the Digest-MD5 module is missing, install it by typing the install Digest-MD5 command, and then press ENTER:

    PPM> install Digest-MD5

To install the Sybperl module:

  1. Download the compressed tar file sybperl-2.15.tar.gz from the Sybperl Web site at http://www.peppler.org/downloads/.

  2. Use a suitable decompression tool to unpack the contents of the sybperl-2.15.tar.gz file into the sybperl-2.15 folder.

  3. Open a command prompt window and move to the sybperl-2.15 folder.

  4. At the command prompt, set the environment variable DBLIBVS to specify that Sybperl should use Tabular Data Stream (TDS) 4.2:

    set DBLIBVS=400

    Note  The location of the .NET development tools, including nmake, must be recorded in the PATH environment variable. It is recommended that you open a Visual Studio .NET 2003 Command Prompt window; click the Windows Start menu, point to All Programs, point to Microsoft Visual Studio .NET 2003, point to Visual Studio .NET Tools, and click Visual Studio .NET 2003 Command Prompt.

  5. Edit the CONFIG file. Locate the following lines and change them as shown here:

    1. DBLIBVS=400
    2. CTLIBVS=0
    3. SYBASE=<the location of your Sybase install directory>

      For example,

      SYBASE=C:\SYB1192
    4. EXTRA_DEFS=-DDO_TIE -DMSSQL
  6. Save the file.

  7. Execute the following command at the command prompt to build the Perl makefile used to compile the Sybperl module itself:

    Note  The script prompts you for a Sybase server, user name, and password. For testing with SQL Server, supply the name, user name, and password for the SQL Server instead.

    perl Makefile.PL
  8. Build the Sybperl module by typing the following command at the shell prompt:

    nmake
  9. Install the Sybperl module by typing the following command at the shell prompt:

    nmake install
  10. Verify that the package was installed successfully by executing the following command at the shell prompt:

    perldoc perllocal

    This command returns a list of installed Perl modules. If the Sybase module version 2.15 is in this list, it was installed successfully.

To install the DBI modules:

To run the SMT, the DBI modules are not strictly required. However, because DBI is very widely used, you may want to install it anyway. The DBI module itself can be installed with ppm:

  1. At the PPM shell prompt, execute the query DBI command, and then press ENTER:

    PPM> query DBI

    This command searches the list of installed modules to establish whether the DBI module is already installed.

  2. If the DBI module is missing, install it by typing the install DBI command, and then press ENTER:

    PPM> install DBI
  3. DBD::Sybase is available as a PPM module. Run install DBD-Sybase in the same way. However, note that with some combinations of Sybase clients and Windows operating systems, a bug can cause Perl applications to hang in the connect call to a database. If this occurs, you must install DBD::Sybase in source form, make a minor modification to it, and build it. To test whether you need to do this, run the db_ver_new.pl script in the SMT Scripts folder. This script takes a user name, password, and server name on the command line (it prompts for the password if one is not supplied), and uses DBI and DBD::Sybase to read @@servername and @@version from the given database.

    C:\SMT> Scripts\db_ver_new.pl -U username -P pass -S SYBASE

    If this script hangs, it can be interrupted with CTRL+C. In this case, you must download DBD::Sybase from http://www.cpan.org or from http://www.peppler.org.

  4. If you need to build DBD::Sybase from source, first open a Visual Studio .NET Command Prompt window. Then use a suitable decompression tool to unpack the contents of the DBD-Sybase-1.02.6.tar.gz file into the DBD-Sybase-1.02.6 folder, and cd to that folder.

  5. Edit the file dbdimp.c and find the first line that reads:

    imp_dbh->flushFinish   = 0;
  6. Change the 0 to a 1:

    imp_dbh->flushFinish   = 1;
  7. Exit the editor, saving the file, and run the following command:

    perl Makefile.PL
  8. Answer the questions about the test database.

  9. Run nmake to build the module.

  10. Run nmake test to prove that it works correctly.

  11. Run nmake install to copy the object files to their correct locations.

    Note  Under SFU and FreeTDS, this bug does not manifest itself, and there is no need to make the code change described. Binary versions of Perl 5.8.3, DBI 1.42 and DBD::Sybase 1.02 for SFU are available at http://interopsystems.com/tools. However, these cannot be run from the Windows subsystem.

To install the dbschema.pl Sybperl script:

  1. Download a copy of the latest dbschema.pl script from the following Web site: http://www.midsomer.org/.

  2. Save the dbschema.tgz file to the Scripts directory located in the root directory where you installed the SMT software; for example, C:\SMT\Scripts.

  3. Using Windows Explorer, double-click the dbschema file. A WinZip pop-up window appears.

  4. Click Yes to decompress the contents to a temporary directory.

  5. Extract the contents of the dbschema file into the Scripts directory underneath the SMT installation directory; for example, C:\SMT\Scripts.

  6. Verify that the package was installed successfully. Using Windows Explorer, move to the Scripts folder under the SMT installation folder. You should see the dbschema.pl Perl script in this directory.

Configure the SQL.INI File

In the Windows environment, a Sybase client application identifies a Sybase ASE server by looking up its name in a Sybase-supplied file called SQL.INI. The name of the Sybase ASE server and the port number that it listens on must be entered into this file.

The SMT is designed to test Transact-SQL scripts against Sybase ASE and Microsoft SQL Server. Whether your Sybase ASE server is on UNIX or Windows, you need to make an entry into the Sybase SQL.INI file for the Sybase ASE server that you are going to test.

There are two ways to make an entry into a Sybase SQL.INI file, and in either case you must supply the database server name, the IP address, and the port number to which the database listens. The following two sections outline how to edit the SQL.INI file.

To edit the SQL.INI file by using the Dsedit program:

Note  Dsedit is a GUI tool installed as part of the Sybase client software. It is fully documented in the Sybase manuals.

  1. On the Windows Start menu, point to All Programs, point to Sybase, and then click Dsedit Utility.

  2. In the Select Directory Service dialog box, click OK.

  3. Click Server Object in the menu bar, and then click Add.

  4. In the Input Server Name dialog box, type a name for the server to which you want to connect. The example shown in Figure H.3 creates an entry called CLIENT01. Click OK.

  5. In the Server list, highlight the CLIENT01 server name, and then double-click the Server Address attribute in the right-hand pane. The Network Address Attribute dialog box appears.

    Figure H.3 Adding a new server using Dsedit

  6. Click Add, and then provide network address information, such as the IP address and port number, to which the Sybase ASE server listens, and then click OK.

    Note  You can also specify the address of a SQL Server in Dsedit. This is useful if you are using Sybperl scripts, which can connect to Sybase and SQL Server databases using entries recorded in SQL.INI. By default, SQL Server listens on port 1433.

  7. Click OK again.

  8. In the DSEDIT1 — Interfaces Driver screen, highlight the CLIENT01 server in the Server list, and then right-click anywhere in the right-hand pane.

    Figure H.4 Pinging a server using Dsedit

  9. Select the Ping Server option to verify connectivity to the Sybase server. If the ping fails, check that you have provided the correct server address and try again.

    Note  Most versions of the Sybase database engine listen on port 4100 by default. However, Sybase 12.5.1 listens on port 5001 by default.

To edit the SQL.INI file manually:

Note  This procedure is useful in a non-GUI environment.

  1. At a command prompt, move to the Sybase tools installation directory.

    Note  Usually, the environment variable $SYBASE (UNIX) or %SYBASE% (Windows) is set to the Sybase install directory.

  2. Move to the ini directory. This is where the SQL.INI file is located.

  3. Use a text editor, such as Notepad, to edit the SQL.INI file:

    C:\sybase1192\ini> notepad sql.ini

    You should see a screen similar to the following:

    Figure H.5 The SQL.INI file under Windows

  4. Scroll to the end of the file and provide the information for the Sybase ASE server to which you want to connect. For example, to connect to a UNIX Sybase ASE server called CLIENT_DEV add the following lines:

    1. The server name enclosed in brackets, with no space characters to the left of the first character. The server name must be terminated with a carriage return.

    2. A line that begins with query (with no space characters to the left), followed immediately by an equal sign (=), the driver type (the string TCP), the IP address of the Sybase ASE server, and the port number:

      [CLIENT_DEV]
      query=TCP,157.57.64.64,4500
  5. Save and close the file.
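
A quick way to sanity-check hand-edited entries is to parse the file programmatically. The following Python sketch is a hypothetical helper, not part of the Sybase tools, and it recognizes only the TCP query lines described above:

```python
import re

def parse_sql_ini(text):
    """Return {server_name: (driver, address, port)} from SQL.INI text.

    Only [SERVER] headers followed by query=TCP,<address>,<port> lines
    are recognized, matching the entry format described in this section.
    """
    servers = {}
    current = None
    for line in text.splitlines():
        header = re.match(r"^\[([^\]\s]+)\]", line)
        if header:
            current = header.group(1)
        elif current and line.startswith("query="):
            driver, address, port = line[len("query="):].split(",")
            servers[current] = (driver, address, int(port))
    return servers

sample = "[CLIENT_DEV]\nquery=TCP,157.57.64.64,4500\n"
```

If a server you added does not appear in the returned dictionary, check for stray spaces before the bracketed name or the query keyword.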

Test Database Connectivity

The SMT provides a Perl script called db_ver.pl to test database connectivity (see the assessment document referenced in the Planning chapter). From a connectivity perspective, you must have installed and built the Sybperl connectivity library and forced it to use TDS 4.2. In addition, you must have added entries to the SQL.INI file for any Microsoft SQL Server or Sybase ASE server to which you want to connect. To save space, this document does not show screen shots of the procedures listed, because similar screen shots appear in previous sections of this appendix.

Microsoft SQL Server

To test database connectivity to SQL Server:

  1. Open a command prompt and change directory to the Scripts subdirectory of the SMT.

  2. Run the script from the command line:

    C:\SMT\Scripts> db_ver.pl -U user -P password

The db_ver.pl Perl script performs a SELECT @@VERSION, so the result set that is returned provides information about the version of the SQL Server to which you just connected.

Note  If you do not provide a -P argument, the program prompts for a password. This prevents the password from being visible to other users of the system.
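
If you want to check the returned version programmatically, the build number can be picked out of the @@VERSION string. The Python sketch below uses an abbreviated, representative sample string rather than verbatim server output:

```python
import re

def extract_build(version_text):
    """Return the first x.y.z build number found in a @@VERSION string."""
    match = re.search(r"\d+\.\d+\.\d+", version_text)
    return match.group(0) if match else None

# Abbreviated example of what SELECT @@VERSION might return.
sample = "Microsoft SQL Server  2000 - 8.00.194 (Intel X86)"
```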

Sybase Server

To test database connectivity to a Sybase ASE server:

  1. At the Windows command prompt, use Notepad to open the db_ver.pl Perl script:

    C:\SMT\Scripts> notepad db_ver.pl

  2. Scroll to the dblogin subroutine. Notice that it takes three parameters: the user ID, the password, and the server name. Replace these string parameters with the values that correspond to the server to which you want to connect. For example, to connect to the SYB1192 Sybase ASE server, replace the following line:

    $x = &dblogin("tester", "pw4tester", "DEV_SQL2000");

    with:

    $x = &dblogin("tester", "testerpassword", "SYB1192");
  3. Save the file.

  4. Run the script from the command line:

    C:\SMT\Scripts> db_ver.pl

    The db_ver.pl Perl script executes a SELECT @@VERSION statement, so the result set that is returned provides information about the version of the Sybase server to which you just connected.

Install the Perfdb Database

To use the SQL Comparison tab of SMT, you must run a SQL script called installperfdb.sql on the SQL Server that you have identified as your collection server. This script creates the perfdb database and all the database objects that are used by the SMT. This database is used to store checksum and statistics IO information that is gathered from invoking the Perl scripts behind the SQL Comparison tab.

To install the installperfdb.sql SQL script:

  1. On the Windows Start menu, point to All Programs, point to Microsoft SQL Server, and then click Query Analyzer. Connect to SQL Server as the sa user.

  2. Using SQL Query Analyzer, open the file installperfdb.sql in the C:\SMT\Scripts folder.

  3. In the CREATE DATABASE statement, change the paths to the data and log files to whatever directories your site uses.

  4. Save the changes.

  5. Execute the script.

  6. Verify that the perfdb database was created.

  7. Exit SQL Query Analyzer.

Install the Object Comparison Stored Procedure

To use the Object Comparison tab of SMT, you must run a SQL script called sp_getSybsTableInfo.sql on the Sybase ASE server that hosts the database targeted for migration. Create this stored procedure in the system database called sybsystemprocs. The stored procedure collects information on the tables to be migrated, including the row count for each table in the targeted database.

To run the sp_getSybsTableInfo.sql SQL script:

  1. Use either Sybase’s version of ISQL.exe or Sybase SQL Advantage to load and view the file sp_getSybsTableInfo.sql.

  2. Execute the batch script. Verify that it runs correctly.

  3. Exit ISQL or Sybase SQL Advantage.

Appendix I: Sybase Migration Toolkit User Guide

The Sybase Migration Toolkit (SMT) is a Microsoft® Visual C#® application for Microsoft Windows® that provides a wrapper around a combination of tools that will assist you in the migration process. All Sybase migrations have the following tasks in common:

  1. Identify conversion issues

  2. Extract the Sybase database schema

  3. Perform automatic and manual Transact-SQL transformations

  4. Validate what you converted

SMT contains several tabs that will help you work through these tasks. The SMT tabs provide the following functionality:

  • Assessment. This tab uses a Data Transformation Services (DTS) package that connects to the targeted Sybase server and reverse engineers the targeted database schema. The package then performs an assessment of the database objects and produces an HTML Assessment report on some of the known issues that you will encounter during a migration. It does not transform any objects, but it does report on the types of issues that you have to correct.

  • Extract Schema. This tab executes a Sybperl script to extract the Data Definition Language (DDL) schema from the targeted Sybase database into a single text file. That text file is then parsed, and the individual database objects are placed into files under the appropriate database object subdirectory of a directory tree. This directory tree is used by all subsequent Transact-SQL transformation processes, both automatic and manual.

  • TSQL Transformation. This tab invokes a Perl script that performs automatic syntax conversions on Transact-SQL statements. The user is prompted for the type of objects to translate: stored procedures, triggers, or views. The Perl script is extensible, so a Perl developer can make alterations to the script and provide customized Transact-SQL transformations.

  • SQL Comparison. This tab calls a Sybperl script that checks the behavioral integrity of Transact-SQL statements by performing a checksum of the result sets. In addition, it captures STATISTICS IO on the query so that you can analyze the performance footprint of a query as it executes on Sybase or Microsoft SQL Server.

  • Object Comparison. This tab calls a Sybperl script that helps to check that all the objects and data in the original Sybase database were successfully migrated to SQL Server.

These tabs are discussed in greater detail in the following sections.

Assessment Tab

The first stage of a Sybase migration is to assess the types of migration issues that you will encounter. SMT produces an Assessment report to help you with this assessment.

When you click the Assessment tab and start the Assessment report process, you are prompted to enter connection information for the Sybase database targeted for migration. A DTS package connects to the targeted Sybase database, reverse engineers the database schema from the Sybase system tables, and analyzes those database objects. The DTS package performs two tasks:

  1. It creates two SQL script files: the first is called <db-name>_before.sql, which contains the database schema to create the database and any supporting DDL; and the second script is called <db-name>_after.sql, which contains the DDL that creates the database objects.

  2. It analyzes the Sybase database schema against a set of rules to determine the types of migration issues that you will encounter. The end result is an HTML report called the Assessment report.

Assessment Report

The Assessment report is an HTML document that is divided into three subject areas: Summary, Source Configuration, and Objects Migrated. At the top of the report, each subject area is a hyperlink to the appropriate report section.

Summary

Use the Summary section to help you identify what database objects were analyzed and the number of database objects that have migration issues. The Summary section helps you identify the database objects where the majority of conversion work will take place.

Source Configuration

Use the Source Configuration section to help you configure some of the server and database options that apply to the new SQL Server.

Objects Migrated

Use the Objects Migrated section to identify the types of issues that you will encounter. This section will also help you estimate the amount of work that you can automate and the amount of work that has to be resolved through an iterative manual process.

After you review the Assessment report, you should have a better understanding of how you are going to implement key areas of the development process. The key points of the assessment phase include:

  • Break down the migration issues into two categories: automated and manual.

  • Identify those Transact-SQL issues that can be automated.

  • Identify those Transact-SQL issues that are purely manual processes.

  • Review the migration issues to better estimate the overall migration effort and any problems that you will encounter.

To create an Assessment report:

  1. Click the Assessment tab, and then click the Start button.

  2. In the Source box, provide the connection information for the Sybase database that is targeted for migration, and then click Next.

  3. Select the Transfer all objects check box. In the Script file directory box, type a full directory path for the schema scripts that SMT will produce, and then click Next.

  4. Click Finish to execute the Assessment analysis.

  5. A progress screen appears. You will be prompted when the analysis is completed.

  6. After the analysis is completed, you will be prompted with the status. Click OK to return to the SMT screen.

Extract Schema Tab

The second stage of a Sybase migration is to extract the database schema of the Sybase database that is targeted for migration. SMT can help you extract the database schema.

When you click the Extract Schema tab, you are prompted for the full directory path of an output file that will contain the extracted database schema, and for connection information to the Sybase database that is targeted for migration. Behind the scenes, this information is passed to a Sybperl script called dbschema.pl that extracts the schema from the Sybase system tables and places it in a single SQL file. A second Perl script is then invoked to build a directory tree of database objects and to parse the SQL file into individual database objects, which it places in the appropriate object subdirectories. The extraction process performs the following tasks:

  1. From the Sybase system tables, it reverse-engineers the database schema for the Sybase database that is targeted for migration and places those objects in a single file.

  2. It parses the individual database objects into scripts that are stored in a directory tree that is the starting point for Transact-SQL migrations. At this point, no transformations are performed; only the generation of scripts.
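
The parsing in task 2 can be sketched as follows. This is a minimal Python illustration rather than the actual Perl script; the subdirectory names and the "create <type> <name>" matching rule are assumptions made for the example:

```python
import re
from pathlib import Path

# Assumed mapping of object types to subdirectory names.
OBJECT_DIRS = {"procedure": "Procedures", "trigger": "Triggers", "view": "Views"}

def split_schema(dump_text, root):
    """Write each CREATE statement in a schema dump to its own file.

    Each object lands in <root>/<type-dir>/<name>.sql, mirroring the
    per-object directory tree described above.
    """
    root = Path(root)
    pattern = re.compile(r"^create\s+(procedure|trigger|view)\s+(\w+)",
                         re.IGNORECASE | re.MULTILINE)
    matches = list(pattern.finditer(dump_text))
    for i, m in enumerate(matches):
        # Each object runs from its CREATE up to the next CREATE (or EOF).
        end = matches[i + 1].start() if i + 1 < len(matches) else len(dump_text)
        obj_type, name = m.group(1).lower(), m.group(2)
        target = root / OBJECT_DIRS[obj_type]
        target.mkdir(parents=True, exist_ok=True)
        (target / (name + ".sql")).write_text(dump_text[m.start():end])

# Demonstration on a tiny two-object dump.
import tempfile
dump = ("create procedure usp_a as select 1\ngo\n"
        "create view v_b as select 2\ngo\n")
outdir = tempfile.mkdtemp()
split_schema(dump, outdir)
```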

To extract the database schema:

  1. Click the Extract Schema tab, and then click the Start button.

  2. In the Output File Location box, identify the full directory path to the location where you want to place the SQL file of the extracted database schema. In addition, provide the requested connection information to the targeted database that is identified for migration. Click Finish to continue.

  3. A progress screen appears. When all the steps are completed, click Done to return to SMT.

The database schema is extracted into the files stored in the Output File Location, with names that start with the name of the database. Figure I.1 shows the result of running the extraction with the Output File Location set to C:\SMT\Database\clientdb, and specifying Database clientdb.

Figure I.1 Extracted database schema files

A directory tree is created beneath the Output File Location, as shown in Figure I.2. The database schema is parsed into individual database objects that are placed in the appropriate database object subdirectory. Notice that the root directory is the name of the Sybase database that is being migrated.

Figure I.2 Directory tree for database schema files

TSQL Transformation Tab

The third stage of a Sybase migration is to perform Transact-SQL transformations. Stored procedures, triggers, and views are the only database objects whose Sybase syntax is automatically converted. SMT calls a Perl script called syb2ms.pl to perform the translation automatically. The Perl script is extensible and could easily be modified to accommodate new Transact-SQL conversion subroutines. There are two types of Transact-SQL transformations: automatic and manual.

Automatic Transformations

You can perform automatic transformations by using SMT and clicking the TSQL Transformation tab. SMT will convert some Transact-SQL syntax statements for you. These are called "canned" SQL syntax conversions. The following list identifies the "canned" Transact-SQL statements whose Sybase syntax is converted to Microsoft Transact-SQL syntax:

  • CHAR_LENGTH

  • DBO prefix

  • DEALLOCATE

  • INDEX HINTS

  • RPC syntax

  • PRINT

  • RAISERROR (commented out). This is a special case that must be completed manually, as noted in the following section.

  • @@SQLSTATUS

  • Convert TRANSACTION ISOLATION

  • UPDATE alias
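As an example of what a "canned" conversion does, the CHAR_LENGTH item replaces Sybase's char_length() function with SQL Server's len() function. The following Python fragment sketches that rewrite; it is an illustration of the idea only, since SMT's real conversions are implemented in the syb2ms.pl Perl script, and the char_length-to-len mapping is the usual one rather than a rule quoted from SMT:

```python
import re

def convert_char_length(line):
    """Rewrite Sybase char_length() calls as SQL Server len() calls.

    Illustrative sketch of one "canned" conversion; SMT performs the
    real transformations in the syb2ms.pl Perl script.
    """
    return re.sub(r"\bchar_length\s*\(", "len(", line, flags=re.IGNORECASE)

print(convert_char_length("select char_length(au_lname) from authors"))
# select len(au_lname) from authors
```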

To perform automatic Transact-SQL transformations:

  1. Click the TSQL Transformation tab, and then click the Start button.

  2. In the Parsed DDL Files Root Directory box, provide the full directory path to the root directory of the location for the database object that you want to process. Specify the type of objects you want to transform, and then click Migrate.

    Figure I.3 The Transformation Wizard

    Every file in the targeted database object subdirectory is copied to the MS directory and renamed as MS_object_name.sql, where object_name is the name of the object. These files contain the most recent syntax conversions, and all subsequent manual Transact-SQL conversions should be performed in this directory.

  3. When you have finished migrating objects, click Finish.

Manual Transformations

With SMT, you have the option to automate the conversion of other Transact-SQL statements. In the Assessment stage, you should have identified any additional Transact-SQL statements for which the Sybase syntax conversion could be automated. The TSQL Transformation tab invokes a Perl script called Syb2MS.pl. You can edit this script and create a Perl subroutine to handle the conversion of any additional Sybase Transact-SQL statement.

Syb2MS is a Perl script that was developed to address Transact-SQL conversion issues between Sybase Transact-SQL and Microsoft Transact-SQL. The following description outlines how the script works:

  1. The script is invoked and passed command-line flags to indicate the type of database object to process. Only stored procedures, triggers, or views get processed by the script. In addition, the full directory path to the parent directory of the database object that you want is passed from the command line. The following flags indicate which database object to process:

    • -s

    The root directory to the stored procedures that will be converted.

    • -t

    The root directory to the triggers that will be converted.

    • -v

    The root directory to the views that will be converted.

  2. During the Extract Schema stage, SMT parses the SQL schema file for the Sybase database that is targeted for migration. The Syb2MS Perl script opens the appropriate object subdirectory and, for each file in that subdirectory, it loops through a hash table of keywords that represent known Sybase syntax issues, one statement at a time.

  3. When Syb2MS finds a line that matches a keyword entry, Syb2MS branches to the subroutine that handles the conversion. This is repeated for every line in every file that is in the object subdirectory.

  4. The Syb2MS Perl script creates a subdirectory called MS and renames all database object files as MS_object_name.sql, where object_name is the name of the object. The MS subdirectory contains the most recently converted syntax, and that is the directory that you should use for any other manual Transact-SQL conversions.

You can create a new subroutine to convert new Sybase syntax to Microsoft syntax by executing the following procedure:

To add a new conversion subroutine:

  1. Create a string that uniquely identifies a Sybase syntax issue and place that string in the %tsql hash table. For example, if you wanted to add a NEW_ISSUE, you would add it to the bottom of the hash table:

    %tsql = (
        "print",         "print",
        "raiserror",     "raiserror",
        "exec",          "exec",
        "execute",       "execute",
        "char_len",      "char_length",
        "sproc1",        "create procedure",
        "sproc2",        "create proc",
        "from1",         "from",
        "sp_addmessage", "sp_addmessage",
        "sqlstatus",     "sqlstatus",
        "deallocate",    "deallocate",
        "update",        "update",
        "isolation",     "isolation",
        "NEW_ISSUE",     "new_issue"
    );
  2. Because SMT loops through every file in the targeted object subdirectory, reading every single line, you need to add a conditional branch in the while loop that tests for the particular new issue. A single nested if-then-else statement provides the conditional branching to subroutines that are customized to handle each syntax conversion. For example, if you wanted to process the NEW_ISSUE, you would add the following lines to test for the new issue in the while loop:

          # Process each file in the directory.
          while(<IN>) {
              chop;
              $this = '';

              # The following if statement is like a switch statement. Place all TSQL
              # statements that need to be converted or flagged
              # for manual intervention.

              # PRINT
              if ($this eq "print") {
                  # Convert the Sybase print statement to Microsoft print syntax.
                  $temp_ = $_;
                  @tmp_ = split(/print/, $temp_);
                  if ( $tmp_[0] =~ /\/\*/ ) {
                      print OUT "$_\n";
                  }
                  else {
                      convert_print();
                  }

              # RAISERROR
              } elsif ($this eq "raiserror") {
                  do_raiserror();

              # NEW_ISSUE
              } elsif ($this eq "new_issue") {
                  do_new_issue();
  3. Create the do_new_issue subroutine to handle the appropriate Sybase syntax conversion.

SQL Comparison Tab

At this stage it’s assumed that the migration of the schema has taken place. The following description shows how to use the SQL Comparison tab for the test procedure that verifies a successful schema migration.

The fourth stage of a Sybase migration is to test the SQL statements that were converted. SMT can help you test the compatibility of SQL statements or stored procedures. When you click the SQL Comparison tab, it invokes a Sybperl script called S2SPerl.pl.

The SMT is designed to submit SQL queries or stored procedures to a targeted database server. The intention is to identify behavioral differences between Sybase ASE Transact-SQL and Microsoft SQL Server Transact-SQL by applying a checksum to a query's entire result set. Using a checksum on the result set makes it easy to spot result sets that differ because of inconsistencies between the two SQL dialects.
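The checksum idea can be sketched as follows. This Python fragment is only an illustration of the approach; the actual checksum algorithm SMT uses is not described here, so treat the MD5 folding below as an assumption:

```python
import hashlib

def resultset_checksum(rows):
    """Fold an entire result set into one checksum.

    Illustration of the approach, not SMT's actual algorithm: every
    column of every row is hashed, so two servers that return identical
    result sets produce identical checksums, and any difference in the
    data changes the value.
    """
    digest = hashlib.md5()
    for row in rows:
        for value in row:
            digest.update(str(value).encode("utf-8"))
            digest.update(b"|")   # keep column boundaries distinct
        digest.update(b"\n")      # keep row boundaries distinct
    return digest.hexdigest()

print(resultset_checksum([(1, "Ann"), (2, "Bob")]))
```

Comparing one checksum per query, rather than the full result sets, keeps the comparison tables small regardless of how many rows each query returns.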

In addition, you can capture the number of logical and physical reads/writes by setting the database option STATISTICS IO on for every SQL batch statement tested. By capturing STATISTICS IO, you have the ability to identify those SQL batch statements or stored procedures that may perform poorly by comparing the STATISTICS IO of the two different database systems.

Both the checksum and the STATISTICS IO are stored in a database called perfdb. The procedures for installing the perfdb database are described in Appendix H, "Sybase Migration Toolkit Installation Guide."

The following sections describe the major components of the SQL Comparison tab.

SQL Batch File Layout

You can submit SQL statements, stored procedures, or a combination of both to the SQL Comparison through an ASCII flat file. During the processing of the SQL Comparison, the SMT will prompt you to:

  1. Choose which server product you want to test against: Sybase or Microsoft SQL Server.

  2. Provide database connection information to the server product that you want to test against.

  3. Provide the name of the batch file that contains the SQL queries or stored procedures that you want to test.

For SQL Comparison to properly process the batch file, you must adopt the following guidelines when creating a SQL batch file. The file layout is as follows:

  1. The first field in the file is the batch number and should be placed in the left-most column:

    7
  2. The second field is a batch delimiter. The delimiter must be a pipe (the “|” character), which must immediately follow the batch number. For example, the following depicts a seven followed immediately by a pipe character:

    7|
  3. The SQL statement or a stored procedure should begin on the very next line. In either case, the SQL or stored procedure may span multiple lines. For example, the following depicts the batch number seven, followed by a pipe character, with a SQL statement starting on the next line and spanning several lines:

    7|
    select *
    from titles
    where titleid = 12323
  4. The very last line of the SQL batch file must be terminated by a tilde (~) character. For example, the following code snippet depicts the batch number three as a stored procedure with the ~ terminating the SQL batch:

    3|
    exec myproc 1, 2, 3~
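The four layout rules above can be captured in a short parser. The following Python sketch (a hypothetical helper, not part of SMT) splits a batch file into (batch number, SQL) pairs:

```python
import re

def parse_sql_batches(text):
    """Split an SMT SQL batch file into (batch_number, sql) pairs.

    Per the layout rules above: each batch starts with a number followed
    immediately by '|', the SQL begins on the next line and may span
    several lines, and the final batch is terminated by '~'.
    """
    batches = []
    # A batch header is digits followed by '|' at the start of a line.
    parts = re.split(r'(?m)^(\d+)\|', text)
    # parts[0] is anything before the first header; skip it.
    for i in range(1, len(parts) - 1, 2):
        number = int(parts[i])
        sql = parts[i + 1].strip().rstrip('~').strip()
        batches.append((number, sql))
    return batches

sample = """7|
select *
from titles
where titleid = 12323
3|
exec myproc 1, 2, 3~
"""
print(parse_sql_batches(sample))
```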

Figure I.4 shows all these steps in a single text file. It is an example of a SQL batch file with SQL statements and stored procedures that span multiple lines. Notice the ~ character, which terminates each SQL batch.

Figure I.4 SQL Comparison Wizard batch file

The Perfdb Database

A major feature of the SQL Comparison tab is the capability to capture checksum and statistics IO information. This data is captured and stored in a database called perfdb. After you run the database install script, the following database tables are created:

  • sybs_batch. This table stores information about the SQL statements or stored procedures that were run against any Sybase ASE server. You can write queries to compare the checksum in this table against the checksum in the msft_batch table to identify those queries that did not return the same checksum result set. The structure of this table is shown in Table I.1:

    Table I.1: The Columns in the sybs_batch Table

    Column        Description
    Batch_id      This is an integer that identifies the sequence number of the statement within a given SQL batch file.
    Io_id         This is a GUID that uniquely identifies a particular statement and its associated STATISTICS IO. It has a one-to-many relationship to the stats_io table.
    Query_str     This is a string that represents the actual SQL statement or stored procedure that was executed to produce the result set that was checksummed.
    Checksum      This is a string that represents the checksum of a SQL result set.
    Filename      This is a string that represents the name of the SQL batch file.
    Last_updated  This is a time stamp that indicates when the row was inserted.

  • msft_batch. This table stores information about the SQL statements or stored procedures that were run against SQL Server. You can write queries to compare the checksum in this table against the checksum in the sybs_batch table to identify those queries that did not return the same checksum result set. The structure of this table is shown in Table I.2:

    Table I.2: The Columns in the msft_batch Table

    Column        Description
    Batch_id      This is an integer that identifies the sequence number of the statement within a given SQL batch file.
    Io_id         This is a GUID that uniquely identifies a particular statement and its associated STATISTICS IO. It has a one-to-many relationship to the stats_io table.
    Query_str     This is a string that represents the actual SQL statement or stored procedure that was executed to produce the result set that was checksummed.
    Checksum      This is a string that represents the checksum of a SQL result set.
    Filename      This is a string that represents the name of the SQL batch file.
    Last_updated  This is a time stamp that indicates when the row was inserted.

  • stats_io. This table stores information about the STATISTICS IO for a given query. You would join the field io_id to either sybs_batch or msft_batch to determine the query the statistics relate to. This table has the columns listed in Table I.3:

    Table I.3: The Columns in the stats_io Table

    Column          Description
    Io_id           This is a GUID that uniquely identifies a particular statement and its associated STATISTICS IO.
    File_type       This is a string that indicates the type of server that processed the SQL batch file, either SYBASE or MSSQL.
    Obj_name        This is a string that indicates the type of database object that was processed, either a SQL Statement or the actual name of the stored procedure.
    Table_name      This is the actual name of the table, as returned by STATISTICS IO.
    Scan_cnt        This is the scan count, as returned by STATISTICS IO.
    Logical_cnt     This is the logical page count, as returned by STATISTICS IO.
    Physical_cnt    This is the physical page count, as returned by STATISTICS IO.
    Read_ahead_cnt  This is the read-ahead page count, as returned by STATISTICS IO; it applies to Microsoft SQL Server only.
    Last_updated    This is a time stamp that indicates when the row was inserted.
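The two batch tables are designed to be joined on batch number so that differing checksums stand out. The following sketch shows the kind of comparison query involved; it uses Python's sqlite3 module only so that the example is self-contained (against perfdb you would run the equivalent Transact-SQL in Query Analyzer), and the join condition is an assumption about intended usage, not a query shipped with SMT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified versions of the sybs_batch and msft_batch tables.
conn.executescript("""
    CREATE TABLE sybs_batch (batch_id INT, query_str TEXT,
                             checksum TEXT, filename TEXT);
    CREATE TABLE msft_batch (batch_id INT, query_str TEXT,
                             checksum TEXT, filename TEXT);
    INSERT INTO sybs_batch VALUES (1, 'select * from titles', 'aaa', 'test.sql');
    INSERT INTO sybs_batch VALUES (2, 'exec myproc',          'bbb', 'test.sql');
    INSERT INTO msft_batch VALUES (1, 'select * from titles', 'aaa', 'test.sql');
    INSERT INTO msft_batch VALUES (2, 'exec myproc',          'ccc', 'test.sql');
""")

# Report every batch whose Sybase and SQL Server checksums differ.
mismatches = conn.execute("""
    SELECT s.batch_id, s.query_str
    FROM sybs_batch AS s
    JOIN msft_batch AS m
      ON s.batch_id = m.batch_id AND s.filename = m.filename
    WHERE s.checksum <> m.checksum
""").fetchall()
print(mismatches)   # batch 2's checksums differ, so its result sets differ
```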

To perform an SQL comparison, follow these steps:

  1. Click the SQL Comparison tab, and then click the Start button.

  2. Select the type of server where you want to test the Transact-SQL behavior, and then provide the necessary connection information.

  3. Provide the connection information, specifying the location of the perfdb database where the checksum information is stored.

  4. In the Path box, provide the full directory path to the file where you stored the Transact-SQL that you want to test.

  5. Click Finish to allow the wizard to execute the batch file.

  6. A progress dialog box appears. Click Done when the processing has completed to return to the SMT user interface.

Object Comparison Tab

The final stage of a Sybase migration is to check that all objects and data in the original Sybase database have been successfully migrated to SQL Server. When you click the Object Comparison tab of the SMT, it invokes a Sybperl script called tabsum.pl. This script generates a count of the tables, stored procedures, triggers, and views in the specified database, and for each table it also counts the number of rows found.

This tab is similar in operation to the SQL Comparison tab; specify the details of the database to connect to and the location of the perfdb database. The results are recorded in two tables that you can query using SQL Server Query Analyzer:

  • The table_info table records information about user-defined tables in the specified database. For each user-defined table, the statistics recorded include the number of columns in the table, the number of indexes, and the number of rows. The structure of this table is shown in Table I.4:

    Table I.4: The Columns in the table_info Table

    Column        Description
    Product       This is a string that identifies the type of database product in use, either SYBASE or MSSQL.
    Dbname        This is a string that specifies the name of the database being tested.
    Table_name    This is a string that specifies the name of a user-defined table being analyzed.
    Row_count     This is an integer representing the number of rows in the user-defined table.
    Column_count  This is an integer that holds the number of columns in the user-defined table.
    Index_count   This is an integer that holds the number of indexes on the table.
    Index_type    This is a character. A C indicates a clustered index is defined on the table, and a NULL value indicates that no clustered index is available.
    Last_updated  This is a time stamp indicating when this row in table_info was last modified.

  • The object_info table records information about tables, stored procedures, triggers, and views in the database being analyzed. You can query this table to count the number of objects of each type in the database. This table has the structure shown in Table I.5:

    Table I.5: The Columns in the object_info Table

    Column        Description
    Product       This is a string that identifies the type of database product in use, either SYBASE or MSSQL.
    Dbname        This is a string that specifies the name of the database being tested.
    Obj_name      This is a string that specifies the name of a user-defined object being analyzed.
    Type          This is a character that identifies the type of user-defined object being analyzed: U for a table, S for a stored procedure, TR for a trigger, or V for a view.
    Last_updated  This is a time stamp indicating when this row in object_info was last modified.
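A typical way to use object_info is to compare per-type object counts between the two servers; unequal counts point to objects that did not survive the migration. The following sketch shows the kind of query involved. It uses Python's sqlite3 module only so that the example runs standalone (perfdb itself is a SQL Server database, where you would run the equivalent Transact-SQL), and the sample rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified version of the object_info table with sample rows.
conn.executescript("""
    CREATE TABLE object_info (product TEXT, dbname TEXT,
                              obj_name TEXT, type TEXT);
    INSERT INTO object_info VALUES ('SYBASE', 'clientdb', 'titles', 'U');
    INSERT INTO object_info VALUES ('SYBASE', 'clientdb', 'myproc', 'S');
    INSERT INTO object_info VALUES ('MSSQL',  'clientdb', 'titles', 'U');
""")

# Count objects of each type per product; a type that appears for
# SYBASE but not MSSQL (or with a smaller count) was not migrated.
counts = conn.execute("""
    SELECT type, product, COUNT(*) AS n
    FROM object_info
    GROUP BY type, product
    ORDER BY type, product
""").fetchall()
print(counts)
```

Here the stored procedure counted for SYBASE has no MSSQL counterpart, which is exactly the kind of discrepancy this stage is meant to surface.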

Appendix J: FreeTDS and unixODBC Installation

FreeTDS and unixODBC are open source libraries that can be used by applications to connect to Sybase and Microsoft SQL Server. This appendix describes how to install and configure FreeTDS and unixODBC under UNIX and Microsoft Windows Services for UNIX (SFU).

FreeTDS provides an implementation of the Tabular Data Stream (TDS) protocol, emulating the different versions of TDS that are used by Sybase and SQL Server. FreeTDS also supplies implementations of the DB-Library and CT-Library APIs. As such, it is possible to allow applications to communicate with database servers using APIs that were previously incompatible with the version of TDS in use. For example, an application created using the CT-Library API can communicate with SQL Server 2000 using TDS version 8.0.

unixODBC is an implementation of the ODBC protocol and API for UNIX; the unixODBC library allows applications running under UNIX to communicate with database servers, such as SQL Server, using ODBC. Applications developed using unixODBC can execute under UNIX and SFU.

Installing FreeTDS

A precompiled and directly installable version of FreeTDS (currently version 0.62.3) is available at http://interopsystems.com/tools. This can be installed under SFU 3.5 using the pkg_add command:

pkg_add ftp://ftp.interopsystems.com/pub/db/freetds-0.62.3-bin.tgz

This installs the package under the /usr/local directory.

Configuring FreeTDS

When FreeTDS is installed, you should configure it to connect to your SQL Server and Sybase databases using the following procedure:

To configure FreeTDS connectivity:

  1. Move to the /usr/local/etc directory and edit the file freetds.conf. This file contains information about the Sybase and SQL Server database servers it can access, and the versions of TDS to use.

  2. Add the following entries to the end of the file. Replace aa.aa.aa.aa with the IP address or DNS name of the UNIX computer running SQL Server, and bbbb with the port that SQL Server is listening to on this computer (SQL Server usually listens on port 1433):

    Note  The name, SQLServer, does not have to be the same as the name of the computer running SQL Server; it is simply an identifier that the FreeTDS library functions use to locate this entry in the configuration file, and it is the name passed in the –S option for most of the scripts.

     [SQLServer]
     host = aa.aa.aa.aa
     port = bbbb
     tds version = 8.0
  3. [Optional] Add the following entries to the end of the file. Replace xx.xx.xx.xx with the IP address of the UNIX computer running Sybase, and yyyy with the port that Sybase is listening to on this computer (Sybase usually listens on port 4100, although the Sybase database administrator may have changed it, so check with the Sybase database administrator first):

     [SybaseServer]
     host = xx.xx.xx.xx
     port = yyyy
     tds version = 5.0
  4. Save the file.

Testing the FreeTDS Configuration

You can test the FreeTDS configuration using tsql, a tool provided with FreeTDS, by following this procedure:

To test the installation of FreeTDS:

  1. At a shell prompt, type the following command. Replace server with the entry in freetds.conf that you want to test (SybaseServer or SQLServer), and password with the sa password for the selected server:

    tsql –S server –U sa –P password

  2. At the 1> prompt, type the following commands. The result should be a list of databases available on the selected server:

    SELECT name FROM sysdatabases
    go

  3. At the 1> prompt, type the following commands. The result should be a list of connections currently established to the selected server:

    EXEC sp_who
    go

  4. At the 1> prompt, type the following command to leave tsql:

    exit

Installing unixODBC

unixODBC is available in source form (in the file unixODBC-2.2.6.tar.gz) from the unixODBC Web site at http://www.unixodbc.com. This guide uses unixODBC version 2.2.6. This section describes the procedures you should follow for installing unixODBC under UNIX and SFU. The procedures assume you have already downloaded the file unixODBC-2.2.6.tar.gz.

Note  Under SFU, it is recommended that you download the file unixODBC-2.2.6.tar.gz to a directory under the SFU directory tree, such as /tmp.

To install unixODBC under UNIX:

  1. Log in to UNIX as the root user.

  2. At a shell prompt, move to the location holding the downloaded unixODBC source code, and then type the following command to unzip the file unixODBC-2.2.6.tar.gz to the file unixODBC-2.2.6.tar:

    gunzip unixODBC-2.2.6.tar.gz
  3. Type the following command to unpack the file unixODBC-2.2.6.tar into the directory unixODBC-2.2.6:

    tar xvf unixODBC-2.2.6.tar
  4. Move to the unixODBC-2.2.6 directory and type the following command to generate the files needed to build unixODBC. This command will generate a number of messages on the screen as it examines your UNIX configuration and generates the appropriate make files:

    ./configure

    Note  The configure script uses a number of well-known tricks to ascertain which tools and libraries are available that it can use to compile unixODBC. If the configure script fails it is usually because the script cannot find a particular tool, file, or library. You can run the script supplying parameters to help it analyze your environment. Execute

    ./configure –h

    for more details.

  5. Type the following command to build unixODBC. As before, you will see a large number of messages as the build process progresses:

    make
  6. Type the following command to install unixODBC:

    make install
  7. Type the following command to check that unixODBC was installed successfully:

    /usr/local/bin/isql

    If unixODBC is installed correctly, you will see the following message:

    ***********************************************
    * unixODBC - isql                             *
    ***********************************************
    * Syntax                                      *
    *                                             *
    *      isql DSN [UID [PWD]] [options]         *
    *                                             *
    * Options                                     *
    *                                             *
    * -b         batch.(no prompting etc)         *
    * -dx        delimit columns with x           *
    * -x0xXX     delimit columns with XX, where   *
    *            x is in hex, ie 0x09 is tab      *
    * -w         wrap results in an HTML table    *
    * -c         column names on first row.       *
    *            (only used when -d)              *
    * -mn        limit column display width to n  *
    * -v         verbose.                         *
    * --version  version                          *
    *                                             *
    * Notes                                       *
    *                                             *
    *      isql supports redirection and piping   *
    *      for batch processing.                  *
    *                                             *
    * Examples                                    *
    *                                             *
    *      cat My.sql | isql WebDB MyID MyPWD -w  *
    *                                             *
    *      Each line in My.sql must contain       *
    *      exactly 1 SQL command except for the   *
    *      last line which must be blank.         *
    *                                             *
    * Please visit;                               *
    *                                             *
    *      http://www.unixodbc.org                *
    *      pharvey@codebydesign.com               *
    *      nick@easysoft.com                      *
    ***********************************************

To install unixODBC under Microsoft Windows Services for UNIX:

  1. A precompiled and directly installable version of unixODBC (currently version 2.2.8) is available at http://interopsystems.com/tools. This can be installed under SFU 3.5 using the pkg_add command:

    pkg_add ftp://ftp.interopsystems.com/pub/db/unixODBC-2.2.8-bin.tgz

    This installs the package under the /usr/local directory.

  2. Type the following command to check that unixODBC was installed successfully:

    /usr/local/bin/isql

    If unixODBC is installed correctly, you will see the following message:

    ***********************************************
    * unixODBC - isql                             *
    ***********************************************
    * Syntax                                      *
    *                                             *
    *      isql DSN [UID [PWD]] [options]         *
    *                                             *
    * Options                                     *
    *                                             *
    * -b         batch.(no prompting etc)         *
    * -dx        delimit columns with x           *
    * -x0xXX     delimit columns with XX, where   *
    *            x is in hex, ie 0x09 is tab      *
    * -w         wrap results in an HTML table    *
    * -c         column names on first row.       *
    *            (only used when -d)              *
    * -mn        limit column display width to n  *
    * -v         verbose.                         *
    * --version  version                          *
    *                                             *
    * Notes                                       *
    *                                             *
    *      isql supports redirection and piping   *
    *      for batch processing.                  *
    *                                             *
    * Examples                                    *
    *                                             *
    *      cat My.sql | isql WebDB MyID MyPWD -w  *
    *                                             *
    *      Each line in My.sql must contain       *
    *      exactly 1 SQL command except for the   *
    *      last line which must be blank.         *
    *                                             *
    * Please visit;                               *
    *                                             *
    *      http://www.unixodbc.org                *
    *      pharvey@codebydesign.com               *
    *      nick@easysoft.com                      *
    ***********************************************

Testing the unixODBC Configuration

unixODBC requires an ODBC driver to connect to a database. A number of third-party drivers are available, listed at http://www.unixodbc.com. When you have installed a driver, you can test the unixODBC configuration using the unixODBC isql utility to connect to the database.
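For reference, registering a driver with unixODBC typically involves entries along these lines in odbcinst.ini and odbc.ini. The file locations, driver library path, and DSN name below are illustrative assumptions that depend on which driver you install and where; consult the driver's own documentation for the exact values:

```
# /usr/local/etc/odbcinst.ini -- registers the driver library
# (path to the driver library is an assumption)
[FreeTDS]
Description = FreeTDS driver for Sybase and SQL Server
Driver      = /usr/local/lib/libtdsodbc.so

# /usr/local/etc/odbc.ini -- defines a data source that uses it;
# Servername refers to an entry in freetds.conf, such as [SQLServer]
[SQLServerDSN]
Driver      = FreeTDS
Servername  = SQLServer
Database    = pubs
```

With such a DSN in place, a command of the form isql SQLServerDSN sa password should connect if the driver and server are reachable.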

Note  It is beyond the scope of this guide to provide information on the installation of third-party ODBC drivers for use with unixODBC.

Appendix K: ActiveState Perl Installation

ActiveState Perl can be downloaded from the ActiveState Web site at http://www.activestate.com. ActiveState Perl is available in prebuilt binary form for Linux, Solaris, and Windows, but the source code is also available and can be used to build ActiveState Perl on other UNIX platforms.

This guide is based upon ActiveState Perl 5.6.1, build 635, although later implementations should operate equally well — the Sybase Migration Toolkit requires version 5.5 or later.

This appendix describes how to install ActiveState Perl under Windows and UNIX.

Windows Installation

This section describes how to install ActiveState Perl using the prebuilt Microsoft Installer (MSI) package ActivePerl-5.6.1.635-MSWin32-x86.msi.

To install ActiveState Perl under Windows:

  1. Execute the file ActivePerl-5.6.1.635-MSWin32-x86.msi. Windows Installer starts the installation.

  2. In the ActivePerl 5.6.1 Build 635 Setup screen, click Next.

  3. In the ActivePerl 5.6.1 Build 635 License Agreement screen, select I accept the terms in the License Agreement and click Next.

  4. In the Custom Setup screen, make sure all features are selected to be installed (the default) and click Next.

  5. In the New features in PPM screen, keep the Enable PPM3 to send profile info to ASPN check box clear and click Next.

  6. In the Choose Setup Options screen, check all four options and click Next.

  7. In the Ready to Install screen, click Install. The Installing ActivePerl 5.6.1 Build 635 screen will appear indicating the progress of the installation process.

  8. In the Completing the ActivePerl 5.6.1 Build 635 Setup Wizard screen, clear the Display the release notes check box and click Finish.

UNIX Installation

This section describes how to install ActiveState Perl under UNIX using the source distribution AP635_source.tgz.

To install ActiveState Perl under UNIX:

  1. Log into UNIX as the root user.

  2. At a shell prompt, move to the location holding the downloaded Perl source code, and then type the following command to unzip the file AP635_source.tgz to the file AP635_source.tar:

    gunzip AP635_source.tgz
  3. Type the following command to unpack the file AP635_source.tar into the directory AP635_source:

    tar xvf AP635_source.tar
  4. Move to the AP635_source directory and type the following command to remove files created by an earlier installation (this step is optional if you have not installed ActiveState Perl before):

    rm –f config.sh Policy.sh 
  5. Type the following command to generate the files needed to build ActiveState Perl. This command will generate a large number of messages on the screen as it examines your UNIX configuration and generates the appropriate make files:

    sh Configure –de

    Note  The Configure script uses a number of well-known tricks to ascertain which tools and libraries are available that it can use to compile Perl. The Configure script will output a number of "found" and "not found" messages as it proceeds. If all is well, the final message will be:

    Now you must run 'make'.

    If the Configure script fails, it is usually because the script cannot find a particular tool, file, or library. You can run the script omitting the

    -de
    flag. In this mode, Configure will prompt you to supply the configuration details it requires.
  6. Type the following command to build ActiveState Perl. As before, you will see a large number of messages as the build process progresses:

    make
  7. Type the following command to install Perl:

    make install
  8. When the following prompt appears, type Y to link /usr/bin/perl to /usr/local/bin/perl:

    Many scripts expect perl to be installed as /usr/bin/perl.
    Do you wish to have /usr/bin/perl be the same as /usr/local/bin/perl? [y] at installer line 483.
  9. At the shell prompt, type the following command to check that Perl 5.6.1 has installed correctly:

    perl -v

    A message similar to the one shown here should appear:

    This is perl, v5.6.1 built for i686-linux
    Copyright 1987-2001, Larry Wall
    Binary Build 635 provided by ActiveState Corp. http://www.ActiveState.com
    Built 09:03:27 Nov 18 2003

    Perl may be copied only under the terms of either the Artistic License or the GNU General Public License, which may be found in the Perl 5 source kit.

    Complete documentation for Perl, including FAQ lists, should be found on this system using `man perl' or `perldoc perl'. If you have access to the Internet, point your browser at http://www.perl.com/, the Perl Home Page.
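For reference, the numbered steps above condense into the following shell session. This is a sketch only: it assumes AP635_source.tgz has already been downloaded into the current directory, that you are running as root, and that a C compiler and make are available.

```shell
#!/bin/sh
# Consolidated build sequence for ActiveState Perl 5.6.1 (Build 635).
# Assumes AP635_source.tgz is in the current directory and that you are
# running as root with a C compiler and make installed.
set -e                      # stop at the first failing step

gunzip AP635_source.tgz     # step 2: .tgz -> AP635_source.tar
tar xvf AP635_source.tar    # step 3: unpack into AP635_source/
cd AP635_source
rm -f config.sh Policy.sh   # step 4: clear leftovers from any earlier build
sh Configure -de            # step 5: probe the system, generate makefiles
make                        # step 6: compile
make install                # step 7: install under /usr/local
perl -v                     # step 9: confirm that v5.6.1 is on the PATH
```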

Appendix L: Team Roles and Responsibilities

The core team for a migration project consists of six goal-driven role clusters (commonly referred to simply as "roles"). Each role cluster groups the functional areas and responsibilities that are naturally aligned because they support the same quality goal for the solution. Collectively, the quality goals represent the major criteria the project must meet to be judged a success. Table L.1 describes these roles:

Table L.1 Migration Project Key Role Clusters and Their Goals and Responsibilities

Role Cluster and Goal

Responsibilities

Product Management

Goal: To satisfy customers

Defines and maintains business justification for the project; gathers, analyzes, and prioritizes customer and business requirements; defines and measures the business value realization and metrics; manages customer expectations and communications

Program Management

Goal: To deliver solution within project constraints

Tracks and manages the budget and project schedule; drives risk management process; manages resource allocation; facilitates communication within the team; drives overall solution design; manages solution scope and critical trade-off decisions

Development

Goal: To build solution according to specifications

Provides input on technical implications and feasibility of the solution; may prototype technology options; drives the development plan, infrastructure development, and configuration management

User Experience

Goal: To enhance user effectiveness of solution

Analyzes user performance and support requirements; may develop use cases for the solution; defines user assistance documentation and training needs.

Test

Goal: To approve for release only after all solution quality issues are identified and addressed

Articulates quality goals for the solution; defines test approach and acceptance criteria; creates test strategies to ensure quality of the solution; conducts tests; tracks all bugs and communicates issues

Release Management

Goal: To achieve smooth deployment, ongoing operations

Defines deployment requirements and implications; creates rollout and pilot plans; manages deployment.

The responsibilities of a role cluster may be fulfilled by one person or by several. The size and scope of your enterprise and the scale of the environment you are migrating will determine whether certain roles require multiple individuals to fulfill them or, alternatively, whether certain individuals can fulfill multiple roles. In most migration projects the Development role is likely to require the largest number of individuals and will become a subteam of the core team. In fact, it is recommended for a Sybase migration project to have two subteams filling the Development role — one for the database migration and one for the client migration.

For a detailed discussion of building high performance project teams, scaling teams according to the size and complexity of the project, the risks of combining roles that are intended to check and balance one another, and the changing importance of different roles throughout the phases of the project, see the UNIX Migration Project Guide at http://www.microsoft.com/technet/interopmigration/sag/unix/umpg/default.mspx.

Appendix M: Familiarizing with SQL Server Interfaces and Connectivity

Before you begin development, you should have a good understanding of the information in this section. The following topics present important information that is applicable to database application migrations.

The following sections describe the application program interfaces (APIs) that applications can use when accessing SQL Server. This information is also useful if you are going to rewrite applications to make use of the native SQL Server APIs.

SQL Server Application Programming Interfaces

SQL Server 2000 supports several classes of APIs that applications can use to access SQL Server resources. They are:

  • General database access APIs. These APIs allow applications to work with data in a relational database. The APIs exposed through Microsoft libraries, such as ODBC and the .NET Framework, present results to applications as tabular result sets using the Tabular Data Stream (TDS) format. The ODBC API can also be used from clients running under UNIX and SFU if unixODBC is installed and configured.

    Note  SQL Server and Sybase also provide APIs that allow client applications to submit and extract data as XML documents. Although the APIs exposed by Sybase and SQL Server are different, the format of the XML data that is generated is the same in both cases.

  • SQL Server database services APIs. These APIs allow applications to administer and configure the services included with the Microsoft SQL Server database engine, such as replication and Data Transformation Services (DTS). These APIs are currently only available to Windows applications.

Additional SQL Server 2000 APIs supporting Windows platforms:

  • SQL Server 2000 Analysis Services API. This API gives applications access to the online analytical processing (OLAP) and data mining facilities of Analysis Services. This API is only available to Windows applications.

    For more information, see "Programming Analysis Services Applications" at http://msdn.microsoft.com/library/en-us/olapdmpr/printro_5vqr.asp.

  • Meta Data Services API. This API gives applications access to the repository of SQL Server metadata stored in Meta Data Services. This API is only available to Windows applications.

    For more information, see "Programming Meta Data Services Applications" at http://msdn.microsoft.com/library/en-us/repospr/rpmdsprgramming_035f.asp.

  • English Query API. This API provides applications the capability to pass customer questions, written in English, about information in a database or OLAP cube to the English Query engine. The engine returns a Transact-SQL statement or multidimensional expressions (MDX) query that can be executed to answer the question. This API is only available to Windows applications.

    For more information, see "Developing and Deploying English Query Applications" at http://msdn.microsoft.com/library/en-us/eqbol/eqconcpt_1koj.asp.

    Note  For more information about additional APIs that SQL Server 2000 supports, see "Application Development Architecture" at http://msdn.microsoft.com/library/en-us/architec/8_ar_ad_71wl.asp.

The following subheadings discuss the general database access APIs and the SQL Server database services APIs. These APIs can be used to access Sybase and SQL Server, and are frequently used by existing Sybase client applications running under Windows. It is useful to understand them if you need to modify such applications to communicate with SQL Server. For information about the other APIs in this list, you should read the articles that have been cited in the list.

General Database Access APIs

SQL Server (and Sybase) database applications generally deal with data in one of two formats:

  • Tabular result sets (also called rowsets). The application uses a database API to execute a Transact-SQL statement and process any result sets that may be returned. These APIs support result set processing and include Microsoft ActiveX® Data Objects (ADO) and OLE-DB for accessing Microsoft SQL Server, as well as Open Database Connectivity (ODBC), Embedded SQL for C, CT-Library, and DB-Library for accessing both SQL Server and Sybase.

  • XML documents. The application uses an API or uniform resource locator (URL) to execute a Transact-SQL statement or XPath query. The application then retrieves any XML document that is returned. Both Microsoft SQL Server and Sybase provide support for XML processing, although the APIs they use are different.

In addition, SQL Server 2000 can be accessed from URLs in Internet applications. URLs are formatted strings, or stream objects, that Internet applications use to access resources that are available through the Internet or an organization's intranet. SQL Server 2000 supports URLs that specify Transact-SQL statements, query templates, or XPath queries.
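To make the URL mechanism concrete, the sketch below builds a query URL of the general shape the SQL Server 2000 ISAPI extension accepts, with the Transact-SQL statement carried in the sql parameter and root naming a wrapping element for the returned XML. The server name and virtual directory are placeholders, and for brevity only spaces are encoded; a real client must URL-encode all reserved characters.

```shell
# Sketch of a SQL Server 2000 query URL. "myserver" and the "nwind"
# virtual directory are placeholder names; only spaces are encoded here,
# so a real client must escape other reserved characters as well.
sqlxml_url() {
    server="$1"; vdir="$2"; query="$3"
    encoded=$(printf '%s' "$query" | tr ' ' '+')
    printf 'http://%s/%s?sql=%s&root=ROOT\n' "$server" "$vdir" "$encoded"
}

sqlxml_url myserver nwind 'SELECT CustomerID FROM Customers FOR XML AUTO'
# -> http://myserver/nwind?sql=SELECT+CustomerID+FROM+Customers+FOR+XML+AUTO&root=ROOT
```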

Any SQL commands sent to SQL Server 2000 through the database APIs or URLs must comply with the Transact-SQL language. Transact-SQL complies with the Entry Level of the SQL-92 standard, and it supports powerful extensions to SQL-92. The SQL Server OLE DB provider and SQL Server ODBC driver also support the ODBC SQL specification.

For more information, see "Transact-SQL Overview" at http://msdn.microsoft.com/library/en-us/tsqlref/ts_tsqlcon_6lyk.asp.

Table M.1 describes the general database APIs that SQL Server 2000 supports and provides links to supporting documentation.

Table M.1: SQL Server 2000 APIs

Topic

Description

ADO and SQL Server

The ADO API is recommended as the primary API for accessing data from general business applications, such as human resources, accounting, and marketing applications. ADO encapsulates the OLE-DB API in a simplified object model that reduces application development and maintenance costs. The SQL Server OLE-DB provider is the preferred provider to use in ADO applications that access SQL Server. ADO, similar to OLE-DB, can access data from many sources, not just SQL Server databases. In SQL Server 2000, ADO supports Extensible Markup Language (XML) document processing in addition to relational result set processing. For more information, see "Programming ADO SQL Server Applications" at http://msdn.microsoft.com/library/en-us/adosql/adoprg01_1kwv.asp.

URL access to SQL Server

These are formatted strings or stream objects that Internet applications use to access resources that are available on the Internet or intranet. SQL Server 2000 supplies an ISAPI DLL that allows access to SQL Server 2000 over HTTP. For more information, see "URL Access" at http://msdn.microsoft.com/library/en-us/xmlref/xmlref_7583.asp.

OLE-DB and SQL Server

This is the strategic, low-level, COM API for accessing data. OLE-DB is recommended for developing tools, utilities, or low-level components that need high performance. The SQL Server OLE-DB provider is a native, high performance provider that accesses the SQL Server TDS protocol directly. In SQL Server 2000, OLE-DB supports XML document processing in addition to relational result set processing. For more information, see "OLE-DB and SQL Server" at http://msdn.microsoft.com/library/en-us/oledbsql/9_ole_01_87xh.asp.

ODBC and SQL Server

ODBC is a generic API for accessing databases. The SQL Server ODBC driver is a native, high-performance driver that directly accesses the SQL Server TDS protocol. Applications running under UNIX and SFU can use unixODBC. For more information, see "Programming ODBC SQL Server Applications" at http://msdn.microsoft.com/library/en-us/odbcsql/od_6_0oview_8sc3.asp.

Embedded SQL and SQL Server

This is an API defined for accessing SQL Server databases from C or COBOL applications. For more information, see "Programming Embedded SQL for C" at http://msdn.microsoft.com/library/en-us/esqlforc/ec_6_epr_01_3m03.asp.

See also http://support.microsoft.com/default.aspx?scid=kb;en-us;137637. Note that newer SQL Server ODBC drivers may not support Sybase.

DB-Library and SQL Server

This is the C API that is designed to work with SQL Server. For more information, see "Programming DB-Library for C" at http://msdn.microsoft.com/library/en-us/dblibc/dbc_pdc00b_589v.asp.

Through its support of ODBC, SQL Server 2000 also supports applications written to the Remote Data Objects (RDO) and Data Access Objects (DAO) APIs, which are object APIs that encapsulate ODBC.

The SQL Server Books Online topics concerning ADO, OLE-DB, and ODBC cover only the issues that are specific to those APIs when you are using the SQL Server OLE-DB provider or the SQL Server ODBC driver. SQL Server Books Online assumes that you are familiar with the general concepts for the API that you are using and that you have access to the documentation for the API.

For more information about ADO, OLE DB, and ODBC, see "Data Access Technologies" at http://www.microsoft.com/isapi/redir.dll?Prd=uda&Ar=home.
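As a small worked example of the ODBC route, the fragment below assembles a DSN-less connection string using the keywords the SQL Server ODBC driver recognizes (DRIVER, SERVER, DATABASE, UID, PWD). The server, database, and credential values are placeholders; under UNIX with unixODBC, the DRIVER value would instead name the driver entry configured in odbcinst.ini (see Appendix J).

```shell
# Sketch: build a DSN-less ODBC connection string for the SQL Server
# ODBC driver. All values below are placeholders.
build_conn_str() {
    printf 'DRIVER={SQL Server};SERVER=%s;DATABASE=%s;UID=%s;PWD=%s\n' \
        "$1" "$2" "$3" "$4"
}

build_conn_str dbserver pubs sa secret
# -> DRIVER={SQL Server};SERVER=dbserver;DATABASE=pubs;UID=sa;PWD=secret
```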

SQL Server Database Services APIs

Microsoft Distributed Transaction Coordinator (MS DTC) is a Windows component that allows applications to define distributed transactions. Distributed transactions protect the integrity of a series of updates made against multiple servers. SQL Server 2000 database applications can initiate distributed transactions by directly calling the MS DTC API, but the SQL Server database engine can also call MS DTC to implement the functionality that is required by distributed Transact-SQL statements that applications execute.

For more information about MS DTC, see "MS DTC Distributed Transactions" at http://msdn.microsoft.com/library/en-us/acdata/ac_8_md_06_8w6r.asp.

SQL Server 2000 supports APIs that allow applications to configure and administer the DTS and replication components of SQL Server. Applications can use the same administration and configuration API, called SQL-DMO, which the SQL Server tools use when managing instances of SQL Server. Table M.2 describes the SQL Server database services APIs and provides links to supporting documentation.

Table M.2: SQL Server Services APIs

Topic

Description

Data Transformation Services

Data Transformation Services (DTS) is a set of COM interfaces (based on OLE DB) for defining and executing complex data transformations between OLE DB data providers. For more information, see "DTS Programming Reference" at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dtsprog/dtsptasks_4yp1.asp.

Distributed Transaction Coordinator

The Distributed Transaction Coordinator is a component that allows applications to define distributed transactions that protect the integrity of a series of updates made against multiple servers. Applications define distributed transactions through the transaction commands of Transact-SQL or the DB-Library API, and SQL Server 2000 interfaces with MS DTC to implement them. For more information, see "MS DTC Distributed Transactions" at http://msdn.microsoft.com/library/en-us/acdata/ac_8_md_06_8w6r.asp.

Extended Stored Procedures

The SQL Server Extended Stored Procedures API consists of C and C++ functions and macros that are used for writing code that can be invoked from Transact-SQL using stored procedure syntax. For more information, see "Programming Extended Stored Procedures" at http://msdn.microsoft.com/library/en-us/odssql/ods_6_con_00_6p9v.asp.

Replication

SQL Server provides a set of COM interfaces for defining and managing replication between instances of SQL Server databases. You can also replicate data from heterogeneous third-party databases to SQL Server. For more information, see "Getting Started with Replication Programming" at http://msdn.microsoft.com/library/en-us/replprog/rp_4_3roprg_37qr.asp.

Distributed Management Objects

Distributed Management Objects (DMO) provides a set of COM interfaces for managing and administering SQL Server 2000. For more information, see "Developing SQL-DMO Applications" at http://msdn.microsoft.com/library/en-us/sqldmo/dmoref_con01_2yi7.asp.

For more information about using these APIs, see "Application Development Architecture" at http://msdn.microsoft.com/library/en-us/architec/8_ar_ad_71wl.asp.

Third Party Drivers for Sybase

A number of vendors supply ADO.NET, ADO, and ODBC drivers that allow Windows applications to access Sybase databases. One such vendor is DataDirect Technologies. For more information about ADO.NET drivers, see http://www.datadirect-technologies.com/products/dotnet/dotnetindex.asp. For more information about ADO drivers and ODBC drivers, see http://www.datadirect.com.

JDBC Drivers

UNIX clients that are written in Java are more straightforward to migrate than applications written in other languages. If the Java code uses Sybase’s jConnect JDBC driver, you can replace the jConnect driver with a JDBC driver that supports SQL Server. The SQL Server 2000 driver for JDBC is a Type 4 JDBC driver that provides highly scalable and reliable connectivity for the enterprise Java environment. It provides JDBC access through any Java-enabled applet, application, or application server (it supports Java Development Kit versions 1.3, 1.2, and 1.1.8).

The driver technology, licensed by Microsoft through an agreement with DataDirect Technologies, allows Java and J2EE applications to integrate with SQL Server 2000. JDBC is the standard API for accessing data within the Java and J2EE platforms. DataDirect Technologies provides an OEM version (2.0) of their JDBC SQL driver to Microsoft, which Microsoft distributes free of charge. Version 3.0 of this driver is available as a link on the page at http://www.microsoft.com/sql/default.asp.
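In practice, the change for such a client is usually confined to the driver class and the connection URL. The sketch below contrasts the two URL shapes; the host, port, and database names are placeholders, and the exact property syntax should be confirmed against the driver documentation.

```shell
# Sketch: JDBC URL shapes for Sybase jConnect versus the SQL Server 2000
# driver for JDBC (driver class com.microsoft.jdbc.sqlserver.SQLServerDriver).
# Host, port, and database names are placeholders.
sybase_jdbc_url() {
    printf 'jdbc:sybase:Tds:%s:%s/%s\n' "$1" "$2" "$3"
}
mssql_jdbc_url() {
    printf 'jdbc:microsoft:sqlserver://%s:%s;DatabaseName=%s\n' "$1" "$2" "$3"
}

sybase_jdbc_url dbhost 5000 pubs2
# -> jdbc:sybase:Tds:dbhost:5000/pubs2
mssql_jdbc_url dbhost 1433 pubs
# -> jdbc:microsoft:sqlserver://dbhost:1433;DatabaseName=pubs
```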

Additional References and Recommended Client Migration Reading

  • For more information that will help you understand the technologies and processes that are involved in migrating UNIX applications, see the UNIX Application Migration Guide, at http://msdn.microsoft.com/library/?url=/library/en-us/dnucmg/html/ucmglp.asp?frame=true. This document provides guidelines and best practices that are required to port existing UNIX applications to a Windows environment.

  • The recommended APIs for accessing a SQL Server database from client applications on the Microsoft Win32® platform are ADO.NET, ADO, and ODBC. For more information about the native Microsoft drivers, see http://www.microsoft.com/data.

  • For more information about the structures and mechanisms used by Sybase and SQL Server, see Appendix B, "Sybase and SQL Server Architectural Differences."

  • For more information about the syntactic differences between Sybase stored procedures and triggers and those used by SQL Server, see Appendix C, "Sybase Stored Procedure and Trigger Migration to Microsoft SQL Server."

  • For more information about the functional and syntactic differences between Sybase stored procedures and triggers and those used by SQL Server, see Appendix D, "Transact-SQL Behavioral Differences."

  • For more information about SQL Server, see the SQL Server product documentation at http://www.microsoft.com/sql/techinfo/productdoc/2000/default.asp.

  • For more information about Sybase, see the Sybase product manuals at http://www.sybase.com/support/manuals.

Download

Get the Solution Guide for Sybase/UNIX to SQL Server 2000: Database Engine Migration and Application Interoperation Guide
