Chapter 10 - Access to Legacy Applications and Data

By connecting your intranet to the IBM Systems Network Architecture (SNA) network environment, you can gain access to the wealth of applications and data available on IBM platforms such as Multiple Virtual Storage (MVS) and OS/400 systems. Using Microsoft® SNA Server 4.0 SP2 as a data access server, you can readily develop scripts in Active Server Pages (ASP) that interoperate with legacy applications, allowing you to process legacy data and distribute it across your intranet and over the Internet.

This chapter presents strategies for using Internet Information Services (IIS) 5.0 Web applications to access applications and data on IBM systems running in SNA environments—mainframes running MVS and AS/400 computers running OS/400. The strategies and scenarios in this chapter will help you use Microsoft development tools and production software packages in order to integrate legacy applications and data into IIS 5.0 Web applications.

On This Page

Identifying Strategies
Integrating IIS and Legacy Applications
Gaining Access to Legacy Data
Replicating Legacy Databases
Migrating Transaction Processing
Additional Resources

Identifying Strategies

To leverage Web technology, an enterprise must make its business applications and data easily accessible to its employees, key business partners, and customers. This goal is sometimes difficult to achieve because so much mission-critical data (up to 80 percent of vital business information for many large corporations and government agencies) is stored in host-based file systems and databases on IBM mainframes or AS/400 computers. This information—product specifications, customer profiles, position descriptions, and much more—is often unavailable to the people who need it most, at the time they could make best use of it.

Delivering large amounts of data from legacy systems to many widely dispersed users has always been difficult because:

  • Legacy hardware and system software are very expensive, often making expansion on an Internet-wide scale prohibitive.

  • IBM network protocols are not widely supported outside the legacy environment—on the Internet, for example.

  • Application development costs are high, discouraging the development of modifications needed to make legacy data more widely available.

By using IIS 5.0 Web applications to access legacy applications and data on IBM systems, you can:

  • Integrate host applications running in legacy environments into IIS ASP applications. Connect host transaction processors to Microsoft® Windows® 2000 Server by using SNA Server 4.0 SP2 and Microsoft® COM Transaction Integrator (COMTI) for Customer Information Control System (CICS) and Information Management System (IMS).

  • Access legacy files at the record level by using SNA Server 4.0 SP2 and the Microsoft® OLE DB Provider for AS/400 and Virtual Storage Access Method (VSAM). Send the data to a Web application running on Windows 2000 Server.

  • Acquire host database structures using Microsoft® Host Data Replicator (HDR). Replicate them for Microsoft® SQL Server and IIS 5.0.

  • Move the automated processes from the legacy environment to the open, more cost-effective Windows 2000 Server environment. To accomplish this, use IIS 5.0 and Microsoft® Component Services (formerly Microsoft® Transaction Server).

Connecting to SNA

Each of the legacy access strategies discussed in this chapter requires connections to IBM host computers through SNA. To understand how each strategy is implemented, you need a basic understanding of how the SNA environment is constructed, how to connect to SNA resources, and how to exploit them. For more detailed information about the SNA environment, see the SNA Server 4.0 SP2 product documentation.

The SNA Environment

To transmit data peer-to-peer over SNA, a session must be established between two Logical Units (LUs) in accordance with the LU 6.2 protocol, with one LU on the host system and the other on the client system. Because LU 6.2 is peer-to-peer, either the mainframe host or the client can initiate a session.

Note: Older SNA networks are hierarchical and do not support peer-to-peer communications. In addition, they do not use LU 6.2; instead, other LU types are used that conform to a hierarchical networking scheme.

As shown in Figure 10.1, any computer connected to an SNA network that uses this protocol, including computers running Windows 2000 Server or Microsoft® Windows NT® Server 4.0, can participate in the SNA environment and gain access to:

  • Legacy host environments, including transaction processing monitors.

  • VSAM files.

  • AS/400 files (both flat and unstructured).

  • Database data structures, such as IBM DB2 data tables.


    Figure 10.1 The SNA Environment

Connecting Through Microsoft SNA Server

SNA Server 4.0 SP2 provides access to applications and data in the SNA environment from Windows 2000 Server. It translates Windows 2000 Server Transmission Control Protocol/Internet Protocol (TCP/IP) communications to the LU 6.2 protocol, providing access to the wealth of resources in the legacy environment. See Figure 10.2.

SNA Server 4.0 SP2 runs on Windows NT Server 4.0 and Windows 2000 Server.


Figure 10.2 SNA Server Connecting Windows 2000 to SNA

Developing and Deploying Applications on Windows

With SNA Server translating between TCP/IP and LU 6.2, you can develop and deploy applications that access the legacy environment from the Windows 2000 Server side of the connection.

Software tools used to gain access to SNA applications and data reside on Windows 2000 Server platforms and take advantage of the unified administrative tools and lower-cost resources in the Windows 2000 Server environment.

Application development and modification is accomplished in the Windows 2000 Server environment as well. This means that you can avoid the high overhead associated with development and modification of legacy host-side resources.

Integrating IIS and Legacy Applications

For years, IBM has encouraged its customers to gain better control of their software development and maintenance by coding their business logic into programs separate from their terminal access logic. Many Information Services (IS) organizations have responded by coding their business rules into transaction processing programs that execute on CICS or IMS. Gaining access to these programs on the host side from the Windows 2000 Server environment can open up the business rules for an entire application, such as inventory control or budgeting, thus creating new opportunities for distributed applications.

Using Web applications to access business logic offers significant advantages over traditional methods such as using a "screen scraper" to gather data from terminal emulation programs because:

  • All the data and processes that the business logic allows are accessible, rather than only the limited data and processes accessible in individual terminal access programs.

  • There is no requirement for a terminal emulator on the Windows 2000 Server platform because the processing involves no terminal access software.

  • Using native protocols, such as Advanced Program-to-Program Communications (APPC) and LU 6.2, requires fewer data translations, thus reducing the likelihood of errors.

  • The integration of legacy processes with IIS 5.0–based applications using tools such as COMTI is easier and less costly to accomplish than extensive enhancement of applications in the legacy environment.

COM Transaction Integrator

COMTI for CICS and IMS is a technology that integrates legacy transaction processing programs (that operate on mainframes) with Web application and transaction processes (that run in the Windows 2000 Server environment). See Figure 10.3.

COMTI reduces the effort required to develop applications by integrating COmmon Business Oriented Language (COBOL) programs on mainframes with Automation clients running Windows 2000 Server, Microsoft® Windows® 2000 Professional, Microsoft® Windows® 95, Microsoft® Windows® 98, or any other platform that supports Automation. Specifically:

  • COMTI can automatically create a recordset of the data returned from a mainframe transaction processing program. The recordset data, formatted in a tabular array, can then be accessed by scripts in ASP pages (see the sketch that follows Figure 10.3).

  • COMTI coordinates legacy transaction processing programs on the mainframe with transactions managed by Component Services. This extends the transaction environment to include transactions managed by CICS or IMS on an IBM mainframe computer.

  • COMTI development tools map COBOL data declarations to Automation data types.


    Figure 10.3 COMTI as a Proxy for the Mainframe
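
To make the first point in the preceding list concrete, the following sketch shows an ASP script (VBScript) that calls a COMTI component and walks the recordset it returns. This is a minimal illustration, not COMTI's actual API surface: the ProgID (MyBank.Account), the method name (GetTransactions), and the field names are hypothetical placeholders for a component library generated by the COMTI Component Builder.

  <%
  ' Instantiate the COMTI-generated component registered in Component
  ' Services. "MyBank.Account" is a placeholder ProgID.
  Dim objAccount, rsHistory
  Set objAccount = Server.CreateObject("MyBank.Account")

  ' COMTI forwards the call to the mainframe transaction processing
  ' program and returns the tabular results as a recordset.
  Set rsHistory = objAccount.GetTransactions(Request.QueryString("AcctNo"))

  ' Format the legacy data as HTML for the browser.
  Do While Not rsHistory.EOF
      Response.Write rsHistory("TranDate") & " " & rsHistory("Amount") & "<BR>"
      rsHistory.MoveNext
  Loop
  %>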

Functional Overview of the COM Transaction Integrator

The following list summarizes how COMTI gains access to CICS applications. It also shows how COMTI integrates data returned from CICS transaction processing programs with IIS 5.0, by using COM components and Component Services. Specifically, COMTI can:

  • Gain Access to CICS Transaction Processing Programs. COMTI directly supports any transaction processing program that executes in CICS or IMS. Because COMTI can access CICS programs, developers can issue application calls to the legacy environment, using CICS to reach any program under its control, including DB2 databases, VSAM files, and IMS databases.

  • Redirect Method Calls. COMTI is a generic proxy for the mainframe. It intercepts method calls from the client application and redirects them to transaction processing programs running on the mainframe. For example, when an Internet browser sends data that ASP interprets as requiring COMTI, IIS forwards the data to COMTI.

  • Reformat Method Calls. When COMTI intercepts a method call, it converts and formats the method's parameters from Automation data types into IBM System/390 mainframe data types.

  • Handle Return Values. COMTI handles the return of all output parameters and values from the mainframe, converting and reformatting them for IIS as needed.

COMTI processing runs on computers using Windows 2000 Server, not on the SNA host. It does not require any new executable code to be installed on the mainframe, or on the desktop computer that is running the Internet browser. COMTI communicates through SNA Server 4.0 SP2, using standard protocols (LU 6.2 or TCP/IP, each of which is supported on SNA Server 4.0 SP2) to communicate between the computer running Windows 2000 Server and the mainframe transaction processing program.

COMTI Development Scenarios

The following two scenarios illustrate how the COMTI environment can be used to develop applications that integrate transaction processing programs with ASP pages.

Scenario One: Integrating Legacy Transaction Processing Data Using COM Transaction Integrator

This scenario illustrates how you can connect a Windows 2000 Server–based Web site to an existing COBOL transaction processing program. Suppose you want to dynamically add content from a legacy database running under CICS on an IBM mainframe computer to a Web application running on IIS 5.0. You can begin by using ASP pages to interpret user requests and format the data returned by the mainframe application. Next, you can use COMTI to develop a component that passes method calls between the IIS environment and the mainframe environment.

This scenario involves six main steps:

  • Step 1 (setup time): Configuring COMTI

  • Step 2 (design time): Defining required methods and parameters

  • Step 3 (design time): Writing the application

  • Step 4 (design time): Testing the application

  • Step 5 (deployment): Deploying the application components

  • Step 6 (post-deployment): Maintaining the application

Step 1: Configuring COMTI

To develop a COMTI component, you must have the following installed:

  • Windows 2000 Server or Windows 2000 Professional

  • IIS 5.0 with Component Services 2.0

  • Microsoft® Windows Client for SNA Server

  • Microsoft® Data Access Components (MDAC) 2.0

Additionally, the following COMTI components must be installed:

  • The administration component, which collects information about the user's SNA environment.

  • The run-time component, which intercepts the method calls to the mainframe and uses the COMTI-created component library to perform the actual conversion and formatting of the method parameters. In addition, the run-time component interacts with SNA Server and builds packets, which are sent to the mainframe using LU 6.2 or TCP/IP.

  • Development tool components, featuring the Component Builder, a GUI used to create component libraries from mainframe COBOL programs.

The Component Builder can be installed as an add-in to Microsoft® Visual Basic® 5.0, and does not need to be installed on the same system as the other components. Developers who are not using Visual Basic 5.0 can use the Component Builder as a stand-alone tool.

Step 2: Defining Required Methods and Parameters

  1. Acquire the COBOL source code from the mainframe by using a file transfer mechanism, such as the FTP-AFTP gateway service that is included with SNA Server 4.0 SP2.

    Use the COBOL Import Wizard to:

    • Select the COBOL source code.

    • Specify the methods and mainframe transaction processing names.

    • Select input, output, and return value parameters.

  2. When necessary, change the mappings between the COBOL and Automation data types.

  3. Use the Component Builder to make a COMTI component type library (.tlb). This is a standard library that can be used by client software and Component Services.

If you have changed the data type mapping in the COBOL code, there are two more actions required in this step:

  • Use the Component Builder to generate new COBOL declarations.

  • Update the mainframe program with the new COBOL data declarations. This is the only instance requiring modifications to the mainframe environment.

Step 3: Writing the Application

  1. Write the client in a language that supports referencing of Automation objects, such as Visual Basic, Microsoft® Visual C++®, or Microsoft® Visual J++®.

  2. Add the appropriate COMTI component library to the references list in the project, and declare references to the component in the program.

  3. Invoke methods as appropriate throughout the application.

If the existing mainframe transaction processing program needs to be modified, do one of the following:

  • Perform the modification on the mainframe.

  • Use a Windows-based COBOL development environment, such as Micro Focus COBOL. Then move the code to the mainframe.

Step 4: Testing the Application

If the mainframe transaction processing program is unchanged, it does not require testing. If the transaction processing program has been modified, then the COBOL program should be tested independently to ensure that it runs correctly in its own environment.

To test the new application

  1. Ensure that the COMTI component library is registered in Component Services.

  2. Test the mainframe transaction processing independently, if it has been modified in any way.

  3. Test the newly developed COMTI component independently to ensure that it is working correctly.

  4. Test the application, running the mainframe transaction processing program.

Step 5: Deploying Application Components

Each of the following must be installed on the production computer before you deploy the client side of the application:

  • Windows 2000 Server or Windows 2000 Professional

  • Windows Client for SNA Server

  • Component Services

  • COMTI administration and run-time components

  • COMTI component libraries registered in Component Services

  • Client applications that access COMTI components

Step 6: Maintaining the Application

As changes are made to the mainframe transaction processing program, do one or more of the following, as appropriate:

  • Acquire the COBOL source code from the mainframe.

  • Use the COBOL Import Wizard to respecify the method names and host transaction processing names, and reselect the input, output, and return value parameters.

What If the Required Mainframe Transaction Processing Does Not Exist?

In this case, you must modify steps 2 and 3 of this scenario by developing a transaction processing program that runs on CICS (on the mainframe host).

In step 2, use the COMTI Component Builder to:

  • Enter the methods and parameters for the application.

  • Add information about the name and location of the new transaction processing program.

  • Change the default mappings produced by the Component Builder, if necessary.

  • Create the COMTI component library.

In step 3:

  • Write the mainframe transaction processing program, either on the mainframe or in the Windows environment using a product such as Micro Focus COBOL; then move the program to the mainframe for testing.

Scenario Two: Extending Transactions with COMTI

When deployed in the Windows 2000 Server–based environment, COMTI can extend transactions to include mainframe transaction processing programs running on CICS and IMS.

A developer can use the following steps in order to connect a Windows 2000 Server–based Web site to an existing COBOL transaction processing program. Doing so will make legacy data available to scripts in ASP pages. In this scenario, additional tasks are needed in order to extend transactions that are under the control of CICS.

Like the previous scenario, this scenario also involves six main steps:

Step 1: Configuring COMTI

To develop the COMTI object, you must have the following installed:

  • Windows 2000 Server or Windows 2000 Professional

  • IIS 5.0

  • Windows 2000 Client for SNA Server

  • Component Services

Additionally, the following COMTI components must be installed (for descriptions of each of the components, see Scenario One):

  • The administration component

  • The run-time component

  • The Component Builder

Step 2: Defining Required Methods and Parameters

To make the mainframe transaction processing program data available to IIS, perform the following tasks:

  1. Acquire the COBOL source code from the mainframe by using a file transfer mechanism such as the FTP-AFTP gateway that is delivered by SNA Server 4.0 SP2.

    Use the COBOL Import Wizard to:

    • Select the COBOL source code.

    • Specify the methods and mainframe transaction processing program names.

    • Select input, output, and return value parameters.

  2. When necessary, change the mappings between the COBOL and Automation data types.

  3. Use the Component Builder to make a COMTI component library (.tlb). This is a standard library that can be used by client software and Component Services.

If you have changed the data type mapping in the COBOL code, there are two more actions required in this step:

  • Use the Component Builder to generate new COBOL declarations.

  • Update the mainframe program with the new COBOL data declarations. This is the only instance requiring modifications to the mainframe environment.

Step 3: Writing the Application

  1. Write the client in a language that supports referencing of Automation objects, such as Visual Basic, Visual C++, or Visual J++.

  2. Add the appropriate COMTI component library to the references list in the project, and declare references to the component in the program.

  3. Invoke methods as appropriate throughout the application.

  4. Define any transaction-related attributes in the COMTI component. These attributes will handle transactions in a manner transparent to the client application (for example, an IIS 5.0 application using ASP). The COMTI component will call both the Component Services/Distributed Transaction Coordinator (DTC) and the transaction processing program running on CICS.
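
As a hedged illustration of item 4, the sketch below uses the ASP @TRANSACTION directive so that IIS 5.0 and Component Services coordinate the transaction transparently. The component ProgID (Payroll.Transfer) and its MoveFunds method are hypothetical; in practice they would come from a COMTI component library whose transaction attributes extend the transaction to the CICS program.

  <%@ TRANSACTION=Required LANGUAGE=VBScript %>
  <%
  ' This page runs inside a transaction coordinated by Component
  ' Services and the DTC. "Payroll.Transfer" is a placeholder ProgID.
  Dim objTransfer
  Set objTransfer = Server.CreateObject("Payroll.Transfer")
  objTransfer.MoveFunds Request.Form("FromAcct"), _
                        Request.Form("ToAcct"), _
                        CCur(Request.Form("Amount"))
  %>
  <%
  ' ASP calls one of these events when the transaction completes.
  Sub OnTransactionCommit()
      Response.Write "Transfer completed."
  End Sub

  Sub OnTransactionAbort()
      Response.Write "Transfer failed; all updates were rolled back."
  End Sub
  %>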

Step 4: Testing the Application

If the mainframe transaction processing program is unchanged, it does not require testing. If the transaction processing program has been modified, then the COBOL program should be tested independently to ensure that it runs correctly in its own environment.

To test the new application

  1. Test the mainframe transaction processing program independently, if it has been modified in any way.

  2. Test the newly developed COMTI component independently to ensure that it is working correctly.

  3. Test the application completely by driving the COMTI object with the client application and running the mainframe transaction processing program.

  4. Carry out a transaction test: test the COMTI object with the transactions made available, checking the operation between COMTI and Component Services, and between COMTI and the transaction processing program running on CICS.

Step 5: Deploying Application Components

Each of the following must be installed on the production computer before you deploy the client side of the application:

  • Windows 2000 Server or Windows 2000 Professional

  • Windows Client for SNA Server

  • Component Services

  • COMTI administration and run-time components

  • COMTI component libraries registered in Component Services

  • Client applications that access COMTI components

Step 6: Maintaining the Application

As changes are made to the mainframe transaction processing program, do one or more of the following, as appropriate:

  • Acquire the COBOL source from the mainframe.

  • Use the COBOL Import Wizard to respecify the method names and host transaction processing program names, and reselect the input, output, and return value parameters.

Using COMTI with IMS

Current versions of COMTI do not support transactional semantics (also known as a two-phase commit) on the IMS subsystem. However, you can access an IMS/DB database transaction through a front-end transaction processing program running on the CICS subsystem. That is, if the mainframe environment supports CICS transaction processing against IMS/DB, you can extend Component Services transactional semantics to the IMS/DB database. In this case, COMTI provides the same services as it does with any other transaction processing program running on CICS. If you do not require transactional semantics, and just want to gain access to your data, you can access IMS directly.

Gaining Access to Legacy Data

OLE DB data providers make it easy to access diverse legacy data sources from IIS 5.0. Data providers targeting legacy host environments include:

  • OLE DB provider for DB2

  • OLE DB provider for AS/400 and VSAM

This section describes how you can incorporate legacy file systems into your Web applications, by using a data provider to access AS/400 and VSAM files at the record level and move the data to the IIS 5.0 environment. For information about OLE DB provider for DB2, see the SNA Server product documentation.

Legacy File Data and IIS 5.0

To develop Web applications that deliver data stored in VSAM and AS/400 files, you need to gain access to those files from the Windows 2000 Server environment. You can do this by making the data available to data consumer applications running in ASP pages:

  • Access legacy file systems running on MVS and OS/400 to retrieve the business data stored in them.

  • Integrate legacy data with applications and data in the IIS 5.0 environment using the OLE DB provider for AS/400 and VSAM.

Access to VSAM and AS/400 Files with OLE DB and ActiveX Data Objects

The OLE DB provider for AS/400 and VSAM is the first data provider to offer record-level access to mainframe VSAM and AS/400 file systems from ASP applications. It allows consumer ASP applications to gain access to the mission-critical data stored in those file systems. The data provider ships with SNA Server 4.0 SP2, Windows Client for SNA Server, and the SNA Server SDK.

For more information about developing ASP applications, see "Developing Web Applications" in this book.

Functional Overview of the Data Provider

The OLE DB provider for AS/400 and VSAM comprises two core components. The first is an OLE DB–compatible data provider that insulates the OLE DB or Microsoft® ActiveX® Data Objects (ADO) programmer from the complexities of LU 6.2 programming. The second is an SNA Distributed Data Management (DDM) transaction management program that runs as a service on Windows 2000 Server, or as an application on Windows 95 or Windows 98.

The following list summarizes the uses of the data provider:

  • From Windows 2000 Server, you can gain access to VSAM and AS/400 file systems through the IBM DDM protocol server components (OLE DB/DDM Driver) installed on many IBM host systems. There is no need to install Microsoft software on the host system.

  • You can use customizable applications in order to read and write to VSAM and AS/400 files that are in place on IBM host computers. There is no need to migrate the files to the Windows 2000 Server environment.

  • You can gain access to fixed and variable logical-record-length classes, as well as file and record locking, while preserving file and record attributes.

  • You can gain access to most AS/400 file types (both physical and logical) and the most popular mainframe dataset types: sequential (SAM), VSAM key-sequenced (KSDS), VSAM entry-sequenced (ESDS), VSAM relative-record (RRDS), and partitioned (PDS/PDSE).

Leverage Existing Systems with Data Provider

The data provider makes it possible to integrate unstructured legacy file data with data in the Windows 2000 Server environment. The DDM protocol provides program-to-program communications through SNA Server 4.0 SP2 and native host protocols (such as LU 6.2 and TCP/IP). No custom development is required on the host system.

IBM DDM servers are available on host systems supporting record-level access to files. For example, Distributed File Manager, a component of the IBM Data Facility Storage Management Subsystem (DFSMS) version 1R2 or later, is a target DDM server installed on many mainframes running MVS or OS/390. On AS/400 computers, OS/400 (version 2R2 or later) runs as a DDM server. The data provider communicates with Distributed File Manager and OS/400 through LU 6.2 and TCP/IP.

The data provider makes it easy for developers to work with high-level component interfaces such as OLE DB or ADO. It supports development in Visual Basic, Visual C++, VBScript, and Microsoft® JScript®. Web developers don't need to know SNA or LU 6.2 programming.

Scenario: Using Data Provider to Gain Access to Host Files

With the data provider, you can gain access to file data on an IBM host from a Windows 2000 Server–based Web application. Suppose that you want to add content from a legacy file that is stored on an IBM mainframe or AS/400 computer to an ASP application running on IIS 5.0. ASP can interpret user requests and format the return of data to the user through Web pages. The data provider can process calls from the IIS 5.0 environment and pass data returned from the mainframe environment to IIS.

This scenario requires six main steps:

  • Step 1 (setup and configuration): Configuring the data provider

  • Step 2 (design time): Defining the application requirements

  • Step 3 (design time): Writing the application

  • Step 4 (design time): Testing the application

  • Step 5 (deployment): Deploying the application components

  • Step 6 (post-deployment): Maintaining the application

Step 1: Configuring the Data Provider

To develop an application using the data provider, you must have the following installed:

  • Windows 2000 Server or Windows 2000 Professional.

  • IIS 5.0 or later. This includes Microsoft® Data Access Components (MDAC) 2.1, the version supported by the data provider.

  • Microsoft® Windows® Client for Microsoft® SNA Server or SNA Server 4.0, configured to connect to SNA Server 4.0 SP2.

Additionally, the following packages must be installed and configured:

  • Microsoft OLE DB provider for AS/400 and VSAM.

  • Microsoft OLE DB provider for AS/400 and VSAM snap-in for Microsoft® Management Console (MMC). Configure the DDM service for the target host and PC locale. Optionally, configure the data sources if you are not passing the information through an ADO consumer application, and configure the mappings from mainframe data column descriptions to OLE DB data types.

Step 2: Defining Application Requirements

  1. Compile a list of target host files, keys, and alternate index paths. Define the subset of records to be read by the target Web application.

  2. Specify the ADO objects, methods, properties, and collections supported by the data provider that will be used in the application.

  3. Consider using Recordset.Filter to define recordsets based on logical search criteria and to search for records based on the application program and user input.

  4. Use the ADO Errors collection to trap errors in a format that the program can handle. This prevents passing unnecessary error conditions to the Web browser.

  5. Use either the automatic AS/400-to-OLE DB data transformation or custom mapping based on a data column description file from the DDM service host.

  6. Decide whether to automatically map logon user IDs obtained from the Web browser.

  7. Choose a deployment option and decide whether to run the DDM service on the computer running SNA Server, or on the computer running Windows Client for SNA Server.

Step 3: Writing the Application

  1. Write scripts in order to gain access to ADO 2.0 from an ASP page. These should be written in a language that supports referencing of Automation objects, such as VBScript or JScript.

  2. Cast data to match the OLE DB and host data types. Refer to the recordset schema in order to determine which host data types are supported. Ensure that the host data is valid before writing to the host files, especially if a host application concurrently gains access to host data files.

  3. Check the syntax of supported OLE DB methods and properties. Pay special attention to the connection string (which is constructed using the data link dialog boxes) and the Recordset.Open parameters. These are unique to each OLE DB provider.

  4. If appropriate, use the MS$SAME placeholder to pass the user ID and password to the SNA Server 4.0 SP2 host security feature. The security feature performs security mapping between Windows accounts and mainframe accounts.

  5. Program some loops to ensure that target recordsets contain data before invoking recordset methods. This allows for delays caused by network conditions and remote target hosts.
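
The following sketch ties several of these steps together: it opens an AS/400 file through ADO, checks the Errors collection, and applies Recordset.Filter. It assumes the OLE DB provider for AS/400 and VSAM is installed and a connection to SNA Server 4.0 SP2 is configured; the provider name shown (SNAOLEDB), the connection parameters, and the library, file, and field names are illustrative and must be adjusted for your environment.

  <%
  On Error Resume Next

  Dim cnHost, rsParts
  Set cnHost = Server.CreateObject("ADODB.Connection")
  cnHost.Open "Provider=SNAOLEDB;APPC Local LU=LOCAL;" & _
              "APPC Remote LU=AS400;User ID=HOSTUSER;Password=HOSTPASS"

  ' Check the ADO Errors collection rather than passing raw error
  ' conditions on to the browser.
  If cnHost.Errors.Count > 0 Then
      Response.Write "Unable to reach the host system."
      Response.End
  End If

  ' Open a host file as a recordset; "PARTSLIB/PARTMAST" is a
  ' placeholder AS/400 library/file name.
  Set rsParts = Server.CreateObject("ADODB.Recordset")
  rsParts.Open "PARTSLIB/PARTMAST", cnHost, 3, 1  ' adOpenStatic, adLockReadOnly

  ' Recordset.Filter narrows the records sent to the browser.
  rsParts.Filter = "QTYONHAND > 0"

  Do While Not rsParts.EOF
      Response.Write rsParts("PARTNO") & ": " & rsParts("QTYONHAND") & "<BR>"
      rsParts.MoveNext
  Loop
  %>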

Step 4: Testing the Application

Test the new application to make sure that:

  • The ASP pages run correctly.

  • There is clean communication between ADO and the DDM Service on the Windows 2000 Server side. Consider starting the DDM Service on Windows 2000 Server automatically, in order to ensure the timely availability of a PC-to-host connection with minimal session startup time.

  • There are reliable, efficient operations between the DDM Service and the host by way of SNA Server 4.0 SP2. Consider keeping the connection between SNA Server and the host active, in order to reduce session startup time.

  • There is proper data display in the Web applications. Ensure the data integrity of host files, because there can be a loss of precision when you move data from the host to the PC and back again.

Step 5: Deploying Application Components

Each of the following must be installed on each production computer before you deploy the application:

  • Windows 2000 Server or Windows 2000 Professional.

  • Windows Client for SNA Server.

  • IIS 5.0.

  • OLE DB provider for AS/400 and VSAM.

  • ASP pages requesting data from the legacy files.

  • DDM Service running with Windows Client for SNA Server or SNA Server 4.0. To improve responsiveness, consider starting the DDM Service automatically when the system restarts.

Step 6: Maintaining the Application

If you modify the scripts in ASP pages, you need to retest the application by using the following guidelines:

  • Test the application fully if you add new scripts or script fragments to existing ASP pages that request data from new data sources.

  • If the target host data files are restructured, or new host tables are added, these changes need to be incorporated into the Web application by modifying ADO methods and creating new recordsets as needed.

  • If the host connectivity changes, you should verify the Windows Client for SNA Server configuration or the data provider data sources.

Replicating Legacy Databases

Much of the data stored in legacy systems resides in relational databases. One option for making legacy data available to IIS 5.0 is to replicate database tables from a legacy application to SQL Server 7.0 running on Windows 2000 Server.

Why Replication?

Legacy database replication is a set of conversion processes that copies, reformats, and migrates database tables for use in relational databases running on Windows 2000 Server. Using data replicated from a legacy database, developers and systems engineers can:

  • Integrate the legacy data with data from the Windows 2000 Server side. If the data is replicated for storage in a Windows 2000 Server–side database, IIS 5.0 connects Internet or intranet clients to dynamically created Web pages, and the data is retrieved through an OLE DB provider or Open Database Connectivity (ODBC).

  • Readily subject the data to new business logic. For new processes involving the database, developing in Windows is less costly than developing in legacy systems.

  • Manage the data efficiently. Windows 2000 Server with IIS 5.0 provides a common set of system management tools, database management tools, Web application servers, and transaction services.

  • If the application requires it, rereplicate the data from SQL Server back to the legacy database.

Replicate Data Using Data Transformation Services

Data Transformation Services (DTS) is a standard feature of Microsoft® SQL Server 7.0. With DTS you can import data to SQL Server 7.0 from legacy databases using OLE DB.

An overview of DTS capabilities and features is provided here. For detailed information, see the SQL Server 7.0 product documentation.

If you are still using a pre-SP2 version of SNA Server 4.0, you can replicate DB2 data by using HDR. See "Replicating DB2 Tables by Using Host Data Replicator" later in this chapter.

Import, Export, and Transform Database Data

Either interactively or automatically, DTS simplifies the process of importing and transforming data from multiple, heterogeneous sources. It supports data lineage, making it easy to track the origin of data. In addition, DTS enables you to move and transform data to and from the following sources:

  • OLE DB providers for SQL Server 7.0 and others

  • ODBC data sources such as Microsoft® Access, Oracle, and DB2, by way of the OLE DB provider for ODBC

DTS provides the functionality to import, export, and transform data between Microsoft SQL Server 7.0 and any OLE DB format. Using DTS, it is possible to build data warehouses and data marts in SQL Server 7.0 by importing and transforming data from multiple heterogeneous sources on a regularly scheduled basis (requiring no user intervention).

DTS imports and exports data between applications by reading and writing data in a common format. For example, DTS can import data from an ASCII text file or an Oracle database into SQL Server 7.0. Alternatively, data can be exported from SQL Server 7.0 to an ODBC data source, or a Microsoft® Excel spreadsheet.

Note: DTS only moves schema and data between heterogeneous data sources. Triggers, stored procedures, rules, defaults, constraints, and user-defined data types are not converted between heterogeneous data sources.

Transformations are applied to source data before it is stored in its new destination. For example, DTS allows new values to be calculated from one or more source fields. It also permits a single field to be broken into multiple values that are stored in separate destination columns. Transformations make it easy to implement complex data validation, scrubbing, and enhancement during import and export.

DTS supports multistep packages, where multiple files can be processed separately, then brought together in a single, final step. Single records in a file can be broken up into multiple records in the destination, or multiple records in the source can be aggregated into single records in the destination.
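
Although most packages are created with the DTS wizards (described next), a package saved to SQL Server 7.0 can also be loaded and run from script through the DTS package object model. The following VBScript sketch is a minimal illustration; the server name, the package name, and the use of a trusted connection (storage flag 256) are assumptions to adjust for your installation.

  Dim oPackage
  Set oPackage = CreateObject("DTS.Package")

  ' Load a package previously saved to SQL Server; the empty strings
  ' are the unused user name, password, GUID, and version-GUID arguments.
  oPackage.LoadFromSQLServer "SQLSERVER01", "", "", 256, "", "", "", _
                             "ReplicateDB2Orders"

  ' Run all steps in the package, then release it.
  oPackage.Execute
  oPackage.UnInitialize
  Set oPackage = Nothing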

Creating DTS Packages with Wizards

DTS Export Wizard

DTS Export Wizard guides you through the process of creating DTS packages in order to export data from a SQL Server 7.0 database to heterogeneous data sources. Help is included with this wizard.

DTS Import Wizard

DTS Import Wizard guides you through the process of creating DTS packages in order to import heterogeneous data to a SQL Server 7.0 database. Help is included with this wizard.

Replicating DB2 Tables by Using Host Data Replicator

If you are still using a pre-SP2 version of SNA Server 4.0, you can replicate DB2 data by using HDR and IIS 5.0.

HDR is a database replication software product that copies predefined data from IBM DB2 database tables to SQL Server 7.0 database tables. It can do so on demand, at a single scheduled time, or according to a recurring schedule. HDR has the capability to reverse the process as well, replicating SQL Server 7.0 tables for use in a DB2 database.

HDR is composed of the data replicator service (a Windows 2000 operating system service) and Data Replicator Manager (a Windows 2000 operating system application for administration). The Data Replicator Manager has a user interface similar to those that appear in SQL Enterprise Manager and in the scheduling portions of SQL Executive.

Two-Way Replication

HDR performs full-refresh, two-way replication. A complete "snapshot" of the source table is copied into the target database table, by using either Bulk Copy Program (BCP) when copying to SQL Server 7.0, or ODBC "inserts" when copying to DB2. All of the records of the target table are overwritten each time replication occurs. Optionally, you can append data to the end of the existing table, provided you do not change the table schema.

Flexible Processing and Filtering

HDR flexibly reproduces host data in whole or in part according to selection criteria determined by the user:

  • Replication of selected columns ("vertical partitioning")

  • Replication of selected rows ("horizontal partitioning")

  • Replication of selected columns from selected rows (combined vertical and horizontal partitioning)

HDR can either replace entire tables or merge replicated data with existing tables that are located in the target database. Changes may be made in format or data values in several ways:

  • Construction of destination columns calculated ("derived") from source data

  • Use of SQL expressions in order to alter data in destination tables before or after replication

  • Change in column names, column data types, or column order between source and destination

Scheduling

HDR offers the ability to perform single replication on demand (repeatable at will or through a programmatic interface such as SP_RUNTASK). It also allows single or repeated replication at predetermined times.
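
For example, an on-demand replication defined as a task can be triggered from script. The sketch below uses ADO from VBScript to call SP_RUNTASK; the SQL Server name and the task name are placeholders, and the task itself is assumed to have been defined through Data Replicator Manager.

  Dim cnSQL
  Set cnSQL = CreateObject("ADODB.Connection")
  cnSQL.Open "Provider=SQLOLEDB;Data Source=SQLSERVER01;" & _
             "Initial Catalog=msdb;Integrated Security=SSPI"

  ' "HDR Nightly Refresh" is a placeholder task name.
  cnSQL.Execute "EXECUTE sp_runtask @name = 'HDR Nightly Refresh'"
  cnSQL.Close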

Statistics

Statistics gathered by HDR are available through the Data Replicator Manager or through the System Monitor in Windows 2000 Server. Statistics include the following:

  • Throughput for each replication operation

  • Number of bytes transferred for each replication operation

  • Elapsed time for each replication operation

Security

The Data Replicator Manager prompts the administrator to supply a valid SQL Server 7.0 account and password each time it establishes a connection to a Data Replicator Service. If a correct account and password are not provided, the Data Replicator Manager closes the connection, thus preventing administration of the associated service and its subscriptions. (A subscription refers to a replication operation involving one source table and one destination table. A single Data Replicator Service can handle many subscriptions.)

For DB2, during subscription setup, an administrator must supply a valid DB2 account and password. HDR also supports the Single Sign-On option of SNA Server 4.0 (introduced in version 3.0). Database administrators using HDR are not required to keep track of multiple passwords.

SQL Server 7.0 destination table ownership can be defined during subscription setup. Access to replicated data is then controlled through normal SQL Server 7.0 security measures.

Performance

Source table names can be filtered to reduce network traffic and improve performance during setup. This allows subscriptions to be updated in environments that have large numbers of possible source tables. Also, connections to source and destination servers can be pooled in order to avoid the performance costs of reestablishing connections unnecessarily. Pool sizes can be adjusted as needed.

The Data Replicator Service caches subscription information. By doing this, it avoids the performance costs of obtaining the information from the data replicator control database at each scheduled replication time.

HDR does not coordinate online database transactions. Thus, it is not a suitable tool for carrying out routine database updates.

Supported Platforms

HDR is supported on the following platforms:

  • SNA Server 4.0 or later

  • Microsoft® Windows NT Server 3.51 or later (Intel and Alpha-based)

  • Microsoft® SQL Server 6.5 or later

  • IBM DB2, including DB2 (MVS), DB2/VM (SQL/DS), DB2/400, and the common family (DB2/2, DB2/6000, and DB2 for Windows), using APPC

Migrating Transaction Processing

Component Services is a transaction management server that provides reliable, secure transaction management for Web applications. The following section provides information that can help you plan a migration of transaction-driven applications from any legacy environment to the Windows 2000 Server environment, where transactions requested by IIS 5.0 are managed by Component Services.

Why Use Transactions?

Two changes in the use of information technology make the increased deployment of transaction management systems compelling for many organizations:

  • The growing demand to use the Internet and intranets for exchanging secure information, including financial exchanges through online commerce

  • The increasing trend of running multiple reusable software components within one application, including components used to access databases

As stated, the explosive growth of the Internet and organizational intranets has presented new opportunities for doing business over data networks. The elemental expression of doing business—the exchange of money for goods or services—requires making updates to more than one database for each exchange.

In addition, software design is increasingly shifting toward a component model, in which applications are made up of many code segments operating independently of each other. An application often allows more than one component to concurrently update one or more databases. Concurrent updates require a transaction manager, in order to ensure transaction integrity while still optimizing performance.

For more information, see "Data Access and Transactions" in this book.

Migrating to Component Services

Since there is a growing need for transaction management on the Web, and since many transaction systems that exist on legacy networks process critical business data, more and more organizations need to manage transactions in both environments. A serious obstacle to achieving this is that legacy transaction systems do not extend across boundaries, such as the boundary between SNA legacy networks and TCP/IP–based intranets using Windows 2000 Server. In other words, a legacy transaction processing program running on CICS or IMS cannot track or verify a database update on the Windows 2000 Server network. Additionally, the costs of development, hosting, and scaling up are higher in the legacy environment than on the Windows 2000 Server network.

The best solution is a Windows 2000 Server–based transaction management system that coordinates Web transactions based on IIS 5.0 with legacy transaction processing programs. Any transaction can then involve updates of databases running on Windows 2000 Server, a mainframe transaction processing program, or both at the same time. Transaction processes can be selectively migrated to Windows 2000 Server–based database management software, such as SQL Server 7.0, and any transaction processes left on the mainframe can be managed from Windows 2000 Server as well.

Features and Capabilities

Component Services expands the capabilities of IIS 5.0 to include Web-based transaction management. Essentially, this is a component-based transaction management solution that provides a programming model, a run-time environment, and graphical server administration tools—everything required to design and develop a transaction application and migrate a legacy process to it. With Component Services, Web developers who use ASP pages can create full transaction management capabilities for deployment on the Web.

By deploying Component Services as your transaction management system, you can profit from the advantages of the Windows 2000 Server environment and migrate selected legacy transactional processes. At the same time, Component Services extends transaction management to include processes left running in the legacy environment.

In addition to providing full transaction monitoring and management, as well as a low cost of scaling up Windows 2000 Server–based systems and software, Component Services offers advantages over its mainframe-based counterparts in design time and at run time. Other advantages involve maintenance and administration.

Component Services Design Time

The Component Services programming model provides the framework to develop components that encapsulate business logic.

Component Services fits perfectly into a three-tier programming model; acting as middleware, it manages the components that make it possible for a Web application to handle many clients. It also provides developers with a great deal of flexibility:

  • The model emphasizes a logical architecture for applications, rather than a physical one. Any service can invoke any component.

  • Component Services connects requests for transactions (calls from scripts in ASP pages) to business logic and to database applications, so that you are not required to develop these processes.

  • The applications are distributed, which means you can run the right components in the right places. This benefits users and optimizes network and computer resources.

Three-Tier Applications and Middleware

A three-tier application divides a networked application into three logical areas. Middleware, such as Component Services, connects the three tiers: presentation, business logic, and data processing.

Tier 1 handles presentation. In a Web application, data is requested by the browser and is sent there from the Web server for display.

Tier 2 processes business logic (the set of rules for processing business information). In an IIS 5.0–based Web application, Tier 2 processing is carried out using components of IIS 5.0.

Tier 3 processes the data (the associated databases and files where the data is stored). In a Web application, Tier 3 consists of a back-end database management system (DBMS) or a file access system with its associated data.

Three-tier systems are easier to modify and maintain than two-tier systems because the programming of presentation, business logic, and data processing are separated by design. This architecture permits redevelopment to proceed in one tier, without affecting the others.

Middleware, such as Component Services, makes efficient use of resources so that Web application programmers can concentrate on business logic. Component Services can connect the browser request (Tier 1) to the business logic (Tier 2). In Tier 3, it can connect business logic to the databases and manage all activities of the transaction.

Development Efficiency

Application programming interfaces (APIs) and resource dispensers make applications scalable and robust. Resource dispensers are services that manage nondurable shared state on behalf of the application components within a process. This way, you don't have to undertake the traditional programming tasks associated with state maintenance.

Component Services works with any application development tool capable of producing COM-based dynamic-link libraries (DLLs). For example, you can use Visual Basic, Visual C++, Visual J++, or any other Microsoft® ActiveX® tool to develop Component Services applications.

Component Services is designed to work with a wide variety of resource managers, including relational database systems, file systems, and document storage systems. Developers and independent software vendors can select from a broad range of resource managers, and can use two or more within a single application.

The Component Services programming model simplifies migration, by making transaction application development easier and faster than traditional programming models allow. For more information about developing Component Services applications, see "Data Access and Transactions" in this book.

Component Services Run Time

The Component Services run-time environment is a second-tier platform for running COM components. This environment provides a comprehensive set of system services including:

  • Distributed transactions.

  • Automatic management of processes and threads.

  • Object instance and connection pool management, in order to improve the scalability and performance of applications.

  • A distributed security service that controls object invocation and use.

  • A graphical user interface (GUI) that supports system administration and component management.

This run-time infrastructure makes application development, deployment, and management much easier by making applications scalable and robust.

Overall performance is optimized by managing component instantiation and connection pooling. Component Services instantiates components just in time for transactions. It then purges state information from the instance when a transaction completes, and reuses the instance for the next transaction. For example, users can enter transaction requests from their browsers to an ASP page containing the code that is needed to call a COM component. As Component Services receives these messages, the transactions are managed using components already instantiated. This minimizes object instantiation and the number of connections required, both of which often inhibit the performance of systems that support transactions.

Component Services Administration Tools

Microsoft® Component Services Explorer is a graphical administration tool used to register, deploy, and manage components that execute in the Component Services run-time environment. With Component Services Explorer, you can script administration objects in order to automate component deployment.

Planning a Migration to Component Services

As you migrate processes and databases from a legacy environment (transaction processing programs running on CICS on a mainframe) to Component Services, the transaction processing programs will continue to run on the mainframe for a while. To migrate to Component Services:

  • Use Component Services and COMTI in order to extend transaction management so that it includes all the parts of each transaction. This involves updates that take place on databases running on Windows 2000 Server, as well as updates that take place on the mainframe.

  • Script the ASP pages so that IIS 5.0 calls the COM components that execute the transaction.

You can migrate parts of the legacy transaction infrastructure to SQL Server 7.0 and use Component Services to manage the parts of the transaction on the legacy host. IIS 5.0 can access all of the data by using scripts in ASP pages.

Mapping Transaction Tasks to Windows 2000 Server–Based Applications

Table 10.1 maps transaction-related functions to the applications used to support them in the Windows 2000 Server environment.

Table 10.1 Transaction-Related Functions and Windows 2000 Server–Based Applications

Transaction-Related Task                                        Windows 2000 Server–Based Application

Manage transactions                                             Component Services
Manage data resources                                           SQL Server 7.0
Call transactions from Web pages                                ASP
Connect to a legacy network                                     SNA Server 4.0 SP2
Extend transactions to legacy transaction processing programs   COMTI

Additional Resources

The following Web sites and books provide additional information about IIS 5.0 and about other features of Windows 2000 Server, as well as resources for accessing legacy applications and data.

https://www.microsoft.com/data

The Microsoft Universal Data Access Web site provides product information, technical documents, and application development resources involving OLE DB, ADO/RDS, ODBC, and more.

https://www.microsoft.com/hiserver/evaluation/previousversions/

The Microsoft® SNA Web site provides information about current and future releases of Microsoft SNA Server, COMTI for CICS and IMS, and the OLE DB provider for VSAM and AS/400. Features include downloadable evaluation copies of SNA Server 4.0 SP2 and related software. The site also offers planning, deployment, development, and support resources, training options, links to SNA specialists, and popular newsgroups of the SNA community.

https://msdn.microsoft.com/

MSDN Online offers up-to-date resources on all aspects of developing Web applications, including multitier ASP applications.

Books

Microsoft SNA Server 4.0 Resource Guide, 1997, Redmond: Microsoft Press.

This second volume of the Microsoft® BackOffice® Resource Kit provides the technical information and resources needed to deploy, support, and integrate SNA Server 4.0.

SNA Server 4.0 Software Product Documentation

For information about SNA Server 4.0, COMTI for CICS and IMS, and OLE DB provider for VSAM and AS/400, see the SDK documentation included with SNA Server 4.0 SP2.
