Chapter 9: Additional Migration Considerations

Migrating from a mainframe environment to a Windows environment can raise additional concerns for mainframe professionals. This chapter highlights technologies and architectures that have been used successfully to deliver IT services and that resolve or alleviate some of those concerns. It also points to existing documentation and prescriptive guidance that can be referenced to answer questions and to better understand how these applications and services are used with the Windows Server System.

On This Page

Migrating Data to the Windows Server System
Critical Mainframe Workload Examples

Migrating Data to the Windows Server System

A complete discussion of data migration is beyond the scope of this book. However, data migration is a critical topic that must be addressed at least briefly. In an application migration, data migration is often overlooked in favor of code conversion, but without access to accurate data in the new environment, no application migration can be considered successful.

There are as many ways to migrate data as there are ways to migrate application code. Some of them do not involve actual movement or change to the data, but merely establish a way for the migrated application to access the required data that is still in its original repository.

Data Migration versus Interconnection

Instead of migrating an application’s data from one platform or DBMS to another, it is sometimes advisable to establish an “interconnection” for the migrated application to access its data in its existing location, without actually migrating the data itself. This may be a useful strategy when:

  • Multiple applications must access the data, and not all of them will be migrated.

  • Insufficient time is available to migrate the data.

  • Different accounts own the application and the data.

Note: Interconnection, rather than migration, can be either a permanent end-state architecture or a phase in the overall project.

Data Interconnection Techniques

Microsoft Host Integration Server (HIS) provides the tools and guidance that allow migrated applications to interact with mainframe environments. Various interconnection techniques are available using HIS and the Windows Server environment in general. They can be broken down into the following approaches, each with its strengths and weaknesses:

  1. Use the .NET Framework, specifically ADO.NET, to access the “old” datastore programmatically from the migrated application.

  2. Use non-.NET features, such as ADO, ODBC, or OLE DB, to access the “old” datastore programmatically from the migrated application.

  3. Publish the contents of the “old” datastore through a purpose-built interface application.

The best overall approach is probably the first, using the .NET Framework, because ADO.NET offers a rich interconnection that is virtually identical to accessing a SQL Server database (at least in the case of DB2 or VSAM) and is therefore the most transparent to the migrated application in the Windows Server environment. In addition, ADO.NET provides a disconnected access model that supports almost any type of application architecture, including occasionally connected mobile applications.
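
For example, a migrated application might use ADO.NET with the OLE DB provider for DB2 that ships with Host Integration Server to pull host data into a disconnected DataSet. The following C# sketch is illustrative only: the provider name, connection keywords, and the CUSTOMER table are assumptions that should be confirmed against the HIS documentation and the actual host schema.

    using System;
    using System.Data;
    using System.Data.OleDb;

    class CustomerLookup
    {
        static void Main()
        {
            // Connection string for the HIS OLE DB Provider for DB2. The provider
            // name and keywords shown here are placeholders; confirm them against
            // the documentation for the installed version of HIS.
            string connStr = "Provider=DB2OLEDB;" +
                             "Network Address=MAINFRAME01;" +
                             "Initial Catalog=SAMPLEDB;" +
                             "User ID=APPUSER;Password=********;";

            using (OleDbConnection conn = new OleDbConnection(connStr))
            {
                // Hypothetical host table and column names.
                OleDbDataAdapter adapter = new OleDbDataAdapter(
                    "SELECT CUSTNO, CUSTNAME, BALANCE FROM CUSTOMER", conn);

                // Fill a disconnected DataSet; the adapter opens and closes the
                // connection, so the application can work with the rows offline.
                DataSet ds = new DataSet();
                adapter.Fill(ds, "Customer");

                foreach (DataRow row in ds.Tables["Customer"].Rows)
                {
                    Console.WriteLine("{0}  {1}  {2}",
                        row["CUSTNO"], row["CUSTNAME"], row["BALANCE"]);
                }
            }
        }
    }

Because the DataSet is disconnected, the application can continue to work with the rows, and later submit changes, without holding a connection to the host open.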

If .NET is not the target platform of the migration, selecting from among the other options depends on the target platform standards and an in-depth understanding of how the application and the datastore interact. Generally, the “publish” approach should be reserved for situations where access is limited to a small set of low-complexity transactions rather than the rich variety of complex ones found in ad hoc query workloads. As for performance, if the transaction set is limited and the transaction volume is fairly predictable, a published interface can handle high volumes, but only through careful design. For a more complex and diverse transaction set with highly variable load cycles, the “publish” approach is probably not suitable.

Data Migration Techniques

Far too many data migration techniques are available to cover them comprehensively here. Perhaps the most promising are those provided by the Data Warehousing capabilities of SQL Server, which include sophisticated tools for:

  • Data Analysis

  • Data Extraction

  • Data Transformation

  • Data Loading

Because the typical Data Warehousing solution must draw data from multiple diverse application datastores, the analysis and extraction tools, in particular, can be used against a wide variety of mainframe databases and file systems.
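
As a simple illustration of the extract-and-load pattern that such tools automate, the following C# sketch copies rows from a host datastore (again through the OLE DB provider for DB2) into a SQL Server staging table. All server, table, and column names are hypothetical; in practice, SQL Server 2000's Data Transformation Services (DTS) would normally perform this work rather than hand-written code.

    using System;
    using System.Data;
    using System.Data.OleDb;
    using System.Data.SqlClient;

    class ExtractAndLoad
    {
        static void Main()
        {
            // Source: host datastore via the HIS OLE DB provider (placeholder names).
            string sourceConn = "Provider=DB2OLEDB;Network Address=MAINFRAME01;" +
                                "Initial Catalog=SAMPLEDB;User ID=APPUSER;Password=********;";
            // Target: SQL Server staging database (placeholder names).
            string targetConn = "Server=SQLSTAGE01;Database=Staging;Integrated Security=SSPI;";

            using (OleDbConnection src = new OleDbConnection(sourceConn))
            using (SqlConnection dest = new SqlConnection(targetConn))
            {
                src.Open();
                dest.Open();

                // Extract: stream the source rows with a forward-only reader.
                OleDbCommand extract = new OleDbCommand(
                    "SELECT CUSTNO, CUSTNAME, BALANCE FROM CUSTOMER", src);

                // Load: parameterized insert into the staging table.
                SqlCommand load = new SqlCommand(
                    "INSERT INTO CustomerStage (CustNo, CustName, Balance) " +
                    "VALUES (@no, @name, @bal)", dest);
                load.Parameters.Add("@no", SqlDbType.Int);
                load.Parameters.Add("@name", SqlDbType.NVarChar, 60);
                load.Parameters.Add("@bal", SqlDbType.Decimal);

                using (OleDbDataReader reader = extract.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Any transformation (code pages, dates, packed decimals) goes here.
                        load.Parameters["@no"].Value   = reader["CUSTNO"];
                        load.Parameters["@name"].Value = reader["CUSTNAME"];
                        load.Parameters["@bal"].Value  = reader["BALANCE"];
                        load.ExecuteNonQuery();
                    }
                }
            }
        }
    }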

Critical Mainframe Workload Examples

The following are examples of mainframe workloads that may need to be migrated to the Windows environment, including OLTP applications, business intelligence (BI) capabilities, Web-enabled mainframe applications, and large batch jobs.

Moving Mission-Critical OLTP to Windows

OLTP applications are the applications that many medium-sized and large enterprises rely on to support their core lines of business. Consistently supporting the demanding service levels of these applications is a large contributor to the mainframe’s solid reputation.

It is important for a business not only to have stable and reliable OLTP, but also to be able to deliver it flexibly to new customers through new channels, and at a competitive price.

The Windows Server System components that support a high-performance OLTP solution include:

  • Windows Server 2003 Datacenter Edition provides a high-availability operating system.

  • Microsoft Application Center 2000 is used for high-performance and high-availability clustering.

  • Microsoft SQL Server 2000 is used for relational database transaction processing and reporting, and for business intelligence.

  • Internet Information Services (IIS) is used to present the OLTP application on the Web and to allow customers to complete transactions with a Web browser.

  • Microsoft Internet Security and Acceleration (ISA) Server is used to manage the Internet connection, ensuring security and consistent performance under varying transaction loads.

  • Microsoft Content Management Server is used to manage the product catalogs and other content that customers need to make buying decisions.

Building a Business Intelligence (BI) Capability

Many organizations need to understand their markets better in order to refine product lines or to evaluate the business case for entering new markets with new products. Having operated for many years, these businesses have accumulated detailed records of their day-to-day transactions. The challenge is to properly analyze and use the available information.

Businesses can take advantage of available Microsoft technologies and best practices to create an optimal BI solution that extracts valuable information from data, monitors customer and market trends, and analyzes the effectiveness of current business strategies and plans.

Microsoft Guidance and Products for BI

Developed by Microsoft and Symmetry Corporation, SQL Server Accelerator for BI reduces the cost and time required to build and deploy a customized BI solution by implementing existing best practices, enabling partners and Microsoft Consulting Services (MCS) to deliver solid custom applications. SQL Server Accelerator for BI includes a rapid development tool, data models, and templates for deploying a solution.

For more information on SQL Server Accelerator for BI, refer to:

https://www.microsoft.com/sql/ssabi/

Other Vendor Guidance and Products

SQL Server Accelerator for BI partners have the knowledge and skills necessary to implement a BI solution to suit the individual needs of any organization, regardless of size or complexity.

For a list of partners and other vendor guidance available for SQL Server Accelerator for BI, refer to:

https://www.microsoft.com/sql/ssabi/partners/default.asp

Additional vendor products include:

Web-Enabling an Existing Mainframe Application

In many migration situations, the default channel for user interaction with the application changes from standard green screen mainframe terminals to desktop workstations or personal computers. In the past, the most likely software running on that desktop would have been a fat client, a program that sends query and update transactions to a central database, and receives one or more rows of data in return for display and user modification.

However, fat clients have gradually fallen from favor as the benefits of thin clients based on Web browser technology have become more apparent. The advantage of the thin client is that it is architecturally closer to the legacy green screen than a fat client is. The disadvantage is that when an application is Web-enabled with a thin client, it must be modified so that the state of the user interaction can be preserved between requests.

Legacy terminal applications maintain state because their programs usually run under special teleprocessing monitor software that preserves an in-memory “image” of the cumulative result of the user’s interactions up to the current point in the session. The monitor software can also set context variables that tell the application about the history or progress of the user’s interactive session.

Replacing a green screen with a Web browser usually requires that the Web pages served include cookies, set by the server, that record the state of the interaction. The cookies are set as part of the definition of the outgoing Web page, and when the user’s browser returns them, the central application can test them in lieu of keeping its own context variables.
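
As an illustration, an ASP.NET page that replaces a green-screen transaction might carry its conversation state in a cookie, as in the following C# sketch. The page, cookie, and step names are hypothetical; ASP.NET session state is a common alternative to managing cookies directly.

    using System;
    using System.Web;
    using System.Web.UI;

    // Code-behind for a hypothetical order-entry page. The cookie carries the
    // context variable that the teleprocessing monitor would have kept in memory.
    public class OrderEntryPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Read the conversation state returned by the browser, if any.
            HttpCookie state = Request.Cookies["OrderStep"];
            string currentStep = (state != null) ? state.Value : "SelectCustomer";

            // Route the request much as the mainframe application would have
            // tested its context variable, then record the next step in a
            // cookie on the outgoing response.
            string nextStep = (currentStep == "SelectCustomer")
                ? "EnterLineItems" : "ConfirmOrder";
            Response.Cookies.Add(new HttpCookie("OrderStep", nextStep));
        }
    }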

Another form of Web-enablement is deploying an existing application’s capabilities as a Web Service. Web Services are not designed primarily to be consumed by human users, although in principle there is no reason they cannot be. Rather, they are designed to allow their capabilities to be used by other applications. Credit-card validation is an example: where once a green-screen user may have typed a credit-card number on a terminal keyboard, now a Web site somewhere on the Internet sends the credit-card number as part of a SOAP message, and a Web Service issues a query against the same database used before to validate the card number.
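
The following C# sketch suggests what such a service might look like as an ASP.NET (.asmx) Web Service; the service, database, table, and column names are hypothetical, and the SOAP plumbing is generated by the .NET Framework.

    using System.Data;
    using System.Data.SqlClient;
    using System.Web.Services;

    // A hypothetical Web Service exposing the same validation logic that the
    // green-screen transaction performed; names are illustrative only.
    [WebService(Namespace = "http://example.org/payments/")]
    public class CardValidationService : WebService
    {
        [WebMethod]
        public bool ValidateCard(string cardNumber)
        {
            string connStr = "Server=SQLOLTP01;Database=Payments;Integrated Security=SSPI;";
            using (SqlConnection conn = new SqlConnection(connStr))
            {
                // Query the same table the legacy transaction used (hypothetical schema).
                SqlCommand cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM ValidCards WHERE CardNumber = @card", conn);
                cmd.Parameters.Add("@card", SqlDbType.VarChar, 19).Value = cardNumber;

                conn.Open();
                int matches = (int)cmd.ExecuteScalar();
                return matches > 0;
            }
        }
    }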

Processing Large Batch Jobs with Windows

Most legacy applications involve some form of batch processing, and the duration of that processing — the length of the batch “run” — may be problematic when managing the application.

The batch window is traditionally the time frame in which batch processing can occur without interfering with online transaction processing. In today’s business world, it is increasingly common for businesses to operate their OLTP systems over extended hours, usually because they are receiving orders from multiple time zones, or perhaps over the Web from anywhere in the world at any time of day.

At the same time, most businesses hope to increase their volume of orders, which can lengthen batch runs proportionally, and in some cases disproportionately. So the duration of batch runs tends to increase as the volume of business grows, while the batch window tends to shrink as the business’s operating territory expands geographically.

Because batch runs are often isolated logically from online operations, migrating a batch work stream from a mainframe to the Windows platform can be an architecturally simple solution. Even if the online processing remains on a mainframe, the net effect can be dramatic savings because the organization can afford to purchase and operate a powerful Windows Server System that can process the batch work in less time than the mainframe. This may remove the need for an expensive mainframe upgrade or repurchase, with the attendant license cost increases that can ripple through the entire mainframe infrastructure.

Three basic challenges confront the migration of batch work streams:

  1. Migrating the complex JCL that is used to define the batch job stream in terms of its sequence of programs and I/O environment.

  2. Making provision for the high-volume database access that is usually required.

  3. Replacing the functionality of the reports that are generated.

JCL migration is usually accomplished by one of two strategies:

  • JCL conversion, in which a one-time translation of the JCL occurs, usually into a script language such as Windows script, Perl, or VBScript (the same sequencing logic is sketched after this list).

  • Utilizing non-Microsoft products that enable existing mainframe JCL to be parsed, submitted and executed on Windows Server to control batch processes.
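
To illustrate the first strategy, the following sketch shows the kind of sequencing logic a converted job stream contains, written here as a small C# console program rather than a script for the sake of a single, self-contained example. The step programs, file names, and condition-code thresholds are hypothetical.

    using System;
    using System.Diagnostics;

    // Illustrative replacement for a two-step JCL job stream: each EXEC PGM=
    // statement becomes a process launch, and JCL condition-code checks become
    // tests on the exit code.
    class NightlyBatchJob
    {
        static int RunStep(string program, string arguments)
        {
            ProcessStartInfo info = new ProcessStartInfo(program, arguments);
            info.UseShellExecute = false;

            using (Process step = Process.Start(info))
            {
                step.WaitForExit();
                Console.WriteLine("{0} ended with return code {1}", program, step.ExitCode);
                return step.ExitCode;
            }
        }

        static int Main()
        {
            // STEP010: extract the day's transactions.
            int rc = RunStep(@"C:\Batch\ExtractTrans.exe", @"/in daily.dat /out trans.tmp");
            if (rc > 4)   // roughly equivalent to COND=(4,LT) on the next step
            {
                Console.WriteLine("STEP010 failed; job ended.");
                return rc;
            }

            // STEP020: post the transactions to the database.
            rc = RunStep(@"C:\Batch\PostTrans.exe", @"/in trans.tmp");
            return rc;
        }
    }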

Typically, the reporting or consolidation done in a batch job stream requires that a database be read from end to end. If the database was designed for optimal OLTP performance, this may not perform well without careful tuning and the use of special access techniques.
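
One simple technique on the SQL Server side is to stream such a scan through a forward-only data reader and accumulate results as rows arrive, rather than materializing the entire result set in memory. The following C# sketch totals a balance column this way; the server, table, and column names are hypothetical, and a real solution might also rely on covering indexes or a separate reporting copy of the database.

    using System;
    using System.Data.SqlClient;

    // End-to-end scan of a large table using a forward-only, read-only reader.
    class AccountScan
    {
        static void Main()
        {
            string connStr = "Server=SQLOLTP01;Database=Ledger;Integrated Security=SSPI;";
            using (SqlConnection conn = new SqlConnection(connStr))
            {
                SqlCommand cmd = new SqlCommand(
                    "SELECT AcctNo, Balance FROM Accounts ORDER BY AcctNo", conn);
                conn.Open();

                decimal total = 0;
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        total += reader.GetDecimal(1);   // running consolidation
                    }
                }
                Console.WriteLine("Total balance: {0}", total);
            }
        }
    }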

Often, the suite of reports generated by a batch run is an eclectic mix produced by in-house COBOL programs, 4GL reporting tools, and perhaps obsolete mainframe-only, off-the-shelf reporting tools. In-house programs can be converted as part of the application migration or rewritten in a modern 4GL. Inevitably, some reports will be problematic and cannot easily be reproduced without rewriting them with a new reporting tool.