Chapter 4: Attaining High Performance and Scalability

Performance can refer to almost any parameter of the computing environment that can be measured or is the subject of a Service Level Agreement. Thus reliability, security, and manageability could all be considered aspects of performance, but are discussed elsewhere in this book. For the purposes of this chapter, performance refers to the speed with which work can be accomplished by applications and the volume of work that can be accomplished within a given timeframe.

Performance includes not only high workload processing capacity, but the capability to increase that capacity without significantly redesigning the solution. Capacity to process additional workloads typically requires computing resources such as servers and storage, as well as the capability to support more human or automated clients. These aspects of performance are often referred to as scalability.


Mainframe Performance Overview

Much of the perceived high value of the mainframe is driven by its capability to meet the requirements of many diverse workloads, allowing an expensive resource to be shared among many consumers in the hopes of benefiting all. Maintaining the balance between high optimization and broad applicability is the essence of mainframe performance management.

For example, an OLTP application with hundreds of active users may require very high communications bandwidth from the end-user terminal through to the DBMS that executes a data query or update. This is addressed by communications channel controllers and network switches that are actually specialized processors. However, the application may place relatively low demands on central computing resources because the database schema will have been optimized for the most common data manipulations.

By contrast, a business intelligence application may be much less predictable in terms of data manipulations, because the transactions are much larger and require much more computing resources than OLTP.

A report stream that is run overnight is an example of a situation where large amounts of information must be processed and equally vast amounts of output are generated, typically with very rigid sequencing dependencies. The rate of completion is at a premium because such en masse access often must be made overnight when the database is otherwise inactive. In a business that spans several time zones, overnight may be compressed into only a few hours.

The preceding examples are a sampling of the different types of work a mainframe must be capable of accomplishing. Additionally, a mainframe OS must be optimized for the allocation of resources to the different types of "consumer" workloads.

Managing the performance of the mainframe requires highly skilled, and therefore highly compensated, individuals. Although mainframe OSs can support a wide variety of configurations that conform to almost any conceivable workload, the mainframe is a centralized resource that must respond to many different workload profiles, so a hands-on approach to managing that adaptation is usually required.

A variety of software tools are available to automate the process of managing mainframe performance, but these are typically options purchased separately from independent software vendors. In addition to the up-front costs, each incurs a training overhead, and if the tools are acquired from the mainframe vendor, a recurring charge for use.

Maintaining peak mainframe performance can be a daunting task. The need to ensure that resources are widely shared can conflict with the desire to provide optimal service to each customer workload. Also, because some of a shared mainframe's power is consumed by the need to optimize multiple diverse workloads, business growth can push a given mainframe configuration to the point of diminishing returns. This often results in required — and expensive — upgrades.

Finally, the add-on nature of many performance-enhancing and performance management features leads to complexity and increased costs. These costs are often not only recurring, but can also increase each time a system upgrade is required.

Windows Server 2003 Performance

Each edition of Windows Server 2003 supports the indicated performance-related attributes:

  • Windows Server 2003, Standard Edition: up to 4 GB RAM, and support for up to 4-way symmetric multiprocessing (SMP)

  • Windows Server 2003, Enterprise Edition: up to 32 GB RAM, and support for up to 8-way SMP, optional support for 64-bit Itanium processors

  • Windows Server 2003, Datacenter Edition: up to 512 GB RAM, and support for up to 64-way SMP, optional 64-bit support

  • Windows Server 2003, Web Edition: up to 2 GB RAM, and support for up to 2-way SMP

The principles of Windows Server System performance management are not greatly different from those of the mainframe. Compromises between optimized utilization and high service levels are required on any computing platform. The Windows Server System's capability to add incremental processing power is very granular and relatively inexpensive.

Increased power is available either by building up (acquiring a more powerful Windows Server System) or by building out (adding parallel systems). This applies not only to application processing, but also to database serving and utility applications, such as firewall and proxy serving.

Options to build up and build out exist for the mainframe environment, but are still subject to the complex and expensive pricing policies of the single vendor market. In contrast, Windows Server System vendors must price their upgrade options aggressively. If an offering is not the most cost-effective on the market, another vendor is always ready and able to provide a compatible solution.

From the point of view of performance management, this is an ideal situation for the customer. A Windows Server System vendor has little capability to lock in a customer by making alternate possibilities economically unattractive. The only account control strategy available to a Windows Server System vendor is consistently delivering excellent products and services at competitive prices.

Although Windows Server 2003 is well suited to perform out-of-the-box for most customer workloads, it is possible to tune server settings. Tuning can result in incremental performance gains, especially when the nature of the workload will not vary much over time. Tunable parameters available to optimize the performance of the Windows Server System include file serving, networking, storage, and Web serving.
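
Many of these parameters are exposed through the Windows registry. The following minimal sketch, written in Python purely for illustration, reads one representative TCP/IP tuning value using the standard winreg module; the TcpWindowSize value is an era-appropriate example and may not be explicitly set on every system.

    # Sketch: inspect a representative Windows tuning parameter via the
    # registry. Requires Python's standard winreg module (Windows only).
    # TcpWindowSize is an illustrative TCP/IP tuning value; if it is not
    # explicitly set, the OS default applies.
    import winreg

    TCPIP_PARAMS = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

    def read_tuning_value(value_name):
        """Return a value from the TCP/IP Parameters key, or None if unset."""
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, TCPIP_PARAMS) as key:
                value, _value_type = winreg.QueryValueEx(key, value_name)
                return value
        except FileNotFoundError:
            return None  # parameter not explicitly set

    size = read_tuning_value("TcpWindowSize")
    print("TcpWindowSize:", "(OS default)" if size is None else size)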

Processing Scalability

Windows Server 2003 also dramatically improves scalability on large enterprise-class multiprocessor systems. Significant improvements have been made in scalability on large x86-based and 64-bit systems with eight or more processors. A number of different workloads have been used to analyze scalability, such as the Transaction Processing Performance Council's TPC-C benchmark and the SAP Sales and Distribution workload. In addition, the scalability of several other Windows Server 2003 features and components, such as IIS, Active Directory, and various networking components, has been improved.

Supporting Large Numbers of Users

Although Windows was originally designed as an OS for personal computers, Windows Server 2003 is an enterprise-class server OS. As evidenced at countless customer sites worldwide, Windows Server 2003 is capable of supporting thousands of concurrent users.

Even the largest mainframes rarely service thousands of users with a single processor. Often, multiple processors, processor groups, and complexes are used. In addition, channel processors are configured to offload terminal handling from the central processor complexes. For very large and dispersed application user bases, multiple processor complexes and multiple instances of the database may even be geographically dispersed.

Windows Server provides hardware system vendors with the ability to use the same techniques and strategies to both build up to more powerful multiprocessor servers and build out to use multiple parallel servers. Windows Server 2003 currently supports up to 64-way SMP, and also supports Network Load Balancing (NLB), which automatically balances incoming Internet traffic across servers in a cluster.
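
The precise NLB filtering algorithm is internal to Windows, but the general technique can be illustrated conceptually: every host in the cluster applies the same deterministic hash to the client address and accepts only the traffic that maps to it, so no central dispatcher is required. The following Python sketch illustrates that idea only; it is not the actual NLB implementation.

    # Conceptual sketch of affinity-preserving load distribution: each
    # cluster host independently hashes the client address and handles a
    # request only if the hash maps to its own index. Illustrative only;
    # the real NLB algorithm is proprietary.
    import hashlib

    def owning_host(client_ip: str, host_count: int) -> int:
        """Map a client IP to one host index, identically on every host."""
        digest = hashlib.md5(client_ip.encode("ascii")).digest()
        return int.from_bytes(digest[:4], "big") % host_count

    HOSTS = 4
    for ip in ("192.0.2.10", "192.0.2.11", "198.51.100.7"):
        print(ip, "-> host", owning_host(ip, HOSTS))

Because the hash is deterministic, repeated requests from the same client always land on the same host, which is how client affinity can be preserved without any shared state among the hosts.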

As of October 2004, the Transaction Processing Performance Council recognizes system configurations from Dell, Hewlett-Packard, IBM, and RackSaver among its ten best-rated systems in terms of cost per transaction per second. The average user population represented in these benchmarks is approximately 24,400, with the high being approximately 35,000 users and the low being approximately 17,200 users. Nine of the ten systems use Windows Server 2003 as their OS.

For more information on TPC-C benchmarks on cost per transaction per second, refer to:

https://www.tpc.org/tpcc/results/tpcc_price_perf_results.asp

Windows Server with the .NET Framework can support a multitude of users and also offers many options for how users connect to computing resources, including:

  • Standard green-screen terminal emulation

  • Windows Desktop Terminal Services

  • Internet browser connectivity

  • Client-server connectivity

  • ActiveX® Data Objects (ADO) .NET connectivity

  • Simple Object Access Protocol (SOAP) connectivity to Web Services (a minimal request sketch follows this list)
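
To illustrate the last option, the following minimal Python sketch posts a SOAP 1.1 request over HTTP using only the standard library. The endpoint, SOAPAction header, and message body are hypothetical placeholders; a real Web service defines these in its WSDL.

    # Minimal SOAP 1.1 request sketch using only the standard library.
    # The host, path, action, and body are hypothetical placeholders.
    import http.client

    ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetOrderStatus xmlns="http://example.com/orders">
          <OrderId>12345</OrderId>
        </GetOrderStatus>
      </soap:Body>
    </soap:Envelope>"""

    conn = http.client.HTTPConnection("example.com")
    conn.request(
        "POST",
        "/orders/service.asmx",
        body=ENVELOPE.encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": '"http://example.com/orders/GetOrderStatus"',
        },
    )
    response = conn.getresponse()
    print(response.status, response.read()[:200])
    conn.close()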

Batch Processing

When the Windows Desktop was initially developed, online applications were relatively common on mainframes, but the typical online application only allowed the user to navigate through a tightly scripted set of functions. The heavy lifting in many large mainframe applications was, and is often still, accomplished through batch processes. The Windows Desktop did not change computing so much as it introduced an alternate computing experience, where computer technology could be controlled and afforded by virtually anyone.

Until recently, few commercial systems possessed the raw computing power necessary to run interactive visual interfaces as well as perform useful data-processing work. Mainframe designers accommodated this reality by focusing on data processing-related attributes, while other devices, such as terminals and eventually personal computers, were intended to interact with the user.

Today's microprocessors are far more powerful, and many desktop computers can easily deliver more computing power than the mainframe of only a few years ago. The key components necessary for batch processing are available in today's Windows Server and the servers that it supports within the Windows Server System:

  • 3+ GHz processor speeds, with 32-bit and 64-bit data paths and 8-way parallelism

  • Multi-gigabyte RAM with high-speed access times

  • Disk storage subsystems that implement high-bandwidth data transfer and are essentially identical to mainframe-class storage equipment

  • DBMSs that are equally adept at bulk, high-rate read/write operations and random access

  • Scripting and scheduling software that can either emulate Job Control Language (JCL) or replace it with greater functionality and ease of use (a minimal sequencing sketch follows this list)

  • Extensive non-Microsoft software products to support business operations without the need to build from scratch
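
As an illustration of the scripting and scheduling capability noted above, the following minimal Python sketch sequences batch job steps and, in the spirit of a JCL COND test, stops the stream when a step fails. The step commands are hypothetical placeholders standing in for real job steps.

    # Sketch of JCL-style step sequencing: run job steps in order and,
    # like a JCL COND test, abort the stream when a prior step fails.
    # The commands are hypothetical placeholders.
    import subprocess
    import sys

    STEPS = [
        ("EXTRACT", ["python", "extract.py"]),
        ("TRANSFORM", ["python", "transform.py"]),
        ("REPORT", ["python", "report.py"]),
    ]

    def run_jobstream(steps):
        for name, command in steps:
            print(f"Step {name}: starting")
            result = subprocess.run(command)
            if result.returncode != 0:
                print(f"Step {name}: failed (rc={result.returncode}); aborting")
                return result.returncode
            print(f"Step {name}: completed (rc=0)")
        return 0

    sys.exit(run_jobstream(STEPS))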

The experience of many organizations that migrated batch processing to Windows Server is that their batch windows actually became less constrained, taking the pressure off many other jobstreams that might depend on the migrated one.

Thus, using the cost-effective and powerful Windows Server as a batch engine on a critical-path jobstream can have economic benefits that ripple throughout the entire processing environment. For example, a mainframe processor may no longer require an upgrade because a precursor job was migrated and now executes in a shorter period of time, restoring the batch window to its previous size.

Web Application Performance

Software scaling is a technique used to increase the capacity of an application by adding servers. While hardware scaling requires specialized servers, software scaling can be achieved using standard off-the-shelf servers. With software scaling, the relationship of cost to added capacity is almost linear.
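
A toy model makes the near-linear relationship concrete. The unit cost and per-server capacity in the Python sketch below are invented purely for illustration; the point is that doubling the target load roughly doubles the cost, with no step change from having to replace one large machine with a larger one.

    # Toy model of the near-linear cost/capacity relationship of software
    # scaling (scale-out). The unit price and per-server capacity are
    # invented purely for illustration.
    SERVER_COST = 5_000          # hypothetical cost per commodity server
    REQUESTS_PER_SERVER = 1_000  # hypothetical sustained requests/sec each

    def scale_out_cost(target_rps: int) -> int:
        """Total server cost for a target load at a fixed unit price."""
        servers = -(-target_rps // REQUESTS_PER_SERVER)  # ceiling division
        return servers * SERVER_COST

    for target in (1_000, 4_000, 8_000, 16_000):
        print(f"{target:>6} req/s -> ${scale_out_cost(target):,}")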

Microsoft Application Center simplifies software scaling by using clustering. Traditionally, software scaling has been associated with a high cost in complexity and resources to get applications to run on multiple servers as a unified resource. Application Center eliminates these barriers by making the creation and operation of a group of servers as simple as operating a single server.

Application Center offers the benefits of software scaling to existing applications without requiring modifications or rewrites because it uses no new application programming interfaces (APIs).

For more information on Application Center, refer to:

https://www.microsoft.com/applicationcenter/

Portal and e-Business Performance

Portals and e-Business sites require a number of specialized user capabilities in addition to reliability and availability. Microsoft Commerce Server delivers the high performance, scalability, and proven reliability required by mission-critical solutions. Commerce Server provides a powerful set of capabilities for non-transaction-based sites, including user profiling, content targeting, multiple-language support, and advanced business analytics. Features also extend to transaction-based sites with capabilities for catalog management, order processing, and merchandising.

For more information on Commerce Server, refer to:

https://www.microsoft.com/commerceserver/

Host-Integrated Application Performance

Often the best migration path for an application is to use the Windows Server System to support new capabilities, while leaving components of the application that do not require immediate improvement on the mainframe. Microsoft Host Integration Server (HIS) is the successor to Microsoft SNA Server, a product that set the standard for connectivity between the mainframe and the workstation. HIS works with a wide variety of network protocols and network service types for maximum networking flexibility, including Microsoft ActiveX-enabled, Web-deployable 3270 and 5250 clients.

HIS supports up to 30,000 simultaneous host sessions per server, and utilizes enhancements such as Microsoft Message Queue (MSMQ), COM+, and Microsoft Application Center services.

For more information on Microsoft Host Integration Server, refer to:

https://www.microsoft.com/hiserver/

Database OLTP and OLAP Performance

Microsoft SQL Server 2000 is a fully Web-enabled RDBMS that supports some of the largest and highest-performing data-intensive OLTP applications in the world.

Three aspects of SQL Server performance are worth examining when considering a migration from the mainframe to the Windows Server System:

  • Transaction throughput

  • Very Large Databases (VLDB)

  • Large numbers of concurrent users

For example, one market research firm used a VLDB to support the consumer packaged goods and healthcare industries. One service offered was an OLAP tool that allowed clients to fully analyze market data. Users were able to access advanced tools such as decomposition trees and perception charts, and could turn raw business intelligence into charts, graphs, or reports using familiar tools in Microsoft Office. This 7-terabyte analytical database grew over time to 30 terabytes, at the rate of 500 million new rows per week. SQL Server 2000 could deliver query results anywhere from 3 to 360 times faster than the previous solution, and reduced TCO by supporting 10 times as many customers per server.

For more information on this example of a VLDB, refer to:

https://www.microsoft.com/resources/casestudies/CaseStudy.asp?CaseStudyID=13929

In another example, a major chemical manufacturer reevaluated the capability of its business systems to handle nearly three times the number of users and related growth in its database. To accommodate that growth and increase system availability and performance, the organization decided to upgrade its SAP database to Microsoft SQL Server. The results were a significant decrease in the time necessary to complete tasks such as batch processing, backup, and report generation. These performance increases were important as the organization's user base continued to grow. With approximately 5,000 total users, the system typically had approximately 800 people signed on at any given time, generating a typical transaction rate of 450,000 dialog steps per day and a peak transaction rate of almost 600,000 dialog steps per day.

For more information on this example of Microsoft SQL Server performance, refer to:

https://www.microsoft.com/resources/casestudies/CaseStudy.asp?CaseStudyID=12112

For more information on Microsoft SQL Server, refer to:

https://www.microsoft.com/sql/

CICS Functionality

CICS is a transaction processing monitor program from IBM. CICS provides two basic areas of functionality to mainframe applications:

  1. Terminal screen handling, or Basic Mapping Support (BMS), allows standard green-screen 3270-type terminals to display screens of information and return user-modified fields within the screen to the program.

  2. Transaction support allows application programs to make updates to database data and other application functions using a transactional approach. Transactions are used to ensure that either 100 percent of the operations in a complex update occur, or none of them occur. This allows applications to ensure that a database will never be left in an inconsistent state from which the application cannot recover.

Windows Server System has no direct product equivalent to CICS because the major functions of CICS are built into the OS and the .NET Framework.

Windows GUI applications built with Visual Studio can interact with the user through visual objects (controls) that are part of the programming environment. In those situations where terminal-type access is preferred, Windows Terminal Server presents a remote Windows desktop to the user who interacts with it as if it were local.

Transactions are supported in two ways in the Windows Server System:

  1. Microsoft ADO, Object Linking and Embedding-Database (OLE-DB), Open Database Connectivity (ODBC), and MSMQ APIs enable manual transaction processing. In a manual transaction, a program explicitly begins a transaction, controls each connection and resource used within the transaction boundary, determines the outcome of the transaction (commit or abort), and ends the transaction (a minimal sketch of this model follows this discussion).

  2. Microsoft Transaction Server (MTS), COM+, and the .NET Framework’s Common Language Runtime support an automatic distributed transaction model. After an ASP.NET page, XML Web service method, or .NET Framework class is marked to participate in a transaction, it automatically executes within the scope of a transaction. A physical transaction occurs when a transactional object accesses a data resource such as a database or message queue. The transaction associated with the object automatically flows to the appropriate resource manager, which looks up the transaction in the object's context and enlists in the transaction through the Distributed Transaction Coordinator built into the Windows Server.

This architecture allows transactional behavior to be determined by the object or resource used, instead of by an individual application program, thus ensuring a high level of cross-application consistency and reliability. Although these transaction management capabilities are available, the effort involved in removing reliance on CICS, which is prevalent throughout the mainframe environment, is often perceived as a significant obstacle to migration. However, non-Microsoft CICS-compatible transaction monitors are available on the Windows Server System to minimize effort and risk during the initial migration.
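
The following minimal sketch illustrates the manual transaction model described in the first item above. It uses the pyodbc package as one possible ODBC binding for Python (an assumption; any ODBC interface follows the same begin, commit, or abort pattern), and the DSN, table, and column names are hypothetical.

    # Sketch of the manual transaction model over ODBC, using the pyodbc
    # package (one possible binding; others follow the same pattern).
    # The DSN, table, and column names are hypothetical.
    import pyodbc

    conn = pyodbc.connect("DSN=OrdersDb", autocommit=False)
    cursor = conn.cursor()
    try:
        # Both updates must succeed together, or neither must apply.
        cursor.execute(
            "UPDATE Accounts SET Balance = Balance - ? WHERE AccountId = ?",
            100, 1)
        cursor.execute(
            "UPDATE Accounts SET Balance = Balance + ? WHERE AccountId = ?",
            100, 2)
        conn.commit()    # the program determines the outcome: commit...
    except pyodbc.Error:
        conn.rollback()  # ...or abort, leaving the database consistent
        raise
    finally:
        conn.close()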

Options and Techniques

It is sometimes assumed that with the Windows Server System, the rule is one application, one server. This approach is sometimes appropriate: for example, when applications are administered by different organizational units, or when it is simpler to isolate each application on its own hardware so that it can be managed and upgraded as an independent unit.

A cost/benefit analysis may show that the management overhead and loss of flexibility incurred in sharing a relatively inexpensive hardware configuration outweigh the marginal gain in utilization.

In contrast, high utilization is always the main objective within the mainframe environment because hardware and software components tend to be significantly more expensive to own and operate than equivalents in the Windows environment.

However, as the Windows Server System is increasingly deployed as an enterprise standard instead of as an application-by-application solution, newer versions are delivered with more sophisticated tools for accommodating multiple applications, such as:

  • Windows System Resource Manager (WSRM). Allocates appropriate system resources among multiple processes based on business priorities.

  • Microsoft Operations Manager. Simplifies the process of identifying and addressing possible application issues.

  • Virtual Server. A complete virtual machine solution with robust storage, networking, and management features and a Web-based management console. Virtual Server enables multiple workloads to coexist on fewer servers.

Sources for Detailed Guidance

For more information on the Transaction Processing Performance Council, refer to:

https://www.tpc.org

For more information on Windows Server 2003 Performance and Tuning, refer to:

https://www.microsoft.com/windowsserver2003/evaluation/performance

For more information on Microsoft Management Solutions, refer to:

https://www.microsoft.com/systemcenter/default.mspx

For more information on Virtual Server, refer to:

https://www.microsoft.com/windowsserversystem/virtualserver/default.mspx