
Chapter 1 - Scaling Business Web Sites with Application Center

Because World Wide Web (WWW) technologies are rapidly becoming the platform of choice for supporting enterprise-wide applications, the infrastructure required to develop and host applications has grown in scale and complexity. Server technology is particularly hard pressed to keep up with the daily client demands for Web pages. Microsoft Application Center 2000 (Application Center), one of the key new products in the Microsoft Windows Server System, is designed to address issues that are related to server scalability, manageability, and reliability.

This chapter provides an overview of this middleware server tier and shows how Application Center is positioned in the Windows Server System. Following this, you’ll learn about the challenges, issues, and solutions for building applications that are both available and scalable—by using Application Center.

On This Page

Building Blocks
Positioning Application Center
The Web Computing Model for Business
Enabling Highly Available and Scalable Application Services
Resources

Building Blocks


The Windows Server System is the direct result of a major shift in computer application architecture that took place during the 1990s. To fully appreciate the significance of the new integrated server software, it's necessary to examine this architectural shift, as well as its causes and continuing impact on the computing community.

The Applications Architectural Shift

When Internet technology, notably the Web, moved into the computing mainstream in the middle of the 1990s, the model for business computing changed dramatically. This shift (Figure 1.1) was centered on the industry's notion of client/server computing, which until then had been very complex, costly, and proprietary.


Figure 1.1   Application architecture shifts since 1970

Computing on the Web

The Web model is characterized by loosely connected tiers of diverse collections of information and applications that reside on a broad mix of hardware platforms. Remember, the driving force behind the Internet since its inception has been the desire to provide a common information-delivery platform that is scalable, extensible, and highly available. This platform is flexible by design and not limited to one or two computing tiers. The only real limits to application development in the Internet world are computer capacity and the imagination of the application designer.

As the Web browser rapidly became ubiquitous and Web servers proliferated throughout companies, it was clear—despite the best efforts of client/server software producers to Web-enable their products—that a radically different way of thinking about the application model was needed. The developer community was the first group confronted with the lowest-common-denominator approach to business computing. New techniques and tools were clearly required to meet the technology shifts and challenges facing developers.

Technology Shifts and Developer Challenges

As the Internet revolution took hold and new technology appeared, developers faced several challenges that existing design models and tools couldn't address adequately. These challenges centered on the following issues:

  • Heterogeneous environments

  • Scalability

  • Rapid application development and deployment

  • Platform administration and management

  • Network-aware application design

Heterogeneous Environments

One of the earliest, and perhaps biggest, challenges was the need to build applications that could readily fit into heterogeneous environments. In most large organizations there was a mix of terminals, rich clients, and thin (Web) clients. In addition to accommodating the client base, new applications had to interact with legacy data and applications that were hosted on mainframe and mid-range computers, often from different hardware vendors.

Scalability

Prior to the influx of Internet technologies, scalability was a relatively easy issue to manage. To begin with, the computing environment was essentially a closed system because there was a limited amount of remote access by staff, customers, or business partners. This meant that the size of the user base and their usage patterns for given applications and services were well known. Strategic planners had ample historical data on which to base their projections for scaling the computing environment to match consumer demand.

Next, the application development life cycle typically spanned several years. Once again, planners had ample time to plan for system and application scaling.

Finally, microcomputers still hadn't realized their full potential—many still viewed them as something slightly smarter than a terminal—and their deployment throughout corporations was just starting to take off. As time passed, there was a growing expectation that the desktop would become part of any given application.

While the microcomputer was redefining how people worked, Internet technology, notably the Web, altered the corporate mindset. Initially, this new technology was viewed as an ideal low-cost method for sharing information throughout the organization. Not only was it inexpensive, but it also made it very easy for users to do their own development, and internal Web sites (intranets) quickly appeared on the computing landscape.

The foundation for scalability planning started to erode, and when companies opened their doors to the outside world, it crumbled completely. The new design paradigm said that systems had to be designed to accommodate anywhere from less than one hundred to more than one million users.

Rapid Application Development and Deployment

The intranet and Internet phenomenon highlighted the possibility of, and need for, rapid application deployment. The corporate intranet experience clearly demonstrated that business applications could be built quickly. An added bonus was the simplicity of URL-based deployment. The net result was that business managers and users began to question the entire traditional development platform and process. They were no longer prepared to wait several years before being able to use an application. From an investment perspective, the business community questioned any investment in applications that would be legacy systems by the time they were completed.

As businesses expanded their applications horizon from the intranet to the Internet, the notion of rapid application development was redefined even further. In order to be competitive, applications needed to be created virtually on demand for immediate use—just-in-time (JIT) development. To achieve this, the developers needed to completely revamp and revitalize their approach to applications development.

Platform Administration and Management

As with any aspect of computer technology, things aren't perfect in the Internet/Web world. The Information Technology (IT) professionals who embraced this new application model discovered that along with freedom and flexibility came a completely new set of administration and management issues. These issues revolved around clients, applications, and hosts.

The browser, coming as it did from the grass roots, left most organizations in the position of not having a browser standard. (Day-to-day support and upgrade issues themselves were often a logistical nightmare.) From a development perspective, the lack of standardization meant that application designers had to accommodate the core and extended HTML rendering capabilities of each browser version.

Application deployment was even more difficult to manage, because system administrators had to contend with large numbers of content publishers rather than a single developer group. The management of this aspect of Web-based computing became increasingly difficult as businesses bought into the idea of providing data-driven, dynamic content. The scope of the Web programming model was broadened by the need to include diverse data stores and accommodate several different scripting languages.

Any Webmaster from the early days of Internet-based business applications can attest to the hours of painstaking, manual work required to keep even a medium-sized site operating properly and continuously—because another aspect of the Internet phenomenon was the users' expectation of 24-hour/7-day access. Support demand increased as servers were added to accommodate growing traffic on Web sites.

Unfortunately, the Web's designers and advocates neglected to include a set of tools for managing the platform; it was left to the IT community to come up with a solution.

Network-Aware Applications

The final challenge facing developers is a result of the advances in portable computer technology and the declining cost of portable computers, such as laptops, notebooks, and palmtops. Coupled with the global access made possible by the Internet, mobile computing has grown at a rate comparable to that of the Web. Recent figures indicate that laptop sales now exceed those of desktop computers.

Offline, or disconnected, use is no longer the exception. The user community expects to be able to use applications and services in both online and offline mode. The application developer must be able to provide this capability in an application.

An Overview of Distributed Web Applications

The Windows Server System addresses the challenges facing organizations by combining an architectural vision with a complete set of Microsoft technologies. These technologies can be used to develop, deploy, and support n-tier, distributed applications. Highly integrated but flexible, this product suite enables developers to build end-to-end business solutions: solutions that can leverage existing architectures and applications. Let's take a look at the philosophy behind, and the major elements of, this platform.

Philosophy and Benefits

The key tenet of distributed Web-based applications is the logical partitioning of an application into three tiers:

  • Presentation

  • Business logic

  • Data access and storage

By partitioning applications along these lines, using component-based programming techniques, and by fully utilizing the services provided by the Microsoft Windows operating system, developers can build highly scalable and flexible applications. Table 1.1 summarizes the major benefits that can be derived by adopting the distributed Web-based applications model.

Table 1.1   The Benefits of Using the Windows Platform to Build Applications

  • Rapid development. Use the declarative programming techniques of the Component Object Model (COM) and snap-together components.

  • Scalability. Use Windows Component Services to manage thread, resource, distribution, and concurrency issues.

  • Easy deployment and management. Use the Windows operating system services to contain or reduce the cost of deployment and tie it into a management schema.

  • Support for disconnected clients. Build rich clients that will continue to work after a user is disconnected.

  • Ease of customization. Use standard end-user and programming tools to customize components.

  • Support for multiple data stores. Use data services to enable the application to access databases, the message system, and the file system.

  • Integration and interoperability. Use Windows services to access data and communicate with heterogeneous systems.

Platform Components

A simple application model consists of a client that communicates with the middle tier, which itself consists of the application server and an application containing the business logic. The application, in turn, communicates with a back-end database that is used to supply and store data.

Let's look at the elements of each tier in more detail, starting with the Presentation layer, which is supported by Presentation Services.

Presentation Services

The Presentation layer consists of either a rich or a thin client interface to an application. The rich client, which uses the Microsoft Win32 API, provides a full programming interface to the operating system's capabilities and uses components extensively. Although arguably not as robust or as capable of delivering the performance of a rich client, the thin client (Web browser) is rapidly becoming the interface of choice for many developers. A developer can take advantage of several simple-yet-robust scripting languages to build business logic that can be executed on any of the three application tiers. With full support for HTML and the DHTML and XML object models, the thin client is able to provide a visually rich, flexible, and interactive user interface to applications. Thin clients also have the added advantage of providing a greater degree of portability across platforms.

Business Logic/Application Services

This layer is divided into application servers (Internet Information Services [IIS], Site Server, and SNA Server) and services, which are available to support clients. Web application logic, typically consisting of Active Server Pages (ASP) written in Microsoft Visual Basic Scripting Edition (VBScript), is processed in the IIS server space. Either ASP- or COM-based applications can be written to take advantage of Microsoft Transaction Server (MTS), Message Queuing (MSMQ), directory, and security services. Application services, in turn, can interact with several data services on the back end.

Data Access and Storage

The data services that support data access and storage consist of:

  • Microsoft ActiveX Data Objects (ADO), which provides simplified programmatic access to data by using either scripting or programming languages.

  • OLE DB, which is an established universal data provider developed by Microsoft.

  • XML, which is a markup standard for specifying data structures.

XML is a recent standard put forward by the Internet community. Whereas HTML focuses on how information is rendered by the browser and displayed on the screen, the goal of XML is to describe a data structure and its content, independent of presentation.
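To make this distinction concrete, the short sketch below (in Python, using the standard library's ElementTree parser; the catalog fragment and its element names are invented for illustration) shows an XML document carrying pure data structure, with nothing said about how it should be displayed:

```python
import xml.etree.ElementTree as ET

# A hypothetical catalog fragment: the markup names the data's
# structure (products, names, prices); rendering is left entirely
# to whatever consumes it.
catalog_xml = """
<catalog>
  <product sku="1001">
    <name>Widget</name>
    <price currency="USD">9.95</price>
  </product>
</catalog>
"""

root = ET.fromstring(catalog_xml)
for product in root.findall("product"):
    name = product.findtext("name")
    price = float(product.findtext("price"))
    print(product.get("sku"), name, price)
```

The same document could feed a browser page, a billing system, or a catalog import with no change to the data itself.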

System Services

Elements within each segment of our model are fully supported by the Windows operating system, which among its many services provides directory, security, management, and communications services that work across the three tiers. The programming tools that make up the Visual Studio version 6.0 development system enable developers to build application components across the tiers.

Windows Server System

The integrated server software in the Windows Server System extends the distributed applications vision. The basic philosophy and objectives—to provide a model and tools for building n-tier, distributed business solutions—have not changed. The diagram shown in Figure 1.2 illustrates the Windows Server System tier, including Application Center, and shows how the Windows Server System fits in the Microsoft platform.


Figure 1.2   Positioning Application Center in the Microsoft business platform

The most notable additions to the platform, and most important from our perspective, are the new server technologies in the Enterprise Servers layer. It was obvious that customers needed an economical and easy way to scale their Web server farms (also known as Web clusters) to accommodate increasing traffic. Plus, tools were needed for deploying and managing content and applications on these servers. The solution is Application Center, a product that:

  • Addresses issues related to scaling-out Web-based applications across multiple servers.

  • Accommodates deployment of content and applications across clusters.

  • Transparently load-balances and distributes work across a cluster.

  • Provides proactive monitoring of health and performance metrics.

  • Supports performance testing to enable scaling for next-generation applications.

Positioning Application Center


Application Center is a strategic server product in the Windows Server System. It was developed to provide a competitively priced, yet robust, tool for scaling and managing a broad range of Web-based business applications. Its feature set, which includes load balancing and server synchronization, to name but two features, is not limited to Web applications; it can also support COM+ applications. Version 1 is integrated with core Microsoft Windows 2000 Server services, such as Network Load Balancing (NLB), and because of this level of integration, it extends the core operating system services by providing tools such as application publishing. It is positioned in the Microsoft Windows Server System so that it can fully integrate with other middle-tier servers and services, as well as with the development tools layer.

As we move through the elements of the Web computing model in the next section, you’ll gain a better understanding of Application Center’s role and an appreciation of the features it brings to Web-based applications.

The Web Computing Model for Business


Because today's businesses are models of dynamic change—often growing quickly and instituting rapid directional shifts—the Web model is ideally suited for business computing. Web sites can grow exponentially with demand and provide a full range of services that can be tailored to meet user requirements. These services are often very complex and need to be integrated with other services in the organization.

In this section, we'll take a look at the architectural goals and elements of a business Web site, as well as some typical site topologies. For more information about building scalable n-tier sites, refer to the white paper, A Blueprint for Building Web Sites Using the Windows DNA Platform, which is in the Appendix.

Architectural Goals

The foundation for an architecture that adequately addresses business computing needs must meet these goals:

  • Scalability—Enabling continuous growth to satisfy user demands and respond to business needs by providing near-linear, cost-effective scaling.

  • Availability and reliability—Ensuring that there are continuous services to support business operations by using functional specialization and redundancy.

  • Management—Providing complete, easy-to-use management facilities to ensure that operations can keep pace with growth while reducing the total cost of ownership (TCO).

  • Security—Ensuring that adequate security is in place to protect the organization's assets, namely its infrastructure and data.

Architectural Elements

The key architectural elements of an n-tier business Web site, illustrated in Figure 1.3, are as follows:

  • Clients

  • Front-end systems

  • Back-end systems

For the site architect and application developer, all of these elements must be considered in the context of scalability and reliability, security, and management operations.


Figure 1.3  Architectural elements of an n-tier business Web site

Figure 1.3 shows the split between the front-end and back-end systems as well as the firewall and network segmentation, which are key security elements in site architectures. Let's examine the elements of this model in more detail, starting with the clients.

Clients

Clients issue service requests to the server hosting the application that the client is accessing. From the user's perspective, the only things visible are a URL that identifies a page on a site, hyperlinks for navigation once the page is retrieved, and forms that require completion. Neither the client nor the user has any knowledge of the inner workings of the server that satisfies the request.

Front-End Systems

Front-end systems consist of the collections of servers that provide core services, such as HTTP/HTTPS and FTP, to the clients. These servers host the Web pages that are requested and usually all run the same software. For efficiency's sake, it is not uncommon for collections of these servers (Web farms or clusters) to have access to common file shares, business-logic components, or database systems located on the back-end (or middle-tier in more extended models) systems in our model.

Front-end systems are typically described as stateless because they don't store any client information across sessions. If client information needs to persist between sessions, there are several ways to do this. The most common is through the use of cookies. Another technique involves writing client information into the HTTP header string of a Web page to be retrieved by the client. A third method is to store client information on a back-end database server. Because this last technique can have significant performance implications, even though it increases reliability in a non-trivial fashion, it should be used judiciously. You'll learn more about state and persistence—concepts that are central to good application design—in later chapters.
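The cookie-plus-back-end-store combination can be sketched in a few lines of Python (an illustrative model only, not product code; the function and store names are invented, and the in-memory dictionary stands in for a back-end database):

```python
import secrets

# An in-memory dict stands in for the back-end database that would
# hold session state in a real deployment.
session_store = {}

def handle_request(cookies):
    """Return (response_headers, session_state) for one request."""
    session_id = cookies.get("session-id")
    if session_id not in session_store:
        # New visitor (or expired session): issue a fresh token.
        session_id = secrets.token_hex(16)
        session_store[session_id] = {}
    headers = {"Set-Cookie": f"session-id={session_id}"}
    return headers, session_store[session_id]

# First request arrives without a cookie; the server issues one.
headers, state = handle_request({})
state["cart"] = ["widget"]

# A later request presents the cookie, so the user's state persists
# even though the front-end server itself remains stateless.
sid = headers["Set-Cookie"].split("=", 1)[1]
_, state2 = handle_request({"session-id": sid})
```

Because the front end keeps nothing but the store lookup, any server in the farm that can reach the shared store can answer the follow-up request.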

Scalability of the front-end systems is achieved either by increasing the capacity of an individual server (scaling up) or by adding more servers (scaling out). We'll examine availability and scalability issues and options later in this chapter.

Back-End Systems

The back-end systems are the servers hosting the data stores that are used by the front-end systems. In some cases, a back-end server doesn't store data, but accesses it from a data source elsewhere in the corporate network. Data can be stored in flat files, inside other applications, or in database servers such as Microsoft SQL Server. The following table summarizes data and storage areas.

Table 1.2   Different Types of Data Stores

  • File systems. Example: file shares. Data: HTML pages, images, executables, scripts, COM objects.

  • Databases. Example: SQL Server. Data: catalogs, customer information, logs, billing information, price lists.

  • Applications. Examples: ad insertion, Service Advertising Protocol (SAP), Siebel. Data: banner ads, accounting information, inventory/stock information.

Because of the data and state they must maintain, the back-end systems are described as stateful systems. As such, they present more challenges to scalability and availability. These topics are covered in detail later in this chapter.

Security Infrastructure

Securing the assets of today's businesses—with their mobile workers, direct business-to-business computer connections, and a revolving door to the Internet—is a complex and costly endeavor. The consequences of poorly implemented computer security can, however, spell disaster for a business.

At a high level, security domains—not to be confused with Internet or Microsoft Windows NT/Windows 2000 domains—provide regions of consistent security with well-defined and protected interfaces between them. Large organizations may partition their computing environment into multiple domains, according to business division, geography, or physical network, to name but a few types. Security domains may be nested within one another or even overlap. There are as many security architectures as there are security mechanisms.

At a low level, the basic model for securing a single site involves setting up one or several perimeters to monitor and, if necessary, block incoming or outgoing network traffic. This perimeter defense (firewall) may consist of routers or specialized secure servers. Most organizations use a second firewall system, as shown in Figure 1.4. Security specialists refer to this area between the firewalls as the DMZ (demilitarized zone).


Figure 1.4   Using firewalls to establish a secure zone

Remember, this is only a model; every organization builds its security architecture to meet its own business requirements. Another factor in this decision is the performance cost of providing more protection. Some organizations, in fact, put their Web servers on the Internet side of the firewall, having determined that the risk to, or cost of reconstructing, a defaced or damaged Web server isn't high enough to warrant more protection.

We'll cover security in greater detail later in the book as it applies specifically to the Application Center environment, with its aggregated servers and their applications.

Management Infrastructure

Site management systems are often built on separate networks to ensure high availability and to avoid having a negative impact on the application infrastructure. The core architectural elements of a management system are as follows:

  • Management consoles serving as portals that allow administrators to access and manipulate managed servers.

  • Management (also called monitoring) servers, which continuously monitor managed servers, receive alarms and notifications, log events and performance data, and serve as the first line of response to pre-determined events.

  • Management agents, which are programs that perform management functions within the device on which they reside.

As systems scale or their rate of change accelerates, the management and operation of a business Web site becomes a critical factor, in terms of reliability, availability, scalability, and TCO. Administrative simplicity, ease of configuration, and ongoing health/failure detection and performance monitoring become more important than application features and services.

Enabling Highly Available and Scalable Application Services


Now that we have covered the architectural goals and elements of a business Web site, let's take our site model from Figure 1.3 and scale it out (Figure 1.5).


Figure 1.5  A typical n-tier Web site

Before we show you how Microsoft technologies can be used to scale this site and meet our architectural goals, we'll examine the different aspects of availability and scalability, as well as the solutions that are currently available.

The Traditional Approach – Scaling Up

Availability and scalability are not new issues in the computing world; they've been around as long as we've used computers for business. The traditional approaches for handling these issues weren't really challenged until the microcomputer came into its own as a credible platform for hosting business applications.

Availability

There are several architectures that are used to increase the availability of computer systems. They range from computers with redundant components, such as hot swappable drives, to completely duplicated systems. In the case of a completely duplicated computer system, the software model for using the hardware is one where the primary computer runs the application while the other computer idles, acting as a standby in case the primary system fails. The main drawbacks are increased hardware costs—with no improvement in system throughput—and no protection from application failure.

Why is availability important? In 1992 it was estimated that system downtime cost U.S. businesses $4.0 billion per year.¹ The average downtime event results in a $140,000 loss in the retail industry and a $450,000 loss in the securities industry. These numbers were all based on computerized businesses before the Internet phenomenon.

¹ FIND/SVP Strategic Research Division Report, 1992

Scalability

Scaling up is the traditional approach to scalability. It involves adding more memory and increasing the size or number of the disks used for storage. The next step in scaling up is the addition of CPUs to create a symmetric multiprocessing (SMP) system. In an SMP system, several CPUs share a global memory and I/O subsystem. This shared memory model, as it is called, runs a single copy of the operating system, with applications running as if they were on a single-CPU computer. SMP systems are very scalable if applications do not need to share data. The major drawbacks are the physical limitations of the hardware, notably bus and memory speed, which are expensive to overcome. The price steps in moving from one to two, two to four, and four to eight microprocessors are dramatic. At a certain point, of course, a given computer can't be upgraded any further and it's necessary to buy a larger system—a reality that anyone who's owned a microcomputer can appreciate.

In terms of availability, the SMP approach does provide an inherent benefit over a single-CPU system—if one CPU fails, the remaining CPUs can continue to run your applications.

Multiprocessing systems with redundant components

In February 2000, a quick survey of the mainstream manufacturers of high-end servers showed that an Intel-based server with some redundant components (for example, power supply and hot-swappable disks) and four microprocessors averaged $60,000 for an entry-level system.

Scaling Out as an Alternative

The alternative to scaling up is scaling out—especially in the front-end tier—by adding more servers to distribute and handle the workload. For this to be effective, some form of load balancing is necessary to distribute the load among the front-end servers. There are three typical load-balancing mechanisms: multiple IP addresses (DNS round robin), hardware-based virtual-to-real IP address mapping, and software-based virtual-to-real IP address mapping.

Multiple IP Addresses (DNS Round Robin)

Round robin is a technique used by DNS servers to distribute the load for network resources. This technique rotates the order of the resource record (RR) data returned in a query answer when multiple RRs exist of the same type for a queried DNS domain name.

As an example, let's use a query made against a computer that uses three IP addresses (10.0.0.1, 10.0.0.2, 10.0.0.3), with each address specified in its own A-type RR. The following table illustrates how these client requests will be handled.

Table 1.3   IP Address Returns with DNS Round Robin

  • First request: 10.0.0.1, 10.0.0.2, 10.0.0.3

  • Second request: 10.0.0.2, 10.0.0.3, 10.0.0.1

  • Third request: 10.0.0.3, 10.0.0.1, 10.0.0.2

The rotation process continues until data from all of the same-type RRs for a name have been rotated to the top of the list returned in client query responses.

Although DNS round robin provides simple load balancing among Web servers as well as scalability and redundancy, it does not provide an extensive feature set for unified server management, content deployment and management, or health and performance monitoring.
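The rotation behavior described above can be sketched in a few lines of Python (an illustrative model of the technique, not a DNS server implementation; the function name is invented):

```python
from collections import deque

def round_robin_responses(addresses, num_queries):
    """Simulate DNS round robin: each answer returns the full
    record list, rotated one position per query."""
    ring = deque(addresses)
    responses = []
    for _ in range(num_queries):
        responses.append(list(ring))
        ring.rotate(-1)  # move the first address to the back
    return responses

# Three A-type records, as in Table 1.3
answers = round_robin_responses(
    ["10.0.0.1", "10.0.0.2", "10.0.0.3"], 3)
# First query:  10.0.0.1, 10.0.0.2, 10.0.0.3
# Second query: 10.0.0.2, 10.0.0.3, 10.0.0.1
# Third query:  10.0.0.3, 10.0.0.1, 10.0.0.2
```

Because most clients simply use the first address in the answer, each server receives roughly one-third of new connections—but note that the DNS server has no knowledge of whether a listed server is actually up.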

Hardware Solutions

Hardware-based solutions use a specialized switch or bridge with additional software to manage request routing. For load balancing to take place, the switch first has to discover the IP addresses of all of the servers that it's connected to. The switch scans all the incoming packets directed to its IP address and rewrites them to contain a chosen server's IP address. Server selection depends on server availability and the particular load-balancing algorithm in use. The configuration shown in Figure 1.6 uses hubs or switches in combination with a load-balancing device to distribute the load among three servers.


Figure 1.6   A load-balancing device used in conjunction with hubs or switches

Load-balancing devices generally provide more sophisticated mechanisms for delivering high-performance load-balancing solutions than DNS round robin. These products are intelligent and feature-rich in the load-balancing arena—for example, they can transparently remove a server from rotation if it fails. However, they do not provide broad and robust Web-farm management tools.

Software Solutions

The initial software-based load-balancing solution provided by Microsoft was Windows NT Load Balancing Service (WLBS), also known as Convoy.

Note   Network Load Balancing (NLB) is an enhanced version of WLBS for the Windows 2000 server family. NLB is only available with the Windows 2000 Advanced Server and Windows 2000 Datacenter versions of the operating system.

The essence of WLBS is a mapping of a shared virtual IP address (VIP) to the real IP addresses of the servers that are part of the load-balancing scheme. NLB is an NDIS packet filter driver that sits above the network adapter's NDIS driver and below the TCP/IP stack. Each server receives every packet from the VIP. NLB determines on a packet-by-packet basis which packets should be processed by a given server. If another server should process the packet, the server running NLB discards the packet. If it determines that the packet should be processed locally, the packet is passed up to the TCP/IP stack.
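The filtering decision can be sketched like this. NLB's actual distribution algorithm is internal to the driver; a simple hash over the client address stands in for it here purely to show the shape of the decision: every host sees every packet sent to the VIP, and each host passes up only the packets that map to it.

```python
import zlib

# Sketch of an NLB-style filtering decision (the real algorithm is
# internal to the driver; CRC32 is a stand-in). Every cluster host
# sees every packet; exactly one host accepts a given client flow,
# and the others silently discard it.
NUM_HOSTS = 3

def accepts(host_id: int, src_ip: str, src_port: int) -> bool:
    """Return True if this host should pass the packet up its
    TCP/IP stack rather than discard it."""
    bucket = zlib.crc32(f"{src_ip}:{src_port}".encode()) % NUM_HOSTS
    return bucket == host_id

# Exactly one host in the cluster accepts any given client flow.
owners = [h for h in range(NUM_HOSTS) if accepts(h, "198.51.100.7", 4455)]
assert len(owners) == 1
```

Because each host makes the same deterministic computation independently, no coordination traffic is needed per packet.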

Load balancing is one aspect of a computing concept called clustering.

Clustering

Clustering is a computer architecture that addresses several issues, including performance, availability, and scalability. As is the case with other architectures we've covered, clustering is not a new concept. The new aspects are its implementation and the platforms that can take advantage of this architecture.

Cluster Overview

A cluster is a collection of loosely coupled, independent servers that behave as a single system. Cluster members, or nodes, can be SMP systems if that level of computing power is required. However, most clusters can be built by using low-cost, industry standard computer technology. The following features characterize clusters:

  • The ability to treat all the computers in the cluster as a single server. Application clients interact with a cluster as if it were a single server, and system administrators view the cluster in much the same way: as a single system image. The ease of cluster management depends on how a given clustering technology is implemented in addition to the toolset provided by the vendor.

  • The ability to share the workload. In a cluster some form of load balancing mechanism serves to distribute the load among the servers.

  • The ability to scale the cluster. Whether clustering is implemented by using a group of standard servers or by using high-performance SMP servers, a cluster's processing capability can be increased in small incremental steps by adding another server.

  • The ability to provide a high level of availability. Among the techniques used are fault tolerance, failover/failback, and isolation. These techniques are frequently used interchangeably—and incorrectly. See sidebar.

Fault Tolerance, Failover/Failback, and Isolation

Fault tolerance   For server clusters, a fault-tolerant system is one that's always available. Fault-tolerant systems are typically implemented by configuring a backup of the primary server that remains idle until a failure occurs. At that time, the backup server becomes the primary server.

Failover/Failback   Failover describes the process of taking resources offline on one node, either individually or in a group, and bringing them back online on another node. The offline and online transitions occur in a predefined order, with resources that are dependent on other resources taken offline before, and brought online after, the resources on which they depend.

Failback is the process of moving resources back to their preferred node after the node has failed and come back online.

Isolation   Isolation is a technique that simply isolates a failed server in a cluster. The remaining nodes in the cluster continue to serve client requests.
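The dependency-ordered offline/online sequencing described for failover can be sketched as a topological sort. The resource names and dependency graph below are hypothetical, chosen only to illustrate the ordering rule: dependencies come online first, and dependents go offline first.

```python
# Sketch of dependency-ordered failover sequencing. Resources that
# depend on others are brought online after their dependencies and
# taken offline before them. Resource names are hypothetical.
deps = {
    "disk": [],
    "ip": [],
    "database": ["disk"],
    "app": ["database", "ip"],
}

def online_order(graph: dict) -> list:
    """Topological order: each resource appears after everything
    it depends on (the order used to bring resources online)."""
    order, seen = [], set()
    def visit(r):
        if r in seen:
            return
        seen.add(r)
        for d in graph[r]:
            visit(d)
        order.append(r)
    for r in graph:
        visit(r)
    return order

up = online_order(deps)
down = list(reversed(up))  # offline order: dependents first
assert up.index("disk") < up.index("database") < up.index("app")
```

Reversing the online order gives the offline order, which is why the two sequences mirror each other in the sidebar's description.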

The different techniques used to provide a high level of availability are what distinguish two of the clustering technologies provided by Microsoft—Windows Clustering (formerly Wolfpack or Cluster Server), which implements failover/failback, and Application Center, which uses the isolation approach.

Both these technologies implement aspects of the shared nothing model of cluster architecture. (The other major clustering model is called shared disk.)

Windows Clustering Shared Nothing Model

As implied, the shared nothing model means that each cluster member owns its own resources. Although only one server can own and access a particular resource at a time, another server can take ownership of its resources if a failure occurs. This switch in ownership and rerouting of client requests for a resource occurs dynamically.

As an example of how this model works, let's use a situation where a client requires access to resources owned by multiple servers. The host server analyzes the initial request and generates its own requests to the appropriate servers in the cluster. Each server handles its portion of the request and returns the information to the host. The host collects all the responses to the subrequests and assembles them into one response that is sent to the client.

The single server request on the host typically describes a high-level function—a multiple data record retrieve, for example—that generates a large amount of system activity, such as multiple disk reads. This activity and associated traffic doesn't appear on the cluster interconnect until the requested data is found. By using applications, such as a database, that are distributed over multiple clustered servers, overall system performance is not limited by the resources of a single cluster member. Write-intensive services can be problematic, because all the cluster members must perform all the writes and the execution of concurrent updates is a challenge. Shared nothing is easier to implement than shared disk, and scales I/O bandwidth as the site grows. This model is best suited for predominantly read-only applications with modest storage requirements.
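The scatter-gather pattern just described can be sketched as follows. The partition layout, node names, and record data are hypothetical; the point is only the shape of the interaction: each node scans only the data it owns, and the host merges the sub-results.

```python
# Sketch of the shared nothing scatter-gather pattern: the host fans
# a request out to the cluster members that own each data partition,
# then assembles the sub-responses into one reply for the client.
# Partition layout and record data are hypothetical.
partitions = {
    "node1": {"A-M": ["adams", "brown", "miller"]},
    "node2": {"N-Z": ["nelson", "smith", "young"]},
}

def query_node(node: str, predicate) -> list:
    """Stand-in for a subrequest: a node scans only the data it owns."""
    rows = [r for part in partitions[node].values() for r in part]
    return [r for r in rows if predicate(r)]

def scatter_gather(predicate) -> list:
    """The host generates subrequests and merges the sub-responses."""
    results = []
    for node in partitions:
        results.extend(query_node(node, predicate))
    return sorted(results)

print(scatter_gather(lambda r: r.endswith("s")))  # ['adams']
```

Note that no node ever touches another node's storage; only the merged results cross the cluster interconnect, which is why the model scales I/O bandwidth as the site grows.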

However, this approach is expensive where there are large, write-intensive data stores because the data has to be replicated on each cluster member.

Partitions and packs

The shared disk and shared nothing concepts can also be applied to data partitions that are implemented as a pack of two or more nodes that have access to the data storage. Transparent to the application, this technique is used to foster high performance and availability for database servers.

Application Center 2000 Shared Nothing Model

Application Center takes the shared nothing idea a bit further; there is no resource sharing whatsoever. Every member in the cluster uses its own resources (for example, CPU, memory, and disks) and maintains its own copy of Web content and applications. Each member is, for all intents and purposes, a carbon copy (sometimes called a replica or clone) of the cluster controller. The advantage of this implementation of the shared nothing model is that if any node, including the controller, fails, the complete system (its settings, content, and applications) continues to be available once the failed server is isolated.
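The isolation behavior is simple enough to sketch directly. This is an illustrative stand-in, not Application Center's internal logic; the member names are hypothetical. Because every member is a clone of the controller, a failed member can simply be dropped from the load-balancing rotation and the survivors keep serving.

```python
# Sketch of the isolation technique: a failed member is removed from
# the rotation and the remaining replicas continue to serve requests.
# Member names are hypothetical.
cluster = {"controller": True, "replica1": True, "replica2": True}

def heartbeat_failed(member: str) -> None:
    """Isolate a member: stop routing client requests to it."""
    cluster[member] = False

def serving_members() -> list:
    """Members still in the load-balancing rotation."""
    return [m for m, healthy in cluster.items() if healthy]

heartbeat_failed("replica1")
print(serving_members())  # ['controller', 'replica2'] — still available
```

No failover of resources is needed: nothing has to be moved, because every surviving member already holds a full copy of the content and applications.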

At this point you’ve probably begun to realize that in spite of the advances in development tools, building distributed applications that perform well, and are available and scalable, is not a trivial task. When load balancing and clustering are added to the n-tier application model, you have to start thinking in terms of cluster-aware applications.

Tip   Consider adding network-awareness to your list of application design criteria.

Cluster-Aware Applications

Although clustering technology can deliver availability and scalability, it's important to remember that applications have to be designed and written to take full advantage of these benefits. In a database application, for example, the server software must be enhanced to coordinate access to shared data in a shared disk cluster, or partition SQL requests into sub-requests in a shared nothing cluster. Staying with the shared nothing example, the database server may want to make parallel queries across the cluster in order to take full advantage of partitioned or replicated data.

As you can see, a certain amount of forethought and planning is required to build an application that will perform well and fully utilize the power of clustering technology.

Scaling Up vs. Scaling Out

Either scaling up or scaling out provides a viable solution to the scaling problem. There are pros and cons to both, and strong proponents of either strategy. Let's take a look at the issue of availability and fault tolerance, and then examine the economics of scaling.

High Availability

I don’t think anyone will disagree that high availability and fault tolerance are good things to have. The question of which approach provides the best solution has to be considered in the context of the application, the risk and cost of downtime, and the cost of ensuring that the system will satisfy a business's availability requirements. (It's important not to confuse uptime with availability. Based on my own experience and that of other industry professionals, most line-of-business systems can boast 99.9nn percent uptime. The problem is that they're not always available when they're needed.)

The 7-day/24-hour availability mantra is relatively new; it is one of the interesting consequences of the Internet phenomenon. It would seem that people around the world need access to information or have to be able to shop 24 hours a day, 7 days a week. Because the scaling up model is based on a single high-end system—and therefore, a single point of failure—some form of fault tolerance is needed to ensure availability. As noted in an earlier example, most high-end Intel-based server platforms have redundant power supplies, memory error checking mechanisms, and hot swappable components, such as disks.

In the hardware-based scale out model used by third-party load balancers, there is also a single point of failure: the load-balancing device itself. Figure 1.7 shows a fault-tolerant device configuration. This kind of approach to resolving the single point of failure conundrum is both complex and expensive to implement and maintain.

The software-based model provided by Application Center for scaling out does not have a single point of failure. High availability is achieved by having several identical servers available to service client requests. Because these servers are inexpensive, this approach to redundancy is cost effective. Furthermore, the more servers there are in a cluster, the lower the odds that all servers will be offline simultaneously. While performance may suffer if several servers go offline, the applications and services on the cluster remain available to the users. Of the various solutions, Application Center provides the highest level of availability for the most attractive cost.
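The claim that more servers lower the odds of total outage is easy to quantify. The sketch below assumes independent failures and an illustrative per-server availability of 99 percent; both are assumptions for the arithmetic, not measured figures from the text.

```python
# Back-of-the-envelope for cluster availability: if each member is
# independently available a fraction `a` of the time, the chance that
# ALL n members are offline at once is (1 - a) ** n. The 0.99 figure
# is an illustrative assumption, not a measured value.
def downtime_probability(a: float, n: int) -> float:
    """Probability that all n members are offline simultaneously."""
    return (1 - a) ** n

for n in (1, 2, 4):
    print(n, f"{downtime_probability(0.99, n):.0e}")
# Each added server multiplies the total-outage probability by the
# (small) per-server failure probability, so availability improves
# geometrically with cluster size.
```

This is why even modest commodity servers, clustered, can outdo a single fault-tolerant high-end machine on availability.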

Bb734902.f01uj07(en-us,TechNet.10).gif 

Figure 1.7   A fault-tolerant load-balancing device configuration

The Economics of Scaling

Prior to the release of Application Center, the economics of scaling was pretty straightforward.

If you had a single high-end server, it was simply a matter of upgrading it by adding more memory, additional CPUs, and so on. If the upgrade limit for the current server had been reached, you had to step up into a larger model. From this perspective, scaling up was expensive. However, operations costs remained, for the most part, the same because of the simplicity of scaling up.

Scaling out by building Web farms that used DNS round robin or third-party load balancing tools was very economical from an equipment perspective. Single microprocessor, off-the-shelf servers could be used as Web servers. From the operations perspective, however, things weren't as good.

As more and more servers were added to a Web farm, the environment increased in complexity and decreased in manageability. There were a limited number of options for managing servers or their contents and applications as a group. Increased capacity was frequently accompanied by an exponential increase in operations costs.

Application Center, however, provides near-linear capacity scaling while keeping operating costs level. Figure 1.8 compares the traditional scale up model and Application Center's scale out model. The y-axis plots capacity and the x-axis shows operating cost behavior as capacity increases. This illustration also serves to demonstrate that scaling out provides a higher level of scalability than scaling up.

Bb734902.f01uj08(en-us,TechNet.10).gif 

Figure 1.8   Comparison of the capacity and operating cost levels by using the scale up and scale out computing models

Scaling Out with Application Center

One of the main design objectives for Application Center is the reduction of operating costs by providing tools to manage groups of servers and their contents. The design philosophy is to automate as many activities as possible and provide an interface that enables a system administrator to manage a load-balanced cluster as a single system image. The end result is a solution that provides low-cost scalability and availability, while at the same time reducing and leveling operating costs.

Scaling n-Tier Sites with Microsoft Clustering Technology

Let's revisit the basic n-tier site model (Figure 1.5) and show how Microsoft clustering technologies can be used to provide a highly available and scalable infrastructure for supporting business applications and services. Figure 1.9 uses the same topology and partitioning as Figure 1.5, except that load-balanced clusters and database clusters have been added to provide a higher level of availability and increase the site's capacity for handling client requests.

Bb734902.f01uj09(en-us,TechNet.10).gif 

Figure 1.9   A scaled out n-tier site

As you can see, it's possible to create clusters anywhere in the computing infrastructure where availability and scalability are required. These clusters can be managed locally or remotely (by using a secure VPN connection to a cluster controller).

Notice that an intervening layer of load-balanced component servers is included in the sample site. Application Center supports Component Load Balancing (CLB) as well as NLB, so if the application design calls for it, components can be hosted on a separate load-balanced cluster. In some cases, more granularity in the partitioning of the business logic may be desirable (for security or performance reasons, for example). Support for CLB enables the developer to fully exploit object technology and build robust, fault-tolerant applications.

Resources


The following book provides additional information about the Windows platform, the Web computing model, and scalability.

Books

Gregory F. Pfister, In Search of Clusters, 2nd ed. (Prentice Hall, 1998).

Author Gregory Pfister traces the evolution of clustering technology to the present day as the industry strives to provide scalable parallel computing solutions.
