Component Load Balancing Architecture

Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2

This section discusses Component Load Balancing (CLB) and its key structures. Also discussed are routing servers, the design and optimization of CLB clusters, and storage and memory requirements.

Component Load Balancing

Unlike server clusters and NLB, which are built into the Advanced Server and Datacenter Server editions of the Windows operating system, CLB is a feature of Microsoft Application Center 2000. It is designed to provide high availability and scalability for transactional components. CLB scales to as many as eight servers and is well suited to building distributed solutions.

CLB makes use of the COM+ Services supplied as part of the Windows 2000 and Windows Server 2003 operating systems. COM+ Services provide:

  • Enterprise functionality for transactions

  • Object management

  • Security

  • Events

  • Queuing

COM+ components use the Component Object Model (COM) and COM+ Services to specify their configuration and attributes. Groups of COM+ components that work together to handle common functions are referred to as COM+ applications.
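To make this concrete, the following is a minimal sketch, assuming Windows and the pywin32 package, of how web-tier code might activate a COM+ component by its ProgID. The ContosoOrders.OrderProcessor ProgID and the Submit method are hypothetical names used only for illustration; the point is that this activation request is the call CLB routes to an application server in the cluster.

    # Minimal sketch (Windows + pywin32 assumed). The ProgID and the Submit
    # method below are hypothetical; substitute the COM+ application's own.
    import win32com.client

    # Activate the COM+ component. With CLB in place, this activation request
    # is directed to an application server chosen from the routing list.
    order_processor = win32com.client.Dispatch("ContosoOrders.OrderProcessor")

    # Call a method on the component as usual; the caller does not need to
    # know which cluster member actually services the request.
    result = order_processor.Submit(1001, 250.00)
    print(result)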

CLB Key Structures

Figure 8 below provides an overview of CLB. CLB uses several key structures:

  • CLB software handles the load balancing and is responsible for determining the order in which cluster members activate components.

  • The router handles message routing between the front-end Web servers and the application servers. It can be implemented either through component routing lists stored on the front-end Web servers or as a component routing cluster configured on separate servers.

  • Application server clusters activate and run COM+ components. The application server cluster is managed by Application Center 2000.


Figure 8: Component load balancing

Routing List

The routing list, made available to the router, is used to track the response time of each application server from the Web servers. If the routing list is stored on individual Web servers, each server has its own routing list and uses this list to periodically check the response times of the application servers. If the routing list is stored on a separate routing cluster, the routing cluster servers handle this task.

The goal of tracking the response times is to determine which application server responds fastest to a given Web server. The response times are tracked in an in-memory table and used, in round-robin fashion, to determine which application server should be passed an incoming request. The application server with the fastest response time (and, theoretically, the one that is least busy and most able to handle a request) is given the next request. The request after that goes to the application server with the next fastest time, and so on.
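As a rough illustration of this mechanism, the following Python sketch keeps an in-memory table of response times, orders the application servers fastest-first, and hands out incoming requests in round-robin fashion over that order. The host names, probe port, and timing method are assumptions made for illustration only, not the actual Application Center 2000 implementation.

    # Illustrative sketch only: host names, probe port, and timing method are
    # assumptions, not the actual Application Center 2000 implementation.
    import itertools
    import socket
    import time

    APPLICATION_SERVERS = ["app01", "app02", "app03"]  # assumed host names
    PROBE_PORT = 135                                    # assumed probe target

    def measure_response_time(host, port=PROBE_PORT, timeout=1.0):
        """Time a TCP connection to the server; unreachable servers sort last."""
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.perf_counter() - start
        except OSError:
            return float("inf")

    def build_routing_table(servers):
        """Build the in-memory table of servers ordered fastest-first."""
        timings = {host: measure_response_time(host) for host in servers}
        return sorted(timings, key=timings.get)

    # Requests are handed out in round-robin order over the fastest-first
    # table, so the quickest (and presumably least busy) server is given
    # the next request, then the next fastest, and so on.
    routing_table = build_routing_table(APPLICATION_SERVERS)
    next_server = itertools.cycle(routing_table)

    for request_number in range(5):  # five incoming requests
        print("request", request_number, "->", next(next_server))

In the real cluster the table is refreshed periodically, so the ordering adapts as the load on the application servers changes.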

Designing CLB Clusters

The architecture of CLB clusters should be designed to meet the availability requirements of the service offering. In small to moderately sized implementations, the front-end Web servers can host the routing list for the application server cluster. In larger implementations, dedicated routing clusters are desirable to ensure that high availability requirements can be met.

Optimizing CLB Servers

As with NLB, servers in CLB clusters should be optimized for their role, the types of applications they will run, and the anticipated local storage they will use.

High Speed Connections

Routing servers maintain routing lists in memory and need high-speed connections to the network.

Storage and Memory Requirements

Whether configured separately or as part of the front end, CLB does not require much storage, although a limited amount of additional RAM may be required. Application servers, on the other hand, typically need a large amount of RAM, fast CPUs, and only limited redundancy in the drive array configuration. If redundant drive arrays are used, a basic configuration such as RAID 1 or RAID 5 may be all that is needed to maintain the required level of availability.