Capability: Desktop, Device and Server Management

On This Page

Introduction
Requirement: Automated Infrastructure Capacity Planning for Primary IT Services
Requirement: Management of Mobile Devices
Requirement: Virtualization to Dynamically Move Workloads from Server to Server

Introduction

Desktop, Device and Server Management is the second Core Infrastructure Optimization capability.

The following table describes the high-level challenges, applicable solutions, and benefits of moving to the Dynamic level in Desktop, Device and Server Management.

Challenges

Business Challenges

  • Users rely on the help desk for various provisioning needs, resulting in higher operating costs

  • Sprawling servers and applications demand more resources and increase costs for electricity, real estate, and staff

  • User mobility lags due to a lack of automated solutions for deploying and updating ultra-mobile PC software

IT Challenges

  • Lack of end-to-end capacity planning results in reduced application availability, degraded performance, and capacity challenges

  • IT application teams deliver and roll out updates and new applications slowly because of configuration conflicts with IT operations and users

Solutions

Projects

  • Implement automated infrastructure capacity planning

  • Implement a comprehensive solution for mobile device management

  • Implement a virtual technology solution for testing and consolidation

Benefits

Business Benefits

  • Remote API functions for accessing files and databases located on devices

  • An agile IT infrastructure helps the business respond well to competitive threats

IT Benefits

  • A single management server infrastructure for servers, desktops, and devices

  • Reduced development time and costs with familiar development tools that enable corporate developers to utilize existing code, skills, and assets

  • Consistent profiles across systems enhance various device-management functions

  • A centralized solution for installing software and sharing content with mobile devices

  • Highly automated IT services reduce costs and improve consistency

The Dynamic level in the Infrastructure Optimization Model addresses the more advanced areas of mobile device and service management, including the following requirements:

  • Automated Infrastructure Capacity Planning for Primary IT Services

  • Management of Mobile Devices

  • Virtualization to Dynamically Move Workloads from Server to Server

Requirement: Automated Infrastructure Capacity Planning for Primary IT Services

Audience

You should read this section if you do not have automation implemented for infrastructure capacity planning for your primary IT services.

Overview

In the Infrastructure Optimization Planning Guide for Implementers: Basic to Standardized guide, you read about monitoring the availability of critical servers. To move to the Dynamic level, you need to analyze the capacity of your enterprise servers, such as e-mail servers, and create a plan to optimize for current and future needs.

Capacity planning is part of capacity management, which is the process of planning, analyzing, sizing, and optimizing capacity to satisfy demand in a timely manner and at a reasonable cost. This process should be proactive and responsive to business needs, because adding resources only after a capacity problem has occurred means that performance has already been affected.

Capacity management focuses on procedures and systems, including specification, implementation, monitoring, analysis, and tuning of IT resources and their resulting service performance. Capacity requirements are based on qualitative and quantitative standards set by the service level management process and specified within the provisions of a service level agreement (SLA) or operating level agreement (OLA). The capacity management process relies on a set of iterative tasks—monitoring, analysis, modeling, optimizing, and change initiation—to achieve its goals.

Phase 1: Assess

The Assess phase of capacity planning begins with identifying which services are primary to your organization and ranking those services by priority. For many organizations, these will include the services necessary to operate the business, such as Enterprise Resource Planning (ERP) systems, and those necessary to facilitate communication, such as messaging services. The results of this phase should be a prioritized list of the primary services in your organization, along with the IT or network infrastructure resources required to operate these services.

Phase 2: Identify

The Identify phase is responsible for nominating which services identified during the Assess phase will be candidates for automated capacity planning. Considerations will include whether your organization has control of the elements in that service that dictate capacity constraints. Hosted services delivered by a vendor, for example, will be subject to service level agreements between your organization and the vendor; in this case, the vendor is typically responsible for capacity planning. The results of the Identify phase will be a manifest of those services.

Phase 3: Evaluate and Plan

During the Evaluate and Plan phase, you will examine the processes associated with developing automated capacity planning or modeling solutions, as well as the technologies that can be used to automate these processes. In most cases, prepackaged software will not provide the breadth of information and service coverage that your organization needs.

Capacity Management Process Flow

Capacity management is an iterative process with several activities performed throughout. To keep this document brief, only a selection of the core capacity management tasks (see Figure 6) is explained in detail.

Figure 6. Capacity management as an iterative process

Monitoring

Monitoring addresses the internal operating level requirements and associated metrics for each of the key IT layers that contribute to the overall SLA. It is important that the utilization of each resource and service be monitored on an ongoing basis to ensure that hardware and software resources are being used optimally and that all agreed-upon service levels can be achieved.

Analysis

In the Analysis phase, the data monitored and collected is analyzed and used to carry out tuning exercises and establish profiles. These profiles are important since they allow the proper identification and adjustment of thresholds and alarms. When exception reports or alarms are raised, they need to be analyzed and reported upon, and corrective action needs to be taken.
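
To illustrate the idea of deriving thresholds from profiles, the following Python sketch uses purely hypothetical sample data and a simple mean-plus-standard-deviation policy to turn a baseline utilization profile into warning and alarm levels.

```python
# Illustrative sketch only: derives simple alert thresholds from a baseline
# utilization profile. Sample data and the sigma-based policy are hypothetical.
from statistics import mean, stdev

def build_profile(samples):
    """Summarize a set of utilization samples (percent) into a baseline profile."""
    return {"mean": mean(samples), "stdev": stdev(samples)}

def alert_thresholds(profile, warn_sigmas=2, alarm_sigmas=3):
    """Set warning and alarm levels a fixed number of standard deviations above the mean."""
    return {
        "warning": profile["mean"] + warn_sigmas * profile["stdev"],
        "alarm": profile["mean"] + alarm_sigmas * profile["stdev"],
    }

# Example: hourly CPU utilization (percent) collected for a messaging server.
baseline = build_profile([35, 42, 38, 55, 47, 40, 44, 51])
print(alert_thresholds(baseline))
```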

Modeling

Modeling is a central element of the capacity management process. Modeling techniques and effective use of simulation software make it possible to investigate capacity planning “what-if” scenarios in order to build a model that simulates the desired outcome.

Modeling is the activity that the Infrastructure Optimization Model requires at the Dynamic level; it depends on data collected during monitoring and analysis to create automated capacity planning tools, whether developed by your organization or provided as part of a packaged software offering. Although modeling is the minimum requirement of the Dynamic level, it is recommended that the solutions used to automate capacity planning be accurate enough to use for new implementations or optimization of the identified primary services.
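
As a simple illustration of a "what-if" scenario, the following Python sketch projects CPU utilization for a hypothetical service as its user population grows; the per-user cost, growth rate, and 80 percent capacity target are placeholder assumptions, not figures from any Microsoft tool.

```python
# Illustrative "what-if" model: projects CPU utilization for a service as the
# user population grows. All figures are hypothetical placeholders.

def project_utilization(current_users, cpu_per_user_pct, monthly_growth, months):
    """Yield (month, users, projected CPU %) for each month in the planning horizon."""
    users = current_users
    for month in range(1, months + 1):
        users = int(users * (1 + monthly_growth))
        yield month, users, users * cpu_per_user_pct

# What-if scenario: 5,000 users, 0.012% CPU per user, 3% growth per month.
for month, users, cpu in project_utilization(5000, 0.012, 0.03, 12):
    flag = "  <-- exceeds 80% capacity target" if cpu > 80 else ""
    print(f"Month {month:2}: {users:6} users, projected CPU {cpu:5.1f}%{flag}")
```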

Optimizing

Analysis of the monitored data may identify areas of the configuration that could be tuned to better use system resources or improve the performance of the particular service.

Change Initiation

Change initiation introduces into the production service any changes that have been identified by the analysis and tuning activities. This activity includes the identification of the necessary change and the subsequent generation and approval of a change request. In some cases, the change can be implemented while the service continues to run; in other cases, the type of change may require the service to be temporarily stopped.

Monitoring Technologies

The main tools from Microsoft that assist you in gathering server performance and network performance data are Windows System Monitor and Network Monitor, respectively. System Monitor is the recommended tool for creating a server sizing model. Use System Monitor to identify standards for acceptable server performance and to recognize periods of peak usage. The data that is collected is instrumental in both establishing and maintaining SLAs.
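
As an example of how collected counter data can feed sizing work, the following Python sketch summarizes a counter log exported from System Monitor to CSV (for instance, with the relog utility); the file name, counter path, and 85 percent SLA threshold are hypothetical.

```python
# Illustrative sketch: summarizes a counter log exported from System Monitor
# as CSV. The file name and counter column header are hypothetical and depend
# on how the log was created.
import csv

LOG_FILE = "mailserver_cpu.csv"                               # hypothetical export
COUNTER = r"\\MAILSRV01\Processor(_Total)\% Processor Time"   # hypothetical column

values = []
with open(LOG_FILE, newline="") as f:
    for row in csv.DictReader(f):
        try:
            values.append(float(row[COUNTER]))
        except (KeyError, ValueError):
            continue  # skip header quirks or blank samples

if values:
    print(f"Samples: {len(values)}")
    print(f"Average: {sum(values) / len(values):.1f}%")
    print(f"Peak:    {max(values):.1f}%")
    print("SLA check:", "OK" if max(values) < 85 else "peak exceeds 85% threshold")
```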

Network Monitor 2.1, included with Microsoft Systems Management Server 2003 (SMS), makes it easier to monitor and analyze network traffic that is generated among computers on the network. You use Network Monitor to identify heavily used subnets, routers, and WAN connections, to recognize where network bottlenecks occur, and to develop trends to optimize network infrastructure and server placement or expansion.

For more information about using SMS 2003 for capacity planning and analysis, go to https://www.microsoft.com/technet/sms/2003/library/spgsms03/spsms13.mspx.

Capacity Planning Technologies

Microsoft System Center Capacity Planner 2006 helps you size and plan deployments of Microsoft Exchange Server 2003 and Microsoft Operations Manager (MOM) 2005. It provides tools and guidance to deploy efficiently while planning for the future, and it supports "what-if" analyses in the following ways:

  • Planning the correct amount of infrastructure needed for a new application to meet service level goals.

  • Infrastructure planning and optimization.

  • Proactive performance planning.

  • Performance analysis and predictive reporting.

System Center Capacity Planner 2006 is designed to create a system architecture model for deploying Exchange Server 2003 or MOM 2005. A typical system architecture model consists of the following information:

  • Topology. Site locations, types of networks, network components, and network characteristics (bandwidth, latency)

  • Hardware. Server distribution and characteristics, server and network mapping

  • Software. Server role and service mapping, file and storage device mapping

  • Usage profiles. Site usage and client usage

After creating a model, a simulation provides a summary and details about the performance of the application and its supporting components.
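
To make the model components concrete, the following Python sketch represents the same four kinds of information as simple data structures; this is only a conceptual illustration, not the actual System Center Capacity Planner model format, and all names and values are hypothetical.

```python
# Conceptual sketch of the kinds of data a system architecture model captures
# (topology, hardware, software roles, usage profiles).
from dataclasses import dataclass, field

@dataclass
class NetworkLink:
    name: str
    bandwidth_mbps: float
    latency_ms: float

@dataclass
class Server:
    name: str
    site: str
    cpu_cores: int
    memory_gb: int
    roles: list = field(default_factory=list)   # for example, ["Mailbox", "Bridgehead"]

@dataclass
class UsageProfile:
    site: str
    users: int
    messages_per_user_per_day: int

model = {
    "topology": [NetworkLink("HQ-Branch1", bandwidth_mbps=1.5, latency_ms=40)],
    "hardware": [Server("EXCH01", "HQ", cpu_cores=4, memory_gb=8, roles=["Mailbox"])],
    "usage":    [UsageProfile("HQ", users=2500, messages_per_user_per_day=60)],
}
print(model["usage"][0])
```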

Custom Tools

Automated capacity planning tools can vary in type and level of functionality. It is very common to feed monitoring and analysis data into a spreadsheet and develop formulas to determine capacity based on changes to defined fields in the spreadsheet. At the other end of the spectrum are tools that incorporate deeper product knowledge and detailed graphical modeling of topologies, such as System Center Capacity Planner. Whether your solution is developed in-house or by a third party, the primary requirement of the Evaluate and Plan phase is to ensure that the tools or methods used to automate capacity planning are accurate and trusted for planning and optimizing the infrastructures that affect your organization’s primary IT services.
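
The spreadsheet-style end of that spectrum can be as simple as a single sizing formula. The Python sketch below, with hypothetical field names and figures, rounds up the number of servers needed so that projected peak load stays within a target utilization.

```python
# Minimal "spreadsheet-style" sizing formula: how many servers are needed for
# a given peak load at a target utilization. Inputs are hypothetical fields
# you would normally feed from monitoring and analysis data.
import math

def servers_required(peak_concurrent_users, cpu_pct_per_user,
                     target_utilization_pct=70, cpu_capacity_per_server_pct=100):
    """Round up so the projected load stays at or below the utilization target."""
    demand = peak_concurrent_users * cpu_pct_per_user
    usable = cpu_capacity_per_server_pct * (target_utilization_pct / 100)
    return math.ceil(demand / usable)

print(servers_required(peak_concurrent_users=1800, cpu_pct_per_user=0.15))  # -> 4
```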

Phase 4: Deploy

The Deploy phase of an automated capacity planning project ensures that the tools selected are a key part of the planning process and maintained on an ongoing basis for new implementations and optimization projects involving your organization’s primary IT services. Models should be updated as new factors or technologies are introduced into your environment, such as the use of server clustering, blade servers, or virtualization. A deployment of these tools can only be successful if agreements are in place to standardize on their recommendations and provide sustained engineering or updates as necessary.

Further Information

For more information on capacity analysis and planning, go to Microsoft TechNet and search for “server capacity” or “capacity planning.”

Topic Checkpoint

Tick each requirement that your organization has met:

  • Identified primary IT service candidates for automated capacity planning.

  • Created capacity models to automate capacity planning or implemented capacity planning tools.

If you have completed the steps listed above, your organization has met the minimum requirement of the Dynamic level for Automated Infrastructure Capacity Planning for Primary IT Services in the Infrastructure Optimization Model. We recommend that you follow the guidance of additional best practice resources for analyzing critical services.

Go to the next Self Assessment question.

Requirement: Management of Mobile Devices

Audience

You should read this section if you do not have a defined life-cycle management strategy for your mobile devices.

Overview

In the Infrastructure Optimization Planning Guide for Implementers: Basic to Standardized and Infrastructure Optimization Planning Guide for Implementers: Standardized to Rationalized guides, you read about several topics associated with mobile device management and desktop computer management. To move to the Dynamic level, you need to apply many of the same principles and capabilities from your desktop management activities to your mobile devices, thereby extending your current mobile device management capabilities.

The integration of mobile devices, the Internet, and wireless connectivity provides an exciting opportunity for organizations to extend the reach of their information and services to mobile professionals. Current mobile devices are quickly approaching the computational and functional levels generally associated with portable computers. The advantages and disadvantages of these devices are also similar to those of portable computers: while there is a definite opportunity to improve productivity, there is an equivalent threat of data loss or security breach.

As the use of mobile devices increases in your organization, the need to control the types of mobile devices also increases. Without standardization, the mix of mobile devices connecting to your corporate network would be nearly impossible to manage. User authentication, standardization of operating systems, patch management, and other everyday administrative controls can only be effectively managed when you have established a company standard for each type of mobile device.

The Core Infrastructure Optimization Model profiling or question set separates the areas covered in this requirement into several individual requirements. The focus of these questions is to ensure that many of the capabilities exercised for portable computers are eventually matched by the management capabilities for mobile devices. In many cases, the tools to perform tasks for desktop deployment or management do not have equivalents for mobile devices; however, the concepts are for the most part the same.

For guidance in planning a mobile device solution, go to https://www.microsoft.com/technet/archive/itsolutions/mobile/deploy/mblwirel.mspx?mfr=true.

For more information on managing mobile devices, go to https://www.microsoft.com/technet/solutionaccelerators/mobile/evaluate/mblmange.mspx.

Phase 1: Assess

As a requirement of the Standardized and Rationalized levels, your organization should already be maintaining a detailed inventory of mobile devices connecting to your organization’s resources. The Assess phase requires that this inventory be updated and available for subsequent phases.

Phase 2: Identify

The Identify phase examines the capabilities needed to fulfill the requirements of the Dynamic level, which include the following:

  • Access to LOB applications

  • Defined basic images

  • Automated update of configurations and applications

  • Quarantine solution

  • Automated patch management

  • Automated asset management

By achieving the Rationalized level, your organization should already have mechanisms in place to access Web-based LOB applications, automate software and configuration file distribution, automate patch management, and automate asset management. The net new capabilities for the Dynamic level are therefore defining and maintaining basic mobile device images and establishing a quarantine solution for mobile devices.

Phase 3: Evaluate and Plan

The goal of the Evaluate and Plan phase is to identify the technologies needed to manage mobile device images and to establish an effective quarantine mechanism that detects whether mobile devices comply with organizational standards and isolates them from resources when they are determined to be noncompliant.

This section of the guide addresses the following areas of mobile device management:

  • Defined basic images for mobile devices

  • Quarantine solution

Defined Basic Images for Mobile Devices

Defining standard images for mobile devices ensures that their configuration is known and manageable across the organization. This effort generally consists of three primary activities:

  • Device standardization

  • Image standardization

  • File distribution

Until technologies for installing or refreshing images on mobile devices become commonplace, image standardization will remain the responsibility of the device manufacturers or service providers.

Device Standardization

Members of your mobile workforce can carry many different types of mobile devices. It would be very inefficient to maintain a standard set of images for each type of cell phone or PDA that your users might choose to use. Choosing a company standard for each type of mobile device is the only way to efficiently and securely manage these devices. Once a device has been specified, a standard operating system and set of core applications can be chosen and maintained.

Image Standardization

Mobile device image management differs considerably from imaging desktops or servers. In a typical scenario, the organization standardizes on a device type and mobile operating system, and the devices and operating systems are then built and delivered by the device manufacturer or service provider. This is largely due to the shorter asset life cycle of mobile devices in the organization and the lack of technologies and interfaces for installing operating systems as you would on a network-connected or media-bootable client or server. In this case, image standardization consists of determining and enforcing a policy for standard mobile device operating systems and ensuring that required configurations and applications are applied to mobile devices. You can use the device provisioning tools available in the Windows Mobile® Software Development Kits (SDKs) to configure settings on the devices; to add, update, and remove software; or to change device functionality. For more information, see the Step-by-Step Guide to Deploying Windows Mobile-based Devices with Microsoft Exchange Server 2003 SP2: Step 8 - Manage and Configure Mobile Devices.

File Distribution

Since mobile devices tend to come with the operating system already installed, you will need a mechanism for installing the standard applications and configuration files that each mobile device is required to have. Several tools are available for software distribution to mobile devices, including the products described under Device Management Products and Partner Solutions later in this section.

These tools give you the ability to distribute a standardized set of applications to your mobile devices.

Quarantine Solution for Mobile Devices

Even with the best plans and policies in place to ensure that mobile devices comply with all security requirements of your organization, there will be times when these measures are deactivated, circumvented, or breached. When a mobile device has been infected by malicious software or has had its security policies compromised, and it then connects to your network, the network and all of its assets and data are put at risk. You need a method to ensure that mobile devices connecting to your network comply with all of your corporate security policies.

In the Quarantine Solution for Unpatched or Infected Computers requirement later in this document, we discuss using virtual private networks (VPN) and quarantine controls on client computers to restrict network access for computers that do not meet minimum configuration compliance requirements. Microsoft partner solutions, such as Bluefire Mobile Security VPN, can enable quarantine protection for mobile devices and offer many of the same advantages as quarantine for client computers.
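
Conceptually, a quarantine decision is a policy check applied to each connecting device. The following Python sketch illustrates that logic with hypothetical policy values and device attributes; in practice, a VPN quarantine or partner product enforces this check for you.

```python
# Conceptual sketch of a quarantine decision: a device record is checked
# against corporate policy before it is allowed past the quarantine network.
# Policy values and device attributes are hypothetical examples.

POLICY = {
    "min_os_version": (5, 0),        # for example, Windows Mobile 5.0
    "max_patch_age_days": 30,
    "require_power_on_password": True,
}

def is_compliant(device):
    return (
        tuple(device["os_version"]) >= POLICY["min_os_version"]
        and device["patch_age_days"] <= POLICY["max_patch_age_days"]
        and (device["power_on_password"] or not POLICY["require_power_on_password"])
    )

device = {"os_version": (5, 0), "patch_age_days": 12, "power_on_password": True}
print("grant network access" if is_compliant(device) else "quarantine device")
```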

Device Management Products

The Microsoft Systems Management Server (SMS) 2003 Device Management Feature Pack offers a comprehensive device management solution for mobile devices running Windows CE (3.0 or later) and Windows Mobile software for Pocket PC (2002 or later), with updated support for Windows Mobile 5.0 and Windows Mobile Pocket PC Phone Edition 5.0.

Partner Solutions

The following summary material was provided by each vendor. Microsoft does not endorse or recommend particular solutions. The solutions listed below can help your organization achieve near parity between mobile device and desktop management:

  • B2M (www.b2m-solutions.com). Offers the mprodigy product line designed for blue collar and industrial markets.

  • BeCrypt (www.becrypt.com). Specializes in producing security products for laptops, Tablet PCs, and desktops, as well as for Pocket PC/Windows Mobile 5 PDA devices.

  • Bluefire Mobile Security Suite (www.bluefiresecurity.com). A comprehensive suite of products that work together to help secure mobile devices.

  • iAnywhere (www.ianywhere.com). A subsidiary of Sybase. Their Afaria management suite provides comprehensive management capabilities.

  • Odyssey’s Athena product (www.odysseysoftware.com). A device management solution that provides comprehensive management for Windows Mobile®, Windows CE, Win32®, and Windows XP embedded devices.

  • Perlego Mobile Device Lifecycle Management (MDLM) (www.perlego.com). This suite provides remote control, data assurance, and content distribution tools to protect and manage devices and their data.

Further Information

For more information on mobile device management, go to Microsoft TechNet and search for “device management.”

Topic Checkpoint

Tick each requirement that your organization has met:

  • Standardized devices and device images from manufacturers or service providers.

  • Defined configuration compliance standards enforced on all mobile devices prior to connecting to data resources.

If you have completed the steps listed above, your organization has met the minimum requirement of the Dynamic level for Management of Mobile Devices in the Infrastructure Optimization Model.

Go to the next Self Assessment question.

Requirement: Virtualization to Dynamically Move Workloads from Server to Server

Overview

In the Core Infrastructure Optimization Implementer Resource Guide: Standardized to Rationalized guide, we discussed the requirement for developing a plan for consolidation using virtualization. The requirement at the Rationalized level was limited to planning and testing virtualization technologies. At the Dynamic level, the requirement for virtualization extends to implementation of production applications or services. This requirement continues to call out the virtualization best practices highlighted in the Solution Accelerator for Consolidating and Migrating LOB Applications. Additional guidance for using virtualization in the context of development and test can be found in the Windows Server System Reference Architecture Virtual Environments for Development and Test guide.

Consolidation of physical infrastructure is, in general, an effective business strategy and a useful tool for improving hardware utilization. Additionally, the nature of virtualization allows you to assign system resources to production virtual machines as necessary; this differs from a 1:1 server-to-application physical environment, where only hardware upgrades or downgrades can be used to adjust performance.

Phase 1: Assess

At the Rationalized level and as discussed in the Core Infrastructure Optimization Implementer Resource Guide: Standardized to Rationalized guide, you were required to take an inventory of applications and infrastructure in your organization.

Phase 2: Identify

At the Rationalized level and as discussed in the Core Infrastructure Optimization Implementer Resource Guide: Standardized to Rationalized guide, you were required to nominate the appropriate services targeted for virtualization in your organization.

Phase 3: Evaluate and Plan

At the Rationalized level and as discussed in the Core Infrastructure Optimization Implementer Resource Guide: Standardized to Rationalized guide, you were required to evaluate virtualization technologies and plan for virtualization deployment in your organization.

Phase 4: Deploy

The Deploy phase is responsible for implementing a tested virtualization strategy selected at the Rationalized level. This phase includes establishing the consolidated virtual machine environment and virtualizing applications in the consolidated environment. The following figure from the Solution Accelerator for Consolidating and Migrating LOB Applications illustrates how planning and design tasks fit in the overall scope and sequence of the consolidation, migration, and virtualization of LOB applications using Microsoft Virtual Server 2005. For detailed information, see the Implementation Guide in the Solution Accelerator for Consolidating and Migrating LOB Applications.

Figure 7. LOB application virtualization process

Implementation of a virtual server solution starts with establishing a consolidated environment, including both building and stabilizing the environment. This includes:

  • Preparing the infrastructure.

  • Setting up hardware and software platforms for the destination servers.

  • Deploying the tools required to migrate and virtualize the source servers.

Setting Up the Infrastructure

The first step in building the consolidated environment is to set up the infrastructure, ensuring that the required network connectivity and infrastructure services (including directory services, Domain Name System (DNS), and Windows Internet Name Service (WINS)) are in place. In addition, the appropriate accounts and groups, including a migration account, should be created according to the deployment requirements.
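
A lightweight pre-flight check can confirm that name resolution works and that required infrastructure services are reachable before you start building. The following Python sketch is a generic example; the host names and ports are placeholders for your environment.

```python
# Illustrative pre-flight check: confirm that name resolution works and that
# required infrastructure services are reachable before building the
# consolidated environment. Host names and ports below are placeholders.
import socket

CHECKS = [
    ("dc01.corp.example.com", 389),   # directory services (LDAP)
    ("dns01.corp.example.com", 53),   # DNS
    ("wins01.corp.example.com", 42),  # WINS replication (placeholder)
]

for host, port in CHECKS:
    try:
        addr = socket.gethostbyname(host)
        with socket.create_connection((addr, port), timeout=5):
            print(f"{host}:{port} reachable at {addr}")
    except OSError as err:
        print(f"{host}:{port} check failed: {err}")
```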

Setting Up Tools

The Virtual Server 2005 Migration Toolkit (VSMT) requires the availability of at least one Automated Deployment Services (ADS) Controller in the organization. The Controller can schedule jobs to run on devices (computers added to the ADS database) that have the deployment agent running. The deployment agent can run on Windows 2000 Server or Windows Server 2003.

Stabilizing the Virtual Machine Infrastructure

After the infrastructure for the consolidated environment is designed and built, ensure that the target state meets the goals stated for implementation. The environment should be stabilized before it is put into production.

Before migrating servers to the consolidated environment, ensure that:

  • The consolidated environment and supporting infrastructure are operational.

  • Required hotfixes and security updates are installed on the host operating system and each guest operating system.

  • Event logs of the consolidated and supporting servers are cleared and there are no warnings, errors, or alerts.

  • The consolidated environment meets the availability goals.

  • The consolidated environment meets the capacity requirements.

  • Users, administrators, and LOB applications have proper access to the service resources.

  • The consolidated environment provides the required level of I/O performance.

  • Migration tools have been implemented.

Deploying Virtualized Services into Operations

After building and stabilizing the consolidated environment, you should be ready to virtualize services by capturing images of the source servers and creating virtual machines in the new environment. The deployment process for virtualization contains the following steps:

  1. Gather source information. Collect hardware and software information about the source server to complete pre-deployment validation of the source server.

  2. Load updated system files into the patch cache. Add required patch files to the VSMT patch cache to ensure that they are available to VSMT for installation after the image is deployed to the virtual machine.

  3. Filter devices and services. Identify source server hardware-specific devices and services to disable during virtualization.

  4. Generate the command files and task sequences. Generate the necessary script files and ADS task sequences.

  5. Capture the image of the physical computer. Capture the disk partitions of the physical computer.

  6. Deploy the image to a virtual machine. Create and configure the virtual machine environment and deploy the captured disk partitions.

Post-Virtualization

After the consolidation and migration of servers is complete and the users are accessing servers in the consolidated environment, consider the following:

  • Verify that the migration is complete.

  • Create backups of the new environment and retain a baseline of the environment configuration.

  • Retire source servers after ensuring that the clients are no longer accessing the servers.

  • Complete the documentation of the environment and the project.

  • Educate the corporate IT team and hand over operation of the new, fully functional environment to them.

Dynamic Movement of Workloads Between Servers

There are two primary options for moving workloads: using Virtual Server 2005 to reallocate system resources from virtual server to virtual server on a single physical host, or moving virtual servers between nodes in a server cluster. The following sections describe both options.

Reallocating System Resources in Virtual Server

Virtual Server 2005 provides scripting support through Component Object Model (COM) technology. This enables you to run management commands on a schedule or in response to other events. For example, if you know that a large batch process runs on a SQL Server database once per week, you can use scripts and a scheduled task to automatically allocate more memory to the virtual machine during the batch process. After the process is completed, another scripted task could return the memory allocation to its previous state.
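
A minimal sketch of this approach is shown below, assuming the Virtual Server 2005 COM API (ProgID "VirtualServer.Application") is accessed from Python through the pywin32 package; verify the exact property and method names against the Virtual Server Programmer's Guide, and note that memory can generally be changed only while the virtual machine is stopped. An equivalent VBScript run by Task Scheduler, as described in the script repository referenced below, is the more typical choice.

```python
# Minimal sketch, assuming the Virtual Server 2005 COM API (ProgID
# "VirtualServer.Application") and the pywin32 package. Property and method
# names should be verified against the Virtual Server Programmer's Guide;
# memory can generally be changed only while the virtual machine is stopped.
import win32com.client

VM_NAME = "SQLBATCH01"     # hypothetical virtual machine name
BATCH_MEMORY_MB = 2048     # memory to allocate for the weekly batch window
NORMAL_MEMORY_MB = 1024    # memory to restore afterward

def set_vm_memory(vm_name, memory_mb):
    vs = win32com.client.Dispatch("VirtualServer.Application")
    vm = vs.FindVirtualMachine(vm_name)
    if vm is None:
        raise RuntimeError(f"virtual machine {vm_name!r} not found")
    vm.Memory = memory_mb      # allocated RAM in MB (virtual machine should be stopped)
    print(f"{vm_name}: memory set to {memory_mb} MB")

if __name__ == "__main__":
    # Scheduled before the batch window; a second scheduled task would call
    # set_vm_memory(VM_NAME, NORMAL_MEMORY_MB) after the batch completes.
    set_vm_memory(VM_NAME, BATCH_MEMORY_MB)
```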

For guidance on using scripts to manage Virtual Server, see Using Scripts to Manage Virtual Server, or go directly to the Virtual Server Script Repository in the Microsoft Script Center.

Virtual Server Host Clustering

Virtual Server host clustering is a way of combining two technologies—Virtual Server 2005 R2 and the server cluster feature in Windows Server 2003—so that you can consolidate servers onto one physical host server without causing that host server to become a single point of failure. To give an example, suppose you had two physical servers providing client services as follows:

  • Microsoft Windows Server 2003, Standard Edition, used as a Web server

  • Microsoft Windows NT® Server 4.0 with Service Pack 6a (SP6a), with a specialized application used in your organization

By using host clustering, you can consolidate these servers into one and, at the same time, maintain availability of services if that consolidated server fails or requires scheduled maintenance. To do this, you would run each service listed above as a virtual machine on a physical server. You would also configure the server as one node in a server cluster, meaning that a second server would be ready to support the virtual machines in the event of failover. If the first server failed or required scheduled maintenance, the second server would take over support of the services. The result is a dynamic movement of workloads with minimal to no impact on service availability in the event of a failure.

The following figure shows a simple Virtual Server host cluster:

Figure 8. Simple Virtual Server host cluster

It is important to understand that with Virtual Server host clustering, you are clustering the physical hosts, not the applications running on a physical host. Failure of a physical host would cause a second physical host to take over support of a guest, but failure of an application within a guest would not.

System Center Virtual Machine Manager

System Center Virtual Machine Manager (SCVMM) provides complete support for consolidating multiple physical servers within a virtual infrastructure, thereby helping to increase overall utilization of physical servers. System Center Virtual Machine Manager also enables administrators and authorized users to rapidly provision virtual machines. SCVMM features and benefits include:

  • Centralized deployment and management of virtual machines.

  • Intelligent placement analysis to determine the best servers for virtualization.

  • Quick physical-to-virtual and virtual-to-virtual conversion.

  • Ease of use with a familiar interface and seamless integration with other Microsoft products.

  • Faster deployments with administrator-managed self-service provisioning.

  • Resource efficiency with server consolidation and increased processor utilization.

  • Quick automation via PowerShell scripting integration.

For more information on System Center Virtual Machine Manager, visit https://www.microsoft.com/systemcenter/scvmm/default.mspx.

Topic Checkpoint

Tick each requirement that your organization has met:

  • Deployed a subset of production IT services or applications to virtual machines.

  • Actively managing and optimizing system resources on shared hardware devices.

If you have completed the steps listed above, your organization has met the minimum requirement of the Dynamic level for Virtualization to Dynamically Move Workloads from Server to Server in the Infrastructure Optimization Model. We recommend that you follow the guidance of additional best practice resources for server consolidation and virtualization, such as the Solution Accelerator for Consolidating and Migrating LOB Applications, or download the Virtual Server Host Clustering Step-by-Step Guide for Virtual Server 2005 R2.

Go to the next Self Assessment question.