Best Practices for Deploying Virtual Machines by Using Hyper-V Virtualization Technology
Technical Case Study
Published: February 2009
About 80 percent of server deployments in the Microsoft IT data centers are deployed as virtual servers via Windows Server® 2008 Hyper-V™ technology. To ensure optimal performance, Microsoft Information Technology (Microsoft IT) has developed configuration best practices, based on the application workloads or services that the virtual servers provide.
In response to limited space and power in the Microsoft IT data centers, Microsoft IT is deploying most new server builds as virtual servers. As a result, Microsoft IT has developed standards for building virtualization host computers to meet this demand, and it also requires standards for building virtual machines.
To ensure that virtual machines continue to provide services that meet the business requirements, Microsoft IT has developed standards and best practices for deploying virtual machines on Windows Server 2008 Hyper-V. Some of these standards and best practices apply to all virtual machines, while others are specific to the role or workload that is running on the virtual machine.
Microsoft IT has adopted a very aggressive approach to implementing virtualization on Windows Server 2008 Hyper-V. Virtualization has become a primary means by which Microsoft IT addresses data center power and space issues and rationalizes server utilization.
This case study describes best practices for configuring virtual machines that run on Windows Server 2008 Hyper-V across the 11,500 servers that Microsoft IT manages. It provides general guidelines for configuring virtual machines, and then focuses more specifically on configuring virtual machines running Microsoft® SQL Server® 2008 database software, Microsoft Exchange Server 2007, and Microsoft SharePoint® products and technologies.
As in most large enterprises, the number of servers that Microsoft IT deploys in its primary data centers has grown rapidly, while the utilization of many of those servers has remained low relative to hardware capabilities. To address this issue, Microsoft IT introduced the Compute Utility strategy and the RightSizing initiative to implement and promote server virtualization. Microsoft IT also has developed several standards and best practices for configuring virtualization host computers.
Compute Utility Strategy
Microsoft IT implemented the Compute Utility strategy to remove the concept of server ownership for business groups and replace it with a concept of purchasing computing capacity. With this strategy, Microsoft business groups define their computing capacity requirements for the applications that they need to run their business, and then Microsoft IT focuses on meeting the computing capacity requirements.
In most cases, Microsoft IT is deploying virtual machines to meet the business requirements. Only in exceptional cases will Microsoft IT deploy physical servers. The Compute Utility strategy looks to create a level of abstraction for the business group, which now purchases computing capacity and space on a storage area network (SAN) rather than a physical server.
The RightSizing initiative is one component of a broader Compute Utility strategy and has two goals. The first is to identify servers that are good virtualization candidates, and to encourage business groups to replace those physical servers with virtual machines. The second goal is to ensure that if a physical server is necessary to meet a business group's requirements, it is the correct size. The RightSizing initiative collects information on all of the physical servers running in the Microsoft IT data centers and identifies those servers that are virtualization candidates.
Microsoft IT has set very aggressive virtualization goals and has already deployed more than 3,500 virtual machines. By June 2009, Microsoft IT plans to have 50 percent of all server instances running on virtual machines. With Windows Server 2008 Hyper-V, the expectation is that at least 80 percent of new server orders will deploy as virtual machines.
Configuring Virtualization Hosts
Server virtualization has become a core component in managing the IT infrastructure at Microsoft. As Microsoft IT has worked with server virtualization over several years, it has developed the following best practices for configuring the host computers that run the virtual machines:
- Simplify and standardize the platform on which to deploy server virtualization. Microsoft IT has adopted a Server Core installation of Windows Server 2008 as its operating system for virtualization hosts. The virtualization hosts also are dedicated to this one task, so an administrator should install only the software and settings that this role requires.
- Automate and standardize administration of the virtual server environment. Microsoft IT is planning to deploy several thousand virtual servers over the next few years, and the only way to manage them efficiently is to standardize the deployment and then automate management tasks as much as possible. This includes:
- Standardize the host and virtual server configuration. For example, Microsoft IT strives to assign the same administrator groups and virtual network names to all host computers. Increasingly, Microsoft IT is using scripts and other automation to deploy and manage host computers and virtual machines. Automation makes it much easier to ensure that users adhere to server build standards.
- Implement remote management solutions. An important criterion that Microsoft IT uses when choosing hardware and software platforms is the support that the platform provider offers for remote management and automation.
- Implement Microsoft System Center Virtual Machine Manager. Virtual Machine Manager provides many tools for simplifying the required administrative tasks to manage a large virtualization environment. An organization can use Virtual Machine Manager to store server templates and to automate the virtual server deployment to the appropriate host computers.
Note: For detailed information about how Microsoft IT plans the server and physical infrastructure for virtualization, see the Microsoft IT Showcase case study "How Microsoft Designs the Virtualization Host and Network Infrastructure" at http://technet.microsoft.com/en-us/library/cc974012.aspx.
Microsoft IT has deployed virtual machines in the test, development, and production environments, and therefore has identified configuration settings and administrative processes that optimize performance and management of virtual machines. Microsoft IT has developed general guidelines that apply to all virtual machines deployed in the IT data centers, and specific guidelines and best practices based on server workload.
Designing Virtual Machines: General Guidelines
One important lesson that Microsoft IT has learned while developing its virtualization strategy is to simplify and standardize the host computer and virtual machine configuration as much as possible. In pursuit of this goal, Microsoft IT developed general guidelines that apply to all the virtual machines that it deploys.
Standard Operating System Configuration
Microsoft IT has developed general provisioning templates for several different deployment scenarios for virtual machines. In general, Microsoft IT configures each of the virtual machines with a 50-gigabyte (GB) system partition that is located on a SAN, and configures the disk as a dynamic disk. Microsoft IT dedicates the system partition to the operating system files, and provides additional disks to store data or install applications.
On the standard hardware platform, Microsoft IT performs a standard installation for each required operating system. After the installation is complete, Microsoft IT applies a standard Information Technology Service Pack (IPAK) process to all servers that the data center supports. The IPAK is a standard server configuration that includes required service updates for applications and operating systems, plus other Microsoft and third-party services or tools that are necessary to manage servers in an enterprise environment.
Planning Virtual Machines for Specific Server Roles
Although Microsoft IT configures most virtual machines with the same basic disk and operating system configuration, the actual physical requirements for each virtual machine will vary. For example, some virtual machines will require significantly more random access memory (RAM) or CPU resources than others. To design the actual physical requirements for a virtual machine, Microsoft IT uses the following guidelines:
- Configure each virtual server with a hardware configuration that is very similar to a physical server's hardware requirements. The fact that Microsoft IT virtualizes a server does not change the hardware resources that the server requires.
- Use data from the RightSizing initiative to plan server hardware. In the RightSizing initiative, Microsoft accumulated significant performance data regarding how specific applications perform on physical servers. If an application used a very low percentage of the hardware resources on a physical server, Microsoft IT will deploy a virtual server with significantly less capacity to run the same application.
- Require customers to justify additional hardware resources. If one of the business units requests hardware that is not consistent with the available performance data, the customer must justify the hardware requirement. With the number of virtual servers that Microsoft IT has deployed, the team has very good information on the hardware requirements for a wide variety of application and server deployments.
- Deploy Windows Server 2008–based virtual machines whenever possible. Windows Server 2008 supports synthetic devices on the virtual machine, which provide better network and storage performance than Windows Server 2003. Additionally, you can configure Windows Server 2008–based virtual machines with four processors, rather than the two-processor limit for Windows Server 2003. In many cases, application compatibility and testing issues still require Microsoft IT to deploy Windows Server 2003, but Windows Server 2008 is always the preferred operating system.
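The RightSizing guidance above can be sketched as a small sizing helper. This is an illustrative sketch only: the 1.5x headroom factor and the return shape are assumptions, not Microsoft IT values; the four-processor cap reflects the Hyper-V guest limit discussed in this document.

```python
def rightsize_vm(physical_cpus, avg_cpu_util, physical_ram_gb, avg_ram_util):
    """Suggest virtual machine hardware from measured utilization of the
    physical server it replaces (thresholds here are hypothetical)."""
    # Size to observed demand plus headroom, not to the old chassis.
    needed_cpus = max(1, round(physical_cpus * avg_cpu_util * 1.5))
    needed_ram = max(1, round(physical_ram_gb * avg_ram_util * 1.5))
    # Hyper-V (2008) caps a guest at four virtual processors.
    return {"vcpus": min(needed_cpus, 4), "ram_gb": needed_ram}

# A physical server with 8 cores averaging 10% CPU and 16 GB RAM at 25% use
print(rightsize_vm(8, 0.10, 16, 0.25))  # → {'vcpus': 1, 'ram_gb': 6}
```

The point of the sketch is the direction of the calculation: the virtual machine is sized from observed utilization data, not from the hardware the application happened to run on.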
Choosing Server Workloads for Virtualization
Microsoft IT has deployed virtual machines running almost all server roles and workloads. Currently, the default deployment option for all servers is a virtual machine. However, there are several scenarios where Microsoft IT still deploys physical machines:
- Hardware requirements. In some cases, a server workload may require hardware resources that make it impractical to deploy the workload on a virtual machine. For example, if the server workload requires more than four processors to provide adequate performance, the server cannot be virtualized. Additionally, if the server workload requires more than half of the hardware resources that are available on a virtualization host, Microsoft IT does not virtualize the server because there is no server consolidation benefit. As Microsoft IT deploys more powerful virtualization hosts, the number of server workloads that it cannot virtualize due to hardware requirements is expected to decrease.
- Server workloads that provide native consolidation. The goal of the RightSizing initiative is to ensure that all servers, whether physical or virtual, are utilized adequately. Some server roles, such as SQL Server or Exchange Server 2007 Mailbox servers, can be fully utilized by deploying additional SQL Server instances or moving more mailboxes onto the server. In some cases, server workloads can be virtualized in one scenario but not in another. For example, the Redmond domain contains about 70,000 users. Microsoft IT has not virtualized any domain controllers in the Redmond domain, instead deploying fewer, more powerful servers as domain controllers. In a smaller domain, or in a branch office deployment, Microsoft IT deploys domain controllers as virtual machines.
- Unique hardware requirements. A small number of servers have specific hardware requirements that make virtualizing the server impossible. For example, some servers require a hardware security module, a specialized storage connector, or a connection to a voice network.
- Legal or regulatory requirements. Legal or regulatory requirements have not prevented Microsoft IT from virtualizing or consolidating a specific server role, but it is prepared to deploy servers as physical servers if required.
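The screening rules above can be expressed as a simple check. The thresholds (four processors, half of a host's resources, unique hardware) come from the text; the function itself is a hypothetical sketch, not Microsoft IT tooling.

```python
def should_virtualize(workload_cpus, workload_share_of_host,
                      needs_special_hardware=False):
    """Screen a server workload as a virtualization candidate using the
    case study's rules (function shape is illustrative)."""
    if needs_special_hardware:        # HSM, specialized storage, voice network
        return False
    if workload_cpus > 4:             # exceeds the Hyper-V guest CPU limit
        return False
    if workload_share_of_host > 0.5:  # no server consolidation benefit
        return False
    return True

print(should_virtualize(2, 0.2))  # → True
print(should_virtualize(8, 0.2))  # → False, needs more than four processors
```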
Designing Virtual Machines for SQL Server
One of the server workloads that Microsoft IT is virtualizing is SQL Server. Other groups at Microsoft also have done extensive evaluation and testing of running SQL Server on a virtual machine.
Note: For detailed information about some of the testing Microsoft has done with virtualizing SQL Server and the recommendations that result from the tests, see the white paper "Running SQL Server 2008 in a Hyper-V Environment - Best Practices and Performance Recommendations" at http://sqlcat.com/whitepapers/archive/2008/10/03/running-sql-server-2008-in-a-hyper-v-environment-best-practices-and-performance-recommendations.aspx. This white paper also includes an interesting discussion on the relative benefits of implementing additional SQL Server instances or using virtual machines for server consolidation.
Microsoft has developed the following general recommendations for configuring virtual machines that run SQL Server:
- Ensure that the Hyper-V integration components are installed on the guest virtual machine. Additionally, use a synthetic rather than legacy network adapter when configuring networking for the virtual machine. Both of these options provide enhanced performance for the virtual machines.
- Plan to configure the hardware settings for the virtual machines to match the hardware settings that you would configure on a physical server for the same workload.
Additionally, Microsoft has developed standards and recommendations for configuring specific components on SQL Server–based servers.
One of the most critical components to ensure optimal performance for any SQL Server instance is to ensure that the storage system is the correct size and configuration. The storage hardware should provide sufficient input/output (I/O) throughput and storage capacity to meet the current and future needs of the planned virtual machines. Additionally, you should follow the recommended best practices for configuring disks for transaction logs and database storage.
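As a rough illustration of this sizing exercise, a planner might compare required I/O throughput and capacity against what a planned LUN delivers. Everything here is hypothetical: the 1.3x growth margin, the parameter names, and the sample numbers are not from the case study.

```python
def storage_ok(required_iops, required_gb, lun_iops, lun_gb, growth_factor=1.3):
    """Check that a planned LUN covers current needs plus a growth margin
    (growth_factor is a hypothetical planning value)."""
    return (lun_iops >= required_iops * growth_factor and
            lun_gb >= required_gb * growth_factor)

# A workload needing 1,200 IOPS and 300 GB on a 2,000 IOPS / 500 GB LUN
print(storage_ok(1200, 300, 2000, 500))  # → True
```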
You can configure Hyper-V virtual machines to use dynamically expanding virtual disks, fixed virtual disks, or pass-through disks:
- Pass-through disks. A pass-through disk uses a physical disk partition to store data rather than using a .vhd file. This means that a pass-through disk bypasses the NTFS file system layer in the parent partition, and the virtual machine accesses the hard disk directly. Microsoft IT has found that pass-through disks provide the best performance for SQL Server–based servers that are used heavily.
- Fixed virtual hard disks. A fixed virtual disk provides storage by using a .vhd file. The size of the .vhd file is specified when the hard disk is created, and that amount of space is reserved on the physical hard disk drive. The size of the .vhd file stays the same size regardless of the amount of data stored. Microsoft IT has found that fixed virtual hard disks provide performance that is only slightly lower than pass-through disks. One benefit of using .vhd files is that it makes the virtual machine more portable so that an administrator can move the virtual disk from one virtualization host to another.
- Dynamic virtual hard disks. Dynamic virtual disks provide storage capacity as needed to store data. The size of the .vhd file is small when the disk is created, and then grows as data is added to it. When you first write to a block, the virtualization stack must allocate space within the .vhd file for the block and then update the metadata. Additionally, every time an existing block is referenced, the block mapping must be looked up in the metadata. This increases both the number of disk I/Os for read and write activities and CPU usage. Because of these performance limitations, Microsoft IT does not recommend dynamic virtual disks for storage on SQL Server–based servers.
You can configure Hyper-V virtual machines to use both integrated device electronics (IDE) and small computer system interface (SCSI) hard disk controllers. Hyper-V virtual machines must use an IDE controller for the boot and system partitions, and Microsoft IT recommends using synthetic SCSI controllers for the disks containing SQL Server databases and logs.
A second critical component in planning SQL Server performance is the CPUs. Microsoft IT has found that it can achieve the same throughput on a virtual machine as it can on physical hardware, with only slightly increased CPU utilization. When designing a virtualization host that will run multiple SQL Server virtual machines, you should ensure that the host's cumulative physical CPU resources are adequate to meet the needs of all the guest virtual machines. Just like scenarios where you are deploying multiple SQL Server instances on a physical server, the only way to guarantee adequate performance is to test the deployment thoroughly. Microsoft has found the following CPU-based limitations when running SQL Server on a virtual machine:
- You can assign up to four CPU cores to a virtual machine when using Hyper-V. Because of this limitation, run SQL Server on Hyper-V guest virtual machines only when four or fewer CPUs are sufficient to meet the workload's performance requirements.
- Do not over-commit CPU resources. Over-commitment occurs when the total number of logical CPU cores configured across all guest virtual machines exceeds the actual number of physical CPU cores that the server has available. Microsoft has found that over-committing CPU cores can significantly affect server performance when all the virtual machines are heavily loaded.
- Networking-intensive workloads will result in higher CPU overhead and more performance impact on a virtual machine.
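The over-commit rule above reduces to a ratio check. A minimal sketch, assuming the function and its inputs as illustrative placeholders rather than Microsoft IT tooling:

```python
def cpu_overcommit_ratio(guest_vcpu_counts, physical_cores):
    """Ratio of configured virtual processors to physical cores on a host.
    A ratio above 1.0 means the host is over-committed."""
    return sum(guest_vcpu_counts) / physical_cores

# Four SQL Server guests with 4, 4, 2, and 2 virtual processors on a 16-core host
print(cpu_overcommit_ratio([4, 4, 2, 2], 16))  # → 0.75, within the physical core count
```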
To size the virtual hardware for SQL Server properly, you will need to monitor the server as you add virtual machines. Monitoring servers in a virtual environment is different from monitoring servers running on physical hardware:
- When monitoring CPU utilization on a server running Hyper-V, you should use the Hyper-V Processor counters exposed on the root partition. Hyper-V exposes three primary counters that relate to CPU utilization:
- Hyper-V Hypervisor Logical Processor: Provides the most accurate measure of the total CPU resources that the entire physical server consumes.
- Hyper-V Hypervisor Root Virtual Processor: Provides the most accurate measure of CPU resources that the root partition consumes.
- Hyper-V Hypervisor Virtual Processor: Provides the most accurate measure of CPU consumption for specific guest virtual machines.
- Measuring I/O performance is different depending on the guest storage configuration:
- Use either the logical or physical disk counters on the guest virtual machine to monitor I/O performance.
- If the guest virtual machine storage is configured as pass-through, the disk will be offline at the root partition level and will not appear under the logical disk counters within the root partition. To monitor performance of pass-through disks at the root partition, use the physical disk counters.
- When guest virtual machines are configured to use .vhd files for storage and those files reside on common physical disks, monitoring the disk counters from the guest virtual machine will provide details about I/O for the specific virtual hard disk. Monitoring the volume that contains all of the .vhd files at the root partition will provide aggregate values for all I/O issued against the disk or volume.
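The monitoring guidance above can be summarized as a small lookup. The counter names come from the text; the function shape and the "passthrough"/"vhd" labels are illustrative assumptions.

```python
def disk_counters_to_watch(storage_type):
    """Map a guest storage configuration to the perfmon counters to use,
    following the monitoring guidance in the text."""
    if storage_type == "passthrough":
        # The disk is offline in the root partition, so only the physical
        # disk counters are meaningful there.
        return {"guest": "LogicalDisk or PhysicalDisk", "root": "PhysicalDisk"}
    if storage_type == "vhd":
        # Guest counters show per-VHD I/O; the root volume counters show
        # aggregate I/O for all .vhd files on that volume.
        return {"guest": "LogicalDisk or PhysicalDisk",
                "root": "LogicalDisk (volume holding the .vhd files)"}
    raise ValueError(f"unknown storage type: {storage_type}")

print(disk_counters_to_watch("passthrough")["root"])  # → PhysicalDisk
```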
Note: Microsoft IT is developing a SQL Server Utility that builds on the Compute Utility and uses Hyper-V as the virtualization platform. For information about this initiative, see the article "Green IT in Practice: SQL Server Consolidation in Microsoft IT" in The Architecture Journal Issue 18 at http://www.msarchitecturejournal.com/pdf/Journal18.pdf.
Designing Virtual Machines for Exchange Server
You can use a virtualization environment to run all Exchange Server 2007 server roles, except for the Unified Messaging role. Microsoft IT has just started virtualizing Exchange Server 2007 servers, but has been testing virtualization with Exchange Server 2007 servers in a test and User Acceptance Testing (UAT) environment. Microsoft IT has deployed at least one server running each of the Exchange Server 2007 server roles in production, including one mailbox server that hosts 1,500 mailboxes. Microsoft IT configured all of the Exchange Server virtual machines, except for the mailbox server, as highly available virtual machines by using Windows Server 2008 failover clustering.
Microsoft IT has developed the following guidelines based on its experience with virtualizing Exchange servers:
- Server sizing standards. Running Exchange Server 2007 Service Pack 1 (SP1) on a guest virtual machine does not change the Exchange Server design requirements from an application perspective. The Exchange Server guest virtual machine still must be the appropriate size to handle the workload. You must design mailbox, Client Access, and transport server roles for performance, capacity, and reliability. Additionally, you must allocate resources that are sufficient to handle the system load, based on the system's usage profiles.
- Storage. The storage that the Exchange Server guest machine uses can be fixed virtual hard disk drives, SCSI pass-through storage, or Internet SCSI (iSCSI) storage. As with SQL Server–based servers, pass-through storage provides the best performance. Microsoft IT does not support dynamically expanding virtual disks or differencing disks for Exchange servers.
The operating system for an Exchange guest machine must use a fixed sized disk that has a minimum size equal to 15 GB, plus the size of the virtual memory that is allocated to the guest machine. This requirement is necessary to account for the operating system and paging file disk requirements. For example, if the guest machine is allocated 16 GB of memory, the minimum disk space needed for the guest operating system disk is 31 GB.
Separate logical unit numbers (LUNs) on redundant array of independent disks (RAID) arrays should be used for the host operating system, each guest operating system disk, and all virtual machine storage. As with physical servers, you should create separate LUNs for each database and set of transaction log files.
- Snapshots. Hyper-V supports snapshots of virtual machines that capture a virtual machine's state while it is running. This feature enables an administrator to take multiple snapshots of a virtual machine and then revert the virtual machine to any of the previous snapshots. However, virtual machine snapshots are not application-aware, and using them can have unintended and unexpected consequences for a server application that maintains state data, such as Exchange Server. Therefore, Microsoft IT does not support making virtual machine snapshots of an Exchange guest virtual machine.
- CPU configuration. Hyper-V enables you to specify the number of virtual processors to allocate to each guest virtual machine, with a maximum of four virtual processors. Exchange supports a ratio of virtual processors to logical processors no greater than 2:1. For example, a dual-processor system that uses quad-core processors contains eight logical processors in the host system. On a system with this configuration, do not allocate more than 16 virtual processors to all guest virtual machines combined. If the CPUs for all virtual machine instances are heavily utilized, over-committing the CPUs will significantly affect performance. In these scenarios, do not assign more virtual processors to virtual machines than the number of processor cores on the host computer.
- High availability for Exchange servers. Exchange Server 2007 provides several options for high availability. For server roles such as Client Access servers, Hub Transport servers, and Edge Transport servers, an administrator can deploy multiple servers for each role to ensure that the server role is available if a single server failure occurs. For Mailbox servers, Exchange Server 2007 provides several Exchange clustering solutions, such as cluster continuous replication (CCR) and single copy clusters (SCC). These solutions provide various options for automatic failover if a server failure occurs.
With Hyper-V, an administrator can make virtual machines highly available by deploying them in a failover cluster. You can use failover clustering to make virtual machines running the Client Access server role, the Hub Transport server role, and the Edge Transport server role highly available.
You cannot combine the Mailbox server high-availability options with failover clustering. An administrator can deploy a Mailbox server on a virtual machine and configure it to use CCR or SCC; however, you cannot then configure that virtual machine as a highly available virtual machine in a failover cluster.
This means that you need to choose between Hyper-V failover clustering and one of the mailbox server clustering options when enabling high availability for Mailbox servers. As a general rule, Microsoft IT recommends the Exchange clustering options because, depending on which option you choose, you can take advantage of features such as a second copy of the mailbox database and application-aware clustering. Failover clustering cannot detect a problem within Exchange itself; it detects failures only at the virtual machine operating system level.
- Mailbox server performance. The most common performance bottlenecks for Mailbox servers are disk I/O and network I/O. Running mailbox servers in a virtual environment means that the virtual machines have to share this I/O bandwidth with the host machine and with other virtual machine servers deployed on the same host. If a single virtual machine is running on the physical server, the disk I/O and network I/O available to the virtual machine are almost equivalent to the I/O available to a physical server. However, a heavily utilized mailbox server can consume all of the available I/O bandwidth, which will make it impractical to host additional virtual machines on the physical server.
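Two of the Exchange sizing rules above are simple arithmetic and can be sketched directly. The 15 GB base and the 2:1 processor ratio come from the text; the function names and shapes are illustrative.

```python
def exchange_guest_os_disk_gb(guest_memory_gb, base_gb=15):
    """Minimum fixed-disk size for the Exchange guest operating system disk:
    15 GB base plus the memory allocated to the guest (paging file headroom)."""
    return base_gb + guest_memory_gb

def max_total_vcpus(sockets, cores_per_socket, ratio=2):
    """Upper bound on virtual processors across all Exchange guests on a host,
    applying the supported 2:1 virtual-to-logical processor ratio."""
    return sockets * cores_per_socket * ratio

# The worked examples from the text:
print(exchange_guest_os_disk_gb(16))  # → 31, for a guest with 16 GB of memory
print(max_total_vcpus(2, 4))          # → 16, for a dual-socket quad-core host
```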
Designing Virtual Machines for SharePoint
One of the most common options for using virtual machines at Microsoft is for deploying Windows® SharePoint Services and Microsoft Office SharePoint Server 2007.
Note: For detailed information about some of the testing Microsoft has done with virtualizing Office SharePoint Server 2007 and the recommendations that come out of the tests, see the article "Performance and Capacity Requirements for Hyper-V" at http://technet.microsoft.com/en-us/library/dd277865.aspx.
Microsoft has developed the following recommendations for deploying Windows SharePoint Services or Office SharePoint Server on a virtual machine:
- Ensure that each virtual machine is configured with the same hardware capacity that a physical server would require. Additionally, consider the overhead performance on the host computer for each virtual machine.
- Do not take snapshots of virtual servers that connect to a SharePoint server farm. If you do, the timer services and the search applications might become unsynchronized during the snapshot process. To take server snapshots, first detach the server from the farm.
- Avoid over-committing the number of virtual CPUs. Although Hyper-V will allow you to allocate more virtual CPUs than the number of physical CPUs, this causes performance issues because the hypervisor software has to swap out CPU contexts. This is an issue only if the virtual machines are utilized heavily.
- Consider deploying all of the servers in a server farm on a single physical server and then taking advantage of the performance benefits of virtual networks. If you configure the virtual machines to use a private or internal network, you can remove the communication between members in the server farm from the physical network card, so communications are faster and network congestion is minimized. You can take advantage of this network performance gain by creating an external network for the Web front-end servers, and by creating a private or internal network for the application and SQL Server database servers.
- Ensure that you assign adequate memory to each virtual machine. Microsoft has found that inadequate memory will have the greatest impact on server performance. Because the amount of memory required depends on the server workload, you will need to test and optimize memory configuration for each scenario.
- Use Internet Protocol Version 4 (IPv4) as the network protocol for virtual machines. Microsoft has determined that virtual machines provide better performance when IPv4 is used exclusively. You should disable Internet Protocol Version 6 (IPv6) on each network card for both the Hyper-V host and its guest virtual machines.
- Choose the right storage implementation. If you are running only front-end Web servers or query servers on virtual machines, the disk performance is not as important as it would be if the image were hosting the Index role or a SQL Server database. If the image hosts the Index role, you should use a fixed-size virtual hard disk or a pass-through disk.
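The virtual network guidance above can be condensed into a small lookup. The role labels and the function are illustrative only; the external versus private/internal split comes from the text.

```python
def sharepoint_vm_network(role):
    """Suggested Hyper-V virtual network type per SharePoint farm role,
    following the guidance above (role labels are illustrative)."""
    # Web front-ends must be reachable by clients on the physical network;
    # application and database traffic can stay on the host via a private
    # or internal virtual network to reduce physical network congestion.
    return "external" if role == "web-front-end" else "private or internal"

print(sharepoint_vm_network("web-front-end"))  # → external
print(sharepoint_vm_network("sql"))            # → private or internal
```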
Over the past several years, Microsoft IT has deployed more than 3,500 virtual machines running a wide variety of server roles and workloads. During that time, Microsoft IT has developed a number of best practices:
- Apply the best practices for optimizing performance for Hyper-V virtual machines. Regardless of the server workload that you deploy on a virtual machine, there is a consistent set of recommendations that you should apply to all virtual machines:
- Use Windows Server 2008 as the guest operating system whenever possible.
- Install the virtual machine integration components.
- Avoid over-committing CPU processor cores on heavily utilized virtual machines.
- Use pass-through disks or fixed virtual disks attached to SCSI controllers for best performance.
- Provision and manage virtual machines just like physical machines. In almost all cases, virtual machines require the same hardware resources as are required to run the server workload on a physical machine. The benefit of deploying virtual machines is that you can deploy the right level of hardware easily. For example, if the server workload is using only a fraction of a physical computer's resources, you can assign less hardware to the virtual machine without affecting server performance.
- Avoid over-provisioning the physical hosting environment as you start deploying virtual machines. When you design the physical hosting environment, ensure that you allow some excess capacity in case you need to add more capacity to some of the virtual machines. It is much easier to increase the hardware available to a virtual machine if you have extra resources available.
- Do not assume that you can virtualize all server roles and workloads. For each new server workload that you consider virtualizing, make sure that you understand all performance characteristics of the workload. In some cases, it may make more sense to deploy a physical server and use the application consolidation options rather than deploying the workload on a virtual machine.
- Test and monitor all server workloads. If you decide to deploy a new server workload on a virtual machine, ensure that you test the workload before deploying the virtual machines in a production environment. As you deploy the server, monitor the server performance to identify bottlenecks as soon as possible.
- Standardize and automate the configuration and deployment of the majority of virtual machines. As Microsoft IT solidifies the hardware and software requirements for a variety of server workloads, it is working on developing System Center Virtual Machine Manager templates to deploy the virtual machines. Microsoft IT wants to develop templates and automated processes to deploy 80 percent of all virtual machine requests without any direct administrator involvement.
- Carefully consider both application features and Hyper-V functionality when designing virtual machine deployments. For example, both SQL Server and Exchange Server 2007 provide built-in functionality to enable high availability. Hyper-V also provides high-availability options. When choosing between the two options, ensure that you understand the benefits and disadvantages of each option.
Microsoft IT is realizing very significant cost savings from server virtualization. By reducing the physical servers in IT data centers, and deploying the server roles and workloads as virtual machines, Microsoft can save millions of dollars in hardware purchases and power consumption. Additionally, by freeing space in the current IT data centers, Microsoft may avoid the cost of building a new data center.
To gain maximum benefit from deploying virtual machines, Microsoft IT has developed numerous standards and best practices, which provide several benefits:
- By consistently monitoring all servers, both physical and virtual, in the Microsoft IT data centers, Microsoft IT has developed a detailed body of knowledge about what server workloads you can virtualize and how to configure the virtual machines.
- By standardizing the virtual machine build process and identifying standard templates for each of the server roles and workloads, Microsoft IT is moving toward automating the processes for deploying and managing virtual machines.
- By understanding the unique requirements for each server role or workload, Microsoft IT has been able to virtualize a very large percentage of server deployments while still providing the performance and service levels that Microsoft business users require.
In the past several years, Microsoft IT has replaced physical servers in IT data centers by deploying hundreds of virtual machines. During this process, Microsoft IT developed best practices for configuring and managing the virtual machines. By clearly understanding the requirements for each server role and workload that it is virtualizing, Microsoft IT has deployed a virtualization infrastructure that provides a high service level to the Microsoft business units.
For More Information
For more information about Microsoft products or services, call the Microsoft Sales Information Center at (800) 426-9400. In Canada, call the Microsoft Canada Information Centre at (800) 563-9048. Outside the 50 United States and Canada, please contact your local Microsoft subsidiary. To access information via the World Wide Web, go to:
© 2009 Microsoft Corporation. All rights reserved.
This document is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY. Microsoft, Hyper-V, SharePoint, SQL Server, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. The names of actual companies and products mentioned herein may be the trademarks of their respective owners.