
How Microsoft Designs the Virtualization Host and Network Infrastructure

Technical Case Study

Published: January 2009

At Microsoft, server virtualization has become a primary way to address data-center power consumption, to address space issues, and to rationalize server utilization. To optimize deployment and management of thousands of virtual machines, Microsoft Information Technology (Microsoft IT) has developed standards and best practices for configuring host servers, storage, and network infrastructure.

Download

Download Technical Case Study, 268 KB, Microsoft Word file

Download IT Pro Webcast

Situation

Microsoft has deployed hundreds of virtual servers to reduce the number of physical servers deployed in its data centers and to ensure that all physical servers are utilized fully. Server virtualization is a core component of the IT strategy at Microsoft.

Solution

To derive the maximum benefit from server virtualization, Microsoft IT developed standards and best practices for deploying and configuring virtualization hosts. These standards address hardware configurations, operating-system configurations, and network and storage design.

Benefits

  • Standardizing the virtualization hardware platform simplifies the processes for deploying and managing virtualization hosts.
  • Using a Server Core installation of Windows Server 2008 enables maximum server uptime.
  • Monitoring and evaluating virtualization deployment constantly enables Microsoft IT to identify the best solutions to meet business requirements.
  • Microsoft IT already is working on implementing the new features in Windows Server 2008 R2 to maximize server-virtualization benefits.

Products & Technologies

  • Server Core installation of Windows Server 2008
  • Windows Server 2008 Hyper-V
  • System Center Virtual Machine Manager

Microsoft IT has adopted an aggressive approach to implementing virtualization on the Windows Server® 2008 operating system with Hyper-V technology.

This case study describes the standards and best practices that Microsoft IT developed for configuring host computers, storage, and network infrastructure. This case study provides IT professionals and technical decision makers working in large enterprises with the information they need to plan and deploy server virtualization by using Windows Server 2008 Hyper-V.

Note: This case study assumes that readers have a good understanding of server virtualization and Windows Server 2008 Hyper-V. Readers who do not have this prerequisite knowledge should refer to Virtualization with Hyper-V, located at http://www.microsoft.com/windowsserver2008/en/us/hyperv.aspx.

Situation

Like most large enterprises, Microsoft has seen the number of servers deployed in its primary data centers grow rapidly, while the utilization of many of those servers has remained low in relation to the hardware capabilities. As early as September 2004, Microsoft IT calculated that the average CPU utilization for servers in data centers and managed lab environments was less than 10 percent and was continuing to decrease.

By May 2007, the Redmond data centers were at capacity. To address this issue, Microsoft IT has implemented and promoted server virtualization aggressively by introducing the Compute Utility strategy and the RightSizing initiative. Microsoft IT also sees virtualization as a key component in developing a dynamic IT infrastructure.

Compute Utility Strategy

Microsoft IT implemented the Compute Utility strategy to remove the concept of server ownership for business groups and replace it with a concept of purchasing computing capacity. With this strategy, Microsoft business groups define their computing capacity requirements for the applications that they need to run their business, and then Microsoft IT focuses on meeting the computing capacity requirements.

If a physical server is the only way that a group can meet business requirements, Microsoft IT provides one. However, in most cases, a virtual machine more than meets business requirements. The Compute Utility strategy creates a level of abstraction for the business group, which now purchases computing capacity and space on a storage area network rather than a physical server.

RightSizing Initiative

The RightSizing initiative is one component of a broader Compute Utility strategy and has two goals. The first goal is to identify servers that would make good candidates for virtualization, and to encourage business groups to replace those physical servers with virtual machines. The second goal is to ensure that if a physical server is necessary to meet business group requirements, it is sized appropriately.

The RightSizing initiative collects information on all of the physical servers running in the Microsoft data centers and identifies those servers that are virtualization candidates. The initiative has a scorecard system for tracking and promoting server virtualization.

The Compute Utility strategy and RightSizing initiative have promoted the virtualization concept at Microsoft. Additionally, upper management has taken a very aggressive approach toward virtualization, essentially dictating that all server deployments use virtual machines and that physical servers be deployed only in exceptional circumstances.

The virtualization goals are set very high for Microsoft IT, which has deployed more than 3,500 virtual machines. By June 2009, Microsoft IT plans to have 50 percent of all server instances running on virtual machines. With Windows Server 2008 Hyper-V, the expectation is that at least 80 percent of new server orders will be deployed as virtual machines. 

Dynamic IT

As Microsoft IT looks forward, virtualization also is a key component in reaching its next goal, which is a dynamic IT infrastructure. Dynamic IT refers to a fully automated IT environment that includes dynamic and optimized resource utilization. Virtualization helps to enable dynamic IT by:

  • Enabling rapid movement of virtual components across hardware platforms.
  • Enabling rapid provisioning of new servers and applications.
  • Enabling automated, policy-based management.

Solution

As Microsoft IT has deployed virtual servers on Microsoft® Virtual Server 2005 and Hyper-V, it has developed several standards and best practices related to designing the server, network, and storage infrastructure for the virtualization hosts.

Host Server Configuration

Microsoft IT has explored several options for configuring virtualization host servers, and has established different hardware standards for the various server deployment scenarios.

As Microsoft IT developed standards for which physical machines to virtualize, it identified many lab and development servers with very low utilization and availability requirements. Because of the lower expectations, Microsoft IT now deploys the lab and development virtualization hosts with four processor sockets, 16 to 24 processor cores, and up to 64 gigabytes (GB) of random access memory (RAM). These servers can host a large number of virtual machines, averaging 10.4 virtual machines per host.

As Microsoft IT developed its expertise in deploying virtual machines, and especially with the performance improvements available with Windows Server 2008 Hyper-V, it has increasingly moved toward virtualization of production servers. Although many production servers still have low utilization, some have significantly higher performance requirements than the lab and development computers. For the production-server deployments, Microsoft IT is using servers with two processor sockets, 8 to 12 processor cores, and 32 GB of RAM.

Although it may seem counterintuitive to deploy virtual servers with higher performance requirements on host computers that have less capacity, Microsoft IT has found this to be most effective. As the virtual server requirements increase, each virtual server requires a larger portion of the host computer's CPU and memory resources, and more disk and network bandwidth. By deploying a server with less CPU and memory capacity, and hosting fewer virtual machines on each physical server, each virtual machine can have greater access to the shared disk and network bandwidth. On average, the host servers with eight processor cores and 32 GB of RAM host 5.7 virtual machines in the production environment.
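As a rough planning sketch, the consolidation ratios above can be turned into a capacity estimate. The densities (10.4 and 5.7 virtual machines per host) come from this case study; the 3,500-machine total is the deployment size it cites, and the lab/production split in the example is hypothetical:

```python
import math

LAB_VMS_PER_HOST = 10.4   # reported average for lab and development hosts
PROD_VMS_PER_HOST = 5.7   # reported average for production hosts

def hosts_needed(total_vms: int, vms_per_host: float) -> int:
    """Physical hosts needed to run total_vms at a given average density."""
    # Round up: a partially filled host is still a whole host.
    return math.ceil(total_vms / vms_per_host)

# Hypothetical split of the roughly 3,500 deployed virtual machines:
lab_hosts = hosts_needed(2100, LAB_VMS_PER_HOST)    # 202 hosts
prod_hosts = hosts_needed(1400, PROD_VMS_PER_HOST)  # 246 hosts
print(lab_hosts, prod_hosts)
```

The same helper works for any workload mix; only the measured density per host class changes.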

As the next step, Microsoft IT is moving toward implementation of blade servers as the virtualization hosts. The blade servers, similar in hardware configuration to the current production servers, are deployed in large groups of blade enclosures with attached storage in the production data centers.

Note: For more information about the planned deployment of blade servers, refer to the "Future Directions" section.

Storage Configuration

Just as the Microsoft server-hardware standard for virtual machine hosts has changed over time, so has the storage configuration for the virtual machine hosts and virtual machines.

All virtual machine hosts currently are configured to start from direct-attached storage (DAS). As Microsoft IT considers deploying blade servers, the goal is to enable the blade servers to start from the storage area network (SAN).

Microsoft IT configures all virtual machine hosts to use a SAN to store the virtual machine configuration and hard disk files. The host computers connect to the SAN by using dual-path Fibre Channel host bus adapters (HBAs). For production virtual servers, the SAN storage uses redundant array of independent disks (RAID) 0+1, whereas RAID 5 is used for lab and development virtual machine storage. Microsoft IT has chosen the RAID 0+1 configuration for the production servers because it provides better performance, although it consumes more disks. Performance is not as critical in the lab environment, so Microsoft IT uses RAID 5 there because it requires fewer disks to store the virtual machines.
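The disk-consumption tradeoff between the two RAID levels can be illustrated with a quick calculation; the eight-spindle, 146-GB configuration below is a hypothetical example, not a stated Microsoft IT standard:

```python
def raid01_usable(disks: int, disk_gb: float) -> float:
    """RAID 0+1 mirrors a striped set, so half the raw capacity is usable."""
    return disks // 2 * disk_gb

def raid5_usable(disks: int, disk_gb: float) -> float:
    """RAID 5 spreads one disk's worth of parity across the set."""
    return (disks - 1) * disk_gb

# Hypothetical set of 8 x 146-GB spindles:
print(raid01_usable(8, 146))  # 584.0 GB usable, better write performance
print(raid5_usable(8, 146))   # 1022.0 GB usable, fewer disks spent on redundancy
```

The gap explains the policy: RAID 0+1 buys production write performance at roughly twice the disk cost per usable gigabyte.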

When Microsoft IT first deployed server virtualization, the goal was to use a shared storage model for the virtual machines. During the first iteration, Microsoft IT would create one or two large logical unit numbers (LUNs) of 100-plus GB on the SAN for each host computer and then deploy multiple virtual machines per LUN. In a typical scenario, Microsoft IT gave the customer a 50-GB drive C and a 20-GB drive D. Because both drives were dynamic virtual disks, the actual space used on the LUN was much less than the maximum size.

However, over time, the dynamic disks grew as the customers stored data on the virtual servers, and just two or three virtual machines could fill an entire LUN. This became a significant management issue for Microsoft IT, which had to track all LUNs for space availability and then move virtual machines before all space was utilized.

To address this issue and to enable failover clustering for the virtual machines, Microsoft IT next adopted a model of configuring just a single virtual machine per LUN. With this model, a LUN with 30 to 50 GB was dedicated to each virtual machine, with the option to give the virtual machines more space as required.

Microsoft IT has avoided using disk mount points, so the limiting factor for the number of virtual machines deployed on a host became the number of available drive letters on the host computers. In most cases, this meant not deploying more than 23 virtual machines on a host.
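The 23-machine ceiling follows directly from the drive-letter alphabet; the sketch below makes the arithmetic explicit. The assumption that A, B, and C are unavailable reflects standard Windows conventions (A and B reserved for floppy drives, C holding the host's system volume):

```python
import string

# Without disk mount points, each virtual machine's LUN consumes one host
# drive letter. A and B are conventionally reserved, and C is the host's
# system volume, which leaves D through Z for virtual machine LUNs.
reserved = {"A", "B", "C"}
available = [letter for letter in string.ascii_uppercase if letter not in reserved]
print(len(available))  # 23, the practical per-host virtual machine ceiling
```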

The primary limitation with this model was that the storage frames that Microsoft used could support only a limited number of LUNs. This meant that the supply of LUNs could be exhausted while a large amount of SAN storage capacity was still available. To address this issue, Microsoft IT went back to deploying more guests per LUN.

Yet another storage-configuration evolution has occurred, as Microsoft IT has deployed storage frames that support many more LUNs. Because of this added capacity, Microsoft IT now deploys one virtual machine per LUN. When a business group requests a new server, Microsoft IT requests a 500-GB LUN, with only 50 GB available as formatted space. As the virtual machine storage requirements grow, the virtualization team can expand the LUN capacity up to 500 GB.

Another new storage-frame feature is the option for thin provisioning, which enables Microsoft IT to "over-provision" space on the SAN based on the assumption that most virtual machines will not grow large enough to utilize the 500-GB storage space that is available to them.
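A minimal sketch of the over-provisioning arithmetic, assuming a hypothetical frame with 10 terabytes of physical capacity backing forty thin LUNs (the 500-GB figure is the per-virtual-machine LUN size described above):

```python
def overcommit_ratio(physical_gb: float, luns: int, lun_gb: float = 500) -> float:
    """Logical capacity promised per GB of physical storage actually present."""
    return (luns * lun_gb) / physical_gb

# Hypothetical frame: 40 x 500-GB thin LUNs backed by 10 TB of physical disk.
print(overcommit_ratio(10_000, 40))  # 2.0, i.e. 2x over-provisioned
```

In practice, a ratio above 1.0 is safe only while monitoring confirms that actual consumption stays well below the promised total.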

In rare cases, the virtual machines actually do require large storage locations. If the virtual machines need 300 GB or less, Microsoft IT provides this storage as a fixed virtual disk. If the virtual machine requires more than 300 GB of disk space, Microsoft IT provides the extra storage as a pass-through disk. The primary rationale for the 300-GB limit is to restrain the size of the virtual drive that must be copied when a virtual machine moves to another host. The virtual disks are copied when the virtual machine moves, but with pass-through disks, the destination physical machine can be attached to the same physical LUN.
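The 300-GB decision rule can be expressed as a simple policy check; the function name and sample sizes are illustrative, not part of Microsoft IT's tooling:

```python
def storage_type(required_gb: float) -> str:
    """Stated rule of thumb: fixed virtual disks up to 300 GB, pass-through
    disks beyond that, so that moving a virtual machine to another host
    never requires copying a very large virtual disk file."""
    return "fixed virtual disk" if required_gb <= 300 else "pass-through disk"

print(storage_type(120))  # fixed virtual disk
print(storage_type(450))  # pass-through disk
```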

Note: Microsoft IT has considered using Internet Small Computer System Interface (iSCSI) for virtual machine storage. Microsoft IT has tested deploying drive C for the virtual machines on Fibre Channel and then using iSCSI for additional storage. However, because of the storage-deployment method, and because this would require extensive reconfiguration of the IP address allocation, it was not deemed practical. Additionally, as Microsoft IT deployed a Fibre Channel infrastructure that could support more LUNs, this reduced the reasons to use iSCSI. As a result, iSCSI is used only for guest clustering.

Networking Configuration

The networking configuration for the physical host computers is fairly simple in the current configuration. Microsoft IT deploys all host computers with two integrated network adapters, and it configures both network adapters so that both Hyper-V management and guest networks can use them. Microsoft IT then configures a virtual network for each of the network adapters to provide load balancing of guest network traffic between the two virtual networks.

Microsoft IT also configures the host computers with two dual-port, multiple-function network adapters that are reserved for virtual machine iSCSI connections or for virtual machines that require extra network bandwidth. In most cases, the additional network adapters are not required.

High Availability

Another virtualization issue that Microsoft IT has had to address is whether to make virtual machines highly available by implementing failover clusters on host computers. Windows Server 2008 provides the option of implementing failover clusters and then configuring individual virtual machines as highly available applications within the cluster. The main requirement for implementing highly available virtual machines is to ensure that the virtual machine configuration and hard disk files are stored on shared storage that all of the cluster's nodes can access.

Microsoft IT has implemented high availability for some virtual machines but has not adopted this as the standard for most virtual machines running on Windows Server 2008 Hyper-V. There are several reasons why Microsoft IT has not implemented high availability for most virtual machines:

  • Microsoft IT has achieved 99.95 percent availability for virtual machines running on Microsoft Virtual Server 2005 R2, and it anticipates that the availability will increase for virtual machines running on Hyper-V. Very few applications that have been deployed as virtual machines require a higher availability level.
  • With Windows Server 2008 failover clustering, an administrator must store each virtual machine on an individual LUN. Because an administrator must provide all cluster nodes with access to the same shared storage by using the same drive letters, 23 is the maximum number of virtual machines that can run in a failover cluster. Microsoft IT could work around this limitation by using mount points and virtual machine groupings, but it considers this configuration too complex to administer. Because of this limitation, Microsoft IT has adopted a standard of using only three nodes in a cluster, with the cluster configured to tolerate one node's failure.
  • When virtual machines fail over in a Windows Server 2008 failover cluster, the cluster service with Hyper-V must save the virtual machine state, transfer the control of the shared storage to another cluster node, and restart the virtual machine from the saved state. Although this process takes only a few seconds, the virtual machine still is offline for that brief period. If an administrator has to restart all hosts in the failover cluster because of a security update installation, the virtual machines in the cluster have to be taken offline more than once. Therefore, Microsoft IT determined that highly available virtual machines could have more downtime than virtual machines deployed on stand-alone servers in the case of simple planned downtimes for host maintenance, such as applying software updates.
  • Because of the required brief outage every time a virtual machine is moved from one host to another, Microsoft IT found that coordinating the server update processes with virtual machine owners was difficult. Because one physical host could contain several virtual machines, Microsoft IT had to communicate with each of the virtual machine owners and coordinate host server maintenance with virtual machine maintenance.
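The three-node standard described above implies some simple sizing arithmetic. The sketch below is illustrative, using only the figures stated in this section (at most 23 virtual machines per cluster because of shared drive letters, three nodes, one tolerated node failure):

```python
import math

MAX_VMS_PER_CLUSTER = 23   # drive letters D-Z must match on every node
NODES = 3                  # Microsoft IT's standard cluster size
TOLERATED_FAILURES = 1

# Average load per node in normal operation:
steady_state_per_node = MAX_VMS_PER_CLUSTER / NODES
# If one node fails, the surviving nodes must absorb every virtual machine:
failover_per_node = MAX_VMS_PER_CLUSTER / (NODES - TOLERATED_FAILURES)

print(round(steady_state_per_node, 1), math.ceil(failover_per_node))
```

In other words, each node must keep enough headroom to run roughly half the cluster's virtual machines after a single-node failure.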

Because of these issues, Microsoft IT has not deployed failover clustering as the default standard for virtual machines. Microsoft IT has deployed several three-node clusters and does provide this service for virtual machines running critical workloads. One of the places where Microsoft IT is using failover clustering for virtual machines is in some branch offices that do not have 24-hour support staff on site. In a data center where administrators always are available to react to host downtime, Microsoft IT has minimized the use of Hyper-V clustering.

Note: Some of the new features in Windows Server 2008 R2 Hyper-V will address the limitations that Microsoft IT has found with the current Hyper-V version. Because of this, as Microsoft IT moves to the newer Hyper-V version, it will make high availability the default option for all virtual machines deployed on Windows Server 2008 Hyper-V. For more details, refer to the "Future Directions" section.

Software Configuration

Another decision that Microsoft IT had to make when deploying server virtualization was determining which Windows Server 2008 edition to use as the host computer operating system. An organization can deploy Hyper-V in any of the following configurations:

  • Windows Server 2008 Standard, Windows Server 2008 Enterprise, and Windows Server 2008 Datacenter, full installation option.
  • Windows Server 2008 Standard, Windows Server 2008 Enterprise, and Windows Server 2008 Datacenter, Server Core installation option.
  • Microsoft Hyper-V Server 2008, a dedicated stand-alone product that contains only the Windows Server roles, features, and components needed to support a Hyper-V host.

Microsoft IT has chosen to use the Server Core installation option of Windows Server 2008 Enterprise for all Hyper-V hosts. The main reasons for this are:

  • The Server Core installation option has fewer components than the full server installation, so there are fewer components to update and less server-maintenance overhead. Fewer software changes, service restarts, and system restarts maintain a higher availability level for hosts and virtual machines.
  • The Server Core installation option provides a smaller surface area for attack because fewer components are installed.
  • The Server Core installation option does not provide a graphical user interface (GUI) for server management. Instead, these servers are configured as dedicated Hyper-V hosts that are managed centrally through standard administrative procedures and tools. Microsoft IT has found that many server failures are due to operational errors. The Server Core installation option makes it more difficult for administrators to make nonstandard configuration changes, which encourages centralized management and decreases the chances of administrators making errors in the server's configuration after deployment. The lack of a GUI also increases the likelihood that the management processes will be automated. Microsoft IT has found that the automation and standardization of management processes is critical to maintaining high availability.

Security Considerations

As part of the server virtualization deployment, Microsoft IT has spent considerable time defining the security settings for the host computers. Microsoft IT has adopted the general principle that host machines need to be as secure as, or more secure than, any virtual machine running on the host computer. Because most servers at Microsoft have a similar security configuration, the host computers do as well.

One of the ways that Microsoft IT helps ensure the security of the host computers is by deploying computers running a Server Core installation of Windows Server 2008 as dedicated Hyper-V servers. This means that Microsoft IT installs only those roles and features that are needed to support Hyper-V. Microsoft IT also leaves Windows® Firewall enabled on all Hyper-V servers and opens only the ports required for Hyper-V and remote management. Essentially, the host computers take advantage of many Windows Server 2008 built-in security options, and Microsoft IT makes sure that it does not weaken the security settings.

Microsoft IT also ensures that a minimum of software is installed on the host computer, and that antivirus software is installed on both the host computer and all virtual machines. Because Microsoft IT manages and monitors all servers by using Microsoft System Center Configuration Manager and Microsoft System Center Operations Manager, it also installs the required agents. In some cases, it also installs software for an intrusion detection system on the hosts.

Microsoft IT also scans host computers regularly for security issues and applies security updates consistently. Additionally, Microsoft IT ensures that the management processes for host computers are consistent. One goal is to ensure that only appropriate people have access to host computers. For example, virtual machine owners do not have access to the host computer. They can modify settings only on the virtual machines. This ensures that the virtual machine owners cannot affect the host computer or any other virtual machines on the hosts. Microsoft IT uses Microsoft System Center Virtual Machine Manager to manage all virtual machines and to assign all permissions.

Microsoft IT has taken steps to enhance the security of the host computers and virtual machines by introducing Institute of Electrical and Electronics Engineers (IEEE) 802.1Q virtual local area network (VLAN) tagging. With Hyper-V, an administrator can assign a VLAN tag to a virtual network on the host computer or to a virtual network adapter attached to a virtual machine. By using this technology, Microsoft IT has been able to configure host computers that are connected to one network, while the virtual machines hosted on the server are attached to another network. For example, Microsoft IT deploys the host computers on the internal network, while some of the virtual machines connect to a perimeter network that must be isolated from the internal network. 802.1Q VLAN tagging makes this possible without requiring the networks to be physically separated. This optimizes the host computers' security and makes server management easier through centralized policies.

Future Directions

Microsoft IT consistently uses available technology to optimize server-virtualization deployment. As Microsoft IT has moved from Virtual Server 2005 R2 to Windows Server 2008 Hyper-V as the hosting platform, it has used new features to provide more capacity and features for virtual machines. Microsoft IT continues to take advantage of new features in new hardware platforms and with Windows Server 2008 R2 Hyper-V.

Scale Units

One of the goals for server virtualization is to rearchitect the server, storage, and network infrastructure that provides the virtualization service. The current model is to deploy single rack-mounted servers, which attach individually to direct-attached storage and a SAN, and to two or more networks. This current model makes planning for future growth difficult and requires significant time and effort to add new host computers to the environment.

Microsoft IT is developing a scale unit model to replace the current model for implementing virtual machine hosts. The scale unit model will create a pool of compute, storage, and network resources that can be deployed in bundles, enabling extensibility, reuse, and reallocation. The scale unit model will include new concepts for:

  • Computing power. Rather than deploying a single server, the scale unit model will deploy one or more racks of blade servers simultaneously. The current draft specification identifies one scale unit as four blade enclosures deployed in one rack, with a total of 64 blade servers in each rack. Given the current ratios of virtual machines per host, and assuming that 80 percent of the blades will be virtualization hosts, each scale unit will be able to host more than 330 virtual machines and provide eight dedicated blades for nonvirtualization hosts. As Microsoft IT deploys these compute racks, it also plans to adopt additional new features, including:
    • SAN startup for all host computers.
    • Use of field-replaceable units for most of the design's components.
    • Higher density of virtual machines per host as the host-machine capacity increases with six-core processors and additional RAM.
  • Storage. The scale unit model also calls for a dedicated SAN to be deployed for the virtualization hosts. The current draft specification calls for nine SAN storage racks to service 11 compute racks. The SAN deployments will take advantage of new storage features, including thin provisioning and Fibre Channel over Ethernet.
  • Network. A dedicated network infrastructure also will be deployed as part of the scale unit model. The network infrastructure will continue to use 802.1Q VLANs but will take advantage of new features, including 10-gigabit Ethernet and the convergence of TCP/IP networking and block storage over the same infrastructure. This enables use of the same network adapters in the host computer for both networking and SAN access.

Windows Server 2008 R2 Hyper-V

The release of the next Hyper-V version, which will ship with Windows Server 2008 R2, also will result in significant changes to the server virtualization infrastructure. This Hyper-V version will provide several significant new features related to server virtualization, including:

  • Live Migration. This feature enables administrators to move a virtual machine between two virtualization host servers without service interruption. The users who are connected to the virtual machine that is moving may notice only a slight slowing of performance for a few moments.
  • Cluster Shared Volumes (CSV). Live Migration uses the new Cluster Shared Volumes feature within failover clustering. CSV volumes enable multiple nodes in the same failover cluster to access the same LUN concurrently. From each virtual machine's perspective, it appears to own the LUN exclusively; in fact, the .vhd files for multiple virtual machines are stored on the same CSV volume.
  • Dynamic input/output (I/O) redirection. The CSV architecture implements a mechanism called dynamic I/O redirection, in which I/O can be rerouted within the failover cluster based on connection availability. For example, if one node in the cluster loses its network connection or SAN connection, the network or disk I/O from that node can be redirected through another cluster node.

Note: Windows Server 2008 R2 includes several additional features that an organization can use to optimize Hyper-V deployment. These features include Second Level Address Translation (SLAT), which uses new features on today's CPUs to improve virtual machine performance while reducing processing load on the Windows hypervisor; Core Parking, which consolidates processing onto the fewest possible processor cores and suspends inactive processor cores; and TCP Offload support and the use of Jumbo Frames. Windows Server 2008 R2 also provides improvements for Terminal Services, which is now called Remote Desktop Services, and Virtual Desktop Infrastructure (VDI) deployments. For details on these and other features in Windows Server 2008 R2, refer to the Windows Server 2008 R2 Reviewers Guide at http://download.microsoft.com/download/F/2/1/F2146213-4AC0-4C50-B69A-12428FF0B077/Windows_Server_2008_R2_Reviewers_Guide_(BETA).doc.

One of the most significant changes that the scale unit model and the next version of Hyper-V will bring to the Microsoft IT server virtualization environment is that all virtualization hosts will be clustered and all virtual machines will be highly available. With Live Migration, virtual machines no longer require any downtime as an administrator updates and restarts host computers. Additionally, with Cluster Shared Volumes, an administrator can store multiple virtual machines on a single LUN, which enables the presence of many more virtual machines on a failover cluster. The current expectation is that Microsoft IT will be able to deploy failover clusters with up to 16 nodes per cluster, which will provide much more capacity for absorbing unplanned single-node failures.

Best Practices

Server virtualization has become a core component in managing the IT infrastructure at Microsoft. As Microsoft IT has worked with server virtualization over several years, it has developed the following best practices:

  • Plan the server virtualization infrastructure carefully. Like any other IT project, the optimal deployment of server virtualization requires tradeoffs between availability, performance, and cost; higher availability comes at increased cost. Microsoft IT has developed two different server virtualization platforms: one for lab and development environments, and one for production environments. Another important consideration is whether to design a short-term solution or a long-term solution. The organization will need to find the balance between over-provisioning and over-engineering on the one hand, and creating an infrastructure that cannot grow with its needs on the other. Microsoft IT has found that short-term planning tends to result in solutions that have limited scalability, and it now is developing an infrastructure that will meet its requirements for the next three to five years.
  • Simplify and standardize the platform on which to deploy server virtualization. Microsoft IT has adopted a Server Core installation of Windows Server 2008 as its operating system for virtualization hosts and plans to use a Server Core installation of Windows Server 2008 R2. The virtualization hosts also are dedicated to this one task, so an administrator should install only the software and apply only the settings that this role requires.
  • Automate and standardize the administration of the virtual server environment. Microsoft IT is planning to deploy several thousand virtual servers over the next few years, and the only way to manage them efficiently is to standardize the deployment and then automate the management tasks as much as possible. This includes:
    • Standardize the virtual machine builds by creating standard templates for different server workloads. Ideally, Microsoft IT wants to create a set of templates for building standard virtual machines for each server workload. For example, by examining the standard Internet Information Services (IIS) server build or the standard Microsoft SQL Server® build, Microsoft IT wants to create a standard virtual machine template for IIS or SQL Server. This will make deploying new virtual servers for each workload much more efficient.
    • Standardize the host and virtual server configuration. For example, Microsoft IT strives to assign the same administrator groups to all host computers, and to assign the same network names on all host computers. Increasingly, Microsoft IT is using scripts and other forms of automation to deploy and manage the host and virtual machines. Automation makes it much easier to ensure that users adhere to server build standards.
    • Implement remote management solutions. One of the important criteria that Microsoft IT uses when choosing hardware and software platforms is the support that is provided for remote management and automation.
    • Explore the option of using self-service provisioning of virtual machines. Microsoft IT has implemented self-service provisioning, which can alleviate the workload for the team that is managing the virtual environment, while increasing the responsiveness to business requirements. Self-service provisioning increases the requirement to make sure that an organization has automated and standardized all virtual machine settings.
    • Implement System Center Virtual Machine Manager. Virtual Machine Manager provides many tools for simplifying the administrative tasks that are required to manage a large virtualization environment. An organization can use Virtual Machine Manager to store server templates and to automate the virtual server deployment to the appropriate host computers. Virtual Machine Manager provides self-service provisioning of virtual machines and automatically places virtual machines on the most suitable host.
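Virtual Machine Manager's actual placement logic rates hosts on CPU, memory, disk, and network capacity. Purely as a conceptual illustration of this kind of capacity-based placement (the class and function names below are hypothetical, not VMM APIs), a greedy memory-based heuristic might be sketched in Python like this:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """A virtualization host with a fixed amount of physical memory."""
    name: str
    memory_mb: int                      # total physical memory on the host
    vms: list = field(default_factory=list)

    def free_memory(self) -> int:
        # Memory not yet committed to virtual machines on this host.
        return self.memory_mb - sum(vm["memory_mb"] for vm in self.vms)

def place_vm(hosts, vm):
    """Greedy placement: pick the host with the most free memory that can
    still fit the new virtual machine. Returns the chosen host, or None
    if no host has enough capacity."""
    candidates = [h for h in hosts if h.free_memory() >= vm["memory_mb"]]
    if not candidates:
        return None
    best = max(candidates, key=lambda h: h.free_memory())
    best.vms.append(vm)
    return best

hosts = [Host("HOST-A", 32768), Host("HOST-B", 65536)]
placed = place_vm(hosts, {"name": "IIS-01", "memory_mb": 8192})
print(placed.name)  # HOST-B, because it has the most free memory
```

A production placement engine would also weigh CPU load, storage throughput, and network bandwidth, but the shape of the decision, filter out hosts that cannot fit the workload and then rank the remainder, is the same.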

Benefits

Microsoft IT is realizing very significant cost savings from server virtualization. By reducing the physical servers in data centers, Microsoft can save millions of dollars in hardware purchases and power consumption. Additionally, by freeing space in the current data centers, Microsoft also may avoid the cost of building a new data center.

To gain maximum benefit from server virtualization, Microsoft IT has developed numerous standards and best practices, which have yielded several benefits:

  • By standardizing on only a few hardware platforms that will provide virtualization services, Microsoft IT has simplified its processes for deploying and managing the virtualization hosts.
  • By standardizing the server operating system on a Server Core installation of Windows Server 2008, Microsoft IT has maximized the server uptime while enforcing standard deployment and operating procedures.
  • Microsoft IT has had to adapt to several different storage-deployment options to address either operating system or storage hardware limitations. Trying different solutions has enabled Microsoft IT to identify the best solutions to meet business requirements while simplifying virtualization host management.
  • By continuing to adopt new technologies, Microsoft IT can improve the virtualization environment continuously.

Conclusion

In the past several years, Microsoft IT has replaced physical servers in data centers by deploying hundreds of virtual machines. During this process, Microsoft IT developed best practices for deploying and managing the host computers on which the virtual machines run. By using the latest server, network, and storage hardware in conjunction with Windows Server 2008 Hyper-V and System Center Virtual Machine Manager, Microsoft IT has deployed a virtualization infrastructure that provides a high service level to the Microsoft business units.

For More Information

For more information about Microsoft products or services, call the Microsoft Sales Information Center at (800) 426-9400. In Canada, call the Microsoft Canada Information Centre at (800) 563-9048. Outside North America, contact your local Microsoft subsidiary. To access information via the World Wide Web, go to:

© 2009 Microsoft Corporation. All rights reserved.

This document is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY. Microsoft, Hyper-V, SQL Server, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. The names of actual companies and products mentioned herein may be the trademarks of their respective owners.
