Virtualization: Prepare to Virtualize

Getting ready to deploy virtual machines in your infrastructure requires a bit of homework and legwork, but the payoffs are worthwhile.

Brian Marranzini

Virtualization is a valuable technology for helping you get the most out of your IT investments. The cost-benefit equation is easy to calculate and justify. Virtualization can also help with management and availability challenges, providing such capabilities as backup, restore, portability, testing and rollback.

You can snapshot a virtual machine (VM) before applying changes and then roll it back if those changes cause a problem. You can back up and restore across different hardware simply by exporting or importing a VM onto any Hyper-V-capable system. You can establish hardware redundancy by clustering virtualization hosts.

Deploying VMs does, however, require individual maintenance and, most importantly, proper system assessment. Furthermore, virtualized applications require additional layers of management and monitoring that aren’t necessary in the physical world.

In this article, I’ll address these and other challenges you’ll face when deploying a virtual environment (for additional information, check the build, architecture and deployment guides). These best practices for preparing to use virtualization and Hyper-V will help you maximize your use of virtualization while minimizing the risks.

Instant Infrastructure

Virtualization has caused a paradigm shift. It used to take weeks and sometimes even months to order, rack, power, network and configure a new server. Now you can accomplish all that by right-clicking a machine in System Center Virtual Machine Manager (VMM), selecting “template,” then “deploy.”

All of the basics are essentially done for you at that point. By building a template, you get a “sysprepped” VM based on your specific requirements, joined to a domain, and ready to run in 30 minutes or less if you’re using SAN-based copy technology with VMM. VMM determines which of your virtualization hosts has the most capacity and then places your VM there. While it does sound simple, virtualization is anything but a turnkey service straight out of the box.

Before we even get into hardware, the first question is, “Are you licensed for this?” While an in-depth licensing review is well beyond the scope of this article, I’ll give a basic overview. For the official stance, you can review the Microsoft product use rights.

Windows Server 2008 Standard Edition lets you run one VM for each license. Each Enterprise Edition license allows up to four VMs. The Datacenter Edition allows unlimited VMs and is sold per physical processor (not per core).

If you bought your Windows Server licensing with your hardware and got the OEM versions, then you can’t move the OS from that piece of hardware—it lives and dies there. If you bought it through one of the volume licensing channels or a large account reseller, you can move the license once every 90 days, or in the event of a hardware failure.

That means if you’re going to cluster or move VMs around, you can license each host for the maximum number of VMs you’ll need to run on it within any 90-day window. You could also license all of your hosts with Datacenter Edition.

Datacenter Edition is the least-expensive way to buy Windows if you’re running more than three to four VMs per physical processor. You may also want to strongly consider Windows Software Assurance. When the next version of Windows comes out, if you plan on installing even one copy and moving it around your cluster, you’d need a license for each host.

The System Center products are available in both Enterprise and Datacenter Suite versions, which secure rights to use the entire System Center stack with an unlimited number of VMs. To compare, multiply the number of VMs on a host by the number of individual System Center Management Licenses you need per VM, add the cost of licensing the host, and weigh the total against the cost of the appropriate management suite.

There’s a cut-off around three tools or three VMs where the suites are the least-expensive way to license. For most applications, there’s a license mobility right that allows for movement within a farm that can include up to two datacenters.
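To see roughly where that cut-off falls, a quick back-of-the-envelope comparison helps. The sketch below uses made-up placeholder prices, not actual Microsoft pricing or product use rights; substitute your own volume-licensing quotes before drawing any conclusions.

```python
# Rough per-VM vs. per-host suite comparison. All prices are hypothetical
# placeholders; substitute your actual volume-licensing quotes.

PER_VM_ML_PRICE = 150        # one System Center Management License per VM (example price)
TOOLS_PER_VM = 3             # number of System Center tools licensed individually per VM
SUITE_PRICE_PER_HOST = 1500  # management suite covering unlimited VMs on one host (example price)

def cheaper_option(vms_per_host: int) -> str:
    individual = vms_per_host * TOOLS_PER_VM * PER_VM_ML_PRICE
    return "suite" if SUITE_PRICE_PER_HOST <= individual else "individual licenses"

for vms in (1, 2, 3, 4, 8):
    print(f"{vms} VMs per host -> {cheaper_option(vms)}")
```

With your real numbers plugged in, the crossover point tells you how densely you need to pack VMs before the suite pays for itself.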

Converting to the Virtual

Whether you’re converting physical machines to VMs using VMM or building a new environment, you can plan those conversions with either Operations Manager or the free Microsoft Assessment and Planning (MAP) Toolkit. Both help you capture and review performance information from potential virtualization candidates.

Now let’s consider your hardware. The hardware you choose must be able to handle the cumulative workload you plan to run at peak level. So, if you have one workload that uses a dual-core 2GHz processor at an average utilization rate of 80 percent of peak, and another using the same processor at 50 percent of peak, you need to ensure you have a system equal to at least 130 percent of that processor.

You could also use a dual-core 3GHz processor to allow for some overhead (approximately 10 percent in this case) for the physical host to run. Usually, though, the processor isn’t the limiting factor. Most workloads run at a very low average CPU utilization.
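To make that arithmetic concrete, here’s a minimal sketch of the sizing exercise. The workload percentages and the candidate host are the illustrative numbers from above, with everything expressed relative to the dual-core 2GHz reference processor.

```python
# Back-of-the-envelope CPU sizing, everything relative to a dual-core 2GHz
# "reference" processor. The workloads and figures are illustrative only.

workloads = [0.80, 0.50]           # each workload's average use of the reference processor
required = sum(workloads)          # 1.30 -> at least 130% of the reference is needed

candidate = (3.0 * 2) / (2.0 * 2)  # dual-core 3GHz host = 1.5x the reference capacity
headroom = candidate - required    # roughly 0.20x the reference left for the host to run

print(f"required: {required:.0%} of the reference processor")
print(f"candidate host: {candidate:.0%} of the reference processor")
print(f"headroom for the parent partition: {headroom / candidate:.0%} of the host")
```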

The biggest obstacles to virtualizing workloads are, in descending order, disk I/O, RAM, network I/O and, finally, the CPU. CPU speeds and core counts continue to increase in line with Moore’s Law, and RAM prices continue to decline as density increases.

Disk capacity has also increased significantly over time, but disk I/O is still relatively price-prohibitive. If you go for maximum capacity per disk, you end up with fewer spindles and less overall speed. Make sure you look carefully at the disk I/O profile of your workloads.

You also need to understand and plan for the I/O type and optimize your storage for the workload. In the case of a Virtual Desktop Infrastructure (VDI), if you’re planning on placing VMs in a saved state or turning them off when users aren’t connected, expect a boot storm when your workforce arrives.

Generally speaking, more physical spindles are better. With a cluster, optimize your SAN-based disk for write performance, with dedicated communication channels and spindles for your virtualization workloads. Using multiple paths can also help. Finally, separate and segment physical disks (not partitions) to prevent I/O-intensive workloads from affecting less-intensive ones.
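As a sanity check on spindle counts, here’s a rough sketch of the kind of estimate you can run against a workload’s measured I/O profile. The per-spindle IOPS figures and RAID write penalties are common rules of thumb for rotational disks, not figures from this article or from any vendor, so measure your own storage before committing.

```python
import math

# Rough IOPS-per-spindle rules of thumb for rotational disks (assumptions).
IOPS_PER_SPINDLE = {"15k_sas": 175, "10k_sas": 125, "7.2k_sata": 75}

def spindles_needed(peak_read_iops, peak_write_iops, raid_write_penalty, disk_type):
    # RAID write penalty: roughly 2 for RAID 10, 4 for RAID 5 (common approximations)
    backend_iops = peak_read_iops + peak_write_iops * raid_write_penalty
    return math.ceil(backend_iops / IOPS_PER_SPINDLE[disk_type])

# Example: a workload peaking at 800 reads/s and 400 writes/s on RAID 10
print(spindles_needed(800, 400, raid_write_penalty=2, disk_type="15k_sas"))  # -> 10
```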

There are also decisions to make about the way you configure VM hard drives. You can opt for dynamically expanding virtual hard drives (VHDs), fixed VHDs, pass-through disks or native iSCSI directly from within the guest VM. All of these configurations are supported.

The basic recommendations are pretty simple: Use a VHD unless you absolutely need direct disk access for SAN-specific capabilities. This would include things like SAN-based, application-aware backups, or guest-to-guest clustering over iSCSI.

Fixed VHDs offer better performance than dynamically expanding VHDs. In fact, they’re very close to native disk performance. You can also mix and match configurations. Again, your ultimate solution will be based on workload. For more details, check out this white paper on VHD performance.

Another thing to remember is that you should defragment dynamically expanding VHDs from the host’s perspective on a semi-regular basis. The frequency will depend on how many VHDs are on the drive and how often they expand.

One final consideration regarding disk storage space is making a reasonable estimate of snapshot storage requirements. You’ll want to plan on roughly 20 percent to 30 percent of additional overhead, depending on how many snapshots you plan to keep and how often you take them.
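A quick way to fold that overhead into your storage plan is to add it on top of the sum of your planned VHD sizes, as in this small sketch. The VM sizes are made-up example values.

```python
# Quick storage estimate: planned VHD sizes plus 20-30 percent snapshot
# headroom, per the guidance above. The VM sizes are made-up examples.

vhd_sizes_gb = [40, 60, 80, 127]  # planned VHD size per VM (example values)
snapshot_overhead = 0.25          # mid-point of the 20-30 percent range

total_gb = sum(vhd_sizes_gb)
print(f"VHDs: {total_gb} GB; with snapshot headroom: {total_gb * (1 + snapshot_overhead):.0f} GB")
```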

Maximum Memory

It’s relatively easy to calculate how much RAM you need. Consider the cumulative amount of RAM for all workloads combined and add 1GB for the host, plus an additional 20MB to 30MB for each guest.
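Here’s that rule of thumb as a small sketch. The guest sizes are example values, while the 1GB host reserve and the 20MB-to-30MB per-guest overhead come straight from the guidance above.

```python
# RAM sizing per the rule of thumb above: sum of guest RAM, plus 1GB for the
# host, plus 20-30MB of overhead per guest. Guest sizes are example values.

guest_ram_mb = [600, 1024, 2048, 4096]  # planned RAM per guest (examples)
host_reserve_mb = 1024                  # about 1GB reserved for the parent partition
per_guest_overhead_mb = 30              # upper end of the 20MB-30MB range

total_mb = sum(guest_ram_mb) + host_reserve_mb + per_guest_overhead_mb * len(guest_ram_mb)
print(f"Plan for at least {total_mb / 1024:.1f} GB of physical RAM")
```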

Hyper-V in Windows Server 2008 R2 SP1 will include dynamic memory allocation that lets VMs return any RAM not currently in use to the host. Even without Service Pack 1, if you’re looking at how much RAM a workload actually uses, there’s room for a lot of optimization.

Virtualization also lets you tune the amount of RAM available to a workload based on need, instead of actual memory chip sizes. For example, you can build a VM with 600MB of RAM, where normally you’d be limited by physical RAM chip sizes and pairing channels. The only limitation is that it needs to be an even number.

With System Center Operations Manager (SCOM), you can look at the historical performance information on the guest and adjust its RAM as needed. Dynamic memory in Hyper-V for Windows Server 2008 R2 SP1 will reduce the need for this level of monitoring and management. Until then, you can achieve significant consolidation by reviewing this performance information for each VM.

The biggest network I/O concerns are having enough ports and throughput for the workloads. The internal VM switch supports high throughput, but only if you configure your VM hardware profiles with the synthetic network interface controller (NIC). The legacy NIC is limited to 100Mbps. If you used legacy NICs for the initial deployment, you may want to remove and reconfigure them. Use a synthetic NIC whenever the guest OS supports it.

For host machines in a cluster that uses iSCSI storage, dedicate two NICs with Microsoft Multipath I/O in a load-balancing configuration for iSCSI only. You can use them for heartbeat communications while they’re handling iSCSI load. Add another dedicated NIC for host management, and whatever remains is for your VMs.

Typically, if the motherboard has two NICs onboard, an additional four-port NIC, at the very least, is helpful. More are recommended if you’re running a network-intensive VM workload, like an OS deployment or a high-utilization file server. All NICs should be at least gigabit.

Better Backup

For backup, System Center Data Protection Manager (DPM) does a great job offering Hyper-V-capable backups. It also provides application-specific backups. There are pros and cons to backing up an entire VM versus doing an application-aware backup. One of the new capabilities with DPM 2010 is the ability to mount a VHD and do individual file recovery directly from the backup of the VM.

Evaluate your application needs individually. A great example is SharePoint: If someone wants to recover a document from a document library using only VM backups, they need to restore SQL Server, front-end Web servers and a domain controller into a private network. The next step is browsing through the library and finding the file. If you have a SharePoint-aware application backup, you can simply restore the document library to another site and then grab the file.

If you have SCOM, it’s best to implement Performance and Resource Optimization (PRO). PRO lets SCOM monitor workloads at both the host and individual application layers. It can also automatically remediate common problems, such as moving a VM off a host when the CPU fan isn’t spinning fast enough or when the workload requirements exceed the host’s capacity. For more details, study this guide for integrating SCOM and VMM.

There are also some challenges to consider when virtualizing domain controllers. Basically, you never want to restore any snapshots or backups of a domain controller. This is a good guide that’s still applicable and goes through the details of virtualizing a domain controller.

Virtualization and Hyper-V are powerful technologies that can maximize resources, simplify management and save IT dollars. Do your homework before deploying your virtual infrastructure, and you can prevent headaches down the road.


Brian Marranzini is a core infrastructure technology architect focusing on virtualization, Windows Server, Windows Client, infrastructure and security. He’s also a freelance writer and has authored various articles for technology magazines, as well as delivered webcasts and internal and customer training materials for Hands on Labs. Beyond that, he’s developed and delivered many sessions at major product launch events.