Virtualization: Run Hyper-V on commodity hardware

You don’t need high-end hardware to run Hyper-V. When properly configured, you can run Hyper-V on mere mortal machines.

Brien M. Posey

One of the big myths surrounding hardware virtualization is that it requires high-end server hardware. However, you can build an effective virtualization infrastructure using commodity hardware.

If you’re planning to run Hyper-V in a production environment and you work for a large organization, then you should use true enterprise-class hardware. Larger organizations have no reason to give up their high-end hardware, but smaller businesses may find that commodity hardware makes the sometimes-astronomical cost of server virtualization easier to absorb.

By its very nature, server virtualization is demanding in terms of hardware requirements. After all, the entire science of server virtualization is based around the idea that multiple virtualized workloads can share a finite pool of server resources. As such, the idea of using commodity hardware for server virtualization may seem completely counterintuitive.

Believe it or not, however, you really can use commodity hardware to handle virtualized workloads in small and midsize businesses (SMBs). You can attribute this to two main factors. The first is that computer hardware is a lot more powerful than it used to be.

There are still limits to what you can do with commodity hardware. You probably won’t be able to go to Best Buy, purchase a PC, load Hyper-V, and use it to run production workloads (although it might work fine for a lab environment). You’ll need slightly higher-end hardware.

The other factor is improvements that Microsoft has made to Hyper-V. Hyper-V 3.0 (which is included with Windows Server 2012) is flexible enough to be used by extremely small businesses or by large enterprises.
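To put that in concrete terms, enabling Hyper-V on Windows Server 2012 takes a single PowerShell command. This is a minimal sketch that assumes an elevated PowerShell prompt on the machine that will become the host:

    # Add the Hyper-V role and its management tools, then reboot to finish
    Install-WindowsFeature Hyper-V -IncludeManagementTools -Restart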

Hyper-V improvements

There are two main improvements to Hyper-V that make it practical to use on commodity hardware. The first and most important of these improvements is that Hyper-V 3.0 clusters don’t require you to use shared storage.

It might seem strange to even mention clustering, but there’s a good reason for doing so. If you’re going to run production workloads in a virtualized environment, then your host servers really need to be clustered. It’s a bad idea to run Hyper-V as a standalone server. If the host server fails, then all the virtual servers running on that host will also fail. The end result is a major outage.

Clustering was one of the main factors that historically kept smaller organizations from virtualizing their servers. Clusters used to require host servers that complied with extremely precise specifications. Furthermore, clusters required a pool of shared storage connected to each cluster node over an iSCSI or Fibre Channel connection, and shared storage can be tremendously expensive.

Hyper-V 3.0 greatly decreases the cost of clustering by not requiring that you use shared storage. Each server in the cluster can have its own direct-attached storage (DAS). Microsoft has also relaxed the hardware requirements for cluster nodes to the point where you can use just about any server capable of running Windows Server 2012. The Cluster Configuration Wizard can tell you whether your server meets the minimum clustering requirements.
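If you would rather script the setup than click through the wizards, the sketch below shows the rough shape of the process. The node names (HV1 and HV2), the cluster name and the IP address are placeholders for your own environment:

    # Install the failover clustering feature on each prospective node
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools

    # Run the validation tests (the PowerShell equivalent of the validation wizard)
    Test-Cluster -Node HV1, HV2

    # Create the cluster without assigning any shared storage
    New-Cluster -Name HVCLUSTER -Node HV1, HV2 -StaticAddress 192.168.0.50 -NoStorage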

Another reason clustering has traditionally been so expensive is because in the past you needed a minimum of three cluster nodes running matched hardware (or two matching nodes and a file share witness). Hyper-V clusters still require that you have at least three nodes (or two nodes and a file share witness), but the hardware no longer has to match. Matching hardware is, however, advisable.
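If you do go the two-node route, the witness can be an ordinary file share on a third machine. As a hedged example (the share path is a placeholder):

    # Configure the cluster quorum to use a file share witness
    Set-ClusterQuorum -NodeAndFileShareMajority '\\FS1\ClusterWitness'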

If purchasing three cluster nodes is beyond your organization’s budget, you might be able to reduce your costs by using Hyper-V replication instead of clustering. The Hyper-V Replica feature asynchronously copies your virtual machines (VMs) to an alternate host and keeps those copies up to date. It doesn’t provide the automatic failover that a cluster does, but you can manually fail a VM over from one host to the other. The nice thing about replication is that you only need two host servers.
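Replication is enabled per VM. The following sketch shows roughly what the configuration looks like in PowerShell; the VM name, host names and storage path are placeholders, and Kerberos authentication over HTTP is assumed:

    # On the replica host (HV2): allow it to receive replication traffic
    Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
        -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation 'D:\ReplicaVMs'

    # On the primary host (HV1): enable replication for the VM and start the initial copy
    Enable-VMReplication -VMName 'FileServer01' -ReplicaServerName 'HV2' `
        -ReplicaServerPort 80 -AuthenticationType Kerberos
    Start-VMInitialReplication -VMName 'FileServer01'

When the time comes to move a workload, Start-VMFailover performs the manual failover on the replica host.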

Hardware requirements

Microsoft designed Hyper-V 3.0 to be much more forgiving in terms of its ability to run on low-end hardware. So what are the minimum hardware requirements for running Hyper-V 3.0?

Running Hyper-V on commodity hardware isn’t really a matter of ensuring your host server meets a certain minimum hardware requirement. It’s more about making sure the host server has sufficient resources to run your intended virtualized workloads and deliver acceptable performance. As such, hardware requirements are going to vary depending on how you plan to use the host server.

As you consider the workloads you need to run, you may discover there’s no way to run all those workloads on commodity hardware. Keep in mind, however, that no one ever said you had to host all your VMs on a single server. A Hyper-V 3.0 cluster can contain up to 64 hosts. While you probably won’t build a cluster that big, you might discover it’s less expensive to purchase several commodity boxes than to buy even one enterprise-grade server.

Hardware planning

When you’re planning for the possibility of using commodity hardware, remember that all the VMs running on the host will share the same hardware. That’s going to be a bigger job than the average PC can handle. Still, you may be able to get away with using high-end PC hardware; gaming PCs often make excellent host machines in smaller environments.

Memory is the No. 1 factor that will affect your ability to efficiently run VMs. Fortunately, memory is cheap. The trick is finding a system board that supports plenty of it; there are consumer-grade system boards that support 32GB or even 64GB of RAM. RAM also comes in different speeds, so be sure to purchase the fastest RAM your system board will allow.

Probably the second biggest factor that will affect a Hyper-V host server’s performance is disk I/O. Enterprise-class servers typically use expensive 15K RPM drives (or solid-state drives) arranged in large storage arrays. That really isn’t an option if you’re trying to keep hardware costs down.

A better approach is to build the storage yourself from commodity SATA drives. Purchase a case with plenty of drive bays and lots of room for fans. High-end system boards often include at least six SATA ports, and you can use those ports to build a SATA array.

If you decide to build a SATA array, there are a few things to keep in mind. First, try to use the SATA ports built into the system board instead of a PCI SATA controller. This approach will usually give you better performance. If your system board does include six SATA ports, don’t build a six-drive SATA array. You should plan on building a five-drive array. Use the sixth SATA port for the boot drive. That way you can run the host OS from a dedicated drive and give your VMs full access to your SATA array.

Pay attention to the BIOS settings for the SATA ports. Some system boards are configured by default to operate two of the SATA ports in EIDE mode. If you leave this setting enabled, you’ll end up with a very slow array. You should run all the SATA ports in Advanced Host Controller Interface (AHCI) mode.

You’re likely to need a DVD drive when installing the OS, but because the number of drives in your array directly affects VM performance, don’t use up a SATA port on one. Instead, consider investing in a USB DVD drive you can move from machine to machine.

Finally, don’t set up the SATA array at the BIOS level. Instead, use Windows Storage Spaces to create the array. This will give your array some extra data-integrity features you won’t typically get at the hardware level (at least not on consumer-grade hardware).

Also, don’t skimp on the CPU. Try to get a CPU with a clock speed of 3 GHz or higher and at least eight cores.
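Coming back to the storage array, here is a hedged sketch of how you might build it with Storage Spaces, assuming the five data drives are the only pool-eligible disks in the machine. The pool and volume names are placeholders, and a parity layout is an alternative to the mirror shown here if capacity matters more than write performance:

    # Gather every drive that is eligible for pooling (the boot drive won't be)
    $disks = Get-PhysicalDisk -CanPool $true

    # Create a storage pool from those drives
    New-StoragePool -FriendlyName 'VMPool' -StorageSubSystemFriendlyName 'Storage Spaces*' `
        -PhysicalDisks $disks

    # Carve a mirrored virtual disk out of the pool
    New-VirtualDisk -StoragePoolFriendlyName 'VMPool' -FriendlyName 'VMData' `
        -ResiliencySettingName Mirror -UseMaximumSize

    # Bring the new disk online, partition it and format it for VM storage
    Get-VirtualDisk -FriendlyName 'VMData' | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel 'VMData' -Confirm:$false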

One last recommendation is to install as many NICs as your system board will allow. You should reserve one NIC for host communications, but combine the remaining NICs into a NIC team, even if the NICs don’t match.

Windows Server 2012 includes built-in NIC teaming that Hyper-V 3.0 can take advantage of. A NIC team is essentially a collection of NICs that function as one logical NIC. The team provides the aggregate bandwidth of all the NICs in it, which means greater bandwidth than any one individual NIC could ever provide.
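A hedged sketch of that configuration, assuming three spare adapters named NIC2 through NIC4 (check Get-NetAdapter for the real names) and one adapter left alone for host communications:

    # Team the spare adapters into a single logical NIC
    New-NetLbfoTeam -Name 'VMTeam' -TeamMembers 'NIC2','NIC3','NIC4' `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

    # Bind a Hyper-V virtual switch to the team and dedicate it to VM traffic
    New-VMSwitch -Name 'TeamSwitch' -NetAdapterName 'VMTeam' -AllowManagementOS $false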

The advantage of this approach is that you can configure all of your VMs to use the NIC team instead of having to assign individual NICs. If you have high-demand VMs, you can use bandwidth throttling to prevent those VMs from depriving your other VMs of the bandwidth they need.
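As a rough example (the VM names are placeholders, and MaximumBandwidth is expressed in bits per second):

    # Cap a busy VM at roughly 500 Mbps so it can't starve its neighbors
    Set-VMNetworkAdapter -VMName 'FileServer01' -MaximumBandwidth 500000000

    # Or give a VM a guaranteed relative share of bandwidth instead of a hard cap.
    # This relies on the virtual switch using weight-based minimum bandwidth, which
    # is the default for a switch that isn't SR-IOV enabled.
    Set-VMNetworkAdapter -VMName 'SQL01' -MinimumBandwidthWeight 50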

Hyper-V 3.0 is remarkably flexible. You can use it as an enterprise-class hypervisor. You can also use it in a small business setting to virtualize your servers and run them on commodity hardware.


Brien Posey, MVP, is a freelance technical author with thousands of articles and dozens of books to his credit. You can visit Posey’s Web site at brienposey.com.