Virtualization: Top 10 Virtualization Best Practices

As virtualization continues to mature as a technology, so too do the best practices for applying it. If your virtual infrastructure is falling short, take a run down this checklist.

Wes Miller

Virtualization has gone from being a test lab technology to a mainstream component in datacenters and virtual desktop infrastructures. Along the way, virtualization has occasionally received a “get out of jail free” card, and virtual deployments have not had the same degree of disciplined IT practice applied to them as would be expected of actual physical machines. This is a mistake.

If you had an unlimited budget, would you let everyone in your organization order a new system or two and hook it up to the network? Probably not. When virtualization first appeared on the scene, unlimited and unmanaged proliferation was kept in check by the fact that there was actually a cost associated with hypervisor applications. This provided some line of defense against rogue virtual machines in your infrastructure. That is no longer the case.

There are several free hypervisor technologies available, both Type 1 and Type 2. Anyone in your organization with Windows installation media and a little free time can put a new system up on your network. When virtual machines are deployed without the right team members knowing about it, an unpatched, unmanaged system can become an easy target for new zero-day exploits, ready to be used as a beachhead against the business-critical systems on your network.

Virtual systems should never be underappreciated or taken for granted. Virtual infrastructures need to have the same best practices applied as actual physical systems. Here, we will discuss 10 key best practices that should always be on your mind when working with virtual systems.

1. Understand both the advantages and disadvantages of virtualization

Unfortunately, virtualization has become a solution for everything that ails you. To rebuild systems more rapidly, virtualize them. To make old servers new again, virtualize them. Certainly, there are many roles virtualization can and should play. However, before you migrate all your old physical systems to virtual systems, or deploy a new fleet of virtualized servers for a specific workload, you should be sure to understand the limitations and realities of virtualization in terms of CPU utilization, memory and disk.

For example, how many virtualized guests can a given host support, and how many CPU cores, and how much RAM and disk space, is each one consuming? Have you taken the storage requirements into account, keeping system, data and log storage separate as you would for a physical SQL Server machine? You also need to take backup, recovery and failover into account. The reality is that failover technologies for virtual systems are in many ways just as powerful and flexible as failover for physical systems, perhaps even more so. It truly depends on the host hardware, the storage and, most of all, the hypervisor technology being used.
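
To make those questions concrete, here is a minimal sketch in Python that tallies per-guest allocations against a host’s capacity. The host and guest figures are illustrative assumptions, not sizing recommendations; in practice, the numbers would come from your hypervisor’s management tools.

```python
# Minimal capacity sketch: tally hypothetical guest allocations against a host.
# All figures are illustrative assumptions, not recommendations.

HOST = {"cores": 16, "ram_gb": 64, "disk_gb": 2000}

GUESTS = [
    {"name": "sql01",  "vcpus": 4, "ram_gb": 16, "disk_gb": 500},
    {"name": "iis01",  "vcpus": 2, "ram_gb": 4,  "disk_gb": 80},
    {"name": "exch01", "vcpus": 4, "ram_gb": 16, "disk_gb": 400},
]

def check_capacity(host, guests):
    totals = {k: sum(g[k] for g in guests) for k in ("vcpus", "ram_gb", "disk_gb")}
    # CPU is commonly overcommitted; RAM and disk much less safely so.
    print(f"vCPU overcommit ratio: {totals['vcpus'] / host['cores']:.2f}:1")
    for key in ("ram_gb", "disk_gb"):
        status = "OVER" if totals[key] > host[key] else "ok"
        print(f"{key}: {totals[key]} of {host[key]} allocated [{status}]")

check_capacity(HOST, GUESTS)
```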

2. Understand the different performance bottlenecks of different system roles

You have to take into account the role each virtual system plays when deploying it, just as with physical servers. When building out SQL Server, Exchange or IIS machines, you wouldn’t use the exact same configuration for each one; their CPU, memory and storage requirements are extremely different. When scoping out configurations for virtual systems, take the same design approach as with your physical deployments. With virtual guests, this means taking the time to understand your server and storage options, and avoiding both over-burdening a host with too many guests and setting up conflicting workloads whose CPU and disk demands are at odds.
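
One way to codify that design approach is to keep per-role sizing profiles and sanity-check proposed placements against them. Below is a minimal sketch assuming invented profiles and an arbitrary I/O “weight” ceiling; the real figures would come from your own benchmarking and vendor guidance.

```python
# Sketch: role-based sizing profiles and a naive co-placement check.
# Profiles and the I/O ceiling are illustrative assumptions only.

PROFILES = {
    "sql":      {"vcpus": 4, "ram_gb": 16, "io_weight": 3},  # disk-bound
    "exchange": {"vcpus": 4, "ram_gb": 12, "io_weight": 2},
    "iis":      {"vcpus": 2, "ram_gb": 4,  "io_weight": 1},  # mostly CPU/network
}

MAX_IO_WEIGHT_PER_HOST = 4  # assumed ceiling before disk contention bites

def placement_warnings(roles_on_host):
    io_load = sum(PROFILES[r]["io_weight"] for r in roles_on_host)
    if io_load > MAX_IO_WEIGHT_PER_HOST:
        yield (f"I/O weight {io_load} exceeds {MAX_IO_WEIGHT_PER_HOST}: "
               "disk-bound guests will contend for the same spindles")

# Two SQL guests plus Exchange on one host trips the I/O check.
for warning in placement_warnings(["sql", "sql", "exchange"]):
    print(warning)
```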

3. You can’t over-prioritize the management, patching and security of virtual systems

Two new virus outbreaks have hit in just this past week alone. The reality is that far too many virtual systems are patched late, not patched at all, improperly managed or ignored from a security policy perspective. Recent studies point to the significant blame USB flash drives bear for the spread of viruses, especially targeted threats. Too many physical systems are unpatched and insecure, and virtual systems, especially rogue ones, pose an even larger threat. The ability to undo system changes adds to the problem, since it makes rolling back patches and security signatures far too easy, even unintentionally. Limit the proliferation of virtual machines, and make sure to include every virtual machine in your patching, management and security policy infrastructures.
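
A useful first step is to diff your hypervisor’s guest inventory against what your patch-management system knows about. The sketch below uses hypothetical inventories; in practice, each set would be pulled from the respective management API.

```python
# Sketch: flag guests your patch-management system has never seen.
# Both inventories are hypothetical; real data would come from your
# hypervisor's management API and your patching tool's database.

hypervisor_inventory = {"sql01", "iis01", "exch01", "testbox-wes"}
patch_mgmt_inventory = {"sql01", "iis01", "exch01"}

for name in sorted(hypervisor_inventory - patch_mgmt_inventory):
    print(f"WARNING: guest '{name}' is not enrolled in patch management")
```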

4. Don’t treat virtual systems any differently than physical systems unless absolutely necessary

The last point should have begun the thought process, but it bears repeating: you shouldn’t treat virtual systems any differently than physical ones. In fact, when it comes to rogue systems, you may well want to treat them as hostile. They can become the bridge that malware uses to infiltrate your network.

5. Back up early, back up often

Virtual systems, like physical systems, should be included in your backup regimen. You can back up the entire virtual machine or just the data it contains; the latter approach can be far more valuable and flexible, because backing up an entire virtual machine takes considerable time and gives you few options for rapid recovery. Just as with your mission-critical physical systems, make sure you can recover rapidly and reliably. All too often, systems are backed up but never verified, which amounts to having no backup at all.
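
As a starting point for verification, here is a minimal sketch that records a checksum when a backup is written and re-checks it later. A matching hash only proves the backup file is intact, not that it is restorable, so periodic test restores remain essential; the paths shown are placeholders.

```python
# Sketch: record a checksum when a backup is written, then verify it later.
# Paths are placeholders. Intact != restorable: still do test restores.

import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(backup: Path, recorded_hash: str) -> bool:
    return sha256_of(backup) == recorded_hash

# Usage (hypothetical backup file):
# h = sha256_of(Path("/backups/sql01-full.vhd"))   # store h with the backup
# assert verify_backup(Path("/backups/sql01-full.vhd"), h)
```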

6. Be careful when using any “undo” technology

Virtual technologies often include “undo” functionality. Use it very carefully. This is another reason to be certain all virtual systems are included in your IT governance work: it’s far too easy to have a disk revert a day or a week, re-exposing a vulnerability you just rushed out to patch and becoming the gateway to infecting the rest of your network.
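
One way to keep undo functionality under governance is to flag snapshots or undo disks old enough that reverting would roll back past recent patch cycles. A minimal sketch, assuming invented snapshot records and a seven-day policy threshold:

```python
# Sketch: flag snapshots old enough that reverting would discard recent patches.
# Snapshot records and the age threshold are illustrative assumptions.

from datetime import datetime, timedelta

MAX_SNAPSHOT_AGE = timedelta(days=7)  # assumed policy threshold

snapshots = [
    {"guest": "iis01", "created": datetime.now() - timedelta(days=30)},
    {"guest": "sql01", "created": datetime.now() - timedelta(days=2)},
]

for snap in snapshots:
    age = datetime.now() - snap["created"]
    if age > MAX_SNAPSHOT_AGE:
        print(f"{snap['guest']}: snapshot is {age.days} days old; reverting "
              "would roll back any patches applied since it was taken")
```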

7. Understand your failover and your scale-up strategy

Virtualization is often touted as the vehicle for achieving perfect failover and perfect scale-up. In reality, this depends entirely on your host hardware, hypervisor, network and storage. Work with your vendors to understand how well each role you’ve virtualized can scale per guest. You also need to know how well it can fail over; specifically, how long guests may be unavailable during a failover, and what their responsiveness and availability will be during the switch.
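
You can put a number on that unavailability yourself. The sketch below polls a TCP port while you trigger a planned failover and reports how long the service was unreachable; the host name and port are placeholders for one of your own guests.

```python
# Sketch: measure the outage window during a failover test by polling a TCP
# port. Run it, trigger a planned failover, and note the gap it reports.

import socket
import time

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def measure_outage(host: str, port: int, poll_interval: float = 0.5) -> float:
    """Block until one full down/up cycle is observed; return outage seconds."""
    down_since = None
    while True:
        if port_open(host, port):
            if down_since is not None:
                return time.monotonic() - down_since
        elif down_since is None:
            down_since = time.monotonic()
        time.sleep(poll_interval)

# Placeholder guest and port (SQL Server's default):
# print(f"unavailable for {measure_outage('sql01.example.com', 1433):.1f}s")
```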

8. Control virtual machine proliferation

This is a critical aspect, yet one of the hardest to enforce. There are several hypervisors that are completely free, and even with a commercial hypervisor, it’s far too easy to “clone” a guest. This can result in a multitude of problems:

  • **Security:** Errantly cloned systems can end up improperly secured, or can conflict with the system from which they were “cloned.”
  • **Management:** Cloned systems can wind up unmanaged by policy and unpatched, and can cause naming or identity conflicts that lead to instability.
  • **Legal:** Until recently, Windows couldn’t always determine that it was being virtualized or, more importantly, that it had been silently duplicated as a new guest (once or many times). The ease of duplication has fed guest proliferation and a more laissez-faire attitude toward piracy. That is a dangerous attitude, and one your IT organization should block through policy at a minimum.

It’s too easy to clone systems. Make sure your IT organization knows the risks of undue guest duplication. Only deploy new virtual machines in compliance with the same policies you would for physical systems.
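
As a detection aid, duplicated guests often betray themselves through shared identifiers. The sketch below scans a hypothetical inventory for guests sharing a MAC address or BIOS UUID; real rows would come from your hypervisor’s management interface.

```python
# Sketch: surface likely clones via duplicate MAC addresses or BIOS UUIDs.
# The inventory rows are hypothetical example data.

from collections import defaultdict

inventory = [
    {"name": "iis01",      "mac": "00:15:5D:01:02:03", "uuid": "a1b2"},
    {"name": "iis01-copy", "mac": "00:15:5D:01:02:03", "uuid": "a1b2"},
    {"name": "sql01",      "mac": "00:15:5D:0A:0B:0C", "uuid": "c3d4"},
]

def duplicates(rows, key):
    seen = defaultdict(list)
    for row in rows:
        seen[row[key]].append(row["name"])
    return {value: names for value, names in seen.items() if len(names) > 1}

for key in ("mac", "uuid"):
    for value, names in duplicates(inventory, key).items():
        print(f"possible clones sharing {key} {value}: {', '.join(names)}")
```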

9. Centralize your storage

A leading cause of virtual machine proliferation is hosts that are physically spread throughout your organization. If you saw an employee walk up to a physical server with an external hard disk and a CD, you might wonder what was going on. With virtual systems, copying an entire guest (or two) off a host is entirely too easy, and that ease of duplication drives proliferation and can also result in data loss. If you can’t physically secure your virtual machine hosts, the guests’ virtual or physical disks should be encrypted to prevent the loss of confidential data. By placing your virtual machine hosts and storage in central, secure locations, you can minimize both proliferation and the potential for data loss.
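
A simple audit can confirm that guest disks actually live on your central storage. The sketch below checks virtual disk paths against a list of approved storage roots; all paths are placeholders for your own environment.

```python
# Sketch: verify every virtual disk sits under an approved central storage
# root. All paths are placeholders. Requires Python 3.9+ for is_relative_to.

from pathlib import Path

APPROVED_ROOTS = [Path("/san/vmstore"), Path("/san/vmstore-dr")]

def is_centralized(disk_path: Path) -> bool:
    return any(disk_path.is_relative_to(root) for root in APPROVED_ROOTS)

disks = [
    Path("/san/vmstore/sql01/sql01.vhd"),
    Path("/home/wes/testbox/testbox.vhd"),  # a guest copied to a workstation
]

for disk in disks:
    if not is_centralized(disk):
        print(f"WARNING: {disk} is outside approved central storage")
```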

10. Understand your security perimeter

Whether you’re developing software or managing systems, security should be part of your daily strategy. As you consider how to manage and patch your physical systems, always include virtual systems as well. If you’re deploying password policies, are they being enforced on your virtual systems, too? The risk is real, so be prepared to answer how virtual systems will be governed and how the risk of their being cloned will be mitigated. Treat virtual machines as hostile unless they are part of your IT governance plan. Many hypervisors now include a free or trial version of antivirus software, due to the potential for security threats to cross between host and guests.
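
To answer the password-policy question concretely, you can diff each guest’s effective settings against the baseline you enforce on physical systems. The settings below are hypothetical; real values would come from your directory or configuration-management tooling.

```python
# Sketch: audit per-guest password policy settings against a baseline.
# The baseline and per-guest settings are hypothetical example data.

POLICY = {"min_length": 12, "max_age_days": 90, "lockout_threshold": 5}

guests = {
    "sql01":       {"min_length": 12, "max_age_days": 90,  "lockout_threshold": 5},
    "testbox-wes": {"min_length": 6,  "max_age_days": 365, "lockout_threshold": 0},
}

for name, settings in guests.items():
    drift = {key: (settings.get(key), expected)
             for key, expected in POLICY.items()
             if settings.get(key) != expected}
    if drift:  # (actual, expected) pairs for every out-of-policy setting
        print(f"{name}: out of policy -> {drift}")
```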

Here Now and Here for the Future

Virtualization promises to become an even more significant IT component in the future. The best thing you can do is to find a way to work with it and manage it today, rather than ignoring it and hoping it will manage itself. You need to enforce the same policies for your VMs that you enforce for your physical systems. Know where virtualization is used in your organization, and highlight the risks to your team of treating virtual machines any differently from physical systems.

Wes Miller

Wes Miller is the director of Product Management at CoreTrace (CoreTrace.com) in Austin, Texas. Previously, he worked at Winternals Software and as a program manager at Microsoft.