Dynamic memory can be a helpful feature, but you have to be careful configuring your virtual machines and your host server.
When it comes to hosting virtual workloads, there is perhaps no hardware resource as important to overall performance as physical memory. It’s essential to allocate memory in a way that ensures each virtual machine (VM) has the memory it needs, but without wasting memory in the process. Here are several key considerations for allocating memory for use with Microsoft Hyper-V.
Memory management for Hyper-V is something of an art form. You must ensure you provision each VM with an adequate amount of memory. At the same time, you must also avoid assigning more memory to a VM than is really necessary.
The first reason is obvious: allocating excessive memory to one VM limits the amount you can allocate to other VMs on the same server. Less obviously, allocating too much memory to a VM can sometimes impede its own performance.
Most new servers use non-uniform memory access (NUMA) memory. NUMA memory is designed to improve performance by assigning memory on a per-processor basis. Each block of dedicated memory is known as a NUMA node. A CPU can access its local NUMA node (the memory directly assigned to that CPU) more quickly than it can access a non-local NUMA node.
The versions of Hyper-V for Windows Server 2008 and 2008 R2 don’t directly support memory affinity on a per-NUMA-node basis. In other words, you can’t directly configure a VM to use a specific NUMA node. This capability will reportedly exist in the Windows Server 8 version of Hyper-V. Nevertheless, you can still take steps to reduce the chances of a VM using non-local NUMA nodes.
The trick is to calculate the size of each NUMA node. For example, suppose that your server is equipped with two octa-core processors and 128GB of RAM. You can calculate the NUMA node size by dividing the memory size (128GB) by the number of CPU cores (16). In this particular case, the size of a NUMA node would be 8GB.
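The calculation above is simple enough to sketch as a small helper (a hypothetical illustration of the arithmetic, not part of any Hyper-V API):

```python
def numa_node_size_gb(total_ram_gb: float, total_cores: int) -> float:
    """Estimate the NUMA node size by dividing physical RAM
    evenly across CPU cores, as described above."""
    return total_ram_gb / total_cores

# Two octa-core processors (16 cores) and 128GB of RAM:
print(numa_node_size_gb(128, 16))  # -> 8.0
```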
Hyper-V doesn’t yet let you assign a specific NUMA node to a specific VM. However, because you know this particular server has an 8GB NUMA node size, you can infer that any VM assigned more than 8GB will be guaranteed to use memory from multiple NUMA nodes. Limiting the memory assigned to a VM to 8GB or less (in this case) increases the chances that the VM will use memory from a single NUMA node, thereby improving performance.
NUMA nodes are not the only consideration when it comes to memory management. As you plan for the ways to use your host server’s memory, it’s critically important to account for virtualization-related overhead. There are two primary considerations regarding virtualization overhead. First, you must reserve some memory for the parent partition.
You’ll need to reserve at least 300MB for the hypervisor and 512MB for the host OS running on the root partition. However, most best practices guidelines state that you should reserve 2GB for the parent partition.
You shouldn’t use the host partition for anything other than Hyper-V (although you can run security and infrastructure software such as management agents, backup agents and firewalls). Therefore, that 2GB recommendation assumes you aren’t going to run any extra applications or server roles in the parent partition.
Hyper-V doesn’t let you allocate memory directly to the host partition. It essentially uses whatever memory is left over. Therefore, you have to remember to leave 2GB of your host server’s memory unallocated so it’s available for the parent partition.
The other thing to consider regarding virtualization overhead is that VMs use a small amount of memory for Integration Services and other virtualization-related processes. That amount of memory is somewhat trivial, so you won’t typically have to worry about assigning extra memory for that, unless you’re only planning on providing each VM with the bare minimum amount of memory.
VMs with 1GB or less of RAM use about 32MB of memory for virtualization-related overhead, plus about 8MB for every gigabyte of additional RAM. For example, a VM with 2GB of RAM would use 40MB (32MB plus 8MB) of memory for virtualization-related overhead. Likewise, a VM with 4GB of memory would lose 56MB to overhead (32MB plus 8MB for each of the three additional gigabytes).
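The 32MB-plus-8MB-per-additional-gigabyte rule can be expressed as a short function (an approximation for planning purposes, not an exact figure from Hyper-V):

```python
def vm_overhead_mb(assigned_ram_gb: int) -> int:
    """Approximate virtualization overhead: about 32MB for the
    first gigabyte of RAM, plus 8MB for each additional gigabyte."""
    return 32 + 8 * max(0, assigned_ram_gb - 1)

print(vm_overhead_mb(1))  # -> 32
print(vm_overhead_mb(2))  # -> 40
print(vm_overhead_mb(4))  # -> 56
```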
Windows Server 2008 R2 SP1 introduced a new dynamic memory feature that lets VMs consume memory dynamically based on the current workload. This also lets you over-commit the server’s physical memory to run more VMs than might otherwise be possible. Despite the benefits of dynamic memory, it’s important to adhere to some best practice guidelines to avoid starving your VMs of memory.
First, using dynamic memory isn’t always the best option. You can enable or disable dynamic memory on a per-VM basis. It’s important to enable dynamic memory only on the VMs that can really benefit.
One of the most important considerations is the workload on your VMs. If an application on a VM is designed to use a fixed amount of memory, it’s better to give that VM exactly the amount of memory it needs instead of using dynamic memory.
The same goes for memory-hungry applications. Some applications are designed to consume as much memory as is available. Such applications can quickly deplete a server of physical memory if they’re allowed to use dynamic memory. In this case, it’s better to assign VMs running these types of applications a fixed amount of memory.
Finally, a server’s performance can suffer if VMs attempt to use memory from multiple NUMA nodes. So if your server uses NUMA memory and performance is a major concern, you might be better off avoiding dynamic memory.
One of the most important concepts to understand regarding dynamic memory is startup RAM. When using dynamic memory, you must assign each VM a value for startup RAM. This value reflects the amount of physical memory the VM will initially use when it’s booted. More importantly, the startup RAM also represents the minimum amount of physical memory the VM will ever consume. A VM can’t decrease its memory usage below the startup RAM value.
That being the case, Microsoft recommends you avoid assigning a VM a large amount of startup RAM. It’s best to base the startup RAM on the OS the VM is running. Microsoft recommends 512MB startup RAM for VMs running Windows 7, Windows Vista, Windows Server 2008 and Windows Server 2008 R2. If your VMs are going to run Windows Server 2003 or Windows Server 2003 R2, Microsoft recommends 128MB of startup RAM.
For a VM to use dynamic memory, the OS running on that VM must support it. Windows XP, for example, doesn’t support dynamic memory. If you run Windows XP on a VM configured for dynamic memory, the OS will only be able to access the startup RAM.
Before you move on to other configuration tasks, it’s important to ensure that the combined startup RAM for all of your VMs doesn’t exceed the physical RAM installed on your server (less the 2GB reserved for the parent partition). Otherwise, you’ll need to either remove some VMs or add memory.
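A quick sanity check along these lines might look like the following sketch. The VM names, sizes and host capacity are made up for illustration; the 2GB parent-partition reserve follows the best-practice guideline discussed earlier:

```python
# Startup RAM assigned to each VM, in MB (illustrative values).
startup_ram_mb = {"DC01": 512, "SQL01": 4096, "WEB01": 2048}

physical_ram_mb = 16 * 1024    # a 16GB host, for example
parent_reserve_mb = 2 * 1024   # 2GB reserved for the parent partition

available_mb = physical_ram_mb - parent_reserve_mb
total_startup_mb = sum(startup_ram_mb.values())

if total_startup_mb > available_mb:
    print("Over-committed at startup: remove VMs or add memory.")
else:
    print(f"{available_mb - total_startup_mb}MB of headroom remains.")
```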
You might also want to adjust the maximum RAM value. This value represents the most physical memory a VM can use. By default, Hyper-V sets each VM’s maximum RAM to 64GB. You might want to set the maximum RAM to a lower value if you don’t require that much physical memory for some of your VMs.
The whole idea behind using dynamic memory is that it lets you over-commit memory. This lets your VMs access the memory they need when they need it. The big drawback to overcommitting any hardware resource is that you could end up depleting the resource. In the case of dynamic memory, it’s entirely possible for your VMs to consume all of the available physical memory and still need more.
The long-term solution to this problem is to ensure your server is equipped with enough memory to service your VMs’ requirements. However, a short-term solution is to prioritize memory usage.
Almost any host server has some VMs that are more important than others. Hyper-V lets you prioritize them so that in the event of a physical memory shortage, memory is allocated to the higher priority VMs first. You can prioritize a VM’s need for dynamic memory by adjusting its memory weight. VMs with a higher memory weight take precedence over VMs with lower memory weights.
The other setting you have to configure for each VM using dynamic memory is the memory buffer. The memory buffer setting controls how much memory each VM should try to reserve as a buffer. This value is expressed as a percentage. For example, if a VM is using 4GB of committed memory and the memory buffer is set at 50 percent, then the VM could consume up to 6GB of memory.
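Because the buffer is a percentage on top of committed memory, the target the VM tries to claim is easy to compute, as in this small sketch (a hypothetical helper illustrating the arithmetic above):

```python
def target_memory_mb(committed_mb: int, buffer_percent: int) -> int:
    """Memory the VM will try to claim: committed memory plus
    the configured buffer percentage on top of it."""
    return int(committed_mb * (1 + buffer_percent / 100))

# 4GB (4,096MB) committed with a 50 percent buffer -> up to 6GB:
print(target_memory_mb(4096, 50))  # -> 6144
```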
The memory buffer doesn’t guarantee the buffer memory will be available for a VM. It merely controls how much memory the VM should try to claim. It’s worth noting that because the memory buffer is expressed as a percentage, the amount of memory buffered changes in response to the amount of memory the VM is using at the moment. All VMs that use dynamic memory start out using a minimal amount of memory. They dynamically adjust their memory usage based on the pressure their workloads exert on memory.
The process of actually configuring a VM’s memory usage is simple. Open Hyper-V Manager and right-click the VM you want to configure (each VM’s memory is managed independently). Choose the Settings command from the shortcut menu. When the Settings dialog box appears, click Memory.
Hyper-V gives you the option of either allocating a static amount of memory to the VM or using dynamic memory (see Figure 1). If you choose the dynamic option, you can adjust the Startup RAM, Maximum RAM, Memory Buffer and Memory Weight directly through the Settings dialog box.
Figure 1 You can adjust the memory allocation for a virtual machine through the Settings dialog box.
If a host server has limited physical memory resources, there’s usually a tradeoff between using static and dynamic memory. Static memory typically offers better overall performance (assuming there’s adequate memory allocated). Dynamic memory might be tricky, but it typically permits a greater VM density.