Adam Bogobowicz and Dave Ohara

Contents

Step 1: Know Your Server Loads
Step 2: Plan for High Utilization
Step 3: Save Power
Step 4: Eliminate Waste
Step 5: Identify your Hyper-V SKUs
Step 6: Plan, Deploy, Monitor
Step 7: See the Forest, not the Trees
Moving Forward

Organizations of all sizes are focused on cutting costs by increasing the efficiency of their computing resources. Virtualization is a standard Green IT technique. But how do you quantify your energy efficiency efforts beyond saying, "I used virtualization to consolidate my servers"?

Server virtualization allows multiple operating systems to run on a single physical machine as virtual machines (VMs), consolidating workloads across multiple underutilized servers. But are you really achieving a performance per watt that is better than that of other enterprises?

Building a Hyper-Green virtualization server system is focused on taking the extra steps necessary to further reduce energy consumption. Your approach can start small; it can be comparable to switching from incandescent light bulbs to compact fluorescent light bulbs at home. Your next step could be analogous to turning off lights, automatically setting thermostats, and using devices such as Kill-A-Watt to identify practices and components that waste energy. The specific technologies are not as important as an awareness of the situation and a dedication to make things better. A zealous focus on efficiency and reducing waste has lasting effects.

In The Snowball, Warren Buffett's biography, Buffett discusses the idea of an "Inner Scorecard" that allows you to measure where you are meeting your personal goals and where you are falling short. The Inner Scorecard measures whether you are doing what is right versus doing what external forces or expedience would lead you to do. In the area of green services, doing what is right means focusing on performance per watt, not just on performance or on watts expended. The Microsoft Virtualization site can help you keep score of your energy savings by measuring energy consumption before and after virtualization and then using local energy prices to calculate savings. By focusing on reducing energy costs, you can ensure the success of your virtualization project.

In this article, we will give you seven steps to follow in building a Hyper-Green server virtualization system that is better than most. In addition, this article estimates the savings percentage that each of the seven steps can achieve in creating a Hyper-Green virtualization system. Even if you are not quite ready to go hyper-green, the seven steps will still provide a good framework for understanding where you are on the path to energy efficient computing.

Step 1: Know Your Server Loads

What is the ideal server for virtualization? A 2-socket or a 4-socket server? 2 GB DIMMs or 4 GB DIMMs? Local drives or network storage? So many options make it difficult to make the right decisions, and published tests can confuse you even more.

Consider automobiles. How you drive and the conditions of your environment can have a significant impact on fuel economy. The same is true for servers with varying environments and workloads. Some virtualization test labs will choose server loads that maximize a particular product's apparent potential savings. And even if a lab attempts to be neutral by using different loads and hardware, it will never replicate your server environment. The energy consumption in your organization will not match the consumption that a lab might have or that may occur in another enterprise with similar equipment and a different load. Therefore, to get a realistic picture of your energy consumption and savings, you need to determine your own loads and evaluate your strategy based on your conditions.

Most people focus on the efficiency of virtualization technologies, but there are situations where the speed of virtualization technology yields savings as well. Testing and development are frequently the first business functions to take advantage of virtualization technology. Using virtual machines, development staff can create and test a wide variety of scenarios in a safe, self-contained environment that accurately approximates the operation of physical servers and clients. One of the main benefits of using virtualization in this scenario is the speed at which environments can be set up and then wiped clean for a fresh start, or rolled back to an earlier checkpoint so that a series of tests can be repeated.

The Microsoft® Assessment and Planning (MAP) Toolkit makes it easy to assess your current IT infrastructure and determine the right Microsoft technologies for your IT needs. MAP is a powerful inventory, assessment and reporting tool that can run securely in small or large IT environments, without requiring the installation of agent software on any computers or devices.

Savings: Consider the following scenarios where you migrate to a virtual machine topology: In the first scenario, the migration results in randomly arranged and densely packed virtual machines (VMs). In the second scenario, the migration is planned carefully and results in an organized deployment with room for growth and changes. If you outsource your VM migration, which are you going to get? It is cheaper to use the first scenario because it requires fewer machines and uses all hardware to full capacity, but over time you may be in a worse state than you were with physical machines. Over the long run you could make a difference of 20 percent in performance per watt by understanding your server loads up front, measuring how your SLAs are met, and planning your migration based on this knowledge.

Step 2: Plan for High Utilization

One good thing about over-provisioning hardware is that you rarely run into resource constraints because you have plenty of processing power, RAM, storage, and network capacity. With virtualization server consolidation, component constraints can become an issue. You must evaluate whether you have enough RAM to run a VM without thrashing the disk, and you must question whether you have enough storage to accommodate your VMs and required backups.

With high utilization, you will run into weaknesses in your hardware choices more often. There was a reason why IT departments chose to deploy more hardware than necessary (besides being easy): it was a safe approach to ensure applications would keep running. But now that the focus is on energy savings, it is clear that too many applications are deployed on dedicated servers that sit idle whenever the application (and therefore the server) is not in use.

To meet maintenance and performance requirements, you should consider creating dedicated server roles. For example, you could create a Virtualized SharePoint Servers category, which would simplify maintenance and security and would allow you to fine-tune performance on the platform without having to consider any other application types. If you do run into performance problems, you'll know that it is not due to one server role affecting the performance of another. You might not be able to pack everything as densely as you could with mixed VM roles, but keep in mind that in the future, you will need to move VMs for more performance or storage. This is much easier to do and is more secure if you use dedicated server roles.

Performance and Resource Optimization (PRO) is a feature of Virtual Machine Manager. It helps you ensure that your virtual machine infrastructure is operating in an ideal and efficient manner. PRO uses rules and policies set by an administrator to dynamically respond to poor performance and failures of virtualized hardware, operating systems, and applications.

Savings: You might not be ready to operate your system at a high utilization level. For this reason, you might choose to run fewer VMs on each computer. However, if you push harder in this area and accept a bit more utilization, you can save approximately 10 percent of your energy consumption. There are monitoring tools available that can help you to increase utilization without incurring much risk.
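
To illustrate the trade-off, here is a minimal Python sketch that estimates how many additional VMs a host could absorb based on CPU-utilization samples. The sample data, the 70 percent ceiling, and the assumption that every VM contributes a similar load are all placeholders; substitute your own measurements and targets.

    # Rough headroom estimate from host CPU-utilization samples.
    # Assumes each additional VM adds roughly the same average load as the
    # existing VMs -- replace the sample data with your own measurements.

    def estimate_headroom(samples_pct, current_vms, target_ceiling_pct=70.0):
        """Return (extra VMs that fit under the ceiling, average utilization)."""
        avg = sum(samples_pct) / len(samples_pct)
        per_vm = avg / current_vms           # average load contributed per VM
        spare = target_ceiling_pct - avg     # utilization still available
        return max(0, int(spare // per_vm)), avg

    if __name__ == "__main__":
        hourly_cpu_pct = [22, 25, 31, 28, 35, 40, 38, 30]   # sampled host CPU %
        extra_vms, avg = estimate_headroom(hourly_cpu_pct, current_vms=6)
        print(f"Average utilization {avg:.1f}% -> room for about {extra_vms} more VMs")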

Step 3: Save Power

Power management is essential for laptop computers and is now standard on server processors. However, Hyper-V virtual machines have processor power management disabled. With multiple VMs running on one physical machine, different VMs would otherwise request different power states, so Hyper-V disables processor power management within the VMs. Does this mean that processor power management is irrelevant? No. Hyper-V still saves power by allowing the root partition to control the power management policies for the entire system. Any VM power policy settings have no effect. This is a side effect of a virtualized environment in which VMs do not interact with physical hardware.

Microsoft Windows Server 2008 R2 will provide new features for saving more power. R2 features power metering and budgeting in conjunction with hardware support, which will help provide the tools you need to set energy consumption goals, measure your progress against these goals, and save power throughout your environment. R2 provides other improvements, such as Intelligent Timer Tick Distribution, to help keep cores and processors in sleep states longer. (Timer interrupts are handled by a single processor.)

Performance tuning can improve system responsiveness as well as power savings. Minimizing background work, such as by using synthetic I/O devices and reducing timer ticks for VMs, reduces interrupt traffic and ensures that processor power management (PPM) benefits are maximized. You should follow the performance tuning steps for virtualized systems in Performance Tuning Guidelines for Windows Server 2008 on the WHDC Web site.

Every hardware component has a power footprint. RAM power consumption is second only to that of processors, especially when you consider the number of DIMMs and the amount of memory required for VM solutions. To measure power consumption, you should create a spreadsheet that tracks the power consumption of components. Better yet, ask your vendor for a spreadsheet that lists the power consumption of the components in your server. This will give you a good idea about whether a vendor can help you achieve a higher performance per watt by using low-power processors and fewer DIMMs, hard drives, and power supplies.
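
If you prefer a script to a spreadsheet as a starting point, the following Python sketch totals a per-component power budget for one candidate server configuration. All of the wattage figures are illustrative placeholders, not measurements; replace them with the numbers your vendor supplies.

    # Illustrative per-component power budget for a candidate virtualization host.
    # Every wattage below is a placeholder estimate -- substitute the figures
    # your hardware vendor provides for the parts you are actually quoting.

    components = {
        "low-power CPU (x2)":   2 * 60,
        "8 GB DIMM (x8)":       8 * 8,
        "SAS hard drive (x4)":  4 * 10,
        "power supply loss":    50,
        "fans and misc.":       40,
    }

    total_watts = sum(components.values())
    print(f"Estimated server power budget: {total_watts} W")
    for part, watts in components.items():
        print(f"  {part:22s} {watts:4d} W  ({watts / total_watts:.0%} of total)")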

Savings: Microsoft has estimated that you can achieve a 10% power savings by using power management. You can also disable the BIOS-based power management and use the operating system power management to maximize savings.
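
If you want to verify which power plan the root partition is actually using, a small script can query it. The following sketch simply shells out to the standard powercfg utility; the exact output format varies by Windows version, so treat the result handling as an assumption to confirm on your own systems.

    # Query the active power plan in the Hyper-V root partition via powercfg,
    # the built-in Windows power-management command-line tool. Output format
    # varies by OS version, so this sketch just prints the raw result.

    import subprocess

    def active_power_plan():
        result = subprocess.run(
            ["powercfg", "/getactivescheme"],
            capture_output=True, text=True, check=True
        )
        # Typical output: Power Scheme GUID: <guid>  (<plan name>)
        return result.stdout.strip()

    if __name__ == "__main__":
        print(active_power_plan())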

Step 4: Eliminate Waste

Windows Server 2008 features a Server Core installation option. Server Core offers a minimal environment for hosting a select set of server roles, including Hyper-V. It features a smaller disk footprint, memory profile, and attack surface. Therefore, we highly recommend that you use the Server Core installation option for Hyper-V servers.

Using Server Core in the root partition leaves additional memory for the VMs. But keep in mind that additional server roles installed on the server can adversely affect the performance of the virtualization server, especially if they consume significant amounts of CPU, memory, or I/O resources. Minimizing the server roles in the root partition is advised, and it offers additional benefits, such as reducing the attack surface and the frequency of updates.

Minimizing the background activity in idle VMs releases CPU cycles that can be used elsewhere by other VMs or saved to reduce power consumption. Windows guests typically use less than 1% of one CPU when they are idle. Here are some best practices for minimizing the background CPU usage of a VM:

  • Install the latest version of VM integration services.
  • Remove the emulated network adapter through the VM settings dialog box (use a synthetic adapter).
  • Remove unused devices such as the CD-ROM and COM port or disconnect their media.
  • Use Windows Server 2008 for the guest operating system.
  • Disable, throttle, or stagger periodic activity, such as backup and defragmentation when appropriate.
  • Review scheduled tasks and services enabled by default (a small script for listing them follows this list).
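
To make that last list item actionable, here is a small Python sketch that dumps a guest's scheduled tasks so you can spot periodic work worth disabling or staggering. It relies on the built-in schtasks utility; the CSV column names can differ between Windows versions, so treat the field handling as an assumption to verify.

    # List scheduled tasks inside a guest so periodic background work can be
    # reviewed, disabled, or staggered. Uses the built-in schtasks utility;
    # the CSV column names are assumptions that may vary by Windows version.

    import csv
    import io
    import subprocess

    def list_scheduled_tasks():
        out = subprocess.run(
            ["schtasks", "/query", "/fo", "csv", "/v"],
            capture_output=True, text=True, check=True
        ).stdout
        return list(csv.DictReader(io.StringIO(out)))

    if __name__ == "__main__":
        for task in list_scheduled_tasks():
            name = task.get("TaskName", "?")
            schedule = task.get("Schedule Type", "?")
            print(f"{name:60s} {schedule}")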

And what if you're looking for that last ounce of performance? In his All Topics Performance blog, Anthony Voellm recommends that you configure your VM to use a Non-Uniform Memory Access (NUMA) node. Voellm says "There are not many performance knobs in Hyper-V, which is by design. We really seek out-of-the-box performance. However, if you are looking for that last bit of performance from your Virtual Machines (VMs) and have already made a good selection for networking and storage, you might consider setting the Non-Uniform Memory Access (NUMA) node."

Savings: In this area, the savings are smaller. But you can still expect about 5% savings here.

Step 5: Identify your Hyper-V SKUs

What are your VM SKUs? What configurations are best for you? How much of your existing hardware can you repurpose as virtual machine servers, and how does this compare to new equipment purchases? The hardware considerations for Hyper-V servers generally resemble those of other Windows Server-based servers, but Hyper-V servers can exhibit increased CPU usage, consume more memory, and need more I/O bandwidth. Performance per watt should be your new focus as you consolidate servers, not just CPU metrics.

Are you focused on choosing 2-socket or 4-socket servers? Instead, think about RAM before processors. You are consolidating because of low processor utilization, so why focus on the processor now? To get your VMs running well, you need RAM. Therefore, evaluate components in the following order:

  • first, consider the amount of RAM
  • then the cost per GB
  • then the power per GB

And consider that different memory configurations have different costs as well as different power footprints.

Correct Memory Sizing: You should size VM memory as you typically do for server applications on a physical machine. You must have sufficient memory to handle the expected load at ordinary and peak times because insufficient memory can significantly increase response times and CPU or I/O usage. In addition, the root partition must have sufficient memory (leave at least 512 MB available) to provide services such as I/O virtualization, snapshot, and management to support the child partitions. A good standard for the memory overhead of each VM is 32 MB for the first 1 GB of virtual RAM plus another 8 MB for each additional GB of virtual RAM. This should be factored into your calculation of how many VMs to host on a physical server. The memory overhead varies depending on the actual load and amount of memory that is assigned to each VM.
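
That overhead rule of thumb translates directly into a capacity estimate. The short Python sketch below applies it to calculate how many identically sized VMs fit in a given amount of host RAM; the 512 MB root reserve comes from this section, while the host and VM sizes, and the assumption that all VMs are the same size, are simplifications for illustration.

    # Apply the memory-overhead rule of thumb from this section:
    #   overhead per VM = 32 MB for the first GB of virtual RAM
    #                     + 8 MB for each additional GB.
    # Assumes identically sized VMs, which is a simplification.

    def vm_overhead_mb(vm_ram_gb):
        return 32 + 8 * (vm_ram_gb - 1)

    def max_vms(host_ram_gb, vm_ram_gb, root_reserve_mb=512):
        usable_mb = host_ram_gb * 1024 - root_reserve_mb
        per_vm_mb = vm_ram_gb * 1024 + vm_overhead_mb(vm_ram_gb)
        return usable_mb // per_vm_mb

    if __name__ == "__main__":
        print(max_vms(host_ram_gb=32, vm_ram_gb=4))   # 7 VMs fit on a 32 GB host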

CPU Performance and Statistics: For best CPU performance, plan on one virtual processor per logical processor core. If you need more than 4 virtual processors, then a physical machine is appropriate for that load. Hyper-V publishes performance counters to help characterize the behavior of the virtualization server and break out the resource usage. The standard set of tools for viewing performance counters in Windows includes Performance Monitor (perfmon.exe) and Performance Logger (logman.exe), which can display and log the Hyper-V performance counters. The names of the relevant counter objects are prefixed with "Hyper-V."
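
If you would rather pull those counters from a script than from Performance Monitor, the following sketch samples the hypervisor's logical-processor usage with the built-in typeperf tool. The counter path shown is our assumption of the relevant "Hyper-V Hypervisor Logical Processor" counter name; confirm it against the counters actually published on your host.

    # Sample host CPU usage from the Hyper-V hypervisor counters in the root
    # partition using the built-in typeperf tool. The counter path below is
    # an assumption -- confirm the exact name in Performance Monitor.

    import subprocess

    COUNTER = r"\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time"

    def sample_hypervisor_cpu(samples=5, interval_s=1):
        out = subprocess.run(
            ["typeperf", COUNTER, "-sc", str(samples), "-si", str(interval_s)],
            capture_output=True, text=True, check=True
        ).stdout
        print(out)

    if __name__ == "__main__":
        sample_hypervisor_cpu()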

Storage I/O Performance: Hyper-V supports synthetic and emulated storage devices in VMs, but the synthetic devices generally offer significantly better throughput and response times and reduced CPU overhead. The exception is if a filter driver can be loaded and can reroute I/Os to the synthetic storage device. Virtual hard disks (VHDs) can be backed by three types of VHD files or by raw disks. The storage hardware should have sufficient I/O bandwidth and capacity to meet the current and future needs of the VMs that the physical server hosts. Consider these requirements when you select storage controllers and disks and choose the RAID configuration. Placing VMs with highly disk-intensive workloads on different physical disks will likely improve overall performance.

Networking: If you expect network-intensive loads, the virtualization server can benefit from having multiple network adapters or multiport network adapters. VMs can be distributed among the adapters for better overall performance. To reduce the CPU usage of network I/Os from VMs, Hyper-V can use hardware offloads, such as Large Send Offload (LSOv1) and TCPv4 checksum offload. For details about network hardware considerations, see the document "Performance Tuning for Networking Subsystem."

Virtual Machine Manager 2008 Configuration Analyzer (VMMCA) Update 1 is a diagnostic tool that lets you evaluate important configuration settings for computers that are serving or might serve in Virtual Machine Manager (VMM) roles or are performing other VMM functions.

You should always measure the CPU usage of the physical system through the Hyper-V Hypervisor Logical Processor performance counters. The statistics that Task Manager and Performance Monitor report in the root and child partitions do not fully capture the CPU usage. Be aware, however, that the performance counters inside a guest can give inaccurate results; you need to read the Hyper-V counters in the root. As Anthony Voellm notes in his blog:

"If you are doing performance analysis and using performance counters, be aware that the counters in the Guest Virtual Machine 'lie' so to speak. What you need to use are the Hyper-V Hypervisor Performance Counters in the Root to get Physical Processor usage."

For the full text of this blog entry, see his post "Hyper-V: Clocks lie... which performance counters can you trust?."

Figure 1 Visio Planning Tool

Savings: Matching your virtualization SKUs to your server load is one of the most important decisions you need to make. Microsoft recently presented a strategy of using high-end 4-processor machines in development and test labs and using 2-processor machines in production. The reasoning behind this strategy was that the development and test environments were more power constrained, and achieving higher performance per watt was part of the plan. Because matching SKUs to server load has a broad effect on many different areas, it represents as much as 25% of your savings opportunity.

Step 6: Plan, Deploy, Monitor

A whole book could be dedicated to this topic. But despite space constraints, we can still point you to a few resources that can help you plan and deploy your solution and help you know what to monitor in your virtualized environment. Microsoft has released Visio planning tools for configuring racks for virtualization and for monitoring the cooling system in your server area. Note that the amount of power required for cooling servers can be as much as 100% to 200% of the amount of energy used to run the servers. Monitoring is even more critical than usual because when servers are used to capacity, they generate more heat and put more stress on the cooling system.
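
To see how quickly cooling changes the totals, the short calculation below applies the 100 to 200 percent cooling overhead cited above to a hypothetical 10 kW rack; both the rack wattage and the overhead factors are placeholders for your own measurements.

    # Total power for a hypothetical 10 kW rack once cooling overhead is added.
    # The cooling factors reflect the 100%-200% range cited in this section;
    # the rack wattage is a placeholder for your own measured load.

    server_kw = 10.0
    for cooling_factor in (1.0, 1.5, 2.0):   # cooling as a multiple of server power
        total_kw = server_kw * (1 + cooling_factor)
        print(f"Cooling at {cooling_factor:.0%} of server power -> {total_kw:.0f} kW total")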

One particularly handy resource is the Visio planning tool (see Figure 1, Figure 2, and Figure 3). For more information about the Visio planning tool, go to the Microsoft Visio Toolbox site online.

Figure 2 Visio Planning Tool

Figure 3 Visio Planning Tool

The Microsoft Virtualization site can help you quantify and report your virtualization project. The site provides a tool that reports your energy savings for servers and cooling and provides additional reporting on carbon emissions. Figure 4 shows one of the tools available on the hyper-green Web site. When you use these tools in combination with your previous calculations (the server load data, the SKUs that you need, where VMs should be allocated), you can evaluate your energy consumption and opportunities for increased savings.

Microsoft has also published case studies based on work done by the Microsoft Enterprise Engineering Center and their 9 Hyper-V labs in 2008. The results of the studies were shared with the Microsoft product teams, and best practices based on these EEC engagements have been published. The Enterprise Engineering Center is currently upgrading its procedures to incorporate power monitoring throughout the lab. This new capability will allow organizations to continue investigating the most energy efficient hardware and software to run IT services.

The following three examples of the Enterprise Engineering Center's Hyper-V work have been published:

  • "Performance and capacity requirements for Hyper-V" This paper describes tests that were conducted to compare the performance of Microsoft Office SharePoint Server 2007 servers deployed as guests on a Hyper-V host against SharePoint servers deployed on physical computers. It provides recommendations for deploying Office SharePoint on Hyper-V.
  • "PeopleSoft Virtualization with Windows Server 2008 Hyper-V Technology" This paper introduces and describes the benefits of PeopleSoft virtualization using the new Windows Server 2008 Hyper-V technology and how to effectively implement this new technology into a PeopleSoft environment.
  • "Should You Virtualize Your Exchange 2007 SP1 Environment?" This explains that with the release of Microsoft Windows Server 2008 with Hyper-V and Microsoft Hyper-V Server 2008, a virtualized Exchange 2007 SP1 server is no longer restricted to the lab—it can be deployed in a production environment and receive full support from Microsoft.

Figure 4 Hyper-Green.com Virtualization Planning Tool

Savings: While researching this article, we heard many horror stories of virtualization projects that went bad during planning and rollout. For this reason, accurate planning and a careful rollout should account for 20% of your effort, and they provide the second greatest opportunity for savings after identifying your Hyper-V SKUs.

Step 7: See the Forest, not the Trees

How do you know that you are achieving Hyper-Green performance per watt? If you use a synthetic driver rather than an emulated driver, you will be more efficient because you will use fewer processor cycles and get higher performance. However, if you demonstrate your virtualization success based on CPU utilization, you would be tempted to use the inefficient emulated drivers, which decrease performance and increase CPU utilization. That would make the hardware and processor vendors smile, but it is the wrong thing to do.

Measure your performance per watt. Power summarizes the energy used by all components in a server and gives you an overall indicator of how hard the server is working.
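
Whatever throughput metric you choose, the calculation itself is simple. The sketch below compares two hypothetical configurations on performance per watt; the request rates and wattages are invented numbers purely for illustration, so substitute your own measured workload metric and wall power.

    # Compare two hypothetical configurations on performance per watt.
    # Throughput and power figures are invented for illustration only --
    # use your own measured workload metric and measured wall power.

    configs = {
        "before consolidation": {"requests_per_sec": 4000, "avg_watts": 2400},
        "after consolidation":  {"requests_per_sec": 3800, "avg_watts": 1100},
    }

    for name, c in configs.items():
        perf_per_watt = c["requests_per_sec"] / c["avg_watts"]
        print(f"{name:22s} {perf_per_watt:.2f} requests per second per watt")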

In this dynamic environment, the speed with which you take action can make the difference. If you postpone making decisions because you don't have all of the data that you think you need, you will be avoiding criticism, but you will be wasting money. Instead, you should be willing to take a risk, be an early adopter, and set the trend. Let your competition follow your lead. Demonstrate to others how much energy they can save by implementing a Hyper-Green strategy.

Savings: It is hard to gain perspective about when to act and when to wait for information, but being willing to take action is just as important as using power management. While it's hard to quantify, you'll likely get a 10% gain by being an early adopter.

Moving Forward

You may not be able to complete all of the tasks in the following list, but even if you complete a few of them, you will be on a better path. Here is a summary of the seven steps adding up to 100% of your potential savings:

  1. Know Your Server Loads - 20%
  2. Plan for High Utilization - 10%
  3. Save Power - 10%
  4. Eliminate Waste - 5%
  5. Identify Your Hyper-V SKUs - 25%
  6. Plan, Deploy, Monitor - 20%
  7. See the Forest, not the Trees - 10%

If you think you can save 75% of your energy costs overall, multiply 75% by the weight of a specific step to get an approximation of the energy savings from that step in your scenario. (For example, 75% times the 10% for Save Power equals 7.5% energy savings.)
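
Expressed as code, that arithmetic looks like the following. The 75 percent overall figure is the example assumption from the paragraph above, and the per-step weights are the ones listed in this article.

    # Approximate per-step energy savings: multiply an assumed overall savings
    # (75% here, as in the example above) by each step's weight from this article.

    step_weights = {
        "Know Your Server Loads":         0.20,
        "Plan for High Utilization":      0.10,
        "Save Power":                     0.10,
        "Eliminate Waste":                0.05,
        "Identify Your Hyper-V SKUs":     0.25,
        "Plan, Deploy, Monitor":          0.20,
        "See the Forest, not the Trees":  0.10,
    }

    overall_savings = 0.75   # assumed achievable overall energy savings

    for step, weight in step_weights.items():
        print(f"{step:32s} ~{overall_savings * weight:.1%} of energy costs")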

Over half of your savings will come from understanding your loads, good planning, and making sure that you are solving the right problems. If you take the time to look at these seven steps, you will be ahead of others who are distracted by unnecessary details.

Every step should fit in your overall plan. The results can be modified to support your scenario. After all, your scenario differs from everyone else's, and the only way that you can be a Hyper-Green performer is by developing a plan for your own needs.