System Center Virtual Machine Manager 2012: Virtual management
You can manage the fabric of your entire virtual infrastructure with the new System Center Virtual Machine Manager.
The premise and promise of server virtualization is that it abstracts your underlying hardware infrastructure from the actual workloads. It also pools your resources into a single fabric upon which you host the virtual machines (VMs) that run your business services. This virtual fabric is made up of compute, network and storage resources.
Microsoft System Center Virtual Machine Manager (VMM) 2008 R2 could manage the underlying infrastructure to some degree, but you had to initialize it from the outside. The big shift in VMM 2012 is that it can manage your entire infrastructure end to end. The next evolution of VMM is coming in the first service pack. (The material presented here is based on the VMM 2012 SP1 Community Technology Preview 2, so all information is subject to change with the final release.)
Because Hyper-V 3.0 in Windows Server 2012 has so many significant enhancements, VMM 2012 will correspondingly gain significant functionality in System Center 2012 SP1 (see Figure 1). This will bring compatibility with Windows Server 2012 and SQL Server 2012, along with many other improvements. The overall change in the first service pack for all System Center components is that they’ll run on Windows Server 2012 as well as manage machines running Windows Server 2012, and support SQL Server 2012 for the back-end database.
Figure 1 The VMM 2012 SP1 console has the familiar System Center look and feel.
VMM 2012 SP1 can manage Hyper-V hosts and clusters, as well as VMware ESX/ESXi 3.5 and 4.1, vCenter Server 4.1 (support for vSphere 5 is coming) and now also Citrix XenServer 6.0. Therefore, you can use one management solution for all your server virtualization hosts. This support also extends to host services that encompass multiple VMs on different platforms. You might encounter that type of configuration because of different networking or performance needs.
Using the Baseboard Management Controllers (BMCs)—more commonly known by names such as Dell Remote Access Card (DRAC), HP Integrated Lights-Out (iLO) and IBM Integrated Management Module (IMM)—on your new physical servers, you can give VMM 2012 a list of servers or IP addresses, boot these machines, deploy a Windows Server 2008 R2 image as a virtual hard drive (VHD), enable the Hyper-V role and set up boot from VHD. VMM 2012 SP1 will now deploy and cluster Windows Server 2012 in a similar manner.
This requires an existing Windows Deployment Server (WDS) where VMM 2012 SP1 will insert a Preboot Execution Environment (PXE) provider (it has to be first in the list) that responds to boot requests. The installation process will continue if the physical server has been authorized for deployment. If it’s a PXE request from another machine, it will be passed on to the next provider in WDS. This means you don’t need to have a separate deployment server for your Hyper-V hosts.
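The provider-ordering behavior described above can be sketched in a few lines. This is purely illustrative Python (not VMM or WDS code, and all names are hypothetical): the VMM provider sits first in the chain, answers only for servers authorized in VMM, and declines everything else so WDS falls through to the next provider.

```python
# Illustrative sketch of first-in-list PXE provider chaining (hypothetical
# names, not the actual VMM/WDS implementation).

AUTHORIZED_MACS = {"00:15:5d:01:0a:1b"}  # hosts staged for bare-metal deployment

def vmm_provider(mac):
    """Respond with a boot image only for servers authorized in VMM."""
    if mac in AUTHORIZED_MACS:
        return "boot\\vmm_winpe.wim"
    return None  # decline, so WDS tries the next provider in the list

def default_wds_provider(mac):
    """The regular WDS provider answers any remaining PXE request."""
    return "boot\\wds_default.wim"

def handle_pxe_request(mac, providers):
    """Walk the provider list in order; the first non-None answer wins."""
    for provider in providers:
        answer = provider(mac)
        if answer is not None:
            return answer
    return None

# The VMM provider must be first in the list for this scheme to work:
chain = [vmm_provider, default_wds_provider]
```

This is why a single WDS server can serve both Hyper-V host deployments and your regular OS deployments: unauthorized requests simply fall through.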
The BMCs need to support Intelligent Platform Management Interface (IPMI) 1.5 or 2.0. This is the most common protocol today and is used by HP iLO/iLO 2, Dell DRAC and Cisco Unified Computing System (UCS). For the future, Microsoft is banking on the uptake of System Management Architecture for Server Hardware (SMASH) 1.0 instead of Web Services for Management, or WS-MAN, because SMASH provides richer information. It will also support Data Center Management Interface (DCMI) 1.0.
If you need specific drivers for the WinPE image, you can inject them into a custom image once it's stored in the Library. For drivers Windows needs to have on the host, VMM 2012 SP1 will direct Plug and Play to pick the correct drivers during deployment from the Library. The host profile you create for a group of hosts will contain the basic settings, such as the domain to join. You can set more options using the Windows System Image Manager (WSIM) and customize an unattend.xml file used during the deployment.
If you’re troubleshooting a bare-metal deployment, be aware that any PXE boot attempts (whether successful or not) are logged in the VMM 2012 SP1 event log. Also, the WinPE image download to the hosts has a 10-minute timeout. You could potentially hit this if the server has multiple NICs and they’re all trying to PXE boot.
If you need to configure RAID volumes or update firmware before installation, you can do this with Generic Command Executions (GCEs) that can run any script. There are also post-installation GCEs you can use for things such as NIC teaming in Windows Server 2008 R2. If you have multiple DNS servers in your environment, take into account the replication delay between them or pre-stage your host names. Deployments will fail if DNS name resolution isn’t working.
Another factor you’ll need to consider is disk space. When booting from VHD, Windows will create a page file on the physical disk equal to the amount of RAM in the machine. A dynamic VHD will also expand to its full size during deployment, so make sure your hosts have enough disk space for both.
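The arithmetic behind that sizing rule is simple enough to capture in a helper. This is an illustrative sketch (the function name is hypothetical), following the rule above: full VHD size plus a page file equal to the machine's RAM.

```python
# Rough disk-space estimate (illustrative) for a boot-from-VHD Hyper-V host:
# the dynamic VHD expands to its full size during deployment, and Windows
# creates a page file on the physical disk equal to the amount of RAM.

def required_host_disk_gb(vhd_max_size_gb, ram_gb):
    """Minimum free physical disk space for a boot-from-VHD deployment."""
    return vhd_max_size_gb + ram_gb

# Example: a 40GB dynamic VHD on a host with 64GB of RAM needs
# at least 104GB of free physical disk space.
```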
Using VMM 2012 SP1 for setting up brand-new hosts isn’t the only option, of course (see Figure 2). If you already have a solid infrastructure for this based on the Microsoft Deployment Toolkit (MDT) or System Center Configuration Manager, there’s nothing wrong with continuing to use those toolsets. When creating the VHD you’ll use to deploy to bare-metal hosts, it’s best to use the MDT, as it uses a more standardized process for creating a reference image.
Figure 2 Adding one or more hosts to VMM 2012 SP1 is easy.
You can cluster these new Hyper-V hosts from within VMM 2012 SP1. In VMM 2012, hardware discovery through the BMC wasn’t very deep. VMM 2012 SP1 surfaces more information that should help, particularly for network cards. It performs hardware discovery by booting hosts using the VMM WinPE image, gathering the data (including the new Consistent Device Naming functionality) and returning it to VMM. When a motherboard has multiple NICs, this makes it easier to assign the right NIC to the right type of network in the host profile.
There’s another cool feature if you need to add hosts to a cluster at a later time. You can stand them up using the bare-metal deploy feature. Then simply drag and drop them into an existing cluster. Fill in some details on the screen that pops up and VMM 2012 SP1 will take care of all the details under the covers.
A cornerstone of a modern virtualized datacenter is host clustering. This provides high availability (HA) for unplanned downtime. You can have your VMs automatically restart on other hosts if a particular host fails. Clustering also facilitates smooth administration during planned downtime, letting you migrate VMs to other hosts before shutting down or patching a particular host.
Many deployments also use guest clustering, where VMs have cluster functionality between them. Windows Server 2012 introduces an “anti-affinity” function so you can specify that two particular VMs shouldn’t run on the same host, because running both nodes of a guest cluster on a single host would defeat the benefits of guest clustering. VMM 2012 SP1 supports this, and extends the functionality to non-clustered, standalone hosts it also manages.
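The placement rule this enforces can be sketched as follows. This is an illustrative Python sketch, not the VMM API; the group and VM names are hypothetical. Two VMs that share an anti-affinity group must never land on the same host.

```python
# Illustrative anti-affinity check (hypothetical names, not the VMM API):
# VMs in the same anti-affinity group must not run on the same host.

anti_affinity_groups = {
    "sql-guest-cluster": {"sql-vm1", "sql-vm2"},  # nodes of one guest cluster
}

def placement_allowed(vm, host_vms):
    """Return False if the target host already runs another VM that shares
    an anti-affinity group with the candidate VM."""
    for members in anti_affinity_groups.values():
        if vm in members and (members & (set(host_vms) - {vm})):
            return False
    return True
```

A placement engine would run this check for every candidate host before offering it as a target, for clustered and standalone hosts alike.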
If a host is experiencing an issue, right-click on it within the console and go to properties. Here you’ll be able to see where the problem lies. There’s even a helpful “Repair all” button that will attempt to fix it (see Figure 3).
Figure 3 The “Repair all” button can be helpful in fixing host issues.
When a host had a problem in VMM 2008 R2, one way of fixing it and updating the database with the latest information was to remove it from VMM and then add it again. If you do this in VMM 2012 and you’ve deployed VMs that are part of services to the host, these VMs will permanently lose the pointers to the service. The only way to avoid this is to migrate these VMs to another host prior to the remove/add operation.
Another common problem is connectivity issues between VMM 2012 and its agents. These aren’t just on host machines, but also on your library servers, update servers and in VMs that are part of services. This traffic goes over HTTP and by default uses port 5985, so check firewall rules (see Figure 4). For a list of the ports that VMM 2012 uses see the TechNet Library page, “Ports and Protocols for VMM.”
Figure 4 The VMM 2012 SP1 setup program tells you about the ports each component uses.
Once you have all your hosts and clusters up and running, it’s important to keep the infrastructure up-to-date. VMM 2012 provides cluster-aware patching where all VMs are live migrated to other hosts. Then the host is patched and rebooted. When the host is confirmed as working, the process continues with the next host.
You need to set up a Windows Server Update Services (WSUS) server or use one you’ve integrated with Configuration Manager. You’ll also need to configure update baselines and scan selected hosts for compliance with your baselines. Then you can initiate update remediation.
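Conceptually, the compliance scan is a set comparison between the baseline and what each host has installed. The sketch below is illustrative only (hypothetical data structures, not the WSUS or VMM object model), but it shows the shape of the result that drives remediation.

```python
# Illustrative baseline compliance scan (hypothetical data, not the actual
# WSUS/VMM object model): compare installed updates against the baseline.

baseline = {"KB2756872", "KB2770917"}  # updates the baseline requires

hosts = {
    "hyperv01": {"KB2756872", "KB2770917"},  # fully patched
    "hyperv02": {"KB2756872"},               # missing one update
}

def scan_compliance(baseline, hosts):
    """Return the set of missing updates per host; an empty set means
    the host is compliant with the baseline."""
    return {name: baseline - installed for name, installed in hosts.items()}
```

Remediation then only needs to act on hosts whose missing-update set is non-empty.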
You can install the WSUS server on the VMM 2012 server itself, but it’s recommended to house it on a separate server. The WSUS console, however, needs to be installed on each VMM 2012 server. If a particular patch is causing issues on some hosts, you can create exceptions for them. Windows Server 2012 brings native cluster-aware patching to the platform. It will be interesting to see if VMM 2012 SP1 will offload actions to the OS or keep the functionality in VMM itself.
VMM 2012 SP1 now offers the concept of logical networks. This means multiple datacenters can each have a network called “corpnet,” and your VMs are automatically connected to the right subnet and VLAN in each location. This will help you better manage your underlying network and VM connectivity.
Large-scale implementations still have to use VLANs to segregate VMs from each other and from the hosts, which can lead to a large management overhead. The theoretical limit on the number of VLANs is 4,095, but most switches top out somewhere between 1,000 and 1,500. One of the most exciting features coming in Hyper-V in Windows Server 2012 is network virtualization, which could replace the use of VLANs.
For IP addresses, each VM has a Customer Address as well as a Provider Address. The former is visible to the VM and the latter used on the physical network. This feature enables several scenarios. It will be a lot easier to move on-premises VMs to a hosting provider and back without having to alter IP addresses. Multi-tenant isolation will also be simplified.
In fact, multiple VMs running on the same host can have exactly the same IP address without ever seeing each other. There are new panes in the VMM 2012 SP1 interface dedicated to managing network virtualization, as well as controlling the Logical switch, which is new in Windows Server 2012. It lets you define centralized settings for the Hyper-V switch instead of having to manually define switch settings on each host individually (similar to how the VMware distributed switch works).
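The Customer Address/Provider Address split can be pictured as a per-tenant lookup table. This is an illustrative sketch (hypothetical tenant names and addresses, not the Hyper-V implementation): because lookups are keyed by tenant as well as address, two tenants can reuse the same Customer Address without conflict.

```python
# Illustrative network virtualization address mapping (hypothetical data,
# not the Hyper-V implementation): each VM keeps its Customer Address (CA)
# while the physical fabric routes traffic by Provider Address (PA).

mapping = {
    # (tenant, customer_address) -> provider_address
    ("contoso", "10.0.0.5"): "192.168.10.21",
    ("fabrikam", "10.0.0.5"): "192.168.10.22",  # same CA, different tenant
}

def provider_address(tenant, ca):
    """Resolve the physical-network address for a tenant VM's CA."""
    return mapping[(tenant, ca)]
```

This is also why moving a VM to a hosting provider doesn't require re-addressing: only the PA side of the mapping changes.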
Also new in VMM 2012 SP1 is the ability to extend the VMM console with add-ins. This will further enhance VMM as the one-stop shop for virtual datacenter management.
Another difference between VMM 2008 R2 and VMM 2012 is the ability to manage the underlying storage for VMs. VMM 2012 integrates with Storage Area Networks (SANs) for true end-to-end management of each VM’s VHD storage. The software stack that enables this integration relies on the Storage Management Initiative Specification (SMI-S). This stack was written for VMM 2012, and it’s being moved to Windows Server 2012 as the Windows Storage Management API (SMAPI). Windows Server 2012 on its own will be able to integrate with SAN storage (both iSCSI and Fibre Channel) in a way that’s never been possible before.
This functionality isn’t a replacement for SAN vendors’ own management tools. It’s a way to enable workloads running on Windows Server 2012 to consume shared storage in an efficient and automated way. While VMM 2012 retained the Virtual Disk Service (VDS) compatibility with earlier versions, the VMM 2012 SP1 release will deprecate this interface in favor of SMI-S. If you have existing SANs that rely on an older VDS driver, take this into account during your upgrade planning.
VMM 2012 refreshes the data provided by the SAN once every 24 hours by default. Because LUNs and other storage particulars are generally managed both outside VMM and through the console, this information could become outdated. For these reasons, you might want to configure a more frequent refresh.
A more frequent refresh places an unnecessary load on some SANs, however, so VMM 2012 SP1 brings the concept of notification. This lets the SAN tell VMM when changes have taken place, so the default refresh period is sufficient.
In addition to iSCSI and Fibre Channel SANs, VMM 2012 SP1 also will support Serial Attached SCSI (SAS)-attached storage arrays. These are popular in smaller environments. Check the TechNet Library page, “Configuring Storage Overview,” for a list of supported SANs. Dell EqualLogic PS series iSCSI SANs use dynamic targets, something VMM 2012 didn’t work with earlier. The VMM 2012 SP1 release fixes this problem.
One goal of the new Hyper-V in Windows Server 2012 was to virtualize large workloads. The new VHDX virtual hard disk format, which supports virtual disks of up to 64TB, is part of that solution. VHDX also brings reliability enhancements and lets Microsoft and third-party utilities write management information to the disk file.
Another big-ticket item in Windows Server 2012 Hyper-V is its ability to run VMs from Server Message Block (SMB) 3.0 file shares. This is something that hasn’t been possible before. The VMM 2012 SP1 release will support this. However, having a Hyper-V host that’s also a file server with shares storing VHDs is not supported.
VMM 2012 lets you classify storage depending on its I/O characteristics (see Figure 5). For example, bronze could be slower SATA drives in an older SAN, silver could be SAS drives in a newer array and gold could be solid-state drive storage. This helps you charge accordingly. You can also apply this classification to SMB 3.0 file share storage.
Figure 5 You can set up different storage quality levels, and use these classifications elsewhere in VMM 2012 SP1.
Intelligent Placement, the algorithm that rates different hosts on a scale of one to five stars for best placement, now also considers the storage classification you’ve defined. Clouds in VMM 2012 SP1 can be scoped to only let VMs be deployed to particular storage classifications.
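A rough sketch of how such a rating might work follows. This is purely illustrative Python (a hypothetical scoring function, not the actual Intelligent Placement algorithm): hosts outside the cloud's allowed storage classifications are excluded outright, and the remaining hosts get one to five stars based on free-RAM headroom.

```python
# Illustrative placement rating (hypothetical scoring, not the actual
# Intelligent Placement algorithm): exclude hosts whose storage tier the
# cloud isn't scoped to, then rate the rest from one to five stars.

def rate_hosts(hosts, allowed_storage, vm_ram_gb):
    """Return {host: stars} for eligible hosts; more headroom, more stars."""
    ratings = {}
    for name, info in hosts.items():
        if info["storage_class"] not in allowed_storage:
            continue  # cloud scoping excludes this host's storage tier
        if info["free_ram_gb"] < vm_ram_gb:
            continue  # host can't fit the VM at all
        headroom = (info["free_ram_gb"] - vm_ram_gb) / info["total_ram_gb"]
        ratings[name] = max(1, min(5, round(headroom * 5) + 1))
    return ratings

hosts = {
    "hv01": {"storage_class": "gold", "free_ram_gb": 48, "total_ram_gb": 64},
    "hv02": {"storage_class": "bronze", "free_ram_gb": 60, "total_ram_gb": 64},
}
```

The real algorithm weighs many more factors (CPU, disk and network load among them), but the shape is the same: filter on hard constraints such as storage classification first, then rank what's left.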
A sign of good SAN integration is the rapid provisioning of VMs using copy features directly on the SAN. This negates the need to transfer large VHD and VHDX files over the network. VMM 2012 offers this functionality through SAN Copy Capable Templates, but it’s limited to one VM per LUN. Windows Server 2012 supports Offloaded Data Transfer, or ODX, for SANs, and this means you can now quickly create multiple VMs per LUN.
Another great benefit of the new Hyper-V is built-in live storage migration. You can move a VM’s running state from one host to another when the VHD file is stored on shared storage in a cluster (as in Hyper-V in Windows Server 2008 R2). You can also move a running VM from one host to another along with its underlying hard drive files, even when there’s no shared storage between the hosts.
Shared storage isn’t necessary. In fact, the only thing you need is a network connection to enable migrations within the cluster, between clusters, or in and out of a cluster. You can also migrate between standalone hosts. VMM 2012 SP1 will support all these combinations on Windows Server 2012 Hyper-V hosts.
Troubleshooting Fabric Issues
Two tools are important for general troubleshooting in VMM 2012. The Virtual Machine Manager Configuration Analyzer can scan servers running VMM server roles as well as host servers. It compares the configuration against a set of rules and lets you know where a particular setup isn’t optimal.
For gathering detailed system status and configuration, the best tool is Microsoft Product Support Reporting Tool. This takes into account particular scenarios you’re considering. It can even scan physical-to-virtual source computers before you proceed with a conversion. For scenarios where you need to import or export VMs to and from VMM, there’s also a new Open Virtualization Format Windows PowerShell plug-in.
In production VMM 2012 deployments, there has been an issue of the VMM 2012 database drifting out of sync with the state of the infrastructure. This can happen when you use Hyper-V Manager or Cluster Manager to make changes instead of the VMM 2012 tools. The VM Recovery Tool released in June 2012 helps resolve these issues, but it would be better if it were included as a supported part of the product.
Because of the way VMM 2012 manages and orchestrates your entire server virtualization infrastructure, it’s critical that VMM 2012 is always available. VMM 2012 SP1 adds the ability to run multiple VMM 2012 SP1 servers in an active/passive cluster configuration. Along with a clustered SQL back-end database and file server clusters to store the VMM library shares, this helps provide a more resilient management infrastructure.
Overall, VMM 2012 SP1 is an extremely capable platform for managing your virtualization infrastructure. This release brings some really useful improvements. When you deploy VMM 2012 SP1 (and the rest of System Center 2012 SP1) along with Windows Server 2012 Hyper-V, you’ll have a capable and flexible private cloud platform.
Paul Schnackenburg has been working in IT since the days of 286 computers. He works part time as an IT teacher as well as running his own business, Expert IT Solutions, on the Sunshine Coast of Australia. He has MCSE, MCT, MCTS and MCITP certifications and specializes in Windows Server, Hyper-V and Exchange solutions for businesses. Reach him at firstname.lastname@example.org and follow his blog at TellITasITis.com.au.