You can configure VMM 2012, the new Microsoft Hyper-V management platform, to create a high-availability cloud service.
If the concepts of cloud computing and hosted services are still somewhere over your head, you can expect a dramatic increase in your comfort level when you upgrade to System Center Virtual Machine Manager (VMM) 2012. When installing the beta of the newest Microsoft Hyper-V management platform, you’ll find no fewer than four references to “clouds” and “services” prominently displayed across its ribbon’s buttons (see Figure 1).
Figure 1 The updated ribbon interface in VMM 2012.
On the other hand, if you’re excited by the cloud-computing techniques for managing virtual resources, then you probably can’t wait for the VMM 2012 “Create Cloud” button. While adding this button may seem like Microsoft chutzpah, it suggests there’s something more tangible to “the cloud” with VMM. It finally seems like the cloud is becoming something you can see, feel and experience. Just click the button.
The VMM 2012 cloud-management resources incorporate a collection of host groups, logical networks, virtual IP profiles, load balancers, storage and its classifications, and content libraries. VMM 2012 aligns these management elements with capability profiles and capacity values, tying computing capacity into a quantitative whole you’ll be able to parcel out to users and their projects.
However, getting there will take some effort. While the VMM Create Cloud button is indeed prominently displayed, don’t plan on clicking it right out of the box. It requires plenty of attention first. You’ll have to design and configure plenty of hardware before you’ll ever create that cloud.
Even creating that cloud is completely optional. Many IT shops won’t need one, getting all the functionality they require out of a “group of Hyper-V hosts.” If your virtualization needs are simple, then a fully realized cloud might be far too comprehensive and complex.
It’s exactly that “group of Hyper-V hosts” that got me thinking about VMM 2012 in terms of simplicity. Everyone loves virtualization, but being responsible for everything IT means needing dead-simple virtual tools first. Only after your virtual simplicity needs are met will the more automated features of VMM 2012 become relevant.
This need for simplicity is particularly important considering the limitations many of you Jack-of-all-trades (JOAT) IT guys found with VMM 2008 and VMM 2008 R2. Let’s ignore the cloud for a minute and face the small-environment reality. Even simple virtualization today requires high availability (HA). Consolidating virtual machines (VMs) onto virtual hosts demands protection against host failures. It also calls for load balancing when VMs misbehave.
Accomplishing these tasks with VMM 2008 and VMM 2008 R2 was far from trivial. In fact, it was quite difficult. You needed experience with Windows Failover Clustering (WFC) technologies that most JOATs didn’t possess. That difficulty in simply building a Hyper-V cluster often presented a big hurdle to small-environment Hyper-V adoption.
WFC is by no means a poor solution. The general-purpose design of WFC just made it particularly challenging for use with Hyper-V. Too many of its configurations required attention, and too many could easily be improperly configured. WFC as a Hyper-V clustering solution was simply too complex at a time when most needed something simple.
Microsoft does listen, and in an exceedingly smart move the company chose to obscure the complexity of WFC by simply “skinning it over” beneath VMM 2012. All the activities involved in constructing and managing your Hyper-V clusters are now built into VMM. The process involves four major steps, the first three of which set up the constituent components—servers, networking and storage—to create what’s now called the Fabric (see Figure 2).
Figure 2 Creating the Fabric with VMM.
You can think of the Fabric as a logical manifestation of the hardware that eventually will become your cluster and your cloud. Hyper-V servers are a piece of that Fabric. Other pieces include Library Servers for storing content like VM and configuration templates, applications and other configuration-controlled bits you’ll use within your virtual environment. PXE Servers and Update Servers now build Hyper-V hosts from bare metal and keep them patched. Your VMware vCenter Servers and VMM servers will round out that list.
Abstracting networking is another function of the Fabric. It combines logical networks with Hyper-V virtual networks to define IP address assignments and route traffic. Logical networks can also provision static addresses for servers.
VMM 2012 can provision IP addresses using combinations of IP ranges, MAC address pools and virtual IP templates. While not specifically tested for this article, IP load-balancer support is also a part of the VMM 2012 beta release.
Things get interesting with storage—the third Fabric element. Managing and provisioning storage in virtual environments has long been painful. It requires careful coordination between storage, virtual and network administrators. A VM needing a LUN required a storage person to provision and mask it and often a network person to route it before it made its way to the virtual platform.
Storage connections within virtual platforms have historically and notoriously been virtual host-specific. They required each LUN to be carefully exposed to every host where a VM might go. Miss one and bad things happened.
Automating that process requires automating this level of coordination. This is an activity that even today is not widely agreed upon by storage vendors. VMM 2012 delves into new territory in an attempt to improve the storage provisioning experience.
VMM places heavy focus on the Microsoft Storage Management Service. This service can indeed automate storage assignments across cluster and cloud, but only with storage devices supporting the Storage Management Initiative Specification (SMI-S). Only four storage arrays currently support this technology.
This number will likely grow prior to the official release of VMM 2012. It will have to, considering that essentially all of the VMM 2012 storage automations require SMI-S storage. Storage arrays lacking SMI-S will still work, but they’ll require manual configuration on each host, just like the old days. They’ll also miss out on the Fabric flexibility of VMM.
The entire conversation about the VMM 2012 Fabric means little if clustering its Hyper-V hosts remains painful. While the Fabric abstraction lets you provision resources more easily, does that same level of abstraction help you combine “just a couple of Hyper-V hosts" into a fully functioning cluster?
The process does feel somewhat improved. The first step is to add Hyper-V hosts as resources (see Figure 3) in the Fabric Servers node. These servers will eventually become a Host Group—a collection of hosts gathered together for management purposes. A Host Group doesn’t have to be a cluster. In fact, it won’t be as you initially add your Hyper-V hosts.
Figure 3 Creating a Hyper-V Host Group.
Windows Remote Management (WinRM) actually adds the hosts. If any Group Policy Objects (GPOs) have adjusted WinRM from its out-of-the-box configuration, the process may be inhibited. In the VMM 2012 beta, I experienced an Error 421 attempting to add hosts because a GPO made a single configuration change to WinRM—a change intended to release some of its restrictions. If you experience the same error, make sure you’re not inadvertently adjusting the WinRM configuration via any local or Group Policy.
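Before adding hosts, it’s worth dumping each candidate’s effective WinRM settings from an elevated command prompt. A quick sketch using the built-in winrm tool; settings pushed by Group Policy are typically annotated with a GPO source tag in the output, which is exactly the condition that tripped my Error 421:

```powershell
# Dump the effective WinRM configuration on a candidate Hyper-V host.
# Values controlled by Group Policy are typically annotated with a
# GPO source tag; any such entry means a policy is overriding the
# local configuration VMM expects to find.
winrm get winrm/config
```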
With WinRM satisfied, adding two Hyper-V hosts and creating a Host Group is a simple process. The Add Resource Wizard identifies a discovery scope for candidate Hyper-V hosts (see Figure 4). Identified hosts are then added into a Host Group for management.
Figure 4 Configuring a Discovery Scope for adding Hyper-V Hosts.
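For those who prefer scripting, the same step can be sketched with the VMM PowerShell module. The cmdlet names come from the VMM 2012 beta documentation; the host, group and Run As account names here are hypothetical, so adjust them for your environment:

```powershell
# Sketch: add two Hyper-V hosts into a new Host Group.
# Assumes a Run As account named "HostAdmin" already exists in VMM.
$runAs = Get-SCRunAsAccount -Name "HostAdmin"
$group = New-SCVMHostGroup -Name "Lab Hosts"
Add-SCVMHost -ComputerName "hyperv01.contoso.local" -VMHostGroup $group -Credential $runAs
Add-SCVMHost -ComputerName "hyperv02.contoso.local" -VMHostGroup $group -Credential $runAs
```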
In a production environment, configuring VM storage and networking for those hosts would be the next step. My environment doesn’t run on one of the four supported SMI-S SAN devices (EMC Symmetrix, EMC CLARiiON CX, HP StorageWorks Enterprise Virtual Array and NetApp FAS), so I was unable to validate the Fabric storage automation.
Clusters still require shared storage, so I had to create connections the old-fashioned way, connecting each host’s iSCSI Initiator to an exposed SAN LUN. One LUN stored the cluster’s witness disk, and the other would eventually house VMs. Similar to the traditional cluster-creation process, you must perform this step on each host individually. For more details on this process, see “Simple Clustering with Hyper-V.”
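On each host, that manual connection can be made with the built-in iscsicli utility. This is a sketch; the portal address and target IQN are illustrative examples, not values from my lab:

```powershell
# Connect this host's iSCSI Initiator to the SAN the old-fashioned way.
iscsicli QAddTargetPortal 192.168.1.50     # register the SAN's portal
iscsicli ListTargets                       # list the target IQNs it exposes
iscsicli QLoginTarget iqn.1992-08.com.example:storage.lun1
# Repeat on every prospective cluster node, then bring the disks
# online and format them once, from a single host.
```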
My hosts were already connected to the appropriate subnet, so there was nothing more required to construct the cluster. That said, creating one or more Logical Networks will eventually become important for abstracting IP resources and delivering them to VMs. Logical Networks in VMM 2012 are part of the Fabric. Once created there, they’re associated with the Virtual Networks that are properties of each host. This association is an automated activity by default. You can adjust that automation under the Network Settings console within the Settings node (see Figure 5).
Figure 5 You can adjust automation in Network Settings.
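As a sketch of what that Fabric networking setup looks like in the VMM 2012 beta’s PowerShell module (cmdlet names per the beta documentation; the network name, subnet and address range are hypothetical):

```powershell
# Define a Logical Network, tie a subnet/VLAN to it, and carve out
# a static IP pool VMM can hand to VMs and services.
$ln   = New-SCLogicalNetwork -Name "Lab LAN"
$vlan = New-SCSubnetVLan -Subnet "192.168.1.0/24" -VLanID 0
$def  = New-SCLogicalNetworkDefinition -Name "Lab LAN - Site A" `
          -LogicalNetwork $ln -SubnetVLan $vlan
New-SCStaticIPAddressPool -Name "Lab Pool" -LogicalNetworkDefinition $def `
  -Subnet "192.168.1.0/24" `
  -IPAddressRangeStart "192.168.1.100" -IPAddressRangeEnd "192.168.1.199"
```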
With server, storage and networking resources configured, it’s time to set up the cluster by selecting Fabric | Servers | Create | Hyper-V Cluster. The six-page Create Cluster Wizard asks only for the nodes to join, the cluster’s IP address, and the storage and Virtual Networks to associate.
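The wizard has a scripted equivalent as well. A sketch, with the cmdlet name taken from the VMM 2012 beta documentation (verify it against your build) and hypothetical node names and addresses:

```powershell
# Create a Hyper-V cluster from hosts VMM already manages.
$runAs = Get-SCRunAsAccount -Name "HostAdmin"
$nodes = Get-SCVMHost | Where-Object { $_.Name -match "hyperv0[12]" }
Install-SCVMHostCluster -ClusterName "LabCluster" -VMHost $nodes `
  -ClusterIPAddress "192.168.1.60" -Credential $runAs
```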
Of these, the wizard page for configuring cluster storage (see Figure 6) is perhaps the most perplexing. Microsoft clearly intends to move Hyper-V toward an SMI-S-centered world of storage-provisioning automation, and the provisioned storage the cluster can use should appear on that page. Because I created the storage manually, however, the wizard simply selected the smallest disk as my cluster’s witness disk.
Figure 6 Configure cluster storage.
This automatic assumption of cluster-ready storage is based on the LUN name, a unique identifier shared by all attached hosts to keep track of LUNs. If you’ve configured LUNs appropriately, you should have a similar experience as you create your cluster. One assumes (in part because the 350-page beta documentation tells you) that this screen’s options are far more configurable when storage supports SMI-S. Two more clicks and my Hyper-V cluster was created.
Does this cluster-creation process achieve the goal of simplicity? Potentially, though the per-host storage connection process remains unchanged for those of us lacking SMI-S-equipped SAN hardware. A fully empowered Microsoft Storage Management Service should ease this process somewhat.
One can argue the real value in the VMM 2012 “skinning over” of the WFC complexity lies not in creating clusters, but managing them over the long haul. With previous versions, working with highly available VMs sometimes required steps in VMM and other times in the WFC console. Obscuring those activities into a single interface presents the opportunity for far fewer cluster-impacting mistakes. Add Microsoft’s greatly improved failover and load-balancing technologies, and it makes one wonder if VMM 2012 might have finally landed Microsoft a seat at the virtual platform table.