
Provide cost-effective storage for Hyper-V workloads by using Windows Server

Published: January 22, 2014

Updated: February 7, 2014

Applies To: System Center 2012 R2, Windows Server 2012 R2

Who is this guide intended for? Service providers (hosters) that offer Infrastructure-as-a-Service (IaaS) and large organizations that are setting up private clouds.

How can this guide help you? You can use this solution guide to understand the high-level design and implementation for one particular file server-based storage solution for Hyper-V compute clusters. Other solutions are possible, but we don’t describe them here.

The solution uses Storage Spaces with storage tiers, a Scale-Out File Server cluster, and easily managed Server Message Block (SMB) file shares to create a software-defined storage solution that maximizes storage performance, reduces costs, and scales compute resources and storage independently.

The following diagram illustrates the problem and scenario that this solution guide is addressing.

Diagram: Storage for virtualized workloads (a generic storage solution)

Note
We’re still refining this solution, so check back in the coming weeks for updates. Also make sure to look over the Challenges for this solution section to see some of the difficult areas where we and our hardware partners are doing ongoing work. For a list of recent changes to this topic, see the Change History section of this topic.


This section describes the scenario, problem statement, and goals for this solution guide.

Scenario

In this scenario, we’re assuming that you’re either a medium-sized hosting provider offering managed services (including infrastructure as a service), or a large organization looking to set up private clouds. You’re providing enterprises the ability to move an increasing variety of their workloads to the cloud, hosted on Hyper-V virtual machines. But these new workloads come with a staggering amount of data…

Problem statement

As you’re no doubt painfully aware, storage represents one of the biggest expenses for hosting cloud services. Data requirements keep growing, and while hard disk prices are falling, you’ve probably been purchasing an increasing number of solid state drives (SSDs) in an attempt to increase performance. The overall effect is that storage continues to be expensive to acquire and operate.

Your existing storage options involve expensive storage area networks (SANs) that use a Fibre Channel fabric, though you might also consider iSCSI in instances when performance isn’t critical. While these options can provide flexible storage configurations, they have some of the following drawbacks:

  • Fibre Channel (and even iSCSI) SANs are quite expensive.

  • SANs can be complex to set up and maintain.

As such, the overall problem that you want to solve is:

  • How can you provide resilient and high-performance storage for your Hyper-V hosts while keeping costs down?

Organization goals

Basically you’re looking for a storage solution that provides the following:

  • Continuous availability - You need to provide remote storage that is continuously available to keep downtime to an absolute minimum.

  • Scalable storage - You need to provide hundreds of terabytes of storage with high levels of throughput to the thousands of virtual machines that you want to host (this solution provides roughly 200-800 TB of capacity for 1,000-8,192 virtual machines with around 100 GB per virtual machine).

  • High performance - You need storage that can provide great performance for each virtual machine and service.

  • Efficient management - You need efficient and powerful management tools that help you set up and manage the entire cloud platform solution, which consists of hundreds of disks, and dozens of server nodes.

  • Low cost - You need to keep storage from consuming all of your budget.

This section defines one solution that we recommend for the problem and goals described above. This solution focuses on the storage portion of a cloud platform that consists of the following three parts:

  • Compute - Tenant workloads are hosted on a compute cluster running Hyper-V virtual machines.

  • Storage - Virtual machines are stored on a high-performance file server cluster.

  • Management - The compute and file server clusters are managed by a management cluster.

The following diagram illustrates the storage portion of this solution:

Diagram: Windows Server-based storage for virtual machines solution architecture

The following table lists the elements that are part of this solution design and describes the reason for the design choice.

 

Solution design element | How it supports this solution

Multiple JBOD Enclosures

Multiple just-a-bunch-of-disks (JBOD) enclosures house low-cost industry standard Serial Attached SCSI (SAS) hard disks (HDDs) and solid state disks (SSDs) without the expense of SAN devices.

File servers running Windows Server 2012 R2

The JBOD enclosures are connected to standard four-node file server clusters running Windows Server 2012 R2 using inexpensive (non-RAID) SAS controllers.

Clustered storage pools

All disks in the enclosures are added to clustered storage pools using Storage Spaces, obviating the need to manage individual disks.

Storage spaces

Virtual disks called storage spaces are created from free space in the storage pools. These storage spaces provide software-defined resiliency levels—in this solution we use three-way mirrors that provide high performance while preserving data in the event that two disk failures occur.

Storage tiers

Storage spaces are created with storage tiers that automatically move frequently accessed data to SSD storage and infrequently accessed data to hard disk (HDD) storage, combining the performance of SSDs with the capacity of HDDs.

Failover Clustering

Failover Clustering is set up on Windows Server file servers so that if one file server fails, the storage pools it’s hosting can fail over to other nodes of the cluster. The compute cluster and management nodes also use Failover Clustering so that virtual machines can fail over to other nodes.

Unified CSV namespace and Scale-Out File Server

By using cluster shared volumes (CSV) and creating a clustered file server role with the Scale-Out File Server option, all cluster nodes can simultaneously write to the same storage, increasing performance and availability.

Continuously available file shares

Continuously available file shares hosted on the scale-out file server let you store Hyper-V virtual machine configuration files and virtual hard disks in easy-to-manage, remotely accessible file shares without sacrificing performance or availability.

Hyper-V

Hyper-V enables you to create and manage a virtualized computing and management environment by using virtualization technology that is built in to Windows Server.

System Center Virtual Machine Manager

You can manage all virtual machines by using System Center Virtual Machine Manager, running on the management cluster.

Windows Server Update Services

You can use Windows Server Update Services, running on the management cluster in conjunction with Cluster-Aware Updating, Virtual Machine Manager, and optionally System Center Configuration Manager to deploy software updates to all nodes and virtual machines on the management and compute clusters.

System Center Operations Manager

You can monitor this solution by using System Center Operations Manager, running on the management cluster.

To design the hardware and software configuration for each cluster in this solution, see the Provide cost-effective storage for Hyper-V workloads by using Windows Server: planning and design guide.

Here are some of the challenges involved with this solution as well as some strategies to address them.

  • Compatibility issues between JBODs, HBAs, physical disks, and network interface cards

    To minimize compatibility issues, install the exact firmware and driver versions that are listed as approved in the Windows Server Catalog for the devices. If no version is listed, contact the hardware vendor to determine which firmware and driver version to use with Storage Spaces and Failover Clustering.

    Also run the Validate a Configuration Wizard and resolve every issue prior to setting up each cluster. For more information, see Validate Hardware for a Failover Cluster.

  • Difficulty completely erasing previous Storage Spaces and Failover Clustering information from JBODs and physical disks

    This isn’t usually a problem with new hardware, but if you’re using existing hardware to test the configuration, use the cmdlets in the Storage Windows PowerShell module to completely erase all Storage Spaces and Failover Clustering data from the physical disks and JBODs before setting up the solution. In some cases power cycling the JBODs can help remove persistent reservation info from the devices.

    Tip
    See Completely Clearing an Existing Storage Spaces Configuration for a script that can help completely erase everything from a Storage Spaces configuration.

  • Difficulty finding JBODs and HBAs that support enough SAS ports to connect two SAS cables between each node and JBOD

    Ideally you’d connect each node in the file server cluster to all JBODs with two SAS cables to maximize throughput and provide redundant paths, but not all servers and JBODs can support eight SAS ports (for a four-node, four-JBOD configuration).

    In some instances you can use four SAS connectors per node to connect to six-port JBODs and then use the extra two ports per JBOD to connect the JBODs to each other, providing a redundant path. However, this configuration is more sensitive to firmware issues, so make sure to check with the hardware vendor to find out if this configuration is supported and which firmware to use.

    You can also scale the solution down to three file server cluster nodes and three JBODs, using JBODs with six SAS ports. Standalone SAS switches might also be workable, but they add complexity and cost, and haven’t been tested as part of this solution.

  • Large scale of solution

    This solution requires a significant hardware investment to set up for testing purposes. You can work around this by starting with a smaller solution for testing. For example, you could use a file server cluster with two nodes and two JBODs, a simpler management cluster, and fewer compute nodes. When you’re comfortable with the solution in your lab, you can add nodes and JBODs to the file server cluster, though you’ll have to recreate the storage spaces to ensure that data is stored across all enclosures with enclosure awareness support.

You can use the steps in this section to implement the solution. Make sure to verify the correct deployment of each step before proceeding to the next step.

  1. Design your solution and purchase certified hardware

    Use the Provide cost-effective storage for Hyper-V workloads by using Windows Server: planning and design guide to plan and design your solution based on hardware certified for use with Storage Spaces and Failover Clustering.

  2. Rack and cable all hardware

    Hook up your file server cluster, management cluster, compute cluster, and the network switches that they connect to. Don’t connect this hardware to any external networks yet.

  3. Update all firmware

    Update the firmware for your JBODs, disks, servers, network switches, and HBAs to the certified versions as you bring hardware online.

  4. Deploy Windows Server 2012 R2 on the management cluster

    Install Windows Server 2012 R2 with the Server Core installation option on the management cluster to reduce the number of software updates that apply to the servers (assuming that you’re not using an existing management cluster). Use a laptop plugged into the management network to remotely configure all nodes, or install Windows Server with the GUI installation option.

  5. Install Hyper-V and create virtual machines for AD DS, DNS, and DHCP on the management cluster

    Install the Hyper-V server role and then use Hyper-V Manager or Windows PowerShell to create a virtual machine on one node of the management cluster for AD DS, DNS, and DHCP. This virtual machine isn’t highly available (these services replicate and load-balance without clustering), and you should store the operating system virtual hard disk (.vhdx) file on the local hard disk of one of the nodes. Repeat this two more times on two other nodes so that you have three virtual machines on three separate nodes. You’ll create more virtual machines after setting up Failover Clustering on the management cluster later in the setup procedure.

    For more information, see Install the Hyper-V role and configure a virtual machine.

    Note
    After setting up this solution, you can optionally create highly available virtual machines running AD DS, DNS, and DHCP and retire the stand-alone virtual machines created in this step. Doing so can make management more logical as all virtual machines are highly available, and stored on the file server cluster.

  6. Deploy AD DS, DNS, and DHCP

    If you’re installing a new management cluster, install AD DS on each of the virtual machines (three domain controllers) and create a new forest for your server clusters, with Active Directory-integrated DNS zones, and DHCP scopes for the storage network and the management network.

    For more information, see Install Active Directory Domain Services (Level 100) and Step-by-Step: Configure DHCP for Failover.

  7. Set up the file server cluster

    Use the following steps to set up the file server cluster:

    Note
    Virtual Machine Manager can quickly create a scale-out file server from the four bare-metal nodes of your file server cluster. The only problem is that you probably want to store the virtual hard disk files for Virtual Machine Manager on the file server cluster that isn’t yet set up. You can work around this chicken-and-egg problem by installing Virtual Machine Manager in a non-highly available configuration on the management cluster, using it to set up the file server cluster, and then setting up Virtual Machine Manager again in a highly available configuration (stored on the file server cluster).

    1. Install Windows Server 2012 R2

      Install Windows Server with the Server Core installation option on the nodes of the file server cluster, with the operating system installed on the local hard disk of each node.

    2. (Optional) Wipe existing Storage Spaces and Failover Cluster configuration data

      If your JBODs and servers were previously used for something else, completely erase all Storage Spaces and Failover Clustering data from the physical disks and JBODs. For a script that can help completely erase everything (and we do mean everything, so be careful!) from a Storage Spaces configuration, see Completely Clearing an Existing Storage Spaces Configuration.
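      A minimal sketch of the cleanup, assuming you run it from one file server node and that no clustered resources still depend on the old pools (this permanently destroys all data on the disks):

```powershell
# DESTRUCTIVE: removes all Storage Spaces metadata from every visible disk.
Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false
Get-StoragePool -IsPrimordial $false | Remove-StoragePool -Confirm:$false
Get-PhysicalDisk | Reset-PhysicalDisk    # clears pool metadata from each disk
```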

    3. Validate physical disks and enclosures

      Check all physical disks to make sure that they’re healthy, show the correct MediaType, and are eligible for pooling. Also confirm that the JBODs are reporting enclosure information properly.

      For a script that can validate your physical disks and enclosures and perform some performance and health checks, see Storage Spaces Physical Disk Validation Script.
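      For a quick manual check with the Storage module cmdlets (in addition to the validation script), something like the following can surface unhealthy or unpoolable disks:

```powershell
# List every disk with the properties that matter for pooling.
Get-PhysicalDisk |
    Sort-Object MediaType |
    Format-Table FriendlyName, MediaType, HealthStatus, CanPool, Size -AutoSize

# Confirm that each JBOD reports enclosure information.
Get-StorageEnclosure | Format-Table FriendlyName, HealthStatus, NumberOfSlots
```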

    4. Create clustered storage pools

      Validate and optimize the cluster networking configuration, labeling each network (for example, storage network and management network), and then create three clustered storage pools with four SSDs and 16 HDDs from each of the four JBODs, for a total of 80 disks per pool.

      For detailed steps to set up the failover cluster and create storage pools, see Deploy Clustered Storage Spaces.
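      Creating one of the three clustered pools might look like the following sketch. The pool name is illustrative, and in practice you’d select the specific 80 disks (4 SSDs and 16 HDDs from each JBOD) by enclosure rather than taking the first 80 eligible disks, so that each pool spans all four enclosures.

```powershell
# Hypothetical example: create one clustered storage pool from poolable disks.
$disks = Get-PhysicalDisk -CanPool $true | Select-Object -First 80
New-StoragePool -FriendlyName "Pool1" `
                -StorageSubSystemFriendlyName "Clustered Storage Spaces*" `
                -PhysicalDisks $disks
```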

    5. Create a Scale-Out File Server

      Next create a clustered file server role with the Scale-Out File Server option.

      For more information, see Deploy Scale-Out File Server.
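      With the failover cluster already built, creating the role takes a single cmdlet; the role name shown here is illustrative.

```powershell
# Create the Scale-Out File Server role on the existing file server cluster.
Add-ClusterScaleOutFileServerRole -Name "SOFS01"
```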

    6. Create the witness disk for the file server cluster

      Use Server Manager or the New-VirtualDisk cmdlet to create a 3 GB two-way mirror space without storage tiers for use as the witness disk for the file server cluster, and then configure the cluster quorum.

      For more information, see Configure the cluster quorum.
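      As a sketch (pool, disk, and resource names are illustrative):

```powershell
# Hypothetical example: a small two-way mirror (no tiers) as the witness disk.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Witness" `
                -ResiliencySettingName Mirror -NumberOfDataCopies 2 -Size 3GB
# After adding the disk to the cluster (Add-ClusterDisk), configure the quorum:
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Virtual Disk (Witness)"
```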

    7. Create storage tiers, storage spaces, partitions, volumes, and CSVs

      Create your storage spaces according to your design, and then create one partition, one volume, and one CSV per storage space.
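      A sketch of this step for one storage space follows; the tier sizes and names are illustrative and depend on your design.

```powershell
# Define the SSD and HDD tiers in the pool.
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Create a three-way mirror space that spans both tiers.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Space1" `
                -ResiliencySettingName Mirror -NumberOfDataCopies 3 `
                -StorageTiers $ssd, $hdd -StorageTierSizes 400GB, 10TB

# Then initialize the disk, create one partition and one volume, and add the
# volume as a CSV, for example with Initialize-Disk, New-Partition,
# Format-Volume, and Add-ClusterSharedVolume.
```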

    8. Create continuously available file shares for the management cluster virtual machines

      Create one continuously available SMB file share per CSV used by the virtual machines on the management cluster, and grant full control permissions to the computer accounts of each node of the management cluster, the SYSTEM account, and the Domain Administrators group.

      For more information, see Step 3: Create an SMB file share
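      A sketch of one such share follows. The share name, path, and domain accounts are illustrative, and the folder also needs matching NTFS permissions for the same accounts.

```powershell
# Hypothetical example: a continuously available share for management VMs.
New-Item -ItemType Directory -Path "C:\ClusterStorage\Volume1\VMs"
New-SmbShare -Name "MgmtVMs" -Path "C:\ClusterStorage\Volume1\VMs" `
             -ContinuouslyAvailable:$true `
             -FullAccess "CONTOSO\MGMT1$", "CONTOSO\MGMT2$", "CONTOSO\MGMT3$",
                         "CONTOSO\MGMT4$", "NT AUTHORITY\SYSTEM",
                         "CONTOSO\Domain Admins"
```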

  8. Set up the management cluster and the rest of the management virtual machines

    Use the following steps to set up Failover Clustering on the management cluster and create highly available virtual machines for the rest of your management and infrastructure services (you already set up AD DS, DNS, and DHCP in stand-alone virtual machines). Most virtual machines are highly available virtual machines, but for some services you might want to use guest clustering to create a cluster between virtual machines.

    1. Install Failover Clustering and set up the Hyper-V cluster

      Use the following topic to create the management cluster and configure Hyper-V to support highly available virtual machines: Deploy a Hyper-V Cluster.
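      In Windows PowerShell, the cluster creation itself might look like this sketch (node names and the cluster address are illustrative); run validation first and resolve every issue it reports.

```powershell
# Validate the nodes, then create the management cluster.
Test-Cluster -Node "MGMT1", "MGMT2", "MGMT3", "MGMT4"
New-Cluster -Name "MgmtCluster" -Node "MGMT1", "MGMT2", "MGMT3", "MGMT4" `
            -StaticAddress 10.0.1.50
```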

    2. Set up Cluster-Aware Updating

      Set up Cluster-Aware Updating to make it easy to update the cluster while minimizing or eliminating downtime. For more information, see Cluster-Aware Updating overview.
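      Enabling self-updating mode might look like the following sketch; the cluster name and schedule are illustrative.

```powershell
# Add the CAU clustered role and schedule self-updating runs.
Add-CauClusterRole -ClusterName "MgmtCluster" `
                   -DaysOfWeek Tuesday -WeeksOfMonth 2 -Force
```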

    3. Deploy SQL Server

      Deploy SQL Server to support Virtual Machine Manager. For more information, see the following topics:

    4. Deploy Virtual Machine Manager

      Deploy Virtual Machine Manager on a guest cluster. Virtual Machine Manager is used to deploy and manage the compute nodes and other network components for this solution.

      For more information, see the following topics:

    5. Deploy Windows Server Update Services

      Use Virtual Machine Manager in conjunction with Windows Server Update Services to update all virtual machines in this solution.

      For more information, see Managing Fabric Updates in VMM (or Deploy Windows Server Update Services in Your Organization if you’re not using Virtual Machine Manager).

  9. Deploy the compute nodes and clusters

    Once your infrastructure is set up, use Virtual Machine Manager or Windows PowerShell to deploy the compute nodes from bare-metal, and set them up in a failover cluster, with Virtual Machine Manager and Windows Server Update Services providing updates to the cluster nodes.

    For more information, see Administering System Center 2012 - Virtual Machine Manager.

  10. Set up your tenant networking

    To set up your tenant networking, see Connect hosting provider and tenant networks for hybrid cloud services.

  11. Deploy your tenant virtual machines

    After your tenant networking is set up, use Virtual Machine Manager or Windows PowerShell to deploy your tenant virtual machines.

Change History

 

Date | Description

February 7th, 2014

  • Added a Tip in the Challenges for this solution section that links to a script that can clean out existing Storage Spaces and Failover Clustering configuration data.

  • In the What are the high-level steps to implement this solution? section, added steps to optionally clean out existing Storage Spaces and Failover Cluster configuration data and to validate physical disks prior to adding them to the storage pools.

  • Updated art

January 22nd, 2014

  • Preliminary publication

© 2014 Microsoft. All rights reserved.