Deploy highly scalable tenant network infrastructure for hosting providers

Updated: August 25, 2014

Applies To: System Center 2012 R2, Windows Azure Pack, Windows Server 2012 R2

How can this guide help you? As a medium-sized hosting provider, you can use this solution guide to understand the solution design and implementation steps we recommend to deploy a scalable network infrastructure to support infrastructure as a service (IaaS). Tenant networks can be expensive to operate and complex to manage.

This guide helps you deploy a prescriptive and tested IaaS virtual network infrastructure solution that is cost-effective, flexible, scalable, and easy to manage. In addition, it provides your tenants with a simpler, cost-effective way to connect their datacenters to yours to deploy their hybrid cloud solutions.

Tip

If you aren’t familiar with network virtualization concepts, review Hyper-V Network Virtualization Overview and Hyper-V Network Virtualization technical details.

If you aren’t familiar with the network virtualization concepts in System Center 2012 R2 Virtual Machine Manager (VMM), we strongly recommend that you set up and run a test lab using the following test lab guide before you do any planning and design: Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

The test lab guide will help you understand Virtual Machine Manager concepts, and will help make it easier to plan, design, and deploy this solution.

Also review the VMM concepts presented in Microsoft System Center: Building a Virtualized Network Solution for more information about planning and design considerations in a VMM-based solution.

In this solution guide:

  • Scenario, problem statement, and goals

  • What is the recommended design for this solution?

  • Why are we recommending this design?

  • What are the steps to implement this solution?

  • Optional configurations

The following diagram illustrates the problem that this guide addresses. An individual gateway must be provisioned for each tenant, which requires significant configuration, and VLANs only scale to about 1,000 tenants.

Tenants connecting to a hosting provider

A non-scalable and difficult-to-manage design

Scenario, problem statement, and goals

This section describes the scenario, problem, and goals for an example organization.

Scenario

A medium-sized hosting provider offers IaaS to its customers. They recently started offering a virtual network service, based on customer demand.

The Marketing Department within the hosting provider has been so successful marketing the virtual networking service that the customer demand for it is increasing fast.

Problem statement

The hosting provider’s current virtual network service offering doesn’t scale well, and is inefficient and expensive to operate. For example:

  • Their current design requires two gateways for every tenant (for redundancy), and each pair of gateways requires a public IP address. As the number of tenants has increased, the number of gateways required to support them has increased linearly. This is difficult for the hosting provider to manage. Adding two gateways per tenant is not a cost-effective solution for them.

  • If a tenant needs to connect multiple sites, then each tenant site also requires a separate gateway.

  • They’re not currently using an industry standard routing protocol, which requires an administrator to manually administer network routes. This is inefficient and subject to configuration errors.

  • The current design utilizes VLANs for network isolation. Their network switches only support 1,000 VLANs, which limits their ability to scale beyond that. Moving a tenant virtual machine to a host in a different physical location often requires an IP address change and switch reconfiguration. This makes moving tenant virtual machines very difficult and provides little flexibility in their datacenter infrastructure.

Organization goals

The hosting provider needs high availability, cost efficiency, and simplified management to deliver better, cost-competitive services that meet their increased customer demand. They want to implement a new solution with the following attributes:

  • The ability to deploy gateways that can connect multiple tenant networks and multiple sites per tenant at the same time.

  • The ability to use an industry standard routing protocol, and enable a scalable virtual network isolation protocol that isn’t limited by current VLAN technologies.

  • The ability to provide isolated tenant networks using a technology that scales well as the number of tenants and their workloads increase.

  • A manageable virtual network design that has an easy-to-use management interface that allows them to manage their virtual networks, IP address spaces, and gateways all in one location. This makes it easier and more efficient for them to manage many tenants at a time.

  • The ability to provide a common self-service portal for tenants, which allows them to efficiently place their computing resources where they best meet their business needs.

  • The ability to provide easy-to-follow guidance for their customers so that they can easily connect their on-premises networks to the hosting provider’s through a secure site-to-site virtual private network (VPN). This includes router configuration guidance that details required protocols, settings, and endpoint addresses.

The following diagram shows the recommended design for this solution, which connects each tenant’s network to the hosting provider’s multi-tenant gateway using a single site-to-site VPN tunnel. This enables the hosting provider to support approximately 100 tenants on a single gateway cluster, which decreases both the management complexity and cost. Each tenant must configure their own gateway to connect to the hosting provider gateway. The gateway then routes each tenant’s network data and uses the “Network Virtualization using Generic Routing Encapsulation” (NVGRE) protocol for network virtualization.

Multi-tenant networking solution design

Hybrid Cloud Multi-Tenant Networking Solution Arch

The following table lists the elements that are part of this solution design and describes the reason for the design choice.

Solution design element Why is it included in this solution?

Windows Server 2012 R2

Provides the operating system base for this solution. We recommend using the Server Core installation option to reduce security attack exposure and to decrease software update frequency.

Windows Server 2012 R2 Gateway

Is integrated with Virtual Machine Manager to support simultaneous, multi-tenant site-to-site VPN connections and network virtualization using NVGRE. For an overview of this technology, see Windows Server Gateway.

Microsoft SQL Server 2012

Provides database services for Virtual Machine Manager and Windows Azure Pack.

System Center 2012 R2 Virtual Machine Manager

Manages virtual networks (using NVGRE for network isolation), fabric management, and IP addressing. For an overview of this product, see Configuring Networking in VMM Overview.

Windows Server Failover Clustering

All the physical hosts are configured as failover clusters for high availability, as are many of the virtual machine guests that host management and infrastructure workloads.

The site-to-site VPN gateway can be deployed in a 1+1 configuration for high availability. For more information about Failover Clustering, see Failover Clustering overview.

Scale-out File Server

Provides file shares for server application data with reliability, availability, manageability, and high performance. This solution uses two scale-out file servers: one for the domain that hosts the management servers and one for the domain that hosts the gateway servers. These two domains have no trust relationship. The scale-out file server for the gateway domain is implemented as a virtual machine guest cluster; a separate file server is needed because a scale-out file server can’t be accessed from an untrusted domain.

For an overview of this feature, see Scale-Out File Server for application data overview.

For a more in-depth discussion of possible storage solutions, see Provide cost-effective storage for Hyper-V workloads by using Windows Server.

Site-to-site VPN

Provides a way to connect a tenant site to the hosting provider site. This connection method is cost-effective, and VPN software is included with Remote Access in Windows Server 2012 R2. (Remote Access brings together Routing and Remote Access service (RRAS) and DirectAccess.) VPN software and hardware are also available from multiple suppliers.

Windows Azure Pack

Provides a self-service portal for tenants to manage their own virtual networks. Windows Azure Pack provides a common self-service experience, a common set of management APIs, and an identical website and virtual machine hosting experience. Tenants can take advantage of the common interfaces (such as Service Provider Foundation), which frees them to move their workloads where it makes the most sense for their business or for their changing requirements. Though Windows Azure Pack is used for the self-service portal in this solution, you can use a different self-service portal if you choose.

For an overview of this product, see Windows Azure Pack for Windows Server

System Center 2012 R2 Orchestrator

Provides Service Provider Foundation (SPF), which exposes an extensible OData web service that interacts with VMM. This enables service providers to design and implement multi-tenant self-service portals that integrate IaaS capabilities that are available on System Center 2012 R2.

Windows Server 2012 R2 together with System Center 2012 R2 Virtual Machine Manager (VMM) gives hosting providers a multi-tenant gateway solution that supports multiple site-to-site VPN tenant connections, Internet access for tenant virtual machines by using a gateway NAT feature, and forwarding gateway capabilities for private cloud implementations. Hyper-V Network Virtualization provides tenant virtual network isolation with NVGRE, which allows tenants to bring their own address space and allows hosting providers better scalability than is possible using VLANs for isolation.

The components of the design are separated onto separate servers because they each have unique scaling, manageability, and security requirements.

For more information about the advantages of Hyper-V Network Virtualization (HNV) and Windows Server Gateway, see Hyper-V Network Virtualization Overview and Windows Server Gateway.

VMM offers a user interface to manage the gateways, virtual networks, virtual machines, and other fabric items.

When planning this solution, you need to consider the following:

  • High availability design for the servers running Hyper-V, guest virtual machines, SQL server, gateways, VMM, and other services

    You’ll want to ensure that your design is fault tolerant and is capable of supporting your stated availability terms.

  • Tenant virtual machine Internet access requirements

    Consider whether or not your tenants want their virtual machines to have Internet access. If so, you will need to configure the NAT feature when you deploy the gateway.

  • Infrastructure physical hardware capacity and throughput

    You’ll need to ensure that your physical network has the capacity to scale out as your IaaS offering expands.

  • Site-to-site connection throughput

    You’ll need to investigate the throughput you can provide your tenants and whether site-to-site VPN connections will be sufficient.

  • Network isolation technologies

    This solution uses NVGRE for tenant network isolation. You’ll want to investigate whether you have, or can obtain, hardware that can optimize this protocol (for example, network interface cards and switches).

  • Authentication mechanisms

    This solution uses two Active Directory domains for authentication: one for the infrastructure servers, and one for the gateway cluster and the scale-out file server for the gateway. If you don’t have an Active Directory domain available for the infrastructure, you’ll need to prepare a domain controller before you start deployment.

  • IP addressing

    You’ll need to plan for the IP address spaces used by this solution.

Important

If you use jumbo frames in your network environment, you may need to plan for some configuration adjustments before you deploy. For more information, see Windows Server 2012 R2 Network Virtualization (NVGRE) MTU reduction.
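
If you want to check whether jumbo frames are currently enabled on a host before you adjust the MTU, the following is a minimal sketch. It assumes your network adapter drivers expose the standard "*JumboPacket" advanced property; the keyword and its values can vary by driver:

  # List the jumbo packet setting for every adapter on this host.
  Get-NetAdapterAdvancedProperty -RegistryKeyword "*JumboPacket" | Format-Table Name, DisplayName, DisplayValue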

Determine your tenant requirements

To help with capacity planning, you need to determine your tenant requirements. These requirements will then impact the resources that you need to have available for your tenant workloads. For example, you might need more Hyper-V hosts with more RAM and storage, or you might need faster LAN and WAN infrastructure to support the network traffic that your tenant workloads generate.

Use the following questions to help you plan for your tenant requirements.

Design consideration Design effect

How many tenants do you expect to host, and how fast do you expect that number to grow?

Determines how many Hyper-V hosts you’ll need to support your tenant workloads.

Using Hyper-V Resource Metering may help you track historical data on the use of virtual machines and gain insight into the resource use of the specific servers. For more information, see Introduction to Resource Metering on the Microsoft Virtualization Blog.

What kind of workloads do you expect your tenants to move to your network?

Can determine the amount of RAM, storage, and network throughput (LAN and WAN) that you make available to your tenants.

What is your failover agreement with your tenants?

Affects your cluster configuration and other failover technologies that you deploy.

For more information about physical compute planning considerations, see section “3.1.6 Physical compute resource: hypervisor” in the Design options guide in Cloud Infrastructure Solution for Enterprise IT.
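
As noted in the table, Hyper-V Resource Metering can supply the historical usage data that feeds this capacity planning. The following is a minimal sketch; the virtual machine name is a placeholder:

  # Start collecting resource usage data for a tenant virtual machine.
  Enable-VMResourceMetering -VMName "TenantVM01"

  # Later, report the average CPU, memory, storage, and network usage collected since metering was enabled.
  Measure-VM -VMName "TenantVM01" | Format-List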

Determine your failover cluster strategy

Plan your failover cluster strategy based on your tenant requirements and your own risk tolerance. For example, the minimum we recommend is to deploy the management, compute, and gateway hosts as two-node clusters. You can choose to add more nodes to your clusters, and you can guest cluster the virtual machines running SQL, Virtual Machine Manager, Windows Azure Pack, and so on.

For this solution, you configure the scale-out file servers, compute Hyper-V hosts, management Hyper-V hosts, and gateway Hyper-V hosts as failover clusters. You also configure the SQL, Virtual Machine Manager, and gateway guest virtual machines as failover clusters. This configuration provides protection from potential physical computer and virtual machine failure.
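
For example, a two-node host cluster can be validated and created with the Failover Clustering cmdlets. The following is a minimal sketch; the node names, cluster name, and cluster IP address are placeholders:

  # Validate the candidate nodes, then create the two-node failover cluster.
  Test-Cluster -Node "MGMTHOST1", "MGMTHOST2"
  New-Cluster -Name "MGMT-CL" -Node "MGMTHOST1", "MGMTHOST2" -StaticAddress "172.16.1.50"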

Design consideration Design effect

What is your risk tolerance for unavailability of applications and services?

Add nodes to your failover clusters to increase the availability of applications and services.


Determine your SQL high availability strategy

You’ll need to choose a SQL option for high availability for this solution. SQL Server 2012 has several options:

  • AlwaysOn Failover Cluster Instances

    This option provides local high availability through redundancy at the server-instance level—a failover cluster instance.

  • AlwaysOn Availability Groups

    This option enables you to maximize availability for one or more user databases.

For more information see Overview of SQL Server High-Availability Solutions.

For the SQL high availability option for this solution, we recommend AlwaysOn Failover Cluster Instances. With this design, all the cluster nodes are located in the same network, and shared storage is available, which makes it possible to deploy a more reliable and stable failover cluster instance. If shared storage is not available and your nodes span different networks, AlwaysOn Availability Groups might be a better solution for you.

Determine your gateway requirements

You need to plan how many gateway guest clusters are required. The number you need to deploy depends on the number of tenants that you need to support. The hardware requirements for your gateway Hyper-V hosts also depend on the number of tenants that you need to support and the tenant workload requirements.

For Windows Server Gateway configuration recommendations, see Windows Server Gateway Hardware and Configuration Requirements.

For capacity planning purposes, we recommend one gateway guest cluster per 100 tenants.

The design for this solution is for tenants to connect to the gateway through a site-to-site VPN. Therefore, we recommend deploying a Windows Server gateway using a VPN. You can configure a two-node Hyper-V host failover cluster with a two-node guest failover cluster using predefined service templates available on the Microsoft Download Center (for more information, see How to Use a Server Running Windows Server 2012 R2 as a Gateway with VMM).

Design consideration Design effect

How will your tenants connect to your network?

  • If tenants connect through a site-to-site VPN, you can use Windows Server Gateway as your VPN termination and gateway to the virtual networks.

    This is the configuration that is covered by this planning and design guide.

  • If you use a non-Microsoft VPN device to terminate the VPN, you can use Windows Server Gateway as a forwarding gateway to the tenant virtual networks.

  • If a tenant connects to your service provider network through a packet-switched network, you can use Windows Server Gateway as a forwarding gateway to connect them to their virtual networks.

Important

You must deploy a separate forwarding gateway for each tenant that requires a forwarding gateway to connect to their virtual network.

Plan your network infrastructure

For this solution, you use Virtual Machine Manager to define logical networks, VM networks, port profiles, logical switches, and gateways to organize and simplify network assignments. Before you create these objects, you need to have your logical and physical network infrastructure plan in place.

In this step, we provide planning examples to help you create your network infrastructure plan.

The diagram shows the networking design that we recommend for each of the physical nodes in the management, compute, and gateway clusters.

Networking design for cluster nodes

Compute and Management node network interfaces

You need to plan for several subnets and VLANs for the different traffic that is generated, such as management/infrastructure, network virtualization, external (outbound), clustering, storage, and live migration. You can use VLANs to isolate the network traffic at the switch.

For example, this design recommends the networks listed in the following table. Your exact line speeds, addresses, VLANs, and so on may differ based on your particular environment.

Subnet/VLAN plan

Line speed (Gb/s) Purpose Address VLAN Comments

1

Management/Infrastructure

172.16.1.0/23

2040

Network for management and infrastructure. Addresses can be static or dynamic and are configured in Windows.

10

Network Virtualization

10.0.0.0/24

2044

Network for the VM network traffic. Addresses must be static and are configured in Virtual Machine Manager.

10

External

131.107.0.0/24

2042

External, Internet-facing network. Addresses must be static and are configured in Virtual Machine Manager.

1

Clustering

10.0.1.0/24

2043

Used for cluster communication. Addresses can be static or dynamic and are configured in Windows.

10

Storage

10.20.31.0/24

2041

Used for storage traffic. Addresses can be static or dynamic and are configured in Windows.
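
As a concrete illustration of isolating this traffic with VLANs, the following sketch tags a host (management OS) virtual network adapter with the management VLAN from the table. It assumes a virtual switch named VMSwitch (the logical switch name used later in this guide) already exists on the host; the adapter name is a placeholder:

  # Add a management OS virtual network adapter to the switch and tag it with VLAN 2040.
  Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "VMSwitch"
  Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 2040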

VMM logical network plan

This design recommends the logical networks listed in the following table. Your logical networks may differ based on your particular needs.

Name IP pools and network sites Notes

External

  • Rack01_External

    • 131.107.0.0/24, VLAN 2042

    • All Hosts

Host Networks

  • Rack01_LiveMigration

    • 10.0.3.0, VLAN 2045

    • All Hosts

  • Rack01_Storage

    • 10.20.31.0, VLAN 2041

    • All Hosts

Infrastructure

  • Rack01_Infrastructure

    • 172.16.0.0/24, VLAN 2040

    • All Hosts

Network Virtualization

  • Rack01_NetworkVirtualization

    • 10.0.0.0/24, VLAN 2044

    • All Hosts

VMM VM network plan

This design uses the VM networks listed in the following table. Your VM networks may differ based on your particular needs.

Name IP pool address range Notes

External

None

Live migration

10.0.3.1 – 10.0.3.254

Management

None

Storage

10.20.31.1 – 10.20.31.254

After you install Virtual Machine Manager, you can create a logical switch and uplink port profiles. You then configure the hosts on your network to use a logical switch, together with virtual network adapters attached to the switch. For more information about logical switches and uplink port profiles, see Configuring Ports and Switches for VM Networks in VMM.

This design uses the following uplink port profiles, as defined in VMM:

VMM uplink port profile plan

Name General property Network configuration

Rack01_Gateway

  • Load Balancing Algorithm: Host Default

  • Teaming mode: LACP

Network sites:

  • Rack01_External, Logical Network: External

  • Rack01_LiveMigration, Logical Network: Host Networks

  • Rack01_Storage, Logical Network: Host Networks

  • Rack01_Infrastructure, Logical Network: Infrastructure

  • Network Virtualization_0, Logical Network: Network Virtualization

Rack01_Compute

  • Load Balancing Algorithm: Host Default

  • Teaming mode: LACP

Network sites:

  • Rack01_External, Logical Network: External

  • Rack01_LiveMigration, Logical Network: Host Networks

  • Rack01_Storage, Logical Network: Host Networks

  • Rack01_Infrastructure, Logical Network: Infrastructure

  • Network Virtualization_0, Logical Network: Network Virtualization

Rack01_Infrastructure

  • Load Balancing Algorithm: Host Default

  • Teaming mode: LACP

Network sites:

  • Rack01_LiveMigration, Logical Network: Host Networks

  • Rack01_Storage, Logical Network: Host Networks

  • Rack01_Infrastructure, Logical Network: Infrastructure

This design deploys the following logical switch using these uplink port profiles, as defined in VMM:

VMM logical switch plan

Name Extension Uplink Virtual port

VMSwitch

Microsoft Windows Filtering Platform

  • Rack01_Compute

  • Rack01_Gateway

  • Rack01_Infrastructure

  • High bandwidth

  • Infrastructure

  • Live migration workload

  • Low bandwidth

  • Medium bandwidth

The design isolates the heaviest traffic loads on the fastest network links. For example, the storage network traffic is isolated from the network virtualization traffic on separate fast links. If you must use slower network links for some of the heavy traffic loads, you could use NIC teaming.

Important

If you use jumbo frames in your network environment, you may need to make some configuration adjustments when you deploy. For more information, see Windows Server 2012 R2 Network Virtualization (NVGRE) MTU reduction.

Plan your Windows Azure Pack deployment

If you use Windows Azure Pack for your tenant self-service portal, there are numerous options you can configure to offer your tenants. This solution includes some of the VM Cloud features, but there are many more options available to you—not only with VM Clouds, but also with Web Site Clouds, Service Bus Clouds, SQL Servers, MySQL Servers, and more. For more information about Windows Azure Pack features, see Windows Azure Pack for Windows Server.

After reviewing the Windows Azure Pack documentation, determine which services you want to deploy. Because this solution uses Windows Azure Pack only as an optional component, it utilizes only some of the VM Clouds features, using an Express deployment with all the Windows Azure Pack components installed on a single virtual machine. If you use Windows Azure Pack as your production portal, however, you should use a distributed deployment and plan for the additional resources required.

To determine your host requirements for a production distributed deployment, see Windows Azure Pack architecture.

Use a distributed deployment if you decide to deploy Windows Azure Pack in production. If you want to evaluate Windows Azure Pack features before deploying in production, use the Express deployment. For this solution, you use the Express deployment to demonstrate the VM Clouds service. You deploy Windows Azure Pack on a single virtual machine located on the compute cluster so that the web portals can be accessed from the external (Internet) network. Then, you deploy Service Provider Foundation on a separate virtual machine located on the management cluster.

Why are we recommending this design?

The design includes failover clusters to provide high availability and scalability for the solution.

The following diagram shows the four types of failover clusters that are deployed. Each failover cluster isolates the roles required for the solution.

Physical clusters and VMs

The following table shows the physical hosts that we recommend for this solution. The number of nodes was chosen to represent the minimum needed to provide high availability. You can add physical hosts to further distribute the workloads to meet your specific requirements. Each host has four physical network adapters to support the networking isolation requirements of the design. We recommend a 10 Gb/s or faster network infrastructure, although 1 Gb/s might be adequate for infrastructure and cluster traffic.

Physical host recommendation

Physical hosts Role in solution Virtual machine roles

2 hosts configured as a failover cluster

Management/infrastructure cluster:

Provides Hyper-V hosts for management/infrastructure workloads (VMM, SQL, Service Provider Foundation, guest clustered scale-out file server for gateway domain, domain controller).

  • Guest clustered SQL

  • Guest clustered VMM

  • Guest clustered scale-out file server for gateway domain

  • Service Provider Foundation endpoint

2 hosts configured as a failover cluster

Compute cluster:

Provides Hyper-V hosts for tenant workloads and Windows Azure Pack for Windows Server.

  • Tenant

  • Windows Azure Pack portal accessible from public networks

2 hosts configured as a failover cluster

Storage cluster:

Provides scale-out file server for management and infrastructure cluster storage.

None (this cluster just hosts file shares)

2 hosts configured as a failover cluster

Windows Server gateway cluster:

Provides Hyper-V hosts for the gateway virtual machines.

For gateway physical host and gateway virtual machine configuration recommendations, see Windows Server Gateway Hardware and Configuration Requirements.

Guest clustered gateway

What are the steps to implement this solution?

Important

When you deploy Hyper-V hosts and virtual machines, it is extremely important to apply all available updates for the software and operating systems used in this solution. If you don’t do this, your solution may not function as expected.

You can use the steps in this section to implement the solution. Make sure to verify the correct deployment of each step before proceeding to the next step.

Note

If you want to print or export a customized set of solution topics, see Print/Export Multiple Topics – Help.

  1. Deploy (or identify) an Active Directory domain.

    Your management, compute, and scale-out file servers will join this domain. Alternatively, identify an existing Active Directory domain that can host these servers.

  2. Deploy (or identify) a second Active Directory domain.

    This second Active Directory domain will host your gateway Hyper-V hosts and a scale-out file server for gateway storage. For security reasons, this second Active Directory domain should have no trust relationship with your infrastructure domain.

    Important

    Ensure both domains can resolve names in the other domain. For example, you can configure a forwarder at each DNS server to point to the DNS server in the other domain.
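
    For example, on a DNS server in the infrastructure domain you can add a conditional forwarder for the gateway domain. This is a minimal sketch; the gateway domain name matches the example connection string used later in this guide, and the DNS server address is a placeholder:

    # Forward queries for the gateway domain to a DNS server in that domain.
    Add-DnsServerConditionalForwarderZone -Name "adatum-gw.lab" -MasterServers 172.16.1.21

    Repeat the equivalent command on a DNS server in the gateway domain, using the infrastructure domain name and the address of its DNS server.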

  3. Deploy the storage nodes and clusters for the management domain.

    A scale-out file server hosts the storage for this solution as file shares. This scale-out file server is configured on physical hosts in the management domain. An additional scale-out file server for the gateway domain is implemented in virtual machines later on the management cluster. For more information about deploying a scale-out file server, see Deploy Scale-Out File Server.
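
    After the storage cluster exists, you add the Scale-Out File Server role and create a continuously available share for the management workloads. The following is a minimal sketch; the role name, cluster name, path, and account are placeholders, and VMM-Lib is the library share name used later in this guide:

    # Add the Scale-Out File Server role to the storage cluster.
    Add-ClusterScaleOutFileServerRole -Name "SOFS01" -Cluster "STORAGE-CL"

    # Create a continuously available SMB share on a cluster shared volume.
    New-SmbShare -Name "VMM-Lib" -Path "C:\ClusterStorage\Volume1\Shares\VMM-Lib" -FullAccess "CONTOSO\vmmadmin" -ContinuouslyAvailable $true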

  4. Deploy the management nodes and clusters.

    Note

    You’ll need to create a temporary virtual switch using Hyper-V Manager so you can install and configure your virtual machines. After VMM is installed, you can define a logical switch in VMM, delete the virtual switch defined in Hyper-V, and configure your hosts to use a virtual switch based on the logical switch defined in VMM.
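
    A minimal sketch of that temporary switch, run on each management host (the switch and adapter names are placeholders):

    # Create a temporary external virtual switch so the management virtual machines have network connectivity.
    New-VMSwitch -Name "TempSwitch" -NetAdapterName "Ethernet 1" -AllowManagementOS $true

    After the VMM logical switch is applied later in this procedure, you can remove the temporary switch with Remove-VMSwitch -Name "TempSwitch".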

    This host cluster will host the SQL server, VMM, Service Provider Foundation (SPF) server, and scale-out file server (for the gateway domain) virtual machines. The scale-out file server for the gateway domain is implemented in virtual machines and joined to the gateway domain. For more information, see the following topics:

    Important

    Deploy all the virtual machines on one host cluster node for now. After the networking features are configured in VMM, you load balance the virtual machines across the host cluster nodes.

    1. Deploy the SQL guest cluster.

      For information about deploying a SQL Server failover cluster instance, see the following topics:

    2. Deploy VMM.

      For information about how to do this, see Deploying System Center 2012 - Virtual Machine Manager. For this solution, you use VMM to deploy and manage your gateway and other network features.

      1. Install VMM on a guest cluster.

        For information about how to do this, see the following topics:

      2. Add a library server, using a share on your scale-out file server. For more information, see How to Add a VMM Library Server or VMM Library Share. When you are prompted to type the computer name, type the name you used when you configured the scale-out file server role. Do not use the cluster name.

        Important

        When you add a library server, ensure that you use a user account that is different from your VMM service account. If you don’t do this, VMM will silently fail to add the library server and you won’t see any job history indicating an error has occurred.

      3. Disable the Create logical networks automatically setting before you add any hosts. You’ll manually create logical networks with specific settings later. This setting is located in Settings, Network Settings.

      4. Add the designated Hyper-V hosts as VMM hosts.

        Add the management cluster and the Scale-Out File Server cluster. You’ll add the compute host cluster later.

        You should add the Scale-Out File Server cluster in the Fabric, Storage, File Servers category. You should add the management cluster (and eventually the compute cluster) under All Hosts. To help organize the hosts, you should create additional host groups (for example, Compute, and Management) and place the appropriate clusters in the host groups.

        Important

        When you deploy a scale-out file server for the gateway domain, you need to open the public Windows Remote Management (HTTP-In) port on both nodes of the guest cluster. This port needs to be opened because the VMM server and gateway cluster exist in separate, untrusted domains and that port is not open by default for the Public profile.

        For more information, see Adding Windows Servers as Hyper-V Hosts in VMM Overview.

        To see an example procedure, see “To add HNVHOST1, HNVHOST2, and HNVHOST3 as VMM Hosts” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.
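
        To open the port called out in the Important note above, you can enable the built-in firewall rule on both nodes of the gateway-domain scale-out file server guest cluster. This is a minimal sketch; it enables the rule for all profiles, including Public:

        # Enable the Windows Remote Management (HTTP-In) rule so VMM can manage the file server in the untrusted domain.
        Enable-NetFirewallRule -DisplayName "Windows Remote Management (HTTP-In)"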

      5. Add file share storage.

        After you add the cluster, you can configure storage locations for the virtual machines that are deployed to nodes in the cluster. Open the Properties page for the cluster and add a share from your scale-out file server on the File Share Storage page.

      6. Create the planned logical networks and associated IP pools.

        For this solution, you can create a logical network for External (Internet), Infrastructure, Host Networks (with Cluster IP Pool and Live Migration IP Pool), and Network Virtualization networks. Note that these are sample names—you can use your own names according to your plan. Create the appropriate IP pools for each logical network according to your plan, making sure that the IP address ranges don’t overlap with any existing IP addresses in use.

        You configure the Host Networks logical network as a VLAN-based independent network, and configure the others as One connected network.

        For more information, see How to Create a Logical Network in VMM.

        To see an example procedure in a test environment, see “Define logical networks with associated IP pools” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.
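
        The VMM console can generate an equivalent script for each of these objects with the View Script button. The following hand-written sketch shows the External logical network as an example, using the subnet and VLAN from the plan; the object names and the IP pool range are placeholder choices:

        # Create the External logical network, its network site, and an IP address pool.
        $logicalNetwork = New-SCLogicalNetwork -Name "External"
        $allHosts = Get-SCVMHostGroup -Name "All Hosts"
        $subnetVlan = New-SCSubnetVLan -Subnet "131.107.0.0/24" -VLanID 2042
        $site = New-SCLogicalNetworkDefinition -Name "Rack01_External" -LogicalNetwork $logicalNetwork -VMHostGroup $allHosts -SubnetVLan $subnetVlan
        New-SCStaticIPAddressPool -Name "Rack01_External_Pool" -LogicalNetworkDefinition $site -Subnet "131.107.0.0/24" -IPAddressRangeStart "131.107.0.10" -IPAddressRangeEnd "131.107.0.250"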

      7. Create VM networks for the Infrastructure, External (Internet), Live Migration, and Storage logical networks.

        Create an IP address pool for the Storage and Live Migration networks, using the appropriate address range according to your plan.

        For more information, see How to Create a VM Network in VMM in System Center 2012 R2.

        To see an example procedure in a test environment, see “Define VM networks” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

      8. Create the uplink port profiles.

        Create a Gateway, Compute, and Infrastructure Uplink port profile. Configure the Host Default Load balancing algorithm and the Link Aggregation Control Protocol (LACP) teaming mode (assuming that your switch supports LACP). Select all the network sites for the network configuration of your Compute and Gateway port profile, and the Live Migration, Storage, and Infrastructure sites for your Infrastructure profiles.

        For more information, see Configuring Ports and Switches for VM Networks in VMM.

        To see an example procedure in a test environment, see “Create port profiles and logical switches” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

      9. Create the logical switch.

        Select the Microsoft Windows Filtering Platform for the Extensions, select Team for the Uplink mode, and add the three uplink port profiles that you created previously.

        Add the following virtual ports: High bandwidth, Infrastructure, Live migration workload, Low bandwidth, and Medium bandwidth.

      10. Create a teamed virtual switch on a management node.

        Add a virtual switch to the management host cluster node. This is the node that doesn’t have any virtual machines associated with it.

        To do this in VMM, locate the host node in the Fabric, Servers pane, open the Properties page, and add a virtual switch on the New Virtual Switch page.

        Add your two fastest physical adapters to form a team and choose the Infrastructure Uplink port profile. Then add two virtual network adapters for Live Migration and Storage.

        When you’re done, verify that your virtual switch looks similar to the following:

        Virtual Switch

        Virtual switch virtual adapter - Live Migration

        Virtual switch virtual adapter - Storage

        Important

        You might need to make some configuration changes to the physical switch ports that these network adapters are connected to. If you’re using LACP for teaming, you’ll need to configure the switch ports for LACP. If your switch ports are configured in Access Mode (for untagged packets), you need to configure them in Trunk Mode, because tagged packets will be coming from the teamed adapters.

        For more information, see How to Configure Network Settings on a Host by Applying a Logical Switch in VMM.

        Tip

        For troubleshooting purposes, you can use the following Windows PowerShell cmdlets:

        Get-NetLbfoTeam, Get-NetLbfoTeamMember, and Get-NetLbfoTeamNic

        To see other related cmdlets, type Get-command *lbfo*.

      11. Configure your migration settings.

        Now that you have your live migration adapter configured on the virtual switch, you can configure your migration settings on each node’s Properties, Migration Settings page. Configure your desired settings, and ensure your live migration subnet address has been added and is at the top of the list. The subnet is actually entered as a single IP address with a 32-bit mask: x.x.x.x/32. So, if your live migration virtual network adapter’s address is 10.0.3.6, then the Migration Settings page may look similar to the following:

        Migration Settings
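
        This solution configures migration settings through VMM. If you want to verify or set the equivalent settings directly on a Hyper-V host, a minimal sketch (using the example 10.0.3.6 address from above) looks like the following:

        # Enable live migration and restrict it to the live migration virtual network adapter address.
        Enable-VMMigration
        Add-VMMigrationNetwork "10.0.3.6/32"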

      12. Live migrate your virtual machines.

        Now that you have a host configured with a virtual switch configured using VMM, you can migrate your virtual machines to it so you can prepare the other node the same way.

        To migrate your virtual machines, in VMM, select the VMs and Services workspace, select the node in your management cluster that has the virtual machines running on it, right-click the running virtual machine, and click Migrate Virtual Machine. Select the other node and move the virtual machine.

      13. Delete the virtual switch that was originally created using Hyper-V Manager.

        Now that you have moved the virtual machines, you can delete the original virtual switch that you created with Hyper-V Manager.

      14. Create a new teamed virtual switch using VMM.

        After you delete the old virtual switch, you can create a new teamed virtual switch like you did with the previous node. Follow the previous step to create the virtual switch on this node using VMM.

      15. Live migrate some virtual machines back.

        Now that you have both nodes configured with a teamed virtual switch using VMM, you can migrate some of the virtual machines back. For example, move one of the SQL guest cluster nodes so that you have the guest cluster nodes split across the host cluster nodes. Do this for all the other guest clusters.

      After this step is complete, you should have both of your management host cluster nodes installed with the management virtual machines and the host node networking configured through VMM.

  5. Deploy the compute nodes and clusters.

    This Hyper-V cluster hosts the tenant virtual machines and the Windows Azure Pack portal server.

    You can install the compute Hyper-V cluster in a manner similar to the way you installed the management cluster:

    1. Deploy the Hyper-V hosts and join the management domain.

    2. Cluster the hosts and add the cluster to your VMM Compute Host group.

    3. Create the teamed virtual switch and the live migration and storage virtual adapters for both host nodes like you did for both of the management nodes. When you team the physical adapters, use the Compute Uplink port profile for the adapters.

    4. Add file share storage.

      Configure a storage location for the virtual machines deployed to nodes in the cluster. Open the Properties page for the cluster and add a share from your scale-out file server on the File Share Storage page.

  6. Deploy the gateway.

    To deploy the Windows Server gateway in Windows Server 2012 R2, you deploy a dedicated Hyper-V host cluster and then deploy the gateway virtual machines using VMM. The Windows Server gateway provides a connection point for multiple tenant site-to-site VPN connections. You follow a similar procedure to deploy the physical hosts, but then you use a VMM service template to deploy the guest cluster virtual machines.

    To deploy the Windows Server gateway, use the following procedure:

    1. Deploy the Hyper-V hosts and join the gateway domain.

    2. Cluster the hosts and add the cluster to your VMM Gateway Host group.

    3. Create the teamed virtual switch and the live migration and storage virtual adapters for both host nodes like you did for both of the management and compute nodes. When you team the physical adapters, use the Gateway Uplink port profile for the adapters.

    4. Add file share storage.

      Configure a storage location for the virtual machines that are deployed to nodes in the cluster. Open the Properties page for the cluster and add a share from your scale-out file server on the File Share Storage page.

    5. Ensure that you have a file share available from VMM (where you have a Windows Server 2012 R2 .vhd or .vhdx file available). This file will be used by the VMM service template to deploy the gateway virtual machines.

    6. Configure hosts as gateway hosts.

      You must configure each gateway Hyper-V host as a dedicated network virtualization gateway. In VMM, right-click a gateway host and click Properties. Click Host Access and select the check box for This host is a dedicated network virtualization gateway, as a result it is not available for placement of virtual machines requiring network virtualization.

    7. To deploy the gateway virtual machines, follow the procedures in the following topic: How to Use a Server Running Windows Server 2012 R2 as a Gateway with VMM and deploy using the 3-NIC HA Gateway service template.

      The service template that you use to deploy the gateway includes a Quick Start Guide document. This document includes some information about how to set up the infrastructure for the gateway deployment. This information is similar to the information provided in this solution guide. You can skip the infrastructure steps in the Quick Start Guide that are already covered in this solution guide.

      When you reach the final configuration steps and run the Add a network service wizard, your Connection String page will look similar to the following:

      Network Service Connection String

      And the Connectivity property of your gateway network service will look similar to the following:

      Network Service Connectivity

    After this step is complete, verify that two jobs in the log have completed successfully:

    • Update network service device

    • Add connection network service device

    Tip

    If you need to deploy a gateway guest cluster on a regular basis (for example, to address resource demands), you can customize the service template using the Service Template Designer. For example, you can customize the OS Configuration settings to join a specific domain, use a specific product key, or use a specific computer name configuration.

    Warning

    Do not modify the gateway service template to make the virtual machines highly available. The gateway service template intentionally leaves the Make this virtual machine highly available check box in the Advanced\Availability area unchecked. The virtual machines are configured as nodes of a guest cluster, but it’s important to not change this setting. Otherwise, during failover, the customer addresses (CA) won’t associate with the new provider address (PA) and the gateway will not function properly.

  7. Verify gateway functionality.

    Verify that there is connectivity between a test virtual machine and the hosts located on a test tenant network.

    Use the following steps to verify that your gateway and VM networks are functioning correctly.

    1. Establish a site-to-site VPN connection.

      How you connect your test tenant network will vary depending on the equipment you use to establish the VPN connection. Remote Access (which brings together DirectAccess and Routing and Remote Access service (RRAS)) is one way to connect to your gateway. To see an example procedure using RRAS to connect to the gateway, see “Install RRAS on Contoso EDGE1 and create a site-to-site VPN connection to GatewayVM1 running on HNVHOST3” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

      Tip

      To connect other VPN devices, the connectivity requirements are similar to the Windows Azure VPN connection requirements. For more information, see About VPN Devices for Virtual Network
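
      If the tenant side uses RRAS, the following is a hedged sketch of the tenant-side site-to-site interface configuration, assuming IKEv2 with a pre-shared key. The interface name, the hosting provider gateway public address, the hosted VM network subnet, and the shared secret are all placeholders, and your authentication method and routes may differ:

      # On the tenant edge server: install the site-to-site VPN role service and add an IKEv2 S2S interface.
      Install-RemoteAccess -VpnType VpnS2S
      Add-VpnS2SInterface -Name "ToHoster" -Destination "131.107.0.30" -Protocol IKEv2 -AuthenticationMethod PSKOnly -SharedSecret "ReplaceWithYourPSK" -IPv4Subnet "10.254.254.0/24:100" -Persistent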

    2. View the site-to-site VPN connection on your gateway.

      After you establish the VPN connection, you can use some Windows PowerShell commands and some new ping options to verify the VPN connection.

      To see an example procedure in a test environment, see “To view the S2S VPN connections on GatewayVM1” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

    3. Deploy test tenant virtual machines.

      After you verify that you have a successful site-to-site connection to your gateway, you can deploy a test virtual machine and connect it to the test VM network on your hosting service provider network.

      To see an example procedure in a test environment, see “Step 2: Deploy Tenant Virtual Machines” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

    4. Verify test VM network connectivity and HNV site-to-site operation.

      After you deploy your test virtual machine, you should verify that it has network connectivity to remote resources in the tenant on-premises network over the Internet through the multi-tenant site-to-site gateway.

      To see an example procedure in a test environment, see “Verify network connectivity for the APP2 virtual machines” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

  8. Deploy Windows Server IPAM (recommended).

    Windows Server IPAM is integrated with VMM to manage the IP address space for your customer and fabric infrastructure. For more information, see Deploying IPAM Server.

    To see an example procedure in a test environment, see “Step 6: Install and configure IPAM on HNVHOST2” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

    After IPAM has been deployed, configure the IPAM VMM plug-in. For more information, see How to Add an IPAM Server in VMM in System Center 2012 R2.

    To see an example procedure in a test environment, see “To configure the IPAM VMM plugin on HNVHOST2” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

    After this step is complete, verify that you can view the virtualized address space in IPAM.

    To see an example procedure in a test environment, see “To use IPAM to view the virtualized address space” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.
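
    A minimal sketch of installing the IPAM feature and provisioning it by using Group Policy (the domain, GPO prefix, and server name are placeholders):

    # Install the IPAM feature, then provision the IPAM Group Policy objects in the infrastructure domain.
    Install-WindowsFeature IPAM -IncludeManagementTools
    Invoke-IpamGpoProvisioning -Domain "contoso.lab" -GpoPrefixName "IPAM" -IpamServerFqdn "ipam1.contoso.lab"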

  9. Deploy a self-service tenant portal.

    A tenant self-service portal allows your tenants to create their own virtual networks and virtual machines with minimal hosting service provider involvement. Service providers can design and implement multi-tenant self-service portals that integrate IaaS capabilities that are available on System Center 2012 R2. Service Provider Foundation exposes an extensible OData web service that interacts with VMM.

    Windows Azure Pack is a Microsoft self-service portal solution that integrates with VMM by using SPF. It offers a web portal similar to Windows Azure, so if your tenants are also Windows Azure customers, they will already be familiar with the user interface presented in Windows Azure Pack. To demonstrate Windows Azure Pack features for this solution, an express Windows Azure Pack deployment is used, which deploys the required features on a single server. If you want to deploy Windows Azure Pack in production, you should use the distributed deployment. For more information, see Windows Azure Pack installation requirements.

    1. Create the WAPPortal virtual machine.

      Review Express deployment hardware and software prerequisites and then create the WAPPortal virtual machine on your compute cluster.

    2. Install software prerequisites.

      Follow the procedure in Install software prerequisites.

    3. Install an express deployment of Windows Azure Pack.

      Follow the procedure in Install an express deployment of Windows Azure Pack.

    4. Review the topics under Provision Virtual Machine Clouds, and then review the guidance in Requirements for using VM Clouds.

    5. Using VMM, create a cloud.

      For example, you could use the Create Cloud Wizard to create a cloud with the following properties:

      Properties Settings

      General

      Name: Gold

      Resources

      Host Group: Compute

      Logical Networks

      Network Virtualization

      Port Classifications

      High Bandwidth

      Storage

      Remote Storage

      Library

      VMM-Lib (a share located on the scale-out file server)

      Capacity

      Cloud Capacity: set to your desired capacity

      For more information about creating a cloud in VMM, see How to Create a Private Cloud from Host Groups.

    6. Install Service Provider Foundation on a separate virtual machine located on the management and infrastructure cluster using the procedure in How to Install Service Provider Foundation for System Center 2012 SP1.

    7. Configure SPF for use with Windows Azure Pack, as described in Configuring Portals for Service Provider Foundation in the “Configuring Windows Azure Pack for Windows Server” section.

      After you have completed the procedure to register the SPF endpoint for virtual machine clouds, you should see the cloud that you created in VMM on the Windows Azure Pack administrator portal.

    8. From the Windows Azure Pack administrator portal, author a plan that you can use to test with. For example, you could author a plan called Gold Plan with the following properties:

      Properties Settings

      Name

      Gold Plan

      Services

      Virtual Machine Clouds

      After the plan is created, click it to continue the configuration. Click the Virtual Machine Clouds service and configure the VMM Management Server, Virtual Machine Cloud, and usage limits. Click Save to complete the virtual machine clouds configuration. Click the back button and finally click Change Access to make the plan public.

    9. Create a Windows Azure Pack Gallery Resource. Tenants can use the Gallery to place virtual machines on their virtual networks. For more information, see Downloading and Installing Windows Azure Pack Gallery Resource.

    10. From the Windows Azure Pack tenant portal logon page, click Sign Up to sign up a test tenant account.

      Proceed through the tenant portal, add a subscription and choose a plan.

    11. After the account has been created, create a new virtual network for the tenant using Custom Create.

      When you’re done creating the network, verify that it exists in VMM under VM Networks.
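
      You can also confirm this from the VMM command shell. For example, the following lists the VM networks that VMM knows about, which should now include the network the tenant created through the portal:

      # List VM networks with their logical network and isolation type.
      Get-SCVMNetwork | Format-Table Name, LogicalNetwork, IsolationType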

    12. Establish a site-to-site VPN connection with the test tenant like you did previously when you created a manual test virtual network.

    13. Create a new virtual machine role using the Gallery that you created previously.

    14. After the test virtual machine has been created, verify that it has connectivity back to the tenant network through the site-to-site VPN tunnel.

Optional configurations

This section describes optional configurations to add functionality to this solution.

Deploy a forwarding gateway to support Internet connected virtual machines

You may have tenants that want to deploy virtual machines that are directly connected to the Internet. They may have connection needs that require no NAT in the connection path.

Or, you may have tenants that need connectivity directly to a physical network; for example, a VLAN with co-located hardware or a packet-switched network (such as a multiprotocol label switching (MPLS) network).

You can support these requirements using a forwarding gateway connected to a VM network used exclusively for directly connected virtual machines. You then create subnets on the VM network for each tenant. You can use extended port access control lists to isolate each of the tenant virtual machines and control the network traffic in and out of their virtual machines.

Here’s how to do it:

  1. Deploy a gateway using the service template in the same way as in the original solution.

  2. Note the cluster front end IP address and name of the new VM gateway cluster. You will use this information in the connection string used in the next step.

  3. Create a new network service in VMM to deploy the forwarding gateway service. Use a connection string similar to the following:

    VMHost=gateway-cl.adatum-gw.lab;GatewayVM=FGWCL01.adatum-gw.lab;BackendSwitch=VMSwitch;DirectRoutingMode=True;FrontEndServerAddress=131.107.0.55

    Note

    Notice the new parameters in this connection string: DirectRoutingMode and FrontEndServerAddress.

  4. Create a VM subnet that is configured for direct routing, using the new forwarding gateway as the gateway device.

    1. Create separate subnets for each tenant. For example:

      Forwarding network subnets

  5. Place tenant virtual machines in their respective subnets.

  6. To isolate the virtual machines, use extended port access control lists and run the cmdlets on the VMM host. Configure the ports and protocols required for the tenant virtual machines.

    Important

    Before running the following cmdlets, you must install the Hyper-V PowerShell module on the VMM host. The command to do that is Install-WindowsFeature Hyper-V-PowerShell.

    Example:

    # Look up the virtual machine in VMM, then apply extended port ACLs on the Hyper-V host that runs it.
    $vm = Get-SCVirtualMachine -Name "<computername>"
    # Allow inbound DHCP client traffic (UDP local port 68).
    Add-VMNetworkAdapterExtendedAcl -ComputerName $vm.VMHost.FQDN -VMName $vm.Name -Direction Inbound -Action Allow -Weight 15 -LocalPort 68 -Protocol UDP -Stateful $true
    # Allow outbound DNS queries (UDP remote port 53).
    Add-VMNetworkAdapterExtendedAcl -ComputerName $vm.VMHost.FQDN -VMName $vm.Name -Direction Outbound -Action Allow -Weight 12 -RemotePort 53 -Protocol UDP -Stateful $true
    # Allow outbound HTTPS and HTTP traffic sourced from the virtual machine (TCP local ports 443 and 80).
    Add-VMNetworkAdapterExtendedAcl -ComputerName $vm.VMHost.FQDN -VMName $vm.Name -Direction Outbound -Action Allow -Weight 11 -LocalPort 443 -Protocol TCP -Stateful $true
    Add-VMNetworkAdapterExtendedAcl -ComputerName $vm.VMHost.FQDN -VMName $vm.Name -Direction Outbound -Action Allow -Weight 10 -LocalPort 80 -Protocol TCP -Stateful $true
    # Allow inbound HTTP traffic to the virtual machine (TCP local port 80).
    Add-VMNetworkAdapterExtendedAcl -ComputerName $vm.VMHost.FQDN -VMName $vm.Name -Direction Inbound -Action Allow -Weight 10 -LocalPort 80 -Protocol TCP -Stateful $true
    # Deny all other traffic in both directions.
    Add-VMNetworkAdapterExtendedAcl -ComputerName $vm.VMHost.FQDN -VMName $vm.Name -Direction Outbound -Action Deny -Weight 1
    Add-VMNetworkAdapterExtendedAcl -ComputerName $vm.VMHost.FQDN -VMName $vm.Name -Direction Inbound -Action Deny -Weight 1
    

    Example to remove port ACLs from a virtual machine:

    # Find the virtual machine in VMM and remove all extended port ACLs from its network adapter.
    $vm = Get-SCVirtualMachine -Name "<computername>"
    Get-VMNetworkAdapterExtendedAcl -ComputerName $vm.VMHost.FQDN -VMName $vm.Name | Remove-VMNetworkAdapterExtendedAcl
    

Troubleshooting tip

If your newly deployed forwarding gateway is not properly forwarding packets to the VM subnet configured for direct routing, double-check that you followed the previous procedure correctly. If you are still experiencing problems, make sure the frontend interfaces on the forwarding gateway are configured for forwarding. To do this, follow these steps:

  1. Log on to one of the forwarding gateway guest cluster virtual machines.

  2. From a Windows PowerShell administrator command prompt, use Get-NetIPInterface to examine the IP interfaces. Note the ifIndex number for the interface associated with your frontend network.

  3. Use Get-NetIPInterface -InterfaceIndex <ifindex for frontend interface> | Format-List, and examine the Forwarding parameter.

  4. If the Forwarding parameter is Disabled, enable it by using the following command: Get-NetIPInterface -InterfaceIndex <ifindex for frontend interface> | Set-NetIPInterface -Forwarding Enabled

  5. Repeat for each node in the forwarding gateway guest cluster.

  6. Repeat your test to verify that the forwarding gateway is properly forwarding packets to the VM network.

See also

Content type References

Product evaluation/getting started

Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM

Planning and design

Hybrid Cloud Multi-Tenant Networking Planning and Design Guide

Microsoft System Center: Building a Virtualized Network Solution
