
Connect hosting provider and tenant networks for hybrid cloud services: planning and design guide

Published: February 14, 2014

Updated: April 2, 2014

Applies To: System Center 2012 R2 Virtual Machine Manager, Windows Azure Pack, Windows Server 2012 R2

Use the following steps to plan and design the Connect hosting provider and tenant networks for hybrid cloud services solution. This guide describes the hardware and software that you’ll need for the solution, as well as the design and planning decisions for implementing it.

For an overview of this solution, we recommend that you review Connect hosting provider and tenant networks for hybrid cloud services first.

Tip
If you aren’t familiar with network virtualization concepts, also review Hyper-V Network Virtualization Overview and Hyper-V Network Virtualization technical details.

If you aren’t familiar with the network virtualization concepts in System Center 2012 R2 Virtual Machine Manager (VMM), we strongly recommend that you set up and run a test lab using the following test lab guide before you do any planning and design: Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

The test lab guide will help you understand Virtual Machine Manager concepts, and will help make it easier to plan, design, and configure this solution.

Also review the VMM concepts presented in Microsoft System Center: Building a Virtualized Network Solution for more information about planning and design considerations in a VMM-based solution.


The following diagram shows the four types of clusters that you’ll need for this solution.

Physical design of this solution

Physical clusters and VMs

Here are the physical hosts that we recommend for this solution. You can add physical hosts to further distribute the workloads to meet your specific requirements. Each host has four physical network adapters.

Physical host recommendation

 

Physical hosts Role in solution Types of virtual machines needed

2 hosts configured as a failover cluster

Management/infrastructure cluster:

Provides Hyper-V hosts for management/infrastructure workloads (VMM, SQL, Service Provider Foundation, guest clustered scale-out file server for gateway domain, domain controller).

  • Guest clustered SQL

  • Guest clustered VMM

  • Guest clustered scale-out file server for gateway domain

  • Service Provider Foundation endpoint

2 hosts configured as a failover cluster

Compute cluster:

Provides Hyper-V hosts for tenant workloads and Windows Azure Pack for Windows Server.

  • Tenant

  • Windows Azure Pack portal accessible from public networks

2 hosts configured as a failover cluster

Storage cluster:

Provides scale-out file server for management and infrastructure cluster storage.

None (this cluster just hosts file shares)

2 hosts configured as a failover cluster

Windows Server gateway cluster:

Provides Hyper-V hosts for the gateway.

For gateway physical host and gateway virtual machine configuration recommendations, see Windows Server Gateway Hardware and Configuration Requirements.

Guest clustered gateway

Physical network infrastructure

We recommend that you use a 10 Gb/s or faster network infrastructure. 1 Gb/s might be adequate for infrastructure and cluster traffic.

Here are the products and technologies that are part of this solution.

Important
When you start to deploy Hyper-V hosts and virtual machines, it’s extremely important to apply all available updates for the software and operating systems used in this solution. If you don’t do this, your solution might not function as expected.

 

Product/technology How it supports this solution

Windows Server 2012 R2

Provides the operating system base for this solution. We recommend using the Server Core installation option to reduce the attack surface.

Windows Server Gateway

Supports simultaneous multi-tenant site-to-site VPN connections, forwarding gateway functionality, and Network Virtualization using Generic Routing Encapsulation (NVGRE). This is a Windows Server 2012 R2 feature that is integrated with Virtual Machine Manager. For an overview of this technology, see Windows Server 2012 R2 Gateway Overview.

Windows Server Failover Clustering

Provides high availability for server roles. For an overview of this feature, see Failover Clustering overview.

Scale-Out File Server

Provides file shares for server application data with reliability, availability, manageability, and high performance. This solution uses two scale-out file servers: one for the domain that hosts the management servers and one for the domain that hosts the gateway servers. These two domains have no trust relationship. Because a scale-out file server cannot be accessed from an untrusted domain, the gateway domain needs its own scale-out file server, which is implemented as a virtual machine guest cluster.

For an overview of this feature, see Scale-Out File Server for application data overview.

For a more in-depth discussion of possible storage solutions, see Provide cost-effective storage for Hyper-V workloads by using Windows Server.

System Center 2012 R2 Virtual Machine Manager

Provides virtual network management (using NVGRE for network isolation), fabric management, and IP addressing. For an overview of this product, see Configuring Networking in VMM Overview.

Windows Server 2012 R2 IPAM (recommended)

Is integrated with Virtual Machine Manager for IP address space management. For an overview of this technology, see IP Address Management (IPAM) Overview.

Microsoft SQL Server 2012

Provides database services for Virtual Machine Manager and Windows Azure Pack.

Border Gateway Protocol (BGP) (optional)

Provides automatic routing, which supports multisite tenant network topologies. BGP is configured with Virtual Machine Manager when you create VM networks. For more information, see How to Create a VM Network in VMM in System Center 2012 R2.

For more information about BGP in Windows Server 2012 R2, see Border Gateway Protocol (BGP) with Windows Server 2012 R2.

Windows Azure Pack (recommended)

Provides a common self-service tenant portal, a common set of management APIs, and a common web and virtual machine hosting experience. For an overview of this product, see Windows Azure Pack for Windows Server.

System Center 2012 R2 Orchestrator

Provides Service Provider Foundation (SPF), which exposes an extensible OData web service that interacts with VMM. This enables service providers to design and implement multi-tenant self-service portals that integrate the IaaS capabilities available in System Center 2012 R2.

To help with capacity planning, you need to determine your tenant requirements. These requirements will then impact the resources that you need to have available for your tenant workloads. For example, you might need more Hyper-V hosts with more RAM and storage, or you might need faster LAN and WAN infrastructure to support the network traffic that your tenant workloads generate.

Use the following questions to help you plan for your tenant requirements.

 

Design consideration Design effect

How many tenants do you expect to host, and how fast do you expect that number to grow?

Determines how many Hyper-V hosts you’ll need to support your tenant workloads.

What kind of workloads do you expect your tenants to move to your network?

Can determine the amount of RAM, storage, and network throughput (LAN and WAN) that you make available to your tenants.

What is your failover agreement with your tenants?

Affects your cluster configuration and other failover technologies that you deploy.

For more information about physical compute planning considerations, see section “3.1.6 Physical compute resource: hypervisor” in the Design options guide in Cloud Infrastructure Solution for Enterprise IT.
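The considerations above can be roughed out numerically before you commit to hardware. The following Python sketch is an illustrative capacity estimate only; the per-tenant workload figures and the per-host RAM capacity in the example are hypothetical assumptions, not recommendations from this guide.

```python
import math

def hosts_needed(tenants, vms_per_tenant, ram_per_vm_gb, ram_per_host_gb,
                 headroom=0.25):
    """Estimate the number of Hyper-V compute hosts from tenant count
    and workload size.

    headroom reserves a fraction of each host's RAM for failover
    capacity and usage spikes. All inputs are hypothetical planning
    figures, not product requirements.
    """
    total_ram = tenants * vms_per_tenant * ram_per_vm_gb
    usable_per_host = ram_per_host_gb * (1 - headroom)
    return math.ceil(total_ram / usable_per_host)

# Example: 50 tenants, 4 VMs each at 8 GB RAM, on 256 GB hosts
# with 25% headroom reserved.
print(hosts_needed(50, 4, 8, 256))  # 9
```

A RAM-based estimate like this is only a starting point; storage IOPS and LAN/WAN throughput can become the limiting factor first, depending on the workloads your tenants move to your network.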

You should plan your failover cluster strategy based on your tenant requirements and your own risk tolerance. For example, the minimum we recommend is to deploy the management, compute, and gateway hosts as two-node clusters. You can choose to add more nodes to your clusters, and you can guest cluster the virtual machines running SQL, Virtual Machine Manager, Windows Azure Pack, and so on.

For this solution, you configure the scale-out file servers, compute Hyper-V hosts, management Hyper-V hosts, and gateway Hyper-V hosts as failover clusters. You also configure the SQL, Virtual Machine Manager, and gateway guest virtual machines as failover clusters. This configuration provides protection from potential physical computer and virtual machine failure.

You’ll need to choose a SQL option for high availability for this solution. SQL Server 2012 has several options:

  • AlwaysOn Failover Cluster Instances

    This option provides local high availability through redundancy at the server-instance level—a failover cluster instance.

  • AlwaysOn Availability Groups

    This option enables you to maximize availability for one or more user databases.

For more information see Overview of SQL Server High-Availability Solutions.

For the SQL high availability option for this solution, we recommend AlwaysOn Failover Cluster Instances. With this design, all the cluster nodes are located in the same network, and shared storage is available, which makes it possible to deploy a more reliable and stable failover cluster instance. If shared storage is not available and your nodes span different networks, AlwaysOn Availability Groups might be a better solution for you.

You need to plan how many gateway guest clusters are required. The number you need to deploy depends on the number of tenants that you need to support. The hardware requirements for your gateway Hyper-V hosts also depend on the number of tenants that you need to support and the tenant workload requirements.

For Windows Server Gateway configuration recommendations, see Windows Server Gateway Hardware and Configuration Requirements.

For capacity planning purposes, we recommend one gateway guest cluster per 100 tenants.
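The sizing rule above (one gateway guest cluster per 100 tenants) can be expressed as a simple ceiling calculation. The tenant counts in this Python sketch are illustrative only.

```python
import math

GATEWAY_CLUSTER_CAPACITY = 100  # recommended tenants per gateway guest cluster

def gateway_clusters_needed(tenant_count: int) -> int:
    """Return the number of gateway guest clusters for a given tenant count."""
    if tenant_count <= 0:
        return 0
    return math.ceil(tenant_count / GATEWAY_CLUSTER_CAPACITY)

# Example: 250 tenants require 3 gateway guest clusters.
print(gateway_clusters_needed(250))  # 3
```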

 

Design consideration Design effect

How will your tenants connect to your network?

  • If tenants connect through a site-to-site VPN, you can use Windows Server Gateway as your VPN termination and gateway to the virtual networks.

    This is the configuration that is covered by this planning and design guide.

  • If you use a non-Microsoft VPN device to terminate the VPN, you can use Windows Server Gateway as a forwarding gateway to the tenant virtual networks.

  • If a tenant connects to your service provider network through a packet-switched network, you can use Windows Server Gateway as a forwarding gateway to connect them to their virtual networks.

Important
You must deploy a separate forwarding gateway for each tenant that requires a forwarding gateway to connect to their virtual network.

The design for this solution has tenants connect to the gateway through a site-to-site VPN. Therefore, we recommend deploying a Windows Server Gateway that uses VPN connections. You can configure a two-node Hyper-V host failover cluster with a two-node guest failover cluster by using predefined service templates available on the Microsoft Download Center (for more information, see How to Use a Server Running Windows Server 2012 R2 as a Gateway with VMM).

For this solution, you use Virtual Machine Manager to define logical networks, VM networks, port profiles, logical switches, and gateways to organize and simplify network assignments. Before you create these objects, you need to have your logical and physical network infrastructure plan in place.

In this step, we provide planning examples to help you create your network infrastructure plan.

The diagram shows the networking design that we recommend for each of the physical nodes in the management, compute, and gateway clusters.

Networking design for cluster nodes

Compute and Management node network interfaces

You need to plan for several subnets and VLANs for the different types of traffic that are generated, such as management/infrastructure, network virtualization, external (outbound), clustering, storage, and live migration. You can use VLANs to isolate the network traffic at the switch.

For example, you could plan the following networks:

Subnet/VLAN plan

 

Line speed (Gb/s) Purpose Address VLAN Comments

1

Management/Infrastructure

172.16.0.0/23

2040

Network for management and infrastructure. Addresses can be static or dynamic and are configured in Windows.

10

Network Virtualization

10.0.0.0/24

2044

Network for the VM network traffic. Addresses must be static and are configured in Virtual Machine Manager.

10

External

131.107.0.0/24

2042

External, Internet-facing network. Addresses must be static and are configured in Virtual Machine Manager.

1

Clustering

10.0.1.0/24

2043

Used for cluster communication. Addresses can be static or dynamic and are configured in Windows.

10

Storage

10.20.31.0/24

2041

Used for storage traffic. Addresses can be static or dynamic and are configured in Windows.
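Overlapping subnets or reused VLAN IDs are easy to miss in a plan like this, so a quick script can sanity-check the table. The following Python sketch mirrors the example plan above (with the management subnet written as the 172.16.0.0/23 network ID); it is a planning aid only, not part of the solution deployment.

```python
import ipaddress

# Example subnet/VLAN plan from the table above (illustrative addresses).
plan = {
    "Management/Infrastructure": ("172.16.0.0/23", 2040),
    "Network Virtualization": ("10.0.0.0/24", 2044),
    "External": ("131.107.0.0/24", 2042),
    "Clustering": ("10.0.1.0/24", 2043),
    "Storage": ("10.20.31.0/24", 2041),
}

def validate_plan(plan):
    """Check that no subnets overlap and no VLAN ID is reused."""
    nets = {name: ipaddress.ip_network(cidr) for name, (cidr, _) in plan.items()}
    names = list(nets)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if nets[a].overlaps(nets[b]):
                raise ValueError(f"{a} overlaps {b}")
    vlans = [vlan for _, vlan in plan.values()]
    if len(vlans) != len(set(vlans)):
        raise ValueError("duplicate VLAN IDs in plan")
    return True

print(validate_plan(plan))  # True if the plan is internally consistent
```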

VMM logical network plan

You can plan the following example logical networks, as defined in VMM:

 

Name IP pools and network sites Notes

External

  • Rack01_External

    • 131.107.0.0/24, VLAN 2042

    • All Hosts

Host Networks

  • Rack01_LiveMigration

    • 10.0.3.0/24, VLAN 2045

    • All Hosts

  • Rack01_Storage

    • 10.20.31.0/24, VLAN 2041

    • All Hosts

Infrastructure

  • Rack01_Infrastructure

    • 172.16.0.0/24, VLAN 2040

    • All Hosts

Network Virtualization

  • Rack01_NetworkVirtualization

    • 10.0.0.0/24, VLAN 2044

    • All Hosts

VMM VM network plan

You can plan the following example VM networks, as defined in VMM:

 

Name IP pool address range Notes

External

None

Live migration

10.0.3.1 – 10.0.3.254

Management

None

Storage

10.20.31.1 – 10.20.31.254
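The IP pool ranges in the table span the first to the last usable host address of each subnet. If you script your planning, you can derive the ranges from the subnet plan; this Python sketch is illustrative only.

```python
import ipaddress

def pool_range(cidr: str) -> tuple:
    """Return the first and last usable host address of a subnet,
    matching the .1 - .254 style pools in the table above."""
    net = ipaddress.ip_network(cidr)
    hosts = list(net.hosts())
    return str(hosts[0]), str(hosts[-1])

# Example: the storage subnet from the plan.
print(pool_range("10.20.31.0/24"))  # ('10.20.31.1', '10.20.31.254')
```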

After you install Virtual Machine Manager, you can create a logical switch and uplink port profiles. You then configure the hosts on your network to use a logical switch, together with virtual network adapters attached to the switch. For more information about logical switches and uplink port profiles, see Configuring Ports and Switches for VM Networks in VMM.

For example, you can plan the following uplink port profiles, as defined in VMM:

VMM uplink port profile plan

 

Name General property Network configuration

Rack01_Gateway

  • Load Balancing Algorithm: Host Default

  • Teaming mode: LACP

Network sites:

  • Rack01_External, Logical Network: External

  • Rack01_LiveMigration, Logical Network: Host Networks

  • Rack01_Storage, Logical Network: Host Networks

  • Rack01_Infrastructure, Logical Network: Infrastructure

  • Network Virtualization_0, Logical Network: Network Virtualization

Rack01_Compute

  • Load Balancing Algorithm: Host Default

  • Teaming mode: LACP

Network sites:

  • Rack01_External, Logical Network: External

  • Rack01_LiveMigration, Logical Network: Host Networks

  • Rack01_Storage, Logical Network: Host Networks

  • Rack01_Infrastructure, Logical Network: Infrastructure

  • Network Virtualization_0, Logical Network: Network Virtualization

Rack01_Infrastructure

  • Load Balancing Algorithm: Host Default

  • Teaming mode: LACP

Network sites:

  • Rack01_LiveMigration, Logical Network: Host Networks

  • Rack01_Storage, Logical Network: Host Networks

  • Rack01_Infrastructure, Logical Network: Infrastructure
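Each network site that an uplink port profile references must belong to a logical network in your logical network plan. The following Python sketch cross-checks the example tables above (using the Network Virtualization_0 site name as it appears in the port profile table); it is an illustrative planning aid, not a VMM API.

```python
# Logical networks and their network sites, mirroring the example plan.
logical_networks = {
    "External": {"Rack01_External"},
    "Host Networks": {"Rack01_LiveMigration", "Rack01_Storage"},
    "Infrastructure": {"Rack01_Infrastructure"},
    "Network Virtualization": {"Network Virtualization_0"},
}

# Network sites referenced by each uplink port profile.
uplink_profiles = {
    "Rack01_Gateway": ["Rack01_External", "Rack01_LiveMigration",
                       "Rack01_Storage", "Rack01_Infrastructure",
                       "Network Virtualization_0"],
    "Rack01_Compute": ["Rack01_External", "Rack01_LiveMigration",
                       "Rack01_Storage", "Rack01_Infrastructure",
                       "Network Virtualization_0"],
    "Rack01_Infrastructure": ["Rack01_LiveMigration", "Rack01_Storage",
                              "Rack01_Infrastructure"],
}

def undefined_sites(profiles, networks):
    """Return site names referenced by profiles but missing from networks."""
    defined = set().union(*networks.values())
    referenced = {site for sites in profiles.values() for site in sites}
    return referenced - defined

print(undefined_sites(uplink_profiles, logical_networks))  # set() -> consistent
```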

You can plan the following example logical switch using these uplink port profiles, as defined in VMM:

VMM logical switch plan

 

Name Extension Uplink Virtual port

VMSwitch

Microsoft Windows Filtering Platform

  • Rack01_Compute

  • Rack01_Gateway

  • Rack01_Infrastructure

  • High bandwidth

  • Infrastructure

  • Live migration workload

  • Low bandwidth

  • Medium bandwidth

We recommend that you isolate the heaviest traffic loads on your fastest network links. For example, consider isolating your storage network traffic and network virtualization traffic on separate fast links. If you must use slower network links for some of your heavy traffic loads, you might consider NIC teaming.

Important
If you use jumbo frames in your network environment, you might need to make some configuration adjustments when you deploy. For more information, see Windows Server 2012 R2 Network Virtualization (NVGRE) MTU reduction.

If you use Windows Azure Pack for your tenant self-service portal, there are numerous options you can configure to offer your tenants. This solution includes some of the VM Clouds features, but there are many more options available to you, not only with VM Clouds, but also with Web Site Clouds, Service Bus Clouds, SQL Servers, MySQL Servers, and more. For more information about Windows Azure Pack features, see Windows Azure Pack for Windows Server.

After reviewing the Windows Azure Pack documentation, determine which services you want to deploy. This solution includes some of the VM Clouds features, using an Express deployment with all the Windows Azure Pack components installed on a single virtual machine. If you decide to deploy in production, you should use a distributed deployment and plan for the additional resources required.

To determine your host requirements for a production distributed deployment, see Windows Azure Pack architecture.

Use a distributed deployment if you decide to deploy Windows Azure Pack in production. If you want to evaluate Windows Azure Pack features before deploying in production, use the Express deployment. For this solution, you use the Express deployment to demonstrate the VM Clouds service. You deploy Windows Azure Pack on a single virtual machine located on the compute cluster so that the web portals can be accessed from the external (Internet) network. Then, you deploy Service Provider Foundation on a virtual machine located on the management cluster.

After you’ve completed these planning and design steps, see Connect hosting provider and tenant networks for hybrid cloud services for steps to implement this solution.

© 2014 Microsoft