
Connect hosting provider and tenant networks for hybrid cloud services

Published: June 24, 2013

Updated: April 2, 2014

Applies To: System Center 2012 R2 Virtual Machine Manager, Windows Azure Pack, Windows Server 2012 R2

Who is this guide intended for? Hosting providers who want to offer a hybrid IT solution to their customers.

How can this guide help you? You can use this solution guide to understand the high-level solution design and implementation steps that we recommend to address a challenging cross-technology problem.

This guide describes a hybrid cloud solution that enables a hosting provider to connect multiple tenant networks to their network and offer secure network isolation to each tenant.

The following diagram illustrates the problem that this solution guide addresses.

[Diagram: Hybrid cloud multi-tenant networking, showing tenants connecting to a hosting provider]


This section describes the scenario, problem, and goals for an example organization.

Scenario

As an example, your organization is a medium-sized hosting provider that offers managed services, including infrastructure as a service (IaaS). You’ve seen strong interest from enterprise customers in moving some of their workloads to the cloud while maintaining connectivity back to their on-premises networks.

Your organization provides a hybrid cloud service that makes it possible for a customer to create a virtual network that spans your cloud infrastructure and the customer’s on-premises network, using the customer’s existing private IP address space.

Marketing has been so successful that demand for your service is rising rapidly. Given the inefficiencies in the current implementation, this rapid rise in demand makes it imperative to lower operating expenses so that the service remains profitable.

Problem statement

Your organization has found that the current hybrid cloud solution doesn’t scale well, and is inefficient and expensive to operate. For example:

  • Previous configurations require two virtual machine network virtualization gateways for every tenant (for redundancy), and each pair of gateways requires a public IP address. As the number of tenants increases, the number of virtual machine gateways increases linearly, and they become difficult to administer. The costs associated with all of these network virtualization gateways add up quickly, which makes the current solution cost-prohibitive.

  • Connecting multiple sites per tenant requires a virtual machine gateway for each tenant site.

  • Without an industry standard routing protocol, an administrator must manually administer network routes. This is inefficient and subject to configuration errors.

  • Using VLANs for network isolation limits the number of networks that can be supported.

The overall problem you want to solve is:
As a hosting provider, how can you provide tenants with a simpler, cost-effective way to connect their networks with secure network isolation in a hybrid cloud service?

Your organization wants to deploy gateways that can connect multiple tenant networks, and multiple sites per tenant, at the same time. You also want clustered gateways that offer redundancy and connection preservation in case of failure, with the option to deploy multiple gateways to meet throughput requirements. In addition, you want to use an industry-standard routing protocol and a scalable virtual network isolation protocol that isn’t limited by current VLAN technologies.

Your organization also needs an easy-to-use management interface that includes IP address space management, together with an easy-to-use self-service tenant portal, to make the hybrid cloud simple and efficient to deploy.

Organization goals

To summarize, your organization wants to do the following in this hybrid cloud solution:

  • Use a single gateway to connect multiple tenant sites in a hybrid cloud service, which means that you don’t need multiple gateways using multiple public IP addresses for each tenant. This solution scales well, which allows you to connect more tenants with fewer resources.

  • Isolate tenant networks using network virtualization, which scales better than VLANs. VLANs are typically implemented with a limit of about 1000 different identifiers, which limits the number of tenant networks you can support. Network virtualization can support thousands of tenants, without any of the constraints imposed by VLANs, switches, and physical network locations.

  • Manage this hybrid cloud offering using an easy-to-use management interface that allows you to manage your virtual networks, IP address spaces, and gateways all in one location. This makes it easier and more efficient to manage many tenants at a time.

  • Offer tenants a common self-service portal, which allows them to efficiently place their computing resources where they best meet their business needs. You can offer tenants a customizable portal compatible with Windows Azure, which uses the same REST-based (Representational State Transfer) Service Management API.

  • Offer tenants easy-to-follow guidance to connect their on-premises network to the hosting provider network through a secure site-to-site virtual private network (VPN). Router configuration guidance includes required protocols, settings, and end-point addresses.

This section describes the solution design that addresses the problem described in the previous section and provides high-level planning considerations for this design.

The following diagram shows the design for this solution, which connects each tenant’s network to the hosting service provider network using a site-to-site VPN tunnel and Border Gateway Protocol (BGP) for automatic routing table synchronization. Each tenant must configure their own gateway to connect to the hosting provider gateway. The gateway then isolates each tenant’s network data using the Network Virtualization using Generic Routing Encapsulation (NVGRE) protocol for network virtualization.
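
For illustration only, the following minimal sketch shows how BGP might be enabled on a Windows Server 2012 R2 RRAS gateway by using the RemoteAccess module cmdlets. The router ID, AS numbers, and peer address are hypothetical placeholders, not values prescribed by this solution:

    # Enable BGP routing on the gateway (hypothetical router ID and private ASN)
    Add-BgpRouter -BgpIdentifier 10.254.254.1 -LocalASN 64512

    # Peer with the gateway on the other end of the site-to-site tunnel
    Add-BgpPeer -Name "Tenant1GW" -LocalIPAddress 10.254.254.1 -PeerIPAddress 10.254.254.2 -PeerASN 64513

    # Verify that routes are being learned automatically
    Get-BgpRouteInformation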

Important
If you use jumbo frames in your network environment, you may need to make some configuration adjustments when you deploy. For more information, see Windows Server 2012 R2 Network Virtualization (NVGRE) MTU reduction.
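
As a quick, generic verification (not specific guidance from the MTU topic referenced above), you can review the effective MTU and jumbo frame settings on each Hyper-V host:

    # List the IPv4 MTU for each interface on the host
    Get-NetIPInterface -AddressFamily IPv4 | Format-Table InterfaceAlias, NlMtu

    # Inspect the jumbo packet setting on the physical adapters (property names vary by driver)
    Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet"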

[Diagram: Multi-tenant networking solution architecture]

The following table lists the elements that are part of this solution design and describes the reason for the design choice.

 

Solution design element | Why is it included in this solution?

Windows Server 2012 R2 Gateway

Is integrated with Virtual Machine Manager to support simultaneous, multi-tenant site-to-site VPN connections and network virtualization using NVGRE. For an overview of this technology, see Windows Server Gateway.

System Center 2012 R2 Virtual Machine Manager

Manages virtual networks (using NVGRE for network isolation), fabric management, and IP addressing. For an overview of this product, see Configuring Networking in VMM Overview.

Failover Clustering

Provides high availability. All the physical hosts are configured as failover clusters, as are many of the virtual machine guests that host management and infrastructure workloads.

The site-to-site VPN gateway can be deployed in a 1+1 configuration for high availability. For more information about Failover Clustering, see Failover Clustering overview.

Scale-out File Server

Provides a high availability storage solution. For a more in-depth discussion regarding a potential storage solution, see Provide cost-effective storage for Hyper-V workloads by using Windows Server.

Site-to-site VPN

Provides a way to connect a tenant site to the hosting provider site. This connection method is cost-effective, and VPN software is included with Remote Access in Windows Server 2012 R2. (Remote Access brings together the Routing and Remote Access service (RRAS) and DirectAccess.) VPN software and hardware are also available from multiple suppliers.

Border Gateway Protocol (BGP) (optional)

Connects multiple tenant sites to the service provider cloud over site-to-site connections, with automatic route learning across multiple connections. If your tenants have a simple, single-site implementation, there may be no need to use BGP.

Network Address Translation (NAT) (optional)

Provides a way for applications running in the tenant’s virtualized network to directly access public sites on the Internet using the built-in NAT functionality, rather than accessing the Internet through the site-to-site connection and the tenant’s on-premises network.

Windows Server 2012 R2 IPAM

Provides IP address space management. In Windows Server 2012 R2, IPAM is integrated with Virtual Machine Manager. For an overview of this technology, see IP Address Management (IPAM) Overview.

Windows Azure Pack

Provides a self-service portal for tenants to manage their own virtual networks. With Windows Azure Pack, you can deploy a common self-service experience, a common set of management APIs, and an identical website and virtual machine hosting experience. Tenants can take advantage of the common interfaces (such as Service Provider Foundation), which frees them to move their workloads where it makes the most sense for their business or for their changing requirements. For an overview of this product, see Windows Azure Pack for Windows Server.

Windows Server 2012 R2 together with System Center 2012 R2 Virtual Machine Manager (VMM) gives hosting providers a multi-tenant gateway solution that supports multiple site-to-site VPN tenant connections, Internet access for tenant virtual machines through a gateway NAT feature, and forwarding gateway capabilities for private cloud implementations. Hyper-V Network Virtualization provides tenant virtual network isolation with NVGRE, which allows tenants to bring their own address space and gives hosting providers better scalability than is possible using VLANs for isolation.

VMM offers a user interface to manage the gateways, virtual networks, virtual machines and other fabric items.

When planning this solution, you need to consider the following:

  • High availability design for the servers running Hyper-V, guest virtual machines, SQL Server, gateways, VMM, and other services

  • Tenant virtual machine Internet access requirements

  • Infrastructure physical hardware capacity and throughput

  • Site-to-site connection throughput

  • Network isolation technologies

  • Authentication mechanisms

  • IP addressing

For more planning and design information for this solution, see the Connect hosting provider and tenant networks for hybrid cloud services: planning and design guide.

Important
When you deploy Hyper-V hosts and virtual machines, it is extremely important to apply all available updates for the software and operating systems used in this solution. If you don’t, your solution may not function as expected.

You can use the steps in this section to implement the solution. Make sure to verify the correct deployment of each step before proceeding to the next step.


  1. Prepare to implement hybrid cloud multi-tenant networking.

    Use the Connect hosting provider and tenant networks for hybrid cloud services: planning and design guide to plan and design your solution.

    After you complete this step, verify that you have created a plan for implementing this solution that considers your specific requirements and existing infrastructure.

  2. Deploy (or identify) an Active Directory domain.

    Your management, compute, and scale-out file servers will join this domain. Alternatively, identify an existing Active Directory domain that can host these servers.

  3. Deploy (or identify) a second Active Directory domain.

    This second Active Directory domain hosts your gateway Hyper-V host servers and a scale-out file server for gateway storage. For security reasons, this second domain should have no trust relationship with your infrastructure domain.

    Important
    Ensure both domains can resolve names in the other domain. For example, you can configure a forwarder at each DNS server to point to the DNS server in the other domain.
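
    For example, on a DNS server in each domain you could add a conditional forwarder for the other domain by using the DnsServer module. The domain name and address below are hypothetical placeholders:

      # Run on a DNS server in the management domain to forward gateway-domain queries
      Add-DnsServerConditionalForwarder -Name "gateway.contoso.com" -MasterServers 10.0.2.10

      # Repeat in the other direction on a DNS server in the gateway domain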

  4. Deploy the storage nodes and clusters for the management domain.

    A scale-out file server hosts the storage for this solution as file shares. This scale-out file server is configured on physical hosts in the management domain. An additional scale-out file server for the gateway domain is implemented in virtual machines later on the management cluster. For more information about deploying a scale-out file server, see Deploy Scale-Out File Server.

  5. Deploy the management nodes and clusters.

    Note
    You’ll need to create a temporary virtual switch using Hyper-V Manager so you can install and configure your virtual machines. After VMM is installed, you can define a logical switch in VMM, delete the virtual switch defined in Hyper-V, and configure your hosts to use a virtual switch based on the logical switch defined in VMM.
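
    As a minimal sketch, the temporary switch can be created and later removed with the Hyper-V module. The switch and adapter names are hypothetical:

      # Create a temporary external switch bound to one physical adapter
      New-VMSwitch -Name "TempSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true

      # Later, after the VMM logical switch is in place, remove the temporary switch
      Remove-VMSwitch -Name "TempSwitch"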

    This host cluster will host the SQL Server, VMM, Service Provider Foundation (SPF) server, and scale-out file server (for the gateway domain) virtual machines. The scale-out file server for the gateway domain is implemented in virtual machines and joined to the gateway domain. For more information, see the related TechNet topics.

    Important
    Deploy all the virtual machines on one host cluster node for now. After the networking features are configured in VMM, you can load balance the virtual machines across the host cluster nodes.

    1. Deploy the SQL guest cluster.

      For information about deploying a SQL Server failover cluster instance, see the SQL Server documentation.

    2. Deploy VMM.

      For information about how to do this, see Deploying System Center 2012 - Virtual Machine Manager. For this solution, you use VMM to deploy and manage your gateway and other network features.

      1. Install VMM on a guest cluster.

        For information about how to do this, see the related TechNet topics.



      2. Add a library server, using a share on your scale-out file server. For more information, see How to Add a VMM Library Server or VMM Library Share. When you are prompted to type the computer name, type the name you used when you configured the scale-out file server role. Do not use the cluster name.

        Important
        When you add a library server, ensure that you use a user account that is different from your VMM service account. If you don’t do this, VMM will silently fail to add the library server and you won’t see any job history indicating an error has occurred.

      3. Disable the Create logical networks automatically setting before you add any hosts. You’ll manually create logical networks with specific settings later. This setting is located in Settings, Network Settings.

      4. Add the designated Hyper-V hosts as VMM hosts.

        Add the management cluster and the Scale-Out File Server cluster. You’ll add the compute host cluster later.

        You should add the Scale-Out File Server cluster in the Fabric, Storage, File Servers category. You should add the management cluster (and eventually the compute cluster) under All Hosts. To help organize the hosts, you should create additional host groups (for example, Compute, and Management) and place the appropriate clusters in the host groups.

        Important
        When you deploy a scale-out file server for the gateway domain, you need to open the public Windows Remote Management (HTTP-In) port on both nodes of the guest cluster. This port needs to be opened because the VMM server and gateway cluster exist in separate, untrusted domains and that port is not open by default for the Public profile.
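
        One way to open this rule on each guest cluster node is with the NetSecurity cmdlets, as sketched below. The display name shown is the built-in rule name in Windows Server 2012 R2:

          # Enable the WinRM HTTP-In rule for the Public profile on this node
          Get-NetFirewallRule -DisplayName "Windows Remote Management (HTTP-In)" |
              Where-Object { $_.Profile -match "Public" } | Enable-NetFirewallRule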

        For more information, see Adding Windows Servers as Hyper-V Hosts in VMM Overview.

        To see an example procedure, see “To add HNVHOST1, HNVHOST2, and HNVHOST3 as VMM Hosts” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

      5. Add file share storage.

        After you add the cluster, you can configure storage locations for the virtual machines that are deployed to nodes in the cluster. Right-click the cluster in VMM, click Properties and then click File Share Storage, and add a share from your scale-out file server.

      6. Create the planned logical networks and associated IP pools.

        For this solution, you can create a logical network for External (Internet), Infrastructure, Host Networks (with Cluster IP Pool and Live Migration IP Pool), and Network Virtualization networks. Note that these are sample names—you can use your own names according to your plan. Create the appropriate IP pools for each logical network according to your plan, making sure that the IP address ranges don’t overlap with any existing IP addresses in use.

        You configure the Host Networks logical network as a VLAN-based independent network, and configure the others as One connected network.

        For more information, see How to Create a Logical Network in VMM.

        To see an example procedure in a test environment, see “Define logical networks with associated IP pools” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.
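
        For illustration, a logical network, network site, and IP pool can also be created with the VMM cmdlets. All names and address ranges below are hypothetical placeholders for the values in your plan:

          # Create a logical network and a network site (logical network definition)
          $ln   = New-SCLogicalNetwork -Name "Infrastructure"
          $vlan = New-SCSubnetVLan -Subnet "10.0.1.0/24" -VLanID 0
          $site = New-SCLogicalNetworkDefinition -Name "Infrastructure_Site" -LogicalNetwork $ln -SubnetVLan $vlan -VMHostGroup (Get-SCVMHostGroup -Name "Management")

          # Create a static IP pool on the site; avoid addresses that are already in use
          New-SCStaticIPAddressPool -Name "Infrastructure Pool" -LogicalNetworkDefinition $site -Subnet "10.0.1.0/24" -IPAddressRangeStart "10.0.1.100" -IPAddressRangeEnd "10.0.1.199"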

      7. Create VM networks for the Infrastructure, External (Internet), Live Migration, and Storage logical networks.

        Create an IP address pool for the Storage and Live Migration networks, using the appropriate address range according to your plan.

        For more information, see How to Create a VM Network in VMM in System Center 2012 R2.

        To see an example procedure in a test environment, see “Define VM networks” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

      8. Create the uplink port profiles.

        Create a Gateway, Compute, and Infrastructure Uplink port profile. Configure the Host Default Load balancing algorithm and the Link Aggregation Control Protocol (LACP) teaming mode (assuming that your switch supports LACP). Select all the network sites for the network configuration of your Compute and Gateway port profile, and the Live Migration, Storage, and Infrastructure sites for your Infrastructure profiles.

        For more information, see Configuring Ports and Switches for VM Networks in VMM.

        To see an example procedure in a test environment, see “Create port profiles and logical switches” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.
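
        As a hedged sketch, one uplink port profile can also be created with the VMM cmdlets. The profile name is hypothetical, and $sites is assumed to hold the network site objects you selected per your plan:

          # Create an uplink port profile that teams with LACP and Host Default load balancing
          New-SCNativeUplinkPortProfile -Name "Infrastructure Uplink" -LogicalNetworkDefinition $sites -LBFOLoadBalancingAlgorithm "HostDefault" -LBFOTeamMode "Lacp" -EnableNetworkVirtualization $false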

      9. Create the logical switch.

        Select the Microsoft Windows Filtering Platform for the Extensions, select Team for the Uplink mode, and add the three uplink port profiles that you created previously.

        Add the following virtual ports: High bandwidth, Infrastructure, Live migration workload, Low bandwidth, and Medium bandwidth.

      10. Create a teamed virtual switch on a management node.

        Add a virtual switch to the management host cluster node. This is the node that doesn’t have any virtual machines associated with it.

        To do this in VMM, locate the host node on the Fabric, Servers pane, right-click the host, and click Properties. Click Virtual Switches, and then click New Virtual Switch.

        Add your two fastest physical adapters to form a team and choose the Infrastructure Uplink port profile. Then add two virtual network adapters for Live Migration and Storage.



        When you’re done, verify that your virtual switch looks similar to the following:



        [Screenshot: Virtual switch with the teamed uplink and the Live Migration and Storage virtual network adapters]

        Important
        You might need to make some configuration changes to the physical switch ports that these network adapters are connected to. If you’re using LACP for teaming, you’ll need to configure the switch ports for LACP. If your switch ports are configured in Access Mode (for untagged packets), you need to configure them in Trunk Mode, because tagged packets will be coming from the teamed adapters.

        For more information, see How to Configure Network Settings on a Host by Applying a Logical Switch in VMM.

        Tip
        For troubleshooting purposes, you can use the following Windows PowerShell cmdlets:

        Get-NetLbfoTeam, Get-NetLbfoTeamMember, and Get-NetLbfoTeamNic

        To see other related cmdlets, type Get-Command *lbfo*.

      11. Configure your migration settings.

        Now that you have your live migration adapter configured on the virtual switch, you can configure your migration settings. To do this, on each node property page, click Migration Settings. Configure your desired settings, and ensure your live migration subnet address has been added and is at the top of the list. The subnet is actually entered as a single IP address with a 32-bit mask: x.x.x.x/32. So, if your live migration virtual network adapter’s address is 10.0.3.6, then the Migration Settings page may look similar to the following:



        [Screenshot: Migration Settings page with the live migration subnet listed first]
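
        If you prefer to script the equivalent host setting, the Hyper-V cmdlets can add the live migration network directly. The address below is the hypothetical example from the preceding paragraph:

          # Add the live migration virtual adapter address as a /32 migration network
          Add-VMMigrationNetwork 10.0.3.6/32

          # Confirm the migration network list
          Get-VMMigrationNetwork
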
      12. Live migrate your virtual machines.

        Now that you have a host configured with a virtual switch configured using VMM, you can migrate your virtual machines to it so you can prepare the other node the same way.

        To migrate your virtual machines, in VMM, select the VMs and Services workspace, select the node in your management cluster that has the virtual machines running on it, right-click the running virtual machine, and click Migrate Virtual Machine. Select the other node and move the virtual machine.
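
        The same move can be scripted with the VMM cmdlets. The virtual machine and host names here are hypothetical:

          # Live migrate one virtual machine to the other management node
          $vm = Get-SCVirtualMachine -Name "SQLNODE1"
          Move-SCVirtualMachine -VM $vm -VMHost (Get-SCVMHost -ComputerName "MGMT02")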

      13. Delete the virtual switch that was originally created using Hyper-V Manager.

        Now that you have moved the virtual machines, you can delete the original virtual switch that you created with Hyper-V Manager.

      14. Create a new teamed virtual switch using VMM.

        After you delete the old virtual switch, you can create a new teamed virtual switch like you did with the previous node. Follow the previous step to create the virtual switch on this node using VMM.

      15. Live migrate some virtual machines back.

        Now that you have both nodes configured with a teamed virtual switch using VMM, you can migrate some of the virtual machines back. For example, move one of the SQL guest cluster nodes so that you have the guest cluster nodes split across the host cluster nodes. Do this for all the other guest clusters.

      After this step is complete, both of your management host cluster nodes should be running the management virtual machines, with host networking configured through VMM.

  6. Deploy the compute nodes and clusters.

    This Hyper-V cluster hosts the tenant virtual machines and the Windows Azure Pack portal server.

    You can install the compute Hyper-V cluster in a manner similar to the way you installed the management cluster:

    1. Deploy the Hyper-V hosts and join the management domain.

    2. Cluster the hosts and add the cluster to your VMM Compute Host group.

    3. Create the teamed virtual switch and the live migration and storage virtual adapters for both host nodes like you did for both of the management nodes. When you team the physical adapters, use the Compute Uplink port profile for the adapters.

    4. Add file share storage.

      Configure a storage location for the virtual machines deployed to nodes in the cluster. Right-click the cluster in VMM, click File Share Storage, and add a share from your scale-out file server.

  7. Deploy the gateway.

    To deploy the Windows Server gateway in Windows Server 2012 R2, you deploy a dedicated Hyper-V host cluster and then deploy the gateway virtual machines using VMM. The Windows Server gateway provides a connection point for multiple tenant site-to-site VPN connections. You follow a similar procedure to deploy the physical hosts, but then you use a VMM service template to deploy the guest cluster virtual machines.

    To deploy the Windows Server gateway, use the following procedure:

    1. Deploy the Hyper-V hosts and join the gateway domain.

    2. Cluster the hosts and add the cluster to your VMM Gateway Host group.

    3. Create the teamed virtual switch and the live migration and storage virtual adapters for both host nodes like you did for both of the management and compute nodes. When you team the physical adapters, use the Gateway Uplink port profile for the adapters.

    4. Add file share storage.

      Configure a storage location for the virtual machines that are deployed to nodes in the cluster. Right-click the cluster in VMM, click Properties. Then click File Share Storage and add a share from your gateway scale-out file server.

    5. Ensure that you have a file share available from VMM (where you have a Windows Server 2012 R2 .vhd or .vhdx file available). This file will be used by the VMM service template to deploy the gateway virtual machines.

    6. Configure hosts as gateway hosts.

      You must configure each gateway Hyper-V host as a dedicated network virtualization gateway. In VMM, right-click a gateway host and click Properties. Click Host Access and select the check box for This host is a dedicated network virtualization gateway, as a result it is not available for placement of virtual machines requiring network virtualization.

    7. To deploy the gateway virtual machines, follow the procedures in the following topic: How to Use a Server Running Windows Server 2012 R2 as a Gateway with VMM and deploy using the 3-NIC HA Gateway service template.

      The service template that you use to deploy the gateway includes a Quick Start Guide document. This document includes information about how to set up the infrastructure for the gateway deployment, which is similar to the information provided in this solution guide. You can skip the infrastructure steps in the Quick Start Guide that are already covered here.

      When you reach the final configuration steps and run the Add a network service wizard, your Connection String page will look similar to the following:



      [Screenshot: Connection String page of the Add a network service wizard]



      And the Connectivity property of your gateway network service will look similar to the following:



      [Screenshot: Connectivity property of the gateway network service]
    After this step is complete, verify that two jobs in the log have completed successfully:

    • Update network service device

    • Add connection network service device

    Tip
    If you need to deploy a gateway guest cluster on a regular basis (for example, to address resource demands), you can customize the service template using the Service Template Designer. For example, you can customize the OS Configuration settings to join a specific domain, use a specific product key, or use a specific computer name configuration.

    Caution
    Do not modify the gateway service template to make the virtual machines highly available. The gateway service template intentionally leaves the Make this virtual machine highly available check box in the Advanced\Availability area unchecked. The virtual machines are configured as nodes of a guest cluster, but it’s important to not change this setting. Otherwise, during failover, the customer addresses (CA) won’t associate with the new provider address (PA) and the gateway will not function properly.

  8. Verify gateway functionality.

    Verify that there is connectivity between a test virtual machine and the hosts located on a test tenant network.

    Use the following steps to verify that your gateway and VM networks are functioning correctly.

    1. Establish a site-to-site VPN connection.

      How you connect your test tenant network will vary depending on the equipment you use to establish the VPN connection. Remote Access (which brings together DirectAccess and the Routing and Remote Access service (RRAS)) is one way to connect to your gateway. To see an example procedure using RRAS to connect to the gateway, see “Install RRAS on Contoso EDGE1 and create a site-to-site VPN connection to GatewayVM1 running on HNVHOST3” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

      Tip
      To connect other VPN devices, the connectivity requirements are similar to the Windows Azure VPN connection requirements. For more information, see About VPN Devices for Virtual Network.
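
      For a rough idea of the RRAS side, a site-to-site IKEv2 interface can be created on the tenant edge server as sketched below. The interface name, destination address, shared secret, and routed subnet are all hypothetical placeholders:

        # Install the site-to-site VPN role service (assumes the Remote Access role is present)
        Install-RemoteAccess -VpnType VpnS2S

        # Define and dial the site-to-site interface to the hosting provider gateway
        Add-VpnS2SInterface -Name "ToHoster" -Destination "131.107.0.2" -Protocol IKEv2 -AuthenticationMethod PSKOnly -SharedSecret "Passphrase1" -IPv4Subnet @("10.0.10.0/24:100")
        Connect-VpnS2SInterface -Name "ToHoster"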

    2. View the site-to-site VPN connection on your gateway.

      After you establish the VPN connection, you can use some Windows PowerShell commands and some new ping options to verify the VPN connection.

      To see an example procedure in a test environment, see “To view the S2S VPN connections on GatewayVM1” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.
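
      For example, the following commands, run on the gateway virtual machine and its Hyper-V host respectively, show the connection state and the Hyper-V Network Virtualization provider addresses. The provider address used with ping is a hypothetical value:

        # On the gateway VM: list site-to-site interfaces and their state
        Get-VpnS2SInterface | Format-List Name, Destination, ConnectionState

        # On the Hyper-V host: list provider addresses, then ping one with the -p option
        Get-NetVirtualizationProviderAddress
        ping -p 192.168.100.2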

    3. Deploy test tenant virtual machines.

      After you verify that you have a successful site-to-site connection to your gateway, you can deploy a test virtual machine and connect it to the test VM network on your hosting service provider network.

      To see an example procedure in a test environment, see “Step 2: Deploy Tenant Virtual Machines” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

    4. Verify test VM network connectivity and HNV site-to-site operation.

      After you deploy your test virtual machine, you should verify that it has network connectivity to remote resources in the tenant on-premises network over the Internet through the multi-tenant site-to-site gateway.

      To see an example procedure in a test environment, see “Verify network connectivity for the APP2 virtual machines” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.
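
      As a generic check, you can test connectivity from the test virtual machine and inspect the HNV lookup records on the host. The on-premises address is a hypothetical placeholder:

        # From the tenant test VM: reach a host on the on-premises tenant network
        Test-Connection -ComputerName 192.168.1.10

        # On the Hyper-V host: show the customer-to-provider address mappings
        Get-NetVirtualizationLookupRecord | Format-Table CustomerAddress, ProviderAddress, VirtualSubnetID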

  9. Deploy Windows Server IPAM (recommended).

    Windows Server IPAM is integrated with VMM to manage the IP address space for your customer and fabric infrastructure. For more information, see Deploying IPAM Server.

    To see an example procedure in a test environment, see “Step 6: Install and configure IPAM on HNVHOST2” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.



    After IPAM has been deployed, configure the IPAM VMM plug-in. For more information, see How to Add an IPAM Server in VMM in System Center 2012 R2.

    To see an example procedure in a test environment, see “To configure the IPAM VMM plugin on HNVHOST2” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

    After this step is complete, verify that you can view the virtualized address space in IPAM.

    To see an example procedure in a test environment, see “To use IPAM to view the virtualized address space” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

  10. Deploy a self-service tenant portal.

    A tenant self-service portal allows your tenants to create their own virtual networks and virtual machines with minimal hosting service provider involvement. Service providers can design and implement multi-tenant self-service portals that integrate the IaaS capabilities available in System Center 2012 R2. Service Provider Foundation exposes an extensible OData web service that interacts with VMM.

    Windows Azure Pack is a Microsoft self-service portal solution that integrates with VMM through SPF. It offers a website portal similar to Windows Azure, so if your tenants are also Windows Azure customers, they will already be familiar with the user interface presented in Windows Azure Pack. To demonstrate Windows Azure Pack features for this solution, an express Windows Azure Pack deployment is used, which installs the required features on a single server. If you want to deploy Windows Azure Pack in production, you should use the distributed deployment. For more information, see Windows Azure Pack installation requirements.

    1. Create the WAPPortal virtual machine.

      Review Express deployment hardware and software prerequisites and then create the WAPPortal virtual machine on your compute cluster.

    2. Install software prerequisites.

      Follow the procedure in Install software prerequisites.

    3. Install an express deployment of Windows Azure Pack.

      Follow the procedure in Install an express deployment of Windows Azure Pack.

    4. Review the topics under Provision Virtual Machine Clouds, and then review the guidance in Requirements for using VM Clouds.

    5. Using VMM, create a cloud.

      For example, you could use the Create Cloud Wizard to create a cloud with the following properties:



       

      • General - Name: Gold
      • Resources - Host group: Compute
      • Logical Networks - Network Virtualization
      • Port Classifications - High Bandwidth
      • Storage - Remote Storage
      • Library - VMM-Lib (a share located on the scale-out file server)
      • Capacity - Cloud capacity: set to your desired capacity

      For more information about creating a cloud in VMM, see How to Create a Private Cloud from Host Groups.
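
      If you prefer to script it, a minimal sketch with the VMM cmdlets follows; the cloud and host group names match the sample values in the list above:

        # Create a cloud scoped to the Compute host group
        $hostGroup = Get-SCVMHostGroup -Name "Compute"
        New-SCCloud -Name "Gold" -VMHostGroup $hostGroup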

    6. Install Service Provider Foundation on a separate virtual machine located on the management and infrastructure cluster using the procedure in How to Install Service Provider Foundation for System Center 2012 SP1.

    7. Configure SPF for use with Windows Azure Pack, as described in Configuring Portals for Service Provider Foundation in the “Configuring Windows Azure Pack for Windows Server” section.

      After you have completed the procedure to register the SPF endpoint for virtual machine clouds, you should see the cloud that you created in VMM on the Windows Azure Pack administrator portal.

    8. From the Windows Azure Pack administrator portal, author a plan that you can use to test with. For example, you could author a plan called Gold Plan with the following properties:

       

      • Name - Gold Plan
      • Services - Virtual Machine Clouds

      After the plan is created, click it to continue the configuration. Click the Virtual Machine Clouds service and configure the VMM Management Server, Virtual Machine Cloud, and usage limits. Click Save to complete the virtual machine clouds configuration. Click the back button and finally click Change Access to make the plan public.

    9. Create a Windows Azure Pack Gallery Resource. Tenants can use the Gallery to place virtual machines on their virtual networks. For more information, see Downloading and Installing Windows Azure Pack Gallery Resource.

    10. From the Windows Azure Pack tenant portal logon page, click Sign Up to sign up a test tenant account.

      Proceed through the tenant portal, add a subscription and choose a plan.

    11. After the account has been created, create a new virtual network for the tenant using Custom Create.

      When you’re done creating the network, verify that it exists in VMM under VM Networks.

    12. Establish a site-to-site VPN connection with the test tenant like you did previously when you created a manual test virtual network.

    13. Create a new virtual machine role using the Gallery that you created previously.

    14. After the test virtual machine has been created, verify that it has connectivity back to the tenant network through the site-to-site VPN tunnel.
