There are some best practices you should follow when planning and deploying a virtual Exchange infrastructure.
Server virtualization has become the norm, but virtualization planning is still something of an art form. This is especially true when you’re planning to virtualize Exchange Server, because you have to deploy Exchange with both performance and fault tolerance in mind. Adding to the challenge is the fact that most Exchange Server deployments span multiple servers.
Note that all the recommendations presented here are based on Hyper-V as the virtualization platform. That isn’t to say you can’t use other virtualization platforms. The official Microsoft support policy states you can run Exchange on “any third-party hypervisor that has been validated under the Windows Server Virtualization Validation Program.” Presently, Citrix Systems Inc., VMware Inc. and a number of other virtualization vendors participate in this program.
When planning and deploying a virtual Exchange Server infrastructure, it’s important to allocate enough resources to the parent partition (which runs the management OS). Failing to reserve adequate resources can impact all your virtual servers running on a host server.
Microsoft recommends reserving at least 1GB of memory for the parent partition and the management OS. If you’re running Hyper-V on top of a full Windows Server 2008 or Windows Server 2008 R2 installation, you should consider reserving 2GB of memory for the management OS to ensure the OS never runs low on resources.
Microsoft also recommends dedicating a network interface controller (NIC) for management purposes. If you plan on using Live Migration, you should dedicate a NIC to the live migration process. It’s worth noting that Microsoft doesn’t support live migration for the Mailbox Server role if the mailbox server is a member of a Database Availability Group (DAG).
Exchange Server and Hyper-V are both quite flexible with regard to storage provisioning. Even so, there are some Microsoft-recommended best practices regarding configuring storage for virtualized Exchange Servers.
Hyper-V creates dynamically expanding (thinly provisioned) virtual hard disks (VHDs) by default. This means that regardless of how large you make your VHD, Hyper-V initially creates a VHD file that’s less than a gigabyte in size. The file then expands dynamically as data is added.
Although thin provisioning can help reduce disk space consumption, thinly provisioned virtual hard disks don’t perform as well as fixed-size disks. That being the case, Microsoft recommends you use only fixed-size disks for virtualized Exchange servers.
The actual disk configuration you should use will vary depending on server role, but there are some general guidelines that hold true for all server roles. For starters, Microsoft recommends you dedicate a physical disk (LUN) to the management OS. Doing this ensures the management OS will not be competing for disk resources with the virtual machines (VMs).
Microsoft also recommends providing a dedicated LUN to the OS on each of your virtual servers. The VHD containing the guest OS needs to be large enough to store Windows Server and the page file. The page file size is usually the same as the amount of RAM allocated to the VM.
As such, the Microsoft recommendation is to give the guest OS 15GB, plus space equivalent to the amount of memory allocated to the VM. Therefore, a virtual Exchange Server with 4GB of RAM would theoretically need about 19GB of disk space for the volume containing the OS.
Before you begin actually allocating disk resources, there are a couple of things to consider. You’ll notice I haven’t mentioned the space consumed by the Exchange Server binaries. The 15GB requirement takes this into account. Windows Server 2008 requires a minimum of 10GB of disk space, and the default multirole Exchange Server installation initially consumes roughly 2.5GB of disk space (a figure that can change as you move things around).
It’s still a good idea to make your guest OS volumes a bit larger, for two reasons. First, the 15GB recommendation only fulfills Windows Server 2008’s minimum disk requirement; the system requirements for Windows Server 2008 specifically recommend 40GB or more. Second, machines with more than 16GB of RAM will require additional disk space for paging, hibernation, and dump files.
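The sizing rule above is easy to express as a quick back-of-the-envelope calculation. This is a minimal sketch in Python, not an official Microsoft tool; the function name is mine, and the 15GB base figure and page-file-equals-RAM rule come straight from the recommendation in this article.

```python
# Rough guest OS volume sizing per the guidance above:
# base install (15GB, including Exchange binaries) + page file (~= RAM).
def guest_os_vhd_gb(ram_gb, base_gb=15):
    """Estimate the guest OS volume size, in GB."""
    return base_gb + ram_gb

# A virtual Exchange server with 4GB of RAM needs about 19GB:
print(guest_os_vhd_gb(4))  # 19
```

Remember this is a floor, not a target: the article goes on to recommend sizing the volume larger to leave room for future memory expansion.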
Virtual servers are very flexible in terms of hardware allocation. You can add memory to a virtual server on a whim. That being the case, it’s a good idea to start with a larger VHD than you really need so you can easily accommodate future memory expansion. These recommendations apply to the VHD containing the guest machine OS and the Exchange binaries. Microsoft also recommends creating one or more additional VHDs for data storage.
For example, if you were creating a hub transport server, you’d want to create a secondary VHD to accommodate the message queues. For a virtual mailbox server, you should create two extra VHDs—one for the mailbox databases and one for the transaction log files.
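The per-role layout described above can be summarized in a small table. This is an illustrative sketch only; the role names match the article, but the data structure is my own, not an official Microsoft schema.

```python
# Extra data VHDs recommended per Exchange server role, beyond the
# VHD that holds the guest OS and the Exchange binaries.
DATA_VHDS = {
    "Hub Transport": ["message queues"],
    "Mailbox": ["mailbox databases", "transaction logs"],
}

for role, vhds in DATA_VHDS.items():
    print(f"{role}: {len(vhds)} extra VHD(s) -> {', '.join(vhds)}")
```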
Microsoft doesn’t seem to emphasize the actual storage hardware, except with regard to mailbox servers. For these, Microsoft recommends using virtual SCSI. The preferred method involves using SCSI pass-through, but fixed disks are also acceptable.
Exchange also supports iSCSI storage for mailbox servers. If you choose iSCSI storage, Microsoft recommends you configure the iSCSI initiator within the management OS, instead of on the guest machines. Microsoft supports using the iSCSI initiator within a guest machine, but doing so eliminates jumbo frame support. It also results in lower performance than you’d get if you ran the iSCSI initiator in the parent partition.
When you set up the iSCSI initiator in the parent partition, you can present the attached iSCSI targets to the guest OSes, which should be configured to treat those targets as SCSI pass-through disks.
Hyper-V and Exchange 2010 each offer their own fault-tolerance mechanisms: Hyper-V supports host-based failover clusters, while Exchange 2010 provides DAGs. These two fault-tolerant solutions work in completely different ways, so it’s important to choose the one that will work best for your organization.
If you’re unfamiliar with DAGs, these are an Exchange 2010 mechanism in which you can combine up to 16 mailbox servers into a single group. You can replicate individual mailbox databases to any of the DAG members, thereby providing protection in the event of a failure.
In contrast, Hyper-V host-based clustering works by linking multiple Hyper-V servers to a cluster shared volume. In the event of a host server failure, the VMs residing on the failed host can failover to an alternate host.
So which fault-tolerant solution should you use? The first thing to consider is the level of protection that each provides. DAGs offer Exchange Native Data Protection, which provides automatic failover control at the database level. As such, DAGs can protect against server, network and database failures.
Hyper-V Host-Based Failover Clustering operates at the virtualization host level. It’s not an Exchange-aware solution, so it can’t protect you against a database failure. It can only protect against a server or network failure. Its dependence on a cluster shared volume means that under the right circumstances, the shared storage could become a single point of failure.
While those factors may favor the use of DAGs, there are other considerations. One is that DAGs can only accommodate mailbox servers; they offer no protection for any other Exchange Server role. That isn’t to say you can’t achieve a degree of Exchange-level fault tolerance for the other roles. Exchange will automatically use any available redundant Hub Transport servers, and you can load balance an Edge Transport Server or a Client Access Server (CAS) using a DNS round robin setup, but there isn’t really a good Exchange-level fault-tolerance solution for these roles.
Given the vulnerability of the non-mailbox server roles, your best option might be twofold: create a DAG for mailbox servers, and use Host-Based Failover Clustering to protect the other Exchange Server roles. However, achieving this type of protection isn’t quite so straightforward.
You’ll encounter some restrictions when you start virtualizing DAG members. For one thing, you can’t host a virtualized DAG member on a Hyper-V server that’s a member of a Host-Based Failover Cluster. Exchange won’t stop you from doing this, but it isn’t a supported configuration. In fact, Microsoft discourages mixing DAGs and Host-Based Failover Clusters.
The other thing is that you really can’t get away with putting multiple DAG members onto a single host server. Doing so would completely undermine the protection the DAG provides. If the host server were to fail, every mailbox server on the host would also fail. In this configuration, the host server becomes a single point of failure that could potentially force the entire DAG offline.
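A simple planning check can catch this mistake before deployment. The sketch below is a hypothetical helper, assuming you track which Hyper-V host runs each virtualized DAG member; it flags any host carrying more than one member, since that host would become a single point of failure for the DAG.

```python
# Flag Hyper-V hosts that carry more than one DAG member.
from collections import Counter

def hosts_with_multiple_dag_members(placement):
    """placement maps DAG member name -> Hyper-V host name."""
    counts = Counter(placement.values())
    return sorted(host for host, n in counts.items() if n > 1)

# Illustrative placement: MBX1 and MBX3 share a host, which is the
# configuration the article warns against.
placement = {"MBX1": "HV1", "MBX2": "HV2", "MBX3": "HV1"}
print(hosts_with_multiple_dag_members(placement))  # ['HV1']
```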
There are no firm Microsoft guidelines regarding the arrangement of virtualized Exchange Servers to provide the best fault tolerance, but there are several alternatives. Try to create a Host-Based Failover Cluster and use it to host your Edge Transport Server and CAS. If you need to provide load balancing to these server roles, consider creating additional Host-Based Failover Clusters and hosting an Edge Transport Server and a CAS in each cluster. You can use a combination of DNS round robin and redundant MX records to balance the load among the available virtual servers.
You might have noticed no mention of Unified Messaging. Right now, Microsoft doesn’t support virtualized Unified Messaging Servers. However, Exchange 2010 SP2 will add this support.
You might also consider creating non-clustered Hyper-V Servers. Each of these non-clustered servers should host a mailbox server and a Hub Transport Server. Make each mailbox server a DAG member and create at least two virtualized Hub Transport Servers for each Active Directory site.
With this configuration, the DAGs will protect against mailbox server or individual mailbox database failure. The redundant Hub Transport Servers provide transport-level redundancy.
The one thing missing from this architecture is a public folder server. For some time, Microsoft has been saying that public folders are going away, so if you can, this would be a great time to begin phasing them out. If you must continue using public folders, understand that a DAG can’t protect public folder databases. The only way to provide fault tolerance for public folder servers is to replicate the public folders to multiple mailbox servers.
As you can see, there’s a lot of planning that goes into constructing a virtualized Exchange infrastructure, with your primary considerations being hardware allocation and fault tolerance. Keeping these factors in mind will help guide you to developing a solid, reliable infrastructure.