SCC cluster tips
Applies to: Forefront Security for Exchange Server
Installing on clusters can be complicated by the default naming of the disk resources associated with each CMS in Cluster Administrator. Be aware of any changes to the disk resource names, because the installation process uses the disk resource name to derive the drive letter for the installation. During installation, you are prompted for both a shared drive and a cluster folder. Given the assumed configuration below, the results of the various combinations are shown in the tables that follow.
Assume the following configuration in Cluster Administrator:
Disk resource name | Physical path | Type |
---|---|---|
Disk E: | E: | Shared drive |
Diskf | F: | Shared drive |
Disk G: | G: | Shared drive |
Mtptdr | F:\mpd | Mount point |
Gmpd | G:\mpd2 | Mount point |
For shared drive installs:
Disk resource name for shared drive | Cluster folder | Path Forefront uses |
---|---|---|
E: | Forefront Cluster | E:\Forefront Cluster |
Diskf | Forefront Cluster | F:\Forefront Cluster |
E: | Test\Forefront Cluster | E:\Test\Forefront Cluster |
F:\mtpdr | Forefront Cluster | X – no match in resource names |
F:\mpd | Forefront Cluster | X – no match in resource names |
E:\test | Forefront Cluster | X – no match in resource names |
F: | Forefront Cluster | X – no match in resource names |
For mount point drive installs:
Disk resource name for shared drive | Cluster folder | Path Forefront uses |
---|---|---|
G: | mpd2\Forefront Cluster | G:\mpd2\Forefront Cluster |
Diskf | mpd\Forefront Cluster | F:\mpd\Forefront Cluster |
Mpd | Forefront Cluster | X – no drive associated with mount point resource |
E: | mpd\Forefront Cluster | X – installs, but not to a mount point; it is installed to E:\mpd\Forefront Cluster |
G: | gmpd\Forefront Cluster | X – installs, but not to a mount point; it is installed to G:\gmpd\Forefront Cluster |
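The results in the two tables are consistent with a simple lookup: the value you enter for the shared drive is matched (apparently case-insensitively) against the disk resource names, and the drive letter of the matched resource's physical path supplies the install root. The sketch below illustrates that inferred behavior; `DISK_RESOURCES`, `resolve_install_path`, and the substring-matching rule are assumptions drawn from the tables, not Forefront's actual implementation.

```python
# Illustrative sketch only: the resource table, function name, and matching
# rule below are inferred from the install results, not documented behavior.

# Disk resources as listed in Cluster Administrator: name -> physical path
DISK_RESOURCES = {
    "Disk E:": "E:",
    "Diskf": "F:",
    "Disk G:": "G:",
    "Mtptdr": "F:\\mpd",   # mount point on F:
    "Gmpd": "G:\\mpd2",    # mount point on G:
}

def resolve_install_path(entered_drive, cluster_folder):
    """Return the path Forefront would use, or None for the 'X' cases."""
    for name, physical_path in DISK_RESOURCES.items():
        # Assumed rule: the entered value must appear in a resource name.
        if entered_drive.lower() in name.lower():
            if len(physical_path) > 2:
                # Matched a mount point resource, which has no drive
                # letter of its own: "X - no drive associated".
                return None
            # Drive letter comes from the matched resource's physical path.
            return physical_path + "\\" + cluster_folder
    return None  # "X - no match in resource names"
```

Note how this reproduces the mount point rows: naming the hosting drive's resource and putting the mount folder in the cluster folder succeeds (`resolve_install_path("Diskf", "mpd\\Forefront Cluster")` gives `F:\mpd\Forefront Cluster`), while naming the mount point resource itself fails.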
Additional considerations
There must be at least one passive node; Forefront supports any number of active nodes and one or more passive nodes.
Each node can run only one Clustered Mailbox Server (CMS) at a time.
Failovers must be to a passive node.
All configuration data is stored on the shared drive, so the active and passive nodes use the same settings.