Microsoft System Center Data Protection Manager 2012: Protect Your Data
Centralized management and a scoped console simplify administration and troubleshooting duties for running backups.
By bringing the System Center family closer together, Microsoft is reinforcing the message that the System Center suite is the best way to monitor, manage, automate and protect its workloads. One important part of the suite is System Center Data Protection Manager 2012 (DPM 2012).
While this member of the System Center family hasn’t received as thorough a makeover as others, such as System Center Virtual Machine Manager 2012, there are some valuable improvements in this version. There are new centralized management capabilities that will make life much easier for backup administrators. The scoped DPM 2012 console and the manner in which role-based access control (RBAC) is implemented are other notable improvements.
The installation process and requirements for DPM 2012 are similar to the 2010 edition. You’ll need a SQL Server database; SQL Server 2008 R2 is supplied with the installer, but larger environments will likely opt for a central SQL Server installation. DPM 2012 requires Windows Server 2008 with SP2 (x64 only) or Windows Server 2008 R2, with or without SP1. If you tried out the beta version, you can upgrade to the release candidate (RC), which you can in turn upgrade to the release to manufacturing (RTM) version.
You can have several DPM 2012 servers sharing a SQL Server. Each consumes about 2.5GB of memory, so plan your SQL Servers accordingly. The ability for multiple DPM servers to share a tape library carries over from DPM 2010, but you can’t mix versions and have both DPM 2010 and DPM 2012 servers connected to the same library.
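The memory math above is worth sketching out when you size a shared SQL Server. A minimal sketch in Python, assuming the ~2.5GB-per-DPM-server figure from this article and an illustrative headroom factor of my own:

```python
# Rough capacity check for a shared SQL Server hosting several DPM 2012
# databases. The per-server figure (~2.5 GB) comes from the article; the
# headroom factor is an illustrative assumption, not official guidance.

GB_PER_DPM_SERVER = 2.5  # approximate memory each DPM 2012 database consumes

def sql_memory_needed(dpm_server_count, headroom_factor=1.2):
    """Return the memory (in GB) to plan for on the shared SQL Server."""
    return dpm_server_count * GB_PER_DPM_SERVER * headroom_factor

# Four DPM servers sharing one SQL Server: plan for roughly 12 GB.
print(sql_memory_needed(4))  # 12.0
```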
The console for DPM 2012 has received a complete overhaul. It now follows the other System Center solutions with a “wunderbar” on the left instead of tabs at the top (see Figure 1). There’s also a context-sensitive Ribbon at the top.
Figure 1 Getting used to the new console is easy—just remember the “tabs” are on the left instead of at the top.
One of the challenges in large DPM 2010 environments is that each DPM server must essentially be managed separately. You can exert a certain degree of central control through Windows PowerShell scripts. The DPM team listened to requests for improvements to this situation, but instead of providing a separate central console for DPM 2012, they took it a step further and integrated with System Center Operations Manager. This makes perfect sense and is another step toward the “single pane of glass” for all management (see Figure 2).
Figure 2 Setting up centralized management on your System Center Operations Manager server and workstations is easy.
Because Operations Manager integrates with third-party monitoring and ticketing systems, DPM 2012 alerts will surface in those systems as well. The DPM 2012 beta only supported Operations Manager 2007 R2, whereas the RC version only supports Operations Manager 2012 RC. There are some indications that both versions will work with the RTM version.
In another move likely to please those managing large DPM environments, the centralized management extends to DPM 2010 servers as well (see Figure 3). To take advantage of this integration, you’ll need Operations Manager 2012 RC installed, along with a hotfix for your DPM 2010 servers. Install the Central Console on the Operations Manager server (it’s an option on the DPM 2012 installation splash screen). Finally, import the new management packs into Operations Manager (they’re provided with the installation of the DPM 2012 RC). For more in-depth information, see the “Installing Central Console” TechNet Library page. The centralized console has been tested with 50,000 data sources across 100 DPM servers.
Figure 3 The Central Console will simplify troubleshooting in large Data Protection Manager 2012 environments.
At its most basic level, the integration provides a consolidated alert view across all your DPM 2012 servers. Raised alerts are grouped by disk or tape, data source, protection group and replica volumes, which simplifies troubleshooting the most important areas first. This triaging is further aided by the manner in which the console separates issues that only affect one data source from problems that impact multiple data sources. Alerts are also separated into backup failure or infrastructure problems. This helps you put the right people on the right issues.
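The triage logic described above can be sketched as a simple grouping step: alerts are split by scope (one data source versus many) and by type (backup failure versus infrastructure), so the right team picks up the right tickets. The alert structure and category names here are illustrative assumptions, not the actual DPM 2012 schema:

```python
# Minimal sketch of alert triage: bucket alerts by (scope, type) so
# single-data-source backup failures can be routed separately from
# infrastructure problems that hit multiple data sources.
from collections import defaultdict

def triage(alerts):
    """Group alerts into buckets keyed by (scope, alert_type)."""
    buckets = defaultdict(list)
    for alert in alerts:
        scope = "single" if len(alert["datasources"]) == 1 else "multiple"
        buckets[(scope, alert["type"])].append(alert["id"])
    return buckets

alerts = [
    {"id": 1, "type": "backup_failure", "datasources": ["SQL-DB1"]},
    {"id": 2, "type": "infrastructure", "datasources": ["SQL-DB1", "FS-Share"]},
]
print(triage(alerts)[("multiple", "infrastructure")])  # [2]
```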
The context-sensitive Actions pane on the right in the console offers DPM 2012 tasks appropriate to the selected objects. These tasks also offer remote recovery and the ability to automatically and remotely implement recommended corrective actions. You can also configure alerts to conform to your service-level agreements (SLAs). If you guarantee a backup every five hours, but actually back up a particular data source every two hours, the DPM 2012 server will log an error for each failed backup. The centralized console will only warn you when the SLA is breached.
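The SLA behavior above can be sketched as follows: every failed job is still logged on the DPM server, but the central console only raises a warning once the time since the last successful backup exceeds the guaranteed window. Times are in hours for simplicity, and the function name is an illustrative assumption:

```python
# Sketch of SLA-based alerting: local failures are logged per job, but the
# central console warns only when the SLA window has actually been missed.

def sla_breached(last_success_hours_ago, sla_window_hours):
    """True only when the guaranteed backup window has been missed."""
    return last_success_hours_ago > sla_window_hours

# Backups run every 2 hours against a 5-hour SLA: a single failure at the
# 2-hour mark is logged on the DPM server but doesn't breach the SLA yet.
print(sla_breached(2, 5))  # False
print(sla_breached(6, 5))  # True
```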
As if centralizing server and alert management wasn’t enough relief, DPM 2012 also offers the scoped console, an additional time-saving troubleshooting feature. If you’re trying to resolve a particular alert in Operations Manager, clicking the Troubleshoot button opens a DPM 2012 console that only shows you the DPM 2012 server, data sources, agents and backup jobs involved in the issue. At the top of the console, there’s the ticket number, the name of the DPM 2012 server and the alert on which you’re working. When you think you’ve fixed the underlying cause, you can run a test backup prior to resuming the entire job.
There’s also a Remote Administration feature. This lets you install the DPM 2012 console on workstations and connect the console to any remote DPM 2012 server where you have permissions, negating the need for Remote Desktop Protocol sessions.
While most machines in an enterprise are joined to a domain, there are often situations where you have to protect computers in untrusted domains or workgroup situations (perimeter network). DPM 2010 protected these workloads with local accounts and Windows NT LAN Manager (NTLM) authentication. Due to weaknesses in NTLM and the hassle of local account management and auditing, this wasn’t a great solution.
DPM 2012 brings certificate-based authentication to bear on the following workloads: File Server, Hyper-V and SQL Server, in both standalone and clustered configurations. You can also use certificate-based authentication on a secondary DPM 2012 server for disaster recovery, protecting data sources in a non-trusted domain if the primary DPM 2012 server fails; the two DPM 2012 servers need to be in the same domain or in trusted domains. The notable data sources missing from this lineup are Exchange, SharePoint and Bare-Metal Recovery/System State.
You’ll need an internal certificate authority for the certificates, as they can’t be self-signed. There are several steps in getting it all up and running. First, generate a certificate for each DPM 2012 server. Then import that certificate to each server and enable certificate-based protection. Each server you want to protect must also have the DPM 2012 agent installed. When a certificate is about to expire, DPM 2012 will warn you 30 days in advance. It will also issue a critical warning the day before the certificate expires.
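The expiry-warning thresholds above can be sketched as a small severity check: a warning 30 days out, escalating to critical the day before the certificate lapses. The severity labels and function name are illustrative assumptions:

```python
# Sketch of DPM 2012's certificate-expiry alerting thresholds as described
# in the article: warn at 30 days, go critical the day before expiry.
from datetime import date, timedelta

def cert_alert(expiry, today):
    """Return an alert severity for a certificate, or None if it's fine."""
    days_left = (expiry - today).days
    if days_left <= 1:
        return "critical"
    if days_left <= 30:
        return "warning"
    return None

expiry = date(2012, 6, 30)
print(cert_alert(expiry, expiry - timedelta(days=45)))  # None
print(cert_alert(expiry, expiry - timedelta(days=10)))  # warning
print(cert_alert(expiry, expiry - timedelta(days=1)))   # critical
```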
Role-Based Access Control
Another architectural change designed to make DPM 2012 more enterprise-friendly is RBAC. RBAC here covers actions, so you can assign someone the right to recover data, but unlike RBAC in Exchange, for instance, you can’t then further limit which data sources they can recover.
There are seven roles provided: Read-Only User, Reporting Operator, Recovery Operator, Tape Operator, Tape Admin and full DPM 2012 Admin. There are also two support roles: Tier-1 Support (help desk) can only resume backups and take the recommended automatic action for issues, while Tier-2 Support (escalation) can also run backups on demand and enable or disable agents.
All roles work through the Operations Manager console and the scoped DPM 2012 consoles launched from within Operations Manager. Roles don’t apply in the native DPM 2012 console on the DPM 2012 server, because RBAC is built on the Operations Manager role-based system.
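The Tier-1/Tier-2 split can be captured as a simple permission map. Only the two support-role action lists come from this article; the structure itself, and the assumption that Tier-2 includes Tier-1’s actions, are illustrative, not the actual DPM 2012 model:

```python
# Sketch of the support-role permissions described in the article.
# Action names are invented stand-ins for the real DPM 2012 operations.

ROLE_ACTIONS = {
    "Tier-1 Support": {"resume_backup", "run_recommended_action"},
    # Assumption: Tier-2 (escalation) also inherits Tier-1's actions.
    "Tier-2 Support": {"resume_backup", "run_recommended_action",
                       "run_backup_on_demand", "enable_disable_agent"},
}

def can(role, action):
    """True if the given role is allowed to perform the action."""
    return action in ROLE_ACTIONS.get(role, set())

print(can("Tier-1 Support", "run_backup_on_demand"))  # False
print(can("Tier-2 Support", "run_backup_on_demand"))  # True
```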
There’s quite a bit of overhead in DPM 2010 when it’s protecting virtual machines (VMs) on standalone Hyper-V servers. It has to read entire Virtual Hard Disk (VHD) files each time to ascertain which blocks have changed. DPM 2012 uses Changed Block Tracking, which will transfer only changed blocks. This won’t just improve backup performance—it will also lower the overall load on the server.
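A toy illustration of why Changed Block Tracking helps: instead of reading the whole VHD to find differences, the backup transfers only the blocks flagged as changed since the last recovery point. The tracking structure here is an invented stand-in for the real Hyper-V mechanism:

```python
# Sketch of incremental transfer with changed-block tracking: only blocks
# marked dirty since the last backup go over the wire, instead of the
# entire VHD being read and compared.

def incremental_transfer(vhd_blocks, changed_indices):
    """Return only the blocks that need to be sent to the DPM server."""
    return {i: vhd_blocks[i] for i in changed_indices}

vhd = ["block-a", "block-b", "block-c", "block-d"]
changed = {1, 3}  # indices tracked as modified since the last backup
payload = incremental_transfer(vhd, changed)
print(sorted(payload))                               # [1, 3]
print(len(payload), "of", len(vhd), "blocks sent")   # 2 of 4 blocks sent
```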
In virtualized environments, the choice is whether to back up from inside the guest OS or from the host side. The former provides granular restore capabilities for files and folders; a host-based backup generally only provides recovery of an entire VM.
Besides simplified management, host-level backup also brings cheaper agent licensing, as you don’t have to pay for an agent for each VM. New in DPM 2010 was Item Level Restore (ILR), which let you restore files or folders from a host-based backup, but only if the DPM server itself was running on a physical server.
In DPM 2012, ILR is available from host backups even if DPM 2012 itself runs in a VM. In both versions, this feature doesn’t extend to transaction-based workloads such as Exchange, SQL Server or SharePoint; to protect these with granular restore capabilities, you need a DPM 2012 agent in the guest. For ILR to be available, you need to install the Hyper-V role for DPM 2010 running on either Windows Server 2008 or 2008 R2, as well as for DPM 2012 on Windows Server 2008 (all on physical hardware). If you’re running DPM 2012 on Windows Server 2008 R2, you don’t need to enable the Hyper-V role.
ILR is also available for SharePoint protection in DPM 2010. The entire content database must be recovered to a staging location before an item can be recovered, though, making this a time-consuming task. DPM 2012 takes a different approach: it attaches a SQL Server instance remotely to the data on a recovery point volume and recovers items directly, which vastly improves performance. This approach also works for SharePoint data stored in SQL Server FILESTREAM databases.
SharePoint farm-level protection is also available. Any new sites added to the farm are automatically protected by DPM 2012. Sadly, this feature doesn’t extend to Hyper-V protection. In DPM 2010, there’s a Windows PowerShell script you can run to protect newly created VMs. It would have been nice to have that built-in. Scalability limits for DPM 2012 haven’t changed from DPM 2010. There’s still a maximum of 40TB for recovery point volumes and 80TB for replica volumes, for a total of 120TB.
While DPM 2010 offers tape co-location to better use available space, there’s no real control over which data sources are stored together. DPM 2012 introduces protection group sets (see Figure 4). For each set, you can assign a Write Period, the time a tape is available for writing new backups, and an Expiration Tolerance, which controls how long recovery points remain on the tape before it’s marked as expired.
Figure 4 Protection group sets form the basis of data protection in Data Protection Manager 2012.
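The tape lifecycle implied by these two settings can be sketched as a small state check: a tape accepts new backups during its Write Period, then holds its recovery points for the Expiration Tolerance before being marked expired. The state names, day counts and the exact transition rules are illustrative assumptions, not DPM 2012’s actual semantics:

```python
# Sketch of a tape's lifecycle under a Write Period plus Expiration
# Tolerance, as described in the article. All specifics are illustrative.

def tape_state(days_since_first_write, write_period_days, tolerance_days):
    """Classify a tape as writable, retained or expired."""
    if days_since_first_write <= write_period_days:
        return "writable"
    if days_since_first_write <= write_period_days + tolerance_days:
        return "retained"
    return "expired"

# A 7-day Write Period with a 14-day Expiration Tolerance:
print(tape_state(5, 7, 14))   # writable
print(tape_state(10, 7, 14))  # retained
print(tape_state(30, 7, 14))  # expired
```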
When a backup for a protection group started multiple tape jobs and there was an issue with one job, DPM 2010 insisted the entire backup for that protection group be restarted. In DPM 2012, you only have to restart the affected job.
The lack of single-item restore for Exchange backups is a bit disappointing, though it’s more the responsibility of the Exchange team to provide a supported way to do this. There’s also no single-item restore from Active Directory backups, even though DPM 2012 recognizes these as a data source.
These minor points aside, DPM 2012 is a worthy successor to DPM 2010. The new enterprise-friendly features like centralized management, certificate-based protection and RBAC—as well as troubleshooting enhancements like the scoped console and item-level recovery—are welcome improvements. They will cement the reputation of DPM 2012 as the best backup product for Microsoft workloads.
Paul Schnackenburg has been working in IT since the days of 286 computers. He works part-time as an IT teacher and runs his own business, Expert IT Solutions, on the Sunshine Coast of Australia. He has MCSE, MCT, MCTS and MCITP certifications and specializes in Windows Server, Hyper-V and Exchange solutions for businesses. Reach him at firstname.lastname@example.org and follow his blog at TellITasITis.com.au.