DPM operations that affect performance

 

Updated: May 13, 2016

Applies To: System Center 2012 SP1 - Data Protection Manager, System Center 2012 - Data Protection Manager, System Center 2012 R2 Data Protection Manager

A number of System Center 2012 – Data Protection Manager (DPM) data transfer operations affect performance, including:

  • Creating replicas —This occurs once for each protection group member.

  • Change tracking —This is a continuous process on each protected computer.

  • Synchronizing data —This occurs on a regular schedule.

  • Running consistency checks —This occurs when a replica becomes inconsistent.

  • Running express full backups —This occurs on a regular schedule.

  • Backing up to tape —This occurs on a regular schedule.

  • Running DPM processes —These are processes on the DPM server that consume memory and CPU.

Creating replicas

A replica is a complete copy of the protected data on a single volume, database, or storage group. The DPM protection agent on the protected computer sends the data selected for protection to the DPM server. A replica of each member in the protection group is created. Creating an initial replica is a resource-intensive operation with a significant impact on network resources.

The initial data replication can be performed offline, or over the network. For more information, see the section about replica creation in Plan for protection groups. If you’re performing replication over the network, note the following:

  • Replication over the network is limited by the speed of the network connection between the DPM server and the protected computers. That is, the amount of time that it takes to transfer a 1-gigabyte (GB) volume from a protected computer to the DPM server will be determined by the amount of data per second that the network can transmit.

  • On an extremely fast network, such as a gigabit connection, the speed of replica creation will be determined by the disk speed of the DPM server or that of the protected computer, whichever is slower.

  • You can reduce the network performance overhead of replication by using bandwidth throttling and compression.

  • Try to ensure that the network is consistently available during replication. If the network goes down during replication and remains down for longer than five minutes, the replica status will be inconsistent.

The following table shows a sampling of the time it takes to transmit data at different network speeds. Times are in hours, unless otherwise specified.

Data transmission times

Data size | 1 Gbps* | 100 Mbps | 32 Mbps | 8 Mbps | 2 Mbps | 512 Kbps
1 GB      | < 1 minute   | < 1 hour | < 1 | < 1 | 1.5 | 6
50 GB     | < 10 minutes | 1.5      | 5   | 18  | 71  | 284
200 GB    | < 36 minutes | 6        | 18  | 71  | 284 | 1137
500 GB    | < 1.5 hours  | 15       | 45  | 178 | 711 | 2844

* Assuming disk speed isn’t a bottleneck. Column headings are network link speeds in bits per second.

Note

Typically, the time to complete initial replica (IR) creation can be calculated as follows:

IR hours = ((data size in MB) / (0.8 x network speed in MB/s)) / 3600

Note 1: Convert network speed from bits to bytes by dividing by 8 (for example, 8 Mbps is 1 MB/s).

Note 2: The network speed is multiplied by 0.8 because the maximum network efficiency is approximately 80 percent.
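As a rough illustration, the formula and the two notes above can be combined into a short Python function. This is only an estimate of transfer time, not a DPM tool, and the function name is invented for this sketch:

```python
def initial_replica_hours(data_size_gb, network_speed_mbps):
    """Estimate initial replica (IR) creation time in hours.

    Converts the link speed from megabits to megabytes per second
    (divide by 8, per Note 1) and assumes ~80% effective network
    efficiency (per Note 2).
    """
    data_size_mb = data_size_gb * 1024        # GB -> MB
    speed_mb_per_s = network_speed_mbps / 8   # megabits/s -> megabytes/s
    seconds = data_size_mb / (0.8 * speed_mb_per_s)
    return seconds / 3600

# 50 GB over an 8 Mbps link: roughly 18 hours, matching the table above.
print(round(initial_replica_hours(50, 8)))  # -> 18
```

The same function reproduces the other table entries; for example, 200 GB over 100 Mbps comes out to about 6 hours.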

Change tracking

After the replica is created, the agent on the protected computer begins tracking all changes to protected data. Changes to files are passed through a filter before being written to the volume. This process is similar to the filtering of files through antivirus software, but the performance load of DPM change tracking is lower than that of antivirus software.

Synchronizing data

Synchronization is the process by which DPM transfers data changes from the protected computer to the DPM server and then applies the changes to the replica of the protected data. Note the following:

  • For a file volume or share, the protection agent on the protected computer tracks changes to blocks, using the volume filter and the change journal that is part of the operating system to determine whether any protected files were modified. DPM also uses the volume filter and change journal to track the creation of new files and the deletion or renaming of protected files.

  • For application data, after the replica is created, changes to volume blocks belonging to application files are tracked by the volume filter.

  • How changes are transferred to the DPM server depends on the application and the type of synchronization. For protected Microsoft Exchange data, synchronization transfers an incremental Volume Shadow Copy Service (VSS) snapshot. For protected Microsoft SQL Server data, synchronization transfers a transaction log backup.

  • DPM relies on synchronization to update replicas with the protected data. Each synchronization job consumes network resources and can therefore affect network performance.

  • The impact of synchronization on network performance can be reduced by using network bandwidth usage throttling and compression.
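As a rough sketch of how throttling and compression interact during synchronization, the following hypothetical calculation estimates a synchronization window. The throttle and compression values are illustrative assumptions, not DPM defaults:

```python
def sync_minutes(changed_data_mb, throttle_mbps, compression_ratio=1.0):
    """Estimate synchronization time under a bandwidth throttle.

    compression_ratio is compressed size / original size (1.0 = no
    compression). Illustrative only; actual DPM behavior also depends
    on workload and protocol overhead.
    """
    effective_mb = changed_data_mb * compression_ratio
    speed_mb_per_s = throttle_mbps / 8  # megabits/s -> megabytes/s
    return effective_mb / speed_mb_per_s / 60

# 1 GB of changed data over an 8 Mbps throttle, compressed to half size:
print(round(sync_minutes(1024, 8, 0.5)))  # -> 9 (about 9 minutes)
```

Doubling the throttle or halving the compressed size cuts the window proportionally, which is why throttling trades synchronization duration for reduced network impact.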

Running consistency checks

A consistency check is the process by which DPM checks for and corrects inconsistencies between a protected data source and its replica.

The performance of the protected computer, the DPM server, and the network is affected while a consistency check runs, but the impact is limited because only checksums and changed data are transferred.

After a successful replica creation, the network impact of a consistency check is significantly lower than that of initial replica creation. If the initial replica creation was interrupted or unsuccessful, however, the first consistency check can have an impact similar to replica creation.

We recommend that consistency checks be performed during off-peak hours.
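The checksum-based comparison can be sketched as follows. The block size and hashing scheme here are illustrative choices for the sketch, not DPM’s actual on-disk format:

```python
import hashlib

BLOCK_SIZE = 4  # illustrative; real implementations use much larger blocks

def changed_blocks(source: bytes, replica: bytes) -> list:
    """Return indexes of blocks whose checksums differ between the
    protected data source and its replica. Only these blocks (plus
    the checksums themselves) would need to cross the network."""
    diffs = []
    for i in range(0, max(len(source), len(replica)), BLOCK_SIZE):
        s = source[i:i + BLOCK_SIZE]
        r = replica[i:i + BLOCK_SIZE]
        if hashlib.sha256(s).digest() != hashlib.sha256(r).digest():
            diffs.append(i // BLOCK_SIZE)
    return diffs

# Only block 1 differs, so only that block needs to be re-sent:
print(changed_blocks(b"aaaabbbbcccc", b"aaaaXbbbcccc"))  # -> [1]
```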

DPM automatically performs a consistency check in the following instances:

  • When you modify a protection group by changing the exclusion list.

  • When a daily consistency check is scheduled and the replica is inconsistent.

Running express full backups

An express full backup is a type of synchronization in which the protection agent transfers a snapshot of all blocks that have changed since the previous express full backup (or since the initial replica creation, for the first express full backup) and updates the replica to include the changed blocks. The impact of an express full backup operation on performance and time is expected to be less than the impact of a full backup because DPM transfers only the blocks changed since the last express full backup.
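Conceptually, the changed-block tracking behind an express full backup can be modeled like this. This is a simplified sketch with invented names; DPM’s volume filter tracks blocks at the disk level:

```python
class ChangedBlockTracker:
    """Minimal model of changed-block tracking: writes mark blocks
    dirty, and an express full backup transfers only those blocks,
    then resets tracking for the next backup."""

    def __init__(self):
        self.dirty = set()

    def record_write(self, block_index):
        self.dirty.add(block_index)

    def express_full_backup(self):
        changed = sorted(self.dirty)
        self.dirty.clear()  # next backup starts from a clean slate
        return changed

tracker = ChangedBlockTracker()
for block in (7, 3, 7):              # two distinct blocks touched
    tracker.record_write(block)
print(tracker.express_full_backup())  # -> [3, 7]
print(tracker.express_full_backup())  # -> [] (nothing changed since)
```

Because each backup transfers only the blocks dirtied since the previous one, the cost scales with the change rate rather than with the total size of the protected data.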

Backing up to tape

When DPM backs up data from the replica to tape, there is no network traffic and therefore no performance impact on the protected computer.

When DPM backs up data from the protected computer directly to tape, there will be an impact on the disk resources and performance on the protected computer. The impact on performance is less when backing up file data than when backing up application data.

Running DPM processes

On the DPM server, three processes can impact performance:

  • DPM protection agent (MsDpmProtectionAgent.exe). DPM jobs affect both memory and CPU usage by the DPM protection agent. It is normal for CPU usage by MsDpmProtectionAgent.exe to increase during consistency checks.

  • DPM service (MsDpm.exe). The DPM service affects both memory and CPU usage.

  • DPM Administrator Console (an instance of Mmc.exe). DPM Administrator Console can be a significant factor in high memory usage. You can close it when it is not in use.

Note

Memory usage for the DPM instance of the SQL Server service (Microsoft$DPM$Acct.exe) is expected to be comparatively high. This does not indicate a problem. The service normally uses a large amount of memory for caching, but it releases memory when available memory is low.