Enhancing Replication Performance
You can enhance the general performance of all types of replication in your application and on your network by:
- Setting a minimum amount of memory allocated to Microsoft® SQL Server™ 2000.
- Using a separate disk drive for the transaction log for all databases involved in replication.
- Adding memory to servers used in replication.
- Using multiprocessor computers.
- Setting a fixed size for the distribution database.
- Publishing only the amount of data required.
- Running the Snapshot Agent only when necessary and at off-peak times.
- Placing the snapshot folder on a drive not used to store database or log files.
- Using a single snapshot folder per publication.
- Using compressed snapshot files where appropriate.
- Reducing the distribution frequency when replicating to numerous Subscribers.
- Using pull or anonymous subscriptions where appropriate.
- Limiting additional indexes and application logic in triggers at the Subscriber.
- Using horizontal partitioning judiciously.
- Using a fast network.
- Reducing the verbose level of replication agents to 0 except during initial testing, monitoring, or debugging.
- Running agents continuously instead of on very frequent schedules.
- Using the -UseInprocLoader agent property where appropriate.
Set a Minimum Amount of Memory Allocated to SQL Server
By default, SQL Server 2000 changes its memory requirements dynamically based on available system resources. To avoid low memory availability during replication activities, use the min server memory option to set the minimum available memory. If the server is a remote Distributor or a combined Publisher and Distributor, you must assign it at least 16 megabytes (MB) of memory. For more information, see Server Memory Options.
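As a sketch, you could reserve that minimum with sp_configure; the 16 MB value below is the minimum recommended above for a Distributor, not a tuned setting:

```sql
-- Reserve a minimum working set for SQL Server so replication
-- activity is not starved when other processes compete for memory.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory', 16;  -- value in MB
RECONFIGURE;
```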
Use a Separate Disk Drive for the Transaction Log for All Databases Involved in Replication
This applies to the publication database, the distribution database, and the subscription database. You can decrease the time it takes to write transactions by storing the log files on a disk drive different from the one used to store the database. You can mirror that drive, using Redundant Array of Inexpensive Disks (RAID) level 1, if you require fault tolerance. Use RAID 0 or 0+1 (depending on your need for fault tolerance) for other database files. This is a good practice regardless of whether replication is being used. For more information, see RAID Levels and SQL Server.
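As a sketch (the database name, file names, and drive letters below are illustrative only), you can place a database's log on its own drive when you create the database:

```sql
-- Hypothetical publication database with its data file on drive D
-- and its transaction log on a separate physical drive E.
CREATE DATABASE SalesPublication
ON PRIMARY
    ( NAME = SalesPublication_data,
      FILENAME = 'D:\Data\SalesPublication.mdf' )
LOG ON
    ( NAME = SalesPublication_log,
      FILENAME = 'E:\Logs\SalesPublication.ldf' );
```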
Consider Adding Memory to Servers Used in Replication
If you need to improve replication performance, consider adding memory to the servers used in replication. For example, if the computer is configured with 64 megabytes (MB) of memory, consider increasing the memory to 128 MB or more. You can use the sp_configure stored procedure to assign additional memory to Microsoft® SQL Server™ 2000.
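For example, after adding physical memory you might raise the server's memory ceiling with sp_configure; the 128 MB figure simply mirrors the example above:

```sql
-- Allow SQL Server to use up to 128 MB of memory.
-- 'max server memory' is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 128;  -- value in MB
RECONFIGURE;
```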
Use Multiprocessor Computers
SQL Server 2000 replication agents can take advantage of additional processors on the server. If you are running at high CPU usage, consider installing a faster CPU or multiple CPUs (symmetric multiprocessing).
Publish Only the Amount of Data Required
Because replication is easy to set up, there is a tendency to publish more data than is actually required. This can consume additional resources within the distribution databases and snapshot files, and can lower the throughput for required data. Avoid publishing unnecessary tables and consider updating publications less frequently.
Run the Snapshot Agent Only When Necessary and at Off-Peak Times
The Snapshot Agent bulk copies data from the published table on the Publisher to a file in the snapshot folder on the Distributor. In SQL Server 2000, the process of generating a snapshot for transactional replication no longer holds table locks on the published tables. Similarly, for merge replication in SQL Server 2000, concurrency is improved and lock duration is reduced when a snapshot is being generated. Although this reduces the impact on concurrently connected users, generating a snapshot is still a resource intensive process and is best scheduled during off-peak times.
Place the Snapshot Folder on a Drive that Does Not Store Database or Log Files
The Snapshot Agent performs a sequential write of data to the snapshot folder when generating the snapshot for any publication type. Because the Snapshot Agent always writes a complete copy of the data in the publication to disk, placing the snapshot folder on a separate drive from any database or log files reduces contention among the disks and helps the snapshot process complete faster.
Use a Single Snapshot Folder Per Publication
When specifying the publication properties related to snapshot location, you can choose to generate snapshot files to the default snapshot folder, to an alternate snapshot folder, or to both. Generating snapshot files in both locations requires additional processing when the Snapshot Agent runs. This takes more time than generating the snapshot files to a single location for the publication.
For more information, see Alternate Snapshot Locations.
Consider Using Compressed Snapshots
Compressing snapshot files in the alternate snapshot folder can reduce snapshot disk storage requirements and, in some cases, improve the performance of transferring snapshot files across the network when they are used for replication over the Internet. However, compressing the snapshot requires additional processing by the Snapshot Agent when generating the snapshot files, and by the Distribution Agent or Merge Agent when applying them. This can slow down snapshot generation and increase the time it takes to apply a snapshot in some cases. Consider these tradeoffs carefully when using compressed snapshots.
For more information, see Compressed Snapshot Files.
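Both the alternate snapshot folder and compression are set per publication. The following abridged sketch uses a hypothetical publication name and network share, and omits the other sp_addpublication parameters a real publication would need:

```sql
-- Abridged: create a publication whose snapshot files are written,
-- compressed, to an alternate snapshot folder on a network share.
EXEC sp_addpublication
    @publication = N'SalesPub',                       -- hypothetical name
    @alt_snapshot_folder = N'\\DistServer\Snapshots', -- hypothetical share
    @compress_snapshot = N'true';
```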
Reduce the Distribution Frequency When Replicating to Numerous Subscribers
A single Distributor can distribute transactions to a larger number of Subscribers if the Distribution Agents and Merge Agents associated with each Subscriber are scheduled to run less frequently. Stagger the initial runs of the Distribution Agents or Merge Agents so they do not all attempt to start simultaneously. If the agents run on a scheduled basis, the default schedules are set so that the agents do not run at the same time for regular synchronizations.
Consider Pull or Anonymous Subscriptions
The Distribution or Merge Agent runs on the Distributor for push subscriptions, and on Subscribers for pull or anonymous subscriptions. Using pull or anonymous subscriptions can increase performance by moving Distribution or Merge Agent processing from the Distributor to Subscribers.
You can also offload agent processing by using Remote Agent Activation. Agent processing can be moved to the Subscriber for push subscriptions and to the Distributor for pull subscriptions. Administration of the agent still takes place at the Distributor for push subscriptions and at the Subscriber for pull subscriptions. For more information, see Remote Agent Activation.
Anonymous subscriptions, which are especially useful for Internet applications, do not require information about the Subscriber to be stored in the distribution database at the Distributor for transactional replication, and they reduce the amount of Subscriber information stored in the publication database for merge replication. This reduces the resource demands on the Publisher and Distributor because those servers do not have to maintain information about anonymous Subscribers.
Anonymous subscriptions are a special category of pull subscriptions. In regular pull subscriptions, the Distribution or Merge Agent runs at the Subscriber (thereby reducing the resource demands on the Distributor), but still stores information at the Publisher. When a publication supports anonymous subscriptions, the publication is configured to always have a snapshot ready for new Subscribers.
For transactional replication, this means that a new snapshot is generated every time the Snapshot Agent runs. Typically, a snapshot is not generated unless a new Subscriber is waiting for a snapshot or a Subscriber needs to be reinitialized when the Snapshot Agent runs. So although anonymous subscriptions can reduce the resource demands at the Distributor, the tradeoff is that snapshots are generated more often. With merge replication, a new snapshot is always generated when the Snapshot Agent runs, regardless of the type of subscriptions the publication supports.
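A pull subscription is created at the Subscriber. The following abridged sketch uses hypothetical server, database, and publication names and omits several optional parameters:

```sql
-- Run at the Subscriber: register a pull subscription and the
-- Distribution Agent job that will run locally at the Subscriber.
EXEC sp_addpullsubscription
    @publisher = N'PUBSERVER',
    @publisher_db = N'SalesDB',
    @publication = N'SalesPub';

EXEC sp_addpullsubscription_agent
    @publisher = N'PUBSERVER',
    @publisher_db = N'SalesDB',
    @publication = N'SalesPub',
    @distributor = N'DISTSERVER';
```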
Additional Indexes at the Subscriber
If a subscription database is also used for decision support analysis and you add many indexes to support those queries, be aware that these additional indexes can significantly reduce the throughput with which the Distribution Agent or Merge Agent applies changes at the Subscriber. In some cases, where you are mostly aggregating the data at the Subscriber, it can be more efficient to create an indexed view at the Publisher and publish it as a table to the Subscriber using transactional replication. For more information, see Indexed Views.
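As a sketch of the indexed-view approach (the table and column names are illustrative, and the view must meet the indexed-view requirements, such as SCHEMABINDING and a COUNT_BIG(*) accompanying any aggregate over non-nullable columns):

```sql
-- At the Publisher: a schema-bound view that pre-aggregates order data.
CREATE VIEW dbo.ProductSales
WITH SCHEMABINDING
AS
SELECT ProductID,
       SUM(Quantity) AS TotalQuantity,
       COUNT_BIG(*)  AS OrderCount
FROM dbo.OrderDetails
GROUP BY ProductID;
GO
-- A unique clustered index materializes the view, which can then be
-- published as a table to the Subscriber.
CREATE UNIQUE CLUSTERED INDEX IX_ProductSales
    ON dbo.ProductSales (ProductID);
```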
Application Logic in Triggers at the Subscriber
Similarly, additional business logic in user-defined triggers at the Subscriber can also slow the replication of changes to the Subscriber. For transactional replication, it can be more efficient to include this logic in the custom stored procedures used to apply the replicated commands. For more information, see Using Custom Stored Procedures in Articles.
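As a hedged sketch (the procedure, table, and column names are hypothetical), logic that would otherwise fire in a Subscriber trigger can be folded into the custom insert procedure that applies replicated commands; the procedure is named in the article's @ins_cmd parameter when the article is created:

```sql
-- Hypothetical custom insert procedure applied at the Subscriber.
-- The article would reference it, for example, with
-- @ins_cmd = N'CALL dbo.cust_ins_Orders' in sp_addarticle.
CREATE PROCEDURE dbo.cust_ins_Orders
    @OrderID    int,
    @CustomerID int,
    @Amount     money
AS
BEGIN
    INSERT dbo.Orders (OrderID, CustomerID, Amount)
    VALUES (@OrderID, @CustomerID, @Amount);

    -- Business logic that would otherwise run in a trigger:
    UPDATE dbo.CustomerTotals
    SET Total = Total + @Amount
    WHERE CustomerID = @CustomerID;
END;
```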
Use Horizontal Partitioning Judiciously
When a transactional publication contains one or more horizontally partitioned articles, the Log Reader Agent must apply the filter to each row affected by an update to the table as it scans the transaction log, which reduces the throughput of the Log Reader Agent. If achieving maximum throughput is key, consider using DTS packages to perform custom horizontal partitioning instead. This allows the Log Reader Agent to move transactions out of the published database's log as quickly as possible, and instead of imposing the overhead of filtering on all Subscribers, only the Subscriber that chooses to use a DTS package to filter the data is affected.
Similarly, every time you synchronize changes, merge replication must evaluate changed or deleted rows to determine which Subscribers should receive those rows. When horizontal partitioning is used to reduce the subset of data required at a Subscriber, this processing is more complex and can be slower than when you publish all rows in a table. Consider carefully the tradeoff between reduced storage requirements at each Subscriber and the need for maximum throughput.
Use a Fast Network
The propagation of changes to Subscribers can be significantly enhanced by using a fast network of 100 Mbps or faster.
Reduce the Verbose Level of Replication Agents
Reduce the -HistoryVerboseLevel parameter and the -OutputVerboseLevel parameter of the Distribution Agents or Merge Agents to their lowest values. This reduces the number of new rows inserted to track agent history and output; instead, previous history messages with the same status are updated with the new history information. Changing these agent parameters can yield a performance gain of 10 to 15 percent or more.
However, increase the -HistoryVerboseLevel during testing, monitoring, and debugging so that you have as much history information about agent activity as possible.
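Agent parameters such as these are typically appended to the agent's job command line. The following Distribution Agent invocation is an abridged sketch; all server and database names are placeholders:

```
distrib.exe -Publisher PUBSERVER -PublisherDB SalesDB -Publication SalesPub
    -Distributor DISTSERVER -Subscriber SUBSERVER -SubscriberDB SalesDB
    -OutputVerboseLevel 0 -HistoryVerboseLevel 1
```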
Run Agents Continuously Instead of on Very Frequent Schedules
Setting the agents to run continuously rather than on very frequent schedules (such as every minute) improves replication performance. When you set the Distribution Agent or Merge Agent to run continuously, changes are propagated to the other servers in the topology as soon as they occur. Because the agent runs continuously, it also avoids the repeated starting and stopping that frequent scheduled runs impose on the server where the agent is running.
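On the agent command line, continuous operation is requested with the -Continuous parameter. As an abridged sketch with placeholder server and database names:

```
distrib.exe -Publisher PUBSERVER -PublisherDB SalesDB -Publication SalesPub
    -Distributor DISTSERVER -Subscriber SUBSERVER -SubscriberDB SalesDB
    -Continuous
```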
Consider Using the -UseInprocLoader Agent Property
The -UseInprocLoader agent property improves the performance of the initial snapshot for snapshot replication, transactional replication, and merge replication.
When you apply this property to either the Distribution Agent (for snapshot replication or transactional replication) or the Merge Agent (for merge replication), the agent uses the in-process BULK INSERT command when applying snapshot files to the Subscriber.
The -UseInprocLoader property cannot be used with character mode bcp, and it cannot be used by OLE DB or ODBC Subscribers.
Important: When using the -UseInprocLoader property, the SQL Server 2000 account under which the Subscriber is running must have read permissions on the directory where the snapshot .bcp data files are located. When the -UseInprocLoader property is not used, the agent (for heterogeneous Subscribers) or the ODBC driver loaded by the agent (for SQL Server 2000 Subscribers) reads from the files, so the security context of the Subscriber SQL Server 2000 account is not used.
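As with the other agent parameters, -UseInprocLoader is added to the agent's command line. An abridged sketch with placeholder server and database names:

```
distrib.exe -Publisher PUBSERVER -PublisherDB SalesDB -Publication SalesPub
    -Distributor DISTSERVER -Subscriber SUBSERVER -SubscriberDB SalesDB
    -UseInprocLoader
```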