MSMQ-MQSeries Bridge Performance Results

Archived content. No warranty is made as to technical accuracy. Content may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

Published: June 1, 2000

For SNA Server 4.0 (Service Pack 3) and Host Integration Server 2000 on Windows 2000

Version 1.0

A white paper by: EIG Performance Team

On This Page

Executive Summary
Test Results
Test Methodology and Environment
Observations
Overall Thoughts

Executive Summary

The MSMQ-MQSeries Bridge is capable of sustaining 450 TPS (Dual PII 400) or 750 TPS (Quad Xeon 400) of 1-kilobyte round-trip transactions, for a total sustained message throughput of 900 express msg/sec and 1500 express msg/sec respectively. The MSMQ-MQSeries Bridge is not CPU bound, which allows other, more processor-intensive back-end applications to run on the same machine.

The following tables summarize the top numbers for a selection of representative tests. The numbers shown were achieved with a Distributed Component Object Model (DCOM) client load.

Table 1 Non-Transactional Messages through MSMQ-MQSeries Bridge (SNA 4.0 SP3 vs. HIS2000) throughput (msg/sec) on Windows 2000

| Test                                        | SNA 4.0 SP3 | HIS2000 (1QM) | HIS2000 (3QM) |
|---------------------------------------------|-------------|---------------|---------------|
| 1k messages to single MQSeries QM           | 740         | 740           | 1050*         |
| 1k/8k messages to single MQSeries QM        | 350         | 450           | 600           |
| 1k message MQSeries to MSMQ only, single QM | 370         | 370           | 1010          |
| 8k message MQSeries to MSMQ only, single QM | 250         | 345           | 495           |
| 1k message MSMQ to MQSeries only, single QM | 425         | 450           | 975*          |

Messages contained character-only data.

* MSMQ limited; see Observations for more details.

Table 2 Transactional Messages through MSMQ-MQSeries Bridge (SNA 4.0 SP3 vs. HIS2000) throughput (msg/sec) on Windows 2000

| Test                                        | SNA 4.0 SP3 | HIS2000 |
|---------------------------------------------|-------------|---------|
| 1k messages to single MQSeries QM           | 130         | 130     |
| 1k message MQSeries to MSMQ only, single QM | 197         | 197     |
| 8k message MQSeries to MSMQ only, single QM | 85          | 165     |
| 1k message MSMQ to MQSeries only, single QM | 75          | 75      |

Messages contained character-only data.

Test Results

Round Trip 1k Transactions

Non-Transactional

Using a single MQSeries Queue Manager through a single MSMQ-MQSeries Bridge, the maximum sustained rate was 370 TPS, for a total of 740 msg/sec through the MSMQ-MQSeries Bridge, using the SNA 4.0 SP3 version. When the same test was run with Host Integration Server 2000, the sustained rate stayed the same at 370 transactions per second.
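The TPS and msg/sec figures quoted throughout are related by a factor of two, since each round-trip transaction consists of one request message and one reply message. A trivial sketch of the conversion:

```python
def round_trip_msg_rate(tps: float) -> float:
    """Each round-trip transaction carries one request plus one reply,
    so total traffic through the Bridge is twice the transaction rate."""
    MESSAGES_PER_TRANSACTION = 2  # request + reply
    return tps * MESSAGES_PER_TRANSACTION

# Figures from the single-Queue-Manager test above: 370 TPS -> 740 msg/sec.
rate = round_trip_msg_rate(370)
```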

By increasing the number of Queue Managers that the MSMQ-MQSeries Bridge serviced to three (tripling the number of active message pipes), the rate increased to 525 TPS for a total of 1050 msg/sec. The limiting factor was the number of outgoing messages per second that MSMQ could sustain to the Routing Server's connector queue (350 msg/sec).
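The bottleneck described above can be expressed as a simple min() model: total throughput is capped by the lower of the Bridge's own capacity and the per-pipe MSMQ outgoing limit times the number of pipes. The 1500 msg/sec Bridge ceiling used in the example is the Executive Summary figure for the quad-processor configuration; the function itself is illustrative, not part of any product API.

```python
def sustained_rate(pipes, msmq_per_pipe, bridge_capacity):
    """Effective throughput is min(Bridge capacity,
    pipes * per-pipe MSMQ outgoing limit)."""
    return min(bridge_capacity, pipes * msmq_per_pipe)

# Three Queue Manager pipes at the ~350 msg/sec MSMQ outgoing limit:
# MSMQ, not the Bridge, is the binding constraint (3 * 350 = 1050 msg/sec).
rate = sustained_rate(pipes=3, msmq_per_pipe=350, bridge_capacity=1500)
```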


Transactional

The sustained rate for a single MQSeries Queue Manager through a single MSMQ-MQSeries Bridge was 65 TPS, for a total of 130 msg/sec, with both the SNA 4.0 SP3 and Host Integration Server 2000 versions. CPU usage on the MSMQ-MQSeries Bridge was less than 10%, but the disk queue length was 1.3: the disk was being heavily used, although it was still idle about 35% of the time.

Round Trip MSMQ 1k MQSeries 8k Transactions

While testing the MSMQ-MQSeries Bridge using a single MSMQ message connector pipe and one MQSeries message connector pipe, the maximum sustained rate was 175 TPS, for a total of 350 msg/sec, using the MSMQ-MQSeries Bridge supplied with SNA 4.0 SP3. The maximum sustained rate increased to 225 TPS, for a total of 450 msg/sec, when the MSMQ-MQSeries Bridge was upgraded to the version supplied with Host Integration Server 2000.

Adding two MQSeries Queue Managers to the Host Integration Server 2000 version of the MSMQ-MQSeries Bridge increased the maximum sustained rate to 300 TPS, or 600 msg/sec.

MQSeries to MSMQ One Way 1k

Non-Transactional

On the SNA 4.0 SP3 MSMQ-MQSeries Bridge, the maximum sustained rate was 370 msg/sec. The messages were sent from one Queue Manager to one independent client. Average CPU usage on the MSMQ-MQSeries Bridge was 19%, and disk input/output and memory usage were relatively minor. Upgrading the MSMQ-MQSeries Bridge to Host Integration Server 2000 brought no change in the maximum sustained rate: processor usage was again 19%, and disk and memory were virtually untouched.

When two additional MQSeries Queue Managers were added to the Host Integration Server 2000 MSMQ-MQSeries Bridge, the maximum sustained rate was 1010 msg/sec. CPU usage increased to 57%, but again memory and disk were virtually untouched.

Transactional

Changing to a transactional message does decrease total throughput, as you would expect for a deliver-once-and-only-once message. Using the SNA 4.0 SP3 MSMQ-MQSeries Bridge, the maximum sustained rate was 197 msg/sec. CPU usage was similar at 20%, but disk activity increased to a queue length of almost 0.95. (A value of 2.00 would mean this particular queue is saturated.)

Updating the MSMQ-MQSeries Bridge to Host Integration Server 2000 made no difference in throughput: the maximum sustained rate was again 197 msg/sec. CPU usage, however, dropped to 15%. Disk activity again showed a queue length of 0.95.


MQSeries to MSMQ One Way 8k

Non-Transactional

Increasing the message size had the expected effect. The maximum sustained rate that the SNA 4.0 SP3 MSMQ-MQSeries Bridge could handle was 250 msg/sec. CPU usage on the MSMQ-MQSeries Bridge stayed fairly consistent, increasing only to 21%, and disk activity rose to a queue length of 0.25. The maximum sustained rate increased to 345 msg/sec when the MSMQ-MQSeries Bridge was changed to the version shipped with Host Integration Server 2000; disk activity stayed at a queue length of 0.25.

Putting two additional MQSeries Queue Managers on the Host Integration Server 2000 version of the MSMQ-MQSeries Bridge increased the sustained throughput to 495 msg/sec. CPU usage on the MSMQ-MQSeries Bridge increased to almost 70%, and the disk queue length was 0.87.

Transactional

Increasing the message size and making delivery once-and-only-once slows throughput a little more. For the SNA 4.0 SP3 version of the MSMQ-MQSeries Bridge, the maximum sustained throughput was 85 msg/sec; CPU usage, however, decreased to about 10%, and the disk queue length was 0.56. Converting to the Host Integration Server 2000 version of the MSMQ-MQSeries Bridge increased the maximum sustained throughput to 165 msg/sec. The MSMQ-MQSeries Bridge machine's performance counters were much the same: 10% CPU and a 0.54 disk queue length.


MSMQ to MQSeries One Way 1k

Non-Transactional

The MSMQ-MQSeries Bridge was able to sustain the demanded 425 msg/sec with one message pipe from MSMQ to MQSeries, without any buffering on the MSMQ-MQSeries Bridge machine, using the SNA 4.0 SP3 version. The messages were sent from one independent client to one Queue Manager. CPU usage on the MSMQ-MQSeries Bridge averaged 34% over the 1000-second test, and disk input/output was low (queue length 0.03). Changing to the Host Integration Server 2000 version of the MSMQ-MQSeries Bridge increased the sustained rate to 450 msg/sec; the CPU average was again 34% and the queue length 0.03.

Boosting the number of MQSeries Queue Managers from one to three would be expected to increase the maximum sustained throughput, and to a certain extent it did. The MSMQ-MQSeries Bridge sustained 975 msg/sec through the Host Integration Server 2000 version, with CPU at 84% and a disk queue length of 0.086. The throughput could conceivably have been higher, however, as the limiting factor was the sustained rate of outgoing messages per second from the MSMQ client to the Routing Server connector queue (about 325 msg/sec per client).

Transactional

The sustained rate through an SNA 4.0 SP3 MSMQ-MQSeries Bridge averaged 75 msg/sec. At this rate the CPU was running at only 8%, but the disk queue length averaged 1.25. When the MSMQ-MQSeries Bridge was upgraded to Host Integration Server 2000, the maximum sustained rate was again 75 msg/sec, with CPU averaging 4% and a disk queue length of 1.01. Messages began to back up on the MSMQ client machine before they backed up on the MSMQ-MQSeries Bridge, causing the test to stop at the 75 msg/sec mark.

Test Methodology and Environment

The goal of the testing was to simulate a typical corporate messaging network, and then examine the behavior of the MSMQ-MQSeries Bridge as it was subjected to an ever-increasing message load. Testing was done at the Microsoft Enterprise Interoperability Group Performance Laboratory at corporate headquarters in Redmond, Washington during the period of 14-23 June 2000.

The Test Methodology

The test methodology for this comparison centered on a simulation of interactive transaction processing to deliver messages to the MSMQ and MQ Series networks. With MSMQ-MQSeries Bridge, MSMQ (Microsoft Message Queue) and IBM MQSeries, applications can send messages to each other between the message queuing systems. MSMQ-MQSeries Bridge achieves this by mapping the messages and data fields of the sending system, and the values associated with those fields, to the fields and values of the receiving environment. After the mapping and conversion, MSMQ-MQSeries Bridge completes the process by routing the message across the combined MSMQ and MQSeries networks.

A Microsoft test tool was used to create simulated messages. The tool, designed to provide basic stress testing for the MSMQ-MQSeries Bridge, works by sending messages to, or receiving messages from, a predetermined queue. It is configurable to create any number, size, and type of messages required to test MSMQ-MQSeries Bridge functionality. The test software was an ActiveX application referencing the MSMQ or MQSeries ActiveX components. (MQM.dll and MQIC32.dll do not provide a way to check a message's arrival time in the queue, and because dequeueing from an MQSeries queue is slower than enqueueing to it, there needs to be a way to see how quickly the queue is populated.) Performance counters were retrieved via the Windows System Monitor application.
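Because the client DLLs expose no arrival timestamp, population rate has to be measured externally by sampling queue depth over time. The sketch below illustrates that idea with a purely in-memory stand-in; `SimulatedQueue` and `measure_population_rate` are illustrative names, not the test tool's API (the real tests drove the MSMQ/MQSeries ActiveX controls).

```python
import time
from collections import deque

class SimulatedQueue:
    """In-memory stand-in for an MQSeries queue (illustrative only)."""
    def __init__(self):
        self._messages = deque()

    def put(self, body):
        self._messages.append(body)

    def depth(self):
        return len(self._messages)

def measure_population_rate(queue, send, n_messages):
    """Send n_messages and sample the queue depth after each send,
    giving an external view of how quickly the queue is populated."""
    samples = []
    start = time.monotonic()
    for _ in range(n_messages):
        send(queue, b"x" * 1024)  # 1k character-only payload, as in the tests
        samples.append((time.monotonic() - start, queue.depth()))
    elapsed = time.monotonic() - start
    rate = n_messages / elapsed if elapsed > 0 else float("inf")
    return rate, samples

rate, samples = measure_population_rate(SimulatedQueue(),
                                        lambda q, body: q.put(body), 100)
```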

Key Testing Metrics

Transactions per second (TPS) – the number of transactions the MSMQ-MQSeries Bridge was able to send and receive each second; each transaction consists of one request message and one reply message. A separate computer running Microsoft Network Monitor (a protocol analyzer) recorded both data transaction throughput and total LAN traffic loads.

CPU Load – CPU utilization on the central system was recorded by performance counters gathered by Perfmon.

The Test Environment

Network Topologies

Table 3 [Network topology diagram]

Seven tests were conducted using a Dual PII 400 MHz system for SNA 4.0 SP3, and fourteen tests were conducted using a Dual PII 400 MHz system for Host Integration Server 2000. In each configuration, the messaging gateway system was incrementally stressed toward maximum load while running the MSMQ-MQSeries Bridge.

For the gateways, all computers (hardware configurations are listed in Table 4) used two separate Ethernet segments. The environment consisted of two 100BaseT networks providing connectivity for the client workstations, the gateway, the network monitor (not shown), and the MQSeries servers.

The clients each ran cycles of sending and receiving X transactions per window; this request frequency generated the resulting messages-per-second load for each client. The second window was used to equalize client load (clients ran continuous cycles). Transactions were added incrementally, and test data was collected after each transaction was added to the test bed configuration. Transactions were added until the connector queue backed up and could not flush itself within a reasonable amount of time.
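The windowed pacing pattern described above can be sketched as a loop that sends a fixed number of transactions per window, then sleeps out the remainder of the window. A minimal sketch; the function and parameter names are illustrative, not the test tool's API.

```python
import time

def run_client_cycle(send, tx_per_window, window_s, n_windows):
    """Pace a client at tx_per_window transactions per fixed-length
    window, sleeping out the remainder of each window so that every
    client generates the same steady load."""
    sent = 0
    for _ in range(n_windows):
        window_start = time.monotonic()
        for _ in range(tx_per_window):
            send()
            sent += 1
        remaining = window_s - (time.monotonic() - window_start)
        if remaining > 0:
            time.sleep(remaining)  # wait out the rest of the window
    return sent

# Four 50 ms windows of 5 transactions each -> 20 transactions at a ~100/sec pace.
total = run_client_cycle(lambda: None, tx_per_window=5, window_s=0.05, n_windows=4)
```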

Hardware Platforms

MSMQ-MQSeries Bridge Machine. A dual 400 MHz machine with 512 MB of RAM was configured to run Microsoft Windows 2000 Advanced Server, MQSeries Client v5.1, Microsoft MSMQ v2.0 with routing enabled, and the Microsoft MSMQ-MQSeries Bridge from SNA 4.0 SP3 or Host Integration Server 2000.

MQSeries Client Machines. Configured to run Microsoft Windows 2000 Professional, and MQSeries Client v5.1.

MQSeries Server Machine. The server was configured with 4 GB of RAM and eight PIII 550 processors, running Microsoft Windows 2000 Datacenter Server and MQSeries Server v5.1.

MSMQ DC Machine. Configured to run Microsoft Windows 2000 Advanced Server and Microsoft MSMQ v2.0.

MSMQ Client Machines. Configured to run Microsoft Windows 2000 Professional and MSMQ v2.0.

Table 4

| Function        | Vendor  | OS                           | Processor      | RAM    | Network Adaptor         |
|-----------------|---------|------------------------------|----------------|--------|-------------------------|
| Clients (8)     | Ciara   | Windows 2000 Professional    | PII 350        | 128 MB | Intel Ether Express Pro |
| Bridge          | Ciara   | Windows 2000 Advanced Server | Dual PII 400   | 512 MB | Intel Ether Express Pro |
| DC              | Ciara   | Windows 2000 Advanced Server | Dual PII 400   | 512 MB | Intel Ether Express Pro |
| MQSeries Server | Fujitsu | Windows 2000 Datacenter      | Eight PIII 550 | 4 GB   | Intel Pro/100+ Server   |

Performance Settings

The MSMQ-MQSeries Bridge has definable attributes that help increase or decrease the number of messages passed through it. One setting is the number of threads used for each of the message pipes. Ideally you would set this to match the number of Queue Managers the Bridge services. Only minimal testing was done with varying numbers of threads; the results showed no noticeable difference in performance when more threads are configured than there are MQSeries Queue Managers. If the thread count is set lower, however, performance dips depending on the number of messages sent versus the number of MQSeries Queue Managers serviced, because multiple jobs contend for the available thread(s).

The settings that do have an impact on the Bridge are located on each individual message pipe. Right-click a message pipe, choose Properties, and go to the Batch tab, where you will find three configurable batch options.

Default:

Max number of: 10
Max accumulated Size: 1024
Max accumulated Time: 512

Performance Settings used in this paper:

Max number of: 1000
Max accumulated Size: 300000
Max accumulated Time: 256
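The three batch settings above form a triple-threshold flush policy: a batch is dispatched when any of the maximum message count, accumulated size, or accumulated time is reached. The sketch below illustrates that general pattern; the class is illustrative, and the units assumed here (bytes for size, milliseconds for time) are an assumption, not documented Bridge semantics.

```python
import time

class BatchAccumulator:
    """Sketch of triple-threshold batching on a message pipe: a batch
    is flushed when ANY of max number, max accumulated size, or max
    accumulated time is reached. Illustrative only; the Bridge's exact
    units and semantics are not documented here."""

    def __init__(self, max_number=1000, max_size=300000, max_time_s=0.256):
        self.max_number = max_number
        self.max_size = max_size
        self.max_time_s = max_time_s
        self._reset()

    def _reset(self):
        self.batch = []
        self.size = 0
        self.started = time.monotonic()

    def add(self, message):
        """Accumulate a message; return the flushed batch if any
        threshold tripped, otherwise None."""
        self.batch.append(message)
        self.size += len(message)
        if (len(self.batch) >= self.max_number
                or self.size >= self.max_size
                or time.monotonic() - self.started >= self.max_time_s):
            flushed = self.batch
            self._reset()
            return flushed
        return None
```

Raising the count and size thresholds, as the test configuration does, lets the pipe amortize per-batch overhead across many more messages at the cost of slightly higher latency per message.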

Observations

  • MSMQ transactional messages backed up on the client side after 75 to 100 messages per second. This caused the transactional message tests to stop and report low messages-per-second rates.

  • MSMQ non-transactional messages back up on the client side around 425 messages per second per connection. When sending to multiple queue managers, the average dips to about 350 messages a second.

  • Roughly every 109 milliseconds, the MSMQ-MQSeries Bridge sends a 64-byte message to each of the active Connector Queues on the MQSeries side to check for waiting messages. This equates to a minimum of ten 64-byte messages a second for one active MQSeries message pipe. The interval is adjustable via a registry change in Host Integration Server 2000.

  • MQSeries takes 150-200 milliseconds to close a queue and 15-16 milliseconds to open one. Because these are expensive operations, leaving a queue open for long periods proves more advantageous for performance.

  • With non-transactional messages, if the queue depth on the client or on any message pipe to a connector queue climbs above 32,000 1k messages, the MSMQ service (mqsvc.exe) starts to take processor time away from all other services. The time is not spent sending messages, as the outgoing messages-per-second rate decreases to 100-200 messages a second.

  • If a variant full of strings is placed in the body of an MSMQ message, the message is converted to Unicode, doubling its size. However, the MSMQ-MQSeries Bridge and MQSeries recognize the message as ASCII format. For example, 1k of text converts to 2k when placed into an MSMQ message body, yet the MSMQ-MQSeries Bridge reports the message size as 1k and MQSeries reports the body size as 1k.

  • The MSMQ VB plug-in does not allow you to specify the type of message body created; the default is string, which is then converted to Unicode. In C you are allowed to specify the type, and in the VB plug-in for MQSeries you can specify the format of the message body.

  • If a message is sent from MQSeries to the MSMQ side, the only prerequisite is that the remote Queue Manager is identified in the MQSeries Queue Manager. If a message is sent to a non-existent queue, the overall performance of the MSMQ-MQSeries Bridge decreases and the messages wind up in the MSMQ-MQSeries Bridge Dead Letter Queue; MQSeries is not notified that the message could not be delivered. MSMQ, on the other hand, will not even allow you to open a queue that is not identified in the MSMQ GUI. (If the queue is not identified on the MQSeries Queue Manager, the message is delivered to the MSMQ-MQSeries Bridge Dead Letter Queue. Transactional messages stay in the outgoing queue until they expire; this also reduces performance through the MSMQ-MQSeries Bridge, to 40+ messages a second, or to less than 1 if there are tens of thousands of messages in the outgoing queue.) Performance on the MSMQ-MQSeries Bridge does degrade while these messages pass through it.

  • If a quota limit is not set on MSMQ and messages begin to pile up in a queue, performance begins to degrade as MSMQ consumes the Available Bytes on the MSMQ-MQSeries Bridge machine. If the queue depth grows large enough, the machine becomes sluggish and unresponsive.
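Two of the observations above reduce to quick arithmetic, sketched here. The 109 ms interval, 64-byte probe size, and UTF-16 string bodies are the figures stated in the bullets; the function name is illustrative.

```python
# Background polling traffic: the Bridge probes each active MQSeries
# connector queue roughly every 109 ms with a 64-byte message.
def polling_traffic_bytes_per_sec(pipes, interval_ms=109, probe_bytes=64):
    probes_per_sec = 1000 / interval_ms   # roughly 9-10 probes/sec per pipe
    return probes_per_sec * probe_bytes * pipes

# Unicode doubling: a string body is stored as UTF-16, so 1k of
# character-only data occupies 2k in the MSMQ message body, even though
# the Bridge and MQSeries report the body as 1k of ASCII.
text = "A" * 1024
ascii_size = len(text.encode("ascii"))        # the size reported
unicode_size = len(text.encode("utf-16-le"))  # the size actually stored
```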

Overall Thoughts

Within the limitations of the hardware used for testing and the messaging software used to put messages into it, the MSMQ-MQSeries Bridge appears, as a whole, to be virtually transparent in sending messages between MSMQ and MQSeries. There are optional fields (for example, Reply To Queue Manager) that, when populated, increase the amount of time required to send messages through the MSMQ-MQSeries Bridge. As new features (for example, encryption) are added to the MSMQ-MQSeries Bridge, the time to send messages will increase, causing the number of messages per second to decrease and the MSMQ-MQSeries Bridge process to use more CPU. Limiting the number of times the MSMQ and MQSeries protocols are invoked by the MSMQ-MQSeries Bridge will be instrumental in keeping its overhead to a minimal level.

Both messaging products make it expensive on the network to open and close queues. However, if a queue is never closed over long periods of time, there may be detrimental consequences for the machine. The tests conducted here opened the queue, sent all messages in the specified time (for example, 5000 in 10 seconds), and then closed the queue. Previous testing had determined that opening and closing the queue for each message caused the TPS to decrease by a minimum of 20%.
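The open/close cost quoted in the Observations (15-16 ms to open, 150-200 ms to close) can be amortized over the messages sent while the queue stays open, which is why the open-once-per-run pattern wins. A sketch of the arithmetic, using the upper-end timings:

```python
def amortized_open_close_ms(messages_per_cycle, open_ms=16, close_ms=200):
    """Cost of one queue open/close cycle spread across the messages
    sent while the queue is open (timings from the Observations)."""
    return (open_ms + close_ms) / messages_per_cycle

# Opening and closing per message adds the full ~216 ms every time,
# while one cycle per 5000-message run is negligible per message.
per_message = amortized_open_close_ms(1)
per_run = amortized_open_close_ms(5000)
```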

In comparing the MSMQ-MQSeries Bridge on the Windows NT 4.0 platform (see Microsoft MSMQ-MQSeries Bridge Performance Results for SNA Server 4.0 SP3 on Windows NT 4.0) and on Windows 2000, there is one glaring difference: MSMQ performance in the Connector queue seems to have degraded. With MSMQ v1.0 and MQSeries v5.0, the limiting factor was that MQSeries bottlenecked at 500+ messages a second. Now, with MSMQ v2.0 and MQSeries v5.1, the maximum sustained rate that can be pumped into the Connector Queue from MSMQ is about 425 messages a second. An internal test of MQSeries put enqueueing at over 600 messages a second and dequeueing at over 575 messages a second on an eight-processor machine, with very low disk activity, processor activity, and memory usage (0.2 disk queue length, <10% processor time, 5% memory usage).
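The enqueue/dequeue asymmetry in that internal test implies a simple backlog model: a queue fed faster than it drains grows at the difference of the two rates. This is a sketch of that arithmetic, not a claim about MQSeries internals.

```python
def backlog_after(seconds, enqueue_rate=600, dequeue_rate=575):
    """Queue depth after a sustained run where producers outpace
    consumers; the default rates are the internal-test figures above."""
    return max(0, (enqueue_rate - dequeue_rate) * seconds)

# At 600 msg/sec in and 575 msg/sec out, a 1000-second run leaves
# 25,000 undrained messages in the queue.
depth = backlog_after(1000)
```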

On the other hand, the changes made to MSMQ and MQSeries have increased the performance of the queues. On the NT 4.0 platform, one-way MSMQ->MQS messages measured 500 messages a second and MQS->MSMQ measured 565 messages a second sustained; however, running both pipes together brought the total down to 600 messages a second (300 from either side). On the surface, then, the one-direction rate has decreased while the multiple-pipe rate has increased. The MSMQ-MQSeries Bridge does not cause this one-direction slowdown, as internal MSMQ testing shows that a sustained rate of 400+ messages a second is expected.