
Microsoft MSMQ-MQSeries Bridge Performance Results

 

Host Integration Server 2000
Microsoft Corporation

October 2001

Summary: Microsoft Host Integration Server 2000 (HIS) includes a number of components that enable integration with applications on IBM mainframe and AS/400 minicomputers as well as on other platforms. The Microsoft MSMQ-MQSeries Bridge included with Host Integration Server 2000 and SNA Server 4.0 provides a gateway between IBM MQSeries messaging applications predominant on IBM host computers and Microsoft Message Queuing (MSMQ) used on Windows systems. This article reports on performance results based on testing the MSMQ-MQSeries Bridge supplied with Host Integration Server 2000 and SNA Server 4.0. (13 printed pages)

Contents

Introduction
Test Methodology and Environment
   The Test Methodology
   Key Test Metrics
   The Test Environment
   Performance Settings
Test Results
   Test Results for Round-Trip 1-KB Messages
   Test Results for Round-Trip MSMQ 1-KB MQSeries 8-KB Messages
   Test Results for MQSeries to MSMQ One-Way 1-KB Messages
   Test Results for MQSeries to MSMQ One-Way 8-KB Messages
   Test Results for MSMQ to MQSeries One-Way 1-KB Messages
Observations
Final Thoughts

Introduction

Microsoft® Host Integration Server 2000 includes a set of Application Integration components that provide desktop and server-based applications with access to host applications. These components include the following:

  • Microsoft MSMQ-MQSeries Bridge
  • COM Transaction Integrator (COMTI) for CICS and IMS

The Microsoft MSMQ-MQSeries Bridge included with Host Integration Server 2000 and SNA Server 4.0 provides a gateway between IBM MQSeries messaging applications predominant on IBM host computers and Microsoft Message Queuing (MSMQ) used on Windows systems.

This article reports the results of performance testing using the Microsoft MSMQ-MQSeries Bridge provided with Host Integration Server 2000 and SNA Server 4.0 with Service Pack 3.

Based on testing on a dual processor Pentium II 400 MHz computer, the MSMQ-MQSeries Bridge is capable of sustaining 450 transactions per second (TPS) for one kilobyte non-transactional round-trip messages. This represents a total sustained message throughput of 900 messages per second.

Based on testing on a quad processor Pentium II Xeon 400 MHz computer, the MSMQ-MQSeries Bridge is capable of sustaining 900 transactions per second (TPS) for one kilobyte non-transactional round-trip messages. This represents a total sustained message throughput of 1,500 messages per second.

The MSMQ-MQSeries Bridge is not CPU bound, which allows other more processor-intensive applications to run on the same computer.

The following tables summarize the results of performance testing the MSMQ-MQSeries Bridge with a Distributed Component Object Model (DCOM) client load.

Table 1. Non-transactional messages throughput using MSMQ-MQSeries Bridge (messages/second)

Test Description                                               SNA Server 4.0    HIS 2000          HIS 2000
                                                               Service Pack 3    (1 MSMQ Queue     (3 MSMQ Queue
                                                                                 Manager)          Managers)
MSMQ to MQSeries 1-KB round-trip (1 KB send/1 KB receive)           740               740           1050 (Note 2)
MSMQ to MQSeries 1-KB one-way (1 KB send to MQSeries queue)         425               450            975 (Note 2)
MSMQ to MQSeries round-trip (1 KB send/8 KB receive)                350               450            600
MQSeries to MSMQ 1-KB one-way (1 KB send to MSMQ queue)             370               370           1010
MQSeries to MSMQ 8-KB one-way (8 KB send to MSMQ queue)             250               345            495

Notes:

  1. All messages contained character data only.
  2. MSMQ was the factor limiting performance in this configuration.

Table 2. Transactional messages throughput using MSMQ-MQSeries Bridge (messages/second)

Test Description                                               SNA Server 4.0    HIS 2000
                                                               Service Pack 3    (1 MSMQ Queue Manager)
MSMQ to MQSeries 1-KB one-way (1 KB send only)                      130               130
MSMQ to MQSeries 1-KB round-trip (1 KB send/1 KB receive)            75                75
MQSeries to MSMQ 1-KB round-trip (1 KB send/1 KB receive)           197               197
MQSeries to MSMQ 8-KB round-trip (8 KB send/8 KB receive)            85               165

Notes:

  1. All messages contained character data only.

Test Methodology and Environment

The goal of testing was to simulate a typical corporate messaging network and examine the behavior of the MSMQ-MQSeries Bridge as it was subjected to an ever-increasing message load. Testing was done at the Microsoft Enterprise Interoperability Group Performance Laboratory at corporate headquarters in Redmond, Washington during June 2000.

The Test Methodology

The test methodology for this comparison centered on a simulation of interactive transaction processing to deliver messages to the Microsoft MSMQ and IBM MQSeries networks. Using the Microsoft MSMQ-MQSeries Bridge, MSMQ, and IBM MQSeries, applications can send messages to each other through the message queuing systems. The MSMQ-MQSeries Bridge achieves this by mapping the messages and data fields of the sending system and the values associated with those fields to the fields and values of the receiving environment. After the mapping and conversion, the MSMQ-MQSeries Bridge completes the process by routing the message across the combined MSMQ and MQSeries networks.

A Microsoft test tool was used to create simulated messages. The tool, designed to provide basic stress testing for the MSMQ-MQSeries Bridge, works by sending or receiving messages from a predetermined queue. The tool is configurable to create any number, size, and type of message that is required to test the MSMQ-MQSeries Bridge features. The test software was written in Microsoft® Visual Basic® using Microsoft ActiveX® and DCOM. Separate control center software launched the MSMQ and MQSeries clients on the client computers and controlled the type and number of messages sent by the client each second. By gradually increasing the number of messages sent by the client computers, it was possible to determine the maximum values for sustained throughput that could be maintained by the MSMQ-MQSeries Bridge.

Separate tests were conducted of the MSMQ-MQSeries Bridge included in Host Integration Server 2000 running on Microsoft® Windows® 2000 Advanced Server and the MSMQ-MQSeries Bridge included with SNA Server 4.0 SP3 running on Microsoft Windows NT® 4.0 Enterprise Server. In both series of tests, similar hardware was used by the computer running the MSMQ-MQSeries Bridge software.

The MSMQ or MQSeries client workstations each went through cycles of sending/receiving a specified rate of messages per second set by the control center application. This request frequency and the number of client workstations used generated the resulting total messages per second load on the MSMQ or MQSeries servers and the MSMQ-MQSeries Bridge. The message and transaction load was increased incrementally, and test data was collected after each increase in client load was added to the test bed configuration. Client loads were increased until the messaging Connector Queue backed up and could not flush itself within a reasonable amount of time.

Key Test Metrics

The test methodology was based on using a number of key metrics for determining performance. In order to measure some of these metrics, a separate computer running the Microsoft Network Monitor software (NetMon, a protocol analyzer) was placed on each network segment. An analysis of the NetMon protocol analysis logs was used to determine the recorded data transaction throughput and total LAN traffic loads.

The MQSeries libraries and messaging APIs do not provide a way to check arrival time in the MQSeries queue. Because dequeueing from an MQSeries queue is slower than enqueueing, there needs to be a way to see how quickly the queue is populated; a separate MQSeries ActiveX DLL downloaded from the IBM Web site was used for this purpose. Performance counters on CPU usage, disk writes, and other system measures were retrieved through the Perfmon application.

The key testing metrics include the following:

  • Transactions per second (TPS)—the number of transactions the MSMQ-MQSeries Bridge was able to send and receive each second. Each transaction consisted of one request message and one corresponding reply message. This test metric was determined using NetMon.
  • Messages per second (msg/sec)—the number of messages the MSMQ-MQSeries Bridge was able to send and receive each second. Each message could be a request message or a reply message. For bi-directional messages of the same size, the value for transactions per second represented 50% of the value for messages per second. This test metric was determined using NetMon.
  • CPU Load—the CPU utilization on the computer system running the MSMQ-MQSeries Bridge application. This test metric was determined using performance counters gathered by Perfmon.
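The relationship between the first two metrics is straightforward arithmetic. The following Python sketch (illustrative helper names, not part of the original Visual Basic test tools) shows how the two figures relate for round-trip traffic:

```python
def messages_per_second(total_messages, elapsed_seconds):
    """Raw message throughput, as reported by NetMon."""
    return total_messages / elapsed_seconds

def transactions_per_second(total_messages, elapsed_seconds):
    """For round-trip traffic of equal-sized messages, each
    transaction is one request plus one reply, so TPS is half
    the message rate."""
    return messages_per_second(total_messages, elapsed_seconds) / 2

# Example: 740,000 messages observed over a 1000-second run is
# 740 msg/sec, or 370 TPS for round-trip traffic.
print(messages_per_second(740_000, 1000))      # 740.0
print(transactions_per_second(740_000, 1000))  # 370.0
```

For one-way tests, msg/sec is reported directly and no halving applies.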

The Test Environment

The test environment consisted of two private network segments running fast Ethernet (100Base-T): one for MSMQ and one for MQSeries. The computer running the MSMQ-MQSeries Bridge contained two fast Ethernet network interface cards and connected these two segments. All testing was based on using TCP/IP as the network protocol for connecting to both the MSMQ and MQSeries segments.

The MSMQ network segment included the following:

  • Multiple MSMQ client workstations to create the MSMQ message load.
  • A server computer functioning as a domain controller running MSMQ server software.
  • A test computer running NetMon.

When testing the MSMQ-MQSeries Bridge included with SNA Server 4.0 on Windows NT 4.0, the domain controller on the MSMQ segment also functioned as the MSMQ Primary Enterprise Controller (PEC).

The MQSeries network segment included the following:

  • Multiple MQSeries client workstations to create the MQSeries message load.
  • Multiple server computers running MQSeries server software.
  • A test computer running NetMon.

When testing the MSMQ-MQSeries Bridge included with SNA Server 4.0 on Windows NT 4.0, multiple computers were functioning as MQSeries servers. On Windows 2000, a later version of MQSeries software was used, and only a single computer acted as the MQSeries server.

The following figure depicts the network topology used to test the MSMQ-MQSeries Bridge included in Host Integration Server 2000. Note that the computer operating as the network monitor running NetMon is not shown in this figure.

Figure 1. Network topology for testing MSMQ-MQSeries Bridge on Host Integration Server 2000

Hardware platforms for Host Integration Server 2000 tests

The following hardware was used for testing the MSMQ-MQSeries Bridge included with Host Integration Server 2000.

MSMQ-MQSeries Bridge Machine: A dual processor Pentium II 400 MHz computer with 512 MB of RAM was configured to run Microsoft Windows 2000 Advanced Server, IBM MQSeries Client v5.1, Microsoft MSMQ v2.0 with routing enabled, and the Microsoft MSMQ-MQSeries Bridge included with Host Integration Server 2000.

MQSeries Client Machines: Pentium II 350 MHz computers with 128 MB of RAM were configured to run Microsoft Windows 2000 Professional and MQSeries Client v5.1.

MQSeries Server Machine: An eight-processor Pentium III 550 MHz computer with 4 GB of RAM was configured to run Microsoft Windows 2000 DataCenter and IBM MQSeries Server v5.1.

MSMQ DC Machine: A dual processor Pentium II 400 MHz computer with 512 MB of RAM was configured to run Microsoft Windows 2000 Advanced Server as the domain controller and Microsoft MSMQ v2.0.

MSMQ Client Machines: Pentium II 350 MHz computers with 128 MB of RAM were configured to run Microsoft Windows 2000 Professional and MSMQ v2.0.

The following table describes in more detail the specific hardware used for testing the MSMQ-MQSeries Bridge included in Host Integration Server 2000.

Table 3. Specific hardware included in Host Integration Server 2000

Function                           Vendor    Operating System               Processor                     RAM     Network Adaptor
MSMQ Clients (4)                   Ciara     Windows 2000 Professional      PII 350 MHz                   128 MB  Intel Ether Express Pro
MQSeries Clients (4)               Ciara     Windows 2000 Professional      PII 350 MHz                   128 MB  Intel Ether Express Pro
Bridge                             Ciara     Windows 2000 Advanced Server   Dual PII 400 MHz              512 MB  Intel Ether Express Pro
Domain Controller and MSMQ Server  Ciara     Windows 2000 Advanced Server   Dual PII 400 MHz              512 MB  Intel Ether Express Pro
MQSeries Server                    Fujitsu   Windows 2000 Datacenter        Eight-processor PIII 550 MHz  4 GB    Intel Pro/100+ Server

The following figure depicts the network topology used to test the MSMQ-MQSeries Bridge included in SNA Server 4.0 Service Pack 3. Note that the computer operating as the network monitor running NetMon is not shown in this figure.


Figure 2. Network topology for testing MSMQ-MQSeries Bridge on SNA Server 4.0

Hardware platforms for SNA Server 4.0 tests

The following hardware was used for testing the MSMQ-MQSeries Bridge included with SNA Server 4.0 with Service Pack 3.

MSMQ-MQSeries Bridge Machine: A dual processor Pentium II 400 MHz computer with 512 MB of RAM was configured to run Microsoft Windows NT 4.0 Enterprise Server with Service Pack 5, IBM MQSeries Client v5.0, Microsoft MSMQ v1.0 Routing Server, and the Microsoft MSMQ-MQSeries Bridge included with SNA Server 4.0 SP3.

MQSeries Client Machines: Pentium II 350 MHz computers with 128 MB of RAM were configured to run Microsoft Windows NT 4.0 Workstation and MQSeries Client v5.0.

MQSeries Server Machines: Five dual processor Pentium II 400 MHz computers with 512 MB of RAM were configured to run Microsoft Windows NT 4.0 Server with Service Pack 5 and IBM MQSeries Server v5.0. One additional quad processor Pentium Pro 200 MHz computer with 512 MB RAM was configured to run Microsoft Windows NT 4.0 Server with Service Pack 5 and IBM MQSeries Server v5.0.

MSMQ DC Machine: A dual processor Pentium II 400 MHz computer with 512 MB of RAM was configured to run Microsoft Windows NT 4.0 Server with Service Pack 5 as the domain controller, Microsoft® SQL Server™ 7.0, and Microsoft MSMQ v1.0 as the MSMQ Primary Enterprise Controller (PEC).

MSMQ Client Machines: Pentium II 350 MHz computers with 128 MB of RAM were configured to run Windows NT 4.0 Workstation and MSMQ v1.0.

The following table describes in more detail the specific hardware used for testing the MSMQ-MQSeries Bridge included in SNA Server 4.0 with Service Pack 3.

Table 4. Specific hardware included in SNA Server 4.0 with SP3

Function                           Vendor   Model          Operating System                            Processor                 RAM     Network Adaptor
MSMQ Clients (2)                   Dell     Optiplex GX1   Windows NT Workstation with SP5             PII 350 MHz               128 MB  3Com Fast Etherlink XL
MSMQ Clients (4)                   Ciara                   Windows NT Workstation with SP5             PII 350 MHz               128 MB  Intel Ether Express Pro
MQSeries Clients (6)               Ciara                   Windows NT Workstation with SP5             PII 350 MHz               128 MB  Intel Ether Express Pro
Bridge                             Dell     Precision 610  Windows NT 4.0 Enterprise Server with SP5   Dual PII 400 MHz          512 MB  Intel Ether Express Pro
Domain Controller and MSMQ Server  Ciara                   Windows NT 4.0 Enterprise Server with SP5   Dual PII 400 MHz          512 MB  Intel Ether Express Pro
MQSeries Servers (5)               Ciara                   Windows NT 4.0 Server with SP5              Dual PII 400 MHz          512 MB  Intel Ether Express Pro
MQSeries Server (1)                Amdahl   Envista        Windows NT 4.0 Server with SP5              Quad Pentium Pro 200 MHz  512 MB  Intel Ether Express Pro

Performance Settings

The MSMQ-MQSeries Bridge has several definable attributes that can affect performance and message throughput. These settings can be viewed and changed in the MSMQ-MQSeries Bridge Explorer by selecting a Bridge instance, right-clicking it, and choosing Properties.

The MSMQ-MQSeries Bridge software creates and uses four message pipes for message transport as follows:

  • MSMQ to MQSeries messages sent with normal service (transactional)
  • MSMQ to MQSeries messages sent with high service (non-transactional)
  • MQSeries to MSMQ messages sent with normal service (transactional)
  • MQSeries to MSMQ messages sent with high service (non-transactional)

An important setting on the Advanced tab is the number of threads the MSMQ-MQSeries Bridge software uses to service each of these message pipes. Ideally, this value should match the number of MQSeries and MSMQ Queue Managers the Bridge will service, subject to the number of CPUs available. Limited testing with varying numbers of threads indicated no noticeable difference in performance when more threads are allocated than there are MQSeries Queue Managers being serviced. However, if the thread count is lower than the number of MQSeries Queue Managers to be serviced, Bridge performance drops, with the size of the drop depending on the number of messages sent and the number of threads allocated. This is because the jobs servicing the Queue Managers compete for access to the available threads.

Other settings that have an impact on the MSMQ-MQSeries Bridge are the batch attributes on each individual message pipe. A batch is a group of messages that the Bridge processes at the same time. These settings can be viewed and changed in the MSMQ-MQSeries Bridge Explorer by selecting one of a Bridge instance's four message pipes, right-clicking it, and choosing Properties. The Batch tab exposes three properties that affect the number and size of the batches used for each message pipe.

Table 5. Batch properties

Batch Property           Comments
Max. Number of Messages  The maximum number of messages in a batch (default: 10).
Max. Accumulated Size    The maximum size of a batch in bytes (default: 1,024 bytes).
Max. Accumulated Time    The maximum time in milliseconds during which messages are batched (default: 512).

Transmission begins as soon as there are messages to be sent. When any of the above limits is reached, the message pipe checks that the batch was fully received on the destination side.

To improve Bridge performance, these batch properties were set as follows during testing:

  • Max. Number of Messages: 1,000
  • Max. Accumulated Size: 300,000
  • Max. Accumulated Time: 256
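The batching behavior described above can be sketched as a simple accumulate-and-flush loop. The Python below is an illustrative model only (the class and method names are invented, and the real Bridge transport is replaced by simply returning the batch); it shows how any one of the three limits triggers a flush:

```python
import time

class BatchPipe:
    """Minimal sketch of a message pipe's three batch limits.
    Not the Bridge's actual API; names are illustrative."""

    def __init__(self, max_messages=1000, max_bytes=300_000, max_ms=256):
        self.max_messages = max_messages
        self.max_bytes = max_bytes
        self.max_ms = max_ms
        self._reset()

    def _reset(self):
        self.batch = []
        self.size = 0
        self.started = None

    def add(self, message: bytes):
        """Accumulate a message; return the batch when a limit is hit."""
        if self.started is None:
            self.started = time.monotonic()
        self.batch.append(message)
        self.size += len(message)
        if self._limit_reached():
            return self.flush()
        return None

    def _limit_reached(self):
        elapsed_ms = (time.monotonic() - self.started) * 1000
        return (len(self.batch) >= self.max_messages
                or self.size >= self.max_bytes
                or elapsed_ms >= self.max_ms)

    def flush(self):
        """Hand the accumulated batch to the transport and reset."""
        batch = self.batch
        self._reset()
        return batch
```

With the test settings above, a pipe flushes after 1,000 messages, 300,000 accumulated bytes, or 256 milliseconds, whichever comes first.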

Test Results

Test Results for Round-Trip 1-KB Messages

Non-transactional

A single MQSeries Queue Manager through a single MSMQ-MQSeries Bridge using the SNA Server 4.0 SP3 version provided a maximum sustained rate of 370 TPS for a total of 740 msg/sec through the MSMQ-MQSeries Bridge. When the same test was run using the MSMQ-MQSeries Bridge in Host Integration Server 2000, the sustained rate stayed the same at 370 transactions per second.

By increasing the number of MQSeries Queue Managers that the MSMQ-MQSeries Bridge serviced to three (or tripling the number of active message pipes), the rate increased to 525 TPS for a total of 1050 msg/sec. The limiting factor was the number of outgoing messages per second that MSMQ could sustain to the connector queues of MSMQ routing server (350 msg/sec).
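This ceiling follows directly from the per-pipe MSMQ limit. A quick back-of-envelope check in Python (the constant names are illustrative):

```python
# Per-pipe limit observed during testing: MSMQ sustained about
# 350 msg/sec into each routing-server connector queue.
MSMQ_CONNECTOR_LIMIT = 350
queue_managers = 3

max_msg_per_sec = queue_managers * MSMQ_CONNECTOR_LIMIT
max_tps = max_msg_per_sec // 2   # round-trip: two messages per transaction

print(max_msg_per_sec, max_tps)  # 1050 525
```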

Transactional

The maximum sustained rate for a single MQSeries Queue Manager through a single MSMQ-MQSeries Bridge was 65 TPS, for a total of 130 msg/sec through the MSMQ-MQSeries Bridge, in both SNA Server 4.0 SP3 and Host Integration Server 2000. The MSMQ-MQSeries Bridge CPU load was less than 10 percent, but the disk queue length was 1.3. A disk queue length of 2.0 represents the maximum disk processing possible, so the disk was being heavily used but still had 35 percent of its processing capacity idle.

Test Results for Round-Trip MSMQ 1-KB MQSeries 8-KB Messages

Non-transactional

While testing on the MSMQ-MQSeries Bridge using a single MSMQ message connector pipe and one MQSeries message connector pipe, the maximum sustained rate was 175 TPS for a total of 350 msg/sec using the MSMQ-MQSeries Bridge in SNA Server 4.0 SP3. The maximum sustained rate increased to 225 TPS for a total of 450 msg/sec when using the MSMQ-MQSeries Bridge in Host Integration Server 2000.

Adding two additional MQSeries Queue Managers to the Host Integration Server 2000 version of the MSMQ-MQSeries Bridge (a total of 3 Queue Managers) increased the maximum sustained rate to 300 TPS, or 600 msg/sec.

Test Results for MQSeries to MSMQ One-Way 1-KB Messages

Non-Transactional

On the MSMQ-MQSeries Bridge in SNA Server 4.0 SP3, the maximum sustained rate was 370 msg/sec. The messages were sent from one Queue Manager to one independent client. The average CPU usage for the MSMQ-MQSeries Bridge was 19 percent, and disk input/output and memory usage were relatively minor. Using the MSMQ-MQSeries Bridge in Host Integration Server 2000 had no impact on this maximum sustained rate; processor, disk, and memory usage were virtually the same as on SNA Server 4.0 SP3.

When two additional MQSeries Queue Managers were added to the Host Integration Server 2000 MSMQ-MQSeries Bridge (total of 3 Queue Managers), the maximum sustained rate handled was 1010 msg/sec. CPU load increased to 57 percent, but the memory and disk usage remained virtually the same.

Transactional

Transactional messages guarantee delivery: each message is delivered once and only once. As expected, changing to transactional messages decreases the total throughput. Using the MSMQ-MQSeries Bridge in SNA Server 4.0 SP3, the maximum sustained rate was 197 msg/sec. The CPU load was similar at 20 percent, but disk activity increased to a queue length of 0.95. A disk queue length of 2.00 indicates that a particular disk queue is saturated.

Testing the MSMQ-MQSeries Bridge in Host Integration Server 2000 did not make a difference in the throughput. The maximum sustained rate handled was 197 msg/sec. CPU load, however, dropped to 15 percent. Disk activity was unchanged with a queue length at 0.95.

Test Results for MQSeries to MSMQ One-Way 8-KB Messages

Non-transactional

Increasing the message size had the expected effect. The maximum sustained rate that the MSMQ-MQSeries Bridge in SNA Server 4.0 SP3 could handle was 250 msg/sec. The MSMQ-MQSeries Bridge CPU usage stayed consistent, increasing only to 21 percent, and disk activity increased to a queue length of 0.25. The maximum sustained rate increased to 345 msg/sec when using the MSMQ-MQSeries Bridge in Host Integration Server 2000; disk activity stayed at a queue length of 0.25.

Adding two additional MQSeries Queue Managers to the MSMQ-MQSeries Bridge in Host Integration Server 2000 (a total of 3 Queue Managers) increased the sustained throughput to 495 msg/sec. The MSMQ-MQSeries Bridge CPU load increased to almost 70 percent and disk queue length was 0.87.

Transactional

Changing to transactional messages and increasing the message size slows the throughput as would be expected. For the MSMQ-MQSeries Bridge in SNA Server 4.0 SP3, the maximum sustained throughput was 85 msg/sec. The CPU load, however, decreased to about 10 percent and the disk queue length was at 0.56. Using the MSMQ-MQSeries Bridge in the Host Integration Server 2000 increased the maximum sustained throughput to 165 msg/sec. The performance counters in both MSMQ-MQSeries Bridge computers were the same with 10 percent CPU load and 0.54 disk queue length.

Test Results for MSMQ to MQSeries One-Way 1-KB Messages

Non-transactional

The MSMQ-MQSeries Bridge in SNA Server 4.0 SP3 was able to sustain a rate of 425 msg/sec with one message pipe from MSMQ to MQSeries without any buffering. The messages were sent from one independent client to one Queue Manager. The MSMQ-MQSeries Bridge CPU usage averaged 34 percent over the 1000-second test, and disk input/output was low (queue length of 0.03). Using the MSMQ-MQSeries Bridge in Host Integration Server 2000, the throughput increased to 450 msg/sec; the CPU load again averaged 34 percent, with a disk queue length of 0.03.

Boosting the number of MQSeries Queue Managers from one to three would be expected to increase the maximum sustained throughput, and tests on the MSMQ-MQSeries Bridge in Host Integration Server 2000 confirmed this to a certain extent. Using three MQSeries Queue Managers, the MSMQ-MQSeries Bridge sustained a rate of 975 msg/sec with a CPU load of 84 percent and a disk queue length of 0.086. The throughput could conceivably be higher, however, since the limiting factor was the rate of sustained outgoing messages per second from each MSMQ client to the MSMQ routing server connector queue (about 325 msg/sec per client).

Transactional

Testing with transactional messages, the sustained rate through the MSMQ-MQSeries Bridge in SNA Server 4.0 SP3 averaged 75 msg/sec. At this rate the CPU load was only 8 percent, but the disk queue length averaged 1.25. Using the MSMQ-MQSeries Bridge in Host Integration Server 2000, the maximum sustained rate remained at 75 msg/sec; the CPU load dropped to 4 percent and the disk queue length decreased to 1.01. Messages began to back up on the MSMQ client machine before they backed up on the MSMQ-MQSeries Bridge, which is what capped the test at 75 msg/sec.

Observations

MSMQ transactional messages back up on the client side after 75 to 100 messages per second. This causes the transactional message tests to stop and show low messages per second (below 10 msg/sec).

MSMQ non-transactional messages back up on the client side above 425 messages per second when sending to a single MQSeries Queue Manager. When sending to multiple MQSeries queue managers, the average rate drops to about 350 messages a second.

Roughly every 109 milliseconds, the MSMQ-MQSeries Bridge sends a 64-byte message to each of the active Connector Queues on MQSeries to determine whether there are any messages. This polling results in roughly nine 64-byte messages per second to each active MQSeries message pipe. The polling interval is adjustable through a registry setting in Host Integration Server 2000.
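A back-of-envelope calculation shows the scale of this polling traffic. The Python below simply applies the figures from the paragraph above (the linear scaling across pipes is an assumption):

```python
POLL_INTERVAL_MS = 109   # polling period reported in testing
POLL_SIZE_BYTES = 64     # size of each poll message

def polling_overhead(active_pipes):
    """Approximate polls per second and byte rate generated by the
    Bridge for a given number of active MQSeries message pipes."""
    polls_per_sec = active_pipes * (1000 / POLL_INTERVAL_MS)
    bytes_per_sec = polls_per_sec * POLL_SIZE_BYTES
    return polls_per_sec, bytes_per_sec

# One pipe is roughly 9.2 polls/sec; three pipes roughly 27.5.
print(polling_overhead(1))
print(polling_overhead(3))
```

Even with several pipes, polling traffic is well under 2 KB/sec and is negligible next to the message payloads themselves.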

MQSeries takes 150-200 milliseconds to close a queue and 15-16 milliseconds to open one. Because these are expensive operations, leaving a queue open for long periods is more advantageous for performance.

With non-transactional messages, if the queue depth on the client or any message pipe to a connector queue grows above 32,000 1-KB messages, the MSMQ service (mqsvc.exe) starts to take processor time away from all other services. The processor time is not spent on sending the messages, as the outgoing messages per second rate decreases to 100-200 messages per second.

If a variant full of strings is placed in the body of an MSMQ message, the body is converted to Unicode, doubling the size of the message. However, the MSMQ-MQSeries Bridge and MQSeries still recognize the message as ANSI format. For example, 1 KB of text becomes 2 KB when placed into an MSMQ message body, yet both the MSMQ-MQSeries Bridge and MQSeries report the body size as 1 KB.
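The doubling is easy to reproduce in any Unicode-aware language. This Python sketch uses UTF-16LE as a stand-in for the Unicode encoding Windows uses internally, with cp1252 standing in for the ANSI code page:

```python
# A 1-KB ANSI string doubles to 2 KB when stored as Unicode
# (UTF-16 uses 2 bytes per character for this text).
text = "x" * 1024                        # 1 KB of single-byte characters

ansi_body = text.encode("cp1252")        # ANSI: 1 byte per character
unicode_body = text.encode("utf-16-le")  # Unicode: 2 bytes per character

print(len(ansi_body))     # 1024
print(len(unicode_body))  # 2048
```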

The MSMQ Visual Basic plug-in does not allow you to specify the type of message body created; the default is string, which is then converted to Unicode. The MSMQ C/C++ API does allow the message type to be specified, so those interfaces should support increased performance, because the MSMQ message body for string data can be reduced by 50 percent. The Visual Basic plug-in for the MQSeries client does allow the format of the message body to be specified.

If a message is sent from MQSeries to MSMQ, the only prerequisite is that the remote Queue Manager is defined in the MQSeries Queue Manager. If a message is sent to a non-existent queue, the overall performance of the MSMQ-MQSeries Bridge decreases and the messages end up in the MSMQ-MQSeries Bridge Dead Letter queue; MQSeries is not notified that the message could not be delivered. MSMQ, on the other hand, will not even allow you to open a queue that is not defined in the MSMQ GUI. If the queue is not defined on the MQSeries Queue Manager, the message is delivered to the MSMQ-MQSeries Bridge Dead Letter queue, and transactional messages stay in the outgoing queue until they expire. This also degrades throughput through the MSMQ-MQSeries Bridge: it can drop to 40 messages per second when there are only a few messages in the Dead Letter queue, and to less than 1 message per second when there are tens of thousands of messages in the outgoing queue.

If a quota limit is not set for the maximum number of messages resident in the MSMQ service and messages begin to pile up in the queue, the performance of the MSMQ-MQSeries Bridge begins to degrade. Memory is associated with each message in the MSMQ service, so the available bytes of memory on the MSMQ-MQSeries Bridge computer are reduced. If enough messages are put into the queue, the machine becomes sluggish and unresponsive.

Final Thoughts

Within the limitations of the hardware used for testing and of the messaging software used to feed messages into it, the MSMQ-MQSeries Bridge appears, as a whole, to be virtually transparent in sending messages between MSMQ and MQSeries. There are optional fields (Reply To Queue Manager, for example) that, when populated, increase the amount of time required to send messages through the MSMQ-MQSeries Bridge. As new features (encryption, for example) are added to the MSMQ-MQSeries Bridge, the time to send each message will increase, causing the number of messages per second to decrease and the MSMQ-MQSeries Bridge process to use more CPU. Minimizing the number of MSMQ and MQSeries protocol operations the MSMQ-MQSeries Bridge must perform will be instrumental in keeping its overhead to a minimum.

In both MSMQ and MQSeries, opening and closing queues is expensive on the network. However, leaving a queue open for very long periods can have detrimental consequences for the machine. The performance testing software opened the queue, sent all messages in the specified time (5,000 messages in 10 seconds, for example), and then closed the queue. Previous testing had determined that opening and closing the queue for each message decreased the TPS by a minimum of 20 percent.

In comparing the MSMQ-MQSeries Bridge on the Windows NT 4.0 and Windows 2000 platforms, there is one noticeable difference: MSMQ performance into the Connector queue appears to have degraded. With MSMQ v1.0 and MQSeries v5.0, the limiting factor was MQSeries, which bottlenecked at just over 500 messages per second. With MSMQ v2.0 and MQSeries v5.1, the maximum sustained rate that can be pumped into the Connector Queue from MSMQ is about 425 messages per second. An internal test of MQSeries put enqueueing at over 600 messages per second and dequeueing at over 575 messages per second on an eight-processor machine, with very low disk activity, processor activity, and memory usage (less than 10 percent processor time, 0.2 disk queue length, and 5 percent memory usage).

On the other hand, the changes made to MSMQ and MQSeries have increased the performance of the queues themselves. On the Windows NT 4.0 platform, one-way MSMQ to MQSeries traffic measured 500 messages per second sustained, and MQSeries to MSMQ measured 565 messages per second. However, running both pipes at once brought the total down to 600 messages per second (300 messages from either side). So, on the surface, single-direction throughput has decreased while throughput across multiple pipes has increased. The MSMQ-MQSeries Bridge did not cause this single-direction slowdown; internal testing on MSMQ determined that a sustained rate of 400 messages per second is expected.
