
Scalability Case Study Using the SOAP Adapter in BizTalk Server 2006

Microsoft Corporation

October 2006

Summary: This document demonstrates the potential increase in efficiency through scaled-up server and adapter combinations. Tests were performed on various combinations of BizTalk servers and SQL servers. Test criteria, procedures, server and adapter configurations, and results are presented. The data can help you determine which configuration best suits your existing or anticipated business requirements.

This case study shows how to scale a simple SOAP request/response scenario and provides guidance on how to optimize the scenario for throughput. The system was built up incrementally, starting with one computer running Microsoft® BizTalk® Server 2006 and one computer running Microsoft SQL Server™ 2000. We then increased throughput by scaling both computers up (that is, by adding processors) and by scaling both out (that is, by adding servers) until we reached the throughput limits of the system on the hardware used for the study. Along the way, we provide guidance on how to scale the SOAP adapter while maintaining fault tolerance by using Network Load Balancing (NLB).

The primary audience for this case study includes anyone interested in how far the SOAP adapter can be scaled on the x86 test hardware, that is, the maximum sustainable throughput achieved on a variety of optimized, scaled-up, and scaled-out topologies. This includes:

  • System administrators and developers who are interested in estimating the size of a system on which a new SOAP request/response scenario is to be developed, tested, and deployed.
  • System administrators and developers who are interested in understanding how best to increase the throughput of an existing system on which a SOAP request/response scenario is running.

The request/response scenario is the most commonly used SOAP scenario. We therefore used it in this case study to observe the scaling patterns of the SOAP adapter and the core BizTalk Server engine. In this scenario, a user submits a SOAP message to a Web service and synchronously receives back a SOAP response.

In BizTalk Server, a simple orchestration with receive and send shapes (request/response port) is exposed as a Web service. We chose a simple orchestration because it eliminates any other latency introduced by complex logic in the orchestration, allowing us to focus on the behavior of the adapter and the engine.

Note
Since a real-world scenario will be more complex than this, the throughput results from this study will almost certainly be higher than is achievable when you add real business logic to the scenario.

The following figure shows the simple orchestration that we used in the request/response scenario.

[Figure: the simple orchestration used in the request/response scenario]

Hardware Specifications

BizTalk Server Hardware Specifications

The BizTalk servers had two processors running at 3.06 GHz with 512 KB L2 cache and 1 MB L3 cache. All of the computers were equipped with 2 GB of RAM, and hyperthreading was enabled.

SQL Server Hardware Specifications

Two kinds of SQL servers were used:

  • In scaled-up SQL Server topologies, an 8-processor computer running at 3 GHz with 512 KB L2 cache and 2 MB L3 cache was used as the master MessageBox database. This computer had two separate SAN drives to accommodate the data (.mdf) files and the log (.ldf) files of the MessageBox database.
  • In a simple topology where there was only one MessageBox database, a 4-processor computer running at 2 GHz with 512 KB L2 cache and 2 MB L3 cache was used as the master MessageBox database. This computer also had two separate SAN drives for the .mdf and .ldf files of the MessageBox database. Computers of this type were also used as secondary MessageBox databases in the scaled-out SQL Server topologies.
Note
Our testing has shown that the L2 and L3 cache sizes can have a significant impact on the performance of both BizTalk Server and SQL Server. We strongly recommend that you maximize both the L2 and L3 cache sizes when choosing server hardware, to maximize overall throughput and minimize latency.

Client Hardware Specifications

We used the client computers to generate the load. They had similar hardware specifications to the BizTalk servers.

BizTalk Server Configuration

For the request/response scenario, we configured BizTalk Server as follows:

  • One request/response receive port was configured, with one SOAP receive location.
  • The XMLReceive pipeline was used as the receive pipeline and the PassThruTransmit pipeline was used as the send pipeline on the request/response SOAP port. Both are standard pipelines included in BizTalk Server 2006.
  • The receive location points to the Web service that exposes the orchestration. Both the Web service and the receive location were generated automatically by the Web Services Publishing Wizard, which is included in BizTalk Server 2006. For information about this wizard, see "Using Web Services" in BizTalk Server 2006 Help at http://go.microsoft.com/fwlink/?LinkId=74984.
  • Because this is a request/response scenario, both receive and send functionality ran in the isolated host; the SOAP adapter is hosted inside IIS. For more information about the SOAP adapter, see "SOAP Adapter" in BizTalk Server Help at http://go.microsoft.com/fwlink/?LinkId=74985.
  • A separate host was created for running the orchestration, and tracking was disabled.
  • Default throttling settings were used in all the runs.
  • The number of computers in the topology and the host instance configuration changed during the scaling process. See the detailed topology diagrams with host instance configurations in "Test Topologies, Results, and Analysis," later in this white paper.

When we used more than one BizTalk server for receiving SOAP requests, we used NLB to balance the load between the BizTalk Server receivers. When NLB was used, the client computer contacted the virtual IP and, through NLB, the active computers in the cluster received the load. This also helped with fault tolerance because the system continued to receive messages even if one of the servers was down. The load was automatically balanced across computers by NLB.

  • MST: All the tests were run to achieve maximum sustainable throughput (MST). For information about MST for the BizTalk Server engine, see "Measuring Maximum Sustainable Engine Throughput" in the BizTalk Server 2006 Help at http://go.microsoft.com/fwlink/?LinkId=74986. For information about MST for the BizTalk Tracking database, see "Measuring Maximum Sustainable Tracking Throughput" in the Help at http://go.microsoft.com/fwlink/?LinkId=74987.
  • Time: All the tests were run for one hour as this gives the system time to ramp up and reach a stable state in terms of sustainability. The system took approximately 5 to 10 minutes to reach a consistent and stable state (MST conditions) in all our tests.
  • Tools: We used the BizTalk Server 2004 Load Generation Tool to generate the SOAP load (the tool works on BizTalk Server 2006, too). You can download LoadGen from http://go.microsoft.com/fwlink/?LinkId=59841. LoadGen used 100 threads to mimic 100 simultaneous user requests and always connected to the virtual IP. Because we used NLB, the load was automatically distributed among the receivers.
  • Message Size: 2 KB.
  • LoadGen configuration file: The following is the LoadGen configuration file that we used:

<LoadGenFramework>
  <CommonSection>
    <LoadGenVersion>2</LoadGenVersion>
    <OptimizeLimitFileSize>204800</OptimizeLimitFileSize>
    <NumThreadsPerSection>50</NumThreadsPerSection>
    <SleepInterval>200</SleepInterval>
    <LotSizePerInterval>20</LotSizePerInterval>
    <RetryInterval>10000</RetryInterval>
    <StopMode Mode="Files">
      <NumFiles>1000</NumFiles>
      <TotalTime>3600</TotalTime>
    </StopMode>
    <Transport Name="SOAP">
      <Assembly>SOAPTransport.dll/SOAPTransport.SOAPTransport</Assembly>
    </Transport>
  </CommonSection>
  <Section Name="TwoWayLatencySoapSection">
    <SrcFilePath>C:\LoadGen\ConfigFiles\ConsoleConfigFiles\FileToFileLG.xml</SrcFilePath>
    <DstLocation>
      <Parameters>
        <URL>http://VirtualIP/TwoWayLatencyRxSOAP/TwoWayLatencyRxWS.asmx</URL>
        <SOAPHeader>SOAPAction: "http://tempuri.org/TwoWayLatencyRxWS/TwoWayLatencyRxWM"</SOAPHeader>
        <SOAPPrefixEnv>&lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"&gt;&lt;soap:Body&gt;&lt;TwoWayLatencyRxWM xmlns="http://tempuri.org/"&gt;</SOAPPrefixEnv>
        <SOAPPostfixEnv>&lt;/TwoWayLatencyRxWM&gt;&lt;/soap:Body&gt;&lt;/soap:Envelope&gt;</SOAPPostfixEnv>
        <IsUseIntegratedAuth>False</IsUseIntegratedAuth>
        <LatencyFileName></LatencyFileName>
        <ResponseMsgPath></ResponseMsgPath>
        <!-- US-ASCII Or UTF-8 Or UTF-16BE Or UTF-16LE Or UNICODE -->
        <DstEncoding></DstEncoding>
      </Parameters>
    </DstLocation>
  </Section>
</LoadGenFramework>

The IP in the configuration file was replaced with the virtual IP address of the NLB cluster.
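
Before starting a load run, it can be useful to verify that the virtual IP and the published Web service answer a single SOAP request. The following is a minimal sketch (not part of the original test harness), written in Python for illustration, that posts one request using the same URL, SOAPAction, and envelope prefix/postfix as the LoadGen configuration above. The placeholder payload is an assumption; in practice it should be replaced with the contents of the test message referenced by SrcFilePath so that the request matches what the orchestration expects.

import urllib.request

# Endpoint and SOAPAction taken from the LoadGen configuration above;
# replace VirtualIP with the virtual IP address of the NLB cluster.
URL = "http://VirtualIP/TwoWayLatencyRxSOAP/TwoWayLatencyRxWS.asmx"
SOAP_ACTION = '"http://tempuri.org/TwoWayLatencyRxWS/TwoWayLatencyRxWM"'

# Placeholder body: replace with the real test message content (SrcFilePath).
payload = "test message payload"

# Same envelope prefix/postfix that LoadGen wraps around the message.
envelope = (
    '<?xml version="1.0" encoding="utf-8"?>'
    '<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" '
    'xmlns:xsd="http://www.w3.org/2001/XMLSchema" '
    'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    '<soap:Body><TwoWayLatencyRxWM xmlns="http://tempuri.org/">'
    + payload +
    '</TwoWayLatencyRxWM></soap:Body></soap:Envelope>'
)

request = urllib.request.Request(
    URL,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": SOAP_ACTION},
)

# The request/response port returns the SOAP response synchronously.
with urllib.request.urlopen(request, timeout=60) as response:
    print(response.status, "-", len(response.read()), "bytes in the response body")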

Scaling Factor

The goal of this case study is to observe scaling patterns, so the appropriate metric is the scaling factor in terms of throughput. The scaling factor is the ratio of the throughput achieved on a scaled system to the throughput of the same scenario on the base system.

For example, if the throughput of a system running with two BizTalk servers achieves 1.6 times the throughput of the same scenario running on one BizTalk server, then the scaling factor is 1.6.
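
As a quick illustration, the scaling factor is simply a ratio of measured throughputs. The following short Python sketch (not part of the study itself) computes it for the examples used in this paper:

# Scaling factor = throughput of the scaled topology / throughput of the base topology.
def scaling_factor(scaled_throughput, base_throughput):
    return scaled_throughput / base_throughput

# Example from the text: two BizTalk servers achieving 1.6 times the base throughput.
print(round(scaling_factor(1.6 * 81, 81), 2))   # 1.6

# Example from the summary table later in this paper: 124 msg/sec versus the 81 msg/sec base run.
print(round(scaling_factor(124, 81), 2))        # 1.53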

Latency and Throughput

Latency and throughput were also measured as part of this case study. To obtain throughput and latency metrics:

  • The overall throughput of the system equals the sum of the "Documents processed/sec" performance counter of all isolated host instances under the "BizTalk:Messaging" object across all of the receiving computers.
  • Latency equals the "Request-Response Latency (sec)" performance counter of the isolated host instances under the "BizTalk:Messaging Latency" object in Perfmon.

All of the performance counters related to BizTalk Server and SQL Server that we used to analyze performance were collected during the runs and saved to log files by using Perfmon. For a list of important BizTalk Server performance counters, see "Performance Counters" in BizTalk Server Help at http://go.microsoft.com/fwlink/?LinkId=74988. For the rest of this document, performance counters are represented in the following format:

\Performance Object(Performance Instance)\Performance counter
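
The following is a minimal sketch (in Python, not part of the original test harness) of how the overall throughput could be aggregated from a Perfmon counter log that has been exported to CSV, for example with the relog tool. It assumes the log contains the \BizTalk:Messaging(isolated host instance)\Documents processed/sec counter for every receiving computer; the log file name and the exact counter column names are assumptions.

import csv

def average_total_throughput(csv_path):
    """Average of the summed Documents processed/sec counters across all receivers."""
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        # Columns whose counter path contains the isolated-host throughput counter.
        cols = [i for i, name in enumerate(header)
                if "biztalk:messaging" in name.lower()
                and "documents processed/sec" in name.lower()]
        samples = []
        for row in reader:
            values = []
            for i in cols:
                try:
                    values.append(float(row[i]))
                except (ValueError, IndexError):
                    pass  # Perfmon writes blank cells for missing samples.
            if values:
                samples.append(sum(values))  # total docs/sec across all receivers at this sample
    return sum(samples) / len(samples) if samples else 0.0

# Example: a binary counter log converted with "relog perflog.blg -f CSV -o perflog.csv".
print(average_total_throughput("perflog.csv"))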

Testing Tips and Tricks

In a scenario with multiple MessageBox databases, DTC (Distributed Transaction Coordinator) overhead is incurred when the master and secondary MessageBox databases are on different computers. By default, the DTC log is located at %systemdrive%\windows\system32\msdtc. To ensure that DTC logging does not bottleneck your system when you run these tests, verify that your system drive has good disk I/O. In this case study, the system drive turned out to be weak: with DTC logging on it, the \PhysicalDisk(_Total)\% Idle Time counter for that drive was around 11 percent. After we moved the DTC log file to the SAN drive, we observed better throughput, and the % Idle Time of the SAN drive stayed around 80 percent.
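
As a quick spot check of the disk idle time discussed above, the built-in typeperf tool can sample the physical disk counters (the counter object appears as PhysicalDisk in Perfmon). The following Python wrapper is only a sketch; the _Total instance and the sampling parameters are illustrative, and you can substitute the specific drive instance that holds the DTC log.

import subprocess

# % Idle Time of all physical disks; replace _Total with a specific instance
# (for example, "1 G:") to isolate the drive that holds the DTC log.
COUNTER = r"\PhysicalDisk(_Total)\% Idle Time"

# Take five samples, one second apart, and print typeperf's CSV output.
result = subprocess.run(
    ["typeperf", COUNTER, "-sc", "5", "-si", "1"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)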

To change the DTC log file path
  1. In Administrative Tools, expand Component Services, and then expand Computers.

  2. Right-click My Computer, and then click Properties.

  3. Click the MSDTC Tab.

  4. Type the path where you want the new log to be created. (For example, G:\Logs\DTCLog.)

  5. Click Reset log, and you will be prompted for service restart. Click OK for the service to restart.

The following sections describe the tests, and the results and analyses for each test.

Scaled-Out BizTalk Server Tier

First, we scaled out the BizTalk Server tier by adding computers until the MessageBox database became the bottleneck. Then we scaled up the MessageBox database, and finally we scaled out the database tier by adding multiple secondary MessageBox databases until the master MessageBox database became the bottleneck.

Test 1: One BizTalk Server and one SQL Server that Hosted the MessageBox Database

Because this test was run for a request/response scenario, the BizTalk isolated host received and sent the SOAP messages, and the orchestration host instance executed the simple orchestration. Both host instances ran on the single BizTalk server.

As discussed in the previous section, the orchestration was published as a Web service. The client computer, which generated the load, sent the load only to the virtual IP address, and NLB ensured that the load was distributed equally among the receiving servers.

The following figure depicts a topology consisting of one BizTalk server and one SQL server.

[Figure: topology with one BizTalk server and one SQL server]

Test Results

The following performance counters and data were collected for both BizTalk Server and SQL Server:

  • Percentage of CPU utilization (performance counter: \Processor(_Total)\% Processor Time)
  • Documents Received/sec (performance counter: \BizTalk:Messaging(BizTalkServerIsolatedHostInstance)\Documents received/sec)
  • Documents Processed/sec (performance counter: \BizTalk:Messaging(BizTalkServerIsolatedHostInstance)\Documents processed/sec)
  • Orchestrations completed/sec (performance counter: \XLANG/s Orchestrations(OrchestrationHostInstance)\Orchestrations completed/sec)
  • Latency (seconds). This is the per-message latency, measured in seconds. The performance counter is \BizTalk:Messaging Latency(BizTalkServerIsolatedHostInstance)\Request-Response Latency (sec)
  • SQL Lock wait time. This is measured in milliseconds; the performance counter is \SQLServer:Locks(_Total)\Lock Wait Time (ms)
  • SQL Lock timeouts/sec. The performance counter is \SQLServer:Locks(_Total)\Lock Timeouts/sec

The following performance data was collected during the one hour test run.

BizTalk Server Data:

Computer            %CPU Utilization    Documents Received/sec     Documents Processed/sec     Orchestrations Completed/sec    Latency (sec)
BizTalk Server 1    88                  81                         81                          81                              0.5

SQL Server Data:

Computer                      %CPU Utilization    SQL Lock Wait Time (ms)    SQL Lock Timeouts/sec    %Idle Time (C:)    %Idle Time (S:)
SQL Server (MessageBox)       29                  100                        170                      92                 98

Test Analysis
  • The CPU on the computer running BizTalk Server was the bottleneck in this case.
  • The SQL server did not have any apparent bottleneck in terms of CPU, lock contention, or disk. The %Idle Time values in the preceding table are the disk idle times.

To scale the system further, we added another BizTalk server. The following section discusses the throughput and latency patterns with two BizTalk servers and one SQL server.

Test 2: Two BizTalk Servers, One SQL Server Topology

We used two different topologies, each with two BizTalk servers and one SQL server:

Topology 1: Both BizTalk servers received, transmitted the load, and processed the orchestration

Both BizTalk servers were clustered through NLB to receive the SOAP requests. This topology provides fault tolerance: if one BizTalk server goes down, the other can continue receiving the load.

The following figure depicts a topology consisting of two BizTalk servers and one SQL server where both BizTalk servers receive messages.

[Figure: topology 1 - two BizTalk servers, both receiving, and one SQL server]

Topology 2: One of the BizTalk servers received messages and the other executed the orchestration

The following figure depicts a topology where only one of the BizTalk servers receives and transmits messages. The other BizTalk server executes the orchestration.

The advantage of this topology is that the orchestration host runs on a separate computer, which leaves more CPU available for the BizTalk Server host instance that receives and transmits messages. However, this topology does not provide fault tolerance because there is only one receiving server.

[Figure: topology 2 - one BizTalk server receiving and transmitting, the other executing the orchestration]

Test Results

Following is some of the most important performance data that we collected during the one-hour test runs for each topology.

Topology 1: Both BizTalk servers received and transmitted load and processed the orchestration

BizTalk Server Data:

Computer            %CPU Utilization    Documents Received/sec     Documents Processed/sec     Orchestrations Completed/sec    Latency (sec)
BizTalk Server 1    56                  55                         55                          55                              1.4
BizTalk Server 2    56                  56                         56                          56                              1.4
Total               -                   111                        -                           -                               1.4

SQL Server Data:

Computer                      %CPU Utilization    SQL Lock Wait Time (ms)    SQL Lock Timeouts/sec    %Idle Time (C:)    %Idle Time (S:)
Master MessageBox             71                  1665                       2633                     85                 98

Topology 2: One of the BizTalk servers received messages and the other executed the orchestration

BizTalk Server Data:

Computer            %CPU Utilization    Documents Received/sec     Documents Processed/sec     Orchestrations Completed/sec    Latency (sec)
BizTalk Server 1    73                  124                        124                         -                               0.65
BizTalk Server 2    39                  -                          -                           124                             -
Total               -                   124                        -                           -                               0.65

SQL Server Data:

Computer                      %CPU Utilization    SQL Lock Wait Time (ms)    SQL Lock Timeouts/sec    %Idle Time (C:)    %Idle Time (S:)
Master MessageBox             52                  457                        652                      88                 98

Test Analysis
  • SQL Server lock contention was the main bottleneck in both of the runs.
  • Topology 1 has twice as many host instances as topology 2, which added contention on SQL Server, as the SQL Server lock wait times show: 1,665 milliseconds for topology 1 versus 457 milliseconds for topology 2. This affected overall latency, and topology 1 had almost double the latency of topology 2.
  • The next step is to add another BizTalk server so that the BizTalk Server tier can be scaled out further.

Recommendation: Select either topology 1 or 2 based on your throughput or latency requirements. Also, keep in mind that topology 1 offers fault-tolerance, whereas topology 2 does not.

Test 3: Scaled-Out BizTalk Server Tier

  • We added another BizTalk server and observed no increase in throughput. The MessageBox database on the SQL server was already saturated with lock contention, so adding another BizTalk server did not help.
  • The next step was therefore to scale up the SQL server hosting the MessageBox database from a 4-processor computer to an 8-processor computer. The following section describes the SQL Server scale-up test.

Test 4: Scaled-Up SQL Server Tier

Testing Topology A: Three BizTalk Servers and One SQL Server

For this test, we upgraded the SQL server that hosts the MessageBox database from a 4-processor computer to an 8-processor computer. For more information about the 8-processor hardware specification, see "Hardware Specifications" earlier in this paper.

We already had three BizTalk servers before we scaled up the MessageBox database, so we continued with the three-BizTalk-server topology: two BizTalk servers receiving and transmitting SOAP messages, one BizTalk server executing the orchestrations, and one 8-processor SQL server hosting the MessageBox database.

[Figure: topology with three BizTalk servers and one 8-processor SQL server]

Test Results

Following is some of the most important performance data that we collected:

BizTalk Server Data:

Computer            %CPU Utilization    Documents Received/sec     Documents Processed/sec     Orchestrations Completed/sec    Latency (sec)
BizTalk Server 1    52                  82                         82                          -                               0.62
BizTalk Server 2    52                  82                         82                          -                               0.62
BizTalk Server 3    56                  -                          -                           164                             -
Total               -                   164                        -                           -                               0.62

SQL Server Data:

Computer                      %CPU Utilization    SQL Lock Wait Time (ms)    SQL Lock Timeouts/sec    %Idle Time (C:)    %Idle Time (S:)
Master MessageBox             37                  450                        1061                     92                 96

Test Analysis
  • By scaling up SQL Server from a 4-processor to 8-processor computer, the throughput has increased from 124 messages/sec to 164 messages/sec.
  • Per-message latency also went down, from 0.65 seconds to 0.62 seconds.
  • The BizTalk servers still had plenty of CPU available, but because the SQL Server lock wait time was high (around 450 ms), the only way to scale this system further was to scale out the SQL Server tier by adding MessageBox databases.

Scaled-Out SQL Server Tier

To scale out the SQL Server tier, we added MessageBox databases. We added databases until the master MessageBox database became the bottleneck. After this happened, the system reached its scaling threshold and could not be scaled further.

Master MessageBox and One Secondary MessageBox Database

The master MessageBox server is the 8-processor computer that we used in the previous scale-up run. The secondary MessageBox server is the regular 4-processor MessageBox server that we used before in the scaled-out BizTalk Server tier. Normally, the master MessageBox server and the secondary MessageBox servers should have the same processing power.

The following figure depicts a topology that includes a master MessageBox database, and one secondary MessageBox database.

[Figure: topology with a master MessageBox database and one secondary MessageBox database]

You can use the SQL server that hosts the master MessageBox in the following two ways:

Test 5: Master MessageBox Database with Publishing Enabled

In this case, the master MessageBox database does both routing and publishing, whereas the secondary MessageBox database does publishing. When we ran tests for this scenario, we obtained the following results.

Test Results

BizTalk Server Data:

Computer            %CPU Utilization    Documents Received/sec     Documents Processed/sec     Orchestrations Completed/sec    Latency (sec)
BizTalk Server 1    43                  74                         74                          -                               0.8
BizTalk Server 2    43                  75                         75                          -                               0.8
BizTalk Server 3    58                  -                          -                           149                             -
Total               -                   149                        -                           -                               0.8

SQL Server Data:

Computer                      %CPU Utilization    SQL Lock Wait Time (ms)    SQL Lock Timeouts/sec    %Idle Time (C:)    %Idle Time (S:)
Master MessageBox             47                  550                        455                      92                 75
First Secondary MessageBox    41                  138                        551                      82                 98

Test Analysis
  • Adding a new publishing secondary MessageBox database did not help increase the throughput because the master MessageBox database was still bottlenecked by lock contention. The lock wait time was 550 milliseconds, which also caused latency to increase.
  • Adding a MessageBox database on a different SQL server incurred DTC costs, and this extra overhead reduced throughput.

Test 6: Master MessageBox with Publishing Disabled

In this case, the master MessageBox database does only routing with publishing disabled. The secondary MessageBox is the only publisher in this topology. Following are the results achieved in this scenario.

Test Results

BizTalk Server Data:

Computer            %CPU Utilization    Documents Received/sec     Documents Processed/sec     Orchestrations Completed/sec    Latency (sec)
BizTalk Server 1    50                  59                         59                          -                               0.75
BizTalk Server 2    51                  55                         55                          -                               0.75
BizTalk Server 3    42                  -                          -                           114                             -
Total               -                   114                        -                           -                               0.75

SQL Server Data:

Computer                      %CPU Utilization    SQL Lock Wait Time (ms)    SQL Lock Timeouts/sec    %Idle Time (C:)    %Idle Time (S:)
Master MessageBox             27                  119                        289                      97                 86
First Secondary MessageBox    68                  833                        1437                     91                 87

Test Analysis
  • With the master doing only routing, there is only one publishing MessageBox database. That secondary MessageBox is a 4-processor computer and is the bottleneck in this topology: its CPU usage was 68 percent, with a SQL Server lock wait time of 833 milliseconds. Note that the throughput achieved in this topology is almost equal to the throughput achieved in the two-BizTalk-server, one-SQL-server topology; in both cases, the publishing MessageBox database is a 4-processor computer.
  • Going from one MessageBox database to two did not improve throughput.
  • We can scale out the system by making the master MessageBox database do only routing; that way, the master has enough resources for routing and does not become the bottleneck. To scale further, we keep adding secondary MessageBox databases.
  • The following sections analyze the scenario with a master and two secondary MessageBox databases, with the master doing only routing.

Test 7: Master and Two Secondary MessageBox Database Scenario

The following figure depicts a topology of the master MessageBox database and two secondary MessageBox databases. Publishing was disabled on the master MessageBox.

[Figure: topology with a master MessageBox database and two secondary MessageBox databases]

Test Results

BizTalk Server Data:

Computer            %CPU Utilization    Documents Received/sec     Documents Processed/sec     Orchestrations Completed/sec    Latency (sec)
BizTalk Server 1    54                  55                         55                          -                               0.95
BizTalk Server 2    49                  53                         53                          -                               0.95
BizTalk Server 3    48                  54                         54                          -                               0.95
BizTalk Server 4    45                  -                          -                           81                              -
BizTalk Server 5    47                  -                          -                           81                              -
Total               -                   162                        -                           -                               0.95

SQL Server Data:

Computer                      %CPU Utilization    SQL Lock Wait Time (ms)    SQL Lock Timeouts/sec    %Idle Time (C:)    %Idle Time (S:)
Master MessageBox             42                  280                        100                      97                 67
First Secondary MessageBox    55                  355                        931                      98                 78
Second Secondary MessageBox   75                  740                        1375                     98                 78

Test Analysis
  • Three BizTalk servers were used as receivers, and two were used to execute the orchestration.
  • The BizTalk servers still had CPU available, and there was no bottleneck on the BizTalk Server tier.
  • The master MessageBox showed a moderate SQL Server lock wait time of 280 ms and still had some headroom for more throughput.
  • The major bottlenecks were the secondary MessageBox databases, with SQL Server lock wait times of 355 ms and 740 ms, respectively.
  • When scaling out a single-MessageBox scenario, it is therefore better to go directly to three MessageBox databases with the master doing only routing. In other words, in this scenario, scaling out from one to three MessageBox databases works better than scaling out from one to two.

Test 8: Master and Three Secondary MessageBox Databases

Three BizTalk servers were used for receiving and sending SOAP messages, and two BizTalk servers were used to execute orchestrations. The master MessageBox database is on an 8-processor computer with publishing disabled, and three secondary MessageBox databases devoted to publishing are present in this scenario.

The following figure depicts a topology of this scenario: a master MessageBox database and three secondary MessageBox databases.

[Figure: topology with a master MessageBox database and three secondary MessageBox databases]

Test Results

BizTalk Server Data:

Computer            %CPU Utilization    Documents Received/sec     Documents Processed/sec     Orchestrations Completed/sec    Latency (sec)
BizTalk Server 1    75                  69                         69                          -                               1.25
BizTalk Server 2    69                  65                         65                          -                               1.25
BizTalk Server 3    72                  68                         68                          -                               1.25
BizTalk Server 4    61                  -                          -                           101                             -
BizTalk Server 5    64                  -                          -                           101                             -
Total               -                   202                        -                           -                               1.25

SQL Server Data:

Computer                      %CPU Utilization    SQL Lock Wait Time (ms)    SQL Lock Timeouts/sec    %Idle Time (C:)    %Idle Time (S:)
Master MessageBox             60                  793                        10                       97                 63
First Secondary MessageBox    41                  142                        793                      91                 88
Second Secondary MessageBox   67                  310                        974                      99                 82
Third Secondary MessageBox    42                  152                        742                      99                 82

Test Analysis
  • High SQL Server lock wait time on the master MessageBox (793 ms) is the major bottleneck, so the system cannot scale further.
  • Another run with four publishing MessageBox databases showed similar throughput, so the system has reached its scaling threshold. The maximum throughput achieved in this scenario on the specified hardware is 202 messages/sec, with an average latency of 1.25 seconds per message.
  • The scale factor in the following table is the ratio of the throughput of a topology to the throughput of the base topology (one BizTalk server, one SQL server). For example, the scale factor for the three-BizTalk-server, one-SQL-server topology (test 3) is the throughput of test 3 divided by the throughput of test 1: 124/81 = 1.53.

The following table and graphs summarize all the results obtained.

Test No.   BizTalk Servers   Publishing on Master MessageBox   Processors on Master MessageBox   Secondary MessageBoxes   Throughput (msg/sec)   Latency (sec)   Scale Factor
1          1                 Yes                               4                                 0                        81                     0.5             1.00 (baseline)
2          2                 Yes                               4                                 0                        124                    1.4             1.53
3          3                 Yes                               4                                 0                        124                    1.5             1.53
4          3                 Yes                               8                                 0                        164                    0.62            2.02
5          3                 Yes                               8                                 1                        149                    0.8             1.83
6          3                 No                                8                                 1                        114                    0.75            1.40
7          4                 No                                8                                 2                        162                    0.95            2.00
8          5                 No                                8                                 3                        202                    1.25            2.49
9          5                 No                                8                                 4                        202                    1.6             2.49

The following figure shows the throughput in the scaling tests of the SOAP adapter.

[Figure: throughput in the scaling tests of the SOAP adapter]

The following figure shows the latency in the scaling tests of the SOAP adapter.

[Figure: latency in the scaling tests of the SOAP adapter]

We started with a scenario consisting of one BizTalk server and one SQL server. We scaled the BizTalk server tier first until the master MessageBox database became the bottleneck. Then the master MessageBox server was scaled up to further scale the system. Additional MessageBox databases were added to scale the system until the master became the bottleneck.

This scalability case study shows the scaling patterns of the SOAP adapter in terms of throughput and latency. The scaling factors, throughput, and latency depend entirely on the complexity of the scenario and on the hardware. It is therefore important to start testing the scalability of your scenarios early in the application development cycle to see how well the system scales.

The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.

This White Paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

Unless otherwise noted, the companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted in examples herein are fictitious. No association with any real company, organization, product, domain name, e-mail address, logo, person, place, or event is intended or should be inferred.

© 2006 Microsoft Corporation. All rights reserved.

Microsoft, MS-DOS, Windows, Windows Server, Windows Vista, and BizTalk are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

All other trademarks are property of their respective owners.
