
Estimate performance and capacity requirements for Access Services in SharePoint Server 2010


Applies to: SharePoint Server 2010 Enterprise

Topic Last Modified: 2011-08-05

This article provides guidance on the resource footprint that Access Services in Microsoft SharePoint Server 2010 places on topologies that are running SharePoint Server 2010.


This section describes the dataset used during the testing, the workloads placed on the product while performance data was gathered, the hardware used during the testing, and the topology in which that hardware was deployed.

Access Services capacity and performance is highly dependent on the makeup of the applications that are hosted on the service. The size of tables and the complexity of queries often have the most effect on capacity and performance. The testing used representative sizes and complexities, but every application and dataset is different. The capacity and performance will depend on the applications that are being used, their specific complexity, and the data size.

To evaluate the capacity profile, Access Services applications were simulated on a farm dedicated to Access Services (no other SharePoint tests were running). The farm contained the following representative sites:

  • 1,500 Access Services applications that have a “Small” size profile; 100 items maximum per list.

  • 1,500 Access Services applications that have a “Medium” size profile; 2,000 items maximum per list.

  • 1,500 Access Services applications that have a “Large” size profile; 10,000 items maximum per list.

Each application is made up of multiple lists; the maximum above applies to the largest list, and the other lists are sized proportionally. Access Services can handle more data than 10,000 items. This number was chosen for the “Large” profile because larger applications were not expected to be common.

The applications were evenly distributed among the following application types:

  • Contacts   A simple contact management application, dominated by a single list.

  • Projects   A simple task and project tracking application, dominated by two lists (projects, and tasks associated with each project).

  • Orders   A simple order entry system, similar to the Northwind Traders sample for Microsoft Access but scaled down, that includes many interrelated lists (orders, order details, invoices, invoice details, purchase orders, purchase order details, and so on).

To simulate application usage, workloads were created to perform one or more of the following operations:

  • Opening forms

  • Paging through the forms

  • Filtering and sorting data sheets

  • Updating, deleting and inserting records

  • Publishing applications

  • Rendering reports

Each workload includes “think time” between user actions, ranging from 5 to 20 seconds. This differs from other SharePoint capacity planning documents. Access Services is stateful; memory cursors and record sets were maintained between user interactions. It was important to simulate a full user session and not merely individual requests. For a single user workload, there is an average of two requests per second.
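As a rough illustration, this per-user request rate can be used to translate a farm-level throughput figure into an approximate number of concurrent stateful sessions. The sketch below is not part of the test harness; it simply assumes the roughly two-requests-per-second-per-user figure measured for this workload.

```python
# Rough sizing sketch: translate a sustained farm throughput (RPS) into an
# approximate number of concurrent user sessions, assuming the ~2 requests
# per second per simulated user measured for this workload.

def concurrent_sessions(target_rps: float, per_user_rps: float = 2.0) -> int:
    """Estimate how many concurrent sessions a farm-level RPS represents."""
    if per_user_rps <= 0:
        raise ValueError("per_user_rps must be positive")
    return int(target_rps / per_user_rps)

# Example: a topology that sustains 45 RPS supports roughly 22 sessions.
print(concurrent_sessions(45))  # -> 22
```

Because every application mix has its own per-user request rate, the default of 2.0 should be replaced with a figure measured from your own workload.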

The following table shows the percentages used to determine which application and which size of application to use.

 

           Small   Medium   Large
Contacts   16%     10%      9%
Projects   18%     12%      10%
Orders     11%     8%       6%
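The mix in the table above can be reproduced with a simple weighted random selection. The sketch below is illustrative only; the MIX table mirrors the percentages from the table (which sum to 100), and the function name is invented.

```python
import random

# The workload mix from the table above: (application, size) -> percent.
MIX = {
    ("Contacts", "Small"): 16, ("Contacts", "Medium"): 10, ("Contacts", "Large"): 9,
    ("Projects", "Small"): 18, ("Projects", "Medium"): 12, ("Projects", "Large"): 10,
    ("Orders",   "Small"): 11, ("Orders",   "Medium"): 8,  ("Orders",   "Large"): 6,
}

def pick_application(rng: random.Random) -> tuple:
    """Pick an (application, size) pair according to the workload mix."""
    choices, weights = zip(*MIX.items())
    return rng.choices(choices, weights=weights, k=1)[0]

# Example: draw one application/size pair with a seeded generator.
app, size = pick_application(random.Random(42))
```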

For each configuration, two tests were run to determine a “green zone” and a “red zone.” The green zone is the recommended throughput that can be sustained. The red zone is the maximum throughput that can be tolerated for a short time but should be avoided.

The green zone was defined as a point at which the test being run consumes at most half the bottlenecking resource. In this case, the bottlenecking resource was %CPU on any of the three tiers: front-end Web server, application server (Access Data Services), or database server (SQL Server). First, the bottleneck was identified for a particular configuration. If the bottleneck was Access Data Services CPU, we made sure that the green zone test consumed CPU on the Access Data Services computers in a range between 40 and 50 percent.

For the red zone, a point was selected at which the maximum throughput was reached. This proved to be a CPU range between 80 and 90 percent. When searching for bottlenecks, we looked at %CPU, memory usage (private bytes), disk queue length, network I/O, and other resources that could cause a bottleneck.

Both the green and red zone tests were run for 1 hour at a fixed user load.
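A minimal sketch of these zone definitions, using the CPU boundaries described above (the exact cutoffs between zones are a simplification for illustration):

```python
def capacity_zone(bottleneck_cpu_pct: float) -> str:
    """Classify bottleneck-tier CPU usage into the zones described above.

    green: at most half the bottleneck resource is consumed (sustainable).
    red:   up to the 80-90 percent range at which maximum throughput was
           reached; tolerable for a short time, but should be avoided.
    over:  beyond the measured maximum.
    """
    if bottleneck_cpu_pct <= 50:
        return "green"
    if bottleneck_cpu_pct <= 90:
        return "red"
    return "over"

print(capacity_zone(45))  # -> green
```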

The specific capacity and performance figures presented in this article will differ from figures in a real-world environment. This simulation is only an estimate of what actual users might do. The figures presented are intended to provide a starting point for the design of an appropriately scaled environment. After you have completed the initial system design, you should test the configuration to determine whether the system will support the factors in your environment.

To provide a high level of test-result detail, several farm configurations were used for testing. Farm configurations ranged from one to four front-end Web servers, one to four application servers (running Access Data Services), and a single database server computer that is running Microsoft SQL Server 2008. In addition, testing was performed by using four client computers. All server computers were 64-bit. All client computers were 32-bit.

The following table lists the specific hardware that was used for the testing.

 

Machine role                               CPU                              Memory   Network   Disk
Front-end Web server                       2 processors, 4 cores, 2.33 GHz  8 GB     1 Gbps    2 spindles, RAID 5
Application server (Access Data Services)  2 processors, 4 cores, 2.33 GHz  8 GB     1 Gbps    2 spindles, RAID 5
Database server (SQL Server)               4 processors, 4 cores, 2.6 GHz   32 GB    1 Gbps    Direct Attached Storage (DAS), RAID 0 for each Logical Unit Number (LUN)

From our experience, CPU on the application server tier, where Access Data Services runs, is an important limiting factor for throughput. We varied the topology by adding Access Data Services computers until that tier was no longer the bottleneck, and then added a front-end Web server to obtain even more throughput. The following topologies were tested:

  • 1x1: One front-end Web server computer to one Access Data Services computer

  • 1x2: One front-end Web server computer to two Access Data Services computers

  • 1x3: One front-end Web server computer to three Access Data Services computers

  • 1x4: One front-end Web server computer to four Access Data Services computers

  • 2x1: Two front-end Web server computers to one Access Data Services computer

  • 2x2: Two front-end Web server computers to two Access Data Services computers

  • 2x4: Two front-end Web server computers to four Access Data Services computers

The computer running SQL Server is relatively powerful, and at no time did it become the bottleneck (although it started to approach CPU saturation in the 2x4 test), so we did not vary it in the topologies. Depending on the queries in a real-world application mix, the database server (SQL Server) tier could become the bottleneck.

Reporting Services ran in connected mode for all tests, on the application server (Access Data Services) tier.

The following tables show the test results of Access Services. For each group of tests, only certain specific variables are changed to show the progressive impact on farm performance.

All the tests reported in this article were conducted with think time or wait time. This differs from the capacity planning results for other parts of SharePoint.

For information about bottlenecks of Access Services, see Common bottlenecks and their causes later in this article.

The following table and graph summarize the impact of adding additional front-end Web servers and dedicated Access Data Services computers to the farm. These throughput numbers are specifically for the Access Data Services computers. They do not reflect the impact on the overall farm.

 

Topology   Baseline solution maximum (RPS)   Baseline recommended (RPS)
1x1        25                                15
1x2        54                                29
1x3        82                                45
1x4        88                                48
2x1        25                                15
2x2        55                                29
2x4        116                               58

[Graph: Maximum and recommended RPS]

The following graph shows the results for recommended sustainable throughput.

[Graph: Throughput vs. ADS]

As described earlier in this article, adding the fourth Access Data Services computer shifts the bottleneck to the front-end Web server, and adding a second front-end Web server resolves that resource constraint. This implies that 1x1, 1x2, and 1x3 are reasonable configurations, but that when a fourth Access Data Services computer is added, a front-end Web server should also be added. Because scaling is linear (a straight line from 1x1 to 1x4), it can be assumed that the addition of a seventh Access Data Services computer would imply the addition of a third front-end Web server, and so on, to satisfy the needs of the farm.
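The linear scaling observed in these tests suggests a rough sizing rule: about 15 RPS of sustained (green zone) throughput per Access Data Services computer, with one front-end Web server per three Access Data Services computers. The sketch below encodes that rule; the ratios come from this test workload only and should be validated against your own applications.

```python
import math

GREEN_RPS_PER_ADS = 15   # approximate green zone throughput per ADS computer (1x1 result)
ADS_PER_WEB_SERVER = 3   # one front-end Web server per three ADS computers

def size_farm(target_green_rps: float) -> tuple:
    """Return (front_end_web_servers, ads_computers) for a target sustained RPS."""
    ads = max(1, math.ceil(target_green_rps / GREEN_RPS_PER_ADS))
    web = math.ceil(ads / ADS_PER_WEB_SERVER)
    return web, ads

print(size_farm(58))  # -> (2, 4), matching the measured 2x4 green zone result
```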

Remember that these results are based on a simulated workload only, and that an actual deployment should be monitored to find the point at which additional front-end Web servers are needed to support additional Access Data Services computers. Also, in these tests the front-end Web servers were dedicated to Access Services; in reality, the front-end Web servers are likely shared with other SharePoint workloads. The following graph shows the results.

[Graph: Response time vs. ADS]

The following graph shows the response time at this throughput level. The response time is very fast, at less than ¼ second on average per request.

[Graph: SQL %CPU vs. ADS]

These results show that the SQL Server computer was not a bottleneck, because adding a second front-end Web server resolved the resource shortage, and the SQL Server CPU was always less than 50 percent. However, be aware that the instance of SQL Server is shared with other SharePoint services and SharePoint itself, and so the cumulative effect might drive CPU or disk I/O queue lengths to the point that they do become a bottleneck.

The following graph shows the results, in which throughput was pushed beyond what could be sustained.

In this graph we see that, again, a second front-end Web server was needed to maximize the usefulness of the fourth Access Data Services computer. Again, your results might vary, because this is highly dependent on the applications and their usage patterns.

[Graph: Throughput vs. ADS]

In this case, the response time increases because the overall system is under stress. However, response times are still only approximately one second, which is acceptable to most users.

It might seem odd that, with four Access Data Services computers, two front-end Web servers show a higher response time than one front-end Web server. This is because the overall throughput of the system is higher with two front-end Web servers.

[Graph: Response time vs. ADS]

SQL Server is again not a limiting factor here, because adding the second front-end Web server put us back on a linear scaling line. However, we are reaching almost 90 percent CPU usage on the instance of SQL Server. Therefore, there is very little headroom remaining. If we were to add a fifth Access Data Services computer, the SQL Server computer likely would have become the bottleneck.

[Graph: SQL %CPU vs. ADS]

The following tables show the detailed results for the recommended configurations.

 

Overall                1x1      1x2      1x3      1x4      2x1      2x2      2x4
Req/Sec                14.96    28.76    45.22    48.01    14.85    28.77    58.02
Tests/Sec              2.00     3.81     6.11     6.42     1.99     3.81     7.80
Average Latency (ms)   235.80   241.21   247.21   244.87   240.70   242.26   250.94

 

Average front-end Web server tier   1x1        1x2        1x3        1x4        2x1        2x2        2x4
%CPU                                13.82      24.40      41.02      43.62      6.31       12.48      26.18
Max w3wp Private Bytes              9.46E+08   2.31E+08   1.49E+09   1.55E+09   8.43E+08   9.84E+08   1.19E+09

 

Average application server (Access Data Services) tier   1x1        1x2        1x3        1x4        2x1        2x2        2x4
%CPU                                                     46.30      42.83      43.74      34.51      46.56      43.45      42.13
%CPU w3wp                                                33.61      31.15      30.71      24.29      33.48      31.64      29.72
%CPU RS                                                  8.62       7.94       9.17       6.84       9.03       8.02       8.71
Max total Private Bytes                                  4.80E+09   4.89E+09   4.91E+09   4.62E+09   5.32E+09   4.82E+09   5.07E+09
Max w3wp Private Bytes                                   2.10E+09   1.97E+09   2.04E+09   1.86E+09   2.00E+09   2.00E+09   2.07E+09
Max RS Private Bytes                                     1.78E+09   2.00E+09   1.97E+09   1.86E+09   2.30E+09   1.89E+09   2.02E+09

 

Database server (SQL Server) tier (single computer)   1x1        1x2        1x3        1x4        2x1        2x2        2x4
%CPU                                                  12.07      18.64      32.53      36.05      9.89       21.42      47.46
Avg Private Bytes                                     2.96E+10   3.22E+10   3.25E+10   3.25E+10   2.89E+10   3.22E+10   3.25E+10
Max Private Bytes                                     3.26E+10   3.25E+10   3.25E+10   3.25E+10   3.25E+10   3.25E+10   3.25E+10
Avg Disk Queue Length Total                           0.74       1.18       1.64       1.77       0.67       1.24       2.18

The following tables show the detailed results for the maximum configurations.

 

Overall                1x1      1x2      1x3      1x4      2x1      2x2      2x4
Req/Sec                14.96    28.76    45.22    48.01    14.85    28.77    58.02
Tests/Sec              2.00     3.81     6.11     6.42     1.99     3.81     7.80
Average Latency (ms)   235.80   241.21   247.21   244.87   240.70   242.26   250.94

Average front-end Web server tier   1x1        1x2        1x3        1x4        2x1        2x2        2x4
%CPU                                13.82      24.40      41.02      43.62      6.31       12.48      26.18
Max w3wp Private Bytes              9.46E+08   2.31E+08   1.49E+09   1.55E+09   8.43E+08   9.84E+08   1.19E+09

Average application server (Access Data Services) tier   1x1        1x2        1x3        1x4        2x1        2x2        2x4
%CPU                                                     46.30      42.83      43.74      34.51      46.56      43.45      42.13
%CPU w3wp                                                33.61      31.15      30.71      24.29      33.48      31.64      29.72
%CPU RS                                                  8.62       7.94       9.17       6.84       9.03       8.02       8.71
Max total Private Bytes                                  4.80E+09   4.89E+09   4.91E+09   4.62E+09   5.32E+09   4.82E+09   5.07E+09
Max w3wp Private Bytes                                   2.10E+09   1.97E+09   2.04E+09   1.86E+09   2.00E+09   2.00E+09   2.07E+09
Max RS Private Bytes                                     1.78E+09   2.00E+09   1.97E+09   1.86E+09   2.30E+09   1.89E+09   2.02E+09

Database server (SQL Server) tier (single computer)   1x1        1x2        1x3        1x4        2x1        2x2        2x4
%CPU                                                  12.07      18.64      32.53      36.05      9.89       21.42      47.46
Avg Private Bytes                                     2.96E+10   3.22E+10   3.25E+10   3.25E+10   2.89E+10   3.22E+10   3.25E+10
Max Private Bytes                                     3.26E+10   3.25E+10   3.25E+10   3.25E+10   3.25E+10   3.25E+10   3.25E+10
Avg Disk Queue Length Total                           0.74       1.18       1.64       1.77       0.67       1.24       2.18

This section provides general performance and capacity recommendations.

Access Services capacity and performance is highly dependent on the makeup of the applications that are hosted on the service. The size of tables and the complexity of queries often have the most effect. The testing used representative sizes and complexities, but every application and dataset is different. Therefore, the capacity and performance will depend on the applications in use, their specific complexity, and the data size.

Access Services uses standard hardware for both front-end Web servers and application servers; no special requirements are necessary. General SharePoint Server 2010 guidelines for CPU number, speed, and memory are applicable for computers in the application server (Access Data Services) tier.

To increase the capacity and performance of one of the starting-point topologies, you can do one of two things. You can either scale up by increasing the capacity of your existing servers or scale out by adding additional servers to the topology. This section describes the general performance characteristics of several scaled-out topologies.

The sample topologies represent the following common ways to scale out a topology for an Access Services scenario:

  • To provide for more user load, check the CPU on the existing Access Data Services application servers. Add more CPUs or cores, or both, to these servers if possible, and add more Access Data Services computers as needed. This can be done until the front-end Web server becomes the bottleneck; at that point, add front-end Web servers as needed.

  • In our tests, memory on the front-end Web server tier and application server (Access Data Services) tier was not a bottleneck. Depending on the size of the result sets, memory could become an issue, although we do not expect that to be the norm. Track the private bytes for the Access Data Services w3wp process, as described in the performance counter tables later in this article.

  • In our tests, SQL Server was not a bottleneck. However, our tests were run in isolation from other SharePoint Server 2010 services. SQL Server CPU and disk I/O should be monitored and additional servers or spindles added as needed.

One way to control the performance characteristics of Access Services is to limit the size and complexity of queries that can be performed. Access Services provides a set of configurable throttles for controlling queries. Each of the following settings can be configured through SharePoint Central Administration. (In the Application Management section, click Manage Service Applications, and then click Access Services.)

In general, the amount of data that must be retrieved from SharePoint to perform a query has a significant effect on performance. This can be controlled in several ways. First, the inputs to a query can be limited:

  • Maximum Sources per Query

  • Maximum Records per Table

Second, the resulting size of a query can be limited:

  • Maximum Columns per Query

  • Maximum Rows per Query

  • Allow Outer Joins

In addition to the size of the query (data size in and out), the processing complexity on the data can be controlled, to reduce the CPU load on the application server (Access Data Services) tier:

  • Maximum Calculated Columns per Query

  • Maximum Order by Clauses per Query

The previous settings will affect the applications that can be run on the server. For example, if an application is written with 40 output columns from a query, and the Maximum Columns per Query setting is below this level, the application will throw a runtime error. A balance between user needs and acceptable performance must be struck, and it is highly dependent on the kinds of Access applications that are expected to run on the farm.
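As an illustration of that balance, a deployment process could validate an application's queries against the configured throttles before publishing. In the sketch below, the Query class, the limit values, and the internal setting keys are invented for illustration; only the setting display names come from Central Administration.

```python
from dataclasses import dataclass

@dataclass
class Query:
    """Hypothetical summary of one query in an Access application."""
    sources: int
    columns: int
    calculated_columns: int
    order_by_clauses: int

# Example limit values only -- not product defaults.
LIMITS = {
    "MaximumSourcesPerQuery": 8,
    "MaximumColumnsPerQuery": 40,
    "MaximumCalculatedColumnsPerQuery": 10,
    "MaximumOrderByClausesPerQuery": 4,
}

def violations(q: Query) -> list:
    """Return the names of any throttle settings the query would exceed."""
    checks = {
        "MaximumSourcesPerQuery": q.sources,
        "MaximumColumnsPerQuery": q.columns,
        "MaximumCalculatedColumnsPerQuery": q.calculated_columns,
        "MaximumOrderByClausesPerQuery": q.order_by_clauses,
    }
    return [name for name, value in checks.items() if value > LIMITS[name]]

# A 41-column query would fail at runtime under a 40-column limit.
print(violations(Query(sources=2, columns=41, calculated_columns=0, order_by_clauses=1)))
# -> ['MaximumColumnsPerQuery']
```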

One additional, more extreme measure can be taken. SharePoint Server 2010 natively supports a set of query operations, which Access Services augments to cover a broader set of application scenarios. For Access Services to perform queries beyond what SharePoint supports natively, a large amount of data might have to be retrieved from the SharePoint content database. Instead, Access Services can be restricted to only those query operations that SharePoint can support natively, thereby avoiding the data fetch that more complex operations require:

  • Allow Non-Remotable Queries

During performance testing, several different common bottlenecks were revealed. A bottleneck is a condition in which the capacity of a particular constituent of a farm is reached. This causes a plateau or decrease in farm throughput.

The table in Troubleshooting later in this article lists some common bottlenecks and describes their causes and possible resolutions.

To help you determine when you have to scale up or scale out the system, use performance counters to monitor the health of the system. Use the information in the following tables to determine which performance counters to monitor, and to which process the performance counters should be applied.

The following table shows performance counters and processes to monitor for Web servers in your farm.

 

Performance counter | Apply to object | Notes
% Processor Time | Processor(_Total) | Shows the percentage of elapsed time that the processor was used to execute instructions.
Private Bytes | Process(w3wp) | This value should not approach the Max Private Bytes setting for w3wp processes. If it does, investigate which component is using the memory.

The following table shows performance counters and processes to monitor for application servers (Access Data Services, in this case) within your farm.

 

Performance counter | Apply to object | Notes
% Processor Time | Processor(_Total) | Shows the percentage of elapsed time that the processor was used to execute instructions.
% Processor Time | Process(w3wp) | Access Data Services runs within its own w3wp process, which is easy to identify because it receives the bulk of the CPU time.
Avg. Disk Queue Length | PhysicalDisk(_Total) | Watch for excessive disk writing because of logging.
% Processor Time | Process(ReportingServicesService) | Reports are handled by SQL Server Reporting Services. If too many reports are being run, or if the reports are very complex, the CPU and Private Bytes for this process will be high.
Private Bytes | Process(w3wp) | Access Services caches query results in memory until the user's session expires (the time-out is configurable). If a large amount of data is processed through Access Data Services, memory consumption for its w3wp process will increase.
Private Bytes | Process(ReportingServicesService) | Reports are handled by SQL Server Reporting Services. If too many reports are being run, or if the reports are very complex, the CPU and Private Bytes for this process will be high.

The following table shows performance counters and processes to monitor for database servers in your farm.

 

Performance counter | Apply to object | Notes
% Processor Time | Processor(_Total) | Shows the percentage of elapsed time that the processor was used to execute instructions.
% Processor Time | Process(sqlservr) | Average values larger than 80 percent indicate that processor capacity on the database server is insufficient.
Private Bytes | Process(sqlservr) | Shows the average amount of memory being consumed by SQL Server.
Avg. Disk Queue Length | PhysicalDisk(_Total) | Shows the average disk queue length: database requests waiting to be committed to disk. A long queue is often a good indicator that the instance of SQL Server is becoming overloaded and that additional disk spindles would help distribute the load.
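The thresholds in these tables can be folded into a simple automated check over averaged counter samples. The sketch below is illustrative only: the sample keys are abbreviations of the counters above, and the disk queue threshold is an example value rather than a product guideline.

```python
def flag_bottlenecks(samples: dict) -> list:
    """Map averaged counter samples (counter name -> value) to bottleneck flags."""
    flags = []
    # Average sqlservr CPU above 80 percent indicates insufficient
    # processor capacity on the database server (per the table above).
    if samples.get("Process(sqlservr) % Processor Time", 0) > 80:
        flags.append("database server CPU")
    # Example threshold: a persistently long disk queue suggests that
    # additional spindles would help distribute the load.
    if samples.get("PhysicalDisk(_Total) Avg. Disk Queue Length", 0) > 2:
        flags.append("database server disk I/O")
    return flags

print(flag_bottlenecks({"Process(sqlservr) % Processor Time": 87.0}))
# -> ['database server CPU']
```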

The following table lists some common bottlenecks and describes their causes and possible resolutions.

 

Bottleneck | Cause | Resolution
Access Data Services CPU | Access Services performs a large amount of processing in the application server tier. In a 1x1, 1x2, or 1x3 configuration, the first bottleneck encountered will likely be the CPU on the Access Data Services servers. | Increase the number of CPUs or cores, or both, in the existing Access Data Services computers. Add additional Access Data Services computers if possible.
Web server CPU usage | When a Web server is overloaded with user requests, average CPU usage will approach 100 percent. This prevents the Web server from responding to requests quickly and can cause timeouts and error messages on client computers. | This issue can be resolved in one of two ways. You can add more Web servers to the farm to distribute user load, or you can scale up the Web server or servers by adding higher-speed processors.
Database server disk I/O | When the number of I/O requests to a hard disk exceeds the disk's I/O capacity, the requests are queued. As a result, the time to complete each request increases. | Distributing data files across multiple physical drives allows for parallel I/O. The blog post SharePoint Disk Allocation and Disk I/O (http://go.microsoft.com/fwlink/p/?LinkId=129557) contains useful information about resolving disk I/O issues.
Reporting Services CPU utilization | The Reporting Services process is using a large share of the CPU resources. | Dedicate a computer to Reporting Services, taking load off the application server (Access Data Services) tier (connected mode) or the front-end Web server tier (local mode).
