Capacity Planning for Windows SharePoint Services

Archived content. No warranty is made as to technical accuracy. Content may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

This topic describes performance and scalability guidelines for Microsoft Windows SharePoint Services. The goal is to provide administrators with the information they need to purchase hardware, choose a server configuration, and manage the capacity of their deployments.

There are two kinds of capacity guidelines for Windows SharePoint Services:

  • Throughput The approximate number of transactions per second that a given Windows SharePoint Services server configuration can handle. This guideline helps you determine how many simultaneous users can use a given server configuration without negatively affecting performance.

  • Scale The approximate number of objects that can be created in a given scope, for example, the number of documents per folder. This guideline helps you determine the server configuration required to host a given number of objects.

About Capacity and Throughput Guidelines

The goal of the throughput testing is to measure the number of transactions per second that a server running Windows SharePoint Services can handle. The measured throughput is then used to extrapolate the number of simultaneous users by using a model of typical user behavior.

A rough rule of thumb is that 1 transaction per second maps to 1,000 users. This rule of thumb is derived by applying the following model for user behavior; the short sketch after this list works through the same arithmetic:

  • 1,000 total users

  • 10% peak concurrency, which gives 100 simultaneous users (10% of 1,000)

  • One request every 100 seconds per active user (36 requests per hour per user)

  • 100 simultaneous users ÷ 100 seconds per request per user = 1 transaction per second
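
The following sketch works through the same arithmetic. It is an illustration only; the function and parameter names are invented for this example.

```python
def required_tps(total_users, peak_concurrency, requests_per_hour):
    """Estimate the transactions per second a server must sustain.

    total_users        -- size of the user population
    peak_concurrency   -- fraction of users active at the same time (0.10 = 10%)
    requests_per_hour  -- requests generated per hour by each active user
    """
    simultaneous_users = total_users * peak_concurrency
    seconds_per_request = 3600 / requests_per_hour
    return simultaneous_users / seconds_per_request

# The model behind the rule of thumb: 1,000 users, 10% concurrency,
# 36 requests per hour (one request every 100 seconds) -> 1.0 tps.
print(required_tps(1000, 0.10, 36))
```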

Capacity Testing Methodology

The team tests throughput by using automated load generation tools that work in machine time, not user time. In other words, real user behavior is not modeled in the test lab; server capacity is measured by using fictitious "super users" who issue requests as fast as the server can respond. This approach ensures that the tests measure the capacity of the server, not the capacity of the load generation tool.
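
As an illustration only, the following minimal Python sketch shows the closed-loop "super user" idea: each simulated user issues its next request as soon as the previous response completes, so throughput is limited by the server rather than by think time. The article does not name the load generation tools the team used, and the URL in the usage comment is hypothetical.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def super_user(url, deadline):
    """One simulated 'super user': issue requests back-to-back until the deadline."""
    completed = 0
    while time.time() < deadline:
        with urllib.request.urlopen(url) as response:
            response.read()  # the next request starts as soon as this one finishes
        completed += 1
    return completed

def measure_peak_throughput(url, users=8, duration_seconds=60):
    """Approximate the server's peak transactions per second under closed-loop load."""
    deadline = time.time() + duration_seconds
    with ThreadPoolExecutor(max_workers=users) as pool:
        counts = pool.map(super_user, [url] * users, [deadline] * users)
    return sum(counts) / duration_seconds

# Usage (hypothetical URL):
# print(measure_peak_throughput("http://wss-test/sites/team/default.aspx"))
```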

There are two main variables in the throughput testing:

  • Transaction mix The mix of user transactions, such as browse home page, save document, and edit list item.

  • Server configuration The arrangement of server computers, such as a single server or a server farm with two Web servers.

About the Transaction Mix

The transaction mix defines the types and frequency of operations seen by the server, such as browse home page, edit document, and so on. This topic contains two different transaction mixes:

  • Read/write This is the typical SharePoint site operation mix. Most of the load is browsing to pages and documents in the site, but there is a substantial amount of list and document authoring as well. For details on the read/write operation mix, see Tested Read/Write Transaction Mix.

  • Read only This is the typical load of a reference site where the data on the server is changing very slowly. For this mix, the entire test load is on the home page of the site. The home page is one of the most expensive pages to render, so this is a fairly conservative read-only load.

About the Server Configuration

The server configuration describes how computers are configured to run the site. Windows SharePoint Services supports a server farm design where multiple Web servers can be used to serve the same content, as in the following illustration.

[Illustration: a server farm in which multiple Web servers serve the same content from a shared database server tier]

Administrators can add capacity to both the Web server and database server tiers by adding more server computers to the server farm. The total capacity of the server farm depends on the number of Web servers, the number of database servers, and the ratio of Web servers to database servers.

The following configurations were tested:

  • Single computer One computer running both the Web server and database server.

  • N-by-one server farm One to eight computers running the Web server and a separate computer running the database server. This covers the most common server farm scenarios. These tests determine the marginal throughput increase for each additional Web server and the optimum ratio of Web servers to database servers. As the test results show, adding Web servers to the server farm adds capacity linearly until the fourth or fifth Web server. Beyond five Web servers for each database server, the system bottleneck becomes the database server. To add even more capacity, you need to add another database server to the server farm.

  • Eight-by-two server farm Eight Web servers and two database servers. This test validates the database scale-out. An 8x2 server farm has roughly twice the total throughput of a 4x1 server farm. Extending the scale-out model to 12x3 and 16x4 becomes a matter of providing sufficient network bandwidth for the server farm.

For the test hardware specifications, see Tested Hardware and Software.

About Converting Throughput to Users

The performance lab found the peak throughput for each combination of transaction mix and server configuration. The throughput is measured in transactions per second. These transactions-per-second measurements can be converted to a total number of users by using a model of typical end-user behavior. As with many human behaviors, there is a broad range of "typical" behavior. The user model for Windows SharePoint Services has two variables:

  1. Concurrency The maximum percentage of the total user base who will be using the system simultaneously. The models all use 10% concurrency.

  2. Request rate The average number of requests per hour that an active user generates. The guidelines use four models for user behavior (the sketch after this list converts a throughput measurement into user counts for each model):

    • Light 20 requests per hour. An active user will generate a request every 180 seconds. Each response per second of throughput supports 180 simultaneous users and 1,800 total users.

    • Typical 36 requests per hour. An active user will generate a request every 100 seconds. Each response per second of throughput supports 100 simultaneous users and 1,000 total users.

    • Heavy 60 requests per hour. An active user will generate a request every 60 seconds. Each response per second of throughput supports 60 simultaneous users and 600 total users.

    • Extreme 120 requests per hour. An active user will generate a request every 30 seconds. Each response per second of throughput supports 30 simultaneous users and 300 total users.
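
To make the conversion concrete, the following sketch (the constant and function names are invented for this example) turns a measured throughput figure into simultaneous and total user counts for each of the four models:

```python
# Requests per hour generated by an active user under each model.
USER_MODELS = {"light": 20, "typical": 36, "heavy": 60, "extreme": 120}
PEAK_CONCURRENCY = 0.10  # 10% of the total user base is active at the same time

def supported_users(transactions_per_second, requests_per_hour):
    """Convert measured throughput into (simultaneous users, total users)."""
    seconds_per_request = 3600 / requests_per_hour
    simultaneous = transactions_per_second * seconds_per_request
    return simultaneous, simultaneous / PEAK_CONCURRENCY

# Example: 34 transactions per second under the typical model supports
# 3,400 simultaneous users and 34,000 total users.
for name, rate in USER_MODELS.items():
    print(name, supported_users(34, rate))
```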

Throughput Data

The following table shows the throughput results for each transaction mix, server configuration, and user model. The first two columns give the measured throughput in transactions per second (tps) for the read/write mix and the read-only mix; the remaining columns give the total user count that this throughput supports under each user model. The peak throughput for each transaction mix is shown in bold.

| Configuration | Mix (tps) | Read-only (tps) | Light, mix | Light, read-only | Typical, mix | Typical, read-only | Heavy, mix | Heavy, read-only | Extreme, mix | Extreme, read-only |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Single server | 34 | 43 | 61,200 | 77,400 | 34,000 | 43,000 | 20,400 | 25,800 | 10,200 | 12,900 |
| 1 by 1 | 65 | 70 | 117,000 | 126,000 | 65,000 | 70,000 | 39,000 | 42,000 | 19,500 | 21,000 |
| 2 by 1 | 121 | 132 | 217,800 | 237,600 | 121,000 | 132,000 | 72,600 | 79,200 | 36,300 | 39,600 |
| 3 by 1 | 156 | 194 | 280,800 | 349,200 | 156,000 | 194,000 | 93,600 | 116,400 | 46,800 | 58,200 |
| 4 by 1 | 161 | 256 | 289,800 | 460,800 | 161,000 | 256,000 | 96,600 | 153,600 | 48,300 | 76,800 |
| 5 by 1 | **164** | 279 | 295,200 | 502,200 | 164,000 | 279,000 | 98,400 | 167,400 | 49,200 | 83,700 |
| 6 by 1 | 157 | 278 | 282,600 | 500,400 | 157,000 | 278,000 | 94,200 | 166,800 | 47,100 | 83,400 |
| 7 by 1 | 163 | 280 | 293,400 | 504,000 | 163,000 | 280,000 | 97,800 | 168,000 | 48,900 | 84,000 |
| 8 by 1 | 153 | 279 | 275,400 | 502,200 | 153,000 | 279,000 | 91,800 | 167,400 | 45,900 | 83,700 |
| 8 by 2 | - | **462** | - | 831,600 | - | 462,000 | - | 277,200 | - | 138,600 |

The following chart shows that adding Web servers to a farm with a single database server increases the capacity of the server farm, but only to a certain point. For the read-only transaction mix, the capacity of the server farm increases steadily for up to four Web servers and stops increasing at six Web servers. For the read/write mix, the capacity does not increase significantly beyond three Web servers.

[Chart: server farm throughput versus number of Web servers for a single database server, for the read-only and read/write transaction mixes]

Total capacity does not increase because the throughput is now limited by the one database server computer. Extending capacity beyond this point requires adding another database server to the server farm.

The following chart shows that adding an additional database server to the farm can extend the total capacity of the farm if there are sufficient Web servers to handle the load.

[Chart: throughput of the 8x2 server farm compared with the single-database-server farms]
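
As a rough planning aid, the following sketch (the dictionary and function names are invented for this example, and any real deployment should be validated against your own workload) encodes the peak read/write-mix throughput measured for each tested configuration with a single database server, and picks the smallest one that covers a target user population under the stated concurrency and request-rate assumptions:

```python
# Peak read/write-mix throughput (transactions per second) from the table above.
TESTED_MIX_TPS = {
    "single server": 34, "1 by 1": 65, "2 by 1": 121, "3 by 1": 156, "4 by 1": 161,
    "5 by 1": 164, "6 by 1": 157, "7 by 1": 163, "8 by 1": 153,
}

def smallest_adequate_farm(total_users, requests_per_hour=36, peak_concurrency=0.10):
    """Return (configuration, tps) for the smallest tested configuration whose
    measured throughput covers the required load, or None if none does."""
    needed_tps = total_users * peak_concurrency * requests_per_hour / 3600
    adequate = [(cfg, tps) for cfg, tps in TESTED_MIX_TPS.items() if tps >= needed_tps]
    return min(adequate, key=lambda pair: pair[1]) if adequate else None

# Example: 120,000 total users under the typical model need 120 tps,
# so the smallest adequate tested configuration is the 2-by-1 farm (121 tps).
print(smallest_adequate_farm(120_000))
```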

About Capacity and Scale Guidelines

The capacity of Windows SharePoint Services is also affected by scale (how many objects can be created in a given scope, such as the number of documents per folder). There are very few hard limits in Windows SharePoint Services. Most of the scale guidelines are determined by performance. In other words, you can exceed these guidelines, but you may find the resulting performance to be unacceptable.

One of the most important scale dimensions is site collections per database. This scale dimension depends on the number of indexes on the database. As the number of site collections increases, the performance of the system degrades because it is serving more and more different site collections. As you can see in the following chart, there is no hard limit where performance becomes unacceptable, but performance degrades faster beyond 10,000 site collections and drops below 100 responses per second beyond 50,000 site collections.

[Chart: throughput (responses per second) versus number of site collections per database]

The other scale guidelines are shown in the following table. None of these are hard limits enforced by the system; they are guidelines for designing a server that has good overall performance. A short script for checking a planned deployment against these guidelines follows the table.

| Object | Scope | Guideline for optimum performance | Comment |
| --- | --- | --- | --- |
| Site collections | Database | 50,000 | Total throughput degrades as the number of site collections increases. |
| Web sites | Web site | 2,000 | The interface for enumerating subsites of a given Web site does not perform well much beyond 2,000 subsites. |
| Web sites | Site collection | 250,000 | You can create a very large total number of Web sites by nesting subsites. For example, 100 sites, each with 1,000 subsites, is 100,100 Web sites. |
| Documents | Folder | 2,000 | The interfaces for enumerating documents in a folder do not perform well beyond a thousand entries. |
| Documents | Library | 2,000,000 | You can create very large document libraries by nesting folders. |
| Security principals | Web site | 2,000 | The size of the access control list is limited to a few thousand security principals (that is, users and groups) in the Web site. |
| Users | Web site | 2,000,000 | You can add millions of people to your Web site by using Microsoft Windows security groups to manage security instead of individual users. |
| Items | List | 2,000 | The interface for enumerating list items does not perform well beyond a few thousand items. |
| Web Parts | Page | 100 | Pages with more than 100 Web Parts are slow to render. |
| Web Part personalization | Page | 10,000 | Pages with more than a few thousand user personalizations are slow to render. |
| Lists | Web site | 2,000 | The interface for enumerating lists and libraries in a Web site does not perform well beyond a few thousand entries. |
| Document size | File | 50 MB | File save performance degrades as the file size grows. The default maximum is 50 MB. This maximum is enforced by the system, but you can change it to any value up to 2 GB (2,047 MB) if you have applied Service Pack 1. For more information, see Configuring large file support in Installing and Using Service Packs for Windows SharePoint Services. |
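
As a planning aid, the following sketch (the dictionary layout and function name are invented for this example and are not part of Windows SharePoint Services) encodes the count-based guidelines above and flags the parts of a planned design that exceed them:

```python
# Scale guidelines from the table above, keyed by (object, scope).
SCALE_GUIDELINES = {
    ("site collections", "database"): 50_000,
    ("web sites", "web site"): 2_000,
    ("web sites", "site collection"): 250_000,
    ("documents", "folder"): 2_000,
    ("documents", "library"): 2_000_000,
    ("security principals", "web site"): 2_000,
    ("users", "web site"): 2_000_000,
    ("items", "list"): 2_000,
    ("web parts", "page"): 100,
    ("web part personalizations", "page"): 10_000,
    ("lists", "web site"): 2_000,
}

def check_plan(plan):
    """Return the (object, scope) pairs in a planned design that exceed the guidelines.

    plan -- dict mapping (object, scope) to the planned number of objects.
    """
    return {key: count for key, count in plan.items()
            if key in SCALE_GUIDELINES and count > SCALE_GUIDELINES[key]}

# Example: a design with 3,500 documents per folder would be flagged.
print(check_plan({("documents", "folder"): 3_500, ("items", "list"): 800}))
```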

Tested Read/Write Transaction Mix

The following table describes the mix of operations that make up the read/write transaction mix; a short sketch after the table shows how such a mix can be sampled. The team counted only meaningful end-user operations in the throughput numbers, but the load on the server includes supporting transactions as well, such as getting the images, style sheets, and JavaScript files for the home page.

| End user operation | Percentage |
| --- | --- |
| Get home page | 9.0% |
| Get list page (HTML) | 9.0% |
| Get list page (grid) | 9.0% |
| Get list form | 6.0% |
| Get static document | 15.0% |
| Insert list item | 1.5% |
| Edit list item | 1.5% |
| Delete list item | 1.5% |
| Insert document | 1.5% |
| Open document for edit | 1.5% |
| Save document | 1.5% |
| Delete document | 1.5% |
| List URLs | 1.5% |
| Short-term check-out | 15.0% |
| Get cached document | 15.0% |
| 404 errors | 10.0% |

Note: There are roughly two supporting transactions for each end-user transaction. In other words, the end-user operations make up about a third of the total transaction load on the server.
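
For illustration, the following sketch (the names are invented for this example) shows how a load tool could sample end-user operations in the tested proportions by using weighted random selection:

```python
import random

# The tested read/write transaction mix from the table above (operation -> weight).
READ_WRITE_MIX = {
    "get home page": 9.0, "get list page (HTML)": 9.0, "get list page (grid)": 9.0,
    "get list form": 6.0, "get static document": 15.0,
    "insert list item": 1.5, "edit list item": 1.5, "delete list item": 1.5,
    "insert document": 1.5, "open document for edit": 1.5, "save document": 1.5,
    "delete document": 1.5, "list URLs": 1.5,
    "short-term check-out": 15.0, "get cached document": 15.0, "404 errors": 10.0,
}

def next_operation():
    """Pick the next end-user operation according to the tested mix percentages."""
    operations = list(READ_WRITE_MIX)
    weights = list(READ_WRITE_MIX.values())
    return random.choices(operations, weights=weights, k=1)[0]

# Example: generate ten operations in roughly the tested proportions.
print([next_operation() for _ in range(10)])
```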

Tested Hardware and Software

The following hardware was used to gather the performance and scalability data in this topic.

Web Servers

The Web server computers were Compaq DL360s with two 1 GHz Pentium 3 processors and 1 GB of memory. The computers were running a prerelease version of Microsoft Windows Server 2003, Enterprise Edition, build 3718.

Note: The single computer tests were run on Web server hardware.

Database Servers

The database server computers were Compaq DL380s with two 1 GHz Pentium 3 processors and 2 GB of memory. The computers were running Microsoft SQL Server 2000 SP2 and a prerelease version of Microsoft Windows Server 2003, Enterprise Edition, build 3718.