Estimate performance and capacity requirements for PerformancePoint Services

 

Applies to: SharePoint Server 2010

This article describes the effect that use of PerformancePoint Services has on topologies running Microsoft SharePoint Server 2010.

Note

It is important to be aware that the specific capacity and performance figures presented in this article will differ from the figures in real-world environments. The figures presented are intended to provide a starting point for the design of an appropriately scaled environment. After you have completed your initial system design, test the configuration to determine whether the system will support the factors in your environment.

In this article:

  • Test farm characteristics

  • Test results

  • Recommendations

For general information about capacity planning for SharePoint Server 2010, see Capacity management and sizing for SharePoint Server 2010.

Test farm characteristics

Dataset

The dataset consisted of a corporate portal built by using SharePoint Server 2010 and PerformancePoint Services that contained a single, medium-sized dashboard. The dashboard contained two filters linked to one scorecard, two charts, and a grid, and was based on a single Microsoft SQL Server 2008 Analysis Services (SSAS) data source that used the AdventureWorks sample cube for SQL Server 2008 Analysis Services.

The table that follows describes the type and size of each element on the dashboard.

| Name | Description | Size |
|---|---|---|
| Filter One | Member selection filter | 7 dimension members |
| Filter Two | Member selection filter | 20 dimension members |
| Scorecard | Scorecard | 15 dimension member rows by 4 columns (2 KPIs) |
| Chart One | Line chart | 3 series by 12 columns |
| Chart Two | Stacked bar chart | 37 series by 3 columns |
| Grid | Analytic grid | 5 rows by 3 columns |

The medium dashboard used the Header and Two Columns template, and the dashboard item sizes were set to either auto-size or a specific percentage of the dashboard. Each item on the dashboard was rendered with a random height and width between 400 and 500 pixels to simulate the differences in Web browser window sizes. It is important to change the height and width of each dashboard item because charts are rendered based on Web browser window sizes.

Test scenarios and processes

This section defines the test scenarios and describes the test process that was used for each scenario. Detailed information, such as test results and specific parameters, is provided in the Test results section later in this article.

Test 1: Render a dashboard and randomly change one of the two filters five times, with a 15-second pause between interactions.

  1. Render the dashboard.

  2. Select one of the two filters, select a random filter value, and wait until the dashboard is re-rendered.

  3. Repeat four more times, randomly selecting one of the two filters and a random filter value each time.

Test 2: Render a dashboard, select a chart, and expand and collapse it five times, with a 15-second pause between interactions.

  1. Render the dashboard.

  2. Select a random member on the chart and expand it.

  3. Select another random member on the chart and collapse it.

  4. Select another random member on the chart and expand it.

  5. Select another random member on the chart and collapse it.

Test 3: Render a dashboard, select a grid, and expand and collapse it five times, with a 15-second pause between interactions.

  1. Render the dashboard.

  2. Select a random member on the grid and expand it.

  3. Select another random member on the grid and expand it.

  4. Select another random member on the grid and collapse it.

  5. Select another random member on the grid and expand it.

A single test mix was used that consisted of the following percentages of tests started.

| Test name | Test mix |
|---|---|
| Render a dashboard and randomly change one of the two filters five times. | 80% |
| Render a dashboard, select a chart, and expand and collapse it five times. | 10% |
| Render a dashboard, select a grid, and expand and collapse it five times. | 10% |

Microsoft Visual Studio 2008 Load Testing tools were used to create a set of Web tests and load tests that simulated users randomly changing filters and navigating grids and charts. The tests contained a normal distribution of 15-second pauses, also known as "think times," between interactions, and a 15-second think time between test iterations. Load was applied to produce a two-second average response time to render a scorecard or report. The average response time was measured over a period of 15 minutes after an initial 10-minute warm-up time.

Each new test iteration selected a distinct user account from a pool of 5,000 accounts and a random IP address (by using Visual Studio IP switching) from a pool of approximately 2,200 addresses.
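For illustration, the test mix and think-time behavior described above can be expressed as a small load-profile sketch in Python. The 80/10/10 mix and the 15-second mean think time come from this article; the standard deviation and the session length are assumptions added for the example.

```python
# Illustrative load profile: 80/10/10 test mix with normally distributed
# think times. The 15-second mean is from the article; the 2-second standard
# deviation is an assumption, since the article does not state one.
import random

TEST_MIX = [
    ("change filters", 0.80),
    ("expand/collapse chart", 0.10),
    ("expand/collapse grid", 0.10),
]

def next_test():
    names = [name for name, _ in TEST_MIX]
    weights = [weight for _, weight in TEST_MIX]
    return random.choices(names, weights=weights)[0]

def think_time(mean=15.0, stddev=2.0):
    # Clamp at zero so a sampled pause is never negative.
    return max(0.0, random.gauss(mean, stddev))

# Simulate the tests one virtual user would start in a short session.
for _ in range(5):
    print(next_test(), round(think_time(), 1), "seconds of think time")
```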

The test mix was run two times against the same medium-sized dashboard. In the first run, the data source authentication was configured to use the Unattended Service Account, which uses a common account to request the data. The data results are identical for multiple users, and PerformancePoint Services can use caching to improve performance. In the second run, the data source authentication was configured to use per-user identity, and the SQL Server Analysis Services cube was configured to use dynamic security. In this configuration, PerformancePoint Services uses the identity of the user to request the data. Because the data results could be different, no caching can be shared across users. In certain cases, caching for per-user identity can be shared if Analysis Services dynamic security is not configured and the Analysis Services roles, to which Microsoft Windows users and groups are assigned, are identical.

Hardware setting and topology

Lab hardware

To provide a high level of test-result detail, several farm configurations were used for testing. Farm configurations ranged from one to three Web servers, one to four application servers, and a single database server that was running Microsoft SQL Server 2008. A default enterprise installation of SharePoint Server 2010 was performed.

The following table lists the specific hardware that was used for testing.

| | Web server | Application server | Computer running SQL Server | Computer running Analysis Services |
|---|---|---|---|---|
| Processor(s) | 2px4c @ 2.66 GHz | 2px4c @ 2.66 GHz | 2px4c @ 2.66 GHz | 4px6c @ 2.4 GHz |
| RAM | 16 GB | 32 GB | 16 GB | 64 GB |
| Operating system | Windows Server 2008 R2 Enterprise | Windows Server 2008 R2 Enterprise | Windows Server 2008 R2 Enterprise | Windows Server 2008 R2 Enterprise |
| NIC | 1x1 gigabit | 1x1 gigabit | 1x1 gigabit | 1x1 gigabit |
| Authentication | NTLM and Kerberos | NTLM and Kerberos | NTLM and Kerberos | NTLM and Kerberos |

After the farm was scaled out to multiple Web servers, a hardware load balancer was used to balance the user load across multiple Web servers by using source-address affinity. Source-address affinity records the source IP address of incoming requests and the service host that they were load-balanced to, and it channels all future transactions to the same host.

Topology

The starting topology consisted of two physical servers, with one server acting as the Web and application server and the second server as the database server. This starting topology is considered a two-machine (2M) topology or a "1 by 0 by 1" topology where the number of dedicated Web servers is listed first, followed by dedicated application servers, and then database servers.

Web servers are also referred to as web front ends (WFEs) later in this document. Load was applied until limiting factors were encountered. Typically, the CPU on either the Web server or the application server was the limiting factor, and resources were then added to address that limit. The limiting factors and topologies differed significantly based on whether the data source authentication was configured to use the Unattended Service Account or per-user identity with dynamic cube security.

Test results

The test results contain three important measures to help define PerformancePoint Services capacity.

| Measure | Description |
|---|---|
| User count | Total user count reported by Visual Studio. |
| Requests per second (RPS) | Total RPS reported by Visual Studio, which includes all requests, including static file requests such as images and style sheets. |
| Views per second (VPS) | Total views that PerformancePoint Services can render. A view is any filter, scorecard, grid, or chart rendered by PerformancePoint Services, or any Web request to the rendering service URL that contains RenderWebPartContent or CreateReportHtml. To learn more about CreateReportHtml and RenderWebPartContent, see the PerformancePoint Services RenderingService Protocol Specification (https://go.microsoft.com/fwlink/p/?LinkId=200609). |

IIS logs can be parsed for these requests to help plan PerformancePoint Services capacity. Using this measure also provides a number that is much less dependent on dashboard composition: a dashboard with two views can be compared to a dashboard with 10 views. A minimal log-parsing sketch follows.
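As a starting point, the following minimal Python sketch counts view requests per second from a W3C extended format IIS log. The log path, field names, and file layout are assumptions for a typical IIS installation; adjust them to your environment.

```python
# Minimal sketch: count PerformancePoint "views per second" from an IIS log.
# Assumes the W3C extended log format with a #Fields directive; the log path
# in the usage example is hypothetical.
from collections import Counter

VIEW_MARKERS = ("RenderWebPartContent", "CreateReportHtml")

def views_per_second(log_path):
    per_second = Counter()
    fields = []
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]  # e.g. date, time, cs-uri-stem, ...
                continue
            if line.startswith("#") or not line.strip():
                continue
            row = dict(zip(fields, line.split()))
            uri = row.get("cs-uri-stem", "") + "?" + row.get("cs-uri-query", "")
            if any(marker in uri for marker in VIEW_MARKERS):
                per_second[(row.get("date"), row.get("time"))] += 1
    return per_second

# Example: peak views per second over the whole log (hypothetical path).
# counts = views_per_second(r"C:\inetpub\logs\LogFiles\W3SVC1\u_ex100101.log")
# print(max(counts.values(), default=0))
```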

Tip

When you are using a data source configured to use Unattended Service Account authentication, the rule for the ratio of dedicated servers is one Web server to every two application servers that are running PerformancePoint Services.

Tip

When you are using a data source configured to use per-user authentication, the rule for the ratio of dedicated servers is one Web server to every four or more application servers that are running PerformancePoint Services.

At topologies larger than four application servers, it is likely that the bottleneck is the Analysis Services server. Consider monitoring the CPU and query time of your Analysis Services server to determine whether you should scale out Analysis Services to multiple servers. Any delay in query time on the Analysis Services server will significantly increase the average response time of PerformancePoint Services beyond the desired threshold of two seconds.

The tables that follow show a summary of the test results for both Unattended Service Account authentication and per-user authentication when scaling out from two to seven servers; a rough sizing sketch based on these figures follows the tables. Detailed results that include additional performance counters are included later in this document.

Unattended Service Account authentication summary

| Topology (WFE x APP x SQL) | Users | Requests per second (RPS) | Views per second (VPS) |
|---|---|---|---|
| 2M (1x0x1) | 360 | 83 | 50 |
| 3M (1x1x1) | 540 | 127 | 75 |
| 4M (1x2x1) | 840 | 196 | 117 |
| 5M (1x3x1) | 950 | 215 | 129 |
| 6M (2x3x1) | 1,250 | 292 | 175 |
| 7M (2x4x1) | 1,500 | 346 | 205 |

Per-user authentication summary

| Topology (WFE x APP x SQL) | Users | Requests per second (RPS) | Views per second (VPS) |
|---|---|---|---|
| 2M (1x0x1) | 200 | 47 | 27 |
| 3M (1x1x1) | 240 | 56 | 33 |
| 4M (1x2x1) | 300 | 67 | 40 |
| 5M (1x3x1) | 325 | 74 | 44 |
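To turn these summary figures into a rough first-pass estimate, you can look up the smallest tested topology whose measured views-per-second figure covers your expected peak load. The sketch below is illustrative arithmetic over the tables above, not an official sizing formula; remember that real-world numbers will differ.

```python
# Illustrative only: pick the smallest tested topology whose measured views
# per second (VPS) covers an expected peak load, using the summary tables above.
MEASURED_VPS = {
    "unattended": [("2M (1x0x1)", 50), ("3M (1x1x1)", 75), ("4M (1x2x1)", 117),
                   ("5M (1x3x1)", 129), ("6M (2x3x1)", 175), ("7M (2x4x1)", 205)],
    "per-user":   [("2M (1x0x1)", 27), ("3M (1x1x1)", 33), ("4M (1x2x1)", 40),
                   ("5M (1x3x1)", 44)],
}

def suggest_topology(auth_mode, expected_peak_vps):
    for topology, vps in MEASURED_VPS[auth_mode]:
        if vps >= expected_peak_vps:
            return topology
    return "larger than tested; monitor Analysis Services and consider scaling it out"

# Example: a six-view dashboard rendered by 20 users per second needs ~120 VPS.
print(suggest_topology("unattended", 120))  # -> 5M (1x3x1)
```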

2M and 3M topologies

To help explain the hardware cost per transaction and the response time curve, the load tests were run with four increasing user loads to the maximum user load for the 2M and 3M topologies.

2M (1x0x1) farm results

Unattended Service Account authentication

| User count | 50 | 150 | 250 | 360 |
|---|---|---|---|---|
| Average WFE/APP CPU | 19.20% | 57.70% | 94.00% | 96.70% |
| RPS | 18 | 53 | 83 | 83 |
| Views per second | 10.73 | 31.72 | 49.27 | 49.67 |
| Average response time (sec) | 0.12 | 0.15 | 0.38 | 2 |


Per-user authentication

| User count | 50 | 100 | 150 | 200 |
|---|---|---|---|---|
| Average WFE/APP CPU | 30.80% | 61.30% | 86.50% | 93.30% |
| RPS | 17 | 32 | 43 | 47 |
| Views per second | 10.3 | 19.32 | 26.04 | 27.75 |
| Average response time (sec) | 0.28 | 0.45 | 0.81 | 2 |


3M (1x1x1) farm results

Unattended Service Account authentication

| User count | 100 | 250 | 400 | 540 |
|---|---|---|---|---|
| RPS | 36 | 87 | 124 | 127 |
| Views per second | 21 | 52 | 74 | 75 |
| Average response time (sec) | 0.12 | 0.18 | 0.65 | 2 |
| Average WFE CPU | 11% | 28% | 43% | 46% |
| Max WFE private bytes of the SharePoint Server Internet Information Services (IIS) worker process (W3WP) | 0.7 GB | 1.4 GB | 2.0 GB | 2.4 GB |
| Average APP CPU | 25% | 62% | 94% | 95% |
| Max APP private bytes of the PerformancePoint Services W3WP | 5.9 GB | 10.8 GB | 14.1 GB | 14.6 GB |


Per-user authentication

| User count | 50 | 120 | 180 | 240 |
|---|---|---|---|---|
| RPS | 17 | 39 | 52 | 56 |
| Views per second | 10 | 23 | 31 | 33 |
| Average response time (sec) | 0.28 | 0.48 | 0.91 | 2 |
| Average WFE CPU | 5% | 12% | 17% | 19% |
| Max WFE private bytes of the SharePoint Server W3WP | 0.78 GB | 1.3 GB | 1.6 GB | 1.9 GB |
| Average APP CPU | 25% | 57% | 81% | 81% |
| Max APP private bytes of the PerformancePoint Services W3WP | 19 GB | 20.1 GB | 20.5 GB | 20.9 GB |


4M+ results for Unattended Service Account authentication

Starting with a 4M topology, load was applied to produce a two-second average response time to render a scorecard or report. Next, an additional server was added to resolve the limiting factor (always CPU on the Web server or the application server), and then the test mix was re-run. This process was repeated until a total of seven servers was reached.

| | 4M (1x2x1) | 5M (1x3x1) | 6M (2x3x1) | 7M (2x4x1) |
|---|---|---|---|---|
| User count | 840 | 950 | 1,250 | 1,500 |
| RPS | 196 | 216 | 292 | 346 |
| Views per second | 117 | 131 | 175 | 206 |
| Average WFE CPU | 77% | 63% | 54% | 73% |
| Max WFE private bytes of the SharePoint Server W3WP | 2.1 GB | 1.7 GB | 2.1 GB | 2.0 GB |
| Average APP CPU | 83% | 94% | 88% | 80% |
| Max APP private bytes of the PerformancePoint Services W3WP | 16 GB | 12 GB | 15 GB | 15 GB |

4M+ results for per-user authentication

The same testing was repeated for a data source configured for per-user authentication. Note that adding a fourth application server (the 6M topology) did not increase the number of users or requests per second that PerformancePoint Services could support, because of the query delays that Analysis Services produced.

| | 3M (1x1x1) | 4M (1x2x1) | 5M (1x3x1) | 6M (1x4x1) |
|---|---|---|---|---|
| User count | 240 | 300 | 325 | 325 |
| RPS | 56 | 67 | 74 | 74 |
| Views per second | 33 | 40 | 44 | 45 |
| Average WFE CPU | 19% | 24% | 26% | 12% |
| Max WFE private bytes of the SharePoint Server W3WP | 2.1 GB | 1.9 GB | 1.9 GB | 1.5 GB |
| Average APP CPU | 89% | 68% | 53% | 53% |
| Max APP private bytes of the PerformancePoint Services W3WP | 20 GB | 20 GB | 20 GB | 20 GB |
| Analysis Services CPU | 17% | 44% | 57% | 68% |


Recommendations

Hardware recommendations

The memory and processor counters from the test tables should be used to determine the hardware requirements for an installation of PerformancePoint Services. For Web servers, PerformancePoint Services follows the recommended SharePoint Server 2010 hardware requirements. Application server hardware requirements may have to be changed when PerformancePoint Services consumes a large amount of memory. This happens when data sources are configured for per-user authentication or when the application server runs many dashboards with long data source timeouts.

The database server did not become a bottleneck in the tests; it peaked at a maximum CPU usage of 31% during the 7M test with Unattended Service Account authentication. The PerformancePoint Services content definitions, such as reports, scorecards, and KPIs, are stored in SharePoint lists and are cached in memory by PerformancePoint Services, which reduces the load on the database server.

Memory consumption

PerformancePoint Services can consume large amounts of memory in certain configurations, so it is important to monitor the memory usage of the PerformancePoint Services application pool. PerformancePoint Services caches several items in memory, including Analysis Services and other data source query results, for the data source cache lifetime (10 minutes by default). When a data source is configured for Unattended Service Account authentication, these query results are stored only once and shared across multiple users. However, when a data source is configured for per-user authentication and Analysis Services dynamic cube security, the query results are stored once per user per view (that is, per filter combination).

The underlying cache API that PerformancePoint Services uses is the ASP.NET Cache API. The significant advantage of using this API is that ASP.NET manages the cache and removes items (also known as a trim) based on memory limits to prevent out-of-memory errors. The default memory limit is 60 percent of physical memory. After reaching these limits, PerformancePoint Services still rendered views but response times increased significantly during the short period when ASP.NET removed cached entries.

The performance counter "ASP.NET Applications \ Cache API Trims" of the application pool that hosts PerformancePoint Services can be used to monitor the ASP.NET cache trims that occur because of memory pressure. If this counter is greater than zero, review the following table for possible solutions.

| Problem | Solution |
|---|---|
| Application server processor usage is low, and other services are running on the application server. | Add more physical memory, or limit the memory of the ASP.NET cache (a configuration sketch follows this table). |
| Application server processor usage is low, and only PerformancePoint Services is running on the application server. | If acceptable, configure the ASP.NET cache settings to let the cache use more memory, or add more memory. |
| Application server processor usage is high. | Add another application server. |
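If you decide to adjust the ASP.NET cache memory limits, they are set on the cache element in the web.config file of the application pool that hosts PerformancePoint Services. The hedged sketch below patches that element with Python; privateBytesLimit and percentagePhysicalMemoryUsedLimit are standard ASP.NET cache attributes, but the file path and the values shown are assumptions for your environment. Back up web.config before changing it.

```python
# Hedged sketch: set ASP.NET cache memory limits in a web.config file.
# privateBytesLimit (bytes) and percentagePhysicalMemoryUsedLimit are standard
# <system.web>/<caching>/<cache> attributes; the path below is hypothetical.
import xml.etree.ElementTree as ET

def set_cache_limits(web_config_path, private_bytes_limit, percent_physical_memory):
    tree = ET.parse(web_config_path)
    node = tree.getroot()  # <configuration>
    for tag in ("system.web", "caching", "cache"):
        child = node.find(tag)
        if child is None:
            child = ET.SubElement(node, tag)
        node = child
    node.set("privateBytesLimit", str(private_bytes_limit))
    node.set("percentagePhysicalMemoryUsedLimit", str(percent_physical_memory))
    tree.write(web_config_path, encoding="utf-8", xml_declaration=True)

# Example (hypothetical path and values): cap the cache at 12 GB or 60% of RAM.
# set_cache_limits(r"C:\path\to\service\web.config", 12 * 1024**3, 60)
```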

A data source configured to use per-user authentication can share query results and cache entries if the Analysis Services role membership sets of the users are identical and if dynamic cube security is not configured. This is a new feature for PerformancePoint Services in Microsoft SharePoint Server 2010. For example, if user A is in roles 1 and 2, user B is in roles 1 and 2, and user C is in roles 1, 2, and 3, only users A and B share cache entries. If dynamic cube security is configured, no cache entries are shared, not even between users A and B.
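The sketch below illustrates this sharing rule only; the cache-key scheme is hypothetical and is not the actual PerformancePoint Services implementation.

```python
# Illustrative only: identical Analysis Services role sets can map to a shared
# cache key when dynamic cube security is off. Hypothetical key scheme.
def cache_key(view_id, roles, dynamic_security_user=None):
    if dynamic_security_user is not None:
        # Dynamic cube security: results may differ per user, so never share.
        return (view_id, dynamic_security_user)
    # No dynamic security: users with identical role sets share results.
    return (view_id, frozenset(roles))

a = cache_key("chart-one", {"Role1", "Role2"})           # user A
b = cache_key("chart-one", {"Role2", "Role1"})           # user B, same roles
c = cache_key("chart-one", {"Role1", "Role2", "Role3"})  # user C, extra role
print(a == b)  # True  -> users A and B share a cache entry
print(a == c)  # False -> user C gets a separate entry
```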

Analysis Services

When PerformancePoint Services was being tested with per-user authentication, two Analysis Services properties were changed to improve multiple-user throughput performance. The following table shows the properties that were changed and the new value of each property; a hedged sketch for applying them follows the table.

| Analysis Services property | Value |
|---|---|
| Memory \ HeapTypeForObjects | 0 |
| Memory \ MemoryHeapType | 2 |
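These properties can be changed in SQL Server Management Studio (with advanced properties shown) or by editing the msmdsrv.ini file and then restarting the Analysis Services service. The sketch below patches msmdsrv.ini with Python; the file path and XML layout are typical for a default SQL Server 2008 Analysis Services instance, but they are assumptions you should verify, and you should back up the file first.

```python
# Hedged sketch: set Memory\MemoryHeapType = 2 and Memory\HeapTypeForObjects = 0
# by editing msmdsrv.ini (an XML file). The path and element layout are
# assumptions for a default SQL Server 2008 Analysis Services install.
import xml.etree.ElementTree as ET

INI = r"C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini"

tree = ET.parse(INI)
memory = tree.getroot().find("Memory")  # under <ConfigurationSettings>
for name, value in (("MemoryHeapType", "2"), ("HeapTypeForObjects", "0")):
    element = memory.find(name)
    if element is None:
        element = ET.SubElement(memory, name)
    element.text = value
tree.write(INI, encoding="utf-8", xml_declaration=True)
# Restart the Analysis Services service for the change to take effect.
```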

These two memory settings configure Analysis Services to use the Windows heap instead of the Analysis Services heap. Before these properties were changed, and as user load was added, response times increased significantly from 0.2 seconds to over 30 seconds while CPU usage on the Web, application, and Analysis Services servers remained low. To troubleshoot, query times were collected by using Analysis Services dynamic management views (DMVs), which showed individual query times increasing from 10 milliseconds to 5,000 milliseconds. These results led to modifying the memory settings above.
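For reference, a hedged sketch of collecting per-command elapsed times from the DISCOVER_COMMANDS DMV follows. It assumes the third-party pyadomd package and an installed ADOMD.NET client, neither of which ships with SharePoint Server or is used by this article; the same query text can simply be run from SQL Server Management Studio instead.

```python
# Hedged sketch: sample Analysis Services query times via the DISCOVER_COMMANDS
# DMV. Assumes the third-party pyadomd package and an ADOMD.NET client; the
# connection string and the one-second threshold are placeholders.
from pyadomd import Pyadomd

CONN = "Provider=MSOLAP;Data Source=localhost;"
QUERY = """
SELECT SESSION_SPID, COMMAND_ELAPSED_TIME_MS, COMMAND_TEXT
FROM $SYSTEM.DISCOVER_COMMANDS
"""

with Pyadomd(CONN) as connection:
    with connection.cursor().execute(QUERY) as cursor:
        for spid, elapsed_ms, text in cursor.fetchall():
            # Flag anything slower than one second as a candidate bottleneck.
            if elapsed_ms and elapsed_ms > 1000:
                print(spid, elapsed_ms, (text or "")[:80])
```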

It is important to note that while this greatly improved throughput, according to the Analysis Services team, changing these settings has a small but measurable cost on single-user queries.

Before changing any Analysis Services properties, consult the SQL Server 2008 White Paper: Analysis Services Performance Guide (https://go.microsoft.com/fwlink/p/?LinkID=165486) for best practices on improving multiple-user throughput performance.

Common bottlenecks and their causes

During performance testing, several common bottlenecks were revealed. A bottleneck is a condition in which the capacity of a particular constituent of a farm is reached, causing a plateau or decrease in farm throughput. When high processor utilization was encountered as a bottleneck, additional servers were added to resolve it. The following table lists some common bottlenecks and possible resolutions, assuming that processor utilization was low and was not the bottleneck.

| Possible bottleneck | Cause and what to monitor | Resolution |
|---|---|---|
| Analysis Services memory heap performance | By default, Analysis Services uses its own memory heap instead of the Windows heap, which provides poor multiple-user throughput. Review Analysis Services query times by using dynamic management views (DMVs) to see whether query times increase with user load while Analysis Services processor utilization remains low. | Change Analysis Services to use the Windows heap. See the "Analysis Services" section earlier in this article and the SQL Server 2008 White Paper: Analysis Services Performance Guide (https://go.microsoft.com/fwlink/p/?LinkID=165486) for instructions. |
| Analysis Services query and processing threads | By default, Analysis Services limits the number of query and processing threads. Long-running queries and high user loads can use all available threads. Monitor the idle threads and job queue performance counters under the MSAS 2008:Threads category. | Increase the number of threads available for querying and processing. See the "Analysis Services" section of the SQL Server 2008 White Paper: Analysis Services Performance Guide (https://go.microsoft.com/fwlink/p/?LinkID=165486) for instructions. |
| Application server memory | PerformancePoint Services caches Analysis Services and other data source query results in memory for the data source cache lifetime, and these items can consume a large amount of memory. Monitor the ASP.NET Applications \ Cache API Trims counter of the PerformancePoint Services application pool to determine whether ASP.NET is forcing cache trims because of low memory. | Add memory or increase the default ASP.NET cache memory limits. See the "Memory consumption" section earlier in this document, the ASP.NET cache element settings (https://go.microsoft.com/fwlink/p/?LinkId=200610), and Thomas Marquardt's blog post on the history of the ASP.NET cache memory limits (https://go.microsoft.com/fwlink/p/?LinkId=200611). |
| WCF throttling settings | PerformancePoint Services is implemented as a WCF service, and WCF limits the maximum number of concurrent calls as a service throttling behavior. Long-running queries could hit this limit, although this bottleneck is uncommon. Monitor the WCF ServiceModelService "Calls Outstanding" performance counter for PerformancePoint Services and compare it to the configured maximum number of concurrent calls. | If needed, change the Windows Communication Foundation (WCF) throttling behavior; a hedged configuration sketch follows this table. See WCF service throttling behaviors (https://go.microsoft.com/fwlink/p/?LinkId=200612) and Wenlong Dong's blog post on WCF Request Throttling and Server Scalability (https://go.microsoft.com/fwlink/p/?LinkId=200613). |
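For the WCF throttling row above, serviceThrottling is a standard WCF configuration element. The sketch below raises maxConcurrentCalls in a service's web.config; the file path, element placement, and value are assumptions for illustration, not the shipped PerformancePoint Services configuration. Back up web.config before changing it.

```python
# Hedged sketch: raise the WCF maxConcurrentCalls throttle in a web.config.
# <serviceThrottling> is standard WCF configuration; the path and value in the
# usage example are hypothetical.
import xml.etree.ElementTree as ET

def set_max_concurrent_calls(web_config_path, max_calls):
    tree = ET.parse(web_config_path)
    node = tree.getroot()  # <configuration>
    for tag in ("system.serviceModel", "behaviors", "serviceBehaviors",
                "behavior", "serviceThrottling"):
        child = node.find(tag)
        if child is None:
            child = ET.SubElement(node, tag)
        node = child
    node.set("maxConcurrentCalls", str(max_calls))
    tree.write(web_config_path, encoding="utf-8", xml_declaration=True)

# Example (hypothetical path and value): allow 64 concurrent calls.
# set_max_concurrent_calls(r"C:\path\to\service\web.config", 64)
```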

Performance monitoring

To help you determine when you have to scale up or scale out the system, use performance counters to monitor the health of the system. PerformancePoint Services is an ASP.NET WCF service and can be monitored by using the same performance counters that are used to monitor any other ASP.NET WCF service. In addition, use the information in the following table to determine supplementary performance counters to monitor, and the process to which each counter should be applied; a sampling sketch follows the table.

| Performance counter | Counter instance | Notes |
|---|---|---|
| ASP.NET Applications \ Cache API Trims | PerformancePoint Services application pool | If the value is greater than zero, review the "Memory consumption" section earlier in this article. |
| MSAS 2008:Threads \ Query pool idle threads | N/A | If the value is zero, review the "Analysis Services" section in the SQL Server 2008 White Paper: Analysis Services Performance Guide (https://go.microsoft.com/fwlink/p/?LinkID=165486). |
| MSAS 2008:Threads \ Query pool job queue length | N/A | If the value is greater than zero, review the "Analysis Services" section in the SQL Server 2008 White Paper: Analysis Services Performance Guide (https://go.microsoft.com/fwlink/p/?LinkID=165486). |
| MSAS 2008:Threads \ Processing pool idle threads | N/A | If the value is zero, review the "Analysis Services" section in the SQL Server 2008 White Paper: Analysis Services Performance Guide (https://go.microsoft.com/fwlink/p/?LinkID=165486). |
| MSAS 2008:Threads \ Processing pool job queue length | N/A | If the value is greater than zero, review the "Analysis Services" section in the SQL Server 2008 White Paper: Analysis Services Performance Guide (https://go.microsoft.com/fwlink/p/?LinkID=165486). |
| WCF ServiceModelService 3.0.0.0(*) \ Calls Outstanding | PerformancePoint Services instance | If the value is greater than zero, see WCF Request Throttling and Server Scalability (https://go.microsoft.com/fwlink/p/?LinkID=200613). |
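As one way to collect these counters, the built-in Windows typeperf tool can sample them from the command line; the sketch below wraps it in Python. Counter paths are locale-specific, the MSAS 2008 counters exist only on the Analysis Services computer, and the (__Total__) ASP.NET instance is a stand-in for your PerformancePoint Services application pool instance.

```python
# Hedged sketch: sample the counters above with the built-in typeperf tool.
# Run on the computer that hosts each counter; instance names are assumptions.
import subprocess

COUNTERS = [
    r"\ASP.NET Applications(__Total__)\Cache API Trims",
    r"\MSAS 2008:Threads\Query pool idle threads",
    r"\MSAS 2008:Threads\Query pool job queue length",
    r"\MSAS 2008:Threads\Processing pool idle threads",
    r"\MSAS 2008:Threads\Processing pool job queue length",
]

# Take five samples, one second apart, and print typeperf's CSV output.
result = subprocess.run(
    ["typeperf"] + COUNTERS + ["-sc", "5"],
    capture_output=True, text=True, check=False,
)
print(result.stdout)
```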

See Also

Concepts

Plan for PerformancePoint Services (SharePoint Server 2010)