
Key performance metrics for Project Server 2013

Published: July 16, 2012

Summary:  Throughput and response time are two common metrics for measuring required, expected, or actual performance of a Project Server 2013 system.

Applies to:  Project Server 2013 

This article defines these two metrics, because they are important factors for measuring performance in Project Server 2013.

Throughput in a Project Server 2013 deployment

Throughput is a measure of the number of operations that the system can handle in a unit of time. Throughput is typically expressed in operations per second. However, you have to clearly determine what an "operation" is in every specific context. For example, take a Web page: You can think of the serving of a whole page as one operation, or you can think of all the individual HTTP requests that the server receives to serve the page as separate operations. (A Web page can contain images and other resources that are requested independently). These two definitions should clarify why you have to be clear about what an "operation" is when you deal with a throughput measure.
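The difference between the two counting definitions can be made concrete with a short sketch. The numbers below (120 pages served in one minute, 8 HTTP requests per page) are hypothetical, chosen only to show how the choice of "operation" changes the throughput figure:

```python
# Illustrative only: the same one-minute trace counted two ways.
pages_served = 120       # whole pages served in the interval (hypothetical)
requests_per_page = 8    # HTML plus images, scripts, etc. (hypothetical)
elapsed_seconds = 60

# Counting each whole page as one operation:
page_throughput = pages_served / elapsed_seconds                          # 2.0 ops/s

# Counting each HTTP request as one operation:
request_throughput = pages_served * requests_per_page / elapsed_seconds  # 16.0 ops/s

print(page_throughput, request_throughput)
```

The same server activity yields an 8x difference in the reported throughput, which is why the definition of "operation" must accompany any throughput number.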

Estimating the required throughput for a system is a challenge that requires a deep and thorough understanding of the usage patterns of the users. An industry average suggests that one operation per second maps to 1,000 users, based on the following calculation:

  1. 1,000 users work on average at 10 percent concurrency.

  2. Therefore, on average there are 100 concurrent users on a 1,000-user system.

  3. Each of the 100 concurrent users performs, on average, one operation every 100 seconds (the user "think time").

  4. If an active user pauses 100 seconds between operations, the user will generate 36 operations per hour (3,600 seconds in an hour divided by 100 seconds between user requests equals 36 operations generated by the user).

  5. If users average 36 operations per hour, and there are 100 concurrent users, the concurrent users will request on average a total of 3,600 operations per hour. Because there are 3,600 seconds in an hour, users will require a solution that can provide one operation per second (3,600 operations per hour / 3,600 seconds per hour = 1 operation per second).
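The five steps above reduce to a single calculation, sketched here as a small function. The default concurrency (10 percent) and think time (100 seconds) are the industry-average figures from the list; substitute your own measurements:

```python
def required_throughput(total_users, concurrency=0.10, think_time_seconds=100):
    """Estimate required operations per second for a user population."""
    concurrent_users = total_users * concurrency                    # step 2
    ops_per_user_per_hour = 3600 / think_time_seconds               # step 4: 36 ops/hour
    total_ops_per_hour = concurrent_users * ops_per_user_per_hour   # step 5
    return total_ops_per_hour / 3600                                # ops per second

print(required_throughput(1000))  # 1.0 op/s for 1,000 users
```

Running the same function with a 5,000-user population and the same assumptions gives 5 operations per second, so the estimate scales linearly with user count.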

Of course, the assumptions of the previous calculation should be adapted to your specific scenario with regard to user concurrency, peak factors, and usage patterns. Be aware that a throughput of 10 operations per second does not mean that every operation is fully processed in 0.1 second, but only that the system is handling 10 operations in that second. That is why response time is a separate metric, as important to performance as throughput.

Response time in a Project Server 2013 deployment

Independent of how many operations the system can manage at the same time, another measure of performance that is even more important to users is absolute response time. Response-time degradation can be a good indicator of capacity issues. There are a range of potential response-time bottlenecks, such as disk access, network I/O, memory, and processor problems. Response times depend significantly on several factors such as operation types, data profiles, systems configuration, and so on. It is also important that you define in detail the acceptance thresholds in response times for all the different operations that you are considering.
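Defining per-operation acceptance thresholds can be as simple as comparing collected response-time samples against a limit for each operation type. The sketch below does this with hypothetical sample data and thresholds; neither the operation names nor the limits come from Project Server documentation:

```python
import statistics

def check_operation(times, threshold_seconds):
    """Summarize response-time samples against an acceptance threshold."""
    return {
        "mean": statistics.mean(times),
        "worst": max(times),
        "within_threshold": max(times) <= threshold_seconds,
    }

# Hypothetical response-time samples in seconds for two operation types.
open_project = [0.8, 1.1, 0.9, 2.4, 1.0, 1.2]
save_project = [1.5, 1.7, 1.6, 3.9, 1.8, 1.6]

print(check_operation(open_project, threshold_seconds=2.5))
print(check_operation(save_project, threshold_seconds=3.0))
```

A worst-case (maximum) comparison is deliberately strict; in practice, capacity planning often uses a percentile instead, so that a single outlier does not fail an otherwise healthy operation type.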
