
Plan hardware architecture in Project Server 2010

 

Applies to: Project Server 2010

Topic Last Modified: 2013-12-18

Many factors can have a major effect on throughput in Microsoft Project Server 2010. These factors include the number of users; the type, complexity, and frequency of user operations; the number of post-backs in an operation; and the performance of data connections. You should carefully consider the factors discussed in this section when you plan your hardware architecture. Project Server can be deployed and configured in a wide variety of ways. As a result, there is no simple way to estimate how many users can be supported by a given number of servers. Therefore, make sure that you conduct testing in your own environment before you deploy Project Server 2010 in a production environment.

This article describes the tested performance and capacity limits of Microsoft Project Server 2010, provides information about the test environment and test results, and offers guidelines for acceptable performance. Use the information in this article to estimate throughput targets for Project Server.

When you undertake capacity planning for Microsoft Project Server 2010, be aware of the variables that can affect the performance of a Project Server deployment.

Because of the rich set of functionality that Project Server provides, deployments that seem similar when described at a high level can differ significantly in their actual performance characteristics. It is not enough to characterize your demands by the number of projects or the number of users in the system alone. Thinking about the performance of your Project Server deployment requires a more nuanced and holistic approach. For example, workloads, and consequently your hardware needs, will differ in relation to the following variables:

 

Factor Characteristics

Projects

  • Number of projects

  • Typical project sizes with regard to tasks

  • Number of project-level custom fields

  • Level of linking (dependencies) between tasks

Users

  • Concurrency of users. How many users will be hitting the system at the same time? What is the average load, and what are the spikes in traffic?

  • What security permissions do users have? This affects both the amount of data the server has to present to the user at a given time and the complexity of the security checks the server has to perform.

  • Geographic distribution of users. When users are spread over large geographic areas, network latency can degrade performance. Wide distribution also affects usage patterns: users are likely to hit the servers at different times of day, which makes it harder to find low-traffic periods in which to run maintenance tasks such as backups, reporting, or Active Directory synchronization.

Usage Patterns

  • Workload conditions. Which features are commonly used? For example, a deployment that uses timesheets heavily will have different characteristics than one that does not.

  • Average time between page requests

  • Average session time

  • Payload of Pages. How many Web Parts do you have on a given page? How much data do they contain?

There are many more variables that can affect performance in a given environment, and each of these variables can affect performance in different areas. Some of the test results and recommendations in this article might be related to features or user operations that do not exist in your environment, and therefore do not apply to your solution. Only thorough testing can provide you with exact data related to your own environment.

Other variables to consider:

Concurrency of Users: Concurrent user load is often a significant factor in setting capacity requirements. You may have fewer users in the system, but they may all transact with the server simultaneously during your “peak” traffic periods. For example, an organization that has its users all submit status/timesheet updates at the same time of the week will likely notice a substantial decrease in performance during those periods. If you have heavy peak usage periods, plan to add additional resources to the topology recommended for your dataset.
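As an illustration of this kind of peak-load reasoning, the following sketch estimates a peak request rate from the total user count, an assumed peak concurrency fraction, and an assumed per-user request rate. All numeric inputs here are hypothetical planning values, not figures from Project Server documentation; substitute measurements from your own environment.

```python
# Illustrative peak-load estimate. The concurrency fraction and per-user
# request rate are hypothetical planning inputs, not Microsoft figures.

def peak_requests_per_sec(total_users, peak_concurrency,
                          requests_per_user_per_hour):
    """Estimate server request rate during a peak period.

    total_users: all provisioned Project Server users
    peak_concurrency: fraction of users active at the same time (e.g. 0.10)
    requests_per_user_per_hour: average requests per active user in the peak hour
    """
    concurrent_users = total_users * peak_concurrency
    return concurrent_users * requests_per_user_per_hour / 3600.0

# Example: 10,000 users, 10% submitting timesheets in the same hour,
# each generating about 30 requests in that hour.
rate = peak_requests_per_sec(10_000, 0.10, 30)
print(f"{rate:.1f} requests/sec at peak")  # 8.3 requests/sec
```

A calculation like this gives you a target rate to drive against a staging environment, rather than a sizing answer by itself.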

Split of User Roles: The distribution of your users among Administrators, Portfolio Administrators, Project Managers, and Team Members affects the performance of your deployment because each type of user has access to a different amount of data. Users in different security categories may vary in how many projects and resources they are able to see. Administrators, for example, can see all projects on the server when they load Project Center, and all resources when they load Resource Center. In comparison, a Project Manager sees only his or her own projects. As a result, users who can see more data may perceive diminished performance. Where possible, we suggest that you limit the number of projects, tasks, or resources shown in a given view by defining appropriate filters in the views that you define in Server Settings > Manage Views.

Global Distribution of Users

Issues, Risks, and Deliverables: Having larger numbers of these entities may place additional load on your SQL Server. In particular, it is the act of viewing and interacting with these entities in the Project site that is likely to create the additional load. If you use these features heavily, you may want to allocate additional resources to your deployment of SQL Server to maintain a high level of performance. Because these artifacts and the Project site functionality are SharePoint sites and lists, consult the documentation about scaling SharePoint sites and lists.

Calendars: Custom calendars can be defined for projects, tasks, and resources. These primarily affect the scheduling engine, placing higher CPU load on the application and database servers.

The datasets described in this section are characterized by the variables listed and explained in the following table. These variables may not capture all of the factors that affect the performance of Project Server (for example, they do not capture the mix of features that you tend to use in your deployment). However, they do capture much of the information that is significant in determining appropriate capacity.

 

| Entity | Description/Notes | Small | Medium | Large |
| --- | --- | --- | --- | --- |
| Projects | | 100 | 5,000 | 20,000 |
| Tasks | | 17,125 | 856,250 | 3,425,000 |
| Average Tasks Per Project | | 171.25 | 171.25 | 171.25 |
| Task Transaction History | The number of times status tends to be submitted and approved for any given task | 10 | 100 | 1,000 |
| Assignments | | 22,263 | 1,113,125 | 4,500,000 |
| Average Assignments Per Task | | 1.3 | 1.3 | 1.3 |
| Approvals | Pending updates per manager | 50 | 600 | 3,000 |
| Users | | 1,000 | 10,000 | 50,000 |
| Custom Fields: Project (Formula) | | 3 | 20 | 25 |
| Custom Fields: Project (Manual) | | 2 | 40 | 50 |
| Custom Fields: Task (Formula) | Task formula fields tend to take the largest toll on performance because they must be computed for each task | 6 | 12 | 15 |
| Custom Fields: Task (Manual) | | 4 | 8 | 10 |
| Custom Fields: Assignment Rolldown | | 50% | 50% | 50% |
| Custom Fields: Resource | | 10 | 20 | 25 |
| Custom Fields: Lookup Table | | 2 | 15 | 100 |
| Timesheets (per year) | The more you use Timesheets, the more resource demand is placed on SQL Server | 52,000 | 780,000 | 8,320,000 |
| Timesheet Lines | | 5 | 10 | 10 |
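The task and assignment counts in the dataset table follow from the per-project and per-task averages it lists, which makes the ratios easy to reuse when estimating your own dataset size. A minimal sketch follows; the function name and the half-up rounding convention are illustrative choices, and note that the large dataset's assignment figure of 4,500,000 is rounded up in the table from 20,000 × 171.25 × 1.3 = 4,452,500.

```python
import math

# Ratios taken from the dataset table above.
AVG_TASKS_PER_PROJECT = 171.25
AVG_ASSIGNMENTS_PER_TASK = 1.3

def dataset_totals(projects):
    """Return (tasks, assignments) implied by the table's averages."""
    tasks = projects * AVG_TASKS_PER_PROJECT
    assignments = tasks * AVG_ASSIGNMENTS_PER_TASK
    # Half-up rounding reproduces the table's whole-number entries.
    return math.floor(tasks + 0.5), math.floor(assignments + 0.5)

print(dataset_totals(100))    # small dataset  -> (17125, 22263)
print(dataset_totals(5000))   # medium dataset -> (856250, 1113125)
```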

The following sections provide general performance and capacity recommendations. Use these recommendations to identify a suitable starting topology for your requirements, and to decide whether you have to scale out or scale up the starting topology.

Throughout this article we refer to the three server roles that make up a complete Project Server 2010 deployment: the Web Front End server role, the Application server role, and the Database (SQL) server role. The Web Front End servers act as the interface for users accessing Project Server. The Application server handles requests to the data tier and implements the business logic of Project Server 2010. Lastly, the database tier is the data source, housing the Project Server 2010 databases. For small deployments, the Web Front End, Application, and Database server roles may be combined on the same physical computer. For larger deployments, it may be necessary to separate these roles onto separate computers, or even to have multiple physical computers acting in the same role.

This section suggests a recommended topology for each of the small, medium, and large dataset sizes characterized earlier in the "Typical datasets" section. The recommended topologies for each dataset should be sufficient for obtaining reasonable performance with most usage patterns on those dataset sizes. However, we encourage you to take into account the specific recommendations given throughout the rest of this article for determining whether you need to expand beyond the topology that is recommended for your approximate dataset. In general, you should monitor the performance metrics of your topology and scale it accordingly if you are unsatisfied with the performance characteristics.

Note that because Project Server 2010 coexists with SharePoint Server 2010, it uses additional resources (processor, RAM, and hard disk). The guideline requirements for SharePoint Server 2010 are also valid for a Project Server 2010 installation with a small data set and light usage. However, for more substantial data sets and usage patterns, additional hardware resources are required. For deployment on a stand-alone computer with a small data set, 16 GB of RAM is advised to ensure a high level of perceived performance. Beyond this, if possible, we recommend that you separate your Database server from the Application and Web Front End tiers by placing your databases on a dedicated computer that is running SQL Server.

The following table lists the specifications for a single server with built-in database installations and server farm installations that include a single server or multiple servers in the farm.

 

| Component | Recommended |
| --- | --- |
| Processor | 64-bit, four-core, 2.5 gigahertz (GHz) minimum per core |
| RAM | 4 GB for developer or evaluation use; 8 GB for single-server and multiple-server farm installations for production use |
| Hard Disk | 80 GB |

 

| Component | Recommended |
| --- | --- |
| Processor | 64-bit, four-core, 2.5 GHz minimum per core. (If your dataset size is considerably larger than the medium dataset, eight cores are recommended.) |
| RAM | 4 GB for developer or evaluation use; 8 GB for single-server and multiple-server farm installations for production use |
| Hard Disk | 80 GB |

The minimum requirements specified for medium datasets can be scaled out and scaled up to handle additional load. The following scaled-up and scaled-out topologies describe considerations for handling increased user load and increased data load.

As a general prescription, you should prepare to handle additional user load and data load by having sufficient computers to add Web Front End servers and Application Servers to your topology. The hardware specifications of your Web Front End servers and Application servers can remain largely the same. A 4 × 2 × 1 topology should be sufficient for handling the needs of most medium data sets and usage patterns. Scaling out your Application and Web Front End servers will add additional load on your deployment of SQL Server, which you should compensate for by adding more memory and CPU resources. The following SQL Server specification should be able to handle the performance needs of most medium datasets. The best way to identify whether the topology you have designed satisfies your performance needs is to set up a staging environment to test your topology and monitor the performance characteristics.
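As a rough way to check whether a topology such as 4 × 2 × 1 fits your load, you can size the Web Front End tier from a measured per-server request capacity. The capacity and headroom figures below are hypothetical, as Microsoft publishes no per-server request rates; measure them in the staging environment described above.

```python
import math

def wfe_servers_needed(peak_requests_per_sec, requests_per_sec_per_wfe,
                       headroom=0.7):
    """Size the Web Front End tier so that each server stays below
    `headroom` (70% by default) of its measured request capacity."""
    usable = requests_per_sec_per_wfe * headroom
    return max(1, math.ceil(peak_requests_per_sec / usable))

# Example: 50 requests/sec at peak against servers measured at 20 requests/sec.
print(wfe_servers_needed(50, 20))  # -> 4, the WFE count in a 4 x 2 x 1 topology
```

The headroom factor leaves capacity for traffic spikes and for surviving the loss of one server; tighten or relax it based on your own availability requirements.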

 

| Component | Recommended |
| --- | --- |
| Processor | 64-bit, four-core, 2.5 GHz minimum per core |
| RAM | 4 GB for developer or evaluation use; 8 GB for single-server and multiple-server farm installations for production use |
| Hard Disk | 80 GB |

 

| Component | Recommended |
| --- | --- |
| Processor | 64-bit, four-core, 2.5 GHz minimum per core |
| RAM | 4 GB for developer or evaluation use; 8 GB for single-server and multiple-server farm installations for production use |
| Hard Disk | 80 GB |

 

| Component | Recommended |
| --- | --- |
| Processor | 64-bit, eight-core, 2.5 GHz minimum per core |
| RAM | 32 GB |
| Hard Disk | 160 GB |

For large datasets, the data load is the most substantial performance bottleneck.

Generally, at a minimum for large datasets, you will want a 4 × 2 × 1 topology. The hardware characteristics of the Web Front End and Application servers can generally remain the same as those recommended for the small and medium datasets. However, because the SQL Server installation will be the bottleneck, it may constrain your ability to scale out to additional Web Front End and Application servers: if data load is your bottleneck, adding Web Front End and Application servers may not produce an improvement in throughput.

For large datasets, if the SharePoint Server 2010 instance with which Project Server 2010 coexists is also receiving heavy usage (that is, you are not using that SharePoint Server 2010 deployment exclusively for Project Server 2010 functionality), we recommend separating the four Project Server 2010 databases from the SharePoint Server 2010 content databases by placing them on their own dedicated instance of SQL Server.

Given that data throughput will be the bottleneck, you should invest in additional resources on the SQL Server tier of your topology. You can “scale-up” your installation of SQL Server by adding RAM, CPU, and hard disk resources. In the following sections we list the minimum and recommended specifications for the SQL Server tier of a large dataset topology.

 

| Component | Recommended |
| --- | --- |
| Processor | 64-bit, eight-core, 2.5 GHz minimum per core |
| RAM | 32 GB |
| Hard Disk | 250 GB |

 

| Component | Recommended |
| --- | --- |
| Processor | 64-bit, eight-core, 2.5 GHz minimum per core |
| RAM | 64 GB |
| Hard Disk | 300 GB or more. Put your reporting database on a separate database server. Ideally, separate and prioritize data among disks: place your data files and your SQL Server 2008 transaction logs on separate physical hard disks. RAID 5 provides a good compromise between reliability and throughput. |

Project Server 2010 supports running on virtual machines. Most of the advice given for virtualization of SharePoint Server 2010 also applies to Project Server 2010. For documentation about virtualization in SharePoint Server 2010, see Virtualization planning for on-premise or hosted technologies (SharePoint Server 2010). You can also refer to the Project Server 2007 Virtualization guide for additional information about virtualization and Project Server 2010, because most of that guidance is still applicable. However, as in any situation where virtualization is employed, it is important to consider contention for the physical computer's resources among virtual machines running on the same physical host.

Note
We do not recommend running SQL Server on a virtualized machine. The competition for resources on a virtualized machine can substantially decrease the performance of the server. If you must run SQL Server in a virtualized environment, we recommend that you use the following settings:
  1. Network Adaptor:

    • If you are using Hyper-V virtualization, you should utilize the virtual network adapter rather than the legacy network adapter.

  2. Virtual Disk:

    • For the virtual machine that you are running SQL Server on, we recommend that you select the “pass through” option for the disk type (rather than dynamic or fixed). If this is not an option, you should use a fixed disk size rather than a dynamically sized virtual disk.

    • We recommend that you select IDE over SCSI for your boot drive.

    • Allocate sufficient hard disk space to handle the expected maximum size of your dataset and ULS logging demands.

  3. Memory:

    • You should allocate as much memory to the virtual machine that is running SQL Server as can feasibly be allocated. This should be comparable to the amount of memory required/recommended for physical servers serving the same function.

    • At least 2 GB of memory should be reserved for the Host Operating System.

Running the Web Front End or Application servers in virtualized environments tends to be less detrimental to performance than running SQL Server in a virtualized environment.

For most Project Server deployments, network bandwidth tends not to be the bottleneck on performance. The table below lists the recommended specifications of network components. A general aim should be to maintain low latency between the Application and SQL Server tiers.

 

| Component | Small and Medium Dataset | Large Dataset |
| --- | --- | --- |
| Number of NICs | 1 | 2 |
| NIC Speed | Any speed greater than 100 Mbps should be sufficient | 1 Gbps |
| Load Balancer Type | NLB or hardware; both are acceptable | NLB or hardware; both are acceptable |
