
Estimate performance and capacity requirements for Search Server 2008

Updated: September 11, 2008

Applies To: Microsoft Search Server 2008

 

Topic Last Modified: 2008-09-17

In this article:

Key characteristics

Test environment

Assumptions

Lab topologies

Usage profile

Estimating throughput targets

Throughput and latency by farm configuration

Data trends

Recommendations

Hardware and software recommendations

Starting-point topologies

Estimate crawl window

Estimate disk space requirements

Determining specifications for index, query, and database servers

Guidelines for acceptable performance

This article provides test results and recommendations for both Microsoft Search Server 2008 and Microsoft Search Server 2008 Express. Use this article to help determine the hardware and configuration details required for the deployment.

This performance and capacity planning scenario incorporates a single Search Server 2008 farm or a computer running Search Server 2008 Express that is used to search and index content in an enterprise environment. The performance and capacity characteristics of multiple-farm environments are not covered in this topic.

Note that the only substantive difference between Search Server 2008 and Search Server 2008 Express is that a Search Server 2008 Express deployment can have only one application server. Both products can use either Microsoft SQL Server 2005 Express Edition or Microsoft SQL Server 2005, and the database server software can reside on the same server as Search Server 2008 or Search Server 2008 Express, or on a separate server. In all other respects, Search Server 2008 and Search Server 2008 Express function and are managed in the same way.

Use the information in this article to help you determine the best architecture for the Search Server 2008 or Search Server 2008 Express deployment.

Before you read this topic, you should review About performance and capacity planning (Search Server 2008).

Key characteristics

Key characteristics describe environmental factors, usage characteristics, and other considerations that are likely to be found in deployments based on this scenario.

The key characteristics for this scenario include:

  • Time to complete user queries. Some organizations might tolerate slower user response times, while others require faster response times. The expected response time is a key factor that influences overall throughput targets. Throughput is the number of requests that the server farm can process per second; the more users you have, the higher the throughput required to achieve the same response time.

  • User concurrency. A concurrency rate of 10 percent is assumed, with 1 percent of all users making requests at a given moment. For example, for 10,000 users, 1,000 users are actively using the solution simultaneously, and 100 users are actively making requests.

  • Long-running asynchronous tasks. Tasks such as crawling content and backing up databases add a performance load to the server farm. The general performance characteristics of sample Search Server 2008 and Search Server 2008 Express topologies assume that these tasks may run during peak hours, and may therefore impact the overall performance of the farm.

  • Scale-out factors. Several different scaling options were tested to help determine the most effective topology for the environment. See the Lab Topologies section for information about the tested topologies.
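The user-concurrency assumption above can be sketched as a quick calculation. This is a hypothetical helper for planning purposes, not part of any Search Server 2008 tooling:

```python
def estimate_active_load(total_users, concurrency_rate=0.10, request_rate=0.01):
    """Estimate (concurrent users, users actively making requests).

    Assumes the article's profile: 10 percent of users are concurrent,
    and 1 percent of all users are making requests at a given moment.
    """
    concurrent_users = int(total_users * concurrency_rate)
    requesting_users = int(total_users * request_rate)
    return concurrent_users, requesting_users

# The example from the text: 10,000 users
print(estimate_active_load(10_000))  # (1000, 100)
```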

Testing for this scenario was designed to help develop estimates of how different farm configurations respond to changes in a variety of factors, including:

  • How many query servers are deployed in the farm.

  • How many content sources are being crawled simultaneously.

  • How many items are in the index that is being queried.

Note that although some conclusions can be drawn from the test results, the specific capacity and performance figures in this article might be different from the figures in real-world environments. The figures in this article are intended to provide a starting point for the design of an adequately sized environment. After you complete the initial system design, test the configuration to determine if the system will meet the requirements of the environment.

Assumptions

64-bit architecture. Only 64-bit servers were used in the test environment. Although Search Server 2008 can be deployed on 32-bit servers, we recommend that you use 64-bit servers in Search Server 2008 and Search Server 2008 Express farm deployments.

The guidance presented in this guide is based on testing that was performed on 64-bit servers. Therefore, if you are planning to deploy to 32-bit servers, you should perform additional testing on the 32-bit servers in the environment. The best practices and performance trends in this guide will generally apply to 32-bit environments, but actual results may vary.

The 64-bit system architecture has the following characteristics that contribute to superior server scalability and performance:

  • Memory addressability. A 32-bit system can directly address only a 4-gigabyte (GB) address space. Windows Server 2003 with Service Pack 1 (SP1) running on a 64-bit system architecture supports up to 1,024 GB of physical memory.

  • Larger numbers of processors and more linear scalability per processor. Improvements in parallel processing and bus architectures enable 64-bit platforms to support more processors (up to 64) while providing close to linear scalability with each additional processor. Server platforms that offer more than 32 CPUs are available exclusively on 64-bit architecture.

  • Enhanced bus architecture. The front-side bus on current 64-bit chip sets is faster than that of earlier generations, and also has higher bandwidth. More data is passed more quickly to the cache and processor.

Lab topologies

The farm configurations that were used for testing are described in the following table. All server computers were running Search Server 2008 Enterprise Edition or Microsoft Search Server 2008 Express on the Microsoft Windows Server 2008 operating system with Service Pack 1 (SP1), Enterprise x64 Edition.

In this article, the guidelines are determined by performance. In other words, you can exceed the guidelines provided, but as you increase the scale, you might experience reduced performance.

Note that there are many factors that can affect performance in a given environment, and each of these factors can affect performance in different areas. Some of the test results and recommendations in this article might be related to features or user operations that do not exist in the environment, and therefore do not apply to your solution. Only thorough testing can provide you with exact data related to your own environment.

The following table lists the specific hardware and software configurations used for testing.

Configuration A: Search Server 2008 Express and SQL Server 2005 Express Edition on a single server

Computer role | Hardware | Hard disk capacity

Single server | Dual-core Intel 2.4 gigahertz (GHz) processor, 4 gigabytes (GB) RAM | 500 GB SATA

Configuration B: Search Server 2008 and SQL Server 2005 on a single server

Computer role | Hardware | Hard disk capacity

Single server | Dual-core Intel 2.4 GHz processor, 4 GB RAM | 500 GB SATA

Configuration C: Search Server 2008 on one server and SQL Server 2005 on a separate server

Computer role | Hardware | Hard disk capacity

Application server | Dual-core Intel 2.4 GHz processor, 4 GB RAM | 500 GB SATA
Database server | Quad-core Intel 3.4 GHz processor, 8 GB RAM | 750 GB SAS

Configuration D: Search Server 2008 on two servers and SQL Server 2005 on a separate server

Computer role | Hardware | Hard disk capacity

Query server | Dual-core Intel 2.4 GHz processor, 4 GB RAM | 500 GB SATA
Index server | Dual-core Intel 2.4 GHz processor, 4 GB RAM | 500 GB SATA
Database server | Quad-core Intel 3.4 GHz processor, 8 GB RAM | 750 GB SAS

A gigabit (1 billion bits/sec) network was used in the test environment. We recommend using a gigabit network between servers in a Search Server 2008 farm to ensure adequate network bandwidth.

Usage profile

The following tables show the usage profile for the Search Server 2008 search test environment.

Note:
Only query operations were used to determine system performance in these scenarios.

A range between 1.5 million and 6.5 million items was crawled for the different tests. The following table shows the type and number of items crawled. Items were 10 kilobytes (KB) to 100 KB in size, and included list items, Web pages, and various document types.

 

Type of item | Number of items

Content on SharePoint sites | 4.5 million items, including site collections, sites, lists, and files on file shares
Content on file shares | 3.5 million
HTTP content | Between 1.5 million and 6.5 million

The following table shows disk space usage.

 

Type of usage | Volume

Index size on query server | 100 GB*
Index size on index server | 100 GB*
Search database size | 600 GB

Note:
The tested index sizes are smaller than they would likely be in a production environment because the test-generated corpus contains a smaller number of unique words than typical real-world content.

Estimating throughput targets

This section provides test data that shows farm throughput for an increasing number of query servers and more content sources for Search Server 2008. The test data for configurations B, C, and D in this section does not apply to Search Server 2008 Express.

Because Search Server 2008 can be deployed and configured many ways, there is no simple way to estimate how many users can be supported by a given number of servers. Therefore, it is important that you conduct testing in your own environment before deploying Search Server 2008 in a production environment.

There are several factors that can affect throughput, including the number of users, complexity and frequency of user operations, and the types of content being crawled. Each of these factors can have a major effect on Search Server 2008 farm throughput. You should carefully consider each of these factors when you are planning the deployment.

If the organization has an existing search solution, you can view the Web server logs to determine the usage patterns and trends in the current environment.

If the organization is planning a new search solution deployment, use the information in the following section to estimate usage patterns.

Throughput and latency by farm configuration

The table in this section shows test results for a variety of user operation profiles using the hardware and usage profile listed in Test environment earlier in this article. For each farm configuration, a range of one through eight query servers was tested in conjunction with one index server and one database server.

The following table shows test results for query and crawling operations.

 

Number of content sources | Average crawl speed (items/sec) | Query latency (sec) | Query latency when crawling (sec) | Query throughput (RPS) | Query throughput when crawling (RPS)

Configuration A: Search Server 2008 Express on a single server with SQL Server 2005 Express Edition

1 | 24.27 | 0.5738 | 1.0028 | 33.01 | 26.36

Configuration B: Search Server 2008 on a single server with SQL Server 2005

1 | 19.18 | 0.7433 | n/a | 20.53 | n/a
2 | 16.54 | 0.8404 | 1.62 | 17.65 | 15.54
3 | 13.04 | 0.8844 | 2.4429 | 12.46 | 11.51
4 | 11.9 | 0.9226 | 2.5011 | 10.45 | 10.94
5 | 10.85 | 1.1112 | 3.4106 | 8.59 | 9.82

Configuration C: Search Server 2008 on one server and SQL Server 2005 on a separate server

1 | 19.3 | 0.7821 | n/a | 20.83 | n/a
2 | 16.54 | 0.8561 | 0.868 | 18.61 | 17.17
3 | 17.11 | 0.8998 | 1.0274 | 18.56 | 12.21
4 | 17.28 | 0.8882 | 1.0833 | 15.37 | 11.93
5 | 16.12 | 1.0929 | 1.0985 | 12.77 | 11.16
6 | 14.81 | 1.3025 | 1.3909 | 10.89 | 7.74

Configuration D: Search Server 2008 on two servers and SQL Server 2005 on a separate server

1 | 19.47 | 0.6578 | n/a | 27.5 | n/a
2 | 17.19 | 0.8467 | 0.7148 | 18.49 | 21.47
3 | 17.79 | 0.8536 | 0.8493 | 18.48 | 18.26
4 | 18.47 | 0.8855 | 0.9387 | 18.37 | 15.21
5 | 15.34 | 1.0562 | 0.9891 | 12.84 | 14.61
6 | 15.6 | 1.2461 | 1.209 | 10.92 | 11.13

Data trends

The following graphs compare trends between configurations B, C, and D as the volume of crawled items increases. Configuration A is not represented because Search Server 2008 Express running SQL Server 2005 Express Edition has a maximum database size of 4 GB, which effectively limited the number of crawled items to 500,000.

The graph below shows the difference in average crawl speed in items per second between configurations B, C and D.

Average crawl speed

As the number of items being crawled increases past 1,000,000 items, configurations C and D, both having separate index and database servers, continue to perform well up to the maximum number of items crawled, whereas the performance of configuration B begins to fall. Therefore, if the number of items to crawl in the environment exceeds 1,000,000, you should consider deploying a separate database server.

The following graph shows the difference in query latency while crawling content between configurations B, C, and D.

Query latency while crawling

In this graph, you can see that latency while crawling is substantially higher for configuration B than for configurations C and D. If the needs of the organization indicate a need for low latency for user requests while crawling, you should consider deploying a separate database server.

The following graph shows the difference in query throughput in requests per second (RPS) between configurations B, C, and D.

Query throughput while crawling

Whereas configurations C and D show similar performance in terms of latency while crawling, configuration D performs substantially better than configurations B and C in query throughput during a crawl operation. If user requirements do not permit a regular crawling window in which crawling can take place without interfering with the performance of query response, you should consider a deployment architecture that includes at least one separate query server and separate index and database servers.

Recommendations

This section provides general performance and capacity recommendations. Use these recommendations to determine the capacity and performance characteristics of the starting topology that you created in Determine hardware and software requirements (Search Server 2008), and to determine whether you need to scale out or scale up the starting topology.

Note:
Scale out means to add more servers in a particular role, and scale up means to increase the performance or capacity of a given server by adding memory, hard disk capacity, or processor speed.

Hardware and software recommendations

Memory requirements for Web, index, and database servers are dependent on the size of the farm, the number of concurrent users, and the complexity of features and pages in the farm. The memory recommendations in the following table may be adequate for a small or light-usage farm. Nonetheless, memory usage should be carefully monitored to determine if more memory must be added.

For information about minimum and recommended system requirements, see Determine hardware and software requirements (Search Server 2008).

Starting-point topologies

You can estimate the performance of the starting-point topology by comparing it to the starting-point topologies that are provided in Plan to deploy Search Server 2008 or Search Server 2008 Express. Doing so can help you quickly determine whether you need to scale up or scale out the starting-point topology to meet performance and capacity goals.

To increase the capacity and performance of one of the starting-point topologies, either scale up by implementing server computers with greater capacity or scale out by adding servers to the topology. This section describes the general performance characteristics of several scaled-up or scaled-out topologies. The sample topologies represent the following common ways to scale up or scale out a topology for a search environment:

  • To accommodate greater user load, add query server computers. You can also add dedicated query servers to relieve some of the processing burden from the Web servers.

  • To accommodate greater data load, increase the capacity of the index server, or add capacity to the database server role by increasing the capacity of a single (clustered or mirrored) server, by upgrading to a 64-bit server, or by adding clustered or mirrored servers.

  • Maintain a ratio of no greater than eight query server computers to one (clustered or mirrored) database server computer.

Estimate crawl window

In a Search Server 2008 or Search Server 2008 Express environment, crawling content is the longest-running operation that is not initiated by users. You will need to perform testing in your own environment to determine the amount of time it takes to crawl content using a particular content source, and whether the throughput consumed by crawling this content interferes with target user response times. You should use incremental crawls when possible to reduce the impact of crawling on farm performance.
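As a rough sketch, a crawl window can be estimated by dividing the number of items by an observed average crawl speed. The function name is illustrative; the sample speed is taken from the configuration B test results in this article:

```python
def crawl_window_hours(total_items, items_per_second):
    """Estimate the hours needed to crawl total_items at a given crawl speed."""
    return total_items / items_per_second / 3600.0

# e.g. 1.5 million items at the ~19.18 items/sec measured for
# configuration B with one content source: roughly 21.7 hours
print(round(crawl_window_hours(1_500_000, 19.18), 1))
```

Actual crawl speeds depend heavily on content type, network latency, and database load, so measured results from your own environment should replace these figures.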

Estimate disk space requirements

Use the following information to plan the disk space requirements for the index server, query servers, and database servers in the environment.

Use the following information to plan the disk space requirements for the index server and query servers in the server farm.

Note:
The size of the content index is typically smaller than the corpus because all noise words are removed before the content is indexed.
Note:
If the query server role is enabled on a server other than the index server, the index is automatically propagated to those query servers. To store a copy of the content index in the file system on a query server, each query server requires the same amount of disk space as the index server uses for the content index.

To estimate the disk space requirements for the hard disk that contains the content index:

  1. Estimate how much content you plan to crawl and the average size of each file. If you do not know the average size of files in the corpus, use 10 KB per document as a starting point.

    Use the following formula to calculate how much disk space you need to store the content index:

    GB of disk space required = Total_Corpus_Size (in GB) x File_Size_Modifier x 2.85

    where File_Size_Modifier is a number in the following range, based on the average size of the files in the corpus:

    • 1.0 if the corpus contains very small files (average file size = 1 KB).

    • 0.12 if the corpus contains moderate files (average file size = 10 KB).

    • 0.05 if the corpus contains large files (average file size = 100 KB or larger).

Note:
This equation is intended only to establish a starting-point estimate. Real-world results may vary widely based on the size and type of items being indexed, and how much metadata is being indexed during a crawl operation.

In this equation, you multiply Total_Corpus_Size (in GB) by the value of the File_Size_Modifier to get the estimated size of the index file. Next, you multiply by 2.85 to accommodate overhead for master merges when crawled data is merged with the index. The final result is the estimated disk space requirement.

For example, for a corpus size of 1 GB that primarily contains files that average 10 KB in size, use the following values to calculate the estimated size of the index file:

1 GB x 0.12 = 0.12 GB

According to this calculation, the estimated size of the index file is 120 MB.

Next, multiply the estimated size of the index file by 2.85:

120 MB x 2.85 = 342 MB

Thus, the disk space required for the index file and to accommodate indexing operations is 342 MB, or 0.342 GB.

Note:
The volume of crawled data can differ based on the content being crawled.
  2. Based on the estimate, if the content index will fit within the available hard disk space on the index and query servers, go to step 3. Otherwise, add disk space or reevaluate step 1 before proceeding to step 3.

  3. Crawl some of the content.

  4. Evaluate the size of the content index and the number of files that were crawled. Use this information to increase the accuracy of the calculation that you performed in step 1.

  5. If the remaining hard disk space is adequate, crawl some more content. Otherwise, add hard disk space as necessary or reevaluate how much content you plan to crawl.

  6. Repeat steps 3 through 5 until all content is crawled.

    After you have crawled the entire corpus, we recommend that you keep a record of the size of the content index and search database for each crawl so that you can determine an average growth rate. A corpus tends to grow over time as new content is added to the farm. Therefore, you should monitor the available hard disk space to ensure that adequate capacity for indexing operations is maintained.
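The content-index sizing formula above can be expressed as a short calculation. This is a sketch of the starting-point estimate only; the thresholds that select the file-size modifier are an assumption based on the listed averages:

```python
def index_disk_gb(corpus_gb, avg_file_kb):
    """Estimate disk space (GB) for the content index.

    Implements: corpus GB x file-size modifier x 2.85 (master-merge overhead).
    The boundaries between modifier categories are assumed, not from the article.
    """
    if avg_file_kb <= 1:
        modifier = 1.0   # very small files (~1 KB average)
    elif avg_file_kb <= 10:
        modifier = 0.12  # moderate files (~10 KB average)
    else:
        modifier = 0.05  # large files (100 KB average or larger)
    return corpus_gb * modifier * 2.85

# The worked example above: 1 GB corpus of ~10 KB files -> ~0.342 GB (342 MB)
print(index_disk_gb(1, 10))
```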

The search database, which stores metadata and crawler history information for the search system, typically requires more disk space than the index. This is especially the case if you primarily crawl SharePoint sites, which are very rich in metadata.

Note:
Both the metadata for all indexed content and the crawler history are stored in the search database. For this reason, the search database requires more storage space than the content index.

Use the following formula to calculate how much disk space you need for the search database:

GB of disk space required = Total_Corpus_Size (in GB) x File_Size_Modifier x 4

where File_Size_Modifier is a number in the following range, based on the average size of the files in the corpus:

  • 1.0 if the corpus contains very small files (average file size = 1 KB).

  • 0.12 if the corpus contains moderate files (average file size = 10 KB).

  • 0.05 if the corpus contains large files (average file size = 100 KB or larger).

For example, for a corpus size of 1 GB that primarily contains files that average 10 KB in size, substitute the following values into the equation:

1 GB x 0.12 = 0.12 GB, or 120 MB

Then multiply this intermediate result by 4:

120 MB x 4 = 480 MB

Thus, the disk space required for the search database is 480 MB, or 0.48 GB.
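The same style of calculation applies to the search database, which uses a multiplier of 4 instead of 2.85. Again, this is a sketch only, and the helper name is illustrative:

```python
def search_db_gb(corpus_gb, file_size_modifier):
    """Estimate disk space (GB) for the search database.

    Implements: corpus GB x file-size modifier x 4, where the modifier is
    1.0 (~1 KB files), 0.12 (~10 KB files), or 0.05 (>= 100 KB files).
    """
    return corpus_gb * file_size_modifier * 4

# The worked example above: 1 GB corpus of ~10 KB files -> ~0.48 GB (480 MB)
print(search_db_gb(1, 0.12))
```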

Determining specifications for index, query, and database servers

In Search Server 2008, search is a shared service available at the Shared Services Provider (SSP) level. The Search Server 2008 search system consists of two main server roles: the index server and the query server.

Crawling and indexing are resource-intensive operations. Crawling content is the process by which the system accesses and parses content and its properties to build a content index from which search queries can be serviced. Crawling consumes processing and memory resources on the index server, the query server or query servers that are servicing the crawl operations, the server or servers that are hosting the content repository that is being crawled, and the database server that is serving the Search Server 2008 farm.

Crawls affect the overall performance of the system, and directly affect user response time and the performance of other shared services in the farm. Crawls also affect the Web service on the query server that services crawl operations. You can dedicate a query server for crawling operations to reduce the load on other farm servers.

Indexing the crawled content can also affect the overall performance of the system if crawl operations are not assigned to a dedicated query server. If search-related operations constitute a significant portion of farm operations, consider deploying a dedicated query server.

Use the information in this section to specify requirements for the index server in a Search Server 2008 farm.

The index server processor speed influences the crawl speed and the number of crawling threads that can be instantiated. Although there is no specific number or type of processors that are recommended, you should consider the amount of content that will be crawled when determining the index server requirements. In an enterprise environment, the index server should have multiple processors to handle a large indexing load.

The following table shows how crawl speed increases as the number of processors available on the index server increases.

 

Number of processors | Percentage of improvement in crawl speed

1 | 0.00
2 | 10.89
4 | 19.77
8 | 30.77

On the index server, items are loaded in buffers for processing by the crawler engine. In a farm with a corpus of approximately 1 million items, the index server requires approximately 1.5 GB of memory. After a document is processed in memory, it is written to disk. The greater the memory capacity, the more items the crawler can process in parallel, which results in improved crawl speed.

We recommend a minimum of 4 GB RAM on the index server.

We recommend that you specify redundant array of independent disks (RAID) 10 with 2-millisecond (ms) access times and write speeds faster than 150 MB/sec for fast disk writes.

Search Server 2008 does not support the use of multiple index servers for scaling out, as each index server requires a separate SSP, and only one SSP can exist in a Search Server 2008 farm.

Indexing operations increase the load on the database server, and can reduce the responsiveness of the farm. Indexing operations can also affect other shared services on the application server that is running the Indexing Service. You can adjust the indexing performance level for each index server to one of the following three values:

  • Reduced

  • Partly reduced

  • Maximum

The default setting is Partly reduced.

Crawls affect performance of the database server because the Search Server 2008 Search service writes all the metadata collected from the crawled items into database tables. It is possible for the index server to generate data at a rate that can overload the database server.

You should conduct your own testing to balance crawl speed, network latency, database load, and the load on the content repositories that are being crawled.

The following table shows the relationship between the performance-level setting and the CPU utilization on the index and database servers as tested.

 

Performance-level setting | Index server CPU utilization (%) | Database server CPU utilization (%)

Reduced | 20 | 20
Partly reduced | 24 | 24
Maximum | 25 | 26

Tip:
If the index server and database servers are used only for the Search Server 2008 Search service, you can set the level to Maximum. However, we recommend that the maximum increase in database server CPU utilization related to index server activity not be greater than 30 percent. If the increase in database server CPU utilization exceeds 30 percent when the performance level is set to Maximum, we recommend setting the performance level to the next lower setting.

Crawler impact rules are farm-level search configuration settings that specify the number of simultaneous requests that the Search Server 2008 Search service generates when it crawls using a specified content source. The greater the number of simultaneous requests, the faster the crawl speed. Note that the request frequency specified in a crawler impact rule directly affects the load on the database server and the load on the server hosting the content that is being crawled. If you increase the request frequency for a given site, you should carefully monitor the servers being crawled to evaluate whether the greater load is acceptable.

The default value is based on the number of processors on the index server; for example, for a quad-processor computer, the default value is eight. We recommend that you adjust the value and measure the load on the target server to determine the optimum number of simultaneous requests. You can select the number of simultaneous requests from the following available values: 1, 2, 4, 8, 16, 32, 64.

You can also create a rule to request one document at a time and wait a specified number of seconds between requests. Such a rule can be useful for crawling a site that has a constant user load.

The following table shows the relationship between the number of simultaneous requests and the CPU utilization on the index server and database servers.

 

Number of crawl threads | Index server CPU utilization (%) | Database server CPU utilization (%)

4 | 35 | 12
8 | 40 | 15
12 | 45 | 15
16 | 60 | 20

Use the information in this section to determine specifications for query servers in the Search Server 2008 farm.

The more memory that is available, the fewer times the Search Server 2008 Search service will need to access the hard disk to perform a given query. Having adequate memory also permits more effective caching. Ideally, enough memory should be installed on the query servers to accommodate the entire index.

We recommend using RAID 10 for fast disk writes.

You can deploy multiple query servers in the farm to achieve redundancy and load balancing. The number of query servers you use depends on how many users are present in the farm and the peak hour load that you expect. We have tested up to eight query servers per farm.

Server latency is a major factor that affects crawl performance. Performance between farm servers must be balanced for overall crawl performance to reach its potential. For example, a powerful index server can be operating at 25% of its capacity if the database server for the farm being crawled is not able to respond quickly enough. In such a case, you can scale up the database server, which will in turn increase crawl speeds across the entire farm.

You should conduct your own testing to evaluate the responsiveness of servers in the environment. The database server serving the target farm is often the bottleneck in cases where crawl performance is poor. To improve crawl performance, you can:

  • Scale up database server hardware by adding or upgrading processors, adding memory, and upgrading to hard disks with faster seek and write times.

  • Increase the memory on query servers in the farm.

  • Crawl during off-peak hours so that the database server being crawled can service user traffic during the day, and respond to crawls during off-peak hours.

The Search Server 2008 search system crawls both text data and the metadata associated with the content. In Search Server 2008, the inverted full text index is stored on the index server, and the metadata is stored in the Search database. The index server writes metadata to the database, and the query servers read that data to process property-based queries issued by users.

Use the information in this section to determine specifications for database servers in a Search Server 2008 farm.

The database metadata store is shared by the index server and all query servers in the farm. The index server writes all metadata, and the query servers read this data to process search requests. Query throughput is dependent largely on the metadata store responsiveness.

As the number of query servers increases in the farm, the load on the database server also increases and affects the overall query throughput. You should carefully monitor the database server when you add query servers to the farm to ensure that database performance remains satisfactory.

Because the Search Server 2008 Search service writes a large amount of data to the search database during crawls, we recommend that you use separate hard disks for the SharedServices_Search_Db, SharedServices_Db, and TempDb databases for better performance in scenarios in which the index contains more than 5 million items.

We recommend that you use RAID 10 for fast disk writes.

Guidelines for acceptable performance

Capacity is directly affected by scalability. This section lists the objects that can compose a solution and provides guidelines for acceptable performance for each kind of object. Limits data is provided, together with notes that describe the conditions under which the limits apply, and links to additional information where available. Use the guidelines in this article to review your overall solution plans.

If the solution plans exceed the recommended guidelines for one or more objects, take one or more of the following actions:

  • Evaluate the solution to ensure that compensations are made in other areas.

  • Flag these areas for testing and monitoring as you build and deploy the solution.

  • Re-design the solution to ensure that you do not exceed capacity guidelines.

The following tables list the objects by category and include recommended guidelines for acceptable performance. Acceptable performance means that the system as tested can support that number of objects; however, the number cannot be exceeded without some decrease in performance. An asterisk (*) indicates a hard limit; no asterisk indicates a tested or supported limit.

Search Server 2008 is usually deployed as a search solution in an environment that already contains sites or other data sources. You might find it desirable to use a Search Server 2008 farm to host Windows SharePoint Services 3.0 sites in addition to the Search Center.

For information about recommended guidelines for Search Server 2008 and Search Server 2008 Express site objects and people objects, see Plan for software boundaries (Windows SharePoint Services).

The following table lists the recommended guidelines for search objects.

 

Search object | Guidelines for acceptable performance | Notes

Search indexes | 1 per Search Server 2008 or Search Server 2008 Express farm |
Indexed documents | 50 million per search index | 50 million items per index server are supported, with one search index per index server; the effective limit is therefore 50 million items per index server.

The following table lists the recommended guidelines for logical architecture objects.

 

Logical architecture object | Guidelines for acceptable performance | Notes

Shared Services Provider (SSP) | Search Server 2008: 3 per farm (20 per farm maximum); Search Server 2008 Express: 1 maximum |
Internet Information Services (IIS) application pool | 8 per Web server | Maximum number is determined by hardware capabilities. Search Server 2008 does not require additional application pools unless you decide to use the farm to host additional Windows SharePoint Services 3.0 sites.
Site collection | 50,000 per Web application |
Content database | 100 per Web application |
Site collection | 50,000 per database |

The following table lists the recommended guidelines for physical objects.

 

Physical object | Guidelines for acceptable performance | Notes

Index servers | 1 per SSP* |
Query servers | No hard limit; 8 maximum recommended | This limit applies only to Search Server 2008 deployments. You can have only one application server in a Search Server 2008 Express deployment.
Query server/domain controller ratio | 3 query servers per domain controller | Depending on how much authentication traffic is generated, the environment may support more query servers per domain controller.
