
Performance and capacity planning (FAST Search Server 2010 for SharePoint)


Updated: August 13, 2013

When planning performance and capacity of a Microsoft FAST Search Server 2010 for SharePoint deployment, you must consider several aspects of both your own business environment and the system architecture.

For more information about how to plan the performance and capacity of a Microsoft SharePoint Server 2010 farm overall, refer to Capacity planning for SharePoint Server 2010.

Note:

This article assumes that you use the SharePoint Server 2010 crawler, indexing connector framework and the FAST Search Server 2010 for SharePoint Content Search Service Application (Content SSA) to crawl content.

Business environment considerations

In your environment, define the following:

Content volume capacity

How much content has to be searchable? The total number of items should include all objects: documents, web pages, list items, and so on.

Redundancy and availability

What are the redundancy and availability requirements? Do customers need a search solution that can survive the failure of a particular server? Note that there are two main levels of availability: availability of query matching and availability of indexing.

Content freshness

How "fresh" must the search results be? How long after the customer modifies the data do you expect searches to provide the updated content in the results? How often do you expect the content to change?

Query throughput

How many people will be searching over the content at the same time? This includes people typing in a query box, but also hidden queries such as Web Parts that automatically search for data, or Microsoft Outlook 2010 Social Connectors requesting activity crawls that contain URLs which need security trimming from the search system.

System architecture considerations

From a system architecture perspective, make sure that you understand the indexing and query evaluation processes, in addition to the effect of various topology choices and the network traffic they generate. Also pay attention to the dimensioning of the web analyzer component, because the performance of this component depends on the number of indexed items and on whether the items contain hyperlinks.

Impact of indexing on performance and capacity

A FAST Search Server 2010 for SharePoint farm moves through multiple stages as content is crawled: index acquisition, index maintenance, and index cleanup.

Index acquisition

The index acquisition stage is characterized by full, possibly concurrent, crawls.

When adding new content, crawling performance is mainly determined by the configured number of item processing components. Both the number of CPU cores and the speed of each core affect the results. As a first-order approximation, a 1 GHz CPU core can process one average-size Office document (around 250 kB) per second. For example, a scenario with 48 CPU cores for item processing, each running at 2.26 GHz, provides a total estimated throughput of 48 × 2.26 ≈ 100 items per second on average.

Note:

The actual throughput may deviate from the estimated throughput, depending on the size and type of the indexed items. Test your installation to ensure that it meets your performance and capacity expectations.
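The rule of thumb above (roughly one item per second per 1 GHz of CPU core) can be turned into a quick estimator. This is a minimal sketch; the function name is ours, and the constant it encodes is the first-order approximation from this section, not a measured value:

```python
def estimate_throughput(cpu_cores: int, core_ghz: float) -> float:
    """Estimate item processing throughput in items per second.

    Rule of thumb from this section: a 1 GHz CPU core processes
    roughly one average-size Office document (~250 kB) per second.
    """
    return cpu_cores * core_ghz

# Example from the text: 48 item processing cores at 2.26 GHz each
# gives an estimate on the order of 100 items per second.
print(estimate_throughput(48, 2.26))
```

As the note above stresses, treat the result only as a planning starting point and validate it against a test crawl of your actual content.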

Index maintenance

The index maintenance stage is characterized by incremental crawls of all content, detecting new and changed content. Typically, when you crawl a SharePoint Server 2010 content source, most of the encountered changes are related to access rights.

Incremental crawls can consist of various operations:

  • Access right (ACL) changes and deletes: These require near-zero item processing, but a high processing load in the indexer. Crawl rates will be higher than for full crawls.

  • Content updates: These require full item processing in addition to more processing by the indexer compared to adding new content. Internally, such an update corresponds to a delete of the old item, and an addition of the new content.

  • Additions: Incremental crawls may also contain newly discovered items. These have the same workload as index acquisition crawls.

Depending on the kind of operation, an incremental crawl may be faster or slower than an initial full crawl. It will be faster if there are mainly ACL updates and deletes, and slower if there are mainly updated items.

In addition to updates from content sources, internal operations such as link analysis, click-through log analysis and reorganization of the index partitions, also alter the index. The FAST Search Server 2010 for SharePoint link analysis and click-through log analysis generate additional internal updates to the index. For example, a hyperlink in one item will lead to an update of the anchor text information associated with the referenced item. Such updates have a similar load pattern as the ACL updates. The indexer regularly performs internal reorganization of index partitions and data defragmentation. It starts defragmentation every night at 3 am and redistribution across partitions when it is needed. Both of these internal operations imply that you may see indexing activity also outside intervals with ongoing content crawls.

Index cleanup

The index cleanup stage occurs when you delete a content source or start address, or both, from a search service application. It can also occur when the indexing connector cannot find a host supplying content: the indexing connector looks for the host during three consecutive crawls, and if the host is still not found, it deletes the content source and causes the index to enter the cleanup stage.

Impact of query evaluation on performance and capacity

The overall index has two levels of partitioning: index columns and index partitions.

When the complete index is too large to reside on one server, you can split it into multiple disjoint index columns. The query matching component then evaluates queries against all index columns within the search cluster, and merges the results from each index column into the final query hit list. Within each index column, the indexer partitions the index in order to handle a large number of indexed items with low indexing and query latency. This partitioning is dynamic and handled internally on each index server. When the query matching component evaluates a query, each partition runs in a separate thread. The default number of partitions is 5. In order to handle more than 15 million items per server (column), you must configure a larger number of partitions.
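The partition arithmetic above can be sketched as follows. This is an illustration only; the 3-million-items-per-partition constant is implied by the stated default of 5 partitions handling up to 15 million items per column, and the function name is ours:

```python
import math

# Implied by the defaults above: 5 partitions handle up to 15 million
# items per column, i.e. roughly 3 million items per partition.
ITEMS_PER_PARTITION = 3_000_000
DEFAULT_PARTITIONS = 5

def partitions_needed(items_per_column: int) -> int:
    """Minimum number of index partitions for a column of the given size."""
    return max(DEFAULT_PARTITIONS,
               math.ceil(items_per_column / ITEMS_PER_PARTITION))

print(partitions_needed(15_000_000))  # 5 (default configuration suffices)
print(partitions_needed(24_000_000))  # 8
```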

Evaluation of a single query is schematically illustrated in the following figure.

Single query illustration

CPU processing (light blue) is followed by waiting for disk access cycles (white) and actual disk data read transfers (dark blue); this pattern repeats on the order of 2 to 10 times per query. This implies that query latency depends on the speed of the CPU in addition to the I/O latency of the storage subsystem. The query matching component evaluates each query separately, and in parallel, across multiple index partitions in all index columns. In the default five-partition configuration, this means that each query is evaluated in five separate threads within every column.

When query load increases, the query matching component evaluates multiple queries in parallel, as indicated in the following figure.

Multiple queries illustration

Because different phases of query evaluation occur at different times, concurrent I/O accesses are unlikely to become a bottleneck. CPU processing shows considerable overlap, which is scheduled across the available CPU cores of the server. In all scenarios tested, query throughput reaches its maximum when all available CPU cores are 100% utilized. This happens before the storage subsystem becomes saturated. More and faster CPU cores will increase query throughput, and eventually make disk accesses the bottleneck. For more information about the tested scenarios, refer to Performance and capacity test results (FAST Search Server 2010 for SharePoint).

Note:

In larger deployments with many index columns the network traffic between query processing and query matching components may also become a bottleneck, and you might consider increasing the network bandwidth for this interface.

Query latency is largely independent of query load up to the CPU starvation point at maximum throughput. The latency for each query is a function of the number of items in the largest index partition. Note that when you apply query load after an idle period, query latency is slightly elevated because of caching effects. An ongoing crawl also degrades query latency somewhat, but if you have a search row with a backup indexer, the effect is much smaller than in systems where search runs on the same servers as the indexer and item processing.

Impact of topology on performance and capacity

As indexing and query matching both use CPU resources, deployments with indexing and query matching on the same row will show some degradation in query performance during content crawls. Single row deployments are likely to have indexing, query matching and item processing all running on the same servers.

You can deploy a dedicated search row to isolate query traffic from indexing and item processing. This requires two times the number of servers in the search cluster, which provides better and more consistent query performance. Such a configuration will also provide query processing and query matching redundancy. A dedicated search row implies some additional traffic during crawling and indexing when the indexer creates a new index generation (a new version of the index for a given partition). The indexer component passes the new index data over the network to the query matching component. Given a suitable storage subsystem, the main effect on query performance is a small degradation when new generations arrive because of cache invalidation.

You can also deploy a backup indexer to handle non-recoverable errors on the primary indexer. You will typically co-locate the backup indexer with a search row. In this scenario, you should not deploy item processing to the combined backup indexer and search row. The backup indexer increases the I/O load on the search row, because there is additional housekeeping communication between the primary and backup indexer to keep the index data on the two servers in sync. This also requires additional data storage on disk for both servers. Make sure that you dimension your storage subsystem to handle the additional load.

Impact of network traffic on performance and capacity

With increased CPU performance on the individual servers, the network connection between the servers can become a bottleneck. As an example, even a small FAST Search Server 2010 for SharePoint farm with four servers can process and index more than 100 items per second. If the average item is 250 KB, this represents around 250 Mbit/s of average network traffic. Such a load may saturate even a 1 Gbit/s network connection.
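The bandwidth arithmetic in this example can be reproduced with a small helper. This is a sketch with a function name of our choosing; note that the raw content stream alone works out to 200 Mbit/s, so the ~250 Mbit/s figure above presumably allows for protocol and framing overhead:

```python
def crawl_bandwidth_mbit_per_s(items_per_sec: float, avg_item_kb: float) -> float:
    """Average bandwidth of the raw crawled content stream, in Mbit/s."""
    # kB/s -> kbit/s (x8) -> Mbit/s (/1000)
    return items_per_sec * avg_item_kb * 8 / 1000

# Example from the text: 100 items/s at 250 KB per item.
print(crawl_bandwidth_mbit_per_s(100, 250))  # 200.0
```

Remember that the accumulated traffic across all servers and components in a distributed deployment can exceed five times this raw stream, as described in the decomposition that follows.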

The network traffic generated by content crawling and indexing can be decomposed as follows:

  • The indexing connector within the Content SSA retrieves the content from the source

  • The Content SSA (within the SharePoint Server 2010 farm) passes the retrieved items in batches to the content distributor component in the FAST Search Server 2010 for SharePoint farm

  • The content distributor sends each item batch to an available item processing component, typically located on another server

  • After processing, the item processing component passes each batch to the indexing dispatcher that splits the batches according to the index column distribution

  • The indexing dispatcher distributes the processed items to the indexers of each index column

  • If there are multiple search rows, the indexer copies the binary index to the additional search rows

The accumulated network traffic across all servers and components can be more than five times higher than the content stream itself in a distributed system. You must have a high performance network switch to interconnect the servers in such a deployment.

High query throughput also generates high network traffic, especially when you are using multiple index columns. Make sure that you define the deployment configuration and network configuration to avoid too much overlap between network traffic from queries and network traffic from content crawling and indexing.

Impact of web link analysis on performance and capacity

Performance dimensioning of the web analyzer component depends on the number of indexed items and on whether the items contain hyperlinks. Items that contain hyperlinks, or items that are linked to, represent the main load. Database-type content typically does not contain hyperlinks. Intranet content (including SharePoint site content) often contains HTML with hyperlinks. External web content consists almost exclusively of HTML documents with many hyperlinks.

Although the number of CPU cores is important when dimensioning the web analyzer component, because it determines the time needed to update the index with anchor text and rank data, disk space is the most important factor. The web analyzer component only performs link, anchor text, or click-through log analysis if sufficient disk space is available. The following table gives rule-of-thumb dimensioning recommendations for the web analyzer component.

Content type                         Number of items per CPU core    GB disk per million items
Database                             20 million                      2
SharePoint Server 2010 / Intranet    10 million                      6
Public web content                   5 million                       25

Note:

The table provides dimensioning rules for the whole farm. If you distribute the web analyzer component over two servers, the requirement per server is half of the given values.

The amount of memory that you must have available is the same for all kinds of content, but depends on the number of cores you use. We recommend 30 MBytes per million items plus 300 MBytes per CPU core. If the installation contains different kinds of content, the best capacity planning strategy is to use the most demanding content type as the basis for the dimensioning. For example, if the system contains a mix of database and SharePoint site content we recommend that you dimension the system as if it only contains SharePoint site content.
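The table values and the memory rule of thumb above can be combined into a simple per-farm estimator. This is a sketch: the dictionary keys and function name are ours, and the constants are exactly the per-content-type values given above:

```python
import math

# Rules of thumb from the table above:
# content type -> (max items per CPU core, GB disk per million items)
WEB_ANALYZER_RULES = {
    "database": (20_000_000, 2),
    "sharepoint_intranet": (10_000_000, 6),
    "public_web": (5_000_000, 25),
}

def web_analyzer_dimensioning(content_type: str, items: int):
    """Return (cpu_cores, disk_gb, memory_mb) for the whole farm."""
    items_per_core, gb_per_million = WEB_ANALYZER_RULES[content_type]
    cores = math.ceil(items / items_per_core)
    disk_gb = items / 1_000_000 * gb_per_million
    # Memory rule: 30 MB per million items plus 300 MB per CPU core.
    memory_mb = items / 1_000_000 * 30 + cores * 300
    return cores, disk_gb, memory_mb

# 20 million SharePoint/intranet items: 2 cores, 120 GB disk, 1200 MB memory.
print(web_analyzer_dimensioning("sharepoint_intranet", 20_000_000))
```

Following the recommendation above, for mixed content you would run the estimator with the most demanding content type that is present.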

Change History

Date               Description
August 13, 2013    Emphasized that for crawling performance the actual throughput may deviate from the estimated throughput.
February 10, 2011  Initial publication
