Chapter 38 - Scaling Out on SQL Server

This chapter addresses the different types of partitioned data as they relate to partitioned views, distributed partitioned views, and data-dependent routing, and provides guidelines for determining whether and how to use this technology for your application.

An application's level of throughput has a direct impact on productivity and revenue. Performance and throughput are not only a measure of the underlying hardware platform, but also of the design of the application itself. After the application and the database have been carefully optimized, there are two main methods of increasing throughput: scaling up and scaling out. Scaling up is maximizing the capabilities of an existing server by adding more memory, storage capacity, and processors, or by replacing these existing components with newer, faster versions. The best approach is to explore all options for scaling up to see how they can be best utilized for your particular system.


Scaling out is adding servers to provide increased scalability. When the server for a particular application is at its maximum potential and is no longer able to meet user demands, you should consider scaling out. In Microsoft® SQL Server™ 2000, scaling out is both a hardware solution and a data storage solution; the data itself is partitioned over multiple servers. New features in SQL Server 2000 simplify administration of partitioned data.

The servers containing the partitioned data are sometimes known as a federation, if certain distributed data techniques are used. Each server in the scale-out scenario is administered independently, but shares part of the processing load. This chapter details different types of data partitioning and how and when to implement them. This high-end architecture is intended for customers who need to implement very high volume OLTP workloads. For more information, see "Designing Applications to Use Federated Database Servers" in SQL Server Books Online.

SQL Server 2000 delivers impressive performance when the application is optimized and the hardware is scaled up to the appropriate level. But, in situations where a fully optimized database and application support a database load beyond the reasonable expectations of a single server, scaling out can help. By scaling out using SQL Server 2000, you can support the tremendous processing loads and growth requirements of the largest Web or enterprise systems.

Readiness Checklists


This section provides some checklists to help you ensure you have considered the important issues involved in choosing to scale out your system.

Are You Ready to Scale Out on SQL Server?

The following checklist will help you determine whether scaling out is a viable approach for your application. All of the following questions, which are focused on an existing application, should be resolved before you begin to design a scaled-out enterprise:

  • Are the queries optimized and tuned? 

  • Does each table have indexes that are appropriately designed and maintained, to optimize performance? Are auto create statistics and auto update statistics options turned on to keep up with changing usage patterns in the database?

  • Is the schema appropriate? Is the level of normalization adjusted for the usage of the table (normalized for writes, denormalized for reads)? 

  • Have all stored procedures and functions been optimized? 

  • Has application logic been streamlined? Have all blocking issues been eliminated? 

  • Is the hardware scaled appropriately for the application? If the hardware is already being pushed to the limit, have processors, memory, or disk storage (spindles) been added? 

  • Is the physical layout (that is, where the database files are placed) of the database storage components optimal? 

These are the things you should consider when you design a new application. The advantage with a new system is that it is easier to start out designing for partitioning than it is to design a partitioned/distributed solution for an existing application. But in either case, care should be taken to make the application take maximum advantage of good design principles to obtain the best performance and scalability. Be sure to consider how you will maintain the system, with an eye toward high availability and ease of administration considerations.

For more information, see Chapter 16, "Five Nines: The Ultimate in High Availability," and Chapter 33, "The Data Tier: An Approach to Database Optimization."

Scaling up is the first approach you should consider. For most applications, scaling up provides better performance for the same cost than scaling out on an equivalent level of hardware. There are some unusual cases; for example, some applications are naturally partitionable based on application requirements, and so might benefit from scale-out design concepts. With the understanding that there are very few such exceptions, scaling out should be the final consideration. Scaling out provides scalability for a system where total throughput is a higher requirement than the highest speed for each individual transaction.

Scaling out provides the best value on larger servers; there is little advantage in scaling out on small servers. Because distributed database management requires much higher administrative overhead, the smaller benefits of running on a higher number of smaller computers may not be worth the administrative cost. Because the primary focus of throughput for scaling out is based on processing power (CPU), smaller servers do not provide much gain. It is better to reserve this design for systems that need it (for example, for geographical considerations), and for shops that are willing to commit to the extra expense in higher-level hardware and administration for the purpose of scalability.

Design Considerations

After you have fully optimized your application, database design, and layout, and you have scaled up to the largest feasible server configuration, you may still determine that your scalability requirements can be met by scaling up rather than by scaling out. If, however, you decide that your application is a good candidate for scaling out, here are some more things to consider.

Designing a scale-out solution of any type requires analysis and careful planning. There are several techniques that can be applied to designing for scale out:

  • Distributed partitioned views 

  • Data-dependent routing 

  • Replication

  • Message Queuing 

  • A combination of these 

Although these techniques can be combined in many different ways, depending on your requirements, the classic federated server scenario combines the use of distributed partitioned views with data-dependent routing, usually with some form of replication. This solution is the focus of this chapter.

The following list provides some initial design considerations, which can also be used to determine how much your system might benefit from scaling out, whether it is a current system, a new one, or one being migrated from another database platform.

  • Are you trying to support a high volume of concurrent transactions? The decision to scale out is most often driven by the need to support a database load that exceeds the capacity of a large multiprocessor server. The transaction volume is not directly the issue, but rather the overall load on the DBMS.

    Concurrency issues are frequently based on application design problems, but can also be based on disk I/O limitations, a problem that can be addressed by adding more physical disks, by using filegroups, or by significantly increasing the amount of memory to reduce physical I/O with a larger cache. Because applying standard solutions can increase concurrency, this issue alone does not make your system a candidate for a federated scenario. 

    The key factor is the processing capacity requirements. The primary reason for scaling out is that your requirement exceeds the processor capacity of the largest SMP hardware available.

  • Is transaction throughput more important than the precise speed of each individual query? This question will become important if your application frequently requires data from a partition on a remote server. How often do you need to retrieve remote data? The fewer calls that must be made to remote servers to accumulate all the data to satisfy the request, the better performance will be.

    The more frequently you collect remote data for the total result set, the more of a performance issue this will become. This is less of a concern, however, for a query that is used only occasionally, or in cases where the query can be directed to the server containing the majority of the data related to that result set. For example, if a set of information is frequently queried, you should try to organize the data and queries so the queries run on the same node as the data. 

  • Can you store related partitioned data together on one server in the federation (known as collocating the data), and then direct the incoming query to the server that has that data? A key consideration for scaling out is how the data will be organized, and how data access will occur, along slices of the database.

    The locality of the data, which is the measure of how much of a query can be completed with local data rather than remote data, is a performance consideration. If you have a high degree of data locality, you will get better performance. How much data needs to move between servers?

    The amount of data being exchanged between servers should also be considered (you can get this number by multiplying bytes per row by number of rows). If you frequently transfer a large amount of data, approaching thousands of kilobytes per rowset, this performance consideration should influence your design. In this case, look for a more optimal way of partitioning your data, or a better way to handle data access. 

    Like everything else in the database, it all depends on usage. For queries that are used frequently, examine the data involved and do some quick calculations on how often these are executed compared to how much data is requested. If you cannot collocate related data, calculate the amount of data you expect to be requested from remote servers. If the rowsets involved are large, approaching thousands of kilobytes, and the frequency of the requests is high, consider organizing the data differently. If the rowsets are large, but the query will only be run occasionally, then this might also provide acceptable performance for the system as a whole. 

  • How large is the data involved? It is easy to assume you should simply partition your largest tables. Companies considering this scenario often have large tables: for example, more than one billion narrow rows or more than 20 million wide rows (more than 5 KB). However, if you are using partitioning as part of a scale-out solution, you should partition based on data access patterns rather than just choosing the largest tables (even if they are in the terabyte range). Some partitionable tables are apt to grow large, but size alone should not be the deciding factor.

Data size also relates to the amount of data exchanged between servers. While large volumes could become an issue in extreme cases, the linked server connection can easily support exchanges of hundreds or thousands of rows per transaction, depending on the byte size of the rows in question. The exchange of large volumes of data becomes a problem only if the volume of remote traffic degrades response time to an unacceptable level.

Understanding the Federation

Distributed partitioned views in SQL Server 2000 are only a first step toward the shared-nothing clustering used in clustered business services environments. When considering partitioned views for your environment, make sure you understand what you are undertaking:

  • By using distributed partitioning correctly, with an eye to maintenance, administration, and usage, for an application that is a good candidate for this solution, you can achieve a linear increase in scalability. This is something to consider if you have scaled up completely on one server to the maximum feasible capacity (that is, a maximized 8-way). The trade-off in this scenario is that you increase the maximum throughput your system can accommodate, at the price that each individual transaction takes a little longer to complete.

  • The idea with scale-out is not that response time improves for each individual query (in fact, individual queries typically become somewhat slower), but rather that you can ultimately perform more transactions because of more total hardware capacity. For example, imagine that a single 8-way symmetric multiprocessing (SMP) server can perform 100 transactions per second on a particular workload. With distributed partitioned views, the rate per server might drop to 50 transactions per second, but you would be able to add more and more hardware. Four servers could do 200 transactions per second, which is twice the capacity of the original SMP. It is easy to see how this could scale to meet a large workload: Scaling out provides you with a way to apply more processing power to meet your system requirements. 

  • All servers in a given distributed partitioned view must be available for that distributed partitioned view to be usable. There is no built-in failure tolerance. If one server is down, the view cannot access data from any of the member tables even if the queries do not need data from the unavailable node. Therefore, you should consider using a failover cluster, grouping servers in sets of four for maximum hardware advantage. This will minimize the amount of time that the system is unavailable in the event of a hardware failure. 

  • SQL Server 2000 does not support the process load balancing available through Microsoft Windows 2000 COM+ components. When distributed partitioned views are used across a group of servers, each server operates autonomously with separate operational rules supporting independent processes and data.

  • You will need at least three servers in your federation to see a benefit in increased throughput versus the reduced response time. It is possible to have fewer than three servers, but the cost will outweigh the benefits. Distributed servers work best with a Storage Area Network between servers. 

  • Each server must be maintained independently. There are currently no management tools to allow unified administration of the federated servers or to automate repartitioning of views. There is no easy way to repartition the data without interrupting service of the distributed partitioned view (note that data-dependent routing might still work while the distributed partitioned view is unavailable, provided you have planned your design with that in mind). 

  • You must plan ahead for how you will support a scale-out solution. Partitioned views add significant operational and administrative overhead, even where maintenance is automated.

  • Disaster recovery plans are far more complex where partitioned data is involved. For more information, see "Disaster Recovery and Partitioning" in this chapter.

Data Partitioning Components


Now that you have examined your application for the possibility of optimization and scale-up, and you have looked at the design considerations, you should know how good a candidate your system is for scaling out. Most scaled-out systems rely on some form of data partitioning, so your next step is to begin planning your partitions from a design and administrative perspective.

SQL Server uses partitioned views to facilitate access to partitioned data. A federation is created when data is partitioned and distributed across servers by using a distributed partitioned view. This feature involves creating a view across multiple tables of identical physical structure in order to treat all of them as one entity. There are several different ways to partition data. Some common uses of application data partitioning include dividing data by geographical region or by time perspective on the data. This is a natural type of partitioning. Data partitions similar to this have been used for many years in application design. The advent of distributed partitioned view technology in SQL Server facilitates dividing data by usage, in addition to other reasons for partitioning. For example, you could divide a Customer table into sections to balance out the number of hits to each section. If you are approaching this from the server only, a simple distributed partitioned view functions exactly like the original table, so that the front-end application code does not need to be changed. If you are designing (or redesigning) an application, you can optimize by using data-dependent routing in addition to distributed partitioned views.

The best situation is when you can partition the data symmetrically. This means that, in addition to having a distributed partitioned view and a member table on each server in the federation, you also have a related set of tables containing only data related to the primary partition. When a query comes in from the application, you can use data-dependent routing to send this query to the server containing most of the data for that result set. A good test of whether this can be achieved is to see whether the partition allows all rows to be placed on the same member server as all their referencing foreign key rows. Database designs that support this goal are the best candidates for partitioning.

It would be an unusual application that contained only tables that could be partitioned easily. Most applications have look-up tables, which do not need to be, and often cannot be, usefully partitioned. These can be copied (using replication or INSTEAD OF triggers) between servers.

Partitioned views can be either read-only or updatable. A read-only partitioned view is similar to a copy of the member tables and is useful when accessing historical or other data that is read only, such as catalog information for an e-commerce site. An updatable partitioned view can exhibit all the capabilities of a single table.

Partitioned views are further categorized as either local or distributed. Local partitioning is geared toward midsize systems; distributed partitioning is for enterprise-level systems. A local partitioned view resides with the member tables on a single instance of SQL Server on a single server. Within that instance, member tables can reside in a single database or multiple databases. In the case of multiple databases, the partitioned view resides in each database. In the following illustration, the partitioned view and tables are implemented in a single server.

(Illustration: partitioned view and member tables implemented on a single server)

Typical uses for partitioning within the same database include facilitating occasional querying of data divided into historical or other natural partitions. The benefits of doing this are mainly administrative. For example, running maintenance or optimization procedures against each partition table simultaneously reduces the amount of time these procedures take to run. However, this adds a layer of complexity to administration, and this complexity increases if you partition over filegroups or databases. If you are interested in using partitioned views for this purpose, give careful consideration to how much usage each member table will receive, as well as to maintenance issues that may become relevant.

If you are attempting to alleviate disk contention, the best approach is to solve this as a standard disk I/O problem. Partitioning data on the same server does not provide noticeable performance gains, and most likely you would be better served to use a single table with good indexes that contain all the data.

A distributed partitioned view has several implementation options. The standard implementation is to use every server in the federation in the same manner. This involves partitioning data across all the servers and putting a distributed partitioned view on each server, so each server can be queried by the application, and provide the same result. You would not partition every table in your database, just those whose usage is appropriate for partitioning.

(Illustration: data partitioned across all servers in the federation, with a distributed partitioned view on each server)

When you are designing partitioned views for a distributed environment, carefully consider how the data will be routed to the member tables. The most important goal is to minimize the processing done on each server to return the requested data, ensuring reasonable performance in such a system. You should be able to collocate related data on the same member server, and then route each SQL statement to a member server that contains most, if not all, of the data required to process the statement. Examining the WHERE clause of the queries is also a good way of determining what data should be collocated.

For example, you may find that all the sales, customer, sales representative, and inventory tables in a database can be partitioned by sales region, and that most SQL statements only reference data in a single region. You can then create member servers where each server has the horizontally partitioned data for one or more regions. The partitioned view will submit any generated SQL statement only to the member server containing the data for that region. The only SQL statements that will generate distributed queries are those that reference data from multiple regions.

Careful design of the partitioned data is the most time consuming element in creating partitioned views. It is important to remember that distributed partitioning is all related to data usage. The steps to create the member tables and partitioned views are straightforward:

  1. Create and populate the member tables on the appropriate member servers. 

  2. Add linked server definitions on each member server (for distributed partitions only). 

  3. Set the lazy schema validation option on each member server (for distributed partitions only). 

  4. Create the distributed partitioned view on each member server. 
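
To illustrate steps 2 and 3, the following is a minimal sketch of the commands you might run on one member server. The linked server name, data source, and login mapping shown here are placeholders for illustration only; adjust them for your environment and security model.

-- Define another member server as a linked server (repeat for each remote member).
EXEC sp_addlinkedserver
   @server = 'LinkedServer2',
   @srvproduct = '',
   @provider = 'SQLOLEDB',
   @datasrc = 'Server2'

-- One option for login mapping: connect using the caller's own credentials.
EXEC sp_addlinkedsrvlogin
   @rmtsrvname = 'LinkedServer2',
   @useself = 'true'

-- Defer schema checks against the remote member tables until they are actually accessed.
EXEC sp_serveroption 'LinkedServer2', 'lazy schema validation', 'true'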

How Partitioned Views Work

SQL Server has enhancements to optimize support of distributed partitioned views. Improvements to the query processor, which make better use of OLE DB between linked servers, significantly increase access speed to remote data. As queries are run against each partition for the first time, execution plans related to each partition are stored for reuse. If the execution plans remain in the cache and there have been no schema changes, the optimized plans are reused to eliminate needless trips to remote servers. In addition, if lazy schema validation is enabled, schema validation for remote partitions occurs only when there is a need to access them.

Another enhancement, function shipping, is an improvement to the way SQL Server works. Function shipping allows requests to be passed to the remote servers, so the function is shipped rather than the data. The goal of function shipping is that fewer rows are shipped over the wire. It is not specific to distributed partitioned views, although it benefits queries of partitioned data. Function shipping minimizes the amount of data transferred between the remote servers. This effect cannot be planned; it is intrinsic to the way the engines work. Certain queries will still invoke data shipping, which is slower; mainly these are queries that require all the data to be on the calling server before the result set can be processed to provide the answer the user wanted. The result is that the amount of distributed data transferred between servers is minimized, while partition-aware execution plans allow a larger set of queries to benefit from the performance gains of partitioned views.

Creating Partitioned Views

Partitioned views are created by horizontally dividing a table into many smaller member tables and then overlaying those smaller tables with a partitioned view that merges all the data so the table divisions are transparent to the end user or application. Whether the view is really made up of one or ten tables, it looks and acts like one entity. Each member table must have an identical schema, with one difference: A CHECK constraint on the partitioning column limits the range of the data for that particular member table. When you are creating partitioned views, consider the following:

  • The servers participating in a distributed partitioned view must all use SQL Server 2000. The reason for this is the schema validation. The table structures must be identical in every way (including collation) except for the check constraint on the partitioning column. 

  • You can modify data through a partitioned view only if it runs under SQL Server 2000 Enterprise Edition or SQL Server 2000 Developer Edition. 

  • Member tables updated through the partitioned view must not contain triggers or cascading actions. 

For example, this illustration shows a Customer table segmented into three member tables using the CustomerID column as the partitioning column.

(Illustration: a Customers table segmented into three member tables, using CustomerID as the partitioning column)

The following Transact-SQL example creates the member table for all customers with a customer ID between 'AAAAA' and 'HZZZZ'. The constraint on the CustomerID column identifies it as the partitioning column. In our examples here, the prefix 'pm' stands for partition member. Avoid naming your partitions after the range of data they contain. Because partition members in a distributed partitioned view are located on different computers, in most cases you should avoid numbering your partition member tables. Consistently using standard four-part naming will make your code more portable; the table names would be the same everywhere, distinguished by their explicit four-part name: server.database.owner.table.

CREATE TABLE pm_Customers (
   CustomerID char(5) PRIMARY KEY NOT NULL
      CONSTRAINT CustRange1 CHECK (CustomerID BETWEEN 'AAAAA' AND 'HZZZZ'),
      -- This constraint creates the partitioning column
   CompanyName varchar(40) NOT NULL,
   Address varchar(60),
   City varchar(15),
   Region varchar(15),
   PostalCode varchar(10),
   Country varchar(15) )

Regardless of the physical placement of the member tables in data storage, the partitioned view provides seamless integration of data access to all the member tables. The syntax for creating the view uses the UNION ALL operator to combine the member tables, as shown in the following example:

CREATE VIEW Customers AS
SELECT * FROM LinkedServer1.Northwind.dbo.pm_Customers
UNION ALL
SELECT * FROM LinkedServer2.Northwind.dbo.pm_Customers
UNION ALL
SELECT * FROM LinkedServer3.Northwind.dbo.pm_Customers
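
Once the view is in place on each member server, applications can reference it just as they would the original Customers table. The following brief usage sketch assumes the member tables above; the customer ID value is illustrative only.

-- The optimizer uses the CHECK constraints to send this request to the member table
-- whose range covers the key, rather than querying every server.
SELECT CustomerID, CompanyName, City
FROM Customers
WHERE CustomerID = 'BERGS'

-- On SQL Server 2000 Enterprise (or Developer) Edition, modifications through the view
-- are routed to the correct member table as well.
UPDATE Customers
SET City = 'London'
WHERE CustomerID = 'BERGS'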

For more information about setting up partitioned views, see "Creating a Partitioned View" in SQL Server Books Online.

Partitioned Query Plans

Based on the query and the constraints, redundant partitions are discarded at compile time if possible or at run time with a mechanism called startup filters or dynamic filters. A startup filter is a condition on the parameters in the query that indicates if the partition needs to be accessed. The cost of evaluating a startup filter locally is dramatically lower than going to a remote server, executing a query and then discovering that no rows qualify.

To see your partitioned query running, examine the showplan output and look for the startup filters. You will see that all the partitions are acknowledged in the plan, but the startup filter registers the range of values that is available per partition. You can use SET STATISTICS IO ON and SET STATISTICS PROFILE ON for more complete information, as in the following example.

CREATE TABLE T1(I INT PRIMARY KEY CHECK (I < 0))
CREATE TABLE T2(I INT PRIMARY KEY CHECK (I >= 0))
GO

CREATE VIEW V AS
SELECT * FROM T1
UNION ALL
SELECT * FROM T2
GO

CREATE PROC P @P INT = NULL AS SELECT * FROM V WHERE I = @P
GO

When you execute the procedure, you get the showplan output below. When the query runs, the startup expressions prune redundant partitions dynamically. When the key values are not known at the time the plan is built, SQL Server builds an execution plan with conditional logic, called startup filters, that controls which member table is accessed based on the input parameter value. You will see in the graphical query analyzer view that all of the remote partitions are acknowledged in the display. At run time, the unnecessary partitions are pruned from the execution plan, based on the parameter supplied, and in the statistics profile you will see that only one partition is actually queried.

SET STATISTICS IO ON
SET STATISTICS PROFILE ON

EXEC P 

STATISTICS IO shows that neither table was scanned, as no parameter was supplied:

Table 't2'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0.
Table 't1'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0.

STATISTICS PROFILE shows the same, with more detail. Notice the Startup Expr on the filter:

Executes   StmtText
1          select * from v where i = @p
1          |--Concatenation
1               |--Filter(WHERE:(STARTUP EXPR([@p]<0)))
0               |    |--Clustered Index Seek(OBJECT:([t1].[PK1]), SEEK:([t1].[i]=[@p]) ORDERED FORWARD)
1               |--Filter(WHERE:(STARTUP EXPR([@p]>=0)))
0                    |--Clustered Index Seek(OBJECT:([t2].[PK2]), SEEK:([t2].[i]=[@p]) ORDERED FORWARD)

EXEC P 1

STATISTICS IO shows that table t2 was scanned; based on the parameter supplied, this was the correct partition to scan.

Table 't2'. Scan count 1, logical reads 0, physical reads 0, read-ahead reads 0.

Table 't1'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0.

STATISTICS PROFILE now shows that table t2 had an execute performed against it.

Executes   StmtText
1          select * from v where i = @p
1          |--Concatenation
1               |--Filter(WHERE:(STARTUP EXPR([@p]<0)))
0               |    |--Clustered Index Seek(OBJECT:([t1].[PK1]), SEEK:([t1].[i]=[@p]) ORDERED FORWARD)
1               |--Filter(WHERE:(STARTUP EXPR([@p]>=0)))
1                    |--Clustered Index Seek(OBJECT:([t2].[PK2]), SEEK:([t2].[i]=[@p]) ORDERED FORWARD)

An important part of preparing to partition data involves analyzing the queries and execution plans. A good practice would be to add some audit code to your queries, or run SQL Profiler to establish baseline information on the specific queries used, the data they access and how often. This information will provide valuable insights on the best possibilities for distributing your data.

Data-Dependent Routing

There are two main ways to access partitioned data. One is by partitioned views. Another method, which can be used instead of partitioned views or in combination with them, is data-dependent routing. This method uses code to determine where the target data is located, and routes connections to the appropriate server.

The goal of partitioning is to distribute the workload among physical resources: disks or servers. As with any other type of system, application design is the most influential part of performance. To optimize performance, simplify and reduce distributed processing; eliminate all unnecessary round trips to remote servers.

Data-dependent routing is an application-based approach. How the data is laid out is no different than with distributed partitioned views. In data-dependent routing, which can be used in more situations than just federated servers, the information on how to locate the data is made available to the application. Data-dependent routing does not use the linked server connections; instead, it sends the data requests directly to the appropriate server. For this to be most effective, the databases containing the partitioned table should be identical in terms of name and structure.

The standard federated server design uses both distributed partitioned views and data-dependent routing, in order to take full advantage of all possible performance opportunities. A simple way to do this is to create a routing table that contains the information on which server has the data the application is requesting. Deciding where to put the routing table is important. For ease of administration, you can put the table in SQL Server, where it can be updated easily and as often as needed. However, you should not query this table every time you need to determine which partition holds the data; instead, the routing information should be cached in the application layer using a method such as data caching on Microsoft Internet Information Services (IIS).

Putting the routing table in a COM+ object within the code is another possibility, but the tradeoff comes when it needs to be updated: updating the table requires the code itself to be updated and recompiled. You can also store the table in a registry setting or a text file. Instead of using a routing table, you could choose to use a custom hashing algorithm based on the partitioning column to send the data to the correct partition or server. This facilitates repartitioning because you can alter the algorithm to change the number of partitions. Determine which method will best allow you to manage the data partitions, which is done by moving the data around to balance usage.
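
As a sketch of the routing-table approach, the following example uses hypothetical table, column, and server names. The application would cache the contents of this table (for example, in the middle tier) and use it to choose a connection, rather than querying it on every request.

-- Hypothetical routing table: one row per partition, keyed on the partitioning column range.
CREATE TABLE PartitionRoutes (
   LowKey     char(5)  NOT NULL,
   HighKey    char(5)  NOT NULL,
   ServerName sysname  NOT NULL,
   PRIMARY KEY (LowKey, HighKey) )

INSERT PartitionRoutes VALUES ('AAAAA', 'HZZZZ', 'Server1')
INSERT PartitionRoutes VALUES ('IAAAA', 'QZZZZ', 'Server2')
INSERT PartitionRoutes VALUES ('RAAAA', 'ZZZZZ', 'Server3')

-- Resolving a key to a server; in practice this lookup runs against the cached copy.
SELECT ServerName
FROM PartitionRoutes
WHERE 'BERGS' BETWEEN LowKey AND HighKey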

An example of a data-dependent routing solution is included on the SQL Server 2000 Resource Kit CD-ROM, in the folder \ToolsAndSamples\FederatedToolKitSamples.

Other Options

The following other options can be used, as part of a standard federated configuration, to achieve partitioning with SQL Server.

Replication

If data is read-only, replication may be a strong candidate. This is especially helpful in an environment such as an e-commerce Web site, where it could be used for catalog information, or in environments that report on or use the same data but are separated by network latency (for example, Tokyo accessing a New York server).

Adding a Unique Column

Instead of looking to partition the data to other servers, or even to multiple databases in one instance of SQL Server, you can add a column with a unique value that could be queried on. This may also require minimal code changes to the application. An example is adding a state_id column to a sales receipt table. This would allow an application to grab the right state data easily. Later, if the table gets too large, this modification will make true partitioning easy.
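
A minimal sketch of this idea follows, using a hypothetical SalesReceipts table; the column and constraint names are illustrative only.

-- Add a state identifier so queries (and any future partitioning) can target one slice of the data.
ALTER TABLE SalesReceipts
ADD state_id char(2) NOT NULL
CONSTRAINT DF_SalesReceipts_state DEFAULT 'NA'

-- The application can then isolate one state's data easily.
SELECT ReceiptID, SaleAmount
FROM SalesReceipts
WHERE state_id = 'WA'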

INSTEAD OF Trigger

If a view is not a partitioned view (one that does not follow the guidelines outlined in this chapter), it can be made updatable by using an INSTEAD OF trigger. The query optimizer may not always be able to build execution plans for a view utilizing an INSTEAD OF trigger that are as efficient as the plans for a true partitioned view.
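
As a hedged sketch, the following trigger makes a simple UNION ALL view updatable for inserts by routing each row to the member table whose range covers its key. The view and table names are illustrative only, and a complete solution would handle UPDATE and DELETE similarly.

-- Route inserts against the view to the appropriate member table by key range.
CREATE TRIGGER Customers_View_Insert
ON CustomersView
INSTEAD OF INSERT
AS
BEGIN
   INSERT pm_Customers_A_H
   SELECT * FROM inserted WHERE CustomerID BETWEEN 'AAAAA' AND 'HZZZZ'

   INSERT pm_Customers_I_Z
   SELECT * FROM inserted WHERE CustomerID > 'HZZZZ'
END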

Administration Considerations


Partition Maintenance

Successfully partitioning a large existing system usually requires a series of iterative improvements. The tables chosen for partitioning in each step are usually those that will give the highest performance gain at that time. Data usage shifts over time, so in order to remain effective, partitions will occasionally need to be adjusted.

Balancing the load among the tables, or repartitioning, can be challenging. Currently, there is not an automated method to do this. If you have made a miscalculation in planning your partitions, you may decide to repartition an entire table. For example, if you partition your order table by a sequential order number, you will find that the last partition in the group not only grows largest, but also is queried most frequently. If instead you partition by geographical region, you still may find that one area grows faster or is queried much more frequently. In either case, you will need to rebalance the load by moving some of this data around. This will be less disruptive if you do it more frequently, moving smaller amounts of data.

The most straightforward way to do this is to disable the relevant constraint temporarily while you shift the data between partitions, and then re-create the constraint when you are finished. If your application is able to use data-dependent routing exclusively, at least for the duration of the partition adjustment, you may not have much downtime. You should shift the data around first, update your routing table, update any data-dependent routing-related or COM objects that contain partition-specific information, refresh any caching of the data-dependent routing information, and finally, delete the rows that should no longer be in a particular partition because they have been moved to a better location.
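
A hedged sketch of that sequence follows, reusing the pm_Customers member table and CustRange1 constraint from the earlier example; the boundary values and linked server name are hypothetical.

-- On the member server gaining rows: drop the old range constraint.
ALTER TABLE pm_Customers DROP CONSTRAINT CustRange1

-- Copy in the rows for the range being shifted from the neighboring partition.
INSERT pm_Customers
SELECT * FROM LinkedServer2.Northwind.dbo.pm_Customers
WHERE CustomerID BETWEEN 'IAAAA' AND 'JZZZZ'

-- Re-create the constraint with the new, wider range; WITH CHECK validates existing rows.
ALTER TABLE pm_Customers
WITH CHECK ADD CONSTRAINT CustRange1
CHECK (CustomerID BETWEEN 'AAAAA' AND 'JZZZZ')

The matching deletes on the source partition, along with the routing table and view updates, would then follow in the order described above.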

If you add a new partition, you will have much the same type of work to perform, in addition to altering any distributed partitioned view or other SQL procedures, functions, or tasks (especially backup tasks) that need to be aware of the new partition.

Other than the maintenance of the partition, it is good to keep in mind that partitioned tables are just like any other table. Each server and database in a federation must be separately maintained. They still need index maintenance, and they still need to be backed up. Occasionally, you may want to run DBCC checks in order to be as thorough with maintenance as possible. One advantage to maintenance in a federation is that because each partition is an independent database and maintenance is run separately on each, you can run maintenance on all partitions at the same time. Each partition would also benefit from parallel processing during DBCC and index builds, a feature of SQL Server 2000 Enterprise Edition.

Disaster Recovery and Partitioning

The following sections provide tips on creating an optimal backup and restore strategy for partitioned databases.

Backing Up and Restoring Partitioned Databases

SQL Server 2000 does not require you to coordinate backups across member servers. If you do not require transactional consistency between member servers, backups can be taken of each database independently, without regard for the state of the other member databases. With this method, overhead for synchronization is minimized, so it has the lowest possible effect on transaction processing. Another option is to attempt to back up all databases simultaneously; however, this option is not recommended.

If transactional consistency – making sure all partitions can be backed up to the same point – is a requirement, you can achieve it in SQL Server 2000 by using named marks in the transaction log. This special mark allows a database to be restored to a named point and, with partitioned views, enables all databases containing member tables to be synchronized to the same point. This requires that the recovery model be set to Full for each database that is part of the partition.
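
The following hedged sketch shows how a named mark might be placed and later used as a restore point. The database, file, table, and mark names are illustrative only; each member database must be using the Full recovery model, and the marking transaction must modify data in every member database you want synchronized.

-- Place a named mark in the transaction log (a data modification is required for the mark to be recorded).
BEGIN TRANSACTION NightlySync WITH MARK 'Federation synchronization point'
   UPDATE Northwind.dbo.SyncAnchor SET LastMark = GETDATE()
COMMIT TRANSACTION

-- Later, restore a damaged member database to that mark.
RESTORE DATABASE Northwind
   FROM DISK = 'D:\Backup\Northwind_full.bak'
   WITH NORECOVERY

RESTORE LOG Northwind
   FROM DISK = 'D:\Backup\Northwind_log.trn'
   WITH STOPATMARK = 'NightlySync', RECOVERY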

During backup and restore operations, distributed partitioned views and data-dependent routing partitioned databases must remain synchronized. Because data-dependent routing is essentially code-controlled partitioning, you can employ methods such as marked backups. To stay transactionally current and avoid transaction loss, you can use coding or middle-tier technology.

Data-dependent routing can also take advantage of log marks. However, unlike with partitioned views, data can get out of synchronization because, from a partitioning standpoint, each server is independent of the others. This means that the servers are linked only in the application code, even if they are linked for the purpose of putting the mark in each transaction log.

To restore a partitioned database that uses data-dependent routing, roll each database back to the damaged server's last known good mark to ensure that all servers are synchronized. Rolling back can result in lost transactions. This may be handled with another technology, such as Message Queuing or Microsoft Component Services, or with code that uses XML to store the transaction state. If you choose one of these technologies, test carefully before relying on it to recover lost data.

For more information about marking transactions, see the topics "Backup and Recovery of Related Databases" and "Recovering to a Named Transaction" in SQL Server Books Online.

High Availability

Your data is now partitioned, but what happens if one server goes down? If you are using partitioned views, the entire view will become unavailable if the execution plan is not in the cache.

If you are using data-dependent routing, only the data on that one server will become unavailable. This also presents other challenges: if one server fails while the other servers that are part of the partition remain up and continue to accept changes, the data set gets out of sync, especially if the data on each server is necessary. For example, if a database partitioned over three servers is accessed only by data-dependent routing, these are essentially three separate databases. If the application inserts records into B and C, but not matching records into A, the application needs to be aware that A has gone down, and account for that. With a partitioned view, this is not as much of a concern.

There are two high-availability methods that complement partitioning: failover clustering and log shipping. For the first option, each server would need to be part of a failover cluster. If your database is partitioned across three virtual servers, the best solution would be a minimum of four servers to create the N+1 scenario using Windows 2000 Datacenter, which is described in Chapter 12, "Failover Clustering." Another option would be to have each instance be a separate virtual server on a dedicated SQL Server failover cluster, so if you have three database partitions, you would need six servers to make three independent failover clusters.

In conjunction with or in place of failover clustering, you can use log shipping to create a warm standby server. This can be one server to host all the partitioned databases (unless your databases do not have unique names), or a 1:1 ratio for partitioned server to log shipped server.

For more information about creating a highly available server, consult Chapter 16, "Five Nines: The Ultimate in High Availability." For more information about optimizing the database as a whole, see Chapter 33, "The Data Tier: An Approach to Database Optimization."
