Creating Distributed Partitioned Views

Before you implement a partitioned view, you must first partition a table horizontally. When you design the partitioning scheme, it must be clear which data belongs to each member table. The original table is replaced with several smaller member tables. Each member table has the same number of columns as the original table, and each column has the same attributes as the corresponding column in the original table, such as data type, size, and collation. If you are creating a distributed partitioned view, each member table resides on a separate member server. For the greatest location transparency, the member databases should have the same name on each member server, although this is not required. For example: Server1.CustomerDB, Server2.CustomerDB, Server3.CustomerDB.

Creating Member Tables

You design the member tables so that each table stores a horizontal slice of the original table based on a range of key values. The ranges are based on the data values in a partitioning column. The range of values in each member table is enforced by a CHECK constraint on the partitioning column, and ranges cannot overlap. For example, you cannot have one table with a range from 1 through 200000, and another with a range from 150000 through 300000, because it would not be clear which table contains the values from 150000 through 200000.

For example, suppose you are partitioning a Customer table into three tables. The CHECK constraints for these tables are the following:

-- On Server1:
CREATE TABLE Customers_33
  (CustomerID   INTEGER PRIMARY KEY
                CHECK (CustomerID BETWEEN 1 AND 32999),
  ... -- Additional column definitions)

-- On Server2:
CREATE TABLE Customers_66
  (CustomerID   INTEGER PRIMARY KEY
                CHECK (CustomerID BETWEEN 33000 AND 65999),
  ... -- Additional column definitions)

-- On Server3:
CREATE TABLE Customers_99
  (CustomerID   INTEGER PRIMARY KEY
                CHECK (CustomerID BETWEEN 66000 AND 99999),
  ... -- Additional column definitions)

Defining Distributed Partitioned Views

After you create the member tables, you define a distributed partitioned view on each member server, with each view having the same name. This enables queries that reference the distributed partitioned view name to run on one or more of the member servers. The system operates as if a copy of the original table is on each member server, but each server has only a member table and a distributed partitioned view. The location of the data is transparent to the application.

You build the distributed partitioned views by performing the following tasks:

  • Adding linked server definitions on each member server that contains the connection information required to run distributed queries on the other member servers. This gives a distributed partitioned view access to data on the other servers.

  • Setting the lazy schema validation option, by using sp_serveroption, for each linked server definition that is used in distributed partitioned views. This optimizes performance by making sure the query processor does not request metadata for any one of the linked tables until data is actually needed from the remote member table.

  • Creating a distributed partitioned view on each member server. The views use distributed SELECT statements to access data from the linked member servers and merge the distributed rows with rows from the local member table.

To create distributed partitioned views for the previous example, you must perform the following:

  • Add a linked server definition named Server2 with the connection information for Server2, and a linked server definition named Server3 for access to Server3.

  • Create the following distributed partitioned view:

    CREATE VIEW Customers AS
       SELECT * FROM CompanyDatabase.TableOwner.Customers_33
    UNION ALL
       SELECT * FROM Server2.CompanyDatabase.TableOwner.Customers_66
    UNION ALL
       SELECT * FROM Server3.CompanyDatabase.TableOwner.Customers_99
    
  • Perform the same steps on Server2 and Server3.
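The linked server and lazy schema validation steps above can be sketched as follows. This is a sketch only: the provider, data source, and security settings are assumptions that depend on your environment.

```sql
-- On Server1: add linked server definitions for the other member
-- servers. The provider and data source values shown are
-- placeholders for your environment.
EXEC sp_addlinkedserver
    @server     = N'Server2',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'Server2';
EXEC sp_addlinkedserver
    @server     = N'Server3',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'Server3';

-- Enable lazy schema validation so that metadata for the remote
-- member tables is requested only when their data is actually needed.
EXEC sp_serveroption N'Server2', 'lazy schema validation', 'true';
EXEC sp_serveroption N'Server3', 'lazy schema validation', 'true';
```

The same statements, with the server names adjusted, would be run on Server2 and Server3.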

Table Rules

Member tables are defined in the FROM clause in each SELECT statement in the view definition. Each member table must comply with the following rules:

  • Member tables cannot be referenced more than one time in the view.

  • Member tables cannot have indexes created on any computed columns.

  • Member tables must have all PRIMARY KEY constraints on the same number of columns.

  • Member tables must have the same ANSI padding setting. For more information about the ANSI padding setting, see SET ANSI_PADDING.
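Because the ANSI padding setting is captured when each member table is created, you can avoid depending on connection defaults by setting it explicitly before each CREATE TABLE statement. A minimal sketch:

```sql
-- ANSI_PADDING must be identical across all member tables; setting
-- it explicitly before CREATE TABLE avoids relying on connection
-- defaults, which can differ between sessions and servers.
SET ANSI_PADDING ON;
```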

Column Rules

Columns are defined in the select list of each SELECT statement in the view definition. Each column must comply with the following rules:

  • All columns in each member table must be included in the select list. SELECT * FROM <member table> is acceptable syntax.

  • Columns cannot be referenced more than one time in the select list.

  • The columns must be in the same ordinal position in the select list.

  • The columns in the select list of each SELECT statement must be of the same type. This includes data type, precision, scale, and collation. For example, the following view definition fails because the first columns of the two SELECT statements have different data types:

    CREATE VIEW NonUpdatable
    AS
    SELECT IntPrimaryKey, IntPartNmbr
    FROM FirstTable
      UNION ALL
    SELECT NumericPrimaryKey, IntPartNmbr
    FROM SecondTable
    

Partitioning Column Rules

Only one column can be used for partitioning, and it must exist on each member table. CHECK constraints identify the data available in each member table. The following additional rules apply:

  • The key ranges of the CHECK constraints in each table cannot overlap with the ranges of any other table. Any specific value of the partitioning column must map to only one table. The CHECK constraints can only use these operators: BETWEEN, IN, AND, OR, <, <=, >, >=, =.

  • The partitioning column cannot be an identity, default, or timestamp column.

  • The partitioning column must be in the same ordinal location in the select list of each SELECT statement in the view. For example, the partitioning column is always the first column in each select list or the second column in each select list, and so on.

  • The partitioning column cannot allow nulls.

  • The partitioning column must be a part of the primary key of the table.

  • The partitioning column cannot be a computed column.

  • There must be only one constraint on the partitioning column. If there is more than one constraint, SQL Server ignores all the constraints and will not consider them when determining whether the view is a partitioned view.

  • There are no restrictions on the updatability of the partitioning column.
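As an illustration of these rules, a member table could also partition on a discrete set of values by using IN. The table and column names below are hypothetical:

```sql
-- Hypothetical member table partitioned on a discrete region code.
-- The partitioning column (RegionCode) is NOT NULL, is part of the
-- primary key, and has exactly one CHECK constraint using one of the
-- allowed operators (IN).
CREATE TABLE Customers_East
  (CustomerID  INTEGER NOT NULL,
   RegionCode  CHAR(2) NOT NULL
               CHECK (RegionCode IN ('NY', 'NJ', 'CT')),
   PRIMARY KEY (CustomerID, RegionCode))
```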

A partitioning column that meets all these rules supports all the optimizations performed by the query optimizer. For more information, see Resolving Distributed Partitioned Views.

General Rules

Note

The following conditions do not apply to local partitioned views, in which all member tables reside on the same server. Local partitioned views are supported only for backward compatibility.

The following are some additional rules to consider:

  • A distributed partitioned view cannot be formed by using the EXCEPT or INTERSECT operators.

  • A distributed transaction will be started to guarantee atomicity across all nodes that are affected by the update.

  • The XACT_ABORT SET option must be set to ON.

  • smallmoney and smalldatetime columns in remote tables are mapped as money and datetime respectively. Therefore, the corresponding columns in the local tables should also be money and datetime.

  • No linked server used in the view can be a loopback linked server; that is, a linked server that points to the same instance of SQL Server.
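For example, a session could set XACT_ABORT before modifying data through the Customers view from the earlier example. The CompanyName column is an assumed column used for illustration:

```sql
-- XACT_ABORT must be ON before an INSERT, UPDATE, or DELETE through
-- the distributed partitioned view, so the implicit distributed
-- transaction rolls back completely on any error.
SET XACT_ABORT ON;

-- Hypothetical insert; CompanyName is an assumed column. The row is
-- routed to Server2's member table because 45000 falls within that
-- table's CHECK constraint range (33000 through 65999).
INSERT INTO Customers (CustomerID, CompanyName)
VALUES (45000, N'Contoso');
```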

A view that references partitioned tables without following all these rules may still be updatable if there is an INSTEAD OF trigger on the view. However, the query optimizer may not always be able to build execution plans for a view with an INSTEAD OF trigger that are as efficient as the plans for a partitioned view that follows all the rules.
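A minimal sketch of such an INSTEAD OF trigger, run on Server1 and assuming a hypothetical CompanyName column on the member tables:

```sql
-- Sketch only: routes inserted rows to the local member table when a
-- view does not qualify as an updatable partitioned view. CompanyName
-- is an assumed column for illustration.
CREATE TRIGGER Customers_Insert ON Customers
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO CompanyDatabase.TableOwner.Customers_33
        (CustomerID, CompanyName)
    SELECT CustomerID, CompanyName
    FROM inserted
    WHERE CustomerID BETWEEN 1 AND 32999;
    -- Rows in the other key ranges would be routed to the remote
    -- member tables on Server2 and Server3 in the same way.
END
```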