Microsoft Time Series Algorithm
The Microsoft Time Series algorithm is a regression algorithm provided by Microsoft SQL Server 2005 Analysis Services (SSAS) for use in creating data mining models to predict continuous columns, such as product sales, in a forecasting scenario. While other Microsoft algorithms create models, such as a decision tree model, that rely on being given input columns to predict the predictable column, prediction for a time series model is based only on the trends that the algorithm derives from the original dataset while it is creating the model. The following diagram shows a typical model for forecasting sales of a product over time.
The model that is shown in the diagram is made up of two parts: historical information, at the left side of the graph, and predicted information, at the right side of the graph. The historical data represents the information that the algorithm uses to create the model, while the predicted data represents the forecasts that the model makes. The line that is formed by the combination of the historical and the predicted data is called a series. Each forecasting model must contain a case series, which is the column that distinguishes between points in a series. For example, because the data in the diagram shows the series for historical and forecasted bicycle sales over a period of several months, the date column is the case series.
An important feature of the Microsoft Time Series algorithm is its ability to perform cross prediction. This means that if you train the algorithm with two separate but related series, you can use the resulting model to predict the outcome of one series based on the behavior of the other series. For example, the observed sales of one product can influence the forecasted sales of another product.
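To illustrate the cross-prediction idea (not the algorithm's actual implementation, which uses autoregressive trees), the following Python sketch predicts one series from lagged values of a related series with an ordinary least-squares fit. The data, the one-step lag, and the `fit_lag1` helper are all hypothetical.

```python
# Hedged sketch of cross prediction: forecast series B from the
# behavior of series A. SSAS itself builds autoregressive trees;
# this uses a plain least-squares fit purely for illustration.

def fit_lag1(x, y):
    """Fit y[t] ~ a + b * x[t-1] by ordinary least squares."""
    xs = x[:-1]          # lagged predictor values
    ys = y[1:]           # target values, shifted one step later
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / \
        sum((xi - mx) ** 2 for xi in xs)
    a = my - b * mx
    return a, b

# Toy data: sales of product A lead sales of product B by one month.
a_sales = [100, 120, 140, 160, 180, 200]
b_sales = [55, 52, 62, 72, 82, 92]

intercept, slope = fit_lag1(a_sales, b_sales)
next_b = intercept + slope * a_sales[-1]   # forecast B from latest A
```

With these toy numbers the fit recovers the generating rule (B is about half of A's previous value plus two), so the forecast for B's next month is driven entirely by A's latest observation.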
The management team at Adventure Works Cycles company wants to predict monthly bicycle sales for the coming year. The company is particularly interested in whether the sales of different bicycle models are related to one another, that is, whether the sale of one bike model can be used to predict the sale of another model. By using the Microsoft Time Series algorithm on historical data from the past three years, the company can produce a data mining model that forecasts future bike sales. Additionally, the company can perform cross predictions to see whether the sales trends of individual bike models are related.
The Microsoft Time Series algorithm trains a model by using an autoregressive decision tree. Each model contains a key time column, which defines the time slices that the model will use. The algorithm relates a variable number of past items to each current item that is being predicted.
For a more detailed explanation about how the Microsoft Time Series algorithm works, see Autoregressive Tree Models for Time Series Analysis.
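The autoregressive prediction step can be sketched in a few lines of Python. This is a simplified illustration, not the ART model itself: the coefficients below are hypothetical rather than learned from data, and a real autoregressive tree would additionally split the series at decision-tree nodes.

```python
# Hedged sketch of the autoregressive idea behind the algorithm:
# each value is predicted from a fixed number p of preceding values.

def ar_predict(history, coeffs, intercept):
    """Predict the next value from the last len(coeffs) observations."""
    p = len(coeffs)
    recent = history[-p:]                  # the p most recent values
    return intercept + sum(c * v for c, v in zip(coeffs, recent))

sales = [1000, 1100, 1050, 1150, 1120]
# Hypothetical AR(2) coefficients (not learned here):
forecast = ar_predict(sales, coeffs=[0.3, 0.6], intercept=100)
```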
You can define input data for the Microsoft Time Series model in two ways. To understand the first method, consider the following table of input cases:
TimeID   Product   Sales   Volume
1/2001   A         1000    600
2/2001   A         1100    500
1/2001   B         500     900
2/2001   B         300     890
The TimeID column in the table contains a time identifier, and has two entries for each month: one for each product. The Product column defines a product in the database. The Sales column describes the gross profits of the specified product for that month, and the Volume column describes the quantity of the specified product that remains in the warehouse. In this case, the model would contain two predictable columns: Sales and Volume.
Alternatively, input data for the time series model could be defined as in the following table:
TimeID   A_Sales   A_Volume   B_Sales   B_Volume
1/2001   1000      600        500       900
2/2001   1100      500        300       890
In this table, the sales and volume columns are each split into two columns, each prefixed with the product name. As a result, only a single entry exists for each month in the TimeID column. The model would contain four predictable columns: A_Sales, A_Volume, B_Sales, and B_Volume.
While both methods of defining the input data cause the same information to be displayed in the model, the format of the input cases changes how you define the mining model.
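The relationship between the two layouts is a simple pivot on the product name. The following Python sketch, using the sample values from the tables above, shows how the interleaved rows of the first layout map to the one-row-per-time-slice columns of the second:

```python
# Sketch: pivoting the first (interleaved) layout into the second
# (one row per time slice). Column names such as "A_Sales" are built
# from the product name, as the text describes.

rows = [
    ("1/2001", "A", 1000, 600),
    ("2/2001", "A", 1100, 500),
    ("1/2001", "B", 500, 900),
    ("2/2001", "B", 300, 890),
]

pivoted = {}
for time_id, product, sales, volume in rows:
    slice_ = pivoted.setdefault(time_id, {})
    slice_[f"{product}_Sales"] = sales
    slice_[f"{product}_Volume"] = volume
```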
A time series algorithm requires that any column to be predicted be continuous. Only one case series is allowed for each model.
The Microsoft Time Series algorithm supports specific input column content types, predictable column content types, and modeling flags, which are listed in the following table.
Input column content types         Continuous, Key, Key Time, and Table
Predictable column content types   Continuous and Table
Modeling flags                     NOT NULL and REGRESSOR
All Microsoft algorithms support a common set of functions. However, the Microsoft Time Series algorithm also supports additional functions that are specific to time series prediction.
For a list of the functions that are common to all Microsoft algorithms, see Data Mining Algorithms. For more information about how to use these functions, see Data Mining Extensions (DMX) Function Reference.
The Microsoft Time Series algorithm does not support using the Predictive Model Markup Language (PMML) to create mining models.
The Microsoft Time Series algorithm supports several parameters that affect the performance and accuracy of the resulting mining model. The following table describes each parameter.
MINIMUM_SUPPORT
    Specifies the minimum number of time slices that are required to generate a split in each time series tree. The default is 10.

COMPLEXITY_PENALTY
    Controls the growth of the decision tree. Decreasing this value increases the likelihood of a split; increasing it decreases the likelihood of a split. The default is 0.1.

PERIODICITY_HINT
    Provides a hint to the algorithm about the periodicity of the data. For example, if sales vary by year and the unit of measurement in the series is months, the periodicity is 12. This parameter takes the format {n [, n]}, where n is any positive number. The n within the brackets [] is optional and can be repeated as often as needed. The default is {1}.

MISSING_VALUE_SUBSTITUTION
    Specifies the method that is used to fill gaps in historical data. By default, irregular gaps or ragged edges in the data are not allowed. The available methods for filling irregular gaps or edges are the previous value, the mean value, or a specific numeric constant.

AUTO_DETECT_PERIODICITY
    Specifies a numeric value between 0 and 1 that is used to detect periodicity. Setting this value closer to 1 favors the discovery of many near-periodic patterns and the automatic generation of periodicity hints. Dealing with many periodicity hints will likely lead to significantly longer model training times, but more accurate models. If the value is closer to 0, periodicity is detected only for strongly periodic data. The default is 0.6.

HISTORIC_MODEL_COUNT
    Specifies the number of historic models that will be built. The default is 1.

HISTORICAL_MODEL_GAP
    Specifies the time lag between two consecutive historic models. For example, setting this value to g causes historic models to be built for data that is truncated by time slices at intervals of g, 2*g, 3*g, and so on. The default is 10.
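The three substitution methods named by MISSING_VALUE_SUBSTITUTION can be illustrated with a short Python sketch. The `fill_gaps` function below is a hypothetical stand-in that mimics the described behavior, with None marking a gap in the historical series:

```python
# Sketch of the three gap-filling strategies that
# MISSING_VALUE_SUBSTITUTION names: previous value, mean value,
# or a caller-supplied numeric constant. None marks a gap.

def fill_gaps(series, method="previous", constant=0.0):
    known = [v for v in series if v is not None]
    mean = sum(known) / len(known)
    filled, last = [], None
    for v in series:
        if v is None:
            if method == "previous":
                v = last        # carry the prior observation forward
            elif method == "mean":
                v = mean        # mean of the observed values
            else:
                v = constant    # fixed numeric constant
        filled.append(v)
        last = v
    return filled

data = [10.0, None, 14.0, None, 18.0]
```

For example, `fill_gaps(data, "previous")` carries 10.0 and 14.0 forward into the gaps, while `fill_gaps(data, "mean")` fills both gaps with the series mean of 14.0.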
Note:
Model quality is very sensitive to the periodicity hint. If the hint that you give differs from the actual periodicity of the data, your results can be adversely affected.
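To see why periodicity matters, the following Python sketch scans candidate lags with a crude autocorrelation measure and picks the strongest one. This is only a rough analogue of the near-periodic pattern detection that AUTO_DETECT_PERIODICITY controls, not the algorithm's actual procedure:

```python
# Sketch: detecting the dominant period of a series by scanning
# candidate lags for the highest autocorrelation. Hypothetical
# helpers, not the SSAS implementation.
import math

def autocorr(series, lag):
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series)
    cov = sum((series[t] - mean) * (series[t - lag] - mean)
              for t in range(lag, n))
    return cov / var

def detect_period(series, max_lag):
    # Pick the lag (>= 2) whose values repeat most strongly.
    return max(range(2, max_lag + 1), key=lambda k: autocorr(series, k))

# Monthly data with a yearly cycle: lag 12 should score highest.
series = [math.sin(2 * math.pi * t / 12) for t in range(48)]
```

On this synthetic series, `detect_period(series, 14)` recovers the yearly cycle of 12 months, which is exactly the value you would otherwise supply through PERIODICITY_HINT.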
Change History
Release: 15 September 2007