
Chapter 24 - Effective Strategies for Data Mining

Data mining is an effective set of analysis tools and techniques used in the decision support process. However, misconceptions about the role that data mining plays in decision support solutions can lead to confusion about and misuse of these tools and techniques.

Introduction


Databases were developed with an emphasis on obtaining data; more data meant more information. Professionals trained in decision support analysis analyzed such data and discovered information in the form of patterns and rules hidden in the relationships between its various attributes. This assisted in the business decision process by providing feedback on past business actions and helped to guide future decisions. The volume of captured data has grown to the extent that there is too much data from which to discover information easily. For example, sampling, a technique designed to reduce the total amount of data to be analyzed for meaningful information, fails because even a minimally sufficient statistical sample of data can mean millions of records.

In the business world, the current emphasis on data warehouses and online analytical processing (OLAP) registers this need to convert huge volumes of data into meaningful information. This information can then be converted into meaningful business actions, which provide more data to be converted into more information, and so on in a cyclical manner, creating a "closed loop" in the decision support process. Ideally, this "closed loop" behavior is the key behind such decision support strategies, recursively improving the efficacy of business decisions.

Unfortunately, because most businesses implement only the data warehouse and OLAP portions of this closed loop, they fail to secure true decision support. For example, obtaining customer demographic data and account data from an online transaction processing (OLTP) database, cleaning and transforming the data, loading the prepared data into a data warehouse, constructing and aggregating the data warehouse data into OLAP cubes for presentation, and then making such data available through data marts still does not provide such necessary insight as to why certain customers close their accounts or why certain accounts purchase certain services or products. Without this information, the business actions that attempt to reduce the number of closed accounts or improve sales of certain services or products can be ineffectual or even cause more harm than good.

It is frustrating to know that the information you want is available but only if the right questions are asked of the data warehouse or OLAP cube. The data mining tools in Microsoft® SQL Server™ 2000 Analysis Services provide a way for you to ask the right questions about data and, used with the right techniques, give you the tools needed to convert the hidden patterns and rules in such data into meaningful information.

Another use for data mining is supplying operational decision support. Unlike the closed loop decision support approach, in which the time between the discovery of information and the business decision resulting from the information can take weeks or months and is typically used to provide long-term business decision support, operational decision support can happen in minutes and is used to provide short-term or immediate decision support on a very small set of cases, or even on a single case.

For example, a financial client application can provide real-time analysis for customer support representatives in a banking call center. The client application, by using a data mining model to analyze the demographic information of a prospective customer, can determine the best list of products to cross-sell to the customer. This form of data mining is becoming more and more common as standardized tools, such as Analysis Services, become more accessible to users.

What Is Data Mining?

Simply put, data mining is the process of exploring large quantities of data in order to discover meaningful information about the data, in the form of patterns and rules. In this process, various forms of analysis can be used to discern such patterns and rules in historical data for a given business scenario, and the information can then be stored as an abstract mathematical model of the historical data, referred to as a data mining model. After a data mining model is created, new data can be examined through the model to see if it fits a desired pattern or rule. From this information, actions can be taken to improve results in the given business scenario.
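The build-then-apply cycle described in this section can be sketched with a deliberately simple stand-in for a real mining algorithm (Analysis Services would use decision trees or clustering); the attribute names and case data below are purely hypothetical:

```python
from collections import Counter, defaultdict

def train_model(cases, attribute, outcome):
    """Build a trivial 'data mining model': for each value of one
    attribute, record the most common historical outcome."""
    by_value = defaultdict(Counter)
    for case in cases:
        by_value[case[attribute]][case[outcome]] += 1
    return {value: counts.most_common(1)[0][0]
            for value, counts in by_value.items()}

def predict(model, case, attribute, default=None):
    """Examine a new, unseen case through the stored pattern."""
    return model.get(case[attribute], default)

# Historical cases: did customers in each age band close their accounts?
history = [
    {"age_band": "18-25", "closed": "yes"},
    {"age_band": "18-25", "closed": "yes"},
    {"age_band": "26-40", "closed": "no"},
    {"age_band": "26-40", "closed": "no"},
    {"age_band": "26-40", "closed": "yes"},
]

model = train_model(history, "age_band", "closed")
print(predict(model, {"age_band": "18-25"}, "age_band"))  # yes
```

The point of the sketch is the shape of the process, not the algorithm: historical cases are abstracted into a model, and new cases are then viewed through that model.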

Data mining is not a "black box" process in which the data miner simply builds a data mining model and watches as meaningful information appears. Although Analysis Services removes much of the mystery and complexity of the data mining process by providing data mining tools for creating and examining data mining models, these tools work best on well-prepared data to answer well-researched business scenarios—the GIGO (garbage in, garbage out) law applies more to data mining than to any other area in Analysis Services. Quite a bit of work, including research, selection, cleaning, enrichment, and transformation of data, must be performed first if data mining is to truly supply meaningful information.

Data mining and data warehouses complement each other. Well-designed data warehouses have handled the data selection, cleaning, enrichment, and transformation steps that are also typically associated with data mining. Similarly, data warehousing improves as data mining reveals which data elements are more meaningful than others in terms of decision support; this insight, in turn, improves the data cleaning and transformation steps that are so crucial to good data warehousing practices.

Data mining does not guarantee the behavior of future data through the analysis of historical data. Instead, data mining is a guidance tool, used to provide insight into the trends inherent in historical information.

For example, a data warehouse, without OLAP or data mining, can easily answer the question, "How many products have been sold this year?" An OLAP cube using data warehouse data can answer the question, "What has been the difference in volume of gross sales for products for the last five years, broken down by product line and sales region?" more efficiently than the data warehouse itself. Both approaches can deliver a solid, discrete answer based on historical data. However, questions such as "Which sales regions should be targeted for telemarketing instead of direct mail?" or "How likely is it that a particular product line would sell well, and in which sales regions?" are not easily answered through data warehouses or OLAP. These questions attempt to provide an educated guess about future trends. Data mining provides educated guesses, not answers, to such questions through analysis of existing historical data.

The difficulty typically encountered when using a data mining tool such as Analysis Services to create a data mining model is that too much emphasis is placed on obtaining a data mining model; very often, the model itself is treated as the end product. Although you can peruse the structure of a data mining model to understand more about the patterns and rules that constitute your historical data, the real power of data mining comes from using it as a predictive vehicle with current data. You can use the data mining model as a lens through which to view current data, with the ability to apply the patterns and rules stored in the model to predict trends in such data. The revealed information can then be used to perform educated business decisions. Furthermore, the feedback from such decisions can then be compared against the predicted result of the data mining model to further improve the patterns and rules stored in the model itself, which can then be used to more accurately predict trends in new data, and so on.

A data mining model is not static; it is an opinion about data, and as with any opinion, its viewpoint can be altered as new, known data is introduced. Part of the "closed loop" approach to decision support is that all of the steps within the loop can be increasingly improved as more information is known, and that includes data mining models. Data mining models can be retrained with more and better data as it becomes available, further increasing the performance of such a model.

Closed Loop Data Mining

Closed loop data mining is used to support long-term business decision support by analyzing historical data to provide guidance not just on the immediate needs of business intelligence, but also to improve the entire decision support process.

The following diagram illustrates the analysis flow used in closed loop data mining.

[Diagram: analysis flow in closed loop data mining]

In closed loop data mining, the analysis improves the overall quality of data within the decision support process, as well as improves the quality of long-term business decisions. Input for the data mining model is taken primarily from the data warehouse; Analysis Services also supports input from multidimensional data stores. The information gained from employing the data mining model is then used, either directly by improving data quality or indirectly by altering the business scenarios which supply data, to impact incoming data from the OLTP data store.

For example, one action involving closed loop data mining is the grooming and correction of data based on the patterns and rules discovered within data mining feedback. As mentioned earlier, many of the processes used to prepare data for data mining are also used by data warehousing solutions. Consequently, problems found in data during data mining generally reflect problems in the data in the data warehouse, and the feedback provided by data mining can improve data cleaning and transformation for the whole decision support process, including data warehousing and OLAP.

Closed loop data mining can take either a continuous view, in which data is continually analyzed against a data mining model to provide constant feedback on the decision support process, or a one-time view, in which a one-time result is generated and recommended actions are performed based on the provided feedback. Decisions involving closed loop data mining can take time, and time can affect the reliability of data mining model feedback. When constructing a data mining model for closed loop data mining, you should consider the time needed to act on information. Discovered information can become stale if acted on months after such information is reported.

Also, the one-time result process can be performed periodically, with predictive results stored for later analysis. This is one method of discovering significant attributes in data; if the predictive results differ widely from actual results over a certain period of time, the attributes used to construct the data mining model may be in question and can themselves be analyzed to discover relevance to actual data.
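The periodic comparison of stored predictive results against actual results can be sketched as a simple agreement check; the outcome values here are hypothetical:

```python
def agreement_rate(results):
    """Fraction of stored predictive results that match the actual
    outcomes observed over the same period."""
    agree = sum(1 for predicted, actual in results if predicted == actual)
    return agree / len(results)

# Hypothetical (predicted, actual) pairs collected over one period.
period_results = [("buy", "buy"), ("buy", "skip"),
                  ("skip", "skip"), ("buy", "buy")]

rate = agreement_rate(period_results)
if rate < 0.5:
    print("Predictions differ widely from actual results; "
          "re-examine the attributes used to build the model.")
```

A persistently low agreement rate over several periods is the signal, described above, that the attributes used to construct the model should themselves be analyzed for relevance.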

Closed loop data mining can also supply the starting point for operational data mining; the same models used for closed loop data mining can also be used to support operational data mining.

Operational Data Mining

Operational data mining is the next step for many enterprise decision support solutions. Once closed loop data mining has progressed to the point where a consistent, reliable set of data mining models can be used to provide positive guidance to business decisions, this set of data mining models can now be used to provide immediate business decision support feedback in client applications.

The following diagram highlights the analysis flow of operational data mining.

[Diagram: analysis flow of operational data mining]

As with closed loop data mining, input for the data mining model is taken from data warehousing and OLTP data stores. However, the data mining model is then used to perform immediate analysis on data entered by client applications. Either the user of the client application or the client application itself then acts upon the analysis information, with the resulting data being sent to the OLTP data store.

For example, financial applications may screen potential credit line customers by running the demographic information of a single customer, received by a customer service representative over the telephone, against a data mining model. If this is an existing customer, the model could be used to determine the likelihood of the customer purchasing other products the financial institution offers (a process known as cross-selling), or indicate the likelihood of a new customer being a bad credit risk.
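A minimal sketch of that cross-selling scenario, assuming a hypothetical trained model that stores per-segment purchase likelihoods (the segment keys, products, and probabilities are invented for illustration):

```python
def cross_sell_ranking(model, customer):
    """Rank products for one customer by the purchase likelihood the
    model stores for that customer's demographic segment."""
    segment = (customer["age_band"], customer["income_band"])
    likelihoods = model.get(segment, {})
    return sorted(likelihoods, key=likelihoods.get, reverse=True)

# Hypothetical trained model: per-segment product purchase likelihoods.
model = {
    ("26-40", "high"): {"brokerage": 0.62, "credit_line": 0.35, "cd": 0.10},
}

# One prospective customer, as entered by a call center representative.
caller = {"age_band": "26-40", "income_band": "high"}
print(cross_sell_ranking(model, caller))  # ['brokerage', 'credit_line', 'cd']
```

The essential property of operational data mining shows here: the model answers for a single case, in real time, rather than for an archived population.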

Operational data mining differs from the more conventional closed loop data mining approach because it does not necessarily act on data already gathered by a data warehousing or other archival storage system. Operational data mining can occur on a real-time basis, and can be supported as part of a custom client application to complement the decision support gathered through closed loop data mining.

Client-based data mining models, duplicated from server-based data mining models and trained using a standardized training case set, are an excellent approach for supporting operational data mining. For more information about how to construct client-based data mining models, see "Creating Data Mining Models" in this chapter.

The Data Mining Process


Analysis Services provides a set of easy-to-use, robust data mining tools. To make the best use of these tools, you should follow a consistent data mining process, such as the one outlined below:

  • Data Selection 

    The process of locating and identifying data for data mining purposes. 

  • Data Cleaning 

    The process of inspecting data for physical inconsistencies, such as orphan records or required fields set to null, and logical inconsistencies, such as accounts with closing dates earlier than starting dates. 

  • Data Enrichment 

    The process of adding information to data, such as creating calculated fields or adding external data for data mining purposes. 

  • Data Transformation 

    The process of transforming data physically, such as changing the data types of fields, and logically, such as increasing or decreasing granularity, for data mining purposes. 

  • Training Case Set Preparation 

    The process of preparing a case set for data mining. This may include secondary transformation and extract query design. 

  • Data Mining Model Construction 

    The process of choosing a data mining model algorithm and tuning its parameters, then running the algorithm against the training case set to construct a data mining model. 

  • Data Mining Model Evaluation 

    The process of evaluating the created data mining model against a case set of test data, also called a holdout set: the test cases are viewed through the data mining model, and the resulting predictive analysis is compared against the actual, known outcomes of those cases to determine predictive accuracy. 

  • Data Mining Model Feedback 

    After the data mining model has been evaluated, the data mining model can be used to provide analysis of unknown data. The resulting analysis can be used to supply either operational or closed loop decision support. 
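The evaluation step above, splitting known data into a training set and a holdout set and comparing predictions against actual outcomes, can be sketched as follows. The case data and the majority-vote stand-in for a real mining algorithm are illustrative only:

```python
import random

def holdout_split(cases, fraction=0.3, seed=42):
    """Randomly split cases into a training set and a holdout (test) set."""
    rng = random.Random(seed)
    shuffled = cases[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * fraction)
    return shuffled[cut:], shuffled[:cut]  # (training set, holdout set)

def accuracy(predictor, holdout, outcome):
    """Fraction of holdout cases whose predicted outcome matches the
    actual, known outcome recorded in the case."""
    hits = sum(1 for case in holdout if predictor(case) == case[outcome])
    return hits / len(holdout)

# Toy case set: most accounts stay open, a few close.
cases = [{"closed": "no"}] * 8 + [{"closed": "yes"}] * 2
train, holdout = holdout_split(cases)

# Trivial stand-in model: always predict the majority training outcome.
outcomes = [c["closed"] for c in train]
majority = max(set(outcomes), key=outcomes.count)
acc = accuracy(lambda case: majority, holdout, "closed")
```

Because the holdout cases were never used in training, the accuracy figure estimates how the model will behave on genuinely unknown data.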

If you are modeling data from a well-designed data warehouse, the first four steps are generally done for you as part of the process used to populate the data warehouse. However, even data warehousing data may need additional cleaning, enrichment, and transformation, because the data mining process takes a slightly different view of data than either data warehousing or OLAP processes.

Data Selection

There are two parts to selecting data for data mining. The first part, locating data, tends to be more mechanical in nature than the second part, identifying data, which requires significant input by a domain expert for the data. (A domain expert is someone who is intimately familiar with the business purposes and aspects, or domain, of the data to be examined.)

Locating Data

Data mining can be performed on almost every database, but several general database types are typically supported in business environments. Not all of these database types are suitable for data mining.

The recommended database types for data mining are listed below:

  • Enterprise Data Warehouse 

    For a number of reasons, a data warehouse maintained at the enterprise level is ideal for data mining. The processes used to select, clean, enrich, and transform data that will be used for data mining purposes are nearly identical to the processes used on data that will be used for data warehousing purposes. The enterprise data warehouse is optimized for high-volume queries and is usually designed to represent business entities in a dimensional format, making it easier to identify and isolate specific business scenarios. By contrast, OLTP databases are generally optimized for high-volume updates and typically represent an entity-relation (E-R) format. 

  • Data Mart 

    A data mart is a subset of the enterprise data warehouse, encapsulated for specific business purposes. For example, a sales and marketing data mart would contain a copy of the dimensional tables and fact tables kept in the enterprise data warehouse that pertain to sales and marketing business purposes. The tables in such a data mart would contain only the data necessary to satisfy sales and marketing research. 

    Because data marts are aggregated according to the needs of business users, most data marts are not suitable for data mining. However, a data mart designed specifically for data mining can be constructed, giving you the power of data mining in an enterprise data warehouse with the flexibility of additional selection, cleaning, enrichment, and transformation specifically for data mining purposes. Such data marts are known by a variety of terms, but they all serve this same purpose. 

    OLAP databases are often modeled as a data mart. Because their functionality and use are similar to other types of data marts, OLAP databases fit into this category neatly. OLAP databases are also aggregated according to the needs of business users, so the same issues apply. 

    Overaggregation can also cause problems when mining OLAP data. OLAP databases are heavily aggregated; indeed, the point of such data is to reduce the granularity of the typical OLTP or data warehouse database to an understandable level. This involves a great deal of summarization and "blurring" when it comes to viewing detailed information, including the removal of attributes unnecessary to the aggregation process. If there is too much summarization, there will not be enough attributes left to mine for meaningful information. This overaggregation can start well before the data reaches Analysis Services, as data warehouses typically aggregate fact table data. You should carefully review the incoming relational and OLAP data first before deciding to mine OLAP data.

Conversely, you should not mine data in the database types listed below.

  • OLTP database 

    OLTP databases, also known as operational databases, are not optimized for the kind of wholesale retrieval that data mining needs; marked performance impacts in access and transaction speed can occur on other applications that depend on the high-volume update optimization of such databases. Lack of pre-aggregation can also impact the time needed to train data mining models based on OLTP databases, because of the many joins and high record counts inherent in bulk retrieval queries executed on OLTP databases. 

  • Operational data store (ODS) database 

    The operational data store (ODS) database has come into popular use to process and consolidate the large volumes of data typically handled by OLTP databases. The business definition of an ODS database is fluid, but ODS databases are typically used as a "buffer zone" between raw OLTP data and applications that require access to such high-granularity data for functionality, but need to be isolated from the OLTP database for query performance reasons. 

    While data mining ODS databases may be useful, ODS databases are known for rapid changes; such databases mirror OLTP data with low latency between updates. The data mining model then becomes a lens on a rapidly moving target, and the user is never sure that the data mining model accurately reflects the true historical view of the data. 

Data mining is a search for experience in data, not a search for intelligence in data. Because developing this experience requires a broad, open view of historical data, most volatile transactional databases should be avoided.

When locating data for data mining, ideally you should use well-documented, easily accessible historical data; many of the steps involved in the data mining process involve free and direct access to data. Security issues, interdepartmental communications, physical network limitations, and so on can restrict free access to historical data. All of the issues that can potentially restrict such free access should be reviewed as part of the design process for implementing a data mining solution.

Identifying Data

This is one of the most important steps in the data mining process. The quality of the selected data ultimately determines the quality of the data mining models based on it. The process of identifying data for use in data mining roughly parallels the process used for selecting data for data warehousing.

When identifying data for data mining, you should ask the following three questions:

  1. Does this data meet the requirements for the proposed business scenario? 

    The data should not only match the purpose of the business scenario, but also its granularity. For example, attempting to model product performance information requires the product data to represent individual products, because each product becomes a case in a set of cases. 

  2. Is this data complete? 

    The data should have all of the attributes needed to accurately describe the business scenario. Remember that a lack of data is itself information; in the product performance scenario mentioned above, a lack of performance information about a particular product could indicate a positive performance trend for a family of products: the product may perform so well that no customer has reported any performance issues with it. 

  3. Does this data contain the desired outcome attributes? 

    When performing predictive modeling, the data used to construct the data mining model must contain the known desired outcome. Sometimes, to satisfy this requirement, a temporary attribute is constructed to provide a discrete outcome value for each case; this can be done in the data enrichment and data transformation steps. 

Data that can immediately satisfy these questions is a good place to start for data mining, but you are not limited to such data. The data enrichment and data transformation steps allow you to massage data into a more useful format for data mining, and marginally acceptable data can be made useful through this manipulation.
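Question 3 above mentions constructing a temporary attribute to hold a discrete outcome value. A minimal sketch, assuming hypothetical account records in which a closing date marks a closed account:

```python
from datetime import date

def add_outcome(cases):
    """Derive a temporary discrete outcome attribute for each case:
    an account counts as 'closed' if it carries a closing date."""
    for case in cases:
        case["outcome"] = "closed" if case.get("close_date") else "open"
    return cases

accounts = [
    {"account_id": 1, "close_date": date(1999, 6, 1)},
    {"account_id": 2, "close_date": None},
]
add_outcome(accounts)
print([c["outcome"] for c in accounts])  # ['closed', 'open']
```

In practice this kind of derivation belongs in the data enrichment and data transformation steps, performed in the temporary storage area rather than in the source database.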

Data Cleaning

Data cleaning is the process of ensuring that, for data mining purposes, the data is uniform in terms of key and attribute usage. Identifying and correcting missing required information, cleaning up "orphan" records and broken keys, and so on are all aspects of data cleaning.

Data cleaning is separate from data enrichment and data transformation because data cleaning attempts to correct misused or incorrect attributes in existing data. Data enrichment, by contrast, adds new attributes to existing data, while data transformation changes the form or structure of attributes in existing data to meet specific data mining requirements.

Typically, most data mining is performed on data that has already been processed for data warehousing purposes. However, some general guidelines for data cleaning are useful for situations in which a well-designed data warehouse is not available, and for applications in which business requirements call for cleaning of such data.

When cleaning data for data warehouses, the best place to start is at home; that is, clean data in the OLTP database first, rather than import bad data into a data warehouse and clean it afterward. This rule also applies to data mining, especially if you intend to construct a data mart for data mining purposes. Always try to clean data at the source, rather than try to model unsatisfactory data. Part of the "closed loop" in the decision support process should include data quality improvements, such as data entry guidelines and optimization of validation rules for OLTP data, and the data cleaning effort provides the information needed to enact such improvements.

Ideally, a temporary storage area can be used to handle the data cleaning, data enrichment, and data transformation steps. This allows you the flexibility to not only change the data itself, but also the meta data that frames the data. Data enrichment and transformation in particular, especially for the construction of new keys and relationships or conversion of data types, can benefit from this approach.

Cleaning data for data mining purposes usually requires the following steps:

  1. Key consistency verification 

    Check that key values are consistent across all pertinent data. They will most likely be used to identify cases or important attributes. 

  2. Relationship verification 

    Check that relationships between cases conform to defined business rules. Relationships that do not support defined business rules can skew the results of a data mining model, misleading the model into constructing patterns and rules that may not apply to a defined business scenario. 

  3. Attribute usage and scope verification 

    Generally, the quality and accuracy of a data attribute is in direct proportion to the importance of that data to the business. For a manufacturing business that creates parts and products for the aerospace industry, inventory information is crucial to the successful operation of the business, and will generally be more accurate and of higher quality than the contact information of the vendors that supply the inventory. 

    Check that the attributes used are being used as intended in the database, and that the scope or domain of selected attributes has meaning to the business scenario to be modeled. 

  4. Attribute data analysis 

    Check that the values stored in attributes reasonably conform to defined business rules. As with attribute usage and scope verification, the data for less business-critical attributes typically requires more cleaning than attributes vital to the successful operation of the business. 

    You should always be cautious about excluding or substituting values for empty attributes or missing data. Missing data does not always qualify as missing information. The lack of data for a specific cluster in a business scenario can reveal much information when asking the right questions. Consequently, you should be cautious when excluding attributes or data elements from a training case set. 
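The verification steps above can be sketched as simple checks; the record layouts and field names here are hypothetical, and a real cleaning pass would run comparable queries against the temporary storage area:

```python
from datetime import date

def find_orphans(children, parents, key):
    """Child records whose key value has no matching parent record
    (key consistency and relationship verification)."""
    parent_keys = {p[key] for p in parents}
    return [c for c in children if c[key] not in parent_keys]

def find_missing_required(records, required):
    """Records with a required field left empty or null."""
    return [r for r in records
            if any(r.get(field) in (None, "") for field in required)]

def find_date_conflicts(records):
    """Accounts whose closing date precedes their starting date,
    a logical inconsistency."""
    return [r for r in records
            if r.get("close_date") and r["close_date"] < r["open_date"]]

customers = [{"cust_id": 1}, {"cust_id": 2}]
orders = [{"cust_id": 2}, {"cust_id": 9}]  # cust_id 9 has no parent
orphans = find_orphans(orders, customers, "cust_id")

accounts = [{"open_date": date(1998, 1, 1), "close_date": date(1997, 5, 1)}]
conflicts = find_date_conflicts(accounts)
```

Records flagged by checks like these should, where possible, be corrected at the source rather than silently excluded, for the reasons discussed above.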

Data cleaning efforts directly contribute to the overall success or failure of the data mining process. This step should never be skipped, no matter the cost in time or resources. Although Analysis Services works well with all forms of data, it works best when data is consistent and uniform.

Data Enrichment

Data enrichment is the process of adding new attributes, such as calculated fields or data from external sources, to existing data.

Most references on data mining tend to combine this step with data transformation. Data transformation involves the manipulation of data, but data enrichment involves adding information to existing data. This can include combining internal data with external data obtained either from other departments or companies, or from vendors that sell standardized industry-relevant data.

Data enrichment is an important step if you are attempting to mine marginally acceptable data. You can add information to such data from standardized external industry sources to make the data mining process more successful and reliable, or provide additional derived attributes for a better understanding of indirect relationships. For example, data warehouses frequently provide preaggregation across business lines that share common attributes for cross-selling analysis purposes.
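A minimal sketch of enrichment, assuming a hypothetical external attribute (median household income keyed by postal prefix, as might be purchased from a vendor) joined to internal customer data, plus one derived field:

```python
# Hypothetical external data: median household income by the first
# three digits of the postal code.
external_income = {"981": 68000, "100": 72000}

def enrich(customers, external):
    """Add an external attribute and a calculated field to each customer."""
    for c in customers:
        prefix = c["postal_code"][:3]
        c["region_income"] = external.get(prefix)
        # Calculated field: balance relative to regional income.
        if c["region_income"]:
            c["balance_to_income"] = round(c["balance"] / c["region_income"], 3)
    return customers

customers = [{"postal_code": "98101", "balance": 3400}]
enrich(customers, external_income)
print(customers[0]["balance_to_income"])  # 0.05
```

Performing this join in a temporary storage area, as recommended below, keeps the external attributes and any new keys out of the established data warehouse.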

As with data cleaning and data transformation, this step is best handled in a temporary storage area. Data enrichment, in particular the combination of external data sources with data to be mined, can require a number of updates to both data and meta data, and such updates are generally not acceptable in an established data warehouse.

Data Transformation

Data transformation, in terms of data mining, is the process of changing the form or structure of existing attributes. Data transformation is separate from data cleaning and data enrichment for data mining purposes because it neither corrects existing attribute data nor adds new attributes; instead, it grooms existing attributes for the needs of data mining.

The guidelines for data transformation are similar to both data mining and data warehousing, and a large amount of reference material exists for data transformation in data warehousing environments. For more information about data transformation guidelines in data warehousing, see Chapter 19, "Data Extraction, Transformation, and Loading Techniques."

One of the most common forms of data transformation used in data mining is the conversion of continuous attributes into discrete attributes, referred to as discretization. Many data mining algorithms perform better when working with a small number of discrete attributes, such as salary ranges, rather than continuous attributes, such as actual salaries. This step, as with other data transformation steps, does not add information to the data, nor does it clean the data; instead, it makes data easier to model. Some data mining algorithm providers can discretize data automatically, using a variety of algorithms designed to create discrete ranges based on the distribution of data within a continuous attribute. If you intend to take advantage of such automatic discretization, ensure that your training case set has enough cases for the data mining algorithm to adequately determine representative discrete ranges.
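A minimal sketch of manual discretization; the salary boundaries and labels are invented for illustration, and in practice the boundaries would be chosen from the distribution of the continuous attribute:

```python
def discretize(value, boundaries, labels):
    """Map a continuous value to a discrete range label.
    len(labels) must be len(boundaries) + 1."""
    for bound, label in zip(boundaries, labels):
        if value < bound:
            return label
    return labels[-1]  # value is at or above the last boundary

salary_bounds = [30000, 60000, 100000]
salary_labels = ["low", "middle", "upper-middle", "high"]

print(discretize(45000, salary_bounds, salary_labels))  # middle
```

Automatic discretization by a data mining algorithm provider does essentially this, but derives the boundaries itself from the training case distribution, which is why the training case set must be large enough to be representative.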

Too many discrete values within a single attribute can overwhelm some data mining algorithms. For example, using postal codes from customer addresses to categorize customers by region is an excellent technique if you plan to examine a small region. If, by contrast, you plan on examining the customer patterns for the entire country, using postal codes can lead to 50,000 or more discrete values within a single attribute; you should use an attribute with a wider scope, such as the city or state information supplied by the address.

Training Case Set Preparation

The training case set is used to construct the initial set of rules and patterns that serve as the basis of a data mining model. Preparing a training case set is essential to the success of the data mining process. Generally, several different data mining models will be constructed from the same training case set, as part of the data mining model construction process. There are several basic guidelines used when selecting cases for the preparation of a training case set, but the usefulness of the selection is almost entirely based on the domain of the data itself.

Sampling and Oversampling

Typically, you want to select as many training cases as possible when creating a data mining model, ensuring that the training case set closely represents the density and distribution of the production case set. Select the largest possible training case set you can, to smooth the distribution of training case attributes. The process of creating such a representative set of data, called sampling, is best handled by selecting records completely at random. In theory, such random sampling should provide a truly unbiased view of data.
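A sketch of simple random sampling follows, using a fixed seed so the draw is repeatable; the seed and the 2 percent fraction are illustrative choices.

```python
import random

def sample_cases(case_ids, fraction, seed=0):
    """Draw a simple random sample (without replacement) of the given
    fraction of cases. A fixed seed makes the draw repeatable."""
    rng = random.Random(seed)
    k = int(len(case_ids) * fraction)
    return rng.sample(case_ids, k)

# A 2 percent random sample of a population of 100,000 case identifiers.
population = list(range(100000))
training = sample_cases(population, 0.02)
```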

However, random sampling does not always provide for specific business scenarios, and a large training case set may not always be best. For example, if you are attempting to model a rare situation within your data, you want to ensure that the frequency of occurrences for the desired situation is statistically high enough to provide trend information.

This technique of increasing the density of rare occurrences in a sample, called oversampling, influences the statistical information conveyed by the training case set. Such influence can be of great benefit when attempting to model very rare cases, sensitive cases in which positive confirmation of the existence of a case must first be made, or when the cases to be modeled occur within a very short period of time. For example, "no card" credit card fraud, in which a fraudulent credit card transaction occurs without the use of a credit card, represents about 0.001 percent of all credit card transactions stored in a particular data set. Sampling would theoretically return 1 fraud case per 100,000 transaction cases—while accurate, the model would overwhelmingly provide information on successful transactions, because the standard deviation for fraud cases would be unacceptably high for modeling purposes. The data mining model would be 99.999 percent accurate, but would also be completely useless for the intended business scenario—finding patterns in no-card fraud transactions.

Instead, oversampling is used to provide a larger number of fraudulent cases within the training case set; a higher proportion of fraudulent cases can provide better insight into the patterns behind fraudulent transactions. When creating a data mining model involving the probability of two likely outcomes, the rare outcome should represent approximately 10 to 40 percent of the training case set, with 20 to 30 percent considered ideal; oversampling achieves this ratio and provides a better statistical sample focused on the desired rare outcome. For example, the above credit card fraud training set is constructed from five years of transaction data, or approximately 50 million records, of which only 500 are fraudulent. If random sampling were used to construct a training case set with 1 million records (a 2 percent representative sample), only about 10 desired cases would be included. The training case set is instead oversampled so that fraudulent cases represent 10 percent of the total: all 500 fraudulent cases are extracted, and an additional 4,500 cases are randomly selected, producing a training case set with 5,000 cases. Oversampling has drawbacks, though, so use the technique carefully: because the ratio between rare and common occurrences in the training case set has changed, evaluation of a data mining model created with oversampled data must be handled differently.
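The chapter's numbers can be sketched as follows. The helper keeps every rare case and fills the rest of the set with randomly drawn common cases; the stand-in population is reduced from 50 million records so the sketch runs quickly.

```python
import random

def oversample(fraud_ids, legit_ids, target_fraud_share=0.10, seed=0):
    """Keep every rare (fraud) case and add enough randomly drawn common
    cases that rare cases make up the target share of the training set."""
    rng = random.Random(seed)
    total = int(len(fraud_ids) / target_fraud_share)
    legit_needed = total - len(fraud_ids)
    return list(fraud_ids) + rng.sample(legit_ids, legit_needed)

# 500 fraud cases; the non-fraud pool is a stand-in for ~50 million records.
fraud = list(range(500))
legit = list(range(500, 1_000_000))
training = oversample(fraud, legit)
```

With a 10 percent target share, 500 fraudulent cases yield a 5,000-case training set, matching the example above.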

The difficulty with this training case set is that one non-fraudulent case, in essence, represents 11,111 cases in the original data set. Evaluating a data mining model using this oversampled training case set means taking this ratio into account when computing, for example, the amount of lift provided by the data mining model when evaluating fraudulent transactions.
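The arithmetic behind that ratio can be sketched directly. The hit and miss counts passed to the hypothetical helper below are whatever an evaluation run produces; the population figures come from the chapter's example.

```python
# Each sampled non-fraud case stands in for many cases in the original data.
original_legit = 50_000_000 - 500   # non-fraud cases in the full data set
sampled_legit = 4_500               # non-fraud cases in the training set
legit_weight = original_legit / sampled_legit   # 11,111 cases per sample

def weighted_fraud_rate(fraud_hits, legit_misses):
    """Estimate the true fraud rate within a predicted-fraud group by
    weighting each non-fraud case back up to its original-population
    count. (The hit/miss counts are hypothetical evaluation results.)"""
    return fraud_hits / (fraud_hits + legit_misses * legit_weight)
```

Applied to the whole training set (500 fraud, 4,500 non-fraud), the weighted rate recovers the original 0.001 percent prevalence, which is exactly the correction needed when computing lift from oversampled data.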

For more information on how to evaluate an oversampled data mining model, see "Data Mining Model Evaluation" later in this chapter.

Selecting Training Cases

When preparing a training case set, you should select data that represents the expected outcome to be modeled as unambiguously as possible. The ambiguity of the selected training cases should be in direct proportion to the breadth of focus for the business scenario to be predicted. For example, if you are attempting to cluster failed products to discover possible failure patterns, selecting all failed products is appropriate for your training set. By contrast, if you are trying to predict product failure for specific products due to environmental conditions, you should select only those cases in which the specific product failed as a direct result of environmental conditions, not simply all failed products.

This may seem like adding bias to the training case set, but one of the primary reasons for wide variances between predicted and actual results in data mining models is that the patterns stored in the model are not relevant to prediction of the desired business scenario, and such irrelevant patterns are introduced in part by ambiguous training cases.

One of the difficulties encountered when selecting cases is the definition of a business scenario and desired outcome. For example, a common business scenario involves grouping cases according to a set of known attributes to discover hidden patterns. The clustering algorithm is used in just this way to discover hidden attributes; the clustering of cases based on exposed attributes can be used to reveal a hidden attribute, the key to the clustering behavior. So, the desired outcome may not have anything to do with the clusters themselves, but the hidden attribute discovered by the clustering behavior. Before you select cases, be sure you understand both the business scenario used to create the data mining model and the information produced by the created data mining model.

The training case set is not the only source of stored pattern and rule information for the data mining model. The data mining model evaluation step of the data mining process can allow you to refine this stored information with the use of additional case sets. The data mining model, through refinement, can unlearn irrelevant patterns and improve its prediction accuracy. But, the data mining model uses the training case set as its first step towards learning information from data, so your model will benefit through careful selection of training cases.

Data Mining Model Construction

The construction of a data mining model consists of selecting a data mining algorithm provider that matches the desired data mining approach, setting its parameters as desired, and executing the algorithm provider against a training case set. This, in turn, generates a set of values that reflects one or more statistical views on the behavior of the training case set. This statistical view is later used to provide insights into similar case sets with unknown outcomes.

This may sound simple, but the act of constructing a data mining model is much more than mere mechanical execution. The approach you use can decide the difference between an accurate but useless data mining model and a somewhat accurate but very useful data mining model.

Your domain expert, the business person who provides guidance into the data you are modeling, should be able to give you enough information to decide on an approach to data mining. The approach, in turn, assists in deciding the algorithm to be used and the cases to be modeled.

You should view the data mining model construction process as a process of exploration and discovery. There is no one formula for constructing a data mining model; experimentation and evaluation are key steps in the construction process, and a data mining process for a specific business scenario can go through several iterations before an effective data mining model is constructed.

Model-Driven and Data-Driven Data Mining

The two schools of thought on decision support techniques, model-driven and data-driven, serve as the endpoints of a spectrum, with many decision support techniques incorporating principles from both schools. Data warehousing, OLAP, and data mining break down into multiple components. Depending on the methodology and purpose of the component, each has a place in this spectrum.

This section focuses on the various methods and purposes of data mining. The following diagram illustrates some of these components and their approximate place in this spectrum.

Cc917651.c31_005(en-us,TechNet.10).gif

After data has been selected, actual data mining is usually broken down into the following tasks:

  • Classification 

    Classification is the process of using the attributes of a case to assign it to a predefined class. For example, customers can be classified at various risk levels for mortgage loan applications. Classification is best used when a finite set of classes can be defined—classes defined as high risk, medium risk, or low risk can be used to classify all customers in the previous example. 

  • Estimation 

While classification is used to answer questions from a finite set of classes, estimation is best used when the answer lies within an unknown, continuous set of answers. For example, census tract information can be used to predict household incomes. Classification and estimation techniques are often combined within a data mining model. 

  • Association 

    Association is the process of determining the affinity of cases within a case set, based on similarity of attributes. Simply put, association determines which cases belong together in a case set. Association can be used to determine which products should be grouped together on store shelves, or which services are most useful to package for cross-selling opportunities. 

  • Clustering 

    Clustering is the process of finding groups in scattered cases, breaking a single, diverse set of cases into several subsets of similar cases based on the similarity of attributes. Clustering is similar to classification, except that clustering does not require a finite set of predefined classes; clustering simply groups data according to the patterns and rules inherent in the data based on the similarity of its attributes. 

Each of these tasks will be discussed in detail later in this chapter. Classification and estimation are typically represented as model-driven tasks, while association and clustering are associated more often with data-driven tasks. Visualization, the process of viewing data mining results in a meaningful and understandable manner, is used for all data mining techniques, and is discussed in a later section.
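A toy contrast between the first two tasks may help fix the distinction; the cutoffs and coefficients below are invented purely for illustration.

```python
def classify_risk(debt_ratio):
    """Classification: map a case to one of a finite set of classes.
    The cutoffs are invented for illustration."""
    if debt_ratio < 0.2:
        return "low risk"
    if debt_ratio < 0.4:
        return "medium risk"
    return "high risk"

def estimate_income(median_tract_income, years_of_education):
    """Estimation: produce a value from a continuous range.
    The coefficients are invented for illustration."""
    return 0.8 * median_tract_income + 2000 * years_of_education
```

Classification always lands in one of the three named classes; estimation can return any value in a continuous range.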

Model-Driven Data Mining

Model-driven data mining, also known as directed data mining, is the use of classification and estimation techniques to derive a model from data with a known outcome, which is then used to fulfill a specific business scenario. The model is then compared against data with an unknown outcome to determine the likelihood of such data to satisfy the same business scenario. For example, a common illustration of directed data mining is account "churning," the tendency of users to change or cancel accounts. Generally speaking, the data mining model drives the process in model-driven data mining. Classification and estimation are typically categorized as model-driven data mining techniques.

This approach is best employed when a clear business scenario can be employed against a large body of known historical data to construct a predictive data mining model. This tends to be the "I know what I don't know" approach: you have a good idea of the business scenarios to be modeled, and have solid data illustrating such scenarios, but are not sure about the outcome itself or the relationships that lead to this outcome. Model-driven data mining is treated as a "black box" operation, in which the user cares less about the model and more about the predictive results that can be obtained by viewing data through the model.

Data-Driven Data Mining

Data-driven data mining is used to discover the relationships between attributes in unknown data, with or without known data with which to compare the outcome. There may or may not be a specific business scenario. Clustering and association, for example, are primarily data-driven data mining techniques. In data-driven data mining, the data itself drives the data mining process.

This approach is best employed in situations in which true data discovery is needed to uncover rules and patterns in unknown data. This tends to be the "I don't know what I don't know" approach: you can discover significant attributes and patterns in a diverse set of data without using training data or a predefined business scenario. Data-driven data mining is treated as a "white box" operation, in which the user is concerned about both the process used by the data mining algorithm to create the model and the results generated by viewing data through the model.

Which One Is Better?

Asking this question is akin to asking whether a hammer is better than a wrench; the answer depends on the job. Data mining depends on both data-driven and model-driven data mining techniques to be truly effective, depending on what questions are asked and what data is analyzed. For example, a data-driven approach may be used on fraudulent credit card transactions to isolate clusters of similar transactions. Clustering uses a self-comparison approach to find significant groups, or clusters, of data elements. The attributes of each data element are matched across the attributes of all the other data elements in the same set, and are grouped with records that are most similar to the sampled data element. After they are discovered, these individual clusters of data can be modeled using a model-driven data mining technique to construct a data mining model of fraudulent credit card transactions that fit a certain set of attributes. The model can then be used as part of an estimation process, also model-driven, to predict the possibility of fraud in other, unknown credit card transactions.

The various tasks are not completely locked into either model-driven or data-driven data mining. For example, a decision tree data mining model can be used for either model-driven data mining, to predict unknown data from known data, or data-driven data mining, to discover new patterns relating to a specific data attribute.

Data-driven and model-driven data mining can be employed separately or together, in varying amounts, depending on your business requirements. There is no set formula for mining data; each data set has its own patterns and rules.

Data Mining Algorithm Provider Selection

In Analysis Services, a data mining model is a flexible structure that is designed to support the nearly infinite number of ways data can be modeled. The data mining algorithm gives the data mining model shape, form, and behavior.

The two algorithms included in Analysis Services, Microsoft® Decision Trees and Microsoft Clustering, are very different in behavior and produce very different models, as described below.

Both algorithms can be used together to select and model data for business scenarios. For more information on using both algorithms in concert, see "Model-Driven and Data-Driven Data Mining" earlier in this chapter.

Microsoft Decision Trees

The Microsoft Decision Trees algorithm is typically employed in classification and estimation tasks, because it focuses on providing histogram information for paths of rules and patterns within data. One of the benefits of this algorithm is the generation of easily understandable rules. By following the nodes along a single series of branches, a rule can be constructed to derive a single classification of cases.

One of the criteria used for evaluating the success of a data mining algorithm is referred to as fit. Fit is typically represented as a value between 0 and 1, calculated by dividing the covariance between the predicted and actual values of evaluated cases by the product of the standard deviations of those same values, then squaring the result. This measurement is also referred to as r-squared. A value of 0 means that the model provides no predictive value at all, because none of the predicted values were even close to the actual values, while 1 means the model is a perfect fit, because the predicted values completely match the actual values of evaluated cases.
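As a sketch of this measurement, assuming the usual definition of the squared correlation coefficient:

```python
from math import sqrt

def fit(actual, predicted):
    """R-squared between actual and predicted values: covariance divided
    by the product of the standard deviations, then squared."""
    n = len(actual)
    mean_a = sum(actual) / n
    mean_p = sum(predicted) / n
    cov = sum((a - mean_a) * (p - mean_p)
              for a, p in zip(actual, predicted)) / n
    var_a = sum((a - mean_a) ** 2 for a in actual) / n
    var_p = sum((p - mean_p) ** 2 for p in predicted) / n
    r = cov / sqrt(var_a * var_p)
    return r * r

# Predictions that track the actual values perfectly score 1.0;
# predictions unrelated to the actual values score 0.0.
```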

However, a perfect fit is not as desirable as it sounds. One of the difficulties encountered with data mining algorithms in general is this tendency to perfectly classify every single case in a training case set, referred to as overfitting. The goal of a data mining model, generally speaking, is to build a statistical model of the business scenario that generates the data, not to build an exact representation of the training data itself. Such a data mining model performs well when evaluating training data extracted from a particular data set, but performs poorly when evaluating other cases from the same data set. Even well-prepared training case sets can fall victim to overfitting, because of the nature of random selection.

For example, the following table illustrates a training case set with five cases, representing customers with cancelled accounts, extracted from a larger domain containing thousands of cases.

Customer Name    Gender    Age    Account Months
Beth             Female    28     6
Dennis           Male      45     12
Elaine           Female    45     24
Michelle         Female    47     18
John             Male      37     36

The following diagram illustrates a highly overfitted decision tree, generated from the training case set, created by a data mining model.

Cc917651.c31_001(en-us,TechNet.10).gif

The decision tree perfectly describes the training data set, with a single leaf node per customer. Because the Age and Gender columns were used for input and the Account Months column was used as output, the model correctly predicts, for this training data set, that every female customer with an age of 45 will close her account in 24 months, while every male customer with an age of 45 will close his account in 12 months. This model would be practically useless for predictive analysis—the training set has too few cases to model effectively, and the decision tree generated for this training set has far too many branches for the data.

There are two sets of techniques used to prevent such superfluous branches in a data mining model while maintaining a good fit for the model. The first set of techniques, referred to as pruning techniques, allows the decision tree to completely overfit the model and then removes branches within the decision tree to make the model more generalized. This set of techniques is knowledge-intensive, typically requiring both a data mining analyst and a domain expert to properly perform pruning techniques.

The second set of techniques, referred to as bonsai or stunting techniques, is used to stunt the growth of the tree by applying tests at each node to determine if a split is statistically significant. The Microsoft Decision Trees data mining algorithm automatically employs stunting techniques on data mining models, guided by adjustable data mining parameters, and prevents overfitting training case sets in data mining models that use the algorithm.

There are two data mining parameters that can be adjusted to fine tune the stunting techniques used by the Microsoft Decision Trees algorithm. The first, MINIMUM_LEAF_CASES, determines how many leaf cases are needed to generate a new split in the decision tree. To generate the data mining model in the above example, this parameter was set to 1, so that each case could be represented as a leaf node in the decision tree. Running the same training case set against the same data mining model, but with the MINIMUM_LEAF_CASES parameter set to 2, provides the following decision tree.

Cc917651.c31_002(en-us,TechNet.10).gif 

The above decision tree diagram is less overfitted; one leaf node is used to predict two cases, while the other leaf node is used to predict the other three cases in the training data set. The algorithm was instructed not to make a decision unless two or more leaf cases would result from the decision. This is a "brute force" way of ensuring that not every case ends up a leaf case in a data mining model, in that it has obvious and easily understood effects on a data mining model.
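The effect of a minimum-leaf-cases gate can be sketched as a simple test applied before each candidate split; this is an illustrative stand-in, not the provider's actual implementation.

```python
def allow_split(cases_in_node, split_attribute, minimum_leaf_cases=2):
    """Permit a split only if every resulting child node would hold at
    least minimum_leaf_cases cases. A stand-in for the effect of the
    MINIMUM_LEAF_CASES parameter; the provider's real test is more
    involved."""
    children = {}
    for case in cases_in_node:
        children.setdefault(case[split_attribute], []).append(case)
    return all(len(child) >= minimum_leaf_cases
               for child in children.values())

# Five cases: splitting on gender yields children of 3 and 2 cases.
cases = [{"gender": "Female"}] * 3 + [{"gender": "Male"}] * 2
```

With the parameter at its example value of 2 the split is permitted; raising it to 3 blocks the split, forcing a more generalized tree.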

Using the second parameter, COMPLEXITY_PENALTY, involves more experimentation. The COMPLEXITY_PENALTY parameter adds cumulative weight to each decision made at a specific level in a decision tree, making it more difficult to continue to make decisions as the tree grows. The smaller the value provided to the COMPLEXITY_PENALTY parameter, the easier it is for the data mining algorithm to generate a decision. For example, the data mining model examples used to demonstrate the MINIMUM_LEAF_CASES parameter were created using a COMPLEXITY_PENALTY value of just 0.000001, to encourage a highly complex model from so few cases. Setting the value to 0.50, the default used for data mining models with between 1 and 10 attributes, greatly increases the complexity penalty. The following decision tree represents this penalization of model complexity.

Cc917651.c31_006(en-us,TechNet.10).gif 

Because the individual cases do not differ significantly, based on the total number of cases included in the training case set, the complexity penalty prevents the algorithm from creating splits. Therefore, the data mining algorithm provider can supply only a single node to represent the training case set; the data mining model is now too generalized. The value used for COMPLEXITY_PENALTY differs from data mining model to data mining model, because of the individuality of the data being modeled. The default values provided in SQL Server Books Online are based on the total number of attributes being modeled, and provide a good basis on which to experiment.
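The behavior of a depth-compounding complexity penalty can be sketched with an invented scoring scheme; the provider's actual formula is not documented here, so treat this purely as an illustration of the effect.

```python
def allow_split_with_penalty(split_score, depth, complexity_penalty=0.5):
    """Accept a candidate split only if its score beats a penalty that
    compounds with tree depth. The scoring scheme is invented to mimic
    the effect of COMPLEXITY_PENALTY, not the provider's real formula."""
    required = complexity_penalty * (depth + 1)
    return split_score > required

# A near-zero penalty lets even weak splits through at the root;
# the default 0.50 suppresses them, and deeper splits need higher scores.
```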

When using data mining parameters to alter the process of generating data mining models, you should create several versions of the same model, each time changing the data mining parameters and observing the reaction in the data mining model. This iterative approach will provide a better understanding of the effects of the data mining parameters on training a data mining model when using the Microsoft Decision Trees algorithm.

The Microsoft Decision Trees algorithm works best with business scenarios involving the classification of cases or the prediction of specific outcomes based on a set of cases encompassing a few broad categories.

Microsoft Clustering

The Microsoft Clustering algorithm provider is typically employed in association and clustering tasks, because it focuses on providing distribution information for subsets of cases within data.

The Microsoft Clustering algorithm provider uses an expectation-maximization (EM) algorithm to segment data into clusters based on the similarity of attributes within cases.

The algorithm iteratively reviews the attributes of each case with respect to the attributes of all other cases, using weighted computation to determine the logical boundaries of each cluster. The algorithm continues this process until all cases belong to one (and only one) cluster, and each cluster is represented as a single node within the data mining model structure.
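A hard-assignment simplification of this iterative process (closer to k-means than to true EM, which weights cluster membership softly) can be sketched in one dimension:

```python
def cluster_1d(values, centers, iterations=10):
    """Repeatedly assign each value to its nearest center, then move each
    center to the mean of its members. A hard-assignment simplification
    of the provider's expectation-maximization approach."""
    centers = list(centers)
    for _ in range(iterations):
        members = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)),
                          key=lambda i: abs(v - centers[i]))
            members[nearest].append(v)
        centers = [sum(m) / len(m) if m else c
                   for m, c in zip(members, centers)]
    return centers

# Two natural age groups emerge without any external pattern applied.
ages = [23, 25, 27, 61, 63, 65]
centers = cluster_1d(ages, centers=[30, 60])
```

The centers settle on the two natural groupings in the data, illustrating how clusters are self-defining within the domain of the case set.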

The Microsoft Clustering algorithm provider is best used in situations where possible natural groupings of cases may exist, but are not readily apparent. This algorithm is often used to identify and separate multiple patterns within large data sets for further data mining; clusters are self-defining, in that the variations of attributes within the domain of the case set determine the clusters themselves. No external data or pattern is applied to discover the clusters internal to the domain.

Creating Data Mining Models

Data mining models can be created in a number of ways in Analysis Services, depending on the location of the data mining model. Data mining models created on the Analysis server can be created only through the Decision Support Objects (DSO) library. Analysis Manager uses DSO in its Mining Model Wizard to create new relational or OLAP data mining models. Custom client applications can also use DSO to create relational or OLAP data mining models on the server.

Relational data mining models can be also created on the client through the use of PivotTable Service and the CREATE MINING MODEL statement. For example, the following statement can be used to recreate the Member Card RDBMS data mining model from the FoodMart 2000 database on the client.

CREATE MINING MODEL
[Member Card RDBMS]
([customer id] LONG KEY,
[gender] TEXT DISCRETE,
[marital status] TEXT DISCRETE,
[num children at home] LONG CONTINUOUS,
[total children] LONG DISCRETE,
[yearly income] TEXT DISCRETE,
[education] TEXT DISCRETE,
[member card] TEXT DISCRETE PREDICT)
USING
Microsoft_Decision_Trees

This statement can be used to create a temporary data mining model, created at the session level, as well as to create a permanent data mining model, stored on the client. To create a permanent data mining model on the client, the Mining Location PivotTable Service property is used to specify the directory in which the data mining model will be stored. The same property is also used to locate existing permanent data mining models for reference.

The Data Mining Sample Application, provided with the SQL Server 2000 Resource Kit, is a great tool for prototyping data mining models. You can test each data mining model at session scope; once a data mining model is approved, the same query can be used to construct it locally.

The CREATE MINING MODEL statement can be issued as an action query through any data access technology capable of supporting PivotTable Service, such as Microsoft ActiveX® Data Objects (ADO). The USING clause is used to assign a data mining algorithm provider to the data mining model.

For more information on the syntax and usage of the CREATE MINING MODEL statement, see PivotTable Service Programmer's Reference in SQL Server Books Online. For more information regarding the details of data mining column definition, see the OLE DB for Data Mining specification in the MSDN® Online Library.

Training Data Mining Models

Once a data mining model is created, the training case set is then supplied to the data mining model through the use of a training query.

Training case sets can be constructed in either of two ways. In the first approach, the desired training data is physically separated from the larger data set into a different data structure, used as a staging area, and a training query then retrieves all of the records from that staging area. In the second approach, a training query extracts only the desired training data from the larger data set, querying it directly. The first approach is recommended, both for performance reasons and because the training query used for the data mining model does not need to be changed if the training case set changes; you can instead place alternate training data into the physically separated staging area. However, the first approach can be impractical if the volume of data to be transferred is extremely large or sensitive, or if the original data set does not reside in an enterprise data warehouse. In such cases, the second approach is more suitable for data mining purposes.

Once the records are extracted, the data mining model is trained by the use of an INSERT INTO query executed against the data mining model, which instructs the data mining algorithm provider to analyze the extracted records and provide statistical data for the data mining model.

In Analysis Services, the training query of a data mining model is typically constructed automatically, using the first approach. The information used to supply input and predictable columns to the data mining model is also used to construct the training query, and the schema used to construct the data mining model is used to supply the training data as well.

For example, the training query used for the Member Card RDBMS relational data mining model in the FoodMart 2000 database is shown below.

INSERT INTO 
[Member Card RDBMS] 
(SKIP, 
[gender], 
[marital status], 
[num children at home], 
[total children], 
[yearly income], 
[education], 
[member card]) 
OPENROWSET
('MSDASQL.1', 'Provider=MSDASQL.1;Persist Security Info=False;Data Source=FoodMart 2000', 
'SELECT DISTINCT 
"Customer"."customer_id" AS 'customer id', 
"Customer"."gender" AS 'gender', 
"Customer"."marital_status" AS 'marital status', 
"Customer"."num_children_at_home" AS 'num children at home',
"Customer"."total_children" AS 'total children', 
"Customer"."yearly_income" AS 'yearly income', 
"Customer"."education" AS 'education', 
"Customer"."member_card" AS 'member card' 
FROM 
"Customer"')

The INSERT INTO statement is used to insert the data retrieved by the OPENROWSET statement into the data mining model. The data mining model assumes that all records in the Customer table, which was used to define the data mining model, are to be used as the training case set for the data mining model.

The second approach, the construction of a custom training query, is more difficult to perform in Analysis Services. The property used to supply custom training queries is not directly available through the Analysis Manager or either of the data mining model editors.

There are two methods used to support the second approach. The first method involves the use of the Decision Support Objects (DSO) library in a custom application to change the training query used by the data mining model. The DSO MiningModel object provides the TrainingQuery property specifically for this purpose. If the default training query is used for a data mining model, this property is set to an empty string (""); otherwise, you can supply an alternate training query for use with the mining model.

The second method involves the use of another data access technology, such as ADO, to directly supply a training query to a data mining model. In this case, the training query can be directly executed against the data mining model.

The following statement example is a custom training query for the Member Card RDBMS data mining model that selects only those customers who own houses for analysis. A WHERE clause is used in the OPENROWSET statement to restrict the selection of records from the Customer table.

INSERT INTO 
[Member Card RDBMS] 
(SKIP, 
[gender], 
[marital status], 
[num children at home], 
[total children], 
[yearly income], 
[education], 
[member card]) 
OPENROWSET
('MSDASQL.1', 'Provider=MSDASQL.1;Persist Security Info=False;Data Source=FoodMart 2000', 
'SELECT DISTINCT 
"Customer"."customer_id" AS 'customer id', 
"Customer"."gender" AS 'gender', 
"Customer"."marital_status" AS 'marital status', 
"Customer"."num_children_at_home" AS 'num children at home',
"Customer"."total_children" AS 'total children', 
"Customer"."yearly_income" AS 'yearly income', 
"Customer"."education" AS 'education', 
"Customer"."member_card" AS 'member card' 
FROM 
"Customer"
WHERE
"Customer"."houseowner" = "Y"')

The resulting data mining model provides analysis on the same attributes, but with a different training case set. By using custom training queries, the same data mining model structure can be used to provide different outlooks on data without the need to completely redevelop a data mining model.

The Microsoft OLE DB for Data Mining provider supports a number of options in the INSERT INTO statement for selecting training data. The OPENROWSET statement, shown in the previous example, is the most common method used, but other methods are supported. For more information about the various supported options, see the OLE DB for Data Mining specification in the MSDN Online Library.

Also, the Data Mining Sample Application, shipped with the SQL Server 2000 Resource Kit, can be used to construct and examine a wide variety of training queries quickly and effectively.

Data Mining Model Evaluation

After the data mining model has been processed against the training data set, you should have a useful view of historical data. But how accurate is it?

The easiest way to evaluate a newly created data mining model is to perform a predictive analysis against an evaluation case set. This case set is constructed in the same manner as a training case set: it is a set of data with a known outcome. The data used for the evaluation case set should differ from that used in the training case set; otherwise, it is difficult to confirm the predictive accuracy of the data mining model. Evaluation case sets are often referred to as holdout case sets, and are typically created at the same time as the training case set, so that the same random sampling process can be used for both.
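As a sketch of that sampling step, a holdout case set might be carved out of the available cases as follows. This is a minimal illustration, not an Analysis Services feature; the 20 percent holdout fraction and the function name are assumptions for the example.

```python
import random

def split_cases(cases, holdout_fraction=0.2, seed=42):
    """Randomly partition cases into a training case set and a holdout
    (evaluation) case set using a single shared sampling pass."""
    rng = random.Random(seed)
    shuffled = list(cases)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_fraction))
    return shuffled[:cut], shuffled[cut:]

# Every case lands in exactly one of the two sets.
training, holdout = split_cases(range(1000))
```

Because both sets come from the same shuffled pass, the holdout set has the same expected distribution of outcomes as the training set, which is what makes it a fair basis for evaluation.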

Remove or isolate the outcome attributes from the evaluation case set, and then analyze the case set by performing prediction queries against the data mining model. After the analysis is completed, you have a set of predicted outcomes for the evaluation case set that can be compared directly against the known outcomes for the same set, producing an estimate of prediction accuracy for the known outcomes. This comparison, referred to as a confusion matrix, is a very simple way of communicating the benefits of a data mining model to business users. Conversely, the confusion matrix can also reveal problems with a data mining model if the comparison is unfavorable. Because a confusion matrix pairs actual and predicted outcomes on a case by case basis, it gives you the ability to pinpoint exactly where a data mining model is inaccurate.
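The comparison itself is simple to mechanize. The following sketch tallies actual versus predicted outcomes into a confusion matrix; the outcome values are invented for illustration.

```python
from collections import Counter

def confusion_matrix(actual, predicted):
    """Count (actual, predicted) outcome pairs on a case by case basis.
    Off-diagonal entries pinpoint exactly where the model is inaccurate."""
    return Counter(zip(actual, predicted))

# Hypothetical known outcomes and model predictions for five cases
actual    = ["Golden", "Silver", "Golden", "Normal", "Golden"]
predicted = ["Golden", "Golden", "Golden", "Normal", "Silver"]

matrix = confusion_matrix(actual, predicted)
# Diagonal entries (actual == predicted) are the correct predictions.
correct = sum(n for (a, p), n in matrix.items() if a == p)
```

Here the entry for ("Silver", "Golden") shows one case in which a Silver outcome was mispredicted as Golden, exactly the kind of case-level inaccuracy the text describes.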

This step can be divided into two different steps, depending on the needs of the data mining model. Before evaluating the data mining model, additional training data can be applied to the model to improve its accuracy. This process, called refinement, uses another training case set, called a test case set, to reinforce similar patterns and dilute the interference of irrelevant patterns. Refinement is particularly effective when using neural network and similar iteratively trained algorithms to improve the efficacy of a data mining model. The evaluation case set can then be used to determine the amount of improvement provided by the refinement.

For more information on how to issue prediction queries against a data mining model in Analysis Services, see "Data Mining Model Feedback" later in this chapter.

Calculating Effectiveness

There are several different ways of calculating the effectiveness of a data mining model, based on analysis of the resulting prediction data as compared with actual data. Several of the most common forms of measurement are described in the following section.

  • Accuracy 

    A brute-force measurement, accuracy is the percentage of total predictions that were correct. "Correct," in this case, means either that, for discrete prediction attributes, the correct value was returned, or, for continuous prediction attributes, a value was returned within a pre-defined threshold established as a criterion for accuracy. For example, predicting the total amount of store sales within a $5,000 threshold could be considered an accurate prediction. 

  • Error Rate 

    Another brute-force measurement, this measures the total predictions that were wrong. Typically calculated as 100 - (accuracy in percent), error rates are often used when accuracy rates are too high to be viewed meaningfully. For instance, if the total amount of store sales was correctly calculated 98 percent of the time for the previous year, but calculated correctly 99 percent of the time for the current year, this measurement of accuracy does not have as much impact as being able to say that the error rate was reduced by 50 percent, although both measurements are true. 

  • Mean-Squared Error 

    A special form of error rate for prediction involving continuous, ordered attributes, the mean-squared error is the measurement of variation between the predicted value and the actual value. Subtracting the two values and squaring the result provides the rate of squared error. Then, this value is averaged over all predictions for the same attribute to provide an estimate of variation for a given prediction. The reason this number is squared is to ensure that all errors are positive and can be added together when the average is taken, as well as to more severely weight widely varying prediction values. For example, if the prediction for unit sales (in thousands) for one store is 50 and the actual unit sales (in thousands) for the store was 65, the mean squared error would be 65 - 50, or 15, raised to the power of 2, or 225. Mean-squared error can be used in an iterative manner to consistently establish the accuracy threshold of continuous ordered attributes. 

  • Lift 

    Simply put, lift is a measurement of how much better (or worse) the data mining model predicted results for a given case set over what would be achieved through random selection. Lift is typically calculated by dividing the percentage of expected response predicted by the data mining model by the percentage of expected response predicted by a random selection. For example, if the normal density of response to a direct mail campaign for a given case set was 10 percent, but by focusing in on the top quartile of the case set predicted to respond to the campaign by the data mining model the density of response increases to 30 percent, lift would be calculated at 3, or 30/10. 

  • Profit 

    While profit, or return on investment (ROI), is the best measurement of any business scenario, it is also the most subjective to calculate, because the variables used to calculate this measurement differ for each business scenario. Many business scenarios involving marketing or sales include a calculation of ROI; used in combination with lift, a comparison of ROI between the predicted values of the data mining model and the predicted values of random sampling removes much of the guesswork about which subset of cases should be used for lift calculation. 
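The first four measurements above can be sketched in a few lines. The worked figures below reuse the chapter's own examples (a squared error of 225 for a unit-sales prediction of 50 against an actual of 65, and a lift of 3 for a 30 percent response density over a 10 percent baseline); the function names are mine.

```python
def accuracy_pct(actual, predicted):
    """Percentage of total predictions that were correct."""
    correct = sum(1 for a, p in zip(actual, predicted) if a == p)
    return 100.0 * correct / len(actual)

def error_rate_pct(actual, predicted):
    """100 - (accuracy in percent)."""
    return 100.0 - accuracy_pct(actual, predicted)

def mean_squared_error(actual, predicted):
    """Average of the squared differences between actual and predicted
    values, for continuous, ordered prediction attributes."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def lift(model_density_pct, random_density_pct):
    """Response density achieved by the model over that of random selection."""
    return model_density_pct / random_density_pct

squared_error = mean_squared_error([65], [50])  # a single prediction: (65 - 50)^2
campaign_lift = lift(30, 10)                    # top-quartile response vs. baseline
```

Note that squaring in `mean_squared_error` both keeps every error positive and weights widely varying predictions more severely, exactly as described above.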

Evaluating an Oversampled Model

The primary drawback of oversampling as a technique for selecting training cases is that the resulting data mining model does not directly correspond to the original data set. It instead provides an exaggerated view of the data, so the exaggerated prediction results must be scaled back to match the actual probability of the original data set. For example, suppose an original data set of credit card transactions, in which 0.001 percent of transactions represent "no card" fraudulent transactions, contains 50 million cases. Statistically speaking, this means only 500 transactions within the original data set are fraudulent. So, a training case set is constructed with 100,000 transactions, in which all 500 fraudulent transactions are placed. The density of the fraudulent data rises from 0.001 percent to 0.5 percent, which is still too low for our purposes. So, the training case set is pared down to just 5,000 transactions, raising the density of fraudulent transactions to 10 percent. The training case set now has a different ratio of representation for the non-fraudulent and fraudulent cases. The fraudulent cases still have a one to one relationship with the original data set, but each non-fraudulent case in the training case set now represents roughly 10,000 non-fraudulent cases in the original data set. This ratio of cases must be reflected in the sampling of cases from a case set for lift calculation.

For example, the above credit card fraud training case set assumes a binary outcome: either fraudulent or non-fraudulent. We have increased the density of fraudulent cases from 0.001 percent to 10 percent, so this ratio must be taken into account when computing lift. If a selected segment consisting of the top 1 percent of cases within the case set has a predicted density of 90 percent fraudulent cases, against a data density of 10 percent for fraudulent cases in the training case set as a whole, then the lift for this top 1 percent of cases, based on the oversampled training case set, is calculated as 9. Because the original data set had an actual data density of only 0.001 percent for fraudulent cases, however, this figure must be rescaled. The ratio of oversampling, defined earlier as 1 to 10,000 cases, is multiplied by the percent of non-fraudulent cases in the segment, or 10, added to the percent of fraudulent cases, and is then divided into the predicted density to establish a calculated predicted density of about 0.0899 percent for this selected 1 percent of cases. This calculation is illustrated below, with the answer rounded to 10 decimal places.

90 / (90 + (10 * 10000)) = 0.0008991907

Once this calculation is performed, you can then calculate the corresponding lift for the original data set by dividing the calculated density by the density of the original set. Since the density of fraudulent cases for the original data set is 0.001 percent, the lift for this selected 1 percent of cases jumps from 9 to about 90.

The calculated lift value for this selected segment of cases seems abnormally high. However, the selected percentage of cases also changes based on the same ratio of densities. Since the 90 percent predicted response rate occurs for the top 1 percent, then the size of this segment decreases because of the ratio of cases between the training case set and the original data set.

A similar calculation is performed to obtain the new size of the selected segment. The density of the fraudulent cases for the segment, 90 percent, is added to the density of the non-fraudulent cases, or 10 percent, multiplied by the ratio of cases between the training case set and the original case set, or 10,000. The sum is then divided by the same ratio, 10,000, and multiplied by the actual size of the segment to get the new relative segment size. This calculation is illustrated below.

0.01 * ((90 + (10 * 10000)) / 10000) = 0.10009

So, the lift figure of about 90 applies only to the top 0.10009 percent, or 50,045 cases, of the original case set of 50 million cases, representing a very narrow band of cases at the high end of the lift curve.

As you can see, oversampling is very useful for obtaining information about rare occurrences within large data sets, but providing accurate figures can be quite difficult. Oversampling should only be used in specific situations to model extremely rare cases, but is an essential tool for modeling such situations.

Visualizing Data Mining Models

The visualization tools supplied with Analysis Services are ideal for the evaluation of data mining models. The Data Mining Model Browser and Dependency Network Browser both display the statistical information contained within a data mining model in an understandable graphic format.

The Data Mining Model Browser is used to inspect the structure of a generated data mining model from the viewpoint of a single predictable attribute, to provide insight into the effects input variables have in predicting output variables. Because the most significant input variables appear early within decision tree data mining models, for example, generating a decision tree model and then viewing the structure can provide insight into the most significant input variables to be used in other data mining models.

For example, using the Data Mining Model Browser to view the Member Card RDBMS data mining model presents the following decision tree.

[Figure: decision tree for the Member Card RDBMS data mining model, as displayed in Data Mining Model Browser]

The decision tree is shown from left to right, or from most significant split to least significant split. Just from looking at this decision tree, you should be able to determine that, when predicting the member card attribute, the most significant attribute is yearly income. However, the next most significant attribute varies slightly, depending on the value of the yearly income attribute. For those customers with more than $150,000 in yearly income, the next most significant attribute is marital status. For all others, the next most significant attribute is num children at home.

The Dependency Network Browser, by contrast, constructs a network-like depiction of the relationships within a data mining model from the viewpoints of all predictable attributes, providing a better understanding of the relationships between attribute values within the domain of cases depicted by the data mining model. The Dependency Network Browser not only shows the relationships between attributes, but ranks the relationships according to the level of significance to a given attribute. The browser can be adjusted to display relationships of a specified significance level across the domain of the data mining model, allowing an informal exploration of the domain itself.

For example, using the Dependency Network Browser to view the Member Card RDBMS data mining model presents the following network of nodes.

[Figure: dependency network for the Member Card RDBMS data mining model, as displayed in Dependency Network Browser]

All other attributes tend to predict the member card attribute, indicated by the direction of the arrows between nodes. The slider in the Dependency Network Browser can be used to determine which attributes most influence the member card attribute. By examining the model in this fashion, you can determine that the member card attribute is most strongly influenced by the yearly income attribute, then by the num children at home attribute, and finally by the marital status attribute. Note, too, that this coincides with the previously presented view provided by the Data Mining Model Browser, in which the decision tree used to predict the member card attribute illustrates this same significance of attributes.

The network represented in the previous example is based on only a single predictable attribute. The Dependency Network Browser is best used with very complex data mining models involving multiple predictable attributes to better understand the domain represented by the model. You can use the Dependency Network Browser to focus on a single predictable attribute, study its relationship to other attributes within the domain, then explore the decision tree used to predict the selected attribute and related attributes using the Data Mining Model browser.

Used in concert, both tools can provide valuable insight into the rules and patterns stored in a data mining model, allowing you to tune the data mining model to the specific needs of the data set to be modeled.

Data Mining Model Feedback

The true purpose of data mining is to provide information for decision support and, ultimately, for making business decisions based on the provided information. Although data mining is an excellent way to discover information in data, information without action invalidates the purpose of data mining. When designing a data mining model, remember that the goal of the model is to provide insight or predictions for a business scenario.

The use of data mining models to provide information generally falls into two different areas. The most common form of data mining, closed loop data mining, is used to provide long-term business decision support.

There are other business uses for data mining feedback, especially in financial organizations. The process of operational data mining, in which unknown data is viewed through a predictive model to determine the likelihood of a single discrete outcome, is commonly used for loan and credit card applications. In this case, feedback can be reduced to a simple "yes or no" answer. Operational data mining is unique in this respect: it occurs in a real-time situation, often on data that may not yet have been committed to a database.

These actions, however, fall outside the typical scope of the data mining analyst. The goal of the data mining analyst is to make data mining model feedback easily understandable to the business user.

Visualization plays an important role in both the evaluation of and feedback from a data mining model: if you cannot relate the information gained from a data mining model to the people who need it, the information might as well not exist. Analysis Services supplies two visualization tools, Data Mining Model Browser and Dependency Network Browser, for data mining model visualization purposes. However, these tools can be incomprehensible to a typical business user, and are better suited to the data mining analyst. Numerous visualization tools available from third-party vendors can provide views on data mining model feedback that are meaningful to the business user. For more information about understanding the information presented in the Data Mining Model Browser and Dependency Network Browser, see "Visualizing Data Mining Models" in this chapter.

Custom client applications developed for data mining visualization have an advantage over external visualization tools in that the method of visualization can be tailored specifically for the intended business audience. For more information about developing custom client applications, see Chapter 25, "Getting Data to the Client."

Predicting with Data Mining Models

The true purpose of a data mining model is to use it as a tool through which data with unknown outcomes can be viewed for the purposes of decision support. Once a data mining model has been constructed and evaluated, a special type of query, known as a prediction query, can be run against it to provide statistical information for unknown data.

However, the process of constructing prediction queries is the least understood step of the data mining process in Analysis Services. The Data Mining Sample Application, shipped with the SQL Server 2000 Resource Kit, is an invaluable tool for constructing and examining prediction queries. You can also use it as an educational tool, as the sample provides access to all of the syntax used for data mining.

Basically, the syntax for a prediction query is similar to that of a standard SQL SELECT query in that the data mining model is queried, from a syntactical point of view, as if it were a typical database view. There are, however, two main differences in the syntax used for a prediction query.

The first difference is the PREDICTION JOIN keyword. A data mining model can only predict on data if data is first supplied to it, and this keyword provides the mechanism used to join unknown data with a data mining model. The SELECT statement performs analysis on the data supplied by the prediction join and returns the results in the form of a recordset. Prediction joins can be used in a variety of ways to support both operational and closed loop data mining.

For example, the following prediction query uses the PREDICTION JOIN keyword to join a rowset, created by the OPENROWSET function from the Customer table in the FoodMart 2000 database, to predict the customers most likely to select a Golden member card.

SELECT
[MemberData].[customer_id] AS [Customer ID],
[MemberData].[education] AS [Education],
[MemberData].[gender] AS [Gender],
[MemberData].[marital_status] AS [Marital Status],
[MemberData].[num_children_at_home] AS [Children At Home],
[MemberData].[total_children] AS [Total Children],
[MemberData].[yearly_income] AS [Yearly Income] 
FROM
[Member Card RDBMS] 
PREDICTION JOIN
OPENROWSET
('Microsoft.Jet.OLEDB.4.0',
'Provider=Microsoft.Jet.OLEDB.4.0;
Data Source=C:\Program Files\Microsoft Analysis Services\samples\FoodMart 2000.mdb;
Persist Security Info=False',
'SELECT 
[customer_id],
[education],
[gender],
[marital_status],
[num_children_at_home],
[total_children],
[yearly_income] 
FROM 
[customer]') 
AS 
[MemberData]
ON 
[Member Card RDBMS].[gender] = [MemberData].[gender] AND
[Member Card RDBMS].[marital status] = [MemberData].[marital_status] AND
[Member Card RDBMS].[num children at home] = [MemberData].[num_children_at_home] AND
[Member Card RDBMS].[total children] = [MemberData].[total_children] AND
[Member Card RDBMS].[yearly income] = [MemberData].[yearly_income] AND
[Member Card RDBMS].[education] = [MemberData].[education]
WHERE
[Member Card RDBMS].[member card] = 'Golden' AND 
PREDICTPROBABILITY([Member Card RDBMS].[member card])> 0.8

The ON keyword links columns from the rowset specified in the PREDICTION JOIN clause to the input attributes defined in the data mining model, in effect instructing the data mining model to use the joined columns as input attributes for the prediction process, while the WHERE clause is used to restrict the returned cases. In this prediction query, only those cases that are most likely to select the Golden member card are returned. The PredictProbability data mining function is used to establish a probability of correct prediction, also known as the confidence of the prediction, and further restrict the returned cases only to those whose confidence level is equal to or higher than 80 percent.

The following table represents the results returned from the previous prediction query. The cases represented by the table are the cases most likely to choose the Golden member card, with a confidence level of 80 percent or greater.

Customer ID   Education            Gender   Marital Status   Children At Home   Total Children   Yearly Income
105           Bachelor's Degree    M        M                3                  3                $150K+
136           Bachelor's Degree    M        M                3                  3                $150K+
317           High School Degree   M        M                0                  0                $150K+
340           Bachelor's Degree    F        M                0                  2                $150K+
343           Bachelor's Degree    F        M                1                  2                $150K+
...           ...                  ...      ...              ...                ...              ...

This prediction query is a typical example of closed loop data mining. The cases returned by the prediction query can be targeted, for example, for direct promotion of the Golden member card. Or, the actual results of the selected cases can be compared against the predicted results to determine if the data mining model is indeed achieving an 80 percent or better confidence level of prediction. This provides information that can be used to evaluate the effectiveness of this particular data mining model, by constructing a confusion matrix or by computing the fit of the data mining model against this particular case set. The business decisions to be made upon review of this data affect not just a single case, but a subset of a larger case set, and the effects of such business decisions may take weeks or months to manifest in terms of additional incoming data.

Data mining models can take data from a variety of sources, provided that the data structure of incoming cases is similar to the data structure of expected cases for the data mining model.

For example, the following prediction query uses the PREDICTION JOIN keyword to link a singleton query (a query that retrieves only one row), with both column and value information explicitly defined within the query, to the Member Card RDBMS data mining model in the FoodMart 2000 database, to predict the type of member card most likely to be selected by a specific customer, as well as the confidence of the prediction.

SELECT 
[Member Card RDBMS].[member card] AS [Member Card],
(100 * PREDICTPROBABILITY([Member Card RDBMS].[member card])) AS [Confidence Percent]
FROM 
[Member Card RDBMS] 
PREDICTION JOIN
(SELECT 'F' as Gender, 'M' as [Marital Status], 3 as [num children at home], 
'$130K - $150K' as [yearly income], 'Bachelors Degree' as education ) AS singleton
ON 
[Member Card RDBMS].[gender]=[singleton].[gender] AND
[Member Card RDBMS].[marital status] = [singleton].[marital status] AND
[Member Card RDBMS].[num children at home] = [singleton].[num children at home] AND
[Member Card RDBMS].[yearly income] = [singleton].[yearly income] AND
[Member Card RDBMS].[education] = [singleton].[education]

The following table illustrates the returned resultset from the previous prediction query. From the analysis provided by the data mining model on the case defined in the singleton query, the customer is most likely to choose a Golden member card, and the likelihood of that choice is about 63 percent.

Member Card   Confidence Percent
Golden        62.666666666666671

This prediction query is an excellent example of applied prediction in an operational data mining scenario. The case information supplied by the singleton query used in the PREDICTION JOIN clause of the prediction query is not supplied directly from a database; all columns and values are constructed within the singleton query. This information could just as easily have been supplied from the user interface of a client application as from a single database record, and the immediate response of the data mining model allows the client application to respond to this information in real time, immediately affecting incoming data.

Using Data Mining Functions

In both of the prediction query examples presented earlier, the PredictProbability data mining function is used to provide confidence information on the predictions made by the queries. Other data mining functions are also available, which can be used to provide additional statistical information, such as variance or standard deviation, for cases analyzed through the data mining model.

For example, the previous query can instead use the PredictHistogram function to supply several common statistical measurements about the single case being examined, as demonstrated in the following query.

SELECT 
[Member Card RDBMS].[member card] AS [Predicted Member Card],
PredictHistogram([Member Card RDBMS].[member card])
FROM 
[Member Card RDBMS] 
PREDICTION JOIN
(SELECT 'F' as Gender, 'M' as [Marital Status], 3 as [num children at home], 
'$130K - $150K' as [yearly income], 'Bachelors Degree' as education ) AS singleton
ON 
[Member Card RDBMS].[gender]=[singleton].[gender] AND
[Member Card RDBMS].[marital status] = [singleton].[marital status] AND
[Member Card RDBMS].[num children at home] = [singleton].[num children at home] AND
[Member Card RDBMS].[yearly income] = [singleton].[yearly income] AND
[Member Card RDBMS].[education] = [singleton].[education]

This prediction query returns a recordset that contains the predicted member card, all of the possible member card choices, and the statistical information behind each choice, or histogram, as shown in the following table. The $ADJUSTEDPROBABILITY, $VARIANCE and $STDEV columns, representing the adjusted probability, variance and standard deviation values of the various member card choices, have not been shown in the table due to space limitations.

Predicted Member Card   member card   $SUPPORT   $PROBABILITY            ...
Golden                  Golden        46         0.62666666666666671
                        Silver        14         0.20000000000000001
                        Bronze        7          0.10666666666666667
                        Normal        3          5.3333333333333337E-2
                                      0          1.3333333333333334E-2

Histogram information can be useful in both operational and closed loop data mining. For example, the previous prediction query indicates that this customer is more than three times as likely to choose the Golden member card as the Silver member card, but is nearly twice as likely to select the Silver member card over the Bronze member card, and about four times as likely to select the Silver member card over the Normal member card. The customer service representative, using a client application employing operational data mining, would then be able to rank the various member cards and offer each in turn to the customer based on this histogram information.
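Such a ranking can be derived directly from the histogram rows. The probabilities below are copied from the table above; the variable names and the client-side logic are illustrative assumptions, since ranking would normally happen inside the client application.

```python
# Probability of each member card choice, as returned in the
# $PROBABILITY column of the PredictHistogram result above
histogram = {
    "Golden": 0.62666666666666671,
    "Silver": 0.20000000000000001,
    "Bronze": 0.10666666666666667,
    "Normal": 5.3333333333333337e-2,
}

# Offer the cards in descending order of predicted probability
offer_order = sorted(histogram, key=histogram.get, reverse=True)
```

A customer service application could walk `offer_order` from the top, presenting each card in turn until the customer accepts one.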

For more information about the use and availability of data mining functions, see SQL Server Books Online.
