
SQL Server Data Mining: Plug-In Algorithms

SQL Server 2005
 

Raman Iyer and Bogdan Crivat

July 2004
Updated June 2005

Applies to:
   Microsoft SQL Server 2005 Analysis Services

Summary: Describes how SQL Server 2005 Data Mining allows aggregation directly at the algorithm level. Although this restricts what the third-party algorithm developer can support in terms of language and data types, it frees the developer from having to implement data handling, parsing, meta data management, session, and rowset production code on top of the core data mining algorithm implementation. (15 printed pages)

Contents

Overview
Requirements
Architecture

Overview

Microsoft SQL Server Analysis Services 2000 Service Pack 1 allows the plugging in ("aggregation") of third-party OLE DB for Data Mining providers on Analysis Server. Because this aggregation is at the OLE DB level, third-party algorithm developers using SQL Server 2000 SP1 have to implement all the data handling, parsing, meta data management, session, and rowset production code on top of the core data mining algorithm implementation.

By contrast, SQL Server 2005 Data Mining allows aggregation directly at the algorithm level. Although this restricts what the third-party algorithm developer can support in terms of language and data types, it frees the developer from implementing all the additional layers described above. It also allows for much deeper integration with Analysis Services, including the ability to build OLAP mining models and data mining dimensions. We use the term "plug-in algorithms" to describe third-party algorithms that plug into the SQL Server 2005 Analysis Server (hereafter referred to as "Analysis Server") and appear, in all respects, like native algorithms to users.

Requirements

Analysis Server communicates with third-party algorithm providers via a set of COM interfaces. We group these interfaces into two categories: those that need to be implemented by an algorithm provider, and those that are implemented by Analysis Server objects and consumed by algorithm providers.

Interfaces Implemented by Algorithm Providers

Method definitions and descriptions of parameters are to be supplied. Refer to dmalgo.h for the method definitions for these interfaces.

IDMAlgorithmFactory

This is the entry point into a plug-in algorithm provider. Analysis Server requests this interface upon instantiating an algorithm provider, and uses it to create new algorithm instances that will be bound to corresponding mining models in the server space. IDMAlgorithmFactory can also be queried for the IDMAlgorithmMetadata interface described below.

IDMAlgorithmMetadata

This interface is used by Analysis Server to interrogate an algorithm provider's capabilities. This includes attributeset validation.

IDMAlgorithm

This is the core algorithm interface that provides access to the various functions of an algorithm instance, including training, prediction, and browsing.

IDMCaseProcessor

This interface supplies formatted cases to the algorithm provider for training.

IDMAlgorithmNavigation : IDMDAGNavigation

This interface exposes a trained model's algorithm content to Analysis Server for browsing.

IDMPullCaseSet

This interface will be consumed by Analysis Server for sample case generation.

IDMPersist

Analysis Server invokes this interface for loading and saving algorithm-specific content into a stream provided by the server.

IDMCaseIDIterator [optional]

This may be implemented by an algorithm provider for filtering and controlling case generation.

IDMMarginalStats [optional]

Marginal statistics are required by Analysis Server during prediction query processing. They may be gathered either by Analysis Server during case generation or by the algorithm provider itself during training. If an algorithm provider indicates (through a method in the IDMAlgorithmMetadata interface) that statistics will be gathered and exposed by the algorithm, the provider must support this interface. Otherwise, Analysis Server will initialize the algorithm with its own implementation of this interface. Even if the algorithm does not internally use Analysis Server's implementation of IDMMarginalStats, it must save and return the interface when queried for it.

IDMClusteringAlgorithm [optional]

Clustering algorithms can optionally support this interface so that Analysis Server's query processor can successfully return results for queries that invoke algorithm-specific functions, such as Cluster().

IDMSequenceAlgorithm [optional]

Sequence Clustering algorithms can optionally support this interface so that Analysis Server's query processor can successfully return results for queries that invoke algorithm-specific functions, such as Sequence().

IDMTimeSeriesAlgorithm [optional]

Time series algorithms can optionally support this interface so that Analysis Server's query processor can successfully return results for queries that invoke algorithm-specific functions, such as Time().

IDMCustomFunctionInfo [optional]

Plug-in algorithms may support custom functions. Metadata for such functions is exposed by the plug-in algorithm through this interface that can be obtained from the algorithm's metadata object.

IDMDispatch [optional]

If a plug-in algorithm supports custom functions and exposes metadata for them through the IDMCustomFunctionInfo metadata interface, it must also support the IDMDispatch interface on its algorithm object to enable Analysis Server to call these functions.

IDMTableResult [optional]

If a custom function returns a table result, it must be returned as an IDMTableResult interface pointer. This interface allows Analysis Server to navigate the result (in a forward-only manner) and fetch the data rows.

Interfaces Consumed by Algorithm Providers

Method definitions and descriptions of parameters are to be supplied. Refer to dmalgo.h for the method definitions for these interfaces.

IDMPushCaseSet

This is used for passing case processing information between Analysis Server and the algorithm instance.

IDMAttributeset

This interface encapsulates information about the attributes contained by input cases.

IDMAttributeGroup

Attributes can be grouped together based on certain criteria (for example, related attributes or nested tables). IDMAttributeGroup provides a way to iterate over such groups of attributes.

IDMPersistenceWriter

This is an abstract interface for a stream that algorithms can save their content into. The stream is implemented by Analysis Server over its own storage system for the algorithm's parent mining model, and passed to the algorithm in the IDMPersist::Save method.

IDMPersistenceReader

This is an abstract interface for a stream that algorithms can load their previously saved content from. The stream is implemented by Analysis Server over its own storage system for the algorithm's parent mining model, and passed to the algorithm in the IDMPersist::Load method.

IDMServices

This is the base interface for passing shared information from the server space to the algorithm. It exposes services like memory allocators, string and variant handling, persistence to files, and transactions. See "DMContext Object" (Section 2.5) for a description of how this interface will be used through the context object.

IDMContextServices

This is the context interface that will be passed to most algorithm calls from Analysis Server. It derives from the IDMServices interface described above and provides access to locale, memory allocators, and other information specific to the current request.

IDMModelServices

This is the context interface that will be passed when an algorithm instance is created. It can be used to access model-specific information, as well as allocators whose lifetime is tied to the server model object. It includes methods for:

  • Getting the model's locale information
  • Firing progress notifications
  • Getting the model's content map for updated node captions
  • Parsing and rendering PMML content

IDMMemoryAllocator

This is the interface that allows the plug-in algorithm to allocate and free memory in the server's memory space. See "Memory Management" (Section 2.3) for details.

IDMStringHandler

This interface provides access to Analysis Server's internal string data type. Pointers to server strings that are passed to algorithm methods will be treated as opaque handles that can be decoded by IDMStringHandler methods. See "Access to Shared Data Types" (Section 2.4) for more information about the usage of this interface.

IDMVariantPtrHandler

This interface provides access to Analysis Server's internal variant data type. Pointers to server variants that are passed to algorithm methods will be treated as opaque handles that can be decoded by IDMVariantPtrHandler methods. See "Access to Shared Data Types" (Section 2.4) for more information about the usage of this interface.

IDMContentMap

This interface provides access to updateable parts of the algorithm content that are maintained on the algorithm's behalf by the Analysis Server framework. Currently this includes node captions that users are allowed to update using DMX. See the section on "User-Updateable Algorithm Content" for details on how this interface must be used by the algorithm navigator.

Memory Management

Algorithm providers must use Analysis Server's memory allocation interfaces and make memory reservations using the memory quota interface. Aggregator algorithms must use memory allocators from either the context or model services, depending upon the lifetime of the allocated objects. To allow Analysis Server to manage and control memory efficiently, all memory allocations in the plug-in algorithm provider must be made using these memory management interfaces.
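As an illustration of this rule, here is a minimal sketch of allocating algorithm objects through a server-provided allocator instead of the global heap. The interface shape, the stub allocator, and the DMNew/DMDelete helpers are all assumptions for illustration; the real IDMMemoryAllocator methods are defined in dmalgo.h and may differ in signature.

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Hypothetical stand-in for the server allocator interface in dmalgo.h.
struct IDMMemoryAllocator
{
    virtual void* Alloc(size_t cb) = 0;
    virtual void  Free(void* pv) = 0;
    virtual ~IDMMemoryAllocator() {}
};

// A stub allocator; the real one is provided by Analysis Server and
// charges allocations against the server's memory quota.
struct StubAllocator : IDMMemoryAllocator
{
    size_t cbAllocated = 0;
    void* Alloc(size_t cb) override { cbAllocated += cb; return std::malloc(cb); }
    void  Free(void* pv) override { std::free(pv); }
};

// Pattern: construct algorithm objects in server-managed memory with
// placement new, never with the global operator new.
template <typename T>
T* DMNew(IDMMemoryAllocator* pAlloc)
{
    void* pv = pAlloc->Alloc(sizeof(T));
    return pv ? new (pv) T() : nullptr;
}

template <typename T>
void DMDelete(IDMMemoryAllocator* pAlloc, T* p)
{
    if (p) { p->~T(); pAlloc->Free(p); }
}
```

The key design point is that every byte of the plug-in's working set is visible to the server, so the server can enforce quotas and tie object lifetimes to the model or to the current execution context.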

Access to Shared Data Types

The following internal server types will be exposed to the plug-in algorithm provider implementers:

  • String (as a handle, DMString)
  • Variant (as a handle, DMVariantPtr)
  • XMLWriter (as a handle, DMXMLWriterPtr)
  • MiningFunctionsInfo (as a handle, DMFunctionRecPtr)
  • ExecutionContext (to be discussed separately in the "DMContext Object" section)

DMString and DMVariantPtr

DMString and DMVariantPtr are used in various interfaces to pass strings and values to the server code, and to fetch strings and values from the server.

The plug-in algorithm will operate over string and variant handles using two interfaces: IDMStringHandler and IDMVariantPtrHandler.

These interfaces are thread-safe and stateless; therefore they can be cached.

Implementations of these interfaces (handlers) can be obtained via IDMServices, both from context and at initialization time.

Note   Any service obtained from the context is allocated in the context's memory; therefore, it must be released before the end of the function that brings the context handle inside the plug-in algorithm space.

The set of memory allocators used by a handler is determined by the way the handler was obtained. Likewise, the memory allocators used in handling strings and variants are determined by the handler.

Examples:

  • A handler is obtained from the IDMServices pointer passed in Initialize (that is, from the model's service provider). All the operations performed by that handler will use the model's allocators. Therefore, a string handle created with this allocator can be safely cached between calls.
  • A handler is obtained from the IDMServices of the execution context (that is, from the execution context's service provider). All the operations performed by that handler will use the execution context's allocators. Therefore, a string handle created with this allocator cannot be safely cached between calls.

Methods in IDMStringHandler

HRESULT   CreateNewHandle(DMString**   out_phString)

Creates a new string handle that can be used later inside the algorithm space. The out_phString parameter is the location to store the newly created string handle.
HRESULT   CopyHandleToBuffer(DMString*   in_hString, 
WCHAR*   out_pchBuff,
UINT*      io_pcAllocated)

Copies the content of a string handle into a char buffer. It behaves like Platform SDK functions, meaning that io_pcAllocated will contain either the required size on failure (if the supplied buffer size is less than the required size), or the actual size if the function succeeded.
HRESULT   CopyBufferToHandler(DMString*   in_hString, 
WCHAR*   in_pchBuff,
UINT      in_pcLength)
Copies the content of a buffer into a string handle.
HRESULT   GetConstStringFromHandle(DMString*   in_hString, 
const WCHAR**   out_ppchBuff)

For performance purposes, returns a const pointer to the internal string buffer contained by the handle, instead of copying to a user-supplied buffer.
HRESULT   AttachHandleToBSTR(DMString*   in_hString, 
BSTR      in_bstrBuffer)

For performance purposes, attaches the handle to the input BSTR, saving the cost of a copy. After this function returns, the lifetime of in_bstrBuffer is controlled by in_hString.
HRESULT   CopyHandleToHandle(DMString*   in_hString, 
DMString*   out_hString)

Copies a string from one string handle to another.
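To make the sizing convention of CopyHandleToBuffer concrete, here is a hedged sketch of what the caller observes. DMStringStub and the plain numeric return codes are stand-ins invented for illustration; the real DMString is an opaque handle and the real method returns HRESULTs.

```cpp
#include <cassert>
#include <cwchar>
#include <string>

// Hypothetical stand-in for a DMString handle; the real type is opaque.
struct DMStringStub { std::wstring data; };

// Sketch of the CopyHandleToBuffer sizing convention: on a too-small
// buffer, fail and report the required size (including the terminator)
// through io_pcAllocated; on success, report the actual size copied.
long CopyHandleToBuffer(DMStringStub* in_hString,
                        wchar_t* out_pchBuff,
                        unsigned* io_pcAllocated)
{
    unsigned cchRequired = (unsigned)in_hString->data.size() + 1;
    if (out_pchBuff == nullptr || *io_pcAllocated < cchRequired)
    {
        *io_pcAllocated = cchRequired;   // tell the caller how much to allocate
        return 1;                        // a failure HRESULT in the real API
    }
    std::wmemcpy(out_pchBuff, in_hString->data.c_str(), cchRequired);
    *io_pcAllocated = cchRequired;
    return 0;                            // S_OK in the real API
}
```

A caller would typically invoke the method twice: once with a null or too-small buffer to learn the required size, then again with a buffer of that size.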

Methods in IDMVariantPtrHandler

HRESULT   CreateNewHandle(DMVariantPtr**   out_phVariant)

Creates a new variant handle that can be used later inside the algorithm space. The out_phVariant parameter is the location to store the newly created variant handle.
HRESULT   CopyVariantToHandle(DMVariantPtr*   in_hVariant, 
VARIANT*   in_pVar)

Copies the input variant into the handle variant.
HRESULT   GetVariantCopyFromHandle(
DMVariantPtr*   in_hVariant, 
VARIANT*      out_pVar)

Copies the handle variant into the out_pVar location.
HRESULT   DetachHandleVariant(DMVariantPtr*   in_hVariant, 
VARIANT*   out_pVar)

Detaches the handle variant, returning it in the out_pVar location. This is a high-performance version of GetVariantCopyFromHandle that can be used where an explicit copy is not required.
HRESULT   AttachHandleVariant(DMVariantPtr*   in_hVariant, 
VARIANT*      out_pVar)

Attaches the handle variant. This is a high-performance version of CopyVariantToHandle that can be used where an explicit copy is not required.

DMContext Object

The DMContext object contains information for the currently executing request, including locale and model access. The DMContext object will be exposed as an interface (IDMContextServices) inheriting from IDMServices.

Service provider architecture

The plug-in algorithm will access server components via an IDMServices mechanism.

Two different service providers will be available most of the time to the plug-in algorithm:

  • A model service provider, which is logically owned by an Analysis Server mining model object. This model service provider provides access to the following services:
    • Per-model allocators, which can be used to create objects that will exist between calls (that is, objects with the same lifetime as the algorithm).
    • Model information, such as the name of the model and its locale.
    • Progress notifications.
    • PMML rendering and parsing.
  • A context service provider, which is logically owned by an Analysis Server request or "execution context". This context service provider provides access to the following services:
    • Context (per-EC) allocators, which can be used to create objects that will exist only for the lifetime of the current context.
    • Locale information.
    • Methods for updating memory estimates.
    • Polling for cancellation status.

The plug-in algorithm will receive the model service provider as a parameter of the IDMAlgorithm::Initialize method. The model service provider (as well as any service obtained through it) can be safely cached for the lifetime of the algorithm (until it gets unloaded or destroyed).

Each service exposed by the model service provider will be documented as being thread-safe or not.

Shared Definitions and Enumerations

These are the definitions of data types and enumerations that are used to communicate case and attribute information to algorithms.

The dmalgo.h file contains types and structs such as DM_Attribute and DM_STATE_STAT. Descriptions to be supplied.

Algorithm Registration

Each data mining algorithm available to an instance of Analysis Server will have an entry in the server's INI file. This includes both Analysis Server's native algorithms and third-party algorithms. The entry will have the following information:

  • Algorithm name (such as Microsoft_Decision_Trees).
  • ProgID (this is optional and will be provided only for third-party providers).
  • Flag indicating whether the algorithm is enabled or not.

Here is the entry in "\Program Files\Microsoft SQL Server\MSSQL.1\OLAP\bin\msmdsrv.ini" for our sample plug-in algorithm provider:

<ConfigurationSettings>
...
 <DataMining>
...
   <Algorithms>
...
    <Microsoft_Sample_PlugIn_Algorithm>
       <Enabled>1</Enabled>
       <ProgID>Microsoft.DataMining.SamplePlugInAlgorithm.Factory</ProgID>
    </Microsoft_Sample_PlugIn_Algorithm> 
...
   </Algorithms>
...
 </DataMining>
...
</ConfigurationSettings>

In the above configuration entry, "Microsoft_Sample_PlugIn_Algorithm" is the name used to identify the algorithm in DDL statements sent to the server. At startup, Analysis Server verifies that this name is the same as the algorithm name returned by the algorithm in IDMAlgorithmMetadata::GetServiceName(). If the names don't match, the server does not load the algorithm provider and logs an error in the Microsoft Windows event log.

If a previously created model instance is accessed by a user and the algorithm is currently disabled, Analysis Server will fail the request and report that the model's algorithm is not available to the current server instance.
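The startup name check described above can be sketched as follows. GetServiceName here is a plain-function stand-in for IDMAlgorithmMetadata::GetServiceName, and the comparison policy (an exact, case-sensitive match) is an assumption for illustration.

```cpp
#include <cassert>
#include <string>

// Stand-in for IDMAlgorithmMetadata::GetServiceName; the name below is the
// one used in the sample registration entry shown above.
std::string GetServiceName()
{
    return "Microsoft_Sample_PlugIn_Algorithm";
}

// Sketch of the server-side check: the algorithm name from msmdsrv.ini must
// match what the provider reports. On mismatch the server refuses to load
// the provider and logs an error in the Windows event log.
bool ShouldLoadProvider(const std::string& iniName)
{
    return iniName == GetServiceName();
}
```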

Algorithm Content Persistence

A plug-in algorithm's content is loaded and saved into the server model's space through the IDMPersist interface. This serialization mechanism allows a plug-in algorithm instance to persist and load its content to and from abstract streams, namely IDMPersistenceWriter and IDMPersistenceReader provided by Analysis Server. Analysis Server is responsible for transactionally persisting and loading algorithm content via these interfaces to and from its stores, so you don't have to worry about handling errors that could occur if the algorithm content was partially loaded or saved.
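The round trip through IDMPersist can be sketched with simplified stand-ins for the streams. The real IDMPersistenceWriter/IDMPersistenceReader signatures in dmalgo.h may differ, and the fields being saved are invented for illustration.

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Hypothetical simplified stream interfaces standing in for the abstract
// writer/reader that Analysis Server passes to IDMPersist::Save and ::Load.
struct IDMPersistenceWriter
{
    virtual long Write(const void* pv, unsigned cb) = 0;
    virtual ~IDMPersistenceWriter() {}
};
struct IDMPersistenceReader
{
    virtual long Read(void* pv, unsigned cb) = 0;
    virtual ~IDMPersistenceReader() {}
};

// Stub streams backed by a byte vector; the server's implementations sit on
// top of the mining model's own storage and are transactional.
struct MemoryStream : IDMPersistenceWriter, IDMPersistenceReader
{
    std::vector<unsigned char> bytes;
    size_t pos = 0;
    long Write(const void* pv, unsigned cb) override
    {
        const unsigned char* p = static_cast<const unsigned char*>(pv);
        bytes.insert(bytes.end(), p, p + cb);
        return 0;
    }
    long Read(void* pv, unsigned cb) override
    {
        if (pos + cb > bytes.size()) return 1;   // not enough data
        std::memcpy(pv, &bytes[pos], cb);
        pos += cb;
        return 0;
    }
};

// Algorithm-side Save/Load in the style of IDMPersist, for invented state.
struct AlgorithmState
{
    int clusterCount = 0;
    double penalty = 0.0;
    long Save(IDMPersistenceWriter* w)
    {
        if (w->Write(&clusterCount, sizeof clusterCount) != 0) return 1;
        return w->Write(&penalty, sizeof penalty);
    }
    long Load(IDMPersistenceReader* r)
    {
        if (r->Read(&clusterCount, sizeof clusterCount) != 0) return 1;
        return r->Read(&penalty, sizeof penalty);
    }
};
```

Because the server owns the transaction, the algorithm can serialize its state field by field and let the framework worry about partial writes.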

Algorithm-Specific Modeling Flags

Plug-in algorithms may support custom modeling flags, exposing information about them via the IDMAlgorithmMetadata methods. These flags will be validated and passed to the algorithm by Analysis Server.

Custom functions

In addition to the standard functions that are part of the OLE DB for DM specification, plug-in algorithms can support custom functions. Metadata for the custom functions is exposed through the IDMCustomFunctionInfo interface. Based on this information, Analysis Server handles parsing and semantic validation of custom function calls in DMX queries issued by the user. At prediction time, Analysis Server obtains the IDMDispatch interface on the algorithm and calls the PrepareFunction and InvokeFunction methods to evaluate custom functions and obtain metadata/data for inclusion in the prediction result.

The control flow between the Analysis Server framework and the plug-in algorithm is as follows:

  1. The plug-in algorithm advertises its custom functions by including them in the list of supported functions returned by algorithm's IDMAlgorithmMetadata::GetSupportedFunctions method. The DM_SUPPORTED_FUNCTION enumeration in dmalgo.idl has a list of enumeration values for well-known functions published in the OLE DB for DM specification—custom functions have enum values equal to and above DMSF_CUSTOM_FUNCTION_BASE.
  2. If custom functions are included in the list returned by the plug-in algorithm's IDMAlgorithmMetadata::GetSupportedFunctions method, the Analysis Server framework QI's its pointer to the plug-in algorithm's IDMAlgorithmMetadata interface for the IDMCustomFunctionInfo interface (which must be supported in this case) and obtains metadata information that it can use for parsing and validating custom function calls in users' DMX queries, as well as for returning in the MINING_FUNCTIONS schema rowset. This includes
    1. Signature
    2. Description
    3. Whether the function is scalar or table-returning
    4. Parameters and flags accepted
  3. When a DMX request is received that includes a custom function call, the Analysis Server framework QI's its pointer to the plug-in algorithm's IDMAlgorithm interface for the IDMDispatch interface.
  4. The IDMDispatch::PrepareFunction is called once at the beginning of query execution (at bind time) to obtain column metadata for the result. The column information would be an array for table results.
  5. For each case, along with a prediction call to IDMAlgorithm::Predict, a call to IDMDispatch::InvokeFunction is made for each custom function in the DMX query.
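Step 1 can be sketched as follows. The enumeration values below are assumptions for illustration; the real DM_SUPPORTED_FUNCTION enumeration and the actual value of DMSF_CUSTOM_FUNCTION_BASE live in dmalgo.idl.

```cpp
#include <cassert>
#include <vector>

// Hypothetical enum values standing in for DM_SUPPORTED_FUNCTION.
enum DM_SUPPORTED_FUNCTION
{
    DMSF_PREDICT              = 1,       // a well-known OLE DB for DM function
    DMSF_CUSTOM_FUNCTION_BASE = 0x1000,  // assumed value for illustration
};

// Sketch of GetSupportedFunctions: the algorithm returns well-known
// functions plus its own custom function IDs at or above the base.
std::vector<int> GetSupportedFunctions()
{
    return {
        DMSF_PREDICT,
        DMSF_CUSTOM_FUNCTION_BASE + 0,   // e.g. a hypothetical GetSupportRange()
        DMSF_CUSTOM_FUNCTION_BASE + 1,   // e.g. a hypothetical GetOutliers()
    };
}

// The framework can detect custom functions by comparing against the base;
// if any are present, it queries for IDMCustomFunctionInfo (step 2).
bool HasCustomFunctions(const std::vector<int>& funcs)
{
    for (int f : funcs)
        if (f >= DMSF_CUSTOM_FUNCTION_BASE) return true;
    return false;
}
```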

User-Updateable Algorithm Content

Analysis Server allows users to update parts of the algorithm content using DMX UPDATE statements. Currently only node captions are updateable. Plug-in algorithms can access the updated content through the IDMContentMap interface available from the model services object (IDMModelServices::GetContentMap). To return the correct (updated) node caption when the Analysis Server framework requests the DMNP_CAPTION property, a plug-in algorithm's content navigator must fetch the caption from the model service object's content map. The code for this would look something like the sample shown below (error-handling code is omitted for simplicity):

// TODO: Init caption string to empty string
CComPtr<IDMContentMap> spContentMap;
m_spModelServices->GetContentMap(in_pContext, &spContentMap);
if (S_FALSE == spContentMap->IsEmpty())
{
    spContentMap->FindNodeCaption(&strNodeUniqueName, &pstrCaption);
    if (pstrCaption)
    {
        // TODO: Copy to caption string
    }
}
// TODO: Copy string to output variant

Architecture

This section explains the flow of data and control between Analysis Server and algorithm providers. (See Figure 1)


Figure 1. Data Mining plug-in algorithm data flow

Server Startup

At startup, Analysis Server will instantiate all registered and enabled plug-in algorithm providers, and cache their IDMAlgorithmFactory interface pointers.

Mining Algorithm Information

In response to Discover requests for MINING_SERVICES, the server will iterate through the Algorithm Manager list of algorithm providers (represented by their corresponding cached IDMAlgorithmFactory interfaces), and obtain the relevant information through the IDMAlgorithmMetadata interface.

Mining Model Creation

When a mining model is created, a meta data object is instantiated for it in the server and saved to disk. At this point, it does not have an algorithm instance associated with it. However, it will be validated against information obtained from the corresponding algorithm provider's IDMAlgorithmMetadata interface.

Mining Structure Processing

After a mining model's parent mining structure is processed, the Attributeset object created by the server is validated against each child model's algorithm provider to confirm that the Attributeset is in a form that can be consumed by the algorithm. This is accomplished by invoking the ValidateAttributeset method on the algorithm provider's IDMAlgorithmMetadata interface.

Mining Model Training

When a processing request is carried out by the server for a mining model, the CreateAlgorithm method on the corresponding algorithm provider's IDMAlgorithmFactory is invoked to create a new algorithm instance associated with the model. The IDMAlgorithm interface on the algorithm instance and related interfaces obtained from it are used by the server to pass cases, train the algorithm instance, and save its content as detailed below:

  1. The algorithm instance is first initialized with an Attributeset object (IDMAttributeSet) that can be queried by the algorithm during training to obtain attribute information and optionally a marginal statistics object (IDMMarginalStats).
  2. Then the server initiates training by calling IDMAlgorithm::InsertCases with an IDMPushCaseSet parameter. The algorithm provider is in fact required to implement a callback object exposing the IDMCaseProcessor interface that it passes back to Analysis Server, in response to the InsertCases request, via IDMPushCaseSet::StartCases.
  3. After the server receives the algorithm's CaseProcessor object, it pushes cases to it for processing by invoking the CaseProcessor's ProcessCase method for each case.
  4. At the end of training, Analysis Server may make additional calls into the algorithm to build a Drillthrough store as described below in "Mining Model Drillthrough" (section 3.9). It may also make additional calls to build a data mining dimension as described below in "Mining Dimensions" (section 3.10).
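The callback handshake in steps 2 and 3 can be sketched with simplified stand-ins. The real interfaces in dmalgo.h pass attribute/value arrays rather than a bare case ID, so the shapes below are assumptions for illustration only.

```cpp
#include <cassert>
#include <vector>

// Hypothetical simplified shapes of the training interfaces.
struct IDMCaseProcessor
{
    virtual long ProcessCase(int caseId) = 0;
    virtual ~IDMCaseProcessor() {}
};

struct IDMPushCaseSet
{
    // The server pushes every training case to the processor it is given.
    virtual long StartCases(IDMCaseProcessor* pProcessor) = 0;
    virtual ~IDMPushCaseSet() {}
};

// Server side: a stub case set holding the materialized training cases.
struct ServerCaseSet : IDMPushCaseSet
{
    std::vector<int> cases;
    long StartCases(IDMCaseProcessor* pProcessor) override
    {
        for (int c : cases)
            if (pProcessor->ProcessCase(c) != 0) return 1;
        return 0;
    }
};

// Algorithm side: InsertCases hands a callback processor back to the server,
// and training happens inside StartCases via ProcessCase callbacks.
struct Algorithm : IDMCaseProcessor
{
    int casesSeen = 0;
    long ProcessCase(int /*caseId*/) override { ++casesSeen; return 0; }
    long InsertCases(IDMPushCaseSet* pCaseSet)
    {
        return pCaseSet->StartCases(this);
    }
};
```

The inversion of control is the point: the server drives the case loop, so it can interleave drillthrough and dimension building with training without the algorithm re-reading the source data.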

Mining Model Prediction

In response to a prediction query, Analysis Server's query processor evaluates the prediction join using the Predict method on the model's cached algorithm interface (IDMAlgorithm::Predict). Custom function calls are evaluated using meta data obtained through the algorithm provider's IDMAlgorithmMetadata interface, which describes the calling conventions and parameters for provider-specific functions.

Mining Model Browsing

In response to Discover requests for MINING_MODEL_CONTENT, the server uses a call on the corresponding model's cached algorithm interface IDMAlgorithm to request a content navigation object that exposes the IDMAlgorithmNavigation interface. The server then builds the result rowset by traversing the nodes of the graph exposed by the navigator and querying each node for various properties.

Mining Model Persistence

The core interface on an algorithm instance, IDMAlgorithm, can be queried using COM for the IDMPersist interface. This interface is used for loading and saving algorithm content from and into the storage space of the mining model object that owns the algorithm instance. Versioning and transactional updates of this storage are managed by Analysis Server.

Mining Model Drillthrough

While browsing a mining model's content in a viewer, a user may request to see the underlying cases that belong to a particular node in the content graph. If the algorithm supports this Drillthrough operation, Analysis Server builds an internal data structure associated with the model that maps training cases to corresponding nodes in the content graph. To build this structure for use during browsing, the server uses the IDMAlgorithm::GetNodeIDs method for each case at the end of training.

Mining Dimensions

Analysis Server allows users to build a data mining dimension based on a mining model's content. This dimension can be included in an OLAP cube that uses the same dimensions that the mining model was built on, and its discovered hierarchy can be used to slice and dice the fact data in interesting ways. If the algorithm supports data mining dimensions and the user requested a data mining dimension to be created from a model, Analysis Server builds this special dimension by calling the IDMAlgorithm::GetNodeIDs method for each case with the DIMENSION_CONTENT flag at the end of training.

Note that the algorithm provider may choose to present its content in a different form for the purpose of data mining dimensions than it would for regular model browsing. Therefore, both IDMAlgorithm::GetNodeIDs and IDMAlgorithm::GetNavigator support a flag that allows the server to specify how it will be using the content node information.

Sample Case Generation

If a user requests a sample case set by issuing a SELECT * FROM model.CASES WHERE IsInNode(xxxx) query, Analysis Server obtains this case set by requesting the IDMPullCaseSet interface from the model's algorithm instance. A sample case set is simply a hypothetical set of cases generated by the algorithm (using attribute values it has learned during training) that fit the rules represented by a particular node in the content graph.

Importing and Generating PMML

A user can request the content of a mining model in PMML form. Microsoft Analysis Services 2005 supports PMML 2.1. The request is of the form SELECT * FROM [Model].PMML. A user can also request that a model be loaded from a PMML 2.1 document; the syntax is CREATE MINING MODEL [ModelName] FROM PMML 'pmml here'. The framework takes care of the structural information in the PMML (the data dictionary and, optionally, the model statistics), while the plug-in algorithm is responsible for parsing and rendering the content part of the PMML.

A plug-in that supports rendering of PMML 2.1 has to implement the RenderPMMLContent method of the IDMAlgorithm interface. If this method returns E_NOTIMPL, the framework considers that the plug-in does not support PMML. RenderPMMLContent takes an ISAXContentHandler interface as a parameter, and the plug-in must generate XML events on this interface to write into the output stream. The PMML 2.1 standard includes the model statistics and the model schema inside the model content element. The plug-in can delegate rendering of this information to the framework by calling RenderPMMLModelStatistics, RenderPMMLMiningSchema, and RenderPMMLModelCreationFlags on the IDMModelServices interface provided by the framework.

Overall, the operation of rendering a PMML 2.1 document is executed as follows:

  1. Framework: creates a content handler to render the PMML into.
  2. Framework: starts rendering the PMML 2.1 document.
  3. Framework: renders the data dictionary.
  4. Framework: calls into the plug-in for the content.
  5. Plug-in: RenderPMMLContent starts rendering the model content (opens the XML element identifying the algorithm content).
  6. Plug-in: renders the statistics, or calls into the framework for this.
  7. Framework: if RenderPMMLModelStatistics is called, renders the statistics, then returns control to the plug-in.
  8. Plug-in: renders the mining schema, or calls into the framework for this.
  9. Framework: if RenderPMMLMiningSchema is called, renders the mining schema, then returns control to the plug-in.
  10. Plug-in: renders the model creation flags (a Microsoft extension to PMML 2.1, developed according to the PMML 2.1 specification for extensions), or calls into the framework for this.
  11. Framework: if RenderPMMLModelCreationFlags is called, renders the creation flags, then returns control to the plug-in.
  12. Plug-in: renders the actual content of the mining model.
  13. Plug-in: closes the XML element that identifies the algorithm content (end of the RenderPMMLContent function).
  14. Framework: finalizes rendering the PMML document.

For reading a PMML 2.1 stream (and creating a mining model from a PMML 2.1 document), a plug-in must implement two methods. If either of these methods returns E_NOTIMPL, the framework considers that the plug-in does not support parsing PMML 2.1.

PreInitializeForPMMLParsing allows preparing the plug-in for reading the PMML. The framework parses the structural part of the PMML 2.1 document, then calls this method of the plug-in, passing as a parameter a reduced attribute set implementation (IDMAttributeSet) that contains the structural information.

GetPMMLAlgorithmSAXHandler is called by the framework to get a SAX handler for the content of the PMML document. The plug-in must parse all the algorithm-specific information from the PMML stream. The plug-in may parse the model statistics or delegate this task to the framework by invoking ParsePMMLModelStatistics on the IDMModelServices interface provided by the framework.

Once the content parsing is completed, the plug-in must give control back to the framework by invoking ContinuePMMLParsing on the IDMModelServices interface.

The operation of parsing a PMML 2.1 document is executed as follows:

  1. Framework: starts parsing the PMML 2.1 document (creates an XML SAX content handler for this).
  2. Framework: reads the data dictionary.
  3. Framework: pre-parses the PMML 2.1 document and creates the metadata for the mining model, the mining structure, and the columns.
  4. Framework: calls into the plug-in to parse the content.
  5. Plug-in: GetPMMLAlgorithmSAXHandler returns a SAX content handler that will handle the content.
  6. Framework: redirects the XML parsing to the handler returned by the plug-in.
  7. Plug-in: handles the XML events to read the content of the mining model.
  8. Plug-in: when the ModelStats element is encountered, parses the statistics or delegates this task to the framework.
  9. Framework: if ParsePMMLModelStatistics was called, loads the statistics into an IDMMarginalStats implementation, then returns control to the plug-in.
  10. Plug-in: when the content part of the PMML is completed, returns control to the framework by invoking ContinuePMMLParsing.
  11. Framework: when ContinuePMMLParsing is invoked, redirects the XML parsing back to the original handler.
  12. Framework: finalizes parsing the PMML document.
  13. Framework: saves the newly created objects.

Error Handling

Algorithm providers must raise standard errors and populate IErrorInfo objects.
