BizTalk Server 2004: A Review of the Business Activity Monitoring (BAM) Features

Microsoft Corporation

Published: July 2005

Applies to: BizTalk Server 2004

Summary: This document introduces the BAM features in BizTalk Server 2004. The first two sections of this paper address the need for BAM and the desired user experience. These sections are intended primarily for business decision makers (managers, analysts, information workers, etc.) as well as more technical audiences that need a better understanding of the value proposition to the business.

The last two sections of this paper go into deeper technical detail about how BAM works, and are intended for technical readers (architects, developers, system administrators, etc.) looking to deploy, manage, and extend BAM for their organization. (16 printed pages)

The Visibility Problem

Today's networks, systems, applications, and business processes are becoming larger and more distributed. This distributed nature provides scalability, fast response, redundancy in case of failure, and many other benefits far exceeding the capabilities of the classical central-server design. At the same time, it becomes increasingly important to have end-to-end visibility into business activities. Making the right business decisions hinges on knowing the progress and dependencies of all units of work at any point in time.

Knowing what is going on is simple in a business that records everything in a single, central database. Unfortunately, the reality of today's business processes is far from such simplicity. Business processes typically grow step-by-step, buying or developing new capabilities built on whatever technology happens to be available at the moment. Over time, disparate technologies (from mainframes with business logic in COBOL, through 3-tier applications in C/C++ and SQL, to e-commerce sites and web services written in C# or Java) coalesce as the process implementation evolves. In the end, the enterprise infrastructure starts looking like something that was assembled with duct-tape and baling wire, rather than architected for a specific set of performance goals (i.e. to run the organization optimally).

Even though the heterogeneous nature of enterprise processes severely aggravates the visibility problem, the problem is inherent to any distributed system. For example, departmental or other business units within an organization tend to insist on owning their processes and data, even where storage formats and other consistencies would make consolidation possible. Decisions made above the departmental level (e.g. business decisions or infrastructural policies) ultimately require that some of this data be correlated across departments and concentrated for analysis or real-time monitoring (with other data relegated to archives for less frequent tasks such as problem investigation).

The visibility pain point is expressed in many different forms, and the capabilities needed to address it range from rudimentary to sophisticated and powerful, as shown in Table 1.

Table 1 Data visibility problems & solutions

Class of Problems Description Examples

End-to-end causality trace (log, audit)

Fundamental tracing capability: creation of a trace/log of what has happened with individual business activities, including inter-activity relationships.

  • Where is purchase order number 123 right now, and what is its status?
  • Which shipments relate to purchase order number 123, and what is the status of each?

Activity query-ability

Dynamic query capability: provide answers to specific questions about what is going on and what has happened.

  • Which purchase orders are in the "fulfillment" stage of the business process right now?
  • Which applications for loans over $200,000 are "waiting for credit report"?

Activity aggregations

Performance indicator capability: aggregate data to show what is going on with the business as a whole (Where are the bottlenecks? Is there a way to optimize the business? Are service level agreements (SLAs) being met? Is it time to scale out or in?).

  • What is the average duration of "fulfillment" in the end-to-end "Order Management" process?
  • What is the trend of this average duration?
  • How many orders for "productX" are there at each discrete step of the process?


Activity alerts

Proactive notification capability: alert individuals or processes when a pre-defined condition is detected.

  • Alert me if any order for ABC with dollar amount greater than $10,000 is stuck in the "fulfillment" stage of the process for more than 1 hour.
  • Alert me if the average duration for the entire order processing cycle, end-to-end, exceeds 30 minutes.

SLA alerts

Process management capability: optimize business performance relative to business SLAs (common between trading partners in B2B relationships, for example) by managing processes in terms of their respective SLA requirements.

  • Alert me if any order for @Partner with dollar amount greater than @Amount is stuck in the "fulfillment" stage of the process for longer than @MaxDuration minutes.
  • Alert me if the average processing time for orders from @Partner exceeds @MaxDuration hours.

The BAM Vision: Global Visibility

Business Activity Monitoring (BAM) is the solution to the visibility problem and enables all the important capabilities cited above. BAM listens to events from disparate processes and systems, or intercepts data from inside applications or on the wire. It then correlates and aggregates this information to achieve true global visibility of all units of work (activities) across the organization.

Figure 1 illustrates the user experience in such a globally visible environment, for the case of a manufacturing and distribution business.

Figure 1 End-to-end visibility across business processes


Figure 1 shows:

  1. Users browse high-level activity aggregations about purchase orders. This is based on data inside BAM Store-A.
  2. Users (via some click sequence) "drill-through" a bar in the chart (an aggregated value, the number "3," representing "POs with amounts over $10,000"). This retrieves a tabular list of the individual activities (PO numbers 123, 433, and 540) which contributed to that aggregation.
  3. Users select individual activities for which they would like to see detailed status (e.g. PO number 123 as shown in Figure 1). This is presented to the user as a document that includes important milestones and other contextual data, plus hyperlinks to related documents and other content.
  4. Users follow hyperlinks to related activities (e.g. shipments 546, 547, etc.) to access progress and data about them.
As Figure 1 shows, the user retrieved "shipments" data from BAM Store-B despite having started their experience at their "home" BAM, Store-A. Simple synchronization of information about access permissions to user data makes this a seamless experience for the user.

BAM becomes the distributed index of all the activities in the organization. It allows users to reconstruct, in their terms, all the causality and dependencies of events and activities in the business or system. BAM also facilitates better communication between roles involved in process management (business users, IT, development, etc.). As far as BAM is concerned, visibility consumers differ only in terms of what data they can access and how it is presented, or their "view." When the relationships between the different views are known as described in the preceding example, BAM can act as translator or go-between for functional roles that traditionally do not communicate well (e.g. "HELP!" button on a business user portal that pops up a trouble-ticket in Microsoft Operations Manager which includes extremely specific troubleshooting target links and data).

This section provides a very general introduction to how BAM works. It is important to note that, because most of the BAM-based infrastructure is dynamically generated, building a BAM solution typically does not require any knowledge of Microsoft® SQL Server™ and OLAP.

User Interaction for BAM

The primary goal of BAM technology is to connect information workers (e.g. Microsoft Office users) with the implementation of business processes. This is why the workflow for BAM begins and ends with information workers using Microsoft Office, as shown in Figure 2.

Figure 2 BAM technology and user interaction


Figure 2 shows:

  1. An expert in the business (analyst/consultant, business architect, manager, etc.) defines an observation model. This task is accomplished using BAM-specific wizards added to Microsoft® Excel®, which take the analyst/user through defining the model, then actually interacting with a simulated "preview data" version of it to verify it meets the requirements of the end-user. Briefly, an "observation model" includes (refer to the next section for more details):
    1. One or more "activity" definitions: an activity (or unit of work) is simply a list of milestones (events as points in time, e.g. "invoice received") and contextual data (e.g. "invoice number") needed to service the visibility requirements of business decision makers.
    2. One or more "view" definitions: a view is a role-specific perspective that spans one or more activities. The definition includes specifications for what to show (activity data shown or hidden) as well as how to show it (aggregations – dimensions; measures; visualization – charts/graphs, etc.).
  2. The IT administrator (after one-time setup of BizTalk Server/BAM) uses a command-line utility to deploy the observation model constructed by the business expert in step 1 (i.e. the utility consumes the Excel workbook). This deployment results in several automatic actions:
    1. Dynamic infrastructure is generated for correlation, data maintenance, and aggregations (a self-maintaining data warehouse). This includes emitting SQL schema and logic for staging, star-schema, OLAP cubes, and DTS packages.
    2. UI is generated/provisioned giving business end-users a live data view corresponding exactly to the preview decided on by the business expert. In BizTalk Server 2004, the UI is another Excel workbook that is bound to the SQL and OLAP data directly. BizTalk Server 2006 (currently in Beta) adds out-of-box portal functionality for both ASP.NET and Windows SharePoint Services.
    3. The observation model is stored inside BAM (for access by tools and run-time components).
  3. Now that the infrastructure for storing and using the data is ready, the developer needs to find the data of interest somewhere in the implementation and connect those run-time event sources to the BAM activity defined.
    For the developer, the only concern is filling the items in the activity given that the view aggregation/presentment layer is entirely self-managed once deployed by the IT administrator in the previous step.

    The developer maps the activity definition to the actual run-time in two possible ways depending on the implementation technology:
    1. If the business implementation is done on BAM-enabled technology such as BizTalk Server, there is no need for coding. The developer uses the Tracking Profile Editor (TPE) (a tool that first debuted in BizTalk Server 2004) to map the observation model defined by the business expert to the events and data that exist in the implementation. Once the developer applies this "tracking profile," BAM immediately starts collecting (and possibly aggregating) the data as defined in the observation model. It is important to note that you can update profiles, and therefore visibility requirements, at any time without impacting the running process itself.
    2. For technology that is not BAM-enabled, for example, custom C# code, the developer sends events to BAM using explicit calls to the BAM API.
  4. As the business process is executed it emits events that are correlated and aggregated by BAM:
    1. The BAM Event Bus Service takes care of the in-memory preparation of the real-time event streams (manages buffering, transactions and threads, watermarks for crash recovery, etc.).
    2. The dynamic Business Intelligence (BI) infrastructure takes care of correlating and storing the events into activity records, performs subsequent data maintenance (so that the system does not fill up and stop), and finally (and optionally) performs aggregation as the basis for performance indicators.
  5. Finally, the Business Decision Maker (BDM) opens the auto-generated UI and sees live data about business processes, presented in terms of both current progress as well as recent historical context.
    1. The BDM's experience is identical to the preview experience of the business expert who created the observation model, but with live data showing what is going on with the business currently.
    2. The BDM can also perform ad-hoc queries, define alerts on conditions for which they need automatic monitoring, etc. For BizTalk Server 2004, query capability is surfaced as an out-of-box experience through the Business Activity Services portal, and alerting requires some development atop SQL Server Notification Services (as noted above, BizTalk Server 2006, currently in Beta, will introduce an out-of-box BAM portal which includes query and alerting capabilities).

Note that an important aspect of the BAM technology is clear role separation (hiding complexity for some audiences, etc.), sometimes referred to as "the right tools for the right users." For example, as far as information workers are concerned, they know what data they need (defining observation models in step 1 above) and benefit from live access to it (step 5 above) – everything else is quasi-magical, and they typically do not need or want to understand it. Similarly, the developer knows about the process implementation and how to get data from it, but does not care at all what users are being serviced by this data collection (the views).
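As a rough sketch, the two halves of an observation model (the activity and its views) can be pictured as simple data structures. The Python classes and every field name below are purely illustrative assumptions; the real observation model is authored through the BAM wizards in Excel and stored in BAM's own format:

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: these class and field names are NOT part of
# the BAM API; they just mirror the concepts described in the text.

@dataclass
class Activity:
    name: str
    milestones: list      # events as points in time, e.g. "PO Received"
    data_items: list      # contextual data, e.g. "PO Number"

@dataclass
class View:
    name: str
    activity: str                                   # activity the view spans
    visible_items: list                             # what this role may see
    dimensions: list = field(default_factory=list)  # grouping/filtering
    measures: list = field(default_factory=list)    # aggregated values

po = Activity(
    name="PurchaseOrder",
    milestones=["PO Received", "PO Approved", "PO Fulfilled"],
    data_items=["PO Number", "PO Amount", "Customer"],
)

finance_view = View(
    name="Finance",
    activity="PurchaseOrder",
    visible_items=["PO Number", "PO Amount"],
    dimensions=["Customer"],
    measures=["count(PO Number)", "avg(PO Amount)"],
)
```

Note how the view never mentions the implementation: it only filters and aggregates items already declared in the activity, which is what lets the business expert work independently of the developer.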

Observation Model Concepts

The following figure is a conceptual representation of how data flows into and through BAM. From the bottom up, the BAM stack progressively filters and/or aggregates data. In addition, as described above, everything subsequent to intercepting events is about presentment of the data to serve some specific user visibility requirement or other process management purpose.

Figure 3 BAM Observation Model concepts


Consider first the raw events of the run-time. During the execution of the process, a vast amount of data can potentially be collected. Even though this data may be useful to IT or development for troubleshooting or server health monitoring, most of it does not make much sense to business end users.

The manifest that describes specifically what data to collect for the business user audience is the BAM activity (the dotted-line arrows shown at the bottom of Figure 3 above: milestones in red, data items in black). These items are collected during the actual transacting of the business. Defining the activity requires knowledge of a specific business (or whatever is being modeled as an activity) but does not require knowledge of the actual implementation.

Assuming that the data is collected in the context of the defined activity, the next layer closer to the end-user is the activity view: a filtering and aliasing of the activity data intended to serve a specific category (i.e. functional role) of business users. There can be one or more views onto any given activity. Through the views created specifically for them, business users can see the health of their business in their own terms, and can perform tasks such as searching (querying) for activities in progress.

Optionally, multidimensional aggregations of activity data can be created based on whatever data is part of a given view. Those aggregations contain measures (e.g. count of purchase orders in total, or a running average on dollar amount) and dimensions (used for filtering or grouping, such as the product being ordered or the city to which goods will be shipped).

Typically, the analyst or expert in the business who defines the activity works in collaboration with the IT administrator to determine how current the aggregate data needs to be. Specifically, any aggregations defined can be created as OLAP cubes, which represent moment-in-time snapshots of the business, or as real-time aggregations, which are continuously updated. Many factors contribute to this decision, though it usually comes down to balancing the timeliness requirements of the end-user against the impact on shared IT resources (specifically the SQL servers).
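The measure/dimension idea behind these aggregations can be sketched in a few lines. This is an illustrative stand-in for what the generated SQL/OLAP infrastructure computes, not BAM code; the record fields are hypothetical:

```python
from collections import defaultdict

# Illustrative activity records: "Product" acts as a dimension,
# "Amount" feeds the measures (count and running average).
records = [
    {"Product": "productX", "Amount": 120.0},
    {"Product": "productX", "Amount": 80.0},
    {"Product": "productY", "Amount": 50.0},
]

totals = defaultdict(lambda: {"count": 0, "sum": 0.0})
for rec in records:
    cell = totals[rec["Product"]]   # one aggregation cell per dimension member
    cell["count"] += 1
    cell["sum"] += rec["Amount"]

aggregates = {
    product: {"count": c["count"], "avg_amount": c["sum"] / c["count"]}
    for product, c in totals.items()
}
# aggregates["productX"] -> {"count": 2, "avg_amount": 100.0}
```

A real-time aggregation keeps structures like this continuously updated as events arrive, whereas an OLAP cube recomputes them on a snapshot schedule.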

Defining & Searching for Activities

Although they target different users, defining activities and searching for them are typically discussed together because together they illustrate that BAM begins and ends with the business user:

  1. First, a business analyst, or expert in the business, decides what the visibility for a given process ought to be. They do this completely separately from whatever exists to run that process in the real world. As described in the introductory part of this section, this is a two-part task of a) deciding what data to collect, and b) creating views or perspectives on that data collection.
  2. A couple of other important process management roles do their part (namely the IT Pro and developer), at which point the business end-user (or information worker) is able to consume business activity data in a representation tailored to their needs.

As described near the beginning of this paper (see The Visibility Problem section for more details), this is a fundamental process management capability: being able to define an abstract model of a process and then use that model to measure the progress of individual items of work as they move through the process end-to-end.

However, this truly is only a basic capability. It allows users to perform ad-hoc and repeatable queries, generally to review performance aspects of a process or the participating applications, business entities, etc. What this ability does not do is provide any advanced process management capabilities. At best, this data supports reacting to business/process conditions.

Activity & Document Navigation: References

In The BAM Vision: Global Visibility section, BAM was likened to a distributed index for all activities of the business. That comparison is meant to describe an experience in which users seamlessly crawl from or through activity data to related messages, back again, through other related activities and their messages, and so forth. To facilitate this experience, two BAM functions are required:

  1. Related activities – described in The BAM Vision: Global Visibility section (sub-point #4 related to the figure), activities can have relationships with other activities. Whether for reasons of granularity (i.e. a 1-to-many relationship between the activities) or simply because of functional association relative to the business, the idea of knowing inter-activity relationships is a cornerstone of the seamless navigation experience.
  2. Activity references – this is an open framework by which it is possible to associate any kind of data with a BAM activity. The most common example is that of a purchase order business message relative to an activity built atop the order management process. When users review activity data for the process, it is entirely likely that they will eventually need to see the actual message at the heart of (or causing) the process instance. Processes may optionally append any relevant reference material to a BAM activity as it is being assembled. Also, BAM will automatically maintain this information in the form of pointers to underlying run-time references that it understands (e.g. BizTalk Server service, Orchestration, and message IDs are all known and provided automatically for BAM activities mapped to BizTalk Server solutions).

None of this is to say that BAM lacks value in the absence of activity relationships and references. However, given that richness of the business user experience is a significant reason for employing BAM technology at all, its ability to also function as a powerful cross-reference is a key aspect of BAM.

Activity Aggregations

This is really where BAM delivers on its promise to improve process management. As noted in the previous section, having access to activity data at the level of individual items of work does allow some process management to occur, but it is entirely in reactive mode: users review what data they have for the individual items of work to reconstruct "what happened." However, when users have access to data in aggregate, they are no longer reviewing the process outcome for any single item of work and are instead taking a holistic view of process health.

For example, consider the prototypical purchase order management process. The power of process management at one’s disposal is dramatically different depending on the granularity of data available:

  • Individual – users can see what data has been recorded and what has not, determine how long each part of the process took to complete for this case, and otherwise investigate anything the activity is set to record; users can be alerted on queries that match some condition or set of conditions, but are left to deal with each specific case separately; no conclusions are possible regarding process health or performance trends over time.
  • Aggregate – users no longer see anything about specific cases of the process; aggregate views are a reflection of process health, and comparative analysis of aggregate snapshots is the means by which trends are identified (and corrected if negative, encouraged if positive).

In short, being able to assemble Key Performance Indicators (KPIs) for an organization requires data aggregations. Typically, KPIs are complex expressions that tie together two or more simple data aggregations; however, a KPI can be as simple as "average time to process a purchase order end-to-end."

Although it might be semi-controversial to say so, it is through aggregation of data that BAM is somewhat predictive of the future. For example, if a given BAM aggregation shows that 300 purchase orders pile up in the stage of the order management process known as "fulfillment," and the normal steady-state level for that build-up is more like 100-150, then two things are true about the process:

  1. Real-time / non-predictive: there is a bottleneck in this stage of the process for some reason (for which further investigation will reveal a cause at some point…).
  2. Future / predictive: at some point, everything in the process downstream from "fulfillment" will get slammed when the stage that is currently jammed up goes into high gear to catch up.

This second aspect is the main driver behind vendors of network & systems management software looking to augment their technology by adding BAM capabilities. The process bottleneck just described is completely invisible unless correlated, aggregated process progress data is available as additional commentary on system health.

Basic Correlation Idea

The heart of BAM functionality is the ability to correlate many individual small events into more useful logical units that form the basis for an end-to-end picture that business users understand (e.g. intercepting events across ten or fifteen loosely coupled applications and distilling that down to a logical end-to-end representation of "the purchase order process").

The basic design idea behind the correlation is very simple. As the activity happens in the real world, it causes monitoring events to occur. BAM listens for those events and maintains a row in a SQL table that is kept in-synch with each corresponding instance of the activity. In this way, the activity record is a reconstructed "shadow image" of the activity in the real world as illustrated in Figure 4.

Figure 4 Correlation of Events into Activity Records


In Figure 4, a, b, c, and d are milestones in the business process; Name and Amount are data items of interest (values differ for each process instance). Clearly, the milestones and data items will be different for each business process (i.e. items in the activity called "order management" will be different from those in the activity called "fraud detection"). The variability of the activity by scenario is the main driver behind the need for BAM's dynamically created infrastructure: the format of the table (and other SQL resources configured) is highly specific to the activity defined. The activity table contains a separate column for each milestone (e.g. PO Received) and data item (e.g. PO Amount) defined in the observation model. This also allows for high performance/throughput given that almost no additional massaging of the data is necessary to actually store it (since the storage schema matches the data).

The correlation of multiple events into a single activity record means that, in situations where it is possible to have thousands of activities (e.g. business processes) executing simultaneously, in various states of completion, there will be thousands of in-progress records in the activity table. Again, finding activities of interest is just a simple search in SQL (a query, e.g. "which purchase orders for Contoso, received today, with Amount greater than $1,000, are now in the 'fulfillment' stage of the process?").
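The "maintain a SQL row in sync with each activity instance" idea can be sketched with SQLite. The table and column names below are assumptions for illustration only; BAM generates its own schema from the deployed observation model:

```python
import sqlite3

# Sketch of per-instance correlation: first event inserts the row,
# later events fill in milestone/data columns in place.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE Activity_PO (
    ActivityID   TEXT PRIMARY KEY,
    PO_Received  TEXT,   -- milestone columns hold timestamps
    PO_Fulfilled TEXT,
    Customer     TEXT,   -- data-item columns hold business context
    Amount       REAL)""")

def on_event(activity_id, column, value):
    # Insert the row on the first event for this instance; no-op afterwards.
    db.execute("INSERT OR IGNORE INTO Activity_PO (ActivityID) VALUES (?)",
               (activity_id,))
    # Update just the column this event carries (column name is trusted here,
    # acceptable only because this is a toy sketch).
    db.execute(f"UPDATE Activity_PO SET {column} = ? WHERE ActivityID = ?",
               (value, activity_id))

on_event("PO-123", "PO_Received", "2005-07-01T09:00")
on_event("PO-123", "Customer", "Contoso")
on_event("PO-123", "Amount", 1500.0)

# Finding activities of interest is an ordinary SQL query:
# "Which Contoso POs over $1,000 have not yet reached fulfillment?"
rows = db.execute("""SELECT ActivityID FROM Activity_PO
                     WHERE Customer = 'Contoso' AND Amount > 1000
                       AND PO_Fulfilled IS NULL""").fetchall()
```

Because the schema matches the activity definition column for column, storing an event is a single insert or update, which is the performance point made above.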

One interesting subtlety in creating and relating activity records is the case where data cannot be stored in a single table because the number of items in the activity is variable. This is sometimes referred to as the 1-to-many case, or non-deterministic activity items. Consider, for example, a BAM activity defined around a purchase order process. An end-user may need to see milestones and data for the order itself, and also specific milestones and data for each line item in the purchase order. Because the number of line items in a purchase order is variable, the resulting BAM activity is not specific about the total number of milestone and data items to collect. To address this issue, BAM has a concept called the "related activity." In short, using activity relationships breaks the data collection and activity maintenance into separate but related parts. Continuing the example:

  1. Purchase order activity – contains milestones and data for each order processed.
  2. Purchase order line item activity – contains milestones and data for each line item in any purchase order.
  3. Relationships – a separate table used to join things that are logically related by some pre-defined token (in the purchase order example, purchase order number could be a good choice because it is the same across the parent purchase order and all of the line items it contains).
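The three parts above can be sketched as two activity tables joined by the shared token (the PO number). This SQLite sketch is illustrative only; it joins directly on the token, where BAM's generated schema uses a separate relationships table to serve the same purpose more generally:

```python
import sqlite3

# 1-to-many related activities: one order-level table, one line-item-level
# table, related by the PO number. All names here are hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Activity_PO "
           "(PONumber TEXT PRIMARY KEY, Customer TEXT)")
db.execute("""CREATE TABLE Activity_POLineItem (
    LineItemID TEXT PRIMARY KEY,
    PONumber   TEXT,      -- the shared token relating child to parent
    Product    TEXT,
    Shipped    TEXT)""")

db.execute("INSERT INTO Activity_PO VALUES ('PO-123', 'Contoso')")
db.executemany("INSERT INTO Activity_POLineItem VALUES (?, ?, ?, ?)", [
    ("LI-1", "PO-123", "productX", "2005-07-02"),
    ("LI-2", "PO-123", "productY", None),        # not yet shipped
])

# Navigate from the order activity to its related line-item activities:
# which Contoso line items are still unshipped?
pending = db.execute("""
    SELECT li.LineItemID
    FROM Activity_PO po
    JOIN Activity_POLineItem li ON li.PONumber = po.PONumber
    WHERE po.Customer = 'Contoso' AND li.Shipped IS NULL""").fetchall()
```

Each table keeps a fixed, deterministic set of columns, which is exactly what the variable line-item count would otherwise make impossible in a single activity table.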

Ultimately, you can think of the activity storage as a relational database schema in which each type of work (order process, distribution process, etc.) is a table, and each instance of work is a row in the table. As illustrated in Figure 1, this storage is actually distributed – different types of activities are stored in different physical places, but they still have relationships among them, and the distributed nature of the data can be hidden from end-users as needed to simplify their experience.

BAM Dynamic Infrastructure

Thinking of each activity type as a table in SQL is an easy simplification that helps in understanding the BAM dynamic infrastructure. However, in order to ensure good performance and availability, the BAM dynamic infrastructure is significantly more complex than this simplification suggests, as illustrated in Figure 5.

Figure 5 BAM Dynamic Infrastructure


In Figure 5, you see:

  1. The incoming Events are written to a specific table for the activities currently in progress (the active table). This table has a separate column for each milestone and data item in the activity. The first event for an activity instance results in inserting a new record in this table and each subsequent event updates columns of that record. Thus, one can think of the basic idea as "maintain a SQL row in-synch with each activity instance in progress."
  2. When the activity instance is completed (no more events are expected), it is moved to a separate table for completed activity instances (the completed table). This makes it possible to keep the first (active) table very small and efficient.
  3. To prevent the completed table from growing continually, a special partitioning scheme is implemented. It works by creating a new, empty table of the same format as the completed table and substituting it for the completed table. Activities completed after the empty partition is swapped in move to the new, empty table. No more records are written or updated in the previous completed table: it becomes a "partition table" available only for search/query.
  4. Over time, multiple partitions are spawned (e.g. one for each day). Once a partition becomes older than a configurable online window (e.g. 10 days), it is archived and then dropped. This way the database never fills up, and the whole correlation infrastructure works 24x7, indefinitely.
  5. A special view abstracts the partitioning details from the user interface. As far as the user is concerned, the activity storage looks like a simple table, in which each row represents an activity instance (current, or relatively recent).
  6. On a scheduled basis, a snapshot is taken of the stream of activities. This works by temporarily blocking the event-writers, recording a snapshot of the activities in-progress in another small table, and at the same time, remembering a watermark of the completed instances. The data for instances completed after the last snapshot is exposed as a dynamically created view.
  7. The data representing the snapshot is moved to a staging area (star-schema database) and then into two separate OLAP cubes:
    1. OLAP cube for completed activities, which is incrementally updated so that the aggregations for all newly-completed activities are added each time. Here "newly completed" means activities that were not completed at the time of the last snapshot but are completed at the current snapshot.
    2. OLAP cube for all the activities that were in progress at the time of the snapshot. This cube is fully processed each time because BAM assumes that the record for any activity in-progress can change in unpredictable ways from one snapshot to the next.
  8. The fact that the aggregations are in two separate cubes is hidden from the end-user by creating a virtual cube. If, for example, users look at BAM data for the "Order Management" process, they would see aggregations for all processes: complete or currently in-progress. They can then filter out what they do or do not want to see based on relative progress in the activity end-to-end.
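Steps 3 through 5 (partition swapping hidden behind a view) can be sketched in SQLite. The rename-based swap and the names below are illustrative assumptions, not the mechanism of BAM's generated SQL:

```python
import sqlite3

# Sketch: completed activities accumulate in per-day partition tables,
# and a single view hides the partitioning from the user interface.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Completed (ActivityID TEXT, Day TEXT)")
db.execute("INSERT INTO Completed VALUES ('PO-200', 'day2')")

def swap_partition(day):
    """Freeze the current completed table as a read-only partition and
    start a fresh, empty completed table in its place."""
    db.execute(f"ALTER TABLE Completed RENAME TO Completed_{day}")
    db.execute("CREATE TABLE Completed (ActivityID TEXT, Day TEXT)")

swap_partition("day2")
# Activities completing after the swap land in the new, empty table.
db.execute("INSERT INTO Completed VALUES ('PO-300', 'day3')")

# The view makes the storage look like one simple table (step 5).
db.execute("""CREATE VIEW AllCompleted AS
              SELECT * FROM Completed
              UNION ALL
              SELECT * FROM Completed_day2""")
rows = db.execute(
    "SELECT ActivityID FROM AllCompleted ORDER BY ActivityID").fetchall()
```

Dropping a partition once it ages past the online window is then just a `DROP TABLE` (after archiving), which is why the database never fills up.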

DirectEventStream API

There are two ways to send event data to BAM, the first being the DirectEventStream API. This is the simplest way for applications to send events to BAM and requires only a single DLL reference for the application.

Behaviorally (partly because it is simple), the DirectEventStream object also has the largest impact on solution performance. Any application sending BAM activity data through this API will effectively pause its execution while the activity update occurs. Consequently, the application performance is negatively affected (impact varies, though a few percentage points is not uncommon), and the top speed of this API is less than that of its buffered counterpart (see the next section). This is not to say that you should avoid the DirectEventStream approach. For many applications, the top-end performance achievable using direct event publication (200-300 events per second into BAM, dependent on the scenario, of course) is more than adequate, and the interface is extremely easy to use.

Beyond performance, the other consideration is agility or flexibility relative to change management. Using either DirectEventStream or BufferedEventStream (see the next section) API calls directly inside applications is fine and appropriate in some cases. However, any change to visibility requirements that also requires an update to what data is collected (i.e. the activity, not simply a change to the view or views) will ultimately require changing the application code, with obvious implications for cost to maintain the application over time. Refer to the Interception section for other options regarding this change flexibility.

BufferedEventStream API

The other way to send event data to BAM is the BufferedEventStream API. This is the most efficient way for applications to send events to BAM. Its use requires a reference to the same DLL as for the DirectEventStream object, as well as a table for event buffering in a database somewhere, plus a Windows service that handles the movement of event data to BAM from the buffering table. In BizTalk Server, the data buffering table is automatically created as part of the BizTalk MessageBox database, and the Windows service logic is part of the BizTalk Application Service.

Any application sending BAM activity data through this API will not pause its execution while the activity update occurs. Instead, the data is written to the buffering table and the application then continues normally. The service, working behind the scenes, ensures that the data reaches its final destination, the BAM database. Consequently, application performance is affected far less than with the DirectEventStream alternative, and the top speed of this API (in terms of events processed by BAM per second) is 5-10 times that of its unbuffered counterpart (see the previous section).
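The buffered pattern can be sketched as a fast local append plus a separate drain step. Again, this is a conceptual simulation, not the real BufferedEventStream API; in the real product the buffer is the table in the MessageBox database and the drain is performed by the Windows service:

```python
from collections import deque

class BufferedStream:
    """Sketch of the buffered pattern: the application appends to a fast
    local buffer (standing in for the buffering table) and a separate
    service moves events to the BAM store later."""
    def __init__(self):
        self.buffer = deque()   # the "buffering table"
        self.bam_store = []     # the final BAM database

    def record(self, activity_id, **data):
        self.buffer.append((activity_id, data))  # returns immediately

    def service_drain(self, batch_size=100):
        # In the real product this runs continuously in the BizTalk
        # Application Service; modeled here as an explicit call.
        moved = 0
        while self.buffer and moved < batch_size:
            self.bam_store.append(self.buffer.popleft())
            moved += 1
        return moved
```

Because `record` only appends to the buffer, the application's critical path pays almost nothing; the latency of the final database write is absorbed by the background drain.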


Interception

Both of the previous sections described modifying applications to include calls to one of the BAM APIs to explicitly fire events to BAM. This is a simple but, as noted above, not especially flexible approach to application event publication. Because emission of activity data is hard-coded into the application, any need to change the data collection (i.e. the activity) requires a code change.

The way to provide for making changes to the activity without impacting the application code is through the use of a new design pattern that centers on an "interception" approach: an in-memory component (the interceptor) extracts the data necessary for BAM from messages on the wire or in-memory operational data of the business applications.

The interception approach assumes that meta-data about the applications and their run-times is available at design-time (e.g. the WSDL for a web service). This is used by a visual tool (the "Tracking Profile Editor," or TPE, in BizTalk Server 2004) which lets the developer or IT administrator make a mapping between data that was requested by the activity definition (the data collection part of the overall observation model) and the correct run-time source for those data items. The mapping is called a tracking profile.

Figure 6 Interception


In Figure 6 you see:

  1. The dev/admin user starts by importing the activity definition from BAM into the left pane of the tool. It is conceivable that the observation model will be created in-place (though current functionality of BizTalk Server's TPE does not include this ability).
  2. Into the right-hand side of the tool, the user then loads a variety of available assets from the application environment, things like BizTalk Server solution meta-data, SOAP endpoints, web services descriptions, etc. (typically, the dev/admin chooses one specific service or operation from the WSDL/meta-data and the right-hand side is replaced with the schema of the message for this service/operation).
  3. The user connects (via a drag-and-drop operation) event source items on the right to target activity items on the left.
  4. The user applies the tracking profile, which essentially reconfigures the interceptor component to be aware of the new data interception pattern(s). Again, the tracking profile is nothing more than a map between an activity (the data collection part of an observation model) and existing assets in the process environment.
  5. At run-time, the tracking profile drives interception of the message traffic. Conceptually, the interceptor has a constant bidirectional conversation with its host run-time. At each discrete processing step, the run-time is pre-wired to ask the interceptor "do you need data?" The tracking profile is what controls whether the interceptor says "no" (and then nothing happens) or "yes" (in the form of an XPath location for the data to intercept). For example, out of the possibly dozens or hundreds of events that occur inside the process described in Figure 4, the interceptor may only retrieve City, Amount, and Tax from messages flowing through the process.

You can think of the Interception approach as "externalizing internal events" without code changes, for a one-time cost of building (or acquiring) interception for a given run-time. This approach gives you the best combination of breadth of scope, flexible and granular control over changes, and negligible impact on process performance (most events are not mapped in the tracking profile and so are never intercepted, and the size of the event data on the wire is the absolute minimum possible).

Transactional Consistency of the BAM Data

Another critical challenge for BAM is the transactional consistency of the event data. Imagine, for example, that BAM is monitoring, without transactional considerations, asynchronous events coming from a web service that accepts purchase orders.

What can happen in this case is that the BAM data disagrees with reality. For example, a purchase order arrives, the "PO Received" milestone event is fired, something fails, and then the transaction aborts. If the event collection system did not observe transactional behavior, the data reflects the arrival of the order, yet the order never actually made it into the process because of the rollback semantics of the transaction involved.

This problem can be aggravated by further processing steps too. Continuing the failed transaction scenario described above, the process could potentially retry the failed action, say, 10 times before failing permanently (retries exhausted). Again, if the event collection system did not provide transactional behavior consistent with the process itself, the data shown to the user would indicate that ten orders (obviously the same order ten times) had been received, while in reality there was nothing successfully received at all.
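The retry scenario above can be made concrete with a small sketch comparing naive and transaction-aware event collection. The class and method names here are hypothetical, purely to illustrate the staging/rollback idea:

```python
# Sketch of why transactional event collection matters: a "PO Received"
# event fired inside a transaction must disappear if that transaction
# rolls back. Non-transactional collection records every retry.

class TxEventLog:
    def __init__(self):
        self.committed, self.pending = [], []

    def fire(self, event):
        self.pending.append(event)   # staged with the enclosing transaction

    def commit(self):
        self.committed += self.pending
        self.pending = []

    def rollback(self):
        self.pending = []            # staged events vanish with the work

naive_log, tx_log = [], TxEventLog()
for attempt in range(10):            # ten failed attempts, then give up
    naive_log.append("PO Received")  # non-transactional: always recorded
    tx_log.fire("PO Received")
    tx_log.rollback()                # the transaction aborts each time

# naive_log now claims ten orders arrived; tx_log.committed is empty.
```

Only the transaction-aware log agrees with reality: nothing was successfully received, so nothing was recorded.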

The solution to this problem is to have a transaction-consistent mechanism for intercepting and storing the events. BAM achieves this in two ways, depending on which of the APIs described above is being discussed:

Synchronous API: this API writes directly to BAM/SQL in the context of whatever .NET transaction is current (if any).

Asynchronous high-performance infrastructure: this is more intricate integration of BAM and high-performance run-times such as BizTalk Server, so that the events are buffered in a transaction-consistent way (i.e. the run-time has awareness of the existence of buffered BAM event streams and can include them in any transaction maintenance for the overall application/solution).

Refer to Performance Considerations for BAM Event Publishing for more details on transactional considerations around BAM.

Note also that transactional consistency is just an option. Many customers use BAM in a transaction-independent way, or in mixed mode, for reasons dictated by their specific scenarios.

BAM Integration with BizTalk Server

Some of the following information has appeared throughout the previous material; for easy reference, this section provides a comprehensive review of BAM's integration with BizTalk Server. The BAM feature area first debuted in BizTalk Server 2004. Functionality described here that is available only in BizTalk Server 2006 (currently in Beta) is noted as such.

On the subject of which features are available in which release, the BAM vision should be considered solid and stable regardless of the features specific to a given version of BizTalk Server. For example, the ability for business users to define alerts on business Key Performance Indicators (KPIs) is an important facet of BAM's power, and omitting it from a whitepaper about BAM would neglect the intention to close that loop and free up user time for the activities, outside of process management, that actually drive the business. The fact that alerting atop BizTalk Server 2004 requires some development against SQL Server Notification Services is merely a tactical issue that bears on BAM roll-out project phases (e.g. get end-to-end visibility going first in its basic form, then worry about proactive alerting, and review out-of-box feature availability at that time).


The following is a brief summary of BAM Tools in BizTalk Server:

  1. Design-time:
    1. [BTS2006 only] Orchestration Designer for Business Analysts (ODBA) – this is a process visualization tool based on Microsoft® Visio®. The flow design is ultimately incidental to the BAM functionality. This tool includes the basic ability to define a BAM activity (milestones and data to collect) but does not expose any functionality for completing the observation model (i.e. this is only step 1 of a two-step task for this user role).
    2. Microsoft Excel – BAM functionality is built into Excel to guide the business analyst through the construction of an observation model. The activity definition can optionally be performed using the ODBA tool, though the user may also choose to perform both parts of observation model creation (which activity items to collect; how to show them to the user) here.
  2. Deploy/Manage: BM.exe command line utility – this is an extensive and versatile utility for creating and managing the BAM dynamic infrastructure based on definitions provided from business analysts.
  3. Development: Tracking Profile Editor – this is a simple mapping tool for connecting the required data collection (the BAM activity) to whatever actually runs the process being monitored.
  4. Run-time:
    1. Client / MS Excel – this is the existing way to view business KPIs (i.e. aggregate data) through a rich client tool.
    2. [BTS2006 only] Collaboration / Portal & Web Parts – the BAM Portal is a new, out-of-box experience being added to BizTalk Server 2006. The release will also include a way to use BAM activity definitions as the basis for generating Web Parts compatible with Windows SharePoint Services.


BAM run-time in BizTalk Server can be divided into two principal categories:

  1. Binaries:
    1. BAM Interceptor – this component is hard-wired into the BizTalk Server engine. At each discrete processing step, the engine asks the interceptor if it needs data. The interceptor is configured to respond "yes" or "no" based on the tracking profile, which is a mapping between the activity definition and the actual source of the activity events. Any change to visibility requirements is possible without modifying the running solution when tracking profiles are used.
    2. BAM Event Bus Service – this is the service that enables buffering of tracking data. Specifically, while the interceptor (or any other user of BAM's buffered event stream API) puts data into the tracking data table, which acts as a queue, this service busily reads the output end of the queue and persists those events to the BAM warehouse.
    3. BAM APIs – these are the object classes described in earlier sections (see the DirectEventStream API and BufferedEventStream API sections) by which developers can send additional event data to BAM.
  2. SQL Server logic – once data has made it to the BAM Primary Import Database there is an extensive sequence of steps to massage the data to prepare it for user access. The SQL Server logic stack is the means by which event data is correlated, view-prepared, and aggregated.