Cookdown

Applies To: System Center Operations Manager 2007

Cookdown refers to a feature of Operations Manager 2007 in which a single copy of a data source module is shared among multiple workflows to reduce overhead. Understanding how cookdown works, and how to design workflows to take advantage of it, is important to ensuring that a workflow operates efficiently and does not place excessive overhead on the managed agent.

Overview of Cookdown

Multiple Workflows

The Operations Manager agent loads a separate copy of a workflow for each instance of the target class. Most agents manage a considerable number of objects, each with several workflows targeted at it, so several workflows are typically running on any given agent. The agent can run multiple workflows efficiently, so this alone is usually not a concern. However, data source modules often run processes outside the Operations Manager agent that have the potential for considerable overhead. The most obvious example is a script, which can generate significant overhead depending on the actions it performs. If a workflow running such a script is targeted at a class that has multiple instances on an agent, that agent runs multiple copies of the script at the same time. As the number of instances of the class increases, so does the number of concurrently running copies of the script, and these can place significant overhead on the agent.

Multiple instances of workflow on a single agent

For example, the Windows Server 2008 management pack has a monitor called Microsoft.Windows.Server.2008.LogicalDisk.FreeSpace (display name Logical Disk Free Space) that runs a script to measure the free space on each logical disk on a managed agent. This monitor is targeted at the Windows Server 2008 Logical Disk class, so it has multiple instances on any agent with more than one logical disk defined. The monitor has three states, meaning that the agent loads a copy of three workflows for each instance of the target class. An agent with four logical disks would therefore have 12 different workflows. Without cookdown, every time the monitor was scheduled to run, 12 copies of the script would run at the same time.

Multiple Workflows with Cookdown

With cookdown, the agent still runs a separate workflow for each instance of the target class. However, it loads only a single copy of the data source module and shares its output among the different workflows. This is a significant reduction in potential overhead.

Multiple instances of workflow sharing a single data source

Only data source modules are cooked down. However, a composite data source module may contain modules of other types, such as probe action modules and condition detection modules. If such a composite data source module supports cookdown, the entire module is cooked down. Because only a single copy of the data source module is loaded, only a single copy of each module within it is loaded.
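
For illustration, the following is a minimal sketch of a composite data source module in management pack XML that combines a scheduler with a script probe. The module ID MyMP.TimedScript.DataSource, the script name, and the IntervalSeconds configuration element are hypothetical; the member modules reference the standard System.Scheduler and Microsoft.Windows.ScriptPropertyBagProbe module types from the System and Windows libraries.

  <DataSourceModuleType ID="MyMP.TimedScript.DataSource" Accessibility="Internal">
    <Configuration>
      <xsd:element name="IntervalSeconds" type="xsd:integer"/>
    </Configuration>
    <ModuleImplementation>
      <Composite>
        <MemberModules>
          <!-- Scheduler data source fires the workflow on a timer. -->
          <DataSource ID="Scheduler" TypeID="System!System.Scheduler">
            <Scheduler>
              <SimpleReccuringSchedule>
                <Interval Unit="Seconds">$Config/IntervalSeconds$</Interval>
              </SimpleReccuringSchedule>
              <ExcludeDates/>
            </Scheduler>
          </DataSource>
          <!-- Probe action runs the script each time the scheduler fires.
               The script name is hypothetical; the body is omitted. -->
          <ProbeAction ID="Script" TypeID="Windows!Microsoft.Windows.ScriptPropertyBagProbe">
            <ScriptName>CollectData.vbs</ScriptName>
            <Arguments/>
            <ScriptBody><![CDATA[' Script body omitted for brevity.]]></ScriptBody>
            <TimeoutSeconds>300</TimeoutSeconds>
          </ProbeAction>
        </MemberModules>
        <Composition>
          <Node ID="Script">
            <Node ID="Scheduler"/>
          </Node>
        </Composition>
      </Composite>
    </ModuleImplementation>
    <OutputType>System!System.PropertyBagData</OutputType>
  </DataSourceModuleType>

If a module like this is cooked down, the agent loads one copy of the scheduler and one copy of the script probe, regardless of how many workflows reference the module.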

The previously mentioned monitor Microsoft.Windows.Server.2008.LogicalDisk.FreeSpace supports cookdown, so only a single copy of its data source module is loaded regardless of the number of logical disks on the agent. A separate set of workflows is still loaded for each logical disk, yet they all share the single copy of the data source module. Because the data source module is what runs the script, the result is a significant reduction in the overhead generated by the different instances of the monitor.

In addition to multiple copies of the same workflow, cookdown applies to different workflows that share a data source module. For example, one data source module running a script might be used by a monitor to measure health state and also by a rule to collect performance data. Cookdown can then result in a single copy of the data source module being shared between the different workflows.
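
As a hypothetical sketch, a collection rule and a unit monitor might both be built on the MyMP.TimedScript.DataSource module shown earlier. The rule references the data source module directly, while the monitor references a unit monitor type (MyMP.TimedScript.MonitorType, not shown) that wraps the same module. Because both pass the same IntervalSeconds value, the two workflows can cook down to one copy of the data source.

  <!-- Rule: collects the script output as performance data.
       Condition detection and write actions are omitted for brevity. -->
  <Rule ID="MyMP.CollectData.Rule" Target="Windows!Microsoft.Windows.LogicalDisk" Enabled="true">
    <Category>PerformanceCollection</Category>
    <DataSources>
      <DataSource ID="DS" TypeID="MyMP.TimedScript.DataSource">
        <IntervalSeconds>900</IntervalSeconds>
      </DataSource>
    </DataSources>
  </Rule>

  <!-- Monitor: built on a unit monitor type that wraps the same data source.
       Operational states and alert settings are omitted for brevity. -->
  <UnitMonitor ID="MyMP.FreeSpace.Monitor" TypeID="MyMP.TimedScript.MonitorType"
               Target="Windows!Microsoft.Windows.LogicalDisk"
               ParentMonitorID="Health!System.Health.AvailabilityState"
               Accessibility="Internal" Enabled="true">
    <Category>AvailabilityHealth</Category>
    <Configuration>
      <IntervalSeconds>900</IntervalSeconds>
    </Configuration>
  </UnitMonitor>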

Multiple instances of workflow sharing a single data source module

Criteria for Cookdown

A workflow does not have to specify whether cookdown should be performed. Cookdown is performed automatically on any workflow that meets a single criterion: all copies of the data source module must be called with identical values for each parameter. If this is the case, all instances of the workflow on an agent cook down to a single copy of the data source module.

The only way that the value of a parameter can vary between instances of the same module is if a $Target variable is used. A $Target variable resolves to the value of a property on the target object, and because that value may differ between instances, the value provided to the module parameter may differ as well. In this case, cookdown is not performed, and a separate copy of the data source module is used for each workflow.
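
For example, suppose the hypothetical module above accepted an additional DriveLetter parameter. A configuration such as the following would prevent cookdown, because the DeviceID property resolves to a different value for each logical disk instance:

  <DataSource ID="DS" TypeID="MyMP.TimedScript.DataSource">
    <IntervalSeconds>900</IntervalSeconds>
    <!-- $Target resolves per instance (C:, D:, and so on), so each
         workflow receives different configuration and no cookdown occurs. -->
    <DriveLetter>$Target/Property[Type="Windows!Microsoft.Windows.LogicalDevice"]/DeviceID$</DriveLetter>
  </DataSource>

A common way to preserve cookdown in a situation like this is to have the script collect data for all instances in a single run and filter the output for each instance later in the workflow.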

Values provided to the parameters of a data source module shared between different kinds of workflows have more potential to vary. For example, a rule and a monitor might share a data source module that runs a script. The interval at which the script should run is provided to the data source module as a parameter. If the monitor and rule are configured to run at different intervals, the values provided to that parameter differ between the workflows. In this case, cookdown is not performed, and a separate copy of the data source module is required for each workflow.
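
Continuing the hypothetical fragments above, cookdown would fail if the rule and the monitor specified different intervals:

  <!-- Rule configuration: the script runs every 900 seconds. -->
  <DataSource ID="DS" TypeID="MyMP.TimedScript.DataSource">
    <IntervalSeconds>900</IntervalSeconds>
  </DataSource>

  <!-- Monitor configuration: the script runs every 300 seconds. The parameter
       values no longer match, so the agent loads two copies of the data
       source module, and the script runs on both schedules. -->
  <Configuration>
    <IntervalSeconds>300</IntervalSeconds>
  </Configuration>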

When to Configure for Cookdown

Changing a script and a workflow to support cookdown can take significant effort, and that effort is not worthwhile in all circumstances. You need to be concerned about cookdown only when multiple instances of the target class are expected on a single agent. When only a single instance of the target class is expected on the agent, cookdown will not be performed anyway, because there will be only a single copy of the workflow.