This document is archived and is no longer maintained.

Practices to Avoid

Updated: January 2010

Applies To: System Center Operations Manager 2007

The following are common mistakes made when designing a service model for an application.

Classes

Creating Too Many Classes

Creating too many classes can result in needless complexity with minimal value. A good rule is to use the fewest classes that achieve the desired monitoring results. Other than abstract classes, if a class is not going to be the target of any rules or monitors, it probably should not be created. Also, if two application components are similar, consider modeling them with a single class, possibly by using a property that can hold the values for any differences.
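For example, rather than defining a separate class for each of two similar components, a single class can carry a property that records the difference. The following Management Pack XML fragment is a hypothetical sketch; the ID, base class, and property name are illustrative and not taken from this article:

```xml
<!-- One class models both component variants; the Role property
     records which variant a discovered instance represents. -->
<ClassType ID="MyMP.AppComponent" Accessibility="Internal"
           Base="Windows!Microsoft.Windows.LocalApplication"
           Abstract="false" Hosted="true" Singleton="false">
  <Property ID="Role" Type="string" Key="false" />
</ClassType>
```

Rules and monitors can then use the property in expressions where the two variants need different treatment, instead of being duplicated across two class targets.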

Creating Classes for Volatile Objects

Classes should represent objects that are fairly static. They are created and removed by administrators of the managed computers and are not created frequently through automated processes. Examples include an application installed on a server, a database, and a Web site. Generally, most runs of a discovery for a class should not be expected to find a new instance. Too much volatility can result in excess load on the Root Management Server and lead to error messages caused by too-frequent updates to the Operations Manager database. Beyond performance considerations, it is also typically not useful to have health measurements for objects that exist only for a short time.

For example, consider a Voice over IP application. You may want to create a class representing a telephone call. A monitor targeted at the class would measure the quality of each call and report an error state if there is a decrease in the quality level of the call. This is a questionable design, because telephone calls are constantly created and destroyed. In fact, discovery for the class would have to be run very frequently; otherwise, most of the calls would never be discovered. For those calls that were discovered, by the time that an administrator was able to respond to a message generated by a monitor, the telephone call would likely have ended.

A better solution for this scenario would be to target monitors at a class created for the application installation on the server. That class could be the target for a monitor that measures the health of calls and detects any error events indicating a problem with a particular call. Any health state would be associated with the application itself, and any alerts could include an identifier for the particular call should a problem be detected.
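The pattern can be sketched in Management Pack XML as follows. This is a hypothetical fragment, not from the original article: the IDs are illustrative, the monitor type is the standard event-log monitor type from the Microsoft.Windows library, and it is assumed that the application writes the call identifier as the first event parameter:

```xml
<!-- The monitor targets the application class, not a per-call class.
     The call identifier (assumed to be event parameter 1) is passed
     into the alert description instead of being modeled as an object. -->
<UnitMonitor ID="MyMP.CallQualityMonitor" Accessibility="Internal" Enabled="true"
             Target="MyMP.VoipApplication"
             ParentMonitorID="Health!System.Health.AvailabilityState"
             TypeID="Windows!Microsoft.Windows.2SingleEventLog2StateMonitorType"
             Remotable="true" Priority="Normal" ConfirmDelivery="false">
  <Category>AvailabilityHealth</Category>
  <AlertSettings AlertMessage="MyMP.CallQualityMonitor.AlertMessage">
    <AlertOnState>Error</AlertOnState>
    <AutoResolve>true</AutoResolve>
    <AlertPriority>Normal</AlertPriority>
    <AlertSeverity>Error</AlertSeverity>
    <AlertParameters>
      <AlertParameter1>$Data/Context/Params/Param[1]$</AlertParameter1>
    </AlertParameters>
  </AlertSettings>
  <!-- operational states and event criteria omitted from this sketch -->
</UnitMonitor>
```

The health state rolls up to the application object, which persists, while each transient call appears only as detail in an alert.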

Properties that Update Too Frequently

Properties should change rarely after they are first discovered. After initial discovery of an object, discovery data is sent from the agent to the Root Management Server only when a configuration change is detected. If properties change frequently, discovery data is sent to the Root Management Server every time that discovery runs, resulting in excess load. Frequent updating can also affect configuration reports that detect changes in property values.

The most common cause of this issue is a model that stores performance data or health state in a property, which is not a recommended practice. For example, if an object performs some kind of replication, you might consider creating a property to hold the last replication time. But this kind of performance data should not be stored in a property. A better solution would be to collect a numeric value, such as the number of minutes or hours since the last replication, as performance data. This data could be collected with a rule for tracking it on a graph in the console or in a report. A monitor could also be created comparing the collected performance value with a numeric threshold to set the health state according to the time since the last successful replication.
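Such a monitor can be sketched with the standard System.Performance.ThresholdMonitorType. This is a hypothetical fragment, not from the original article; the IDs, counter names, and threshold are illustrative and assume the application exposes a "minutes since last replication" performance counter:

```xml
<!-- Set health state from a collected "minutes since last replication"
     value instead of storing that value in a class property. -->
<UnitMonitor ID="MyMP.ReplicationAgeMonitor" Accessibility="Internal" Enabled="true"
             Target="MyMP.Application"
             ParentMonitorID="Health!System.Health.AvailabilityState"
             TypeID="Perf!System.Performance.ThresholdMonitorType"
             Remotable="true" Priority="Normal">
  <Category>AvailabilityHealth</Category>
  <OperationalStates>
    <OperationalState ID="UnderThreshold" MonitorTypeStateID="UnderThreshold" HealthState="Success" />
    <OperationalState ID="OverThreshold" MonitorTypeStateID="OverThreshold" HealthState="Error" />
  </OperationalStates>
  <Configuration>
    <ComputerName>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/NetworkName$</ComputerName>
    <CounterName>Minutes Since Last Replication</CounterName>
    <ObjectName>MyApp Replication</ObjectName>
    <InstanceName />
    <AllInstances>false</AllInstances>
    <Frequency>900</Frequency>
    <Threshold>60</Threshold>
  </Configuration>
</UnitMonitor>
```

A companion collection rule sampling the same counter would make the value available for graphs and reports, with no property update ever being triggered.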

Discoveries

Too Frequent Discoveries

New objects installed on a managed computer or changes to existing objects will not be recognized by Operations Manager until the next time that the discovery for the class runs. Frequencies of only a few minutes are sometimes used to reduce the time that is required for such additions and changes to be detected. However, a short frequency interval increases the load on the agent running the discovery and usually provides minimal value.

Even though a particular management pack may have only a few discoveries, an agent must run discoveries from multiple management packs installed in the environment in addition to the other kinds of workflows, such as rules and monitors. Because of this, care should be taken to balance the need for quick detection of changes with a minimization of overhead.
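The discovery interval is set in the data source configuration of the discovery. The fragment below is a hypothetical sketch with an illustrative four-hour interval; the module type is the standard filtered registry discovery provider, and all other configuration is omitted:

```xml
<!-- Frequency is in seconds; 14400 = 4 hours. A long interval keeps
     agent overhead low for a discovery that rarely finds changes. -->
<DataSource ID="DS" TypeID="Windows!Microsoft.Windows.FilteredRegistryDiscoveryProvider">
  <!-- computer name, registry attribute definitions, class mapping,
       and filter expression omitted from this sketch -->
  <Frequency>14400</Frequency>
</DataSource>
```

Intervals of several hours, or even a day, are usually sufficient for objects that administrators create and remove manually.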

If a particular class requires a short discovery interval because of frequent changes to the application’s components, this may indicate a health model that violates the above recommendations for volatile objects and properties that update frequently.

Script Discoveries Targeting Broad Classes

Discoveries for some classes in a management pack will need to search across all computers in an environment (or at least a large set of computers) to identify where the application is installed. These discoveries will continue to run on a regular basis even though the application may never be installed on most of these computers.

Because of their fairly small overhead, we strongly recommend that only registry discoveries target broad classes, such as the Windows Operating System class. Script discoveries, which have significantly larger resource requirements, should target only those classes specific to the application that were previously discovered through the registry. This strategy means that only the computers that have the application installed are required to run the scripts. Computers without the application installed must run only the registry discoveries, which have minimal resource requirements.
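The two-stage pattern can be sketched as follows. This is a hypothetical Management Pack XML fragment, not from the original article; the IDs are illustrative, and the module types are the standard registry and timed-script discovery providers from the Microsoft.Windows library:

```xml
<!-- Stage 1: a lightweight registry discovery targets the broad
     operating-system class on every managed computer. -->
<Discovery ID="MyMP.App.RegistryDiscovery" Enabled="true"
           Target="Windows!Microsoft.Windows.Server.OperatingSystem"
           ConfirmDelivery="false" Remotable="true" Priority="Normal">
  <Category>Discovery</Category>
  <DiscoveryTypes>
    <DiscoveryClass TypeID="MyMP.Application" />
  </DiscoveryTypes>
  <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.FilteredRegistryDiscoveryProvider">
    <!-- registry key check and class mapping omitted from this sketch -->
  </DataSource>
</Discovery>

<!-- Stage 2: the heavier script discovery targets MyMP.Application,
     so it runs only on computers where stage 1 found the application. -->
<Discovery ID="MyMP.Components.ScriptDiscovery" Enabled="true"
           Target="MyMP.Application"
           ConfirmDelivery="false" Remotable="true" Priority="Normal">
  <Category>Discovery</Category>
  <DiscoveryTypes>
    <DiscoveryClass TypeID="MyMP.AppComponent" />
  </DiscoveryTypes>
  <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.TimedScript.DiscoveryProvider">
    <!-- script body and parameters omitted from this sketch -->
  </DataSource>
</Discovery>
```

The Target attribute is what enforces the strategy: a workflow runs only on agents that host an instance of its target class.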
