Analyzing Monitoring Results

The baseline you develop establishes the typical counter values you should expect to see when your system is performing satisfactorily. This section provides guidelines to help you interpret counter values and to eliminate false or misleading data that might cause you to set your target values inappropriately.

When you are collecting and evaluating data to establish a valid performance baseline, consider the following guidelines:

  • When monitoring processes of the same name, watch for unusually large values for one instance and not the others. This can occur because System Monitor sometimes misrepresents data for separate instances of processes of the same name by reporting the combined values of the instances as the value of a single instance. Tracking processes by process identifier can help you get around this problem; the first sketch after this list shows one way to do so. For information about monitoring processes, see "Analyzing Processor Activity" later in this book.

  • When you are monitoring several threads and one of them stops, the data for one thread might appear to be reported for another. This happens because of the way threads are numbered. For example, suppose you begin monitoring a process that has three threads, numbered 0, 1, and 2. If thread 0 stops, the remaining threads are resequenced: the original thread 1 becomes thread 0, and the original thread 2 becomes thread 1. As a result, data for the stopped thread 0 could be reported along with data for the running original thread 1, because that thread is now numbered 0. To get around this problem, include the thread identifiers of the process's threads in your log or display by using the Thread\Thread ID counter; the second sketch after this list illustrates keying data by thread identifier.

  • Do not give too much weight to occasional spikes in data. These might be due to the startup of a process and are not an accurate reflection of counter values for that process over time. When you use counters that average, the effect of a spike can linger in the reported values long after the spike itself has passed; the third sketch after this list shows the arithmetic.

  • For monitoring over an extended period of time, use graphs instead of reports or histograms, because the report and histogram views show only the last values and averages. As a result, they might not give an accurate picture of counter values when you are looking for spikes.

  • Unless you specifically want to include startup events in your baseline, exclude them, because their temporarily high values tend to skew overall performance results; the third sketch after this list also shows the effect of excluding these samples.

  • Investigate zero values or missing data. These can impede your ability to establish a meaningful baseline, and they can have several possible causes. For more information, see "Troubleshooting Problems with Performance Tools" later in this chapter.
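
The first sketch illustrates distinguishing same-named process instances by process identifier rather than by instance name. It is only a sketch: it uses the cross-platform psutil library and a hypothetical process name, neither of which appears in the text above; in System Monitor itself you would log the Process\ID Process counter alongside the counters of interest.

```python
# Sketch (assumes the psutil library is installed): sample per-process CPU
# usage keyed by PID, so two processes with the same name never get merged.
import time

import psutil

TARGET_NAME = "notepad.exe"   # hypothetical process name to watch


def sample_by_pid(name):
    """Return {pid: cpu_percent} for every running instance of `name`."""
    samples = {}
    for proc in psutil.process_iter(["pid", "name"]):
        if proc.info["name"] == name:
            try:
                # cpu_percent(interval=None) compares against the previous
                # call, so values are meaningful from the second sample on.
                samples[proc.info["pid"]] = proc.cpu_percent(interval=None)
            except psutil.NoSuchProcess:
                pass  # the process exited between enumeration and sampling
    return samples


if __name__ == "__main__":
    for _ in range(5):
        print(sample_by_pid(TARGET_NAME))
        time.sleep(1)
```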
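
The second sketch shows the idea behind the Thread\Thread ID advice: key per-thread data by thread identifier rather than by ordinal position, so a stopped thread cannot shift the data of the remaining threads. Again, psutil is only a stand-in for whatever logging mechanism you actually use.

```python
# Sketch (assumes psutil): snapshot per-thread CPU times keyed by thread ID,
# so comparisons survive threads stopping and being renumbered.
import psutil


def snapshot_threads(pid):
    """Return {thread_id: (user_time, system_time)} for the given process."""
    return {t.id: (t.user_time, t.system_time)
            for t in psutil.Process(pid).threads()}


if __name__ == "__main__":
    pid = psutil.Process().pid          # sample this script's own process
    before = snapshot_threads(pid)
    # ... workload runs here; threads may start or stop in the meantime ...
    after = snapshot_threads(pid)
    for tid, (user, system) in after.items():
        prev_user, prev_system = before.get(tid, (0.0, 0.0))
        print(f"thread {tid}: +{user - prev_user:.3f}s user, "
              f"+{system - prev_system:.3f}s system")
```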
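
The third sketch is a small arithmetic illustration of the points about spikes and startup values. The sample values are invented and stand in for any averaged percentage counter.

```python
# Toy illustration (invented values): one startup spike keeps inflating a
# cumulative average, and discarding the warm-up samples removes the skew.
samples = [95, 40, 5, 5, 5, 5, 5, 5, 5, 5]   # first two samples are startup
WARMUP = 2                                    # number of samples to discard


def running_average(values):
    """Yield the cumulative average after each new sample."""
    total = 0.0
    for count, value in enumerate(values, start=1):
        total += value
        yield total / count


print(list(running_average(samples)))
# -> ends at 17.5, still far above the steady-state value of 5
print(list(running_average(samples[WARMUP:])))
# -> a flat 5.0 once the startup samples are excluded
```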