
Chapter 9: Stabilizing Phase

Published: May 31, 2006

This chapter describes the suggested strategy for stabilizing an application that has been migrated from UNIX to the Microsoft® Windows® operating system. The Stabilizing Phase involves testing the application for the expected functionality and improving the quality of the application to meet the acceptance criteria set for the project.

This chapter also describes the objectives of testing in the Stabilizing Phase. It introduces testing processes and methodologies that can be used to test applications with different architectures. You need to test the applications to verify that they meet the expected functionality and acceptance criteria set for the project. Various tools that you can use to test applications are also discussed here. The information in this chapter will enable you to choose the most appropriate tools for testing your application.

On This Page

Goals for the Stabilizing Phase
Testing the Solution
Resolving the Solution Defects
Conducting the Solution Pilot
Closing the Stabilizing Phase: Release Readiness Approved
Tuning
Testing and Optimization Tools
Further Reading

Goals for the Stabilizing Phase

This section describes the primary goals that you need to accomplish in the Stabilizing Phase. This section acquaints you with the major tasks to be performed and the deliverables expected from the Stabilizing Phase.

The primary goal of the Stabilizing Phase is to improve the quality of the solution so that it meets the acceptance criteria and can be released into the production environment. During this phase, the team tests the feature-complete migrated application. At this point, the application is subjected to various tests, such as user acceptance testing (UAT) and regression testing, with bugs tracked against the application’s requirements. The build must demonstrate that it reaches the defined quality and performance levels and is ready for full production deployment.

Testing during the Stabilizing Phase is an extension of the testing conducted while the application was built in the Developing Phase. It exercises the usage and operation of the application under realistic conditions. Test plans include testing the functionality of the migrated application and comparing that functionality with the functionality of the original application. Test plans must also include test cases for the new features added to the application.

After a build is stabilized, the solution is deployed. This phase culminates with the Release Readiness Approved Milestone, indicating that the team and customer agree that all the outstanding issues have been addressed.

Major Tasks and Deliverables

Table 9.1 describes the major tasks that must be completed during the Stabilizing Phase and lists the processes and roles responsible for achieving them.

Table 9.1. Major Stabilizing Phase Tasks and Owners

Major Tasks

Owners

Testing the solution

The team executes the test cases that were created during the Planning Phase and enhanced and tested during the Developing Phase. Testing includes comparing the test results of the parent application and the migrated application as well as testing the applications from different perspectives.

Test team

Resolving defects

The team triages the defects identified and resolves them. New tests are developed to reproduce issues reported from other sources. The new test cases are integrated into the test suite.

Development team, Test team

Conducting the solution pilot

This task involves setting up the deployment environment and the migrated application on the staging area to test the application before it is deployed. The team moves a solution pilot from the development area to a staging area to test the solution with the actual users and real scenarios. It also includes testing the solution in a live environment. The solution pilot is conducted before starting the Deploying Phase.

Release Management team

Closing the Stabilizing Phase

The team documents the results of the tasks performed in this phase and solicits management approval at the Release Readiness Approved Milestone meeting.

Project team

Table 9.2 lists the tasks described in Table 9.1 and considers the tasks from the perspective of the team roles. The primary team roles directing the Stabilizing Phase are Test and Release Management.

Table 9.2. Role Cluster Focuses and Responsibilities in Stabilizing Phase

Role Cluster

Focus and Responsibility

Product Management

Execute communications plan and launch test phase.

Program Management

Track project and bug triage.

Release Management

Preparation for deployment of the application and setting up the production environment.

Development

Bug triage and resolution, code optimization, and hardware or service reconfiguration.

User Experience

Stabilization of user documentation and training materials.

Test

  • Generate build and triage plan.

  • Track test schedule.

  • Review bugs entered in the bug-tracking tool and monitor their status during triage meetings.

  • Generate weekly status reports.

  • Escalate issues that are blocking progress, review impact analysis, and generate the change management document.

  • Ensure that the appropriate level of testing is achieved for a particular release.

  • Lead the actual Build Acceptance Test (BAT) execution.

  • Execute test cases and generate test reports.

Testing the Solution

This section describes the testing activities that are performed in the Stabilizing Phase. In the Stabilizing Phase, testing is performed not only on individual components of the solution, but on the solution as a whole, because all features and functions of the solution are now complete, and all solution elements have been built. The testing that began during the Developing Phase according to the test plan created during the Planning Phase continues with further testing, tracking, documentation, and reporting activities. This mainly involves UAT and regression testing as explained in the next subsections in detail.

User Acceptance Testing (UAT)

The Stabilizing Phase emphasizes UAT to ensure that the migrated solution meets the business needs. UAT is performed on a collection of business functions in a production environment after the completion of functional testing. This is the final stage in the testing process before the system is accepted for operational use. It involves testing the system with data supplied by the actual user or customer instead of the simulated data developed as part of the testing process. The result of UAT confirms that the solution meets the overall user requirements and determines the release readiness status of the system. Running a pilot for a select set of users helps to identify areas where users have trouble understanding, learning, and using the solution.

For migration projects, UAT involves testing the migrated application and identifying the defects. These defects are addressed and regression testing is conducted for each fixed defect to ensure the fix doesn’t break any other functionality of the migrated application. The UAT Summary confirms that the solution meets the customer’s acceptance criteria, thereby assisting in customer acceptance of the solution.

Regression Testing

Regression testing refers to retesting previously tested components and functionality of the system to ensure that they still function properly after a change has been made to another part of the system. For migration projects, this is the most important class of tests. As defects are discovered in a component, modifications are made to correct them, which may require retesting other components or even the entire solution.

The regression test helps in the following areas:

  • Ensuring that no new problems are introduced and that operational performance is not degraded by the modifications.

  • Ensuring that the effects of the changes are transparent to other areas of the application and to other components that interact with the application.

  • Reusing, with modifications, the original test data and test cases from other levels of testing.
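For a migration project, a regression suite can be as simple as re-running each test case against the migrated application and comparing its output with a baseline captured from the original UNIX application. The sketch below assumes a hypothetical command-line application; the commands and baseline outputs are illustrative, not from any real project:

```python
import subprocess

# Hypothetical baseline: expected outputs captured from the original UNIX
# application for a set of representative inputs.
BASELINE = {
    "report --summary": "total=42\n",
    "report --detail": "total=42 items=7\n",
}

def run_case(command):
    """Run one test case against the migrated application (name assumed)."""
    return subprocess.run(
        command.split(), capture_output=True, text=True
    ).stdout

def regression_suite(run=run_case):
    """Re-run every baseline case and collect any output that changed."""
    failures = []
    for command, expected in BASELINE.items():
        actual = run(command)
        if actual != expected:
            failures.append((command, expected, actual))
    return failures
```

After each defect fix, the whole suite is re-run; an empty failure list indicates that the fix did not break previously working functionality.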

Resolving the Solution Defects

In order to resolve defects, they must be reproduced and tested in the test environment. Each defect reproduced in the test environment should be tracked with its status and severity. An important aspect of such tests is test tracking and reporting, which occur at frequent intervals during the Developing and Stabilizing Phases. During the Stabilizing Phase, this reporting is driven by the bug count. Regular communication of the test status to the team and other key stakeholders keeps the project running smoothly. After fixing the defects, test cases and test data should be updated and integrated into the test suite.

Bug Convergence

Bug convergence is the point at which the team makes visible progress against the active bug count. At bug convergence, the rate of bugs resolved exceeds the rate of bugs found, thus the actual number of active bugs decreases. After bug convergence, the number of bugs should continue to decrease until the zero bug bounce task, as explained in the next sections.
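The bug convergence point can be read directly off the daily bug counts: it is the first day on which the resolve rate exceeds the find rate, so the active bug count starts to fall. A minimal sketch, using illustrative counts rather than real project data:

```python
# Illustrative daily counts of bugs found and bugs resolved.
daily_found    = [30, 25, 28, 20, 12, 8, 5]
daily_resolved = [10, 15, 22, 26, 25, 18, 12]

def convergence_day(found, resolved):
    """Return the index of the first day on which more bugs were
    resolved than found, or None if convergence never occurs."""
    for day, (f, r) in enumerate(zip(found, resolved)):
        if r > f:
            return day
    return None
```

With the sample data above, convergence occurs on day 3, when 26 bugs are resolved against 20 found.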

Interim Milestone: Bug Convergence

Bug convergence tells the team that most of the bugs have been addressed and the rate of bugs resolved is higher than the rate of new bugs found. This can be considered as the interim milestone and the migrated application can be taken for zero bug bounce verification.

Zero Bug Bounce

Zero bug bounce is the point in the project when development finally catches up to testing and there are no active bugs for the moment. After zero bug bounce, the number of bugs should continue to decrease until the product is sufficiently stable for the team to build the first release candidate.

Interim Milestone: Zero Bug Bounce

Achieving zero bug bounce is a clear sign that the solution is near to being considered a stable release candidate.

Release Candidates

After the first achievement of zero bug bounce, a series of release candidates are prepared for release to the pilot group. Each release is marked as an interim milestone.

Guidelines for declaring a build a release candidate include the following:

  • Each release candidate has all the required elements to qualify for release to production.

  • The test period that follows determines whether a release candidate is ready to release to production or if the team must generate a new release candidate with appropriate fixes.

  • Testing the release candidates, carried out internally by the team, requires highly focused and intensive efforts and concentrates heavily on discovering critical bugs.

Interim Milestone: Release Candidate

As each new release candidate is built, there should be fewer bugs reported, classified, and resolved. Each release candidate marks significant progress in the team’s approach toward deployment. With each new candidate, the team must focus on maintaining tight control over quality.

Interim Milestone: Preproduction Test Complete

Eventually, a release candidate is prepared that contains no known defects. After this has occurred, no defects should be found within the isolated staging environment. At this stage, all testing that can be done before putting the migrated component into production is complete.

Conducting the Solution Pilot

This section describes the best practices to be adopted for conducting a pilot of the migrated application. This section provides you with information regarding various points to be considered while conducting a pilot and deciding the next steps to take after the pilot.

A pilot release is a deployment into a subset of the live production environment or user group. During the pilot, the team tests as much of the entire solution as possible in a true production environment. Depending on the context of the project, the pilot can take various forms:

  • In an enterprise, a pilot can be a group of users or a set of servers in a data center.

  • For migration projects, the pilot might involve testing the most demanding application or database that is being migrated with a sophisticated group of users that can provide helpful feedback.

The common element in all the piloting scenarios is testing under live conditions. The pilot is not complete until the team ensures that the solution is viable in the production environment and that the solution is ready for deployment.

Some of the best practices that should be followed while conducting a pilot are:

  • Before beginning a pilot, the team and the pilot participants must clearly identify and agree upon the success criteria of the pilot. These should map back to the success criteria for the development effort.

  • Any issues identified during the pilot must be resolved either by further development, by documenting resolutions and workarounds for the installation team and production support staff, or by incorporating them as supplemental material in training or help documentation.

  • Before the pilot is started, a support structure and an issue-resolution process must be in place. This may require the support staff getting trained in the application area that is being piloted.

  • To identify any issues and confirm that the deployment process will work, it is necessary to implement a trial run or a rehearsal of all the elements of the deployment prior to the actual deployment.

After you collect and evaluate the pilot data, a corresponding strategy should be selected based on the findings from the analysis of pilot data. The next strategy could be one of the following:

  • Stagger forward. Deploy a new release to the pilot group.

  • Roll back. Execute the rollback plan and revert the pilot group to the stable state it was in before the pilot started.

  • Suspend. Suspend the entire pilot.

  • Fix and continue. If you find an issue during the pilot, fix the issue and continue with the next steps.

  • Proceed. Advance to the Deploying Phase.

After the pilot has been completed, the pilot team must prepare a report detailing each lesson learned and how new information was incorporated and issues were resolved.

Interim Milestone: Pilot Complete

This milestone signifies that the pilot has been successfully completed and that the team is ready to proceed to the Deploying Phase.

Closing the Stabilizing Phase: Release Readiness Approved

The Stabilizing Phase culminates with the Release Readiness Approved Milestone. The team builds a release candidate (with all the major defects fixed) that satisfies the necessary quality policy of the organization. All rounds of testing must be completed, meaning that all test plans have been executed and test cases satisfied before the migrated component can be moved into the production environment. Then the release is approved with a formal sign-off marking that the Release Readiness Approved Milestone has been reached.

Key stakeholders, typically representatives of each team role and any important customer representatives who are not on the project team, signal their approval of the milestone by signing or initialing a document stating that the solution is complete and approved for release. The sign-off document becomes a project deliverable and is archived for future reference.

The performance of the application following deployment in the production environment is a key criterion in indicating a successful application migration. The following sections will help you to optimize the performance of the application and the tools following deployment.

Tuning

This section discusses tuning of the solution in detail, including how to performance-tune the migrated application and how to scale the application up and out. In addition, the section discusses multiprocessor considerations and network utilization. You can use this information to identify the parameters that affect application performance and the steps to consider when scaling the application.

Performance Tuning

Performance management starts with gathering a baseline of data that indicates what system performance should look like. After the baseline is established, it is used to evaluate the performance of the application. Performance problems typically do not become apparent until the application is placed under an increased load.

Measuring the performance of an application when placed under ever increasing loads determines the scalability of that application. When the performance begins to fall below the stated minimum performance requirements, you have reached the limit of scalability of the application. For more information about scaling, refer to the "Scaling Up and Scaling Out" section later in this chapter.
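The approach described above can be sketched as a simple search: increase the load until the measured performance falls below the stated minimum requirement. The threshold value and the measure() callback below are assumptions standing in for a real load-test harness:

```python
# Assumed requirement: responses must arrive within 500 ms.
MAX_ACCEPTABLE_RESPONSE_MS = 500

def scalability_limit(measure, loads):
    """Walk through increasing load levels and return the largest load
    whose measured response time still meets the requirement, or None
    if even the smallest load fails.  `measure(load)` is a stand-in
    for running a real load test and returning a response time in ms."""
    limit = None
    for load in loads:
        if measure(load) <= MAX_ACCEPTABLE_RESPONSE_MS:
            limit = load
        else:
            break
    return limit
```

The returned value is the scalability limit of the application under the stated requirement; beyond it, scaling up or scaling out is required.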

Performance tuning can be carried out in the following ways:

  • Tuning the computer hardware by adding more memory, upgrading CPUs, adding disk controllers, or upgrading network controllers. This is often the most efficient way to improve application performance.

  • Rearchitecting the application to remove bottlenecks such as poor threading and loops that consume too much CPU time. This step also helps considerably in performance tuning.

  • Operating system parameter tuning. This involves adjusting the amount of page store and tweaking network stack parameters.

  • Tuning of the configurations on a database server, application server, or Web server.

In UNIX, performance is monitored using a type of kernel-level instrumentation, along with rudimentary tools for monitoring the CPU, disk, and memory usage. Windows Server™ 2003 is designed such that it exposes a great deal of performance data. Tools like Windows Performance Monitor (PerfMon) can be used to export detailed information about the processor, memory, disk, and network usage. Performance Monitor support is integrated throughout Windows. Administrators can gather a variety of performance data from many computers simultaneously.
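On Windows, Performance Monitor counters can be logged to a CSV file (for example, with the typeperf command-line tool) and post-processed offline. The sketch below parses a hypothetical sample of such output; the host name, counter paths, and values are illustrative:

```python
import csv
import io

# Hypothetical sample of the CSV format that typeperf produces when
# logging Performance Monitor counters to a file.
SAMPLE = '''"(PDH-CSV 4.0)","\\\\HOST\\Processor(_Total)\\% Processor Time","\\\\HOST\\Memory\\Available MBytes"
"05/31/2006 10:00:01","12.5","512"
"05/31/2006 10:00:06","87.0","498"
'''

def average_counter(csv_text, column):
    """Average one counter column from exported PerfMon data.
    Row 0 is the header; column 0 is the timestamp."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    values = [float(r[column]) for r in rows[1:]]
    return sum(values) / len(values)
```

Averaging counters this way gives the baseline numbers against which later load tests can be compared.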

UNIX kernels tend to have many configurable parameters that can be fine-tuned for specific applications. By contrast, the Windows kernel is largely self-tuned. The virtual memory, thread scheduling, and I/O subsystems all dynamically adjust their resource usage and priority to maximize throughput. The difference between the two approaches is that, in UNIX, kernel parameters are tweaked for maximum advantage in a benchmark, even if those tweaks hurt real-world performance, while Windows lets the kernel tune itself for whatever load is placed on it.

Scaling Up and Scaling Out

Scalability is a measure of how easy it is to modify the application infrastructure and architecture to meet variances in utilization. As with other application capabilities, the decisions you make during the design and early coding phases largely dictate the scalability of your application.

Application scalability requires a balanced partnership between two distinct domains: software and hardware.

Scaling up is achieving scalability with the use of better, faster, and more expensive hardware to move the processing capacity limit from one part of the computer to another. Scaling up includes adding more memory, adding more or faster processors, or just migrating the application to a more powerful, single computer. Typically, this method allows for an increase in capacity without requiring changes to source code. However, adding CPUs may not add performance in a linear fashion. Instead, the performance gain curve slowly tapers off as each additional processor is added.
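The tapering gain from additional processors is commonly illustrated with Amdahl's law, which bounds speedup by the fraction of the work that cannot be parallelized. A sketch, assuming a fixed 10 percent serial fraction purely for illustration:

```python
def amdahl_speedup(processors, serial_fraction):
    """Amdahl's law: overall speedup with N processors when a fixed
    fraction of the work must run serially."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# With 10% serial work, each doubling of processors buys less than the
# one before it; the speedup can never exceed 1 / 0.10 = 10x.
gains = [amdahl_speedup(n, 0.10) for n in (1, 2, 4, 8, 16)]
```

The curve rises monotonically but flattens, which matches the observation that adding CPUs does not add performance in a linear fashion.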

Scaling out distributes the processing load across more than one server by dedicating several computers to common tasks. In this scenario, the fault tolerance of the application can be increased. Scaling out also presents a greater management challenge because of the increased number of computers.

Developers and administrators use a variety of load-balancing techniques to scale out with the Windows platform. Load balancing allows an application to scale out across a cluster of servers, making it easy to add capacity by adding replicated servers. It provides redundancy, giving the site failover capabilities so that it remains available to users even if one or more servers fail or are taken down.

Scaling out provides a method of scalability that is not hampered by hardware limitations. Each additional server provides a near linear increase in scalability.
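The simplest load-balancing technique used when scaling out is round-robin rotation across replicated servers. A minimal sketch (the server names are illustrative); real deployments typically add health checks and weighting:

```python
import itertools

class RoundRobinBalancer:
    """Route each incoming request to the next server in rotation."""

    def __init__(self, servers):
        self._pool = itertools.cycle(servers)

    def route(self, request):
        # The request itself is ignored here; a real balancer might
        # hash on it for session affinity.
        return next(self._pool)
```

Adding capacity then amounts to adding another replicated server to the pool.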

The key to successfully scaling out an application is location transparency. If any of the application code depends on knowing which server is running the code, location transparency has not been achieved and scaling out will be difficult: code changes would be needed to scale the application from one server to many, which is seldom economical. If you design the application with location transparency in mind, scaling out becomes an easier task.

Multiprocessor Considerations

Application performance can improve when multiple processors share the same task, because the processing load is distributed across the processors.

Computationally intensive tasks are characterized by intensive processor usage with relatively few I/O operations. The ongoing challenge with these applications is to improve performance. You can do this with a faster computer, a more efficient algorithm, an improved implementation, or more processors. Tuning techniques can also improve performance.

Using additional processors can mean taking advantage of an SMP computer or by using distributed computing with multiple networked computers. However, adding CPUs does not add performance in a linear fashion. Instead, the performance gain curve slowly tapers off as each additional processor is added. The characteristics of this behavior depend on how the application is designed. For computers with SMP configurations, each additional processor incurs system overhead. After you have upgraded each hardware component to its maximum capacity, you will eventually reach the real limit of the processing capacity of the computer. At that point, the next step is to move to another computer.

Multiprocessor optimization can be achieved by making use of threads.
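As a concrete sketch of distributing a computationally intensive task across processors: partition the input, compute partial results in parallel, and combine them. The example below uses Python's multiprocessing pool (processes rather than threads, since CPython threads do not run CPU-bound work in parallel) with an illustrative sum-of-squares workload:

```python
from multiprocessing import Pool

def partition(start, stop, parts):
    """Split the range [start, stop) into roughly equal chunks,
    one chunk per processor."""
    step, rem = divmod(stop - start, parts)
    chunks, lo = [], start
    for i in range(parts):
        hi = lo + step + (1 if i < rem else 0)
        chunks.append((lo, hi))
        lo = hi
    return chunks

def partial_sum(bounds):
    """Stand-in for a computationally intensive, I/O-light task."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers):
    """Distribute the chunks across `workers` processes and combine."""
    with Pool(processes=workers) as pool:
        return sum(pool.map(partial_sum, partition(0, n, workers)))
```

Because the partial results are independent, combining them is a single addition; the coordination overhead per extra worker is what makes the gain taper off in practice.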

Note   More information on multiprocessor optimizations is available at http://msdn.microsoft.com/msdnmag/issues/01/08/Concur/.

Network Utilizations

Network resources, such as available bandwidth and latency, must be predicted and managed on computers and devices throughout the network.

Optimal network utilization is achieved through cooperation among the end nodes, switches, routers, and wide area network (WAN) links through which data must pass. Tools exist that analyze network traffic, provide network statistics and packet information, and identify areas of congestion, helping you make better use of the network.

Quality of Service (QoS), an industry-wide initiative, achieves a more efficient use of network resources by differentiating between data subsets. Windows 2000 implements QoS by including a number of components that can cooperate with one another.

Note
More information on QoS on Windows is available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/qos/qos/qos_start_page.asp.
Network Monitor captures network traffic for display and analysis. More information on Network Monitor is available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/netmon/netmon/network_monitor.asp.
Network Probe is another tool for traffic-level network monitoring and for analysis and visualization. More information on Network Probe is available at http://www.objectplanet.com/probe/.

Testing and Optimization Tools

The following are some of the tools commonly used with Windows Services for UNIX 3.5.

Monitoring Tools

  • netstat. Allows you to track the state of the socket ports. This tool is a part of Windows.

Testing and Debugging Tools

  • Electric Fence. A malloc debugger and bounds checker. It uses the virtual memory hardware of your system to detect when software overruns the boundaries of a malloc buffer. It also detects any accesses of memory released by free.

  • hexdump. Gives an ASCII, decimal, hexadecimal, and octal dump. The hexdump command can be used to display the contents of a binary file or a file that contains unprintable characters.

  • nm. Used to examine binary files (including libraries, compiled object modules, shared-object files, and stand-alone executables) and to display the contents of those files or the meta information stored in them.

  • ulimit. Used to display or control the resources available to a process.

  • xev. Prints the contents of X events. It is useful for seeing what causes events to occur and for displaying the information they contain.

  • truss. A run-time system call tracker. It follows the detailed history of system calls. It is useful for narrowing down errors before starting a debugger.

  • pstat. Displays detailed run-time information on a per-process basis. This tool is a part of Windows Services for UNIX.

  • ps. Provides run-time information about processes and their interrelationships. This tool is a part of Windows Services for UNIX.

  • objdump. Displays information about a binary or object file that can be useful for tracking down run-time problems (for example, shared library dependencies). This tool is a part of Windows Services for UNIX.

  • pstruct. Dumps information about C structures, such as offsets, and actual member sizes. Good for checking alignment problems. This tool is a part of Windows Services for UNIX 3.5.

  • expect. Scripts user activity and responses interactively to programs.
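For readers unfamiliar with the output of the hexdump tool listed above, its offset/hex/ASCII line format can be sketched in a few lines:

```python
def hex_dump(data, width=16):
    """Produce hexdump-style lines for a byte string: a hexadecimal
    offset, the bytes in hex, and an ASCII column with unprintable
    characters shown as dots."""
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:08x}  {hexpart:<{width * 3}} |{text}|")
    return lines
```

This format makes unprintable bytes in a migrated application's data files immediately visible next to their printable neighbors.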

Other Commonly Used Tools

This section lists other commonly used tools that are useful in testing and monitoring applications.

Monitoring Tools
  • Diskmon. This tool captures all hard disk activity, or it can act as a software disk activity light in your system tray. This tool is available for download at http://www.sysinternals.com/ntw2k/freeware/diskmon.shtml.

  • Filemon. This monitoring tool allows you to view all file system activity in real time. It works on Windows NT, Windows 2000, Windows Server 2003, and Windows XP, including the Windows XP 64-bit edition. This tool is available for download at http://www.sysinternals.com/ntw2k/source/filemon.shtml.

  • PMon. This is a Windows NT GUI/device driver program that monitors process and thread creation and deletion, as well as context swaps if it is running on a multiprocessing or checked kernel. This tool is available for download at http://www.sysinternals.com/ntw2k/freeware/pmon.shtml.

  • Portmon. You can monitor serial and parallel port activity with this advanced monitoring tool. It knows about all standard serial and parallel IOCTLs and even shows you a portion of the data being sent and received. This tool is available for download at http://www.sysinternals.com/ntw2k/freeware/portmon.shtml.

  • Regmon. This monitoring tool allows you to view all registry activity in real-time. This tool is available for download at http://www.sysinternals.com/ntw2k/source/regmon.shtml.

  • TCPView. You can view all the open TCP and UDP endpoints. TCPView even displays the name of the process that owns each endpoint. This tool is available for download at http://www.sysinternals.com/ntw2k/source/tcpview.shtml.

  • Task Manager. Task Manager provides run-time information on processes. The Task Manager tool is available as part of Windows.

Testing Tools
Source Test Tools
  • gdb. Used to view what is happening inside another program while it executes, or to examine the state of a program, including its processes and threads, at the moment it crashed.

  • Purify. Purify is a run-time error and memory leak detector. More information on Purify is available at http://www-306.ibm.com/software/sw-bycategory.

Tools for Win64

  • VTune Performance Analyzer. Intel VTune Analyzers help locate and remove software performance bottlenecks by collecting, analyzing, and displaying performance data from the system-wide level down to the source level. More information on VTune Performance Analyzer is available at http://www.intel.com/software/products/vtune/.

  • Lint. A source code (C language) checker. Lint highlights possible problem areas with code successfully compiled by cc. Additional information is available at http://www.pdc.kth.se/training/Tutor/Basics/lint/index-frame.html.

Further Reading

For more information, refer to the resources referenced in the notes throughout this chapter.
