Chapter 9: Stabilizing Phase

This chapter covers the strategy suggested for stabilizing an application that has been migrated from UNIX to the Microsoft® Windows® operating system. The Stabilizing Phase involves testing the application for the expected functionality and improving the quality of the application to meet the acceptance criteria set for the project.

This chapter describes the objectives of testing in the Stabilizing Phase. It introduces testing processes, as well as methodology and tools that you can employ to test applications with different architectures. It includes a set of job aids that can be used to develop test checklists to define the actions that you must take to ensure that the solution has been adequately tested and approved before its release. These job aids are specific to different architectures and also provide information on tuning the applications.

On This Page

Goals for the Stabilizing Phase
Testing the Solution
Resolving Solution Defects
Conducting the Solution Pilot
Closing the Stabilizing Phase—Release Readiness Approved
Tuning
Testing and Optimization Tools

Goals for the Stabilizing Phase

The primary goal of the Stabilizing Phase is to improve the quality of the solution so that it meets the acceptance criteria and can be released to the production environment. During this phase, the team tests the feature-complete migrated application by subjecting it to various tests such as user acceptance testing (UAT), regression testing, and bug tracking based on the application’s requirements. The resulting build must demonstrate that it meets the defined quality and performance level and be ready for full production deployment.

Testing during the Stabilizing Phase is an extension of the testing conducted during the Developing Phase. Testing in the Stabilizing Phase covers usage and operation of the application under realistic environmental conditions. Test plans include a comparison of the migrated application’s functionality with that provided by the original application. Test plans must also include test cases for the new features added to the application.

After the build is stabilized, the solution is deployed. The Stabilizing Phase ends with the Release Readiness Approved Milestone, indicating that the team and customers agree that all outstanding issues have been addressed.

Major Tasks and Deliverables

Table 9.1 describes the tasks that must be completed during the Stabilizing Phase and lists the roles responsible for achieving them.

Table 9.1. Major Stabilizing Phase Tasks and Owners

  • Testing the solution (Test team). The team executes the test plans that were created during the Planning Phase and enhanced and executed during the Developing Phase. Testing includes comparing the test results of the parent application with those of the migrated application, as well as testing the application from different perspectives.

  • Resolving solution defects (Development team and Test team). The team triages the identified defects and resolves them. New tests are developed to reproduce issues reported from other sources, and the new test cases are integrated into the test suite.

  • Conducting the solution pilot (Release Management team). The team sets up the deployment environment and moves the migrated application from the development area to a staging area, where the solution is tested with actual users and real scenarios before it is deployed. The solution pilot is conducted before the Deploying Phase starts.

  • Closing the Stabilizing Phase (Project team). The team documents the results of the tasks performed in this phase and solicits management approval at the Release Readiness Approved Milestone meeting.

Table 9.2 lists the tasks described in Table 9.1, but focuses on the tasks from the perspective of the team roles. The primary team roles driving the Stabilizing Phase are Test and Release Management.

Table 9.2. Role Cluster Focuses and Responsibilities in the Stabilizing Phase

  • Product Management. Execute the communications plan; launch the test phase.

  • Program Management. Track the project; triage bugs.

  • Release Management. Prepare for the deployment of the application; set up the production environment.

  • Development team. Triage bugs and resolve them; optimize code; reconfigure hardware or services.

  • User Experience. Stabilize user documentation and training materials.

  • Test team. Define test goals and generate a master test plan (MTP); generate the build and triage plan; generate the detailed test plan (DTP); review the DTP and detailed test cases (DTCs); track the test schedule; review bugs entered in the bug-tracking tool and monitor their status during triage meetings; generate weekly status reports; escalate issues that block progress, review impact analysis, and generate the change management document; ensure that the appropriate level of testing is achieved for each release; lead build acceptance test (BAT) execution; and execute test cases and generate test reports.

Testing the Solution

This section describes the testing activities performed in the Stabilizing Phase. Because all features and functions of the solution are now complete and all solution elements have been built, testing is performed on the solution as a whole, not just on individual components. The testing that began during the Developing Phase, according to the test plan created during the Planning Phase, continues with further testing, tracking, documentation, and reporting activities during the Stabilizing Phase. This mainly involves user acceptance testing (UAT) and regression testing, which the following subsections explain in detail.

User Acceptance Testing

The emphasis of user acceptance testing (UAT) during the Stabilizing Phase is on ensuring that the migrated solution meets the business needs. UAT is performed on a collection of business functions in a production environment after functional testing is complete. It is the final stage in the testing process before the system is accepted for operational use, and it involves testing the system with data supplied by the actual user or customer instead of the simulated data developed as part of the testing process. UAT helps to validate the solution against the overall user requirements and also determines the release readiness status of the system. Running a pilot for a select set of users helps to identify areas where users have trouble understanding, learning, and using the solution.

For migration projects, UAT involves testing the migrated application and identifying its defects. These defects are addressed, and a regression test is conducted for each fixed defect to ensure that the fix does not break any other functionality of the migrated application. The UAT summary confirms that the solution meets the customer’s acceptance criteria, thereby furthering customer acceptance of the solution.

Regression Testing

Regression testing refers to retesting previously tested components and functionality of the system to ensure that they function properly even after a change has been made to parts of the system. For migration projects, this is the most important class of tests. As defects are discovered in a component, modifications should be made to correct them. This may require retesting of other components or the entire solution.

Regression testing helps in the following areas:

  • To ensure that no new problems are introduced and that the operational performance has not been degraded because of modifications.

  • To ensure that the effects of the changes are transparent to other areas of the application and other components that interact with the application.

  • To allow the original test data and test cases from other testing activities to be reused, with only minor modifications, against the changed system.
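
As a rough illustration of regression testing in a migration project, the following sketch runs the original and migrated builds against the same test inputs and compares their captured output. The binary names (legacy_app, migrated_app) and the input file names are placeholders invented for this example, not part of any real project.

#include <cstdlib>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Read a whole file into a string for comparison.
static std::string ReadFile(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::ostringstream buf;
    buf << in.rdbuf();
    return buf.str();
}

int main() {
    // Hypothetical test inputs; a real suite would enumerate its own cases.
    const std::vector<std::string> inputs = { "case1.txt", "case2.txt" };
    int failures = 0;

    for (const auto& input : inputs) {
        // Run both builds on the same input, capturing their output.
        std::system(("legacy_app < " + input + " > expected.out").c_str());
        std::system(("migrated_app < " + input + " > actual.out").c_str());

        if (ReadFile("expected.out") != ReadFile("actual.out")) {
            std::cout << "FAIL: " << input << " (output differs)\n";
            ++failures;
        } else {
            std::cout << "PASS: " << input << "\n";
        }
    }
    return failures == 0 ? 0 : 1;
}

A full suite would also compare exit codes and any generated files, but the structure is the same: any divergence from the parent application’s behavior is flagged as a regression.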

Resolving Solution Defects

To resolve defects, you must reproduce and test them in the test environment, and each reproduced defect should be tracked for its status and severity. An important aspect of such tests is test tracking and reporting, which occurs at frequent intervals during the Developing and Stabilizing Phases. During the Stabilizing Phase, this reporting is driven by the bug count. Regular communication of the test status to the team and other key stakeholders ensures that the project runs smoothly. After a defect is fixed, the corresponding test cases and test data should be updated and integrated into the test suite.
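
The following minimal sketch shows the kind of record that such tracking implies: each defect carries a status and a severity, and the triage view lists active defects most severe first. The field names and sample bugs are invented for illustration and do not reflect any particular bug-tracking tool.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

enum class Severity { Critical = 1, Major, Minor };
enum class Status   { Active, Resolved, Closed };

struct Defect {
    int         id;
    std::string title;
    Severity    severity;
    Status      status;
};

int main() {
    // Invented sample defects for illustration only.
    std::vector<Defect> bugs = {
        { 101, "Crash on startup",       Severity::Critical, Status::Active   },
        { 102, "Wrong date format",      Severity::Minor,    Status::Resolved },
        { 103, "Slow report generation", Severity::Major,    Status::Active   },
    };

    // Triage view: active defects only, most severe first.
    std::sort(bugs.begin(), bugs.end(),
              [](const Defect& a, const Defect& b) { return a.severity < b.severity; });
    for (const auto& d : bugs) {
        if (d.status == Status::Active) {
            std::cout << d.id << " (severity " << static_cast<int>(d.severity)
                      << "): " << d.title << "\n";
        }
    }
    return 0;
}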

Bug Convergence

Bug convergence is the point at which the team makes visible progress against the active bug count. At bug convergence, the rate of bugs resolved exceeds the rate of bugs found, thus the actual number of active bugs decreases. After bug convergence, the number of bugs should continue to decrease until the zero bug bounce task, as explained in the next sections.
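
The convergence point can be read directly off the daily found/resolved counts, as the following sketch shows with invented sample data:

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    // Invented daily counts of bugs found and resolved.
    std::vector<int> found    = { 12, 10, 9, 7, 4, 3 };
    std::vector<int> resolved = {  5,  6, 8, 9, 8, 7 };

    int active = 0;
    for (std::size_t day = 0; day < found.size(); ++day) {
        active += found[day] - resolved[day];
        std::cout << "day " << day + 1 << ": active bugs = " << active;
        if (resolved[day] > found[day])
            std::cout << "   <- resolve rate exceeds find rate";
        std::cout << "\n";
    }
    return 0;
}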

Interim Milestone: Bug Convergence

Bug convergence tells the team that most of the bugs have been addressed and that the rate of bugs resolved is higher than the rate of new bugs found. This can be considered an interim milestone, after which the migrated application is a candidate for zero bug bounce verification.

Zero Bug Bounce

Zero bug bounce is the point in the project when development finally catches up to testing and there are no active bugs for the moment. After zero bug bounce, the number of bugs should continue to decrease until the product is sufficiently stable for the team to build the first release candidate.

Interim Milestone: Zero Bug Bounce

Achieving zero bug bounce is a clear sign that the solution is close to becoming a stable release candidate.

Release Candidates

After zero bug bounce is first achieved, a series of release candidates is prepared for release to the pilot group. Each release candidate is marked as an interim milestone.

Guidelines for declaring a build as a release candidate include the following:

  • Each release candidate has all the required elements to qualify for release to production.

  • The test period that follows determines whether a release candidate is ready to release to production or if the team must generate a new release candidate with appropriate fixes.

  • Testing the release candidates, carried out internally by the team, requires highly focused, intensive efforts and concentrates heavily on discovering critical bugs.

Interim Milestone: Release Candidate

As each new release candidate is built, there should be fewer bugs reported, classified, and resolved. Each release candidate marks significant progress in the team’s approach toward deployment. With each new candidate, the team must focus on maintaining tight control over quality.

Interim Milestone: Preproduction Test Complete

Eventually, a release candidate is prepared that contains no known defects; after this has occurred, no defects should be found within the isolated staging environment. At this stage, all testing that can be done before putting the migrated component into production has been completed.

Conducting the Solution Pilot

This section describes the best practices to adopt when conducting a pilot of the migrated application, the points to consider while the pilot runs, and how to decide on the next steps after the pilot.

A pilot release is a deployment into a subset of the live production environment or user group. During the pilot, the team tests as much of the entire solution as possible in a true production environment. Depending on the context of the project, the pilot can take various forms:

  • In an enterprise, a pilot can be a group of users or a set of servers in a data center.

  • For migration projects, the pilot might involve testing the most demanding application or database that is being migrated with a sophisticated group of users who can provide helpful feedback.

The common element in all piloting scenarios is testing under live conditions. The pilot is not complete until the team ensures that the solution is viable in the production environment and that the solution is ready for deployment.

Some of the best practices that should be followed when conducting a pilot are:

  • Before beginning a pilot, the team and the pilot participants must clearly identify and agree upon the success criteria for the pilot. These should map back to the success criteria for the development effort.

  • Any issues identified during a pilot must be resolved either by further development, by documenting resolutions and workarounds for the installation team and production support staff, or by incorporating them as supplemental material in training or Help documentation.

  • Before the pilot is started, a support structure and an issue-resolution process must be in place. This may require that the support staff receive training in the application area that is being piloted.

  • In order to determine any issues and confirm that the deployment process will work, it is necessary to implement a trial run or a rehearsal of all the elements of the deployment prior to the actual deployment.

After you collect and evaluate the pilot data, select the next strategy based on the findings of that analysis. The next strategy could be one of the following:

  • Stagger forward. Deploy a new release to the pilot group.

  • Roll back. Execute the rollback plan and revert the pilot group to the stable state it was in before the pilot started.

  • Suspend. Suspend the entire pilot.

  • Fix and continue. If you find an issue during the pilot, fix the issue and continue with the next steps.

  • Proceed. Advance to the Deploying Phase.

After the pilot has been completed, the pilot team must prepare a report detailing each lesson learned and how new information was incorporated and issues were resolved.

Interim Milestone: Pilot Complete

This milestone signifies that the pilot has been successfully completed and that the team is ready to proceed to the Deploying Phase.

Closing the Stabilizing Phase—Release Readiness Approved

The Stabilizing Phase culminates with the Release Readiness Approved Milestone. The team builds a release candidate with all major defects fixed, as required by the quality policy of the organization. All rounds of testing must be completed before the migrated component is moved into production. When all test plans have been executed and all test cases satisfied, the migrated application is ready for the production environment, pending formal sign-off approving the release.

Key stakeholders, typically representatives of each team role and any important customer representatives who are not on the project team, signal their approval of the milestone by signing or initialing a document stating that the solution is complete and approved for release. The sign-off document becomes a project deliverable and is archived for future reference.

The performance of the application following deployment in the production environment is a key criterion for judging the success of the migration. The following sections will help you optimize the performance of the application after deployment and introduce tools that can assist.

Tuning

This section discusses tuning of the solution in detail, including how to performance-tune the migrated application and how to scale the application up and out. In addition, the section discusses multiprocessor considerations and network utilization. You can use this information to identify the parameters that affect application performance and the steps to consider when scaling applications.

Performance Tuning

Performance management starts with gathering a baseline of data that indicates what system performance should look like. After you establish a baseline, use it to evaluate the performance of the application. Performance problems typically do not become apparent until the application is placed under an increased load.

Measuring the performance of an application under ever-increasing loads determines the scalability of that application. When performance falls below the stated minimum performance requirements, you have reached the limit of the application’s scalability; a sketch of this measurement follows. For more information about scaling, refer to the "Scaling Up and Scaling Out" section later in this chapter.
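
The following sketch illustrates one way to probe that limit: time a stand-in workload under increasing concurrency and watch for the point where throughput stops growing. The workload function is hypothetical; a real test would drive the migrated application itself.

#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

// Stand-in for one unit of application work (hypothetical workload).
static void HandleRequest() {
    volatile double x = 0;
    for (int i = 0; i < 2000000; ++i) x += i * 0.5;
}

int main() {
    for (int clients = 1; clients <= 16; clients *= 2) {
        auto start = std::chrono::steady_clock::now();

        std::vector<std::thread> load;
        for (int i = 0; i < clients; ++i) load.emplace_back(HandleRequest);
        for (auto& t : load) t.join();

        double sec = std::chrono::duration<double>(
                         std::chrono::steady_clock::now() - start).count();
        // When requests/sec stops growing, this configuration has
        // reached its scalability limit.
        std::cout << clients << " concurrent clients: "
                  << clients / sec << " requests/sec\n";
    }
    return 0;
}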

Performance tuning can be done in the following ways:

  • Tuning the computer hardware by adding more memory, upgrading CPUs, adding disk controllers, or upgrading network controllers. This is often the most efficient approach because it can improve application performance without changes to source code.

  • Rearchitecting the application to remove bottlenecks, such as poor threading or loops that use too much CPU time. This step also helps considerably in performance tuning.

  • Tuning operating system parameters, which involves adjusting the size of the paging file and tweaking network stack parameters.

  • Tuning the configurations on a database server, application server, or Web server.

In UNIX, performance is monitored using kernel-level instrumentation, along with rudimentary tools for monitoring CPU, disk, and memory usage. Windows Server 2003 is designed to expose a great deal of performance data. Tools such as Windows Performance Monitor (PerfMon) can be used to view and export detailed information about processor, memory, disk, and network usage. Performance Monitor support is integrated throughout Windows, and administrators can gather a variety of performance data from many computers simultaneously.
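
The same counters that Performance Monitor displays can also be read programmatically through the Performance Data Helper (PDH) API. The following sketch samples total processor usage; it compiles with the Visual C++ compiler and links against pdh.lib.

#include <windows.h>
#include <pdh.h>
#include <iostream>
#pragma comment(lib, "pdh.lib")

int main() {
    PDH_HQUERY   query;
    PDH_HCOUNTER counter;

    PdhOpenQuery(NULL, 0, &query);
    PdhAddCounterW(query, L"\\Processor(_Total)\\% Processor Time", 0, &counter);

    PdhCollectQueryData(query);   // first sample primes the counter
    Sleep(1000);                  // rate counters need two samples
    PdhCollectQueryData(query);

    PDH_FMT_COUNTERVALUE value;
    PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value);
    std::cout << "CPU usage: " << value.doubleValue << "%\n";

    PdhCloseQuery(query);
    return 0;
}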

UNIX kernels tend to have many configurable parameters that can be fine-tuned for specific applications. By contrast, the Windows kernel is largely self-tuning: the virtual memory, thread scheduling, and I/O subsystems all dynamically adjust their resource usage and priority to maximize throughput. The difference between the two approaches is that UNIX kernel parameters can be tweaked for maximum advantage in a benchmark, even if those tweaks hurt real-world performance, whereas Windows lets the kernel tune itself for whatever load is placed on it.

Note
More information on improving performance is available at
https://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/fastmanagedcode.asp.
More information on writing high-performance managed applications is available at https://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/highperfmanagedapps.asp.

Scaling Up and Scaling Out

Scalability is a measure of how easy it is to modify the application infrastructure and architecture to meet variances in utilization. As with other application capabilities, the decisions you make during the design and early coding phases largely dictate the scalability of your application.

Application scalability requires a balanced partnership between two distinct domains: software and hardware. Because scalability is not a design concern of stand-alone applications, the applications discussed here are distributed applications.

Scaling up involves achieving scalability with the use of better, faster, and more expensive hardware to move the processing capacity limit from one part of the computer to another. Scaling up includes adding more memory, adding more or faster processors, or just migrating the application to a more powerful, single computer. Typically, this method allows for an increase in capacity without requiring changes to source code. However, adding CPUs does not add performance in a linear fashion. Instead, the performance gain curve slowly tapers off as each additional processor is added.

Scaling out distributes the processing load across more than one server by dedicating several computers to a common task, which also increases the fault tolerance of the application. However, scaling out presents a greater management challenge because of the increased number of computers.

Developers and administrators use a variety of load-balancing techniques to scale out with the Windows platform. Load balancing allows an application site to scale out across a cluster of servers, making it easy to add capacity by adding replicated servers. It provides redundancy, giving the site failover capabilities so that it remains available to users even if one or more servers fail or are taken down.

Scaling out provides a method of scalability that is not hampered by hardware limitations. Each additional server provides a near linear increase in scalability.

The key to successfully scaling out an application is location transparency. If any of the application code depends on knowing which server is running the code, location transparency has not been achieved and scaling out will be difficult. This situation requires code changes to scale out an application from one server to many, which is seldom an economical option. If you design the application with location transparency in mind, scaling out becomes an easier task.
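
The following sketch shows location transparency in a minimal form: the server list is read from a configuration file rather than hard-coded, and requests are dispatched round-robin, so capacity can be added by editing the file instead of the code. The file name servers.cfg and its one-host-per-line format are assumptions made for this illustration.

#include <cstddef>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

class ServerPool {
public:
    explicit ServerPool(const std::string& configPath) {
        std::ifstream cfg(configPath);
        std::string host;
        while (std::getline(cfg, host))
            if (!host.empty()) hosts_.push_back(host);
    }
    bool Empty() const { return hosts_.empty(); }
    // Round-robin: each call returns the next server in the list.
    const std::string& Next() { return hosts_[next_++ % hosts_.size()]; }
private:
    std::vector<std::string> hosts_;
    std::size_t next_ = 0;
};

int main() {
    ServerPool pool("servers.cfg");   // hypothetical configuration file
    if (pool.Empty()) {
        std::cerr << "no servers configured\n";
        return 1;
    }
    for (int i = 0; i < 4; ++i)
        std::cout << "dispatching request to " << pool.Next() << "\n";
    return 0;
}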

Note
More information on scaling is available at
https://msdn.microsoft.com/library/default.asp?url=/library/en-us/vsent7/html/vxconmanageabilityoverview.asp.
Microsoft Application Center 2000 reduces the complexity and the cost of scaling out. More information on "Application Center 2000" is available at
https://www.microsoft.com/applicationcenter/default.mspx.
More information on scaling network-aware applications is available at
https://msdn.microsoft.com/library/default.asp?url=/msdnmag/issues/1000/Winsock/toc.asp.

Multiprocessor Considerations

Application performance can also be improved by having multiple processors perform the same task; you can distribute the processing load across several processors.

Computationally intensive tasks are characterized by intensive processor usage with relatively few I/O operations. The ongoing challenge with these applications is to improve performance, which you can do with a faster computer, a more efficient algorithm, an improved implementation, or more processors. Tuning techniques can also help improve performance.

Using more processors can mean taking advantage of a symmetric multiprocessing (SMP) computer or using distributed computing across multiple networked computers. However, adding CPUs does not add performance in a linear fashion; instead, the performance gain curve slowly tapers off as each additional processor is added, because in SMP configurations each additional processor incurs system overhead. After you have upgraded each hardware component to its maximum capacity, you will eventually reach the real limit of the processing capacity of the computer. At that point, the next step is to move to another computer.
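
This taper is commonly modeled by Amdahl’s law, which this chapter does not name but which makes the effect concrete: if a fraction p of the work can be parallelized, n processors give a speedup of 1 / ((1 - p) + p / n). The following sketch tabulates the curve for an assumed p of 0.9.

#include <iostream>

int main() {
    const double p = 0.90;  // assume 90% of the work parallelizes
    for (int n = 1; n <= 16; n *= 2) {
        double speedup = 1.0 / ((1.0 - p) + p / n);
        std::cout << n << " CPUs -> speedup " << speedup << "x\n";
    }
    return 0;
}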

Multiprocessor optimization can be achieved by making use of threads.
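
As a minimal sketch of thread-based multiprocessor optimization, the following example splits a computationally intensive summation across one worker thread per available processor. The workload and problem size are invented for illustration.

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const unsigned cpus = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t total = 100000000;   // invented problem size

    std::vector<double> partial(cpus, 0.0);
    std::vector<std::thread> workers;

    // Each thread sums its own slice; no shared writes, so no locking.
    for (unsigned t = 0; t < cpus; ++t) {
        workers.emplace_back([&partial, t, cpus, total] {
            std::size_t begin = total / cpus * t;
            std::size_t end   = (t == cpus - 1) ? total : total / cpus * (t + 1);
            double sum = 0.0;
            for (std::size_t i = begin; i < end; ++i) sum += 1.0 / (i + 1);
            partial[t] = sum;
        });
    }
    for (auto& w : workers) w.join();

    std::cout << cpus << " threads, result: "
              << std::accumulate(partial.begin(), partial.end(), 0.0) << "\n";
    return 0;
}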

Note   More information on multiprocessor optimizations is available at
https://msdn.microsoft.com/msdnmag/issues/01/08/Concur/.

Network Utilizations

Network resources, such as available bandwidth and latency, must be predicted and managed on computers and devices throughout the network.

Optimal network utilization is achieved through cooperation among the end nodes, switches, routers, and wide area network (WAN) links through which data must pass. Preferential treatment must be given to certain data as it traverses the network so that critical components can be serviced better during congestion. Tools are available that analyze network traffic and provide network statistics and packet information, helping you make better use of the network by identifying areas of congestion.

Quality of Service (QoS), an industry-wide initiative, achieves a more efficient use of network resources by differentiating between data subsets. Windows 2000 implements QoS by including a number of components that can cooperate with one another.

Note   More information on QoS on Windows is available at
https://msdn.microsoft.com/library/default.asp?url=/library/en-us/qos/qos/qos_start_page.asp.

Note   Network Monitor captures network traffic for display and analysis. More information on Network Monitor is available at https://msdn.microsoft.com/library/default.asp?url=/library/en-us/netmon/netmon/network_monitor.asp.

Note   Network Probe is another tool for traffic-level network monitoring and for analysis and visualization. More information on Network Probe is available at https://www.objectplanet.com/probe/.

Testing and Optimization Tools

This section lists some of the useful tools that can be used for testing and monitoring your applications.

Visual Studio .NET 2003 Tools

Microsoft Visual Studio® .NET 2003 includes tools for analyzing the performance of applications.

Platform SDK Tools

The Platform SDK includes debugging tools, file management tools, performance tools, and testing tools; these are available with the latest Platform SDK. The performance tools can be used to measure application performance and resolve some performance issues.
Other Commonly Used Tools

This section lists other commonly used tools that are useful in testing and monitoring applications.

Monitoring Tools
  • Diskmon. This tool captures all hard disk activity and can also act as a software disk activity light in your system tray. This tool is available for download at https://www.sysinternals.com/ntw2k/freeware/diskmon.shtml.

  • Filemon. This monitoring tool allows you to view all file system activity in real time. It works on all versions of Windows NT, Windows 2000, Windows Server 2003, and Windows XP, as well as the Windows XP 64-bit edition. This tool is available for download at https://www.sysinternals.com/ntw2k/source/filemon.shtml.

  • PMon. This is a Windows NT GUI/device driver program that monitors process and thread creation and deletion, as well as context swaps if it is running on a multiprocessing or checked kernel. This tool is available for download at https://www.sysinternals.com/ntw2k/freeware/pmon.shtml.

  • Portmon. You can monitor serial and parallel port activity with this advanced monitoring tool. It knows about all standard serial and parallel IOCTLs and even shows you a portion of the data being sent and received. This tool is available for download at https://www.sysinternals.com/ntw2k/freeware/portmon.shtml.

  • Regmon. This monitoring tool allows you to view all registry activity in real time. This tool is available for download at https://www.sysinternals.com/ntw2k/source/regmon.shtml.

  • TCPView. You can view all the open TCP and UDP endpoints. TCPView even displays the name of the process that owns each endpoint. This tool is available for download at https://www.sysinternals.com/ntw2k/source/tcpview.shtml.

  • Task Manager. Task Manager provides run-time information on processes. The Task Manager tool is available as part of Windows.

Testing Tools

The following tool is useful for testing on Win64:

  • VTune Performance Analyzer. Intel VTune analyzers help locate and remove software performance bottlenecks by collecting, analyzing, and displaying performance data from the system-wide level down to the source level. More information on the VTune Performance Analyzer is available at https://www.intel.com/software/products/vtune/.
