Chapter 18 - Stabilizing Phase

On This Page

Introduction and Goals
Testing the Solution
Piloting the Solution
Finalizing the Release

Introduction and Goals

During this phase, the test team verifies that the solution meets the defined quality levels and that the risk of bugs is eliminated or minimized. Any existing bugs should not affect critical functionality or performance. After the solution has been stabilized, it will be ready for deployment into the production environment. This chapter highlights the processes of stabilization as they relate to an Oracle on UNIX to Microsoft® SQL Server™ on Windows® migration project.

The main goals for the Stabilizing Phase include:

  • Improve the overall quality of the migrated solution and stabilize it for release.

  • Ensure that the solution meets the requirements of the project outlined in the Envisioning and Planning Phases.

  • Assemble all of the components of the solution and test the entire system before deployment.

  • Complete and validate documentation that is required for deployment, operations, and end users.

  • Evaluate and mitigate the risks involved in releasing the solution for deployment.

The deliverables produced during the Stabilizing Phase are listed in Table 18.1.

Table 18.1: Deliverables for the Stabilizing Phase

Task | Deliverables | Owner
Testing the solution | Release versions of source code, executables, release versions of scripts, and installation documentation | Test Role
Testing the solution | Release versions of end-user training materials and operations documentation | User Experience Role
Testing the solution | Release notes | Release Management
Bug tracking and reporting | Testing and bug reports | Test Role
Piloting the solution | Pilot review | Release Management
Piloting the solution | Project documents | Program Management
Piloting the solution | Sign-off document | Project team

The Stabilizing Phase consists of two major activities: testing the solution and piloting the solution. During testing, the entire ecosystem is evaluated, including the hardware, software, database, connectivity layer, and the application. As a result of the testing, bugs and issues will be tracked and resolved.

The second major activity of the Stabilizing Phase is piloting the solution to a select group of end users and deployment and operations personnel in the production environment. Pilot testing also helps anticipate and resolve issues that may occur during deployment. While piloting the solution is optional, this activity is highly recommended because components that were originally developed for another platform are being deployed into the Windows environment.

Testing the Solution

Testing is an iterative refinement process that is initiated every time there is a change to the solution (including bug fixes). Second and third iterations are common in data migrations involving complex data integrity rules. Application testing will vary based on the extent of the modifications needed to migrate the application. For example, when an application is ported, code coverage tests may be given limited importance because the code base has not changed from the existing application.

The test team should accumulate complete knowledge of the software's functionality and of the tests that should be performed to verify it. Often, the original design considerations and approach used to create the existing solution are not available to the migration team, which limits the effective knowledge of the test team. Testing should not be limited only to the parts that the developers have identified as affected by the migration. Changes in the back-end database and the environment can manifest themselves in unpredictable places.

The project team should reuse test cases from the existing solution. If none exist, new test cases can be created from the business requirements of the existing solution. Tests can then be run against both the source and target systems in the migration project and the results compared to identify deviations and bugs in the new solution.
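
As a simple illustration of such a comparison, the following T-SQL sketch (a minimal example that assumes a hypothetical Customers table) captures a row count and an aggregate checksum for a migrated table. An equivalent count and hash computed on the source system can be compared against these values; any deviation points to rows that warrant closer inspection.

    -- Row count and aggregate checksum for a migrated table (hypothetical table name)
    SELECT COUNT(*)                         AS row_count,
           CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS table_checksum
    FROM dbo.Customers;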

Logs and audit reports that document any defects associated with the applications need to be created and published to the entire team along with the tests performed. This is because the source of a bug and the place where it manifests itself may be different. For example, even though a problem may be detected in the application, the source may be a database configuration or a hardware setting.

Each iteration of testing helps make the next iteration more robust and complete. This process must continue until an iteration produces no exceptions. The number of tests completed and the defect rate are used to measure progress and schedule conformance.

A comprehensive series of tests is key to meeting the goals of the Stabilizing Phase. These tests provide assurances of the quality and stability of the solution. The following sections provide information about:

  • Best practices

  • Preparing for testing

  • Types of testing

  • Bug tracking and reporting

  • User Acceptance Testing (UAT) and signoff

This information will help you assure the quality and stability of the solution you have developed.

Best Practices

When testing a solution, two important best practices are clearly defining the success criteria and approaching testing with a zero-defect mindset.

Success Criteria

Judging whether a project has been successful is almost impossible without something to measure the project's results against. Success criteria, also called key performance indicators (KPIs), can be created by defining the conditions under which the proposed solution will achieve its goals. In a migration project, success can be gauged by the effectiveness of the new solution: does it effectively replace the existing solution in terms of features, performance, and function, based on the results from the test cases?

Note Though measured in the Stabilizing Phase, success criteria for a project are established during the Envisioning and Planning Phases.

Zero-defect Mindset

The concept of a zero-defect mindset should encompass the project team's commitment to producing the highest quality product possible. Each member is individually responsible for helping achieve the desired level of quality.

The zero-defect mindset does not mean that the deployed solution must be flawless; rather, it specifies a predetermined quality bar for the deliverables. Often, the project schedule is a determining factor in achieving a zero-defect solution. In situations where the schedule does not allow for complete testing, tests should be prioritized to ensure that all critical functionality is evaluated.

In addition, the zero-defect mindset concept should be carried throughout the dynamic life cycle of the solution. For example, as the database scales over time, additional tuning will be required to ensure optimum performance.

Preparing for Testing

Preparation for testing involves creating two key items:

  • Test environment

  • Bug tracking system

Each of these items is discussed under the following headings.

The Test Environment

The development and test plans provide a list of requirements that the test environment must meet, and the test environment should be set up accordingly. The test environment should be completely separate from the production environment. Ideally, it mirrors the production environment, although this is not always possible.

If the Development Phase is complete, the same hardware and software may be used to test the solution. If the needs of the testing environment are more demanding, additional resources may be required. In some situations, it may be necessary to scale the testing based on the available hardware.

In MSF, setting up the test environment starts toward the end of the Planning Phase. This is to make sure that a test environment is available to the development team while individual components are being developed. However, a full-scale test environment to verify all aspects of an integrated solution will only be required in the Stabilizing Phase.

Another consideration for the testing environment is ensuring that it is properly tuned before testing commences. Some recommended best practices for optimal tuning of SQL Server running under Windows Server 2003 include:

  • Format disk partitions using NTFS 5.0. This file system provides performance enhancements in Windows Server 2003.

  • Configure the SQL Server computer as a stand-alone server. If it is configured as a domain controller, additional resources are utilized and performance is reduced.

  • Set the Application Response setting to optimize for background services. This setting allows background services to run at a higher priority than foreground applications and is accessed through the System icon in the Control Panel. This setting will improve the performance of SQL Server.

  • Turn off additional security auditing. This will reduce I/O activities.

  • Set the size of PAGEFILE.SYS appropriately. Monitor the usage of this paging file and size it slightly larger than your observed peak needs.

  • Turn off unnecessary services. Review all services and determine which can be turned off.

  • Turn off unnecessary network protocols.

For more information on tuning the test environment, refer to https://www.microsoft.com/windowsserver2003/evaluation/performance/tuning.mspx.

Bug Tracking Solution

An effective bug tracking system is needed to make sure that bugs are identified and issues are not dropped until they have been completely resolved. Projects typically generate several hundred or even thousands of bugs, so it is imperative to have a robust bug tracking system in place to address these issues.

The bug tracking software for the project was selected during the Planning Phase. Using this software from the beginning of testing allows all bugs identified through the life cycle of the solution to be tracked in one location. This testing history can also be a useful reference for future releases after the solution has been deployed.

Bug categorization is an important consideration during the configuration process of the bug tracking solution. Variables required by the categorization may affect the configuration of new and existing bug tracking solutions. These variables include:

  • Repeatability. This is a variable that measures how repeatable the issue or bug is. Repeatability is the percentage of the time the issue or bug manifests itself.

  • Visibility. This is a variable that measures the situation or environment that must be established before the issue or bug manifests itself. For example, if the issue or bug occurs only when the user holds down the SHIFT key while right-clicking the mouse and viewing the File menu, the conditions are obscure and the visibility is correspondingly low.

  • Severity. This is a variable that measures how much impact the issue or bug will produce in the solution, in the code, or to the users. For example, a bug that causes the application to crash would rank higher than situations that allow the application to recover.

These three variables are estimated and assigned to each issue or bug by the project team and are used to derive the issue or bug priority.

Note The priority of a bug can be calculated by using the following formula:
(repeatability + visibility) * severity = priority
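
As an illustration only, the priority formula can be captured directly in the bug tracking store. The following sketch assumes a hypothetical BugReports table with 1-to-10 rating scales for the three variables; it is not part of any particular bug tracking product.

    -- Hypothetical bug table in which priority is derived from the three variables
    CREATE TABLE dbo.BugReports (
        BugID         int IDENTITY(1, 1) PRIMARY KEY,
        Title         varchar(200) NOT NULL,
        Repeatability tinyint      NOT NULL,  -- 1 (rarely reproducible) to 10 (always reproducible)
        Visibility    tinyint      NOT NULL,  -- 1 (obscure conditions) to 10 (occurs in common use)
        Severity      tinyint      NOT NULL,  -- 1 (cosmetic) to 10 (crash or data loss)
        Priority AS ((Repeatability + Visibility) * Severity)
    );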

Types of Testing

Often, the test environment is the first place where all of the separate components (which were unit tested in the Development Phase) are combined into a fully functional version of the solution. The first task is to ensure that the disparate components of the solution integrate properly. Next, ensure that the solution performs as expected. The next series of tests checks that the solution works properly under heavy workloads. The final set of tests checks the operational aspects of the solution.

The following types of testing are useful in an Oracle to SQL Server migration:

  • Integration testing. Does the solution work as a cohesive unit?

  • Performance testing. Does the solution meet the baseline requirements?

  • Stress testing. How does the solution react to stresses and workloads?

  • Scalability testing. How far will the solution scale? Can the system handle increased load by adding new hardware as required?

  • Operational testing. Do the operational aspects of the system perform as expected?

Each of these testing types is described under the following headings.

Integration Testing

The first level of testing in the Stabilizing Phase is integration testing, an iterative process in which separate components are combined into larger solutions until the system is complete. In integration testing, the focus is on the interfaces between components.

Integration testing proves that all areas of the system interface with each other correctly and that there are no gaps in communication. The final integration test proves that the entire system works as an integrated unit.

Integration testing will also reveal any issues with shared resources. For instance, if Pretty Good Privacy (PGP) encryption is used by more than one application, the applications could potentially share a single key instead of each having a separate key.

If server consolidation is a business goal for this solution, then it has to be performed at this stage. For more information on server consolidation, refer to https://www.microsoft.com/downloads/details.aspx?FamilyId=0F70695E-5D0B-4781-8966-84BE43216F9E&displaylang=en.

Resolving Integration Issues

The major reasons issues arise during integration testing are incompatibilities or inconsistencies in the design or implementation of the interfaces between components. In a migration project, such issues are bound to occur because of interoperation between applications, changes in command usage, and differences in protocols between the two platforms. Such issues should be logged and forwarded to the development team for resolution. Another commonly encountered issue is a resource shortage, because several components are being assembled for the first time. Additional resources, such as processor, memory, and storage, should be made available to complete the testing.

Performance Testing

Performance testing involves evaluating how well the solution meets the expected criteria. Testing for performance can be further divided into two subtypes:

  • Application performance testing

  • Hardware utilization performance testing

Application performance testing in a migration focuses on comparing various speed and efficiency factors between the existing solution and the migrated solution. This ensures that the migrated solution complies with the expected level of performance.

These key speed and efficiency factors include:

  • Throughput. Database throughput measures the total number of transactions that the server can handle in a given time frame. Baseline figures from the existing solution are needed for comparison. Performance testing must be executed using a workload that represents the type of operations that are most frequently performed in the production environment.

    Note For a detailed discussion of baselines, see Appendix C, "Baselining."

  • Response time. Response time measures the length of time required to return the first row of the result set.

Testing hardware performance in a cross-platform migration is recommended because additional adjustments may need to be made to the proposed solution. There are no reliable benchmarks that can provide equivalent performance statistics between the UNIX and Windows platforms. The results of the popular Transaction Processing Performance Council benchmarks (https://www.tpc.org/) can be used as a guideline. You should ask the hardware vendors for assistance in lab testing your solution to get more accurate numbers for the proposed hardware. This testing validates the hardware requirements for the solution.

While conducting performance tests, capture data about the utilization of resources such as CPU, memory, disk I/O, and network bandwidth. This is important because if your testing environment does not operate at the same scale as the production environment, bottlenecks in these resources may not be discovered until deployment. Resource utilization data, captured at various load levels, can be used to draw conclusions about resource requirements at production loads.
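
As a sketch of how such measurements might be captured on the SQL Server side, the following statements time a representative query and sample two server-level counters. The query and table name are hypothetical, and the sysperfinfo view applies to SQL Server 2000; later releases expose the same counters through other views.

    -- Report elapsed time and I/O for a representative workload query
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    SELECT TOP 100 *              -- hypothetical representative query
    FROM dbo.Orders
    ORDER BY OrderDate DESC;

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;

    -- Sample throughput and buffer cache counters during the test run
    SELECT counter_name, cntr_value
    FROM master.dbo.sysperfinfo
    WHERE counter_name IN ('Batch Requests/sec', 'Buffer cache hit ratio');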

Resolving Performance Issues

Performance problems can occur for several reasons: application code, database implementation, hardware configuration, software configuration, or resource availability. The testing team should be able to solve configuration and resource problems themselves, while issues with the application and database should be logged and forwarded to the development team along with any analysis and supporting evidence. Additional resources may have to be procured to solve resource-related hardware issues.

Performance tuning can be an ongoing series of refinements and improvements, and a large amount of information about performance tuning is available.

Stress Testing

Stress testing is performed to determine the load at which performance becomes unacceptable or the system fails. This involves loading the system beyond the levels it was designed for and checking for issues. When performing stress tests, new bugs or issues often surface because of the high stress and load levels. At a minimum, stress testing should load the system to the levels defined in the business goals.

If the test environment is scaled down in relation to the production environment, these limitations should be taken into account when interpreting the results.

For example, an application may fail when the number of simultaneous connections hits 100. This could be caused by a limitation in a variable used by the connection-handling code. Such problems may be encountered only during stress testing because resource consumption and the low-level implementation of the same functions differ between the UNIX and Windows platforms.
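
While the stress load is running, it can be useful to watch how many connections the server is actually servicing. The following is a minimal sketch, assuming SQL Server 2000, where the low server process IDs are generally reserved for system processes.

    -- Count active user connections during the stress run
    SELECT COUNT(*) AS user_connections
    FROM master.dbo.sysprocesses
    WHERE spid > 50;   -- spids 50 and below are generally reserved for system processes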

Resolving Stress Issues

Stress testing should be performed only after all issues encountered during performance testing have been fixed. Bugs encountered during stress testing can be caused by the application, the hardware configuration, or resource availability. Issues or bugs found in the application have to be sent to the development team with any pertinent information. If the issues relate directly to hardware or hardware resources, they can be solved by reconfiguration or by adding resources.

For example, by default SQL Server is not configured to take advantage of more than 2 GB of memory. If your hardware contains additional memory, this issue can be resolved by configuring SQL Server to take advantage of the server's available memory.
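
A minimal sketch of this configuration follows; the 6144 MB value is illustrative only, and enabling AWE also requires the Windows "Lock pages in memory" right (and the /PAE boot option on 32-bit systems) plus a service restart before it takes effect.

    -- Expose advanced options so the memory settings are visible
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Allow SQL Server to address memory above 2 GB through AWE
    EXEC sp_configure 'awe enabled', 1;
    RECONFIGURE;

    -- Cap the memory SQL Server may use, in MB (illustrative value)
    EXEC sp_configure 'max server memory', 6144;
    RECONFIGURE;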

Microsoft offers two utilities, Read80Trace and OSTRESS, to assist in stress testing SQL Server. To learn more or download these utilities, refer to https://support.microsoft.com/?kbid=887057.

Scalability Testing

Scalability testing is performed to determine whether the solution scales to handle an increasing workload. The workload may increase in size, as in very large databases (VLDB), or in activity, such as the number of transactions. Activity scalability is measured in terms of, for example, the number of user connections, requests, or reports. Overall scalability also depends on the hardware and the application. The change in the application's throughput as the load increases is a measure of its scalability. Throughput may also be measured with increased resources in addition to the increased load.

Scalability testing, while similar to stress testing, provides additional information to assist in future solution expansion plans. Scalability testing is conducted to record how well the migrated solution will scale or increase throughput as the user workload increases. It differs from stress testing because scalability testing will generally load the solution far past the minimum load levels defined in the Planning Phase.

If additional hardware is available, it is worthwhile to determine whether exceeding the limits of the current solution requires a simple addition of hardware or a complete redesign of the system.

Resolving Scalability Issues

Issues of scaling should be documented as constraints of the system. If it is imperative that the entire system scale to a certain point based on business goals, but the solution does not meet these goals, then the resources (mostly hardware) have to be re-evaluated by experts. In most cases, vendors can provide support and information in this area.

Operational Testing

Operational testing is required to ensure that day-to-day functionality and maintainability are developed and tested. This type of testing includes items such as:

  • Backup routines (see the sample backup script after this list)

  • Database maintenance tasks and schedules

  • Documentation and processes developed for ongoing support

  • Alert and monitoring processes

  • Disaster recovery plans
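
The following is a minimal sketch of a backup routine of the kind that operational testing should exercise; the Sales database name and backup paths are illustrative only.

    -- Full database backup, log backup, and a verification pass (illustrative names and paths)
    BACKUP DATABASE Sales
        TO DISK = 'D:\SQLBackups\Sales_full.bak'
        WITH INIT;

    BACKUP LOG Sales
        TO DISK = 'D:\SQLBackups\Sales_log.trn';

    RESTORE VERIFYONLY
        FROM DISK = 'D:\SQLBackups\Sales_full.bak';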

If the solution is not going to be piloted, operational testing can be expanded to ensure that the operations team is comfortable with the processes and procedures needed to maintain the system.

Resolving Operational Issues

Issues during operational testing normally arise from incomplete documentation of the system requirements and configuration with respect to components that are found only in the production environment. In most cases, the issues will have to be handled by the operations staff, who may seek information and expertise from the project team.

Bug Tracking and Reporting

There are several important interim milestones in the iterative process of testing and refining the solution before release. The interim milestones guide the tracking and testing process. These milestones include:

  • Bug convergence

  • Zero bug bounce

  • Release candidates

  • Golden release

These milestones are discussed under the following headings.

Bug Convergence

Bug convergence is the point at which the team makes visible progress against the active bug count. It is the point at which the rate of bugs that are resolved exceeds the rate of bugs that are found.

Figure 18.1 illustrates bug convergence.

Figure 18.1 Bug convergence graph

Because the bug rate will still vary — even after it starts its overall decline — bug convergence usually manifests itself as a trend instead of a fixed point in time. After bug convergence, the number of bugs should continue to decrease until the zero bug bounce.

Zero Bug Bounce

Zero bug bounce (ZBB) is the point in the project when development resolves all the bugs raised by the Test role and there are no active bugs — for the moment. Figure 18.2 illustrates ZBB.

Figure 18.2 Zero bug bounce

After ZBB, the bug peaks should become noticeably smaller and should continue to decrease until the product is stable enough to release.

Careful bug prioritization is vital because every bug that is fixed creates the risk of introducing a new bug or a regression. Achieving ZBB is a clear sign that the team is in the final stage as it progresses toward a stable product.

Note New bugs will certainly be found after this milestone is reached. But it does mark the first time that the team can honestly report that there are no active bugs — even if it is only temporary. This can help the team maintain focus on a zero-defect mindset.

Release Candidates

After zero bug bounce is first achieved, a series of release candidates are prepared for release to the pilot group. Each of these releases is marked as an interim milestone. The release candidates are made available to the pilot group so that they can test them. The users provide feedback to the project team, and the project team in turn continues to improve the product and resolve bugs that appear during the pilot. As each new release candidate is built, there should be fewer bugs to report, prioritize, and resolve. The pilot group is discussed in more detail in the "Piloting the Solution" section later in this chapter.

Golden Release

Golden release is the release of the product to production. Golden release is a milestone in the Stabilizing Phase that is identified by the combination of zero-defect and success criteria metrics. At golden release, the team must select the release candidate that they will release to production. The team uses the testing data that is measured against the zero-defect and success criteria metrics to make this selection.

User Acceptance Testing and Signoff

User acceptance testing (UAT) is an additional testing process to determine if the solution meets the customer acceptance criteria. Because this is a migration project, the database and the existing solution have already passed these criteria. In most cases, only the solution's operating environment has changed.

In migration projects, UAT should test whether the new solution produces the same results from use cases as the existing solution. Whenever possible, use acceptance tests from the existing solution as a base.

UAT also covers the database. Part of the testing should ensure that the client applications can access the SQL Server database.
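
A minimal connectivity smoke test, run from each client environment against the migrated database, might look like the following sketch; it simply confirms the server, database, and login context that the client actually reaches.

    -- Confirm basic connectivity and session context from a client connection
    SELECT @@SERVERNAME  AS server_name,
           DB_NAME()     AS current_database,
           SUSER_SNAME() AS login_name,
           @@VERSION     AS server_version;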

User signoff is obtained when the users agree that the solution meets their needs. The signoff is proof that the solution meets the user acceptance criteria for their business requirements; it indicates that the solution conforms to the requirements of the end user (functionality) and the enterprise (performance), and that the solution is ready to be deployed into production.

Piloting the Solution

A pilot program is a test of the solution in the production environment, and a trial of the solution by installers, systems support staff, and end users. The primary purposes of a pilot are to demonstrate that the design works in the production environment as expected and that it meets the organization's business requirements. Pilot deployments are often characterized by a reduced but key feature set of the system or a smaller end-user group.

The pilot is the last major step before a full-scale deployment. All testing must be completed before the pilot begins. The pilot provides the opportunity to integrate and test pieces of the production environment that have no equivalent in the test environment.

The pilot also provides an opportunity for users to provide feedback about how the solution works. This feedback must be used to resolve any issues or to create a contingency plan. The feedback can help the team determine the level of support that they are likely to need after full deployment. Some of the feedback can also contribute to the next version of the application.

Note The success of the pilot contributes heavily to the deployment schedule. Issues discovered during the pilot can delay deployment until the problems are resolved.

Every migration situation is unique, and some scenarios may not require a pilot program. The pilot helps minimize the risks involved with the Deployment Phase. For instance, if the migration involves a Perl application that is ported to run natively on the Windows platform, the differences within the application could be minimal and, depending on other mitigating factors, a decision may be made not to pilot the solution.

A pilot program is highly recommended in situations where any of the following instances apply:

  • The deployment plan is highly complex and the deployment team requires the experience of the pilot deployment.

  • The solution is prominent or critical to the organization. If the rollout must go exactly as planned, a pilot will provide additional assurance.

  • There is a large difference between the production and test environments.

  • There are elements in the production environment that cannot be adequately verified in the test environment.

Preparing for the Pilot

A pilot deployment needs to be rehearsed to minimize the risk of disruption for the pilot group. At this stage, the development team is performing last-minute checks and ensuring that nothing has changed since pre-production testing. The following tasks need to be completed before starting the pilot:

  • The development team and the pilot participants must clearly agree on the success criteria for the pilot. In a migration project, the main success criterion is that the new solution effectively replaces the existing solution.

  • A support structure and issue resolution process must be in place. This process may require that the support staff be trained. The procedures used for resolution during a pilot can vary significantly from those used during the deployment and when in full production.

  • To identify any issues and confirm that the deployment process will work, it is necessary to implement a trial run or a rehearsal of all the elements of the deployment.

  • It is necessary to obtain customer approval of the pilot plan. Work on the pilot plan starts early during the Planning Phase so that the communication channels are in place and the participants are prepared by the time the test team is ready to deploy the pilot.

  • Ensure that the plan effectively mirrors the deployment process. For instance, if the migration solution is scheduled to be deployed in phases, the entire process should be replicated for the pilot.

    Note It is important to remember that a pilot program tests and validates the deployment process as well as the solution.

A pilot plan should include the following:

  • Scope and objectives

  • Participating users, locations, and contact information

  • Training plan for pilot users

  • Support plan for the pilot

  • Known risks and contingency plans

  • Rollback plan

  • Schedule for deploying and conducting the pilot

Conducting the Pilot

Conducting the pilot involves deploying the applications and databases that have been chosen to be part of the pilot. The golden release of the solution is used for pilot testing against an audience consisting of actual users using real-world scenarios. When the pilot is conducted in a production environment, care has to be taken to ensure that the existing application and databases are not jeopardized. Hence, adequate support has to be provided while conducting the pilot to monitor and fix any issues that arise.

Conducting a pilot also includes testing the accuracy of supporting documentation, training, and other non-code elements, such as cutover and fallback procedures. Any changes made to these documents during the pilot have to be noted and the documentation updated accordingly.

Note Ultimately, the pilot leads to a decision to either proceed with a full deployment or to delay deployment so that issues can be resolved.

Evaluating the Pilot

At the end of the pilot, its success is evaluated to determine whether deployment should begin. The project team then needs to decide whether to continue the project beyond the pilot.

It is important to obtain information about both the design process and the deployment process. Review what worked and what did not work so that it is possible to revise and refine the plan before deployment. Examples of information to be gathered include:

  • Training required for using the solution

  • Rollout process

  • Support required for the solution

  • Communications

  • Problems encountered

  • Suggestions for improvements

The feedback is used to validate that the delivered design meets the design specification and the business requirements. After the data is evaluated, the team must make a decision. The team can select one of the following strategies:

  • Stagger forward. Prepare another release candidate and release it to the original pilot group, then to additional groups. The release to more than one group might have been part of the original plan or might have been a contingency triggered by an unacceptable first pilot.

  • Roll back. Return the pilot group to their pre-pilot state.

  • Suspend the pilot. Put the solution on hold or cancel it.

  • Patch and continue. Fix the build that the pilot is running and continue.

  • Proceed to the Deploying Phase. Move forward to deploy the pilot build to the full live production environment.  

Finalizing the Release

The Stabilizing Phase culminates in the Release Readiness Approved Milestone. This milestone occurs when the team has addressed all outstanding issues and has released the solution, making it available for full deployment. This milestone is the opportunity for customers and users, operations and support personnel, and key project stakeholders to evaluate the solution and identify any remaining issues that they need to address before beginning the transition to deployment and, ultimately, release.

After all of the stabilization tasks are complete, the team must formally agree that the project has reached the milestone of release readiness. As the team progresses from the release milestone into the Deploying Phase, responsibility for ongoing management and support of the solution officially transfers from the project team to the operations and support teams. By agreeing, team members signify that they are satisfied with the work performed in their areas of responsibility.

Project teams usually mark the completion of a milestone with a formal sign-off. Key stakeholders, typically representatives of each team role and any important customer representatives who are not on the project team, signal their approval of the milestone by signing or initialing a document stating that the milestone is complete. The sign-off document becomes a project deliverable and is archived for future reference.
