Process 5: Develop and Test the Change

Published: April 25, 2008   |   Updated: October 10, 2008


Once a change has been approved, development and then testing of the proposed change can begin. These activities coincide with the Deliver Phase of the IT service lifecycle, which focuses on ensuring that IT services are envisioned, planned, built, stabilized, and released in line with business requirements and the customer’s specifications.


Figure 7. Develop and test the change

Activities: Develop and Test the Change

Developing and testing a change are activities that tie directly to the Deliver Phase of the IT service lifecycle. More information about developing and testing a change can be found in the Deliver Phase Overview and, in greater detail, in the Deliver Phase SMFs.

Low-risk, minimal-effort changes can move through this process and the next very quickly. More complex changes should follow the processes outlined in the Deliver SMFs. Both sets of processes follow a similar path. Follow these guidelines for each change request category:

  • Standard Change: Follow the established procedures for the standard change.
  • Minor Change: Follow the processes for minor changes outlined in this document. See the Deliver SMFs for more detail if needed.
  • Significant or Major Change: See the Deliver SMFs.
  • Emergency Change: Use this category where necessary to get an essential service back up and running quickly. Testing may be delayed until after the release of the change; be sure to complete it afterward to confirm that the change caused no unknown issues. Use caution when dealing with emergency changes, as risk levels are generally higher.
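The category routing above can be sketched in code. This is an illustrative sketch only; the function name and return strings are not part of MOF, but the categories are the ones this document defines.

```python
# Illustrative sketch: route a change request to a development/testing
# path based on its category (categories as defined in this document).
def route_change(category: str) -> str:
    """Return the process path for a change request category."""
    routes = {
        "standard": "established standard-change procedure",
        "minor": "minor-change process in this document",
        "significant": "Deliver SMF processes",
        "major": "Deliver SMF processes",
        # Emergency changes are expedited; testing may follow the release.
        "emergency": "expedited release; complete testing after release",
    }
    try:
        return routes[category.lower()]
    except KeyError:
        raise ValueError(f"Unknown change category: {category!r}")
```

For example, `route_change("Significant")` and `route_change("Major")` both point to the Deliver SMF processes, mirroring the guidance above.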

The following table lists the activities involved in this procedure. These include:

  • Designing the change.
  • Identifying configuration dependencies.
  • Building and testing the change.
  • Reviewing the readiness of the change for release.
  • Updating the RFC.

Table 8. Activities and Considerations for Developing and Testing the Change

Activity: Design the change

Key questions:

  • Does the design demonstrate an understanding of the business requirements and define the features that users need to do their jobs?
  • Have adequate usage scenarios been developed?
  • Does the design address operational requirements?
  • Does the design address system requirements?

Inputs:

  • Business and user requirements for the solution
  • Usage scenarios
  • Operational and system requirements

Output:

  • Design document

Best practice:

  • Maintain traceability between requirements and solution features. This serves as one way to check the correctness of design and to ensure that the design meets the goals and requirements of the solution.
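The traceability best practice above can be made concrete with a simple check. This is a minimal sketch, assuming the traceability matrix is kept as a mapping from requirement IDs to the solution features that satisfy them; the IDs and function name are hypothetical.

```python
# Illustrative sketch: flag business requirements that do not trace to
# any solution feature, so design gaps surface before build begins.
def untraced_requirements(requirements, trace_matrix):
    """Return requirement IDs with no mapped solution feature.

    requirements: list of requirement IDs.
    trace_matrix: dict mapping requirement ID -> list of feature IDs.
    """
    return [r for r in requirements if not trace_matrix.get(r)]
```

An empty result is one signal that the design covers the stated requirements; a non-empty result points to requirements the design has not yet addressed.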

Activity: Identify configuration dependencies

Key questions:

  • Are there other CIs that have dependencies on or that could be affected by the proposed change?
  • Does the proposed change have dependencies on other changes? In other words, does completing the proposed change require other changes to be made first?
  • Are all changes (both the prerequisites and the ultimate change) recorded in the CMS?

Input:

  • Information about other proposed changes in the CMS

Output:

  • A CMS entry showing CI dependencies that might be affected by or have an effect on the proposed change

Best practice:

  • The CMS should be updated whenever an RFC is approved. This will help track that a change is planned for a CI or group of CIs. The CMS must also be updated once the change is complete and successful.
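The dependency questions above amount to a graph traversal over CMS records. The sketch below is illustrative, not a real CMS API: it assumes dependencies are exported as a mapping from each CI to the CIs that depend on it, and walks that graph to find everything a proposed change could affect.

```python
from collections import deque

# Illustrative sketch: find all CIs transitively affected by a change
# to one CI, given dependency data recorded in the CMS.
def affected_cis(cms_dependencies, changed_ci):
    """Return the set of CIs that directly or indirectly depend on changed_ci.

    cms_dependencies: dict mapping a CI to the list of CIs that depend on it.
    """
    seen, queue = set(), deque([changed_ci])
    while queue:
        ci = queue.popleft()
        for dependent in cms_dependencies.get(ci, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen
```

Running the same traversal in reverse (over a CI-to-prerequisites mapping) answers the second key question: which other changes must be completed first.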

Activity: Build and test the change

Key questions:

  • Does the built-out change meet the customer’s specifications?
  • Has the development team prepared a development lab?
  • Has the development team prepared an issue-tracking process? How will these issues be handed over to the Operations team for use in a knowledge database?
  • Have the development and test teams worked together to prepare a test specification?
  • Has the team created multiple release candidates and tested each to see whether it is fit to release to a pilot group?
  • Has the team completed user acceptance testing?
  • Has the team piloted the solution and collected feedback?

Inputs:

  • Vision/scope document
  • Functional specification
  • Customer requirements
  • Code
  • Test specification document
  • Test plan
  • Lab environment
  • Issue-tracking database and issue-tracking policies and procedures

Outputs:

  • Release candidates
  • Pilot-ready release candidate

Best practices:

  • Resolve all known issues, whether the resolutions are fixes or deferrals.
  • Define and communicate standards for issue priority and severity to all team members, including Development, Test, and User Experience.
  • Deliver the issue database to training and support staff so that they can have a deeper insight into the history of the solution and the problems found in development.
  • Schedule regular meetings with those responsible for development and testing to review issues and plan strategies for resolving them.

Activity: Review the readiness of the change for release

Key questions:

  • Is there business alignment, and are priorities understood?
  • Is there clear ownership of all activities and actions?
  • Has the appropriate management signed off on all plans?
  • Have required communications with all affected groups occurred? 
  • Do the users and owners of dependent services know this change is scheduled and what the impact will be to them? 
  • Are the functional users ready and committed to the new processes?
  • Is the testing complete?
  • Are Operations and Support ready for the release?

Input:

  • Status reports for the Release Manager

Output:

  • A go/no go decision about whether to release

Best practices:

  • Ensure that the Release Manager is provided with the appropriate status reports. 
  • Provide feedback and acknowledgement to those who have supported the release, and remind the organization of the expected benefits.
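The readiness review above is, in effect, a checklist that must be fully satisfied before a go decision. The sketch below restates the key questions as such a checklist; the check names paraphrase the questions in this document, and the function is an illustrative aid, not a substitute for the Release Manager's judgment.

```python
# Illustrative sketch: a go/no-go gate over the readiness key questions.
READINESS_CHECKS = [
    "business alignment and priorities understood",
    "clear ownership of all activities and actions",
    "management sign-off on all plans",
    "required communications with affected groups completed",
    "dependent-service users and owners notified",
    "functional users ready and committed",
    "testing complete",
    "operations and support ready",
]

def go_no_go(answers):
    """Return ('go', []) only if every check is answered True.

    answers: dict mapping check name -> bool; missing answers count as no.
    """
    failed = [c for c in READINESS_CHECKS if not answers.get(c, False)]
    return ("go", []) if not failed else ("no go", failed)
```

A single unanswered or failed check is enough to withhold the release, which matches the intent of the review: there is no partial go.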

Activity: Update the RFC

Key questions:

  • Have updates been made to reflect such things as the planned release date, backout plans and any reasons for a backout (if one was required), support requirements, rollout plan, test results, observed problems, and date of the post-implementation review (PIR)? (For more information about the PIR, see Process 7: “Validate and Review the Change.”)
  • Have status updates and monitoring been done throughout the process?
  • Has the change initiator been able to view the RFC throughout the process to get status?
  • Has there been formal communication at the point of release with the change initiator?

Input:

  • Updates about the change

Output:

  • Updated RFC

Best practice:

  • The Change Manager should monitor open changes that are pending release to ensure that their information is kept up to date. An operational level agreement (OLA) can help set expectations for doing this.
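The monitoring described in the best practice above can be sketched as a periodic report. This is an illustrative sketch only: the RFC record fields, the "pending release" status label, and the seven-day OLA window are assumptions for the example, not values defined by MOF.

```python
from datetime import datetime, timedelta

# Illustrative sketch: list RFCs pending release whose last status update
# is older than the window an OLA might set for keeping records current.
def overdue_rfc_updates(rfcs, now, max_age=timedelta(days=7)):
    """Return IDs of pending-release RFCs not updated within max_age.

    rfcs: list of dicts with 'id', 'status', and 'last_updated' (datetime).
    max_age: assumed OLA threshold for status updates.
    """
    return [
        r["id"]
        for r in rfcs
        if r["status"] == "pending release" and now - r["last_updated"] > max_age
    ]
```

A Change Manager could run such a report on a schedule and chase the returned RFCs, keeping the record accurate for the post-implementation review described in Process 7.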