Planning

The primary focus in the Planning Phase is on selecting the appropriate imaging scenarios and methods, as shown in Figure 2. The secondary focus is adding sources to the imaging application or server, whether these are boot images, drivers, packages, or operating systems. The milestone for the Planning Phase is to define the deployment scope and objectives for the imaging strategy for the environment.

Figure 2. Activities during the Planning Phase

Planning Checklist

Table 1 shows the high-level steps in the image Planning Phase.

Table 1. Planning Phase Checklist

High-level steps in the image Planning Phase:

  • Install Deployment Workbench.

Most of the Image Engineering feature team’s work is determining what type of image to create. The team must choose among thick, thin, and hybrid image strategies.

Choose an Image Strategy

Most organizations share a common goal: to create a standard configuration that is based on a common image for each version of the operating system. Organizations want to apply a common image to any computer in any region at any time, and then customize that image quickly to provide services to users.

In reality, most organizations build and maintain many images, sometimes as many as 100. However, by making technical and support compromises, making disciplined hardware purchases, and using advanced scripting techniques, some organizations have reduced the number of images they maintain to three or fewer. These organizations tend to have the sophisticated software distribution infrastructures necessary to deploy applications, often before first use, and to keep them updated.

The following list describes costs associated with building, maintaining, and deploying disk images:

  • Development costs. Development costs include creating a well-engineered image to lower future support costs and improve security and reliability. They also include creating a predictable work environment that maximizes productivity while remaining balanced against flexibility. Higher levels of automation lower development costs.

  • Test costs. Test costs include testing time and labor costs for the standard image and the applications that might reside inside it, in addition to applications applied after deployment. Test costs also include the development time required to stabilize disk images.

  • Storage costs. Storage costs include storing the distribution points, disk images, migration data, and backup images. Storage costs can be significant depending on the number of disk images, the number of computers in each deployment run, and so on.

  • Network costs. Network costs include moving disk images to distribution points and to computers. The disk imaging technologies that Microsoft provides do not support multicasting, so network costs scale linearly with the number of distribution points that must be replicated and the number of computers in the deployment project.

As the size of image files increases, costs increase. Large images have more updating, testing, distribution, network, and storage costs associated with them. Even when team members update only a small portion of the image, the Image Engineering feature team must distribute the entire file.
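This scaling can be made concrete with some back-of-the-envelope arithmetic. The following Python sketch compares the yearly network traffic for a thick image against a thin image plus post-deployment application transfers. Every size and count in it is a hypothetical figure chosen for illustration; none of them comes from this guide.

```python
# Back-of-the-envelope estimate of network traffic for thick vs. thin images.
# All sizes and counts below are hypothetical, for illustration only.

GB = 1024 ** 3

thick_image_size = 12 * GB      # assumed thick image: OS + core apps + language packs
thin_image_size = 5 * GB        # assumed thin image: OS only
post_deploy_apps = 6 * GB       # assumed apps/language packs installed after deployment

distribution_points = 10        # replicas that must receive every image revision
computers = 2_000               # target computers in the deployment run
image_revisions_per_year = 4    # how often the image is rebuilt and redistributed

def yearly_traffic(image_size: int, per_computer_extra: int = 0) -> int:
    """Total bytes per year: replicating each revision to every distribution
    point, plus deploying the image (and any post-deployment payload) to
    every computer once."""
    replication = image_size * distribution_points * image_revisions_per_year
    deployment = (image_size + per_computer_extra) * computers
    return replication + deployment

thick = yearly_traffic(thick_image_size)
thin = yearly_traffic(thin_image_size, per_computer_extra=post_deploy_apps)

print(f"Thick image traffic: {thick / GB:,.0f} GB/year")
print(f"Thin image traffic:  {thin / GB:,.0f} GB/year")
```

Under these assumed numbers, the smaller image wins mainly because updating it does not force the full thick payload back through every distribution point; plugging in an organization's real figures makes the trade-off explicit.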

Note   Windows Vista and Windows Server 2008 do not require a separate image for each type of hardware abstraction layer (HAL). The team needs different images only for 32-bit and 64-bit versions of these operating systems.

Thick Image

Thick images are monolithic images that contain core applications, language packs, and other files. Part of the image development process is installing core applications and language packs prior to capturing the disk image. Most organizations that use disk imaging to deploy operating systems today are building thick images.

The advantage of thick images is simplicity. The organization creates a disk image that contains core applications and language packs and thus performs only a single step to deploy the disk image and core applications to the target computer, with language support for all target locales. Also, thick images can be less costly to develop, because advanced scripting techniques are frequently not required to build them. In fact, thick images can be built using Microsoft Deployment with little or no scripting. Last, in thick images, core applications and language packs are available on first start.

The disadvantages of thick images are maintenance, storage, and network costs. For example, updating a thick image with a new version of an application or language pack requires rebuilding, retesting, and redistributing the image. If the Image Engineering feature team chooses to build thick images that include core applications and language packs, it installs those applications and language packs during the disk imaging process.

Thin Image

A key to reducing image count, size, and cost is compromise. The more the Image Engineering feature team puts in an image, the less common and the larger that image becomes. Large images are less attractive to deploy over a network because of the bandwidth they consume; they are also more difficult to update regularly, more difficult to test, and more expensive to store. By compromising on what it includes in images, the team reduces both the number of images it maintains and their size. Ideally, the Image Engineering feature team builds and maintains a single, worldwide image that is customized after deployment.

Thin images contain few if any core applications or language packs. The Image Engineering feature team installs applications and language packs separately from the disk image. Installing the applications and language packs separately typically takes more time at the computer and may transfer more total bytes over the network, but the transfer is spread out over a longer period. The team can mitigate the network transfer time by using trickle-down technology that many software distribution infrastructures provide, such as Background Intelligent Transfer Service (BITS).
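BITS itself is a Windows service with its own API, but the trickle-down idea it implements, a resumable transfer that is deliberately paced so it yields bandwidth to other traffic, can be sketched in a few lines. The Python example below is only an illustration of that concept; the URL, chunk size, and pacing delay are assumptions, and a real deployment would rely on BITS or the software distribution infrastructure rather than a script like this.

```python
# Illustration of a "trickle-down" transfer: resumable and deliberately paced
# so it does not saturate the link. Conceptual sketch only; real deployments
# would use BITS or the software distribution infrastructure.
import os
import time
import urllib.request

def trickle_download(url: str, dest: str, chunk_size: int = 256 * 1024,
                     pause_seconds: float = 0.5) -> None:
    # Resume from wherever a previous attempt stopped.
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0
    request = urllib.request.Request(url, headers={"Range": f"bytes={offset}-"})
    with urllib.request.urlopen(request) as response, open(dest, "ab") as out:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)
            time.sleep(pause_seconds)   # yield bandwidth between chunks

# Hypothetical package source; any HTTP server that honors Range requests works.
# trickle_download("http://deploy-server/packages/office.cab", r"C:\Temp\office.cab")
```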

Thin images have many advantages. First, they cost less to build, maintain, and test. Second, network and storage costs associated with the disk image are lower, because the image file is physically smaller. The primary disadvantage of thin images is that they can be more complex to develop initially; however, this drawback is offset by a reduction in the costs of building successive images. Deploying applications and language packs outside the disk image often requires scripting and a software distribution infrastructure. Another disadvantage of thin images is that core applications and language packs are not available on first start, even though first-start availability might be a requirement in high-security scenarios.

If the Image Engineering feature team chooses to build thin images that do not include applications or language packs, the organization should have a systems management infrastructure, such as Microsoft Systems Management Server (SMS) 2003 or Microsoft System Center Configuration Manager 2007, in place to deploy applications and language packs. The Image Engineering feature team will use this infrastructure to deploy applications and language packs after installing the thin image.

Hybrid Image

Hybrid images mix thin and thick image strategies. In a hybrid image, the disk image is configured to install applications and language packs on first run, giving the illusion of a thick image but automatically installing the applications and language packs from a network source. Hybrid images have most of the advantages of thin images, yet they are not as complex to develop and do not require a software distribution infrastructure. They do require longer installation times, however, which can raise initial deployment costs.

An alternative is to build one-off thick images from a thin image. The team begins by building a reference thin image. Then, after the thin image is tested, the team adds core applications and language packs, and then captures, tests, and distributes a thick image based on the thin image. Testing of the thick image is minimized, because the imaging process is essentially the same as a regular deployment. Be wary of applications that are not compatible with the disk imaging process, however.

If the Image Engineering feature team chooses to build hybrid images, it will store applications and language packs on the network but include the commands to install them when the team deploys the disk image. This process is different from installing the applications and language packs in the disk image. The Image Engineering feature team is deferring installations that would typically occur during the disk imaging process to the image deployment process. Also, if the organization has a systems management infrastructure in place, the team will likely use it to install supplemental applications and language packs after deployment.
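Microsoft Deployment task sequences or Windows RunOnce commands would normally carry out this deferred installation, so the following Python sketch is only a stand-in for the idea: at first run, the computer reads a list of install commands published on the network, runs each one, and records that it has done so. The share path, command file format, and marker file are all assumptions made for illustration.

```python
# Conceptual sketch of a hybrid-image first run: pull install commands from a
# network source and execute them once. Paths and file formats are assumptions;
# Microsoft Deployment task sequences or RunOnce entries normally fill this role.
import pathlib
import subprocess

COMMAND_FILE = pathlib.Path(r"\\deploy-server\packages\firstrun-commands.txt")
MARKER = pathlib.Path(r"C:\ProgramData\firstrun.done")

def first_run_install() -> None:
    if MARKER.exists():
        return  # already ran on a previous start
    # One install command per line, for example:
    #   msiexec /i \\deploy-server\packages\app.msi /qn
    for line in COMMAND_FILE.read_text().splitlines():
        command = line.strip()
        if not command or command.startswith("#"):
            continue
        subprocess.run(command, shell=True, check=True)
    MARKER.write_text("hybrid first-run installation completed\n")

if __name__ == "__main__":
    first_run_install()
```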

Milestone: Deployment Scope and Objectives Identified

Milestones are synchronization points for the deployment process. For more information, see the Microsoft Deployment Planning Guide.

For this milestone, the Image Engineering feature team has completed the process of choosing the appropriate imaging strategy for the organization’s environment, and determined the scope of its use, as listed in Table 2.

Table 2. Planning Phase Project Milestones and Deliverable Descriptions

Planning Phase milestone: Select image strategy
Deliverable description: All image strategies for the environment have been determined.
Owner: Image Engineering feature team

Planning Phase milestone: Determine the scope of the image strategy
Deliverable description: Where and how the images will be used within the different areas of the organization has been determined.
Owner: Image Engineering feature team
