Chapter 3 - Planning Your Build Migration

On This Page

Introduction and Goals
Understanding the Build System Migration
Developing the Solution Design
Validating the Technology
Creating the Functional Specification
Developing the Project Plans
Creating the Project Schedules
Setting Up the Development and Test Environments

Introduction and Goals

The rest of this solution guide offers guidance for migrating a make-based build system on UNIX to a make-based build system on Windows. The process guidance and technical information provided in the remaining chapters are applicable even if your solution is to create a new build system on Windows, but the implementation details will differ.

The primary goals for the Planning Phase are to create the solution architecture and design, the project plan, and the project schedule. The solution design is articulated as a concept, as a logical system, and as physical components (also known as the conceptual design, logical design, and physical design). This is the phase of a project when the initial vision is translated into practical plans for how to achieve it. Team members draw upon their expertise to create individual plans in all areas of the project, ranging from security to budget to deployment, and the role of Program Management rolls them up into the master project plan. Likewise, individual schedules are combined to become the master project schedule.

The phase concludes when the project team agrees that the plans are sufficiently well-defined to proceed with development, and the team, business sponsor, and key stakeholders approve the master project plan and schedule, usually at a milestone meeting. The formal conclusion of this phase is marked by the second major milestone, Project Plans Approved. The following list identifies the tasks that must be completed to meet the milestone:

  • Develop the solution design and architecture.

  • Validate the technology.

  • Create the functional specification.

  • Develop the project plans.

  • Create the project schedules.

  • Set up the development and test environments.

The technical information provided in this chapter will guide you in the considerations to keep in mind as you make decisions. Requirements for the development and test environments are discussed, and guidance is also provided on validating the technology that will be used. The UMPG provides detailed descriptions of each of the tasks and deliverables for this phase of the project.

Understanding the Build System Migration

The following steps provide an overview of the usual course of a build system migration. Each step is discussed in detail in the rest of this chapter. In the course of performing these steps, you will decide which UNIX portability environment is most suitable for your requirements.

Depending upon the depth of the assessment you performed during the Envisioning Phase and the technical requirements you have gathered, you may have already done some of the work described in these steps. If you did not perform an in-depth assessment during the Envisioning Phase, the Planning Phase is the appropriate time to do this work.

  1. Inventory your existing build system.

    Before you can begin to migrate your build process, you must fully understand how the build system works and determine the list of requirements necessary for the migration to succeed. This is achieved by performing a detailed assessment that examines all aspects of the build system, including the makefiles and the utilities used from within the makefiles. Refer to the detailed guidance provided in the "Assessing Your Existing Build System" section of this chapter.

  2. Create a set of requirements.

    To minimize migration problems, you want to choose or create a Windows environment that shares as many important features as possible with the existing system. To compare and match the features of the Windows environment to your existing system, you need to distill your assessment of the current build system into a list of build system requirements. This requirements list need not be a formal document at all; it may be captured entirely in the sample construction site that is described in step 3c of this procedure.

  3. Choose an appropriate UNIX portability environment.

    To migrate your make-based build process, a UNIX portability environment on Windows must be chosen. For this choice, it is important to compare your build system requirements with the features of the portability environments. To do this, you will need to test and use the build environments directly to ensure that all requirements are satisfied. Choosing the appropriate environment usually involves the following steps:

    1. Investigate potential solutions on Windows.

      Become familiar with all possible UNIX environment solutions for Windows. Experiment with shell scripting, compiling, editing text files, and using make in each of these systems. Determine if there are any obvious limitations in any of them.

    2. Compare your build environment requirements with the capabilities of the Windows solutions.

      Determine which Windows solutions seem most compatible with your existing process. Do not eliminate any solution too quickly; an important problem that is not yet visible may turn out to be solved best by a solution you discarded early.

    3. Build a sample construction site.

      This sample site mimics your existing build process. Create a high-level shell script that performs many of the same operations as the existing one (or reuse the existing script itself). Include as many of the existing operations as possible so that you can determine whether the Windows solutions can handle them. You will also need to create some sample makefiles that contain build rules indicative of your existing build process. Use the same tools (compilers, library archivers, and other utilities) to test everything. Keep a copy of this site to serve as a test bed.

      This step also identifies problem areas for which alternate solutions can be implemented. Refer to Chapter 4, "Migrating the Build System," in this guide to familiarize yourself with various UNIX-to-Windows migration issues that can help you identify potential problem areas in your migration.

    4. Install each UNIX environment on Windows and compare each one to the original build environment.

      You may want to install each environment on a different physical system to avoid any potentially confusing or incompatible interactions between them. Take the sample construction process and execute it in each of the installed UNIX environments. Examine the results and determine the problem areas.

      Be aware that some environments provide different versions of important utilities. For example, some environments provide both their own version of make and the popular GNU gmake utility. Some environments come with compiler scripts that can be configured to use different native Windows compilers or to support specific command-line options compatible with your previous UNIX version.

      Many of these UNIX environments claim conformance to the POSIX software standards, as do many UNIX systems. If your build process was constructed to be portable against these standards, your assessment is largely a search for the special features and non-standard extensions that fall outside them.

    5. Pick the most suitable UNIX environment.

      Choose the UNIX environment that seems to have the fewest problems executing your build process. Sometimes you will find that an apparently good solution has some behavior that is totally incompatible with your requirements, or is missing a feature that has no easy replacement; eliminating such candidates often makes the decision easier.

  4. Change your actual construction files.

    Make the necessary changes to work around the known problem areas and then execute your solution using the chosen Windows solution. Following this process could reveal additional problem areas that were not detected by the sample construction process. Continue performing this step to build all your applications.

  5. Ensure that new makefiles work.

  6. Deploy the build system to developers' desktops.

Figure 3.1 captures these steps in a diagram:

Figure 3.1: Representation of core steps necessary in migration

This flowchart summarizes the critical technology-oriented tasks that should be accomplished within the overall migration process. Your migration may differ from that represented in Figure 3.1, but the summarized tasks will all be factors or considerations in one form or another during the migration process. For example, your vision/scope document may dictate that the build migration team cannot convert the ten thousand makefiles on your system; instead, they will provide an alternate framework for building the software on Windows (such as Visual Studio) and a set of guidelines or utilities for converting the makefiles. This approach is often seen when a single team is migrating a build system that will be used by many different application migration teams.

Developing the Solution Design

The first technological activity in migrating a build system based on your existing make system is to develop a solution design. A solution design consists of three parts:

  • Conceptual design. This design includes the build system from the point of view of its users, including user profiles, and usage scenarios for the current and proposed systems.

  • Logical design. This design includes the objects and the interactions that will fulfill the conceptual design, independent of a specific hardware or software solution.

  • Physical design. This design includes the hardware and software solutions to implement the logical design; this is the stage in which real-world limitations are applied to the design.

If some of the work described in this list looks familiar, it should. The conceptual and logical designs are based on work you have already done when assessing your current build system and deciding to migrate your make-based build system. Use the inventory and examination of your existing build system to produce the conceptual design. Examine the current usage and modify the design from there to meet the high-level requirements and goals defined earlier.

Conceptual Design

The purpose of the conceptual design is to capture and understand business needs and user requirements in their proper context, and then to create the conceptual design based on them. The conceptual design facilitates complete and accurate requirements by involving business sponsors, stakeholders, and users.

Logical Design

The logical design will also come from the examination of your current build system. It is tempting to skip this stage and go straight to the physical design, but the work of reevaluating the existing build system as logical components pays off when portions have to be reimplemented on Windows. You can derive a set of build system requirements strictly from the logical design.

Physical Design

The physical design involves the choice of specific hardware and software solutions to migrate your make-based build system to Windows. The physical design will include specifics that come from your investigation of the existing build system. Different software solutions offer different features, and these will affect the physical design and may cause you to modify your logical design.

For example, your existing code base may consist of many files that have names that differ only in case. The logical design may simply state that files are compiled from these directories, but the physical design must account for differences in case. How will these files be renamed? Will you select a system where differences in case are accounted for, or will you go back to the logical design and add a step where file names are mapped in some way?
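
If you need to find such collisions before deciding, a short scan of the source tree will list them. The following is a minimal sketch; the tree location is illustrative, and it assumes the standard find, tr, sort, and uniq utilities:

    # Print path names that would collide on a case-insensitive file system.
    # Lowercase every name, then report the names that occur more than once.
    find /your/source/tree -print | tr '[:upper:]' '[:lower:]' | sort | uniq -d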

The investigation of your existing process gives you a set of technical requirements, such as "The new system must handle symbolic links." Investigating the target environments allows you to decide whether each one can satisfy those technical requirements, together with the other high-level requirements defined in the Envisioning Phase.

As another example, the UNIX portability environment in your physical design may be complex or practically nonexistent. If your makefiles invoke only utilities for which there are Windows command-line equivalents (copy for cp, del for rm, and so on) and place no UNIX-specific demands on the file system, then the make program itself is the only portability environment you need. You need only move the source files, find an appropriate make program, and rewrite the makefiles.
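
As a sketch of what such a rewrite can look like, the fragment below shows a hypothetical clean rule before and after conversion. The macro and file names are illustrative, recipe lines must begin with a tab, and the rewritten form assumes your chosen make runs its recipes through a shell that understands the built-in del command:

    # Original UNIX makefile rule:
    clean:
            rm -f $(OBJS) myprog

    # Rewritten for a native Windows build using command-line equivalents:
    clean:
            del /q $(OBJS) myprog.exe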

On the other hand, if your build process is utterly dependent upon UNIX file system semantics, then you need a more sophisticated UNIX portability environment, such as the Microsoft® Interix subsystem in Microsoft Services for UNIX.

Deciding on the physical design — the set of tools to implement your migrated build process — is a significant part of the solution design, and it is the bulk of this document's guidance.

Assessing Your Existing Build System

In practice, teams sometimes skip this step. They just use the inventory from the earlier Envisioning Phase and begin one or more sample build system migrations; they investigate issues as they are encountered. This approach has less process overhead, but the total amount of rework done is often significant.

This step covers the four boxes beginning with "Assess" in Figure 3.1. The intent of the assessment at this point is to create a list of features that must be considered on the Windows target system. These features must be put into a sample system or proof of concept to try out the different Windows solutions.

In the earlier assessment performed in the Envisioning Phase, you needed only a cursory analysis to develop an idea of what you wanted your build process to do, to identify potential scope constraints, and to surface possible issues. In this stage, you need to understand your process completely in order to map between your existing commands and one of the available UNIX portability environments. All the tools and all the processes must be analyzed and itemized. Each of these must be compared to the capabilities of each target environment to determine if there are any migration issues.

Create a table or spreadsheet listing features that are important to your process. Later, when examining the Windows solutions, you will be able to determine how the Windows product provides a solution to your issues. The table will be important when you create a sample build process or proof of concept. As a point of comparison for the table you build, the guidance in Chapter 4, "Migrating the Build System," lists various features in the build system that vary between systems. These features may highlight potential migration issues when compared to your build system needs.

To determine the best way to migrate your build process to Windows, you must analyze and fully understand your existing process. The four key elements to assess are the compiler and linker, make and the makefiles, the shell scripts, and the other utilities used, in roughly that order of importance. The following four sections discuss these elements.

Assessing Your Compiler and Linker

Depending on the type, complexity, and characteristics of the application being built, the features and usage of the actual compilation and linking utilities may be highly specialized and complex. Usually, the compiler can handle most compilation options, including passing feature requests directly to the linker, so that the build process does not need to call the linker directly. But in some cases, such as constructing shared libraries, the linker must be called directly.

On UNIX, the most common name for the C compiler is cc, and the common name for the C++ compiler is CC. Many UNIX system vendors supply their own specific compiler version, and these may have different names, such as IBM's AIX xlc and xlC compilers and the GNU gcc compiler. In many cases, the name cc or CC is a link to the vendor's specific compiler location. Note that the difference in case between these two names may cause problems on Windows.

Every vendor's compiler is different. Not only do differences occur between UNIX vendors, but also between UNIX and Windows vendors. They all tend to support a common set of options, but then they diverge by supporting different command line options, and by providing varying levels of conformance with respect to the different ANSI/ISO C and C++ standards. The differences tend to be more pronounced when moving from UNIX to Windows.

Several of the UNIX environments on Windows invoke a native Windows compiler through a special command, such as cc or wcc. These are configurable wrapper scripts that are used to mimic the more familiar UNIX compiler commands and compiler behavior. Because they are configurable scripts, they are easy to modify for accommodating different Windows tools and to make your own personalized enhancements. Most of the common UNIX command line options are supported by these wrapper scripts.

If your build system also invokes the ld command, then you need to determine whether the UNIX environment on Windows also supports this command and, if not, then whether the compiler can be used as a replacement to ld or you need to call the native Windows linker, such as link.exe, directly.
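
For example, a build step that constructs a shared library by calling the linker directly might map to the native Windows linker as follows. This is a sketch only; the object and library names are illustrative, and the exact flags depend on your UNIX platform and the Windows linker you select:

    # UNIX: call the linker directly to build a shared library
    ld -shared -o libfoo.so foo.o bar.o

    # Windows: produce a DLL with the native linker
    link /DLL /OUT:foo.dll foo.obj bar.obj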

After you build your application using the compiler or linker or both, you may need to use a debugger to investigate why the application is not working as expected after migration. Whichever compiler you choose, make sure you have a corresponding debugger. For gcc, this is gdb; for Interix (wcc) and MKS (cc), this is a Windows debugger, such as windbg or cdb. Debuggers are essential during the application migration, but they may also matter in the build process itself if the build creates its own tools to assist with the build.

Assessing make and makefiles

An assessment of the makefiles should determine all the features used by make in the build system and the features that are not specific to make but are captured in the makefiles. Answers to the following questions will help establish features when planning the new build system:

  • What other targets are provided in the makefiles? Do they allow for different versions, cleaning old object files, printing documentation, or stripping executables of symbolic information?

  • What features are used in the makefiles? Are suffix rules, pattern matching, special macro assignment operations, conditionals, and virtual targets included?

  • Are unusual recipes used?

  • What compile-time macros are used to control versions built?

  • What compiler options are used?

  • Does the build system depend on case-sensitive file names? Specifically, does the build system depend on the case-sensitive distinction between the following two file names: makefile and Makefile?

Ensure that your assessment of existing makefiles includes the following information (several of these constructs appear in the annotated fragment after this list):

  • Implicit rules for creation of executables

  • Inclusion of other makefiles

  • Special targets

  • Recursion in building a directory hierarchy

  • Variables such as MAKEFLAGS and SHELL

  • Use of VPATH macro

  • Dynamic macros in prerequisites

  • Pattern matching rules

  • Use of case specificity in makefile names. If files named Makefile and makefile must be processed differently, this reliance on case will be a problem on Windows.
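
The following annotated makefile fragment pulls several of these constructs into one place so they are easy to recognize during the assessment. It is a sketch only: the included fragment rules.mk, the file names, and the macro values are illustrative, and recipe lines must begin with a tab.

    include rules.mk                  # inclusion of another makefile

    CFLAGS := -O -DVERSION=3          # immediate (:=) assignment and a compile-time macro
    VPATH   = src:../common           # VPATH: sources located outside the build directory

    %.o: %.c                          # pattern matching rule (gmake syntax)
            $(CC) $(CFLAGS) -c $<     # dynamic macro ($<) in the recipe

    prog: main.o util.o
            $(CC) -o $@ main.o util.o # $@ names the target being linked

    subdirs:
            cd lib && $(MAKE) all     # recursion into a directory hierarchy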

Assessing Shell Scripts

Most UNIX build systems use shell scripts extensively to drive the software construction process. These scripts can be found in their own files, or they could be recipes in makefiles.

In many cases, it will be possible to choose an equivalent shell on Windows because versions of ksh, sh, bash, and csh are available in some or all of the UNIX portability environments discussed in this document. For this reason, you rarely need to decide on a UNIX portability environment based on your shell script requirements.

When assessing shell scripts, ensure that your assessment includes the following considerations:

  • The shell used to run the scripts

  • Inconsistencies in choice of shell (for example, perhaps a small number of scripts are actually csh scripts instead of Bourne shell scripts)

  • Whether the shell is a POSIX conforming shell

  • Extensions to POSIX used, such as co-processes

  • What are the scripts used for? That is, what is the intent behind their use?

  • What command line options are used for each?

  • What is the purpose of any special scripts or applications that are invoked?

  • Are symbolic links used (that is, is the command ln –s used in any scripts)?

  • Do the scripts use absolute path names?

  • Do any of the file or path names contain characters not supported by Windows or file names reserved by Windows?

  • Are case-sensitive file and directory names important?
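
A rough first pass over the scripts can answer several of these questions quickly. The sketch below assumes the scripts live in a single directory (the path is illustrative) and uses only standard utilities:

    # Which shell does each script request on its first line?
    for f in /your/build/scripts/*; do
        printf '%s: ' "$f"
        head -n 1 "$f"
    done

    # Scripts that create symbolic links
    grep -l 'ln -s' /your/build/scripts/*

    # Scripts that appear to use absolute path names (adjust the prefixes to your site)
    grep -lE '/(usr|opt|home)/' /your/build/scripts/*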

Assessing the Use of Utilities

The intent in assessing the utilities at this point is to come up with a complete list of utilities invoked in the course of your build process. You may already have this list from your examination done during the Envisioning Phase. When you have this list, you will need to assess each one individually to determine if that utility is available on Windows and whether its command line options are the same. Essentially, you are trying to find which utilities need to be modified or replaced during your migration to Windows.
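
One way to produce such a list is to pull the first word of every recipe line out of the makefiles and then test each command in the candidate environment. The following sketch assumes makefiles whose recipe lines begin with a tab and a shell that provides command -v; the file names are illustrative, and entries such as make macros or @/- prefixes will need some hand filtering:

    # Collect the commands invoked by makefile recipes.
    awk '/^\t/ { print $1 }' Makefile */Makefile | sort -u > commands.txt

    # In the candidate Windows environment, flag the commands that are not available.
    while read -r cmd; do
        command -v "$cmd" >/dev/null 2>&1 || echo "needs attention: $cmd"
    done < commands.txt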

This list of all utilities will come from sources such as shell scripts, recipes in makefiles, and your custom-built build process utilities (such as a tool that retrieves version identifiers from a database). Many utilities will have Windows equivalents — for example, the commands to move and copy files overlap significantly in functionality — but some may not.

Identify the purpose of the commands that do not have Windows equivalents. These are commands that will have to be worked around instead of matched to an equivalent in a portability environment. The commands without equivalents will often be those dealing with file system capabilities. For instance, the command ln –s indicates a symbolic link. Because of the differences between the UNIX and Windows file systems, the use of symbolic links must be worked around in most cases.

Choosing Your UNIX Portability Environment

This step constitutes the second half of the Planning Phase illustrated in Figure 3.1. It is now time to use the feature lists generated during your assessment to select an appropriate UNIX portability environment.

For an efficient migration, you want to select an environment that best suits your construction tool requirements. Because Windows is inherently different from UNIX, you must find a UNIX environment solution for Windows. Fortunately, there are many UNIX environment systems available for Windows, including the Developer versions of the MKS Toolkit, the Interix environment system in Microsoft Services for UNIX, and the Cygwin environment and toolset. All these systems attempt to provide as many UNIX-like features and capabilities as possible to ease the migration effort from UNIX.

Your choice of one portability environment may be determined by the fact that a single vital utility or feature is available only with that environment. It may be that the choice of a compiler determines your choice of portability environment, or it may be that specific features of the portability environment determine your selection of utilities.

In fact, you may not want to choose a specific portability environment until early in the Developing Phase. If the results of the assessment are inconclusive (as is often the case), or if the trade-offs between different versions are not clear, then you should prioritize the features available in each environment according to your requirements in order to help you clarify the trade-offs. For more information, see the section titled "Validating the Technology" in this chapter.

Comparing UNIX Portability Environments

Your build system migration will go more smoothly if you understand the limitations of each UNIX environment system as it relates to the requirements and dependencies of your current build process. You want a system that provides the most tools with the same functionality that you depend upon from UNIX and that best suits your business goals.

Three such products are discussed in this document: MKS Toolkit for Developers, Interix environment (part of Microsoft Services for UNIX) and the Cygwin environment. Each of these products has its own unique characteristics, which may or may not make the product suitable for your migration. If none provides all the features you require, then you must choose the best fit and then change your build system to accommodate it. For example, if you require your build process to be shared on multiple platforms, then you will need to find the Windows solution that requires the fewest changes to your construction files.

You should do your own investigation of Windows products to supplement the information about the three major products described briefly here. They are used in many migration projects, but you may discover that a system not mentioned in this guide meets your needs better.

You may also discover that different products meet different portions of your requirements and, thus, be inclined to mix and match components from each product. This usually does not work and is not recommended, either for cost reasons (each product may have significant licensing costs) or for technical reasons (one product's technology may be incompatible with another's). It is not impossible to mix and match (gmake, for example, is available for all of these environments), but multiple products are seldom used successfully in a single solution. Table 3.1 summarizes some of the features available in each product.

Table 3.1 High-level Capability Comparison Matrix

Capability | MKS Toolkit | Interix | Cygwin
make utility | MKS make; gmake (on CD media) | BSD make; gmake (available from www.interopsystems.com) | GNU make (gmake)
Compiler for Win32 | cc, cxx (interfaces to a Windows compiler; configurable using the CCG environment variable) | wcc (interface to a Windows compiler; available from www.interopsystems.com) | gcc, g++
Linker | ld (interface to the Windows linker; configurable) | Use wcc or the Windows linker (link.exe) | ld
Shells installed | MKS ksh, csh (tcsh), bash | ksh (pdksh 5.2.13), csh (tcsh 6.08.03) | bash
Other shells available | | bash (available from www.interopsystems.com) |
Scripting tools | Perl 5.6.0, MKS awk, gawk (on CD media) | Perl 5.6.1, AT&T awk, gawk | Perl 5.8.0, gawk
Common utilities | POSIX.2 | POSIX.2 | POSIX.2
Single-rooted file system | No | Yes | Yes
Case-sensitive file names | No | Yes | No
Windows file name syntax | Yes | No | Yes
UNIX portability libraries compatible with Win32 | Yes | No | Yes

The following sections describe in greater detail the three popular UNIX environment products that are available on Windows.

MKS Toolkit

MKS Inc. provides a popular set of UNIX tools and programming environments on Windows known as the MKS Toolkit product family. There are several versions of the MKS Toolkit that enable you to preserve UNIX investments and migrate scripts, source code, build environments, and working environments quickly and easily from UNIX to Windows. MKS Toolkit for Developers is the product that will solve the problem of migrating UNIX build environments to Windows.

MKS Toolkit for Developers provides the entire suite of tools defined in POSIX.2, including the Korn, C, and bash shells; find, grep, awk, perl, and 400 other tools and utilities. For development, it includes utilities such as make, ld, and a cc script which is configurable to support a variety of C and C++ compilers for Windows (such as Microsoft Visual Studio).

MKS Toolkit for Enterprise Developers is a superset of MKS Toolkit for Developers and is used for the migration of enterprise UNIX applications to Windows. Along with all the features of the Developer product, MKS Toolkit for Enterprise Developers includes a complete set of UNIX application programming interface (API) libraries based on the Win32 subsystem for migrating UNIX applications. The Enterprise Developers product also includes support for X11, OpenGL, Motif, C, C++, Fortran, curses, and shell script applications.

The MKS Toolkit technology provides a framework for UNIX and Windows applications to coexist and fully interact on the Windows platform. The migration process requires little to no modification of the UNIX source code and allows you to maintain a single source code baseline for building and deploying applications on both UNIX and Windows. Further, with MKS Toolkit you can introduce and integrate Windows-specific code and functionality into your legacy applications, thereby evolving them to take advantage of Windows features such as .NET and COM.

This document concentrates on the MKS Toolkit for Developers product; the Enterprise Developers product is more comprehensive and would be the choice for migrating both the build environment and the application itself to the Windows environment.

More information on the MKS Toolkit products can be found at https://www.mkssoftware.com/products/tk/.

Microsoft Services for UNIX and Interix

Interix version 3.5 comes with the Microsoft Services for UNIX (SFU) product. Interix is a complete UNIX environment that includes the Interix subsystem, the Korn and C shells, more than 350 UNIX utilities, and a complete software development kit (SDK). These utilities are standard on UNIX computers and provide a familiar environment to ease the transition of developers, users, and system administrators from UNIX to Windows. The SDK supports more than 1900 UNIX APIs and migration tools, such as make, rcs, yacc, lex, nm, and strip.

The SDK also comes with compilation tools such as cc, c89, gcc, g++, ld, and g77 that allow UNIX source code to be recompiled. They produce binaries that run on the Windows platform, but these binaries are not dependent on or compatible with the Win32 subsystem; they run only in the context of the Interix environment subsystem. Unfortunately, in the context of this guide, this means that these compilation tools are not suitable, because this guide assumes your goal is to build Win32 binary applications. Fortunately, Interix does support interoperability with Windows in many other ways, including the capability to directly invoke Windows utilities. A scriptable utility called wcc looks like a conventional UNIX C/C++ compiler and works in the Interix environment, but it invokes the Microsoft C/C++ compiler to produce Win32 executable files. Because of its scriptable nature, wcc can also be modified to invoke the Windows version of other popular compilers, such as the Windows version of gcc.

Another Interix SDK tool is the ld utility that is part of the gcc/g++ family of tools. This, too, cannot be used to create a Win32 binary. So, if your build system requires the use of a linker, then you will have to replace it with either wcc (using appropriate linker command line options) or with a Windows linker utility such as Windows link.exe.

Additional information about Interix and Services for UNIX can be found at https://www.microsoft.com/windows/sfu/ and https://www.microsoft.com/technet/interopmigration/unix/sfu/sfu3ovw.mspx.

Cygwin

Cygwin is a Linux-like environment for Windows. It consists of two parts:

  • A DLL (cygwin1.dll) that acts as a Linux emulation layer and that provides substantial Linux API functionality.

  • A collection of tools that provide the look and feel of Linux.

The Cygwin tools are ports of the popular GNU development tools to Microsoft Windows. They run using the Cygwin library, which provides the UNIX system calls and environment these programs expect.

With these tools installed, it is possible to write Win32 console or GUI applications that make use of the standard Microsoft Win32 API and the Cygwin API. As a result, it is possible to easily port many UNIX programs without the need for extensive changes to the source code. This includes configuring and building most of the available GNU software, including the packages that come with the Cygwin development tools themselves.

The Cygwin system can be obtained from https://www.cygwin.com/.

Choosing a Version of make

To a large extent, the choice of a UNIX portability environment determines the choice of make, and vice versa: You will normally use MKS make in the MKS environment, gmake in the Cygwin environment, and Interix make in the Interix environment.

However, because the gmake implementation is open source, this version can be easily recompiled for any UNIX environment, including the Interix and MKS portability environments. The MKS version is available on the CD media and the Interix version is available from the https://www.interopsystems.com Web site.

If you are considering maintaining your build process on both UNIX and Windows, then you may want to consider using gmake in your build process.
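
A quick check inside each installed environment shows which make the PATH resolves to and whether it is the GNU flavor; only GNU make is guaranteed to accept --version, so treat the last command as a hint rather than a test:

    command -v make                           # which make does the environment supply?
    command -v gmake                          # is gmake installed as well?
    make --version 2>/dev/null | head -n 1    # GNU make identifies itself here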

Choosing a Windows Compiler

There are a variety of Windows compilers available, including Microsoft Visual Studio® .NET and the Windows version of gcc. Choosing one will depend on the features your UNIX application requires and perhaps on which UNIX compiler you are currently using. For example, if you are using gcc on UNIX, you may want to use the Windows gcc compiler. This choice could minimize your application migration issues.

The gcc compiler is popular on UNIX because it is freely available on almost every UNIX platform, it has reasonable performance, and it keeps abreast of the latest ANSI/ISO compiler standards. This provides a level of consistency across multiple platforms that simplifies portability. However, on Windows, there are several different versions and configurations of this compiler available. The gcc compiler that comes with Cygwin is configured to use the Cygwin UNIX portability libraries and will be dependent on the Cygwin runtime DLL. There is also a gcc compiler that comes with the MinGW (Minimalist GNU for Windows) package that contains a collection of Windows header files and import libraries that allow you to build Win32 applications that do not require the Cygwin runtime DLL.
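
If you are unsure how a particular gcc installation is configured, one quick check is to build a trivial program and list the DLLs the executable imports. This sketch assumes the GNU binutils objdump is installed alongside the compiler and uses an illustrative hello.c:

    gcc -o hello.exe hello.c
    objdump -p hello.exe | grep 'DLL Name'
    # A Cygwin-configured gcc lists cygwin1.dll among the imports;
    # a MinGW-configured gcc does not.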

Using any Windows compiler directly in the Interix environment is problematic because of the file name syntax incompatibilities between Interix and Windows. If you want to use the Interix environment, you should use the wcc wrapper script that is designed to invoke a previously installed Microsoft C/C++ compiler, such as Visual Studio .NET, and that will create a Win32 binary. It is possible to modify this script to invoke other Windows compilers, such as gcc, but you are responsible for making the appropriate modifications.

The MKS Toolkit environment provides a scriptable cc utility that uses the Microsoft Visual C® compiler that can be configured to build a strict Win32 application or to build a UNIX application that requires support from the MKS Toolkit for Enterprise Developers UNIX portability libraries. The former script is located at $ROOTDIR/etc/compiler.ccg and the latter at $ROOTDIR/etc/nutccg/cc.ccg.

Comparing Mechanisms Commonly Used in make Environments

Table 3.2 summarizes some of the common features available on UNIX and the availability of these features within the different UNIX environment products on Windows. Because the make utility is very important, this table highlights many of the features provided by the Sun Microsystems Solaris version of make that may be an issue during the build process migration. These feature comparisons are explained in detail in Chapter 4, "Migrating the Build System." For now, check to see which of these features your build system uses and which Windows implementations support them.

Table 3.2 Support for Mechanisms Commonly Used in make Environments

Common Mechanisms | Cygwin | Interix | MKS Toolkit
Product provides gmake during installation | yes | |
gmake available | yes | yes | yes
make supports multi-line comments | yes | | yes
makefiles can include other makefiles | yes | yes | yes
make stores command line macro definitions in MAKEFLAGS | yes | yes |
make supports immediate evaluation of variable value (:=) | yes | yes | yes
make assigns shell command result to macro (VAR:sh = sh_cmd_line) | | yes |
make defines macro conditionally (:=) [Sun Solaris feature] | | |
make uses VPATH macro | yes | |
make provides macros COMPILE.c, LINK.c | yes | |
make provides implicit rule for converting .c files to executables | yes | yes | yes
make provides dynamic macros in prerequisites | | yes | yes
make provides pattern matching rules | yes | | yes
make provides equivalents to .INIT and .DONE special targets | | yes |
Product provides Windows gcc | yes | |
Windows gcc could be invoked from product environment | yes | yes | yes
C compiler utility for Windows applications | gcc | wcc | cc
C++ compiler utility for Windows applications | g++ | wcc | cxx
Product provides appearance of single-rooted file system | yes | yes |
ln -s creates symlinks for files* | yes | yes |
ln -s creates symlinks for directories* | yes | yes | yes
Product provides file system case sensitivity* | | yes |
Product supports Windows path name syntax | yes | | yes

Note: The items in the table marked with an asterisk (*) are mechanisms that are intended to work only for utilities and applications built for the corresponding UNIX environment. These mechanisms do not usually work with Windows applications or applications from other UNIX portability environments.

Using this table to identify migration issues is not as simple as tallying checkmarks and choosing the product with the highest total. Each feature needs to be assigned a priority or a weight that depends upon the frequency of use and the difficulty of working around it. For example, the lack of an implicit rule for converting .c files to executables is a nuisance but it is easily fixed, while a reliance on symbolic links may be impossible to work around. On the other hand, your build system may use symbolic links only once but rely heavily on implicit rules.

For a build system migration, you may need to port individual components that make up that build system. Porting components may require creating or porting scripts to perform a specific function in the Windows environment. Although most scripts can be migrated (given the appropriate UNIX portability environment), you should check the two principal sources of incompatibility:

  • File and path specifications

  • Permissions and security

Details on these migrations are given in the UNIX Application Migration Guide, as well as in Chapter 4, "Migrating the Build System."
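
File and path specifications are usually the first place incompatibilities appear, because each environment exposes Windows drives differently. The following sketch illustrates the kind of translation involved; the paths are illustrative, and the exact mount points depend on how each product is installed:

    # Cygwin ships a translation utility:
    cygpath -w /usr/src/myapp      # UNIX-style path to Windows-style
    cygpath -u 'C:\build\out'      # Windows-style path back to UNIX-style

    # Interix instead exposes drives inside its single-rooted tree:
    ls /dev/fs/C/build/out

    # MKS Toolkit accepts Windows path syntax directly in most utilities:
    ls 'C:/build/out'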

Build processes tend to use many different types of applications: scripts for sed, awk, and the shell; standard UNIX utilities such as make, cp, ld, and rm; and specially-crafted applications unique to your organization and written in C or C++. Each of these applications and the controlling makefiles make up solution components for your build process. During the Planning Phase your investigation should uncover any components that need modification or replacing.

Validating the Technology

You should run a proof of concept to validate your physical design. If you have not yet decided on a UNIX portability environment, the proof of concept will quickly establish which one best fits your needs. The mechanism for this validation is a test build, designated as steps 3c, 3d, and 3e in the "Understanding the Build System Migration" section of this chapter.

The test build is a simple application that uses the most important features of your build system. This test application may be a simple application specifically written to test aspects of the migration, but it is more likely a subset of the migration project itself. (If the build system includes custom utilities, building them with the migrated build system is an interesting bootstrap problem and a useful test.)
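
A small driver script helps keep the test build repeatable across environments. The sketch below assumes the sample construction site is checked out in the current directory and that each candidate environment provides a Bourne-compatible shell; the targets and log file name are illustrative:

    #!/bin/sh
    # Run the sample build and capture everything for later comparison.
    log="poc-$(uname -s).log"        # uname distinguishes the environments under test
    {
        date
        make clean
        make all                     # or gmake, depending on the environment under test
        ls -lR bin                   # did the expected targets appear?
    } > "$log" 2>&1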

If you have the resources, you should set up a computer for each candidate Windows environment. This kind of parallel testing must be synchronized with development, but it can save considerable time compared to trying the different environments serially.

Note that the proof of concept is not a pilot. A pilot project tests the system as a whole; this validation test does not cover the entire system yet — it may not even be robust. However, it can serve as the basis for a later pilot or a later test harness.

Creating the Functional Specification

The build system does not often get a functional specification in any detail. When migrating a make-based build system, the tendency is to say, "It will be like the UNIX version," and leave the functional specification at that. The table of migration issues you have started is a list of possible specifics, but it is not a functional specification itself. In small projects, however, it is sufficient.

However, based on the assessment already done, you must clarify the areas where you can make the new system identical and where you cannot. The functional specification, which serves as a contract and as a guideline for a future test mechanism, should capture this information. The specification should make it possible for an end user or tester to determine whether the system works and, where it falls short, by how much.

In some organizations, it is the job of the build migration team to do the initial conversion of the makefiles. In others, the build migration team handles the utilities and structures of the build system but does not touch the actual makefiles; instead, it provides the framework (the toolset) and a set of instructions or conversion utilities to be applied by application developers as they need the conversions.

Regardless of which approach your build migration team provides, the functional specification is an important description of the deliverables that needs to be captured. The functional specification, as the contract between the build migration team and the application migration team, is the best place for this information. However, it can be defined elsewhere, so long as it is defined.

Developing the Project Plans

The project plans describe how the migration will be done on a day-to-day basis. They allocate resources and set a schedule with milestones. The UMPG contains a list of plans that you might need to develop for a software migration project. Which of the possible project plans you create (security, communications, budget, development, test, purchasing and facilities, deployment, pilot, training, and capacity) depends strongly on the application or applications you are migrating. There must be at least development and deployment plans for the build migration, or, if the build migration is not to be treated as a separate project, then the development, test, and deployment plans for the application migration must take into account the build migration.

The build system migration needs to be considered when defining development, test, deployment, pilot, and training plans for the application migration. Some of those plans (such as the application pilot plan) require the build system to be finished before the application migration can proceed. Scheduling is critical.

If you have a single group migrate the build system for other groups to use in their application migrations, you should develop the build migration plans first. The build system migration will need to provide a single product to a number of customers, and the build migration team will need to take into account the customers' deadlines. If the developers are doing the build system migration as well as the application migration, then those points should be covered in the project plans for the application migration.

The following project plan descriptions will help guide you when you are deciding which project plans are necessary for the build migration project. The individual plans described here are rolled up into a master project plan, which should be baselined during the Planning Phase.

Development Plan

The development plan can use many of the same policies defined in the application migration development plan. For example, when developing the build system, everyone should use source control; script code and makefiles should be reviewed, just like other code. Tools for the migration should be developed. Any compiled utilities in the build system should be rebuilt on a regular basis.

The development plan has to break down the different components of the build system and indicate which components are going to be migrated and which are going to be replaced. This top-down analysis prevents wasted migration efforts.

After the various scripts, applications, and utilities of the build system have been migrated to Windows, they need to be integrated. They need to work together and they need to work with the applications. This kind of development must precede application migration, and it must be stable before the application migration can proceed. Build and integrate the components of the build system frequently.

Budget Plan

The budget plan identifies major expenses for hardware and software required for the migration. The budget should cover things like:

  • One-off copies of each of the UNIX environments for use during planning.

  • Development hardware that matches the typical developer's work environment, and development copies of the UNIX environment selected during the assessment process.

  • Software required for testing (if any)

  • Projected costs for deployment copies of the selected UNIX environment.

    Clearly, the last item is an estimate that assumes the preliminary choices made during the Planning Phase remain viable through development and testing. Apply a margin for error; for budgetary purposes, you might use the deployment cost of the most expensive solution alternative identified as acceptable during planning.

Test Plan

The test plan describes how you are going to do your testing, that is, it describes what kinds of tests will be done, who will do them, where they will be done, and which tools will be used to do them. It does not have to describe the actual test cases or the suite of tests to be used. The test plan should describe the proof-of-concept setup, which can be used as the basis for a test system.

There are a number of functional tests that can be run on a build system:

  • Does it actually produce the requested target? (This is the basic smoke test.)

  • Are the modules included the ones that should be included?

  • Are the targets placed in the correct location?

  • Do all of the options and features work as expected?

  • Does it correctly identify out-of-date modules or targets?

  • Can it do incremental builds?

There are also non-functional tests:

  • How long does a complete build take?

  • How many modules are compiled unnecessarily during the course of a build?

Measures should be defined and included.

Some of these tests will overlap with tests being done by the application migration team to ensure the correctness of their work. The Test Role should also plan who is responsible for doing which tests. The test plans should also ensure that the configuration management mechanisms work. If the build system inserts version numbers or module identifiers, ensure that they are inserted and are correct.

Performance and load testing are rarely concerns for build systems, but they are worth considering in case your system is an exception. There are some techniques for speeding builds that can be passed to the developers. For example, use of makefile include directives instead of recursive calls to make can often speed up builds by creating a more complete dependency graph; unnecessary compiles are not done.
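
A minimal sketch of the difference, with illustrative directory and variable names, looks like this (the two styles would normally live in separate makefiles):

    # Recursive style: the top-level makefile re-invokes make per directory,
    # so no single make instance sees the whole dependency graph.
    all:
            cd lib && $(MAKE) all
            cd app && $(MAKE) all

    # Include style: per-directory fragments are pulled into one makefile,
    # giving make a complete graph and avoiding unnecessary recompiles.
    include lib/module.mk
    include app/module.mk

    all: $(LIB_TARGETS) $(APP_TARGETS)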

Plan and design for the automation of tests, even down to the checks on the build system output. In the later stages of stabilization, the tests will run frequently, and they should run with as little human intervention as possible.
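
As a starting point for that automation, the smoke test can be a short script that runs the basic functional checks listed earlier; the target and file names below are illustrative:

    #!/bin/sh
    # Smoke test: does the build produce the requested target, and do incremental builds work?
    make all || { echo "FAIL: build returned an error"; exit 1; }
    test -f bin/myprog.exe || { echo "FAIL: expected target was not produced"; exit 1; }

    touch src/util.c                 # out-of-date module: only its object should rebuild
    make all || { echo "FAIL: incremental build returned an error"; exit 1; }
    echo "PASS"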

Deployment Plan

Creating a plan for deployment of the build system is often overlooked during the Planning Phase. The build system must be deployed much earlier than the corresponding application migration. Deployment of the build system involves two halves that are often installed in different places: the migrated makefiles, normally installed in the source tree, and the new build system and environment, installed on each developer's computer. These two options are described in more detail in the following list:

  • Deployment of the makefiles may be as simple as a new revision added to the source tree and notification to the developers to "deploy" by updating their working version of the source.

  • Deployment of the build system and environment will involve installing the new environment on each developer's computer.

The details of your deployment plan depend on your build migration scenario. In the "virtual team" scenario, deployment may be quick and informal because the team has converted all of the makefiles during the migration. Deployment in this case will happen automatically; the application migration team will use the new tools and solutions as part of their daily work. The deployment plan covers documenting the build system, including how it will be installed for new employees to the project. In the "distinct entity" scenario, deployment is more formal and gradual. The documentation needs to be prepared before any deployment occurs, and there must be some kind of sign-off or acceptance from the developers who receive the build system.

In both cases, there needs to be documentation to describe:

  • How the solution will be distributed and installed by the different teams of developers.

  • How to use the solution properly.

  • The problems each team may encounter using the new environment and solutions to these problems.

Note that this information is also required in the "virtual team" scenario, but it is usually documented less formally there.

In both scenarios, the solution is probably based on a UNIX environment product. You need to produce clear instructions for this product to be distributed and installed wherever appropriate, and to explain any differences between this process and the process that was used on the previous UNIX system. Wherever possible, these instructions should be automated as well as documented. It should be easy for those using these tools in the production environment to install and employ the solution.

Plan for an easier installation and upgrade path for the application developers. Further, updates of the build system must be made available to them throughout the stabilization of the build system. When creating the deployment plan, consider the following questions:

  • How will the build system be made available to the developers?

  • Will your decision regarding tools have a purchasing implication if you need a commercial UNIX portability environment?

  • How will the initial setup be done? If a developer must install, for example, Services for UNIX (Interix) first, how will it be made available?

  • Does your organization allow developers to install this kind of software themselves?

  • What kind of updates will be done and how will they be distributed?

  • How will developers be notified?

  • If developers make changes to the build system, how will these changes be fed back to the build migration team?

Some organizations need only to establish a single network directory from which developers draw the latest stable build as needed. Other organizations build installer packages or press CD-ROMS. The choice of deployment mechanism depends on the needs of your organization. A very large organization might find it useful to get an OEM license for a UNIX portability environment and craft an installer that does the work for the developer.

Remember that deployment does not stop with the release of the migrated application; new developers will come along who will need to have the build system installed.

Pilot

Pick an application migration to work as the pilot project. Other groups will no doubt be trying the build system at the same time, but the pilot will be responsible for ensuring that the build system works in an actual production environment. By designating one application migration as the pilot, you signal to the project teams that they need to reserve time in their schedules for the final testing of the build system.

Training

One of the reasons for migrating a make-based build system is to minimize training, but the User Experience Role will need to document the parameters of the build system and the kinds of customization and future development allowed in the system. What can developers change to affect their builds? How will they get that information back to the production build staff?

Creating the Project Schedules

The UMPG has most of the information you need for scheduling, but keep the following make-specific considerations in mind:

  • The project schedules have to integrate with the larger application project schedule.

  • In setting up the project schedules, it is best to establish an early milestone for the proof of concept, if you are producing one. It is usually better to get something useful to the developers doing the application migration early instead of delaying all other migration work while trying to create a rock-solid build system.

  • If you are treating the build migration as a separate project, start with the major milestones as defined in the UMPG. For example, there will be a point at which the build system is released to developers (the Release Readiness Approved milestone in the UMPG). If you are treating build migration as a part of the larger application migration project, you will need to establish equivalent milestones that apply only to the build system.

  • If you are uncertain about which UNIX portability environment you need to use, try to leave space in the schedules to redo the component migration.

  • If your deployment plan calls for the creation of installers and sophisticated packages for the build system, remember to factor that time in. The time required for the creation of installers is almost always underestimated; it accumulates through successive release cycles.

Setting Up the Development and Test Environments

You will need both development and test environments. The initial setup for each of them is probably identical, even if the characteristics of the hardware are not. The installation of the test harness will change the test environment to meet your test specifications. The development environment may or may not be similar to the production environment.

The development and test environments for a build system are usually replicas of an application developer's system. The production environment may be significantly different. For example, if you produce embedded systems, the production environment cannot be used for development. If the production environment is significantly different from the development environment, you need to examine whether you need a set of tests that look at the output of your build system. That is, if you are developing for an embedded system, do you need a test environment that will examine the application in the embedded system after being built? If you do, it is usually possible for the group doing the build system migration to make early use of a test environment intended for the application migration group. If that is not going to be possible, the build system migration should use the same specifications and equipment that the application migration Test Role uses.

During a lengthy migration project, the third-party software components involved may be revised: there may be new releases of the UNIX portability software during the project. Be aware of the release cycles of the producers of that software. Keep in mind that you may need to run a second set of tests using the revised software to ensure that the build system is still stable.
