Five Steps to Windows 7 Application Readiness


Streamlining Your Application Analysis and Testing Project

What’s our next project? Test our applications to get ready for Windows 7?  No problem, boss.  We just have about 950 that we need to look at…

How thoroughly you conduct the application compatibility portion of the migration project will determine whether your OS roll-out is reasonably smooth, or whether you’re about to throw your IT team into a firestorm of help-desk calls, finger-pointing, and late nights.

When companies began evaluating Windows Vista a few years back, application compatibility was the issue that stopped some dead in their tracks.  In many cases, critical applications an organization relied upon for key business functions simply were not available for Windows Vista.  In other cases, organizations did not have the budget, or the desire, to license the new version that was designed for Windows Vista.  Finally, in some instances, key applications were custom or in-house development efforts where the original developers were no longer around or otherwise not available to re-engineer the code base.

If you are looking to migrate to Windows 7, you’ll find the situation isn’t as challenging as it was a few years back—most applications designed to work with Windows Vista will work fine with Windows 7, and most ISVs have updated their applications to work with this new generation of the Windows operating system.  So whether you’re migrating from Windows XP or Windows Vista, you’re starting from a better position than organizations did a few years ago.

That said, getting your application portfolio ready for an OS migration can be a major undertaking, but taking the right sequence of steps and making some tough choices to reduce the scope of testing can make the chore a little less daunting.

Why do applications break in Windows Vista and Windows 7?

So what changes were made in Windows 7 (and Windows Vista) that caused applications designed for Windows XP to ‘break’?  To be sure, the engineering teams responsible for Windows Vista and Windows 7 didn’t take the issue lightly.

The changes to Windows were made to improve security, reliability, performance, and usability, and in some cases to remove legacy components that have simply reached the end of their useful life.  We won’t take the time to catalog all of the changes in this article, but those most significant to application compatibility include:

  • User Account Control (UAC) and standard user accounts.  In developing Windows Vista, the engineering team set out to enable most organizations to deploy their users as standard users and reserve administrator privileges for those who need them—IT professionals.  Adopting what we used to call the ‘least-privileged user account’ principle for client PCs helps prevent intrusive malware, reduces end-user configuration errors, and prevents unauthorized applications from being loaded on the machine.  In the past, an application could write to protected registry settings, modify the kernel, and perform other similarly invasive actions.  Unfortunately, this level of freedom came with a price—namely, security.  Windows now restricts which parts of the OS an application is able to change—limiting the impact any malware can have—but applications written to expect the old behavior will need to be modified or shimmed to function in Windows 7.
  • Hard operating system version checks.  Applications performing hard version checks for the Windows XP operating system version are also affected.  While it makes some sense for a developer to tie support and functionality to the operating system version originally used in testing, it also assumes that users will never install the application on a newer OS, or apply a newer Service Pack to the same OS.  This is a relatively easy issue to mitigate with compatibility modes or fixes, but you will see it surface frequently when moving from Windows XP to Windows 7.  (The sketch below illustrates both of these legacy patterns.)
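
To make those two patterns concrete, here is a minimal, hypothetical sketch of the kind of legacy code involved (the Contoso\LegacyApp key and the value names are invented for illustration).  Under Windows 7, the HKEY_LOCAL_MACHINE write fails for a standard user (or, for a 32-bit application with no manifest, is silently redirected to the per-user VirtualStore), and the exact-match version test fails because Windows 7 reports version 6.1:

```cpp
// Illustrative sketch of two legacy patterns that break under Windows 7.
// The Contoso\LegacyApp key and the value names are hypothetical.
#include <windows.h>
#include <stdio.h>

int main()
{
    // Pattern 1: writing per-machine settings under HKEY_LOCAL_MACHINE.
    // A standard user gets ERROR_ACCESS_DENIED here; a 32-bit application
    // with no manifest instead has the write silently redirected to the
    // per-user VirtualStore, so the value never shows up under HKLM.
    HKEY hKey = NULL;
    LONG rc = RegCreateKeyExW(HKEY_LOCAL_MACHINE,
                              L"SOFTWARE\\Contoso\\LegacyApp", 0, NULL, 0,
                              KEY_SET_VALUE, NULL, &hKey, NULL);
    if (rc == ERROR_SUCCESS)
    {
        DWORD enabled = 1;
        RegSetValueExW(hKey, L"FeatureEnabled", 0, REG_DWORD,
                       reinterpret_cast<const BYTE*>(&enabled), sizeof(enabled));
        RegCloseKey(hKey);
    }
    else
    {
        printf("HKLM write failed with error %ld\n", rc);
    }

    // Pattern 2: a hard version check pinned to Windows XP (5.1).
    // Windows 7 reports version 6.1, so this test fails and the
    // application refuses to run even though it might work fine.
    OSVERSIONINFOW osvi = { sizeof(osvi) };
    GetVersionExW(&osvi);
    if (osvi.dwMajorVersion != 5 || osvi.dwMinorVersion != 1)
    {
        printf("This application requires Windows XP.\n");
        return 1;
    }
    return 0;
}
```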

The five steps to managing application readiness for Windows 7

Like most big undertakings, the challenge isn’t insurmountable if you take the time to deconstruct the problem into logical, manageable tasks. 

An application readiness project falls into three major phases: collecting, analyzing, and mitigating.  However, there are a couple of additional steps we would like to call out: consider virtualization technologies before you commence the testing regimen, both to reduce the amount of testing required and to potentially improve your desktop infrastructure so that future migrations are more manageable; and sequence the testing phase to align with your roll-out strategy.

If you’re ready to dive in, let’s get started.

Step 1:  Collect an application inventory

The first step is to take an application inventory to understand exactly where you stand—and believe us, at this point you’ve probably just realized the problem is bigger than you thought.  But more importantly, you’ve just turned an ‘unknown’ into a ‘known’ and are in a better position to scope the testing and readiness program and understand the challenges ahead.

Fortunately, there are a number of tools available that can help automate the process.  Your client management software might have this capability built in, or you can use the Application Compatibility Toolkit, available as a free download.  If you already have another inventory mechanism, such as System Center Configuration Manager or the Asset Inventory Service, you can use that as a starting point.
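
If you want a quick per-machine snapshot to supplement those tools, most installed applications register themselves under the standard Uninstall key in the registry.  The sketch below reads that key and prints each application’s display name and version; it is only a starting point (it reads a single registry view and ignores per-user installs) and is not a substitute for the inventory collectors mentioned above:

```cpp
// Quick per-machine inventory sketch: enumerate the Uninstall registry key
// and print the DisplayName / DisplayVersion of each installed application.
// Real inventory tools gather far more context than this.
#include <windows.h>
#include <stdio.h>

int wmain()
{
    const wchar_t* root = L"SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall";
    HKEY hUninstall = NULL;
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, root, 0, KEY_READ, &hUninstall) != ERROR_SUCCESS)
        return 1;

    wchar_t subkey[256];
    for (DWORD i = 0; ; ++i)
    {
        DWORD cchSubkey = 256;
        if (RegEnumKeyExW(hUninstall, i, subkey, &cchSubkey,
                          NULL, NULL, NULL, NULL) != ERROR_SUCCESS)
            break;  // no more entries

        HKEY hApp = NULL;
        if (RegOpenKeyExW(hUninstall, subkey, 0, KEY_READ, &hApp) != ERROR_SUCCESS)
            continue;

        wchar_t name[512] = L"";
        wchar_t version[128] = L"";
        DWORD cbName = sizeof(name);
        DWORD cbVersion = sizeof(version);
        RegQueryValueExW(hApp, L"DisplayName", NULL, NULL,
                         reinterpret_cast<BYTE*>(name), &cbName);
        RegQueryValueExW(hApp, L"DisplayVersion", NULL, NULL,
                         reinterpret_cast<BYTE*>(version), &cbVersion);
        RegCloseKey(hApp);

        if (name[0] != L'\0')
            wprintf(L"%ls\t%ls\n", name, version);
    }

    RegCloseKey(hUninstall);
    return 0;
}
```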

To make the inventory most useful downstream, capture more than just a list of applications—you’ll want to understand more detail on who is using an application, what their role is, and how important that application is to the user.  With this information, you can prioritize those mission-critical applications and eliminate unused or redundant applications (more on that in the next step).

Also, there’s a side benefit—identifying widely used applications that you don’t currently manage.  You’ll want to bring these into your orbit so you can ensure they are properly managed, on the approved version, and up to date with the required software updates.

Step 2: Analyze your applications

How many applications do you currently support that have been replaced or have otherwise fallen out of favor with business users?  If you’re like most organizations, a sizable number of them—in some cases most of them.  So once you’ve done your assessment and have a good ‘lay of the land,’ the next step is to scrub your supported application list and filter it down before you undertake the time-consuming—and costly—process of regression testing.

Set appropriate goals for your application portfolio.  How many total apps do you want to support?  At what point does an app elevate to “managed” status? 

After you set your goals, it’s time to find the low-hanging fruit and narrow down the applications that need testing.

  • Eliminate redundant and unused applications.  You’ll undoubtedly find that you have several applications that perform the same function.  Now is a good time to standardize on a single application per function and eliminate those that have been made obsolete.  One tip here is to try to map application dependencies, as you may need to support a legacy version of one application to keep another one supported by the ISV.  And of course, drop those that are rarely or never used.  Not only will you make testing easier, you might save on licensing expense as well.
  • Remove multiple versions of the same application and standardize on the most current (see the sketch after this list).  In almost all cases, the newest version performs best and is the most secure and reliable.  Again, watch for application-to-application dependencies.
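
The filtering itself is mechanical once the inventory data is exported.  Here is a small sketch that keeps only the newest version of each application from an inventory list; the application names and versions are hypothetical, and the dotted-version comparison is deliberately simplistic:

```cpp
// Sketch: collapse an inventory down to one (newest) version per application.
// Application names and versions are hypothetical; a real inventory record
// would also carry owner, business function, and usage data.
#include <cstdlib>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Parse "9.0.30729" into {9, 0, 30729} so versions compare numerically.
static std::vector<int> ParseVersion(const std::string& version)
{
    std::vector<int> parts;
    std::istringstream in(version);
    std::string piece;
    while (std::getline(in, piece, '.'))
        parts.push_back(std::atoi(piece.c_str()));
    return parts;
}

int main()
{
    // Hypothetical inventory rows: application name and installed version.
    std::vector<std::pair<std::string, std::string>> inventory = {
        { "Contoso Reporting", "3.5" },
        { "Contoso Reporting", "4.0" },
        { "Fabrikam CAD",      "2007.2" },
        { "Fabrikam CAD",      "2009.1" },
    };

    // Keep only the newest version of each application.
    std::map<std::string, std::string> keep;
    for (const auto& row : inventory)
    {
        auto it = keep.find(row.first);
        if (it == keep.end() || ParseVersion(row.second) > ParseVersion(it->second))
            keep[row.first] = row.second;
    }

    for (const auto& app : keep)
        std::cout << app.first << ": standardize on version " << app.second << "\n";
    return 0;
}
```

In practice you would run this kind of reduction against the data exported from your inventory tool, with the application-to-application dependency checks layered on top.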

Collect information from business users to help prioritize those apps that are mission critical, and determine which departments are using which apps.  This will be useful when you sequence your testing process; you’ll want to align the timing of your testing to your staged roll-out of the new desktop image.

Step 3:  Assess incompatibilities and mitigation options

No doubt you will find some applications that need some work to get them ready for Windows 7.  At this point you have several options:

  1. Replace the non-compatible application with a new version.  This is certainly the most reliable method but, unfortunately, the most expensive as well.  If the application is mission-critical or otherwise strategic to operations, this is the way you’ll want to go.
  2. Create shims for your existing applications.  Shims are small pieces of code inserted between the application and Windows that modify calls to the underlying OS—for instance, to trick your application into thinking the user is running as an administrator while still maintaining standard user mode.  You will have some management overhead, since you’ll need to maintain a shim database, but this approach will remedy many application problems.  It is the more cost-effective route, and might be the only option if the application vendor is no longer around.  One caveat—many vendors will not support shimmed applications.
  3. Use Group Policy to change the offending behavior of the application.  Like shimming, this will usually take care of the compatibility problem, but it carries some downsides as well.  Essentially, this approach uses policy to disable a particular feature or function that is causing the application to falter.  Unfortunately, in many cases these functions involve the security of the underlying system, so the trade-off is significant.  Likewise, the application must expose Group Policy settings to enable this manageability.

For custom or in-house developed applications, you can of course modify the code.  This isn’t always an option, but if it is, there are great resources to help—the Application Compatibility Cookbook for changes made from Windows XP to Windows Vista, and the Application Quality Cookbook for changes made from Windows Vista to Windows 7. Both are free guides that help developers recode an application for native compatibility.
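
To illustrate the recoding option, here is a minimal sketch (not taken from either cookbook) of one common fix: replacing a hard ‘must be Windows XP’ check with a ‘Windows XP or later’ check using VerifyVersionInfo.  The analogous fix for the UAC pattern shown earlier is to keep per-user state under HKEY_CURRENT_USER or the user’s profile rather than under HKEY_LOCAL_MACHINE.

```cpp
// Recode sketch: replace a hard "must be Windows XP" check with a
// "Windows XP or later" check, so the same binary keeps working on
// Windows Vista (6.0), Windows 7 (6.1), and later versions.
#include <windows.h>
#include <stdio.h>

bool IsWindowsXPOrLater()
{
    OSVERSIONINFOEXW osvi = { sizeof(osvi) };
    osvi.dwMajorVersion = 5;
    osvi.dwMinorVersion = 1;

    DWORDLONG mask = 0;
    VER_SET_CONDITION(mask, VER_MAJORVERSION, VER_GREATER_EQUAL);
    VER_SET_CONDITION(mask, VER_MINORVERSION, VER_GREATER_EQUAL);

    return VerifyVersionInfoW(&osvi,
                              VER_MAJORVERSION | VER_MINORVERSION,
                              mask) != FALSE;
}

int main()
{
    printf("Windows XP or later: %s\n", IsWindowsXPOrLater() ? "yes" : "no");
    return 0;
}
```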

Step 4: Prepare for the OS deployment and new application delivery options

The start of an OS migration project is a great time to rethink how you package and deliver applications to your end users.  Virtualization technologies have opened up options that simply weren’t available for the last major OS migration; you should consider different models for desktop image and application delivery before beginning the testing process.  You might find that the savings in application testing and readiness more than offset the cost of implementing a virtualized environment—while providing a more flexible, easier-to-manage infrastructure for future efforts.

There are two major forms of virtualization that can address application compatibility issues—application virtualization and OS virtualization.  Application virtualization separates the application layer, including the application’s files and registry settings, from the OS and packages the application for streaming.  OS virtualization comes in a few different forms, but essentially creates an OS image that runs independently of the native image on the machine.

Virtualizing your application portfolio provides a number of benefits for manageability and flexibility, but one key advantage is that you minimize application-to-application conflicts.  This type of conflict arises, for instance, when you need to run two versions of the same application simultaneously—common in training situations where you want to compare the process of conducting a specific task in an old versus new application, or when the finance department is migrating to a newer version of their accounting software but needs access to the old one to close the fiscal year.

A more general use of virtualization to overcome application compatibility issues is to create a virtual image that contains a critical application and the operating system it is designed to run on.  There are several tools to enable OS virtualization.  Virtual PC and Windows XP Mode, available for Windows 7 Professional and higher SKUs, provide an unmanaged virtual image that will run applications intended for Windows XP but not compatible with Windows 7.  Microsoft Enterprise Desktop Virtualization (MED-V), part of the Microsoft Desktop Optimization Pack (MDOP), enables a virtual machine to be easily provisioned, configured, and managed, using policies to determine how the physical and virtual environments interact with one another.

Of course, adopting an alternative computing model for your client PCs is an undertaking in its own right, but this would be the time to assess whether the benefits to your organization—greater flexibility and manageability—outweigh the additional effort to adopt this model for PC provisioning.

Step 5: Sequence your testing, piloting and deployment efforts

Use your prioritization from step 2 to sequence your testing efforts, so you can begin the staged roll-out with the applications and user groups that are ready first while conducting the remaining testing in parallel.

As you begin the testing process, you can use two approaches—static and dynamic analysis; while static analysis is relatively new, a thorough testing regimen will use both. 

  • Static analysis looks at the structure of the application and identifies issues that will arise, either at installation or at runtime.  There are a number of tools and services that can help automate this process and quickly highlight the obvious problems (a small example follows this list).
  • Dynamic analysis looks at the behavior of the application at runtime, and is what is traditionally done in regression testing.  Here, you are “smoke testing” the application in your specific environment—replicating the experience a variety of users will have with their hardware and the other key applications and drivers.
  • Finally, you will want to get a handful of real users running the applications and looking for any strange behavior that hasn’t surfaced in the structured testing.  The promise of keeping the new PC in exchange for participating can be a great motivator here!
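
To give a flavor of what a static pass can automate, the sketch below flags an executable that carries no embedded application manifest, which is often a sign of a pre-Windows Vista binary that will fall back on UAC file and registry virtualization.  It is a deliberately tiny example (it ignores external .manifest files) and is no substitute for the tools mentioned above:

```cpp
// Static-analysis sketch: flag an executable with no embedded application
// manifest (resource type 24 = RT_MANIFEST, resource ID 1). Such binaries
// often predate Windows Vista and rely on UAC file/registry virtualization.
#include <windows.h>
#include <stdio.h>

int wmain(int argc, wchar_t* argv[])
{
    if (argc < 2)
    {
        wprintf(L"usage: manifestcheck <path-to-exe>\n");
        return 1;
    }

    // Load the image for resource inspection only; none of its code runs.
    HMODULE hModule = LoadLibraryExW(argv[1], NULL, LOAD_LIBRARY_AS_DATAFILE);
    if (hModule == NULL)
    {
        wprintf(L"could not open %ls (error %lu)\n", argv[1], GetLastError());
        return 1;
    }

    HRSRC hManifest = FindResourceW(hModule, MAKEINTRESOURCEW(1), MAKEINTRESOURCEW(24));
    wprintf(L"%ls: %ls\n", argv[1],
            hManifest ? L"embedded manifest found"
                      : L"no embedded manifest - review for UAC virtualization");

    FreeLibrary(hModule);
    return hManifest ? 0 : 2;
}
```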

Once you are ready to start rolling out into production, identify the people for whom a migration makes sense first—based on specific capabilities they need, or to minimize business disruption.  Migrating a group of expert users will be easier than dealing with the help desk calls from task workers who are now looking at an unfamiliar screen and don’t know what to do with it.  Next, identify which applications these groups will need to perform their work.  Start with groups that are minimally affected, or unaffected, by application compatibility based on the applications they use; this will enable you to validate the deployment process and the operating system itself.  As you work through your application portfolio and more groups are unblocked from incompatible applications, target those groups next.
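
If it helps to prototype that ordering, here is a small sketch, with invented group and application names, that ranks deployment groups by how many of their applications are still blocked by unresolved compatibility issues:

```cpp
// Sequencing sketch: rank deployment groups by how many of their
// applications are still blocked by unresolved compatibility issues.
// Group and application names are invented for illustration.
#include <algorithm>
#include <iostream>
#include <set>
#include <string>
#include <vector>

struct DeploymentGroup
{
    std::string name;
    std::vector<std::string> applications;
};

int main()
{
    // Applications still known to be incompatible and not yet mitigated.
    std::set<std::string> stillBlocked = { "Fabrikam CAD 2007", "Contoso Reporting 3.5" };

    std::vector<DeploymentGroup> groups = {
        { "IT",          { "Contoso Reporting 4.0" } },
        { "Sales",       { "Contoso Reporting 3.5" } },
        { "Engineering", { "Fabrikam CAD 2007", "Contoso Reporting 3.5" } },
    };

    // Count blockers per group, then migrate the least-blocked groups first.
    auto blockers = [&](const DeploymentGroup& g) {
        int count = 0;
        for (const auto& app : g.applications)
            if (stillBlocked.count(app) > 0)
                ++count;
        return count;
    };

    std::stable_sort(groups.begin(), groups.end(),
                     [&](const DeploymentGroup& a, const DeploymentGroup& b) {
                         return blockers(a) < blockers(b);
                     });

    for (const auto& g : groups)
        std::cout << g.name << ": " << blockers(g) << " blocking application(s)\n";
    return 0;
}
```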

One final word of caution—avoid taking the process too far.  If you let the scope creep from application compatibility to a full-blown application quality project, you might never finish.  Accept the goal of fixing bugs that prevent work from being done, and avoid trying to eliminate every bug that exists—you undoubtedly have better use for your time!

More resources

Readying your application portfolio for a migration to Windows 7 is a major undertaking, but fortunately there are a number of tools and an abundance of guidance to make the process more streamlined and manageable.  We have just scratched the surface in this article; if you’re ready to dive deeper and get the process rolling, a great next step is to visit the Application Compatibility zone on the Springboard Series on TechNet, download the Application Compatibility Toolkit, and start building your project plan!

You can also find helpful information and guidance on Chris Jackson’s Blog, and in the Windows 7 Application Compatibility white paper, which covers Understanding Application Compatibility and Understanding Application Compatibility in Your Environment.  You can learn more on the virtualization technologies mentioned above at www.microsoft.com/mdop and on the MDOP TechNet page.

To learn more about Windows 7 or any of the Windows Client technologies, please visit www.microsoft.com/springboard for the latest in information, guidance, and community connections.