Application Compatibility

Planning Your Application Compatibility Project

Chris Jackson

 

At a Glance:

  • Defining vision and scope
  • Building a team
  • A project’s three key phases
  • Remediation approaches

Contents

Defining a Vision and Scope: Windows
Defining a Vision and Scope: Application Management
Building a Team
Collect
Analyze
Test
Remediate
Conclusion

My job takes me around the world to work with customers who are in the process of migrating to the latest version of Windows. Often, the biggest obstacle they encounter is application compatibility, which raises the following questions: How much is it going to cost to fix things? How long is it going to take? What do I need to know? How do I accelerate the process if it's taking too long?

An application compatibility project is similar to a software development project: cost and time estimates are refined over time as you discover what you are facing. I have seen people propose time and cost estimates based on nothing more than the number of applications, but the best such a formula can offer is an average. While the average is often close, almost nobody actually hits it; most organizations do notably better than average, or disappointingly worse.

So, let's talk about how to go about the planning phases of an application compatibility project—understanding cost estimates early, refining them over time, and minimizing them throughout. I will share with you the tips and tricks our most successful customers have used, hopefully saving you rework in the process. I especially want to maximize the productivity of participants who aren't formally part of the project team, such as the end users who test applications and the business owners who prioritize them, since you'll need their cooperation to be successful.

Defining a Vision and Scope: Windows

The going can get tough when you're dealing with application compatibility. When you find the pocket of Visual Basic 3.0 applications that are mission critical, broken, and without source code, it's easy to get discouraged. You'll want a motivating rallying cry. You'll want overarching goals to guide you in making hard, strategic decisions.

For example, if you are fixing an application that requires admin rights, and a major goal of upgrading Windows is to enable running as a standard user, then applying the RunAsAdmin shim is not a good mitigation. If your primary goal is to make finding, organizing, and using information easier, and you need to fix a broken application using the "buy a new one" fix, you'll take into consideration the information visualization and shell integration features of the products you consider.

After you write down your vision, define the scope. If your vision includes moving a percentage of desktops to standard users, define that percentage. This may sound obvious, but it is frequently overlooked, and defining the scope explicitly is strongly correlated with success.

Defining a Vision and Scope: Application Management

An operating system migration is the perfect time to set goals for application management. You're touching every application in your enterprise—what a great time to think about how you manage your application portfolio.

Review your application management goals from two perspectives:

  • Agility How rapidly can you respond to technology change as a competitive advantage? How good are you at testing software to minimize both risk and time to production? How well do you manage the lifecycle of applications? Do you understand what business problem your software is solving, and is it the optimal software solution? You want to use software deliberately and strategically.
  • Productivity Are you maximizing user productivity? Do you provide a familiar, modern, and consistently high-quality software experience? Do you have quality standards to manage productivity and helpdesk costs? Can your users collaborate using the same software platforms? You want to leverage software to enable business productivity.

The single biggest application management challenge almost everywhere I go is application proliferation. Some organizations have close to 100,000 pieces of software. That's completely unmanageable! Think about it—you could hire a 50-person team, spend only one hour testing each application, and it would still take an entire calendar year to finish: 100,000 hours of testing split across 50 testers is 2,000 hours each, a full working year. An application compatibility project is a scale project, and one of the biggest wins is to minimize the scale.

Whatever your greatest issue is, write down how you will improve. You probably won't go from 100,000 applications to 500 this time. You probably won't get rid of all the old VB6 applications with buttons so big they take two hands to push. Nevertheless, you should set objectives, incorporate them into your vision, and quantify them in your scope.

Building a Team

With your vision and scope in place, it's time to assemble the team to do the work. Crucial roles (a person can fill more than one role) include:

  • Project Manager coordinates a team spanning many disciplines and organizations.
  • Business Coordination Lead works with business application owners to obtain data on application priority, locate and coordinate user acceptance testing, and coordinate with pilot users (this role is frequently executed by the project manager).
  • Technical Lead works with developers to identify training gaps and with debuggers to resolve complex compatibility issues.
  • Lab Manager ensures that the team and the user base have a comfortable chair and a current OS image on which to test software.
  • Application Research Team determines current support status for third-party software (frequently outsourced).
  • Testing Team runs install and smoke tests to ensure basic compatibility before user testing (frequently outsourced).
  • Mitigation Team resolves compatibility issues that surface during testing.
  • Packaging Team creates installation packages after testing is completed (frequently outsourced).

The entire team will need to work closely with each other and with anyone else involved in the operating system migration. For example, you'll want current images and up-to-date group policies and configurations.

With a vision and scope defined and a team identified, it's time to start planning the work you intend to do. The process can be divided into three phases:

  • Collect What do I have? (Current State)
  • Analyze What do I want? (Desired State)
  • Test and Remediate What works?

Collect

Before you can plan your work, you must understand what software you have, the current state. The more sophisticated your software management tools, the more visibility you have into your current state. If you already have an application inventory from your software management tool, use it. If not, the free Application Compatibility Toolkit (ACT) from Microsoft can provide an excellent inventory. The tool you use doesn't matter; what does matter is that your tool gives you all the data you need to answer the questions that will come up later in the process. To help you structure your inventory, here's what you need to know:

  • Division or organization Where does the user work? This information helps identify which software solves which business problem, to either drive prioritization or identify redundancy. I've seen this collected in a number of ways, using machine name, IP subnet, and so forth.
  • Role What does the user do? This information helps identify what the software is used for, again to inform prioritization or identify redundancy. This data is typically harder to locate unless it is already encoded someplace, such as in Active Directory.
  • Usage How many users have the software installed? Even better, how much are they using it? It's easier to retire an application if you know nobody has it installed, or that the people who have it installed never use it.

Later, you'll want to help the people testing the applications with the following data:

  • OS/Patch Level You need details of the user configuration where the software is (presumably) working so the tester can compare the working case to the broken case when things go wrong.
  • Subject Matter Experts You want your tests to represent real user scenarios, for which you'll need real users. If you haven't identified SMEs, perhaps the usage data can help identify candidates.
  • Other Applications Installed Sometimes, application issues are caused by conflicts between applications, which you'll want to be able to identify.

Finally, you'd like to have data in your software inventory to support the actual deployment. If you are planning to deploy by role (see the sidebar "Deploying by Role" to learn why this is a good idea), then tagging the applications by role helps guide you to test those applications first. Similarly, if you are planning to deploy by division or geography, you'll want to begin with the applications used by the people you'll deploy to first.
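
To make this concrete, here is a minimal sketch, in Python, of what one record in such an inventory might look like. The field names are hypothetical illustrations of the data described above, not the schema of ACT or any other tool.

from dataclasses import dataclass, field

@dataclass
class InventoryRecord:
    """One application as observed in the environment (illustrative fields only)."""
    name: str                       # e.g., "Contoso Timesheet"
    version: str                    # needed to map to a vendor support statement
    division: str                   # where the users work (drives prioritization)
    role: str                       # what the users do (drives deploy-by-role ordering)
    install_count: int              # how many users have it installed
    active_users: int               # how many actually use it
    os_patch_level: str             # the known-working configuration, for testers
    subject_matter_experts: list[str] = field(default_factory=list)
    coinstalled_apps: list[str] = field(default_factory=list)  # to spot app-to-app conflicts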

Do you completely understand your current state? Great—skip to the next section. If not, let's choose a tool to get you there.

You may need raw numbers early to build the business case for the application compatibility effort. Microsoft offers an agentless tool, the Microsoft Assessment and Planning Toolkit, to help you get some rough numbers. While typically not sufficient to drive the entire project, this tool does provide some indication of the amount of effort you'll need to invest.

Deploying by Role

Our most successful customers deploy Windows Vista by role. This works particularly well when you have structured task worker roles. Roles have more commonality in software usage than any other easily distinguishable grouping. You can start with a structured task worker role, such as the call center. This group may have a relatively small number of applications, perhaps a dozen. You can test all of these applications, finish, and be ready to deploy this role. You can brag to management, and move on to the next role. Team morale is high because of the success, management is happy because progress is visible, and the team gets practice doing the work on manageable numbers before tackling roles with lots of software, such as information worker roles.

To obtain detailed data, you'll need an agent. Collecting an inventory is non-trivial, as there are many ways to install applications (MSI, xcopy, setup.exe, and so forth). You want to find all of the applications that matter to you. Most tools surface rich client applications, but what about the Web applications, ActiveX controls, and Microsoft Office applications? I haven't found a tool that surfaces everything.

If you don't already have an application inventory, the Application Compatibility Toolkit (ACT) is best of breed for surfacing desktop applications in a way that matters to you: giving you an application and a version that you can map to a support statement.

By the way, with regard to ACT and its inventory, I want to clear up one misconception: the idea that the Compatibility Evaluators are a substitute for testing. You will not eliminate the need to test by using compatibility evaluators, so weigh the cost of this data against its value when deciding how much to collect. Evaluators are tuned for performance (so they can be run in a production environment), and they're explicitly not designed to catch every possible problem (this would slow down users too much, were it even possible).

Compatibility evaluators will effectively report deprecations (features that have been removed from the operating system, such as a GINA). The Internet Explorer compatibility evaluator is fantastic, except that it only runs on Internet Explorer 7 and above (and, since you're not likely to deploy before you test, that pretty much limits its use to lab or pilot machines). Because the UAC evaluator doesn't catch much that file and registry virtualization doesn't automatically fix, it's a rather coarse predictor of additional bugs when running as a standard user. In summary, the things the evaluators catch are all genuine compatibility bugs that you should address, but because of their limited scope, they're a poor predictor of whether, and how badly, an application as a whole is broken.

Weigh this value against the costs. With sophisticated software deployment systems in place, it is easier and cheaper to deploy agents. I also have to price out a server to support the data collection, as well as the time to collect the data. With an average of 17 seconds to process each log, I could run a collection on 1,000 machines for 3 days, uploading every 8 hours, and process the data in under 2 days. But if I wanted to collect from every machine in a 200,000-seat enterprise for 30 days, uploading every 8 hours, I'd have to wait close to 10 years (!!!) for that data to process.
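
That arithmetic is worth rerunning with your own numbers. Here is a minimal sketch, assuming serial log processing at 17 seconds per log and one upload per machine every 8 hours (both figures come from the scenario above):

# Back-of-the-envelope check of the log-processing estimates above.
SECONDS_PER_LOG = 17
LOGS_PER_MACHINE_PER_DAY = 24 // 8  # uploading every 8 hours

def processing_time_days(machines: int, collection_days: int) -> float:
    """Serial time, in days, to process all logs from one collection run."""
    logs = machines * collection_days * LOGS_PER_MACHINE_PER_DAY
    return logs * SECONDS_PER_LOG / (60 * 60 * 24)

print(processing_time_days(1_000, 3))           # ~1.8 days: fine for a pilot
print(processing_time_days(200_000, 30) / 365)  # ~9.7 years: clearly impractical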

If you're collecting inventory anyway, it makes sense to collect compatibility data from a subset of computers, but don't invest endlessly in it. I've seen too many massive cost estimates for collecting this data.

Analyze

Once you know what you have, you need to determine what you want—your desired state inventory. This requires collaboration between business and IT, and a number of tough choices. As a result, I've seen the process proceed along the lines of what's shown in Figure 1 far too many times, where you only figure out what you want after you've spent a lot of money on things you don't want.

Figure 1 Unwise application analysis—what works is not necessarily what you want.

The problem with deciding what you want based on what works is that you end up keeping (and supporting) redundant applications just because they happen to work, or deciding to eliminate an application after you have already spent money researching it.

It makes more sense to figure out what you want first, and then invest your money researching and testing only those applications that you have determined add value to your business.

To make the application analysis process more productive, we recommend that you set some explicit goals and measure your progress against them. Recommended goals include:

  • Maximum number of applications Set an explicit goal for the number of applications you'd like to support.
  • Managed application tolerance Set a tolerance level for when an application becomes a "managed" application, based on both business priority and number of users (see the sketch after this list).
  • Management level In a decentralized organization, set organization-wide goals for application management, affording business units the autonomy to implement this guidance as appropriate for their business.
  • Commercial software versioning standards It can be prohibitively expensive to always buy the latest version of all of your software, but you incur risk running very old software. Consider setting standards of n (current version) or n-1 (previous version) for your business-critical applications.
  • Supported platforms Limiting the platforms you support helps you manage complexity. While you don't want to spend all your effort running in place (creating new versions whose only feature is compatibility with the latest platform), upgrading everything for every platform becomes prohibitively large and expensive.
  • Application prioritization goals People have very different perspectives on what "business critical" means. You'll want to set a percentage goal or some objective criteria to make this clear.
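
To show how a goal like the managed application tolerance can be made operational, here is a minimal sketch; the thresholds and field names are hypothetical assumptions, not recommendations.

# Illustrative tolerance check: an application becomes "managed" when it
# crosses either a business-priority bar or a user-count bar.
PRIORITY_THRESHOLD = 2     # hypothetical: 1 = business critical, 2 = important, 3 = nice to have
USER_COUNT_THRESHOLD = 50  # hypothetical cutoff; set this from your own scope

def is_managed(business_priority: int, user_count: int) -> bool:
    return business_priority <= PRIORITY_THRESHOLD or user_count >= USER_COUNT_THRESHOLD

# A business-critical app with only 5 users is still managed:
assert is_managed(business_priority=1, user_count=5)
# A low-priority app with 12 users is not:
assert not is_managed(business_priority=3, user_count=12)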

With these goals in mind, it's time to involve folks on the business side of things—the people who know how and why the software is used. For smaller organizations, this can be one-on-one. For larger organizations, you can use SharePoint portals to collect data to inform the analysis process. While you want to simplify, you also want to make sure you can capture data like, "we need to keep the previous seven versions of this tax software working somewhere, by federal law."

One important practice: optimize for the time of people who aren't officially on your team. Business owners are typically helping you with little to no immediate reward.

How do you collect the information that is already known about your applications? Information about commercial software is shared on the Windows Compatibility Center, but you need to match up your list with the data on this site. The Application Compatibility Toolkit 5.5 will automate this matching.
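
The matching itself is mechanical once names and versions are normalized. A minimal sketch, with invented reference entries (ACT 5.5 automates the real thing against the site's data):

# Hypothetical matching of inventory entries against a downloaded list of
# known support statements, keyed by normalized (name, version).
def normalize(name: str, version: str) -> tuple[str, str]:
    # Keep only the major.minor portion of the version; trim and lowercase names.
    major_minor = ".".join(version.split(".")[:2])
    return (name.strip().lower(), major_minor)

# Assumed shape of the reference data; these entries are invented.
support_statements = {
    ("contoso timesheet", "3.1"): "Compatible",
    ("fabrikam ledger", "2.0"): "Not compatible",
}

def lookup(name: str, version: str) -> str:
    return support_statements.get(normalize(name, version), "Unknown - research needed")

print(lookup("Contoso Timesheet", "3.1.2"))  # Compatible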

This principle of cutting early and often applies throughout. A high-value resource eliminating an application in 30 seconds is less expensive than a low-value resource spending an hour researching it. Remove obvious noise before you collect data from business owners, and only research applications that business owners helped determine you want to keep around. The preferred approach to researching applications is shown in Figure 2.

Figure 2 Filter down the list of applications early, when it’s inexpensive.

How well does this work? With one customer, we took an inventory of around 1,200 applications from 54 different computers. Over a lunch hour, we removed obvious noise according to their business rules. We narrowed down the list to about 450 applications, and could probably narrow it down even more with additional time. That's over 700 unimportant applications removed in an hour—a significant cost savings.
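
As a sketch of what removing "obvious noise" can look like in practice (the patterns below are hypothetical business rules; yours will differ):

import re

# Hypothetical rules: drop inventory entries that are clearly not applications
# anyone needs to test (drivers, runtimes, updates, codecs).
NOISE_PATTERNS = [
    r"driver", r"hotfix", r"security update", r"redistributable",
    r"runtime", r"codec", r"\bkb\d{6,7}\b",
]
NOISE_RE = re.compile("|".join(NOISE_PATTERNS), re.IGNORECASE)

def remove_obvious_noise(inventory: list[str]) -> list[str]:
    """Keep only the entries that don't match any known-noise pattern."""
    return [app for app in inventory if not NOISE_RE.search(app)]

apps = ["Contoso Timesheet 3.1", "Intel Graphics Driver", "Security Update KB123456"]
print(remove_obvious_noise(apps))  # ['Contoso Timesheet 3.1']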

You can now refine your cost estimate based on your desired state. You can further inform that estimate with known compatibility state for commercial software, and perhaps use a static analysis tool to help understand what you expect to work or to have issues.

Test

Next, you need to determine who is going to be involved in the testing process. Considerations for the team include:

  • Internal team makeup Have a strong project manager and a technical expert leading the team internally, ensuring you can coordinate several roles (testers, debuggers, development teams, users, business owners, and so on).
  • Partners involved Many organizations involve partners to assist in the process. Think about where they fit (targeted skill augmentation, staff augmentation, factory approach, and others) and how they'll integrate with your users for business functionality testing.

You'll also want to plan the technology you intend to use. Consider the following technologies:

  • Virtual machines Undo disk and snapshot capabilities will save you a lot of time (for example, when recovering from "first run" bugs or from bugs that permanently destroy the state of the machine).
  • Terminal Services/Remote Assistance These are very useful for user testing, providing an easy way to give people access to a Windows Vista computer quickly. And Remote Assistance helps with bug reproduction and investigation.
  • Pilot machines Giving users first access to your hotrod new laptops in exchange for testing apps can be very motivating.

Next, map out the testing process. Figure 3 shows a skeleton workflow:

Figure 3 The application testing process

Do everything you can to ensure that nothing obvious breaks before you involve the user. There is nothing more frustrating than finally convincing a reluctant user to come to your lab, only to have the installer blow up in their face.

Likewise, make sure testers don't end up testing something you're unwilling to fix. If support is required, only test supported versions.

Remediate

To make testing efficient, you'll want to test with a fix in mind. Debug a failing application until you determine which remediation bucket it fits into; once you have a bucket, stop.

Of course, to do that, testers must know which buckets you're considering, and when. Crisply define your strategy for remediation. Remediation options most organizations consider include the following (a small bucket-tracking sketch follows the list):

  • Get a new one. This is extremely likely to work, and offers you vendor support (which probably matters for some of your applications). This tends to be the most expensive approach, with either development or acquisition costs. Typically, this approach is used any time you can afford it!
  • Shim it. This is the cost-saving route—help the application by modifying its calls to the operating system before they get there. You can fix applications without access to the source code, or without changing them at all. You incur a minimal amount of additional management overhead (for the shim database), and you can fix a reasonable number of applications this way. The downside is support, as most vendors don't support shimmed applications. You also can't fix every application using shims. People typically consider shims for applications where the vendor is out of business, the software isn't strategic enough to necessitate support, or they just want to buy some time.
  • Change policy. When a particular feature breaks a number of applications, you may want to disable that feature. The advantage is similar to using shims—you don't have to change or even have access to the source code. And the disadvantages are similar as well—lack of support and inability to fix everything. Some people consider this approach for Web applications, where shims aren't an option. Some of the security features can be controlled individually and disabled as a stopgap solution. A common choice is to disable protected mode for the Local Intranet zone (which Internet Explorer 8 does by default). Note, however, that any time you modify the default security of the system, you want to take that decision very seriously. For example, disabling UAC can decimate the business value of the OS migration.
  • Application virtualization. There is a lot of confusion around application virtualization as an application compatibility solution. I have heard it described as a complete separation of the application from the underlying OS, and therefore a complete and foolproof solution. This is emphatically untrue today. With the exception of the file and registry calls, the application still calls the underlying OS, and any compatibility issues outside of the file system or registry remain unfixed. It is great for application-to-application conflicts, but not a generic solution for application to OS conflicts. Support status is unknown but likely not in your favor, as not every company supports software within application virtualization even if it is supported natively on the OS. The typical scenarios where customers use this solution are: when the issues are with the file system and registry, when the issue is caused by a conflict with another application in the core load, or because they like the deployment story behind virtualized applications and it's just good fortune that it also fixes a compatibility issue.
  • Machine virtualization and terminal services. Machine virtualization is your brute force method. You know it's going to work, because you are actually running it on a previous version of the OS, whether on your local machine or on a server somewhere. It almost always puts you in a supported scenario, since you're actually running it on a supported operating system. But, while some say "virtualize it all, migrate today, and fix things later," I tend to be more cautious. There is management overhead, since you're managing potentially double the number of operating systems per user. If you're using local virtualization, then you need machines with the resources (particularly memory) to support two simultaneous operating systems. The user experience today isn't always that great, as most users are perplexed when they see two start buttons (although there are solutions from both Microsoft and partners to improve this). Most of my customers tend to use this as their last resort for application issues. (In fact, many customers set testing thresholds; if the remediation team can't fix a problem within the amount of time it estimated for each application, instead of carrying on potentially forever, it just stops and puts the application into a previous OS environment.)
  • Get rid of it. Don't forget this option! Sometimes it's not worth it to remediate a low-business-value application or a redundant piece of software. Retire it instead.
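
One lightweight way to keep the "once you have a bucket, stop" discipline visible is to track every failing application against an explicit set of buckets. A minimal sketch, assuming the six options above:

from enum import Enum

class Remediation(Enum):
    """The remediation buckets discussed above."""
    GET_NEW_VERSION = "get a new one"
    SHIM = "shim it"
    CHANGE_POLICY = "change policy"
    APP_VIRTUALIZATION = "application virtualization"
    MACHINE_VIRTUALIZATION = "machine virtualization / terminal services"
    RETIRE = "get rid of it"

# Once a debugging session lands an application in a bucket, record it and stop:
triage: dict[str, Remediation] = {}
triage["Contoso Timesheet 3.1"] = Remediation.SHIM  # hypothetical example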

Conclusion

We have walked through some of the most important considerations for planning an application compatibility project. I have done this planning in one solid chunk (generating a complete project plan before the project actually begins), and I have worked with customers who plan each stage after the preceding one is completed. The critical point is to understand what you can be doing at each step to save time and money later.

Though there is an aspect of engineering craftsmanship in the process, the big challenge with an application compatibility project is managing scale and motivating people who have little direct incentive to help you. These tips and guidelines should help.

Chris Jackson is the technical lead for the Windows Application Experience SWAT team at Microsoft. He has worked with enterprise customers around the world to help them investigate and mitigate application compatibility issues, as well as providing instructional training about Windows application compatibility for numerous industry events. Chris can be reached at blogs.msdn.com/cjacks.