
Security in Operation (3/4): Patches in Hours

Published: May 12, 2005

Security Management

By Jeffrey R. Jones
Director, Microsoft Security Business and Technology Unit

See other Security Management columns.

As part of my work for Microsoft, I have spent a lot of time analyzing OS security, customer feedback, metrics for progress, and where those three things intersect. One thing I’ve discovered is that there is quite a large gap between the theoretical idea of security and the practical security concerns of customers. This article is the third of a four-part series in which I’ll be examining those customer concerns and raising questions to think about with respect to using either a Microsoft Windows–based or a Linux-based operating system.

Last month I was reading an article by Steven Suehring called “An Approach That Works: Comparing Open and Closed Source Security.” The whole article was a bit of a stretch, but the paragraph that got me going was this one:
The timing of security updates best reveals the differences in how the two models approach security. One of the aspects of open source security is transparency -- virtually as soon as a security flaw, theoretical or practical, is reported, it's released to the general public so that users of the software can take steps to mitigate the effects of the security flaw. A patch follows very shortly after for all of the popular open source software packages. If a patch isn't readily available within hours, the community frequently steps up to release an intermediate patch and to help others mitigate problems associated with the flaw.

First, I’m going to agree with Steven that security updates are a revealing way to compare security in the two models. However, he made two important statements, one implicit and one explicit, that bear closer scrutiny:

  1. “A patch follows very shortly…” No boundaries are set on “very shortly,” but the context of the article certainly implies that the patches are available sooner for the open source model than from Microsoft (closed source).

  2. “If a patch isn’t readily available within hours, the community…” In other words, if the distributions aren’t quick enough, the community will make patches available for you.

I have a questioning nature. Do Linux distributions really produce patches faster? I am sure you can think of one example, but on average, across all patches? And if not, then does the community produce patches for me? How many were produced last year? Where do I go to find them? Will they break my system? These questions aren’t arbitrary; they are the type of questions a patch administrator may ask regardless of whether his patches are from a vendor or from somewhere else.

Fastest Patches for Disclosed Vulnerabilities

I am not going to spend a lot of time on the first, implied statement that Linux systems provide patches more quickly. There are solid studies that examine this more deeply and have shown that, in spite of common perception to the contrary, the Microsoft Security Response Center (MSRC) provides patches to customers for publicly disclosed issues in much less time than the leading Linux distributions.

In March 2004, Forrester released an independent, unfunded study (download here) of Microsoft and the four leading Linux distributions. Analysis of all vulnerabilities over a one-year period found that while Microsoft provided a patch in 25 days on average, the next closest Linux distribution, Red Hat, averaged 57 days to produce a patch. Note that each vendor validated the data as well, as indicated in the acknowledgments:
Forrester thanks Noah Meyerhans of Debian, Vincent Danen of MandrakeSoft, Allen Jones of Microsoft, Mark Cox of Red Hat, and Roman Drahtmüller of SUSE for the time that they so generously dedicated to this research.

In March 2005, Security Innovation LLC did some follow-up research (download here) comparing the full Windows Server 2003 set of components with a minimal LAMP Web server using Red Hat Enterprise Linux 3 Advanced Server. This study covered 2004 vulnerabilities and again found that the Linux distribution took twice as long, on average, to provide patches for its products and components.

So What About Community “Patches in Hours”?

From what I’ve been able to glean by looking backward, issues that get a lot of publicity or press coverage probably will result in community patches being developed and published. Publicized issues tend to be the ones that the Linux distribution vendors also patch the quickest, though, so the benefit of community patches may be questionable. With that in mind, let’s look at some specific questions.

The Ones the Distros Delay or Forget

If a Linux distro patches a vulnerability quickly, a community patch isn’t really needed. So what about the vulnerabilities that the distros take the longest to patch?

Mark Cox of Red Hat posted some data on vulnerabilities fixed in Red Hat Enterprise Linux 3 (rhel3as) on his blog and made it available at http://people.redhat.com/mjc/. From this data, we can identify some of the longest-to-fix vulnerabilities that were also remotely exploitable and rated high severity by ICAT: CAN-2002-1363, CAN-2004-0409, CAN-2004-0836, and CAN-2004-0419. Even the shortest of these four windows was 138 days between public disclosure and Red Hat’s patch.

CAN-2002-1363 was public in 2002 before rhel3as shipped, as was a patch for the issue in libpng by Debian and others. So, when you deploy a brand-new rhel3as system, do you assume this issue has been fixed? Do you look up all the issues that were public before rhel3as shipped and check the source code to ensure they’ve been taken care of? A community patch was available, but how do you know you need it?

CAN-2004-0409. If you were subscribed to the xchat discussion alias on April 4, 2004, you received a notification that there was a vulnerability and a source patch was available from the xchat.org Web site. Otherwise, you would probably have first heard about this issue by monitoring the Bugtraq mailing list when you saw an advisory from Debian on April 21. If you didn’t subscribe to either of these, you would have received a security advisory notifying you of a patch from Red Hat in October.

CAN-2004-0836. This MySQL problem was disclosed on June 4. If you subscribed to the MySQL mailing list, you’d have been notified of a source code fix on June 17. If not, you’d have gotten a Red Hat advisory in October advising you of their patch for the issue.

CAN-2004-0419. On May 19, this issue was opened in the XFree86 bug database along with a submitted fix. It looks like the fix was simple and was committed to the tree. If you don’t closely follow the XFree86 bug database, it is likely that you found out from a vendor advisory, such as Red Hat’s in October.

Although all four of these issues were high severity and remotely exploitable, I can’t find any press coverage of them from when they were discovered, only from when they were fixed by a major distribution. This leads to the next question.

How Does an Admin Find Out About It?

In the examples we looked at above, even following Bugtraq would not have easily made an IT administrator aware of the software vulnerability before one of the major vendors released an advisory. So even though a community patch was there, what good does it do if an admin doesn’t know to get it and apply it?

Hackers can select a few critical components and monitor Bugzilla, source-code changes, and a wide variety of other sources until they find a likely vulnerability to exploit. It does seem theoretically possible to keep on top of most disclosed issues if it is a full-time job or a passionate hobby, but it does not seem practical for the majority of IT professionals responsible for protecting their systems.

Instead, it seems like most of the notification for IT pros would come from sources like Bugtraq, SecurityFocus, or vendor advisories, at which point the community patch might not even be necessary.

However, let’s assume that there is a community patch available prior to a vendor patch, and that an admin knows about it and would like to deploy it.

How Do I Deploy a Patch to All of the Servers?

The majority of Linux business customers run either Red Hat or SUSE, so they will be using either up2date or YaST. The expected use of these tools is to pull approved patches from the vendor site and deploy them to your computers. However, the underlying package tools are available for admin use as well. One could:

  1. Apply the community patch to the appropriate code and recompile.

  2. Package up a new RPM package from source and place it in a local update repository.

  3. Use the update/RPM management tools to push out the package.

This sounds like a process I wouldn’t want to repeat very often in a big company, but it is feasible. Even granting feasibility, an administrator still has a couple of questions to consider -- the first is testing.

Quality and Testing -- Can I Trust This Source?

What if I ignored testing and just rolled out the community patch? Mike Howard has a great example of the risks of community patches on his blog in this entry. In summary, an Apache 1.3 buffer overflow was disclosed on Bugtraq and a community patch was very quickly posted for anybody to use. Although the community patch fixed the buffer overflow, Mike spotted that it introduced a new buffer overflow!

To use community patches safely in production, it seems one would have to take on extra responsibility and resources for testing. This “benefit” starts to look burdensome for the average company administrator. If an administrator lacks the resources or time to do the regression and compatibility testing an employer might require, it might be better to wait for a vendor patch.

Who Supports My Community Patch?

Commercial Linux enterprise distributions like Red Hat and SUSE take a snapshot of all of the components and then commit to supporting them for several years. The Red Hat terms and conditions certainly seem to allow for modification of the source of most of those components.

Practically though, if you branch the source code, who is responsible for ensuring that your changes stay in the code and don’t conflict with other “official” changes released by the vendor?

Let’s take a look at a scenario using kernel vulnerability CAN-2004-1234. This issue was disclosed on April 8, 2004, along with a proposed change to fix it. Let’s say an admin on Red Hat Enterprise Linux 3 Advanced Server looks at the change and decides to apply it himself. He does what has to be done: recompiling kernels, distributing them, and rebooting. Great, the community patch helped make for better security! But what might happen later?

Between April 8 and December 23, 2004 (the date Red Hat issued a fix for CAN-2004-1234), Red Hat issued security advisories fixing 28 vulnerabilities in the kernel through their up2date and Red Hat Network mechanisms. When each of those advisories was released, the administrator faced a decision:

  • Does he download the newest Red Hat kernel source, migrate his patch to it, and manually roll it all out?

  • Does he take the new binary patch from up2date and roll back all that good work he did to fix CAN-2004-1234?

There are many more examples like this, even just considering the kernel. It’s a tough decision, but for practicality, I think most administrators will eventually decide to stick with the “vendor approved” patches as the best tradeoff between security and operational cost.

What If Microsoft Released Untested Patches?

The other way to examine the benefit of “patches in hours” is to imagine Microsoft doing the same thing. A developer completes his proposed patch. As soon as the patch passes to the testers, the MSRC also posts it for customers to use at their own risk, along with a security bulletin noting that it has not yet been regression or compatibility tested.

In theory, this is completely feasible for Microsoft, if almost all customers really wanted it and were willing to take the risk. Keep in mind that the security bulletin with the patch would make the issue very public, so everyone would really hope that the untested patch worked.

Does this seem like a process that many customers are yearning for? I can say from my experience that it is the opposite of what customers want.

Conclusions

When comparing vendors or products, all too often the values and benefits are oversimplified into a sound bite like “patches within hours.” For anybody offering a product to companies, along with a support and life-cycle commitment, providing security fixes is an entry-level requirement.

Beyond the basics, though, there are practical implications for security risk that arise from company requirements for software fixes that can be trusted in production environments. When the open source model is considered in light of “community patches” intersecting with vendor-approved patches and distribution mechanisms, it presents real challenges and difficult choices for managing security risk.

As with any comparative security discussion like this, I advise you to check up on this topic yourself and draw your own conclusions. I’ve provided several references for you to follow and presented some scenarios that you might want to discuss with your own software vendor.

Join me next month for the next article in this series about practical security issues to consider for operational environments.

Best regards,
Jeffrey R. Jones
Director, Microsoft Security Business and Technology Unit
