The term mitigate is defined generally as to "make less severe, serious, or painful." I've spent quite a bit of time in this column discussing technologies that can be used to mitigate problems, especially those related to security. Growing up, I learned many proverbs and parables that were meant to help reinforce good behavior—and I'm still a big believer in many of those lessons I learned. My favorite proverb is "An ounce of prevention is worth a pound of cure." And my favorite quote is by philosopher George Santayana: "Those who cannot remember the past are condemned to repeat it." Together these do a good job of expressing how I like to operate—and how I wish more technologies operated.
Unfortunately, you and I get to interact with technology at a level that most people don't get (have?) to. While this is great at forcing us to learn technology, all too often the technology doesn't behave the way it's supposed to behave. When that happens, we have to mitigate against it to ensure that further damage doesn't happen as a result of the failure.
This month, I'm going to take my column in a bit of a philosophical direction. Simply by reading the pages of TechNet Magazine, you've already proven yourself to be something of a technophile—so I may just be preaching to the people who already agree with me here. Still, the points I bring up in this month's column are important enough to underscore.
I've mentioned before that the technology I work on (security and whitelisting) is more of a preventative than the standard technology that's been used to deal with malware for the last 25 years, which essentially compiles blacklists of signatures of known malware. The preventative approach can be a pretty avant-garde road to go down. But as I said in my November column ("Data Loss Prevention with Enterprise Rights Management"), it's all about what your real goal is. If you come into a conversation saying "I want to stop malware" without exploring other approaches, you'll simply follow the standard "find the bad stuff and eliminate it" path. You can see how easily this can become the norm—without ever really solving the problem.
Instead of that standard approach, I prefer the more pragmatic "What is the real problem that I'm trying to solve?" In this column, I'm going to discuss why I think this is useful. I'll look at a few random events, incidents, and technology problems, and show how the wrong approach is taken way too often as a reaction to them.
I Love You
Where were you on May 4, 2000? I worked at Slate.com—and all of a sudden, everyone loved me. It was the dawn of iloveyou.vbs. This little malware gem took advantage of three conditions to spread virally.
First, the malware used social engineering to get users to open the message. It came from someone you knew (you had to be in the sender's address book in order for it to be sent), so of course you had to find out if you were really loved.
Second, it was based upon the fundamentally correct notions that users a) don't know diddly about file extensions; b) don't bother to check file extensions (heck, they're hidden by default in Windows); and c) will click past anything warning them of potential badness (to see the dancing pigs, as Steve Riley likes to say). See Steve's blog, Steve Riley on Security, for more information.
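That hidden-extension trick is easy to illustrate. Here is a minimal sketch (my own illustration, not code from the virus) that flags attachment names hiding a script extension behind a harmless-looking one, the way LOVE-LETTER-FOR-YOU.TXT.vbs did; the extension lists are illustrative assumptions, not exhaustive:

```python
# Flag filenames that hide a scriptable/executable extension behind a
# benign-looking one (the ILOVEYOU pattern). Extension lists here are
# illustrative, not exhaustive.
RISKY = {"vbs", "vbe", "js", "wsf", "exe", "scr", "pif"}
BENIGN_LOOKING = {"txt", "doc", "jpg", "pdf"}

def looks_deceptive(filename: str) -> bool:
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False  # no double extension at all
    _, inner, outer = parts
    return outer in RISKY and inner in BENIGN_LOOKING

print(looks_deceptive("LOVE-LETTER-FOR-YOU.TXT.vbs"))  # True
print(looks_deceptive("report.txt"))                   # False
```

With extensions hidden by default, the user sees only "LOVE-LETTER-FOR-YOU.TXT"—which is exactly why the trick worked.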
Third, it took advantage of a fundamental architectural flaw, a flaw so apparent it should have been caught before the product with the flaw shipped. But it wasn't caught—so the malware could intrude and then spread very rapidly. ILOVEYOU worked by running Windows Script Host, harvesting the user's address book, and then sending e-mail to everyone in the victim's address book. As long as those in the address book also ran Microsoft Office Outlook on Windows and fell for the social engineering component of the virus, the process continued.
So where was the flaw? C'mon, give it a guess. Ready? The flaw was in Outlook, caused by two fundamental problems. When Outlook was originally designed, as with Internet Explorer 3.0 before it, COM and ActiveX were becoming all the rage—so you could quickly and easily reuse components of one application in another. Outlook, however, was far too trusting regarding who could call its COM object model. Inbound, completely unsecured documents—let alone e-mail content—should never have been able to even look in the address book, much less harvest it completely and then send e-mail. Sure, there are scenarios where it makes sense for an incoming e-mail to automatically send an e-mail response. But that's the exception—far from the rule.
Security in Outlook was subsequently tightened so that it now asks users when an application wants to query the address book or programmatically send mail (see Customize Programmatic Settings in Outlook 2007). That was an important step in the right direction. Users are also asked whether they want to grant permissions for the application—because, frankly, we all know what happens if a user really wants to see those dancing pigs.
You know what the mitigation most likely to be put in place was while users waited for Outlook to be fixed? Killing Windows Script Host (WSH). Beginning in 2000, like at no other time in my career, one of the most frequent questions I got was, "My customer wants to remove WSH from Windows—how can this be done?" What a sorry state—an incredibly powerful scripting host was threatened with removal due to drive-by malware that was enabled by security flaws elsewhere.
I am, of course, a big fan of WSH. I think it's a great tool and that Windows PowerShell actually has a ways to go before it can replace WSH (though that's another column topic for another day). But my point is that WSH in and of itself is not a security hole. Eliminating one component because of the flaws in another is not an effective way to manage things. Still, it is very important that you secure your operating system, Web browser, and e-mail client (especially if it is COM enabled—ahem) to ensure that they cannot take advantage of WSH in a negative way.
In 1996, Microsoft released ActiveX, and the world seemed to take two opposing views: it was pure evil and would cause the downfall of the Internet, or it was a great, powerful tool and would make the browser into a real platform. ActiveX in and of itself is not a huge exploit waiting to happen. In fact, Microsoft did a pretty good job of implementing security for ActiveX in Internet Explorer—though, of course, it has been further hardened over the years as the rest of Windows has.
Nevertheless, one of my favorite Web searches is "2008 buffer overflow ActiveX." Go ahead—try it. You can change the year, if you like, to see how each year has gone. Why do I find this interesting? It's because Internet Explorer and ActiveX controls have unfortunately become the poster children for security vulnerabilities, deserved or not.
We face somewhat similar problems in the world of whitelisting software. Sure, you can try to secure a system by only allowing code to run that's already on the PC, but suppose there are exploits in that code? You can get owned just as easily as if you had no security software on the system. Just as with buffer overflows, controls that have wrongly been marked as "Safe for Scripting" become giant holes for hackers to take advantage of.
Why do I bring up the buffer overflow aspect here? Because this problem generated a response similar to the WSH/Outlook behavior noted earlier. Instead of blame attaching to vendors for not performing decent threat modeling and buffer-overflow detection and for incorrectly marking a control as Safe for Scripting, ActiveX itself became the culprit.
Perhaps it's fair. If Microsoft had implemented a better sandbox (as has been done to a degree in Windows Vista via Protected Mode) or if Microsoft had simply not allowed Safe for Scripting, we would not have these problems. And ActiveX would probably be more widely spread—or at least more widely tolerated.
Unfortunately, we do have these problems—and the ActiveX kill bit (see Figure 1) has become something with which admins are all too familiar. See "How to stop an ActiveX control from running in Internet Explorer" for a description of how to programmatically kill any ActiveX control that is perceived as a threat. Internet Explorer checks this registry entry before instantiating a control to see whether it is allowed to run.
Figure 1 You can prevent an ActiveX control from running by setting the kill bit so that the control is not called by Internet Explorer
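To make the mechanism concrete, here's a small sketch that generates the .reg file content for setting the kill bit on a given CLSID—the key path and the 0x400 Compatibility Flags value are documented in the KB article referenced above; the all-zeros CLSID below is a placeholder, not a real control:

```python
# Sketch: build .reg file text that sets the ActiveX kill bit for a CLSID.
# Key path and the 0x400 "Compatibility Flags" value are from the KB
# article on stopping ActiveX controls; the CLSID used in the example
# call is a made-up placeholder.
def killbit_reg(clsid: str) -> str:
    key = ("HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Internet Explorer"
           "\\ActiveX Compatibility\\" + clsid)
    return ("Windows Registry Editor Version 5.00\n\n"
            "[" + key + "]\n"
            '"Compatibility Flags"=dword:00000400\n')

print(killbit_reg("{00000000-0000-0000-0000-000000000000}"))
```

Importing the resulting file (or deploying the same value via Group Policy) is what "setting the kill bit" amounts to in practice.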
The reality is that if you want to perform certain tasks from within Internet Explorer, such as querying a registry key, interacting with hardware or another application, or interacting with user data on a Windows PC, you basically have no choice except an ActiveX control. And creating a control—properly designed, threat modeled, developed, tested, and signed (whew)—can be a rather forbidding task. But it honestly shouldn't be viewed as a bad thing or as a giant security hole (unless you skip or short-circuit those steps). Oh, and about Safe for Scripting: if you're developing a control and think you need to mark it Safe for Scripting, don't. Really. Not unless you have no other choice.
That said, how do you mitigate against bad ActiveX controls? The fans of other browsers will gleefully tell you "My browser doesn't have those kinds of exploits," but that's naive. Internet Explorer on Windows is designed very well, but it has flaws. All software has flaws. Running Web browser B because you believe Web browser A has flaws is usually grounded in zeal—not in actual security. There have been security flaws found in every major browser and in every major ActiveX control. The answer?
The upshot is that though you can disable ActiveX controls to the point that they won't run (see Figure 2), there will still be exploits—even if you use another Web browser. You need to learn the attack surface of whatever software you do decide to run. Simply avoiding Internet Explorer doesn't make you malware proof; it just makes you resistant to malware that targets Internet Explorer.
Figure 2 Managing ActiveX controls in Internet Explorer
The Flash Drive Dilemma
Many customers have huge concerns about USB flash drives, more so than almost any other technology. Why? When I talk to our customers, it comes down to two issues. First, USB flash drives are easy vehicles—via social engineering or any other means—for getting malware onto computers (this really applies to targeted malware, the kind that traditional antivirus software can't catch until it's too late). Second, it is all too easy for sensitive data to walk out of the office on a tiny USB drive. That's why I'm such a big fan of Information Rights Management (IRM) and other Digital Rights Management (DRM) techniques that can truly prevent data loss. Clearly the real problem is not the USB flash drive itself; it's the way these drives can be used.
So rather than using epoxy to glue the USB ports shut (I've actually heard of that being done) or trying to block the hardware via Group Policy or third-party software, what can you do? Stopping targeted malware is difficult, something you can really only approach via Group Policy Software Restriction Policies, whitelisting, or other means of restricting code to run only from the system drive, network drive, or the like.
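The path-rule idea behind those restrictions can be sketched in a few lines. This is a toy illustration of the concept—allow code to run only from trusted locations—not how Software Restriction Policies are actually implemented, and the allowed roots are assumptions for the example:

```python
from pathlib import PureWindowsPath

# Toy sketch of a path-based allow rule, in the spirit of Software
# Restriction Policies / whitelisting: code may run only from trusted
# roots. The roots below are illustrative assumptions, not a policy
# recommendation.
ALLOWED_ROOTS = [PureWindowsPath(r"C:\Windows"),
                 PureWindowsPath(r"C:\Program Files")]

def allowed_to_run(exe_path: str) -> bool:
    p = PureWindowsPath(exe_path)
    return any(root in p.parents for root in ALLOWED_ROOTS)

print(allowed_to_run(r"C:\Program Files\App\app.exe"))  # True
print(allowed_to_run(r"E:\autorun\payload.exe"))        # False (flash drive)
```

A rule like this doesn't care how the payload arrived—by USB drive, e-mail, or download—it simply never gets to execute from an untrusted location.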
As for stopping data loss, that will probably involve some form of IRM/DRM. Still, anytime I have to talk to customers about USB flash drives (or any type of malware, for that matter) I tend to quote my first TechNet Magazine article ("Reduce Your Risk: 10 Security Rules To Live By"): "Your enterprise is only as secure as your least and most technical users." By that I mean that an end user who really wants to run a piece of software will find a way, just as an end user who really wants to share confidential information will also find a way to do so.
What's the Point?
Years ago, when I worked at Winternals, Mark Russinovich researched and blogged on three topics pertinent to this discussion: defeating Group Policy (both as an administrator and as a limited user) and elevating privileges as a power user.
The "Mark Russinovich on Security" sidebar will direct you to the blog posts. These are great examples that demonstrate just how easy it is for a user to break free of policy or security constraints. Users who are frustrated at being locked down often become willing, eventually, to violate those constraints—whether they are software, hardware, or policy based.
The three issues referenced in the sidebar have been concerns for some time. They are representative of the kinds of problems we all have to work around in our daily lives in a Windows ecosystem. The trouble is that mitigating against vulnerabilities, flaws, and software imperfections requires more than just reaction. It requires pragmatic thought about the real problem, which usually involves a good grasp of threat modeling and a willingness to accept that short-circuiting the immediate issue may cause a bigger fire than if you dealt with the real problem from the beginning.
A much better idea is to approach every situation with an open mind and, knowing that there are problems against which you are going to have to mitigate, take a step back and think about what the root problem actually is instead of just reacting in a short-sighted manner.
Wes Miller is a Senior Technical Product Manager at CoreTrace in Austin, Texas. Previously, he worked at Winternals Software and as a Program Manager at Microsoft. Wes can be reached at firstname.lastname@example.org.