Revisiting the 10 Immutable Laws of Security, Part 2
Jesper M. Johansson
In last month's issue of TechNet Magazine, I kicked off a three-part series revisiting the well-known essay "10 Immutable Laws of Security." My objective is to evaluate the laws, eight years after they were first postulated, and see whether they still hold—in other words, to see if they really are "immutable." (You can find the original essay at microsoft.com/technet/archive/community/columns/security/essays/10imlaws.mspx.)
I found that the first three laws hold up pretty well. Now I'm ready to scrutinize four more of the laws. In next month's final installment, I will discuss the final three laws, as well as offer some new insight that benefits from eight years of hindsight.
Law #4: If you allow a bad guy to upload programs to your Web site, it's not your Web site anymore.
You may be thinking that Law 4 sounds a bit strange. After all, the other laws are quite high level, not speaking about specific services but general functionality. The first three laws all describe scenarios where your computer is no longer your computer. Then you reach Law 4, which talks about Web sites.
To understand Law 4, historical context is quite important. The laws were originally published in 2000. This was a time when the Web was still fairly new and immature. Sites like Amazon and eBay were still developing. And while exploits that ran arbitrary commands on Web sites were commonplace, patching them was not.
Against that backdrop, Microsoft likely saw Law 4 as a necessary public statement that you need to take responsibility for what your Web site serves. This point was driven home, sledgehammer style, in September 2001 when Nimda struck. Nimda was a multi-vector network worm, and one of the vectors it used for spreading was to infect vulnerable Web sites and modify them to carry the worm.
The time frame surrounding Law 4 was also one of Web site defacements. Attrition.org runs a mirror of many of the defacements from that time (attrition.org/mirror/attrition/months.html). There were many of these defacements, often of prominent sites. Even the SANS Institute, a notable security training organization, had its homepage defaced. Figure 1 shows the defacement of the State of Arizona Web site in October 1998.
Figure 1 Defacement of the State of Arizona Web site
The problem was that, back then, people were generally unclear on what really happened when a Web site was defaced. The consensus was that you got rid of the offending page and went on with life. If you were on the ball, you patched the hole that the bad guys used (if you could find it).
People weren't looking at the full picture. Law 4 was designed to make people think about what could have happened when a Web site was defaced, not what did happen.
Unfortunately, Law 4 was not entirely successful. In spite of it, by 2004 I had grown weary of answering the question: "Can't we just remove the Web page the hacker put up and go on with our business as usual?" Not a man of few words, I tried to dispel those notions again with an article called "Help: I Got Hacked. Now What Do I Do?" (technet.microsoft.com/en-us/library/cc512587.aspx).
The question, however, is whether Law 4 still holds today. Does a bad guy really own your site if he can upload a program to it? More granularly, does he own your site, or your visitors, or both? Law 4 is not actually clear as to which it refers, so I'll analyze both.
With respect to owning your site, the answer is yes. The bad guy owns your site if you let him put programs on it (there are, however, a few exceptions that I will look at in a moment). With a typical Web site, there are two things the bad guy can do by uploading programs, and there is one very big implication from the fact that he is able to upload to your site.
The first thing the bad guy can do is make your site serve his needs. If somebody is interested in serving out illegal content, such as child pornography, what better place to do so than on a site that cannot be traced back to the hacker himself? Criminals would much rather serve that type of content off your Web site than off their own.
The second thing the bad guy can do by uploading a program to your site is to take control of the system behind the site. This, of course, is dependent on whether he can actually execute the program on your Web server. Simply having the program there without it doing anything is not going to help. However, if the bad guy can execute the program, he definitely owns your site and can now not only make it serve his own needs but also use it to take over other things.
The implication to which I referred is even more important than the specifics. In the "Help: I Got Hacked" article, the point I was trying to make was that you don't know exactly what the bad guy may have done after he broke in. If a bad guy manages to put his own content on your site, then you have to ask what else he could have done.
The answer could potentially be a lot of things. That is the really critical part of this puzzle. If a bad guy can execute programs on a server behind your Web site, he completely owns that site and what it does. It truly isn't your Web site any longer.
With respect to compromising the visitors to the site, that question is harder to answer. Browsers were riddled with security holes in the late 1990s. Around 2004, the situation improved drastically. Today's major browsers, Internet Explorer and Firefox, are both quite solid in terms of security. In fact, compared to what we had in the 1990s, today's browsers are veritable bastions of security.
Whether a bad guy can compromise your visitors is highly dependent on two things. First, are there holes in the browsers that he can exploit? There may be, but there are not nearly as many as there used to be. And second, can the bad guy persuade users to compromise themselves? The answer, disturbingly often, is yes.
Far too many users will install things that a Web site tells them to install. This is a serious problem, as we cannot solve this one with technology. In the July, August, and September 2008 installments of Security Watch, I discussed this problem. With respect to Law 4, it unfortunately means that the bad guy has a very good chance of being able to compromise your visitors.
The exceptions mentioned earlier should be obvious. Web sites do many things today that were not foreseen back in the late 1990s. For instance, internal collaboration sites, such as Microsoft SharePoint, are common. Anyone with the proper permissions can upload a program to such a site, but that does not mean the site, or any user who visits it, is compromised. That is simply what the site is designed for. Users are considered trusted, to some extent, merely by virtue of having the permissions necessary to access the site.
And then there are shareware sites. While malware has been posted to them in the past, they are designed to share software; that in and of itself does not mean that any user is compromised. In short, all these sites have safeguards in place to ensure that they stay secure and that users are not automatically compromised by visiting them. I consider that the exception that proves the rule. Therefore, Law 4, at least in spirit, still holds—in spite of some sites that are designed to permit people, bad and good, to upload programs to them.
Law #5: Weak passwords trump strong security.
Passwords have been a passion of mine for many years. Passwords or, more generally, shared secrets are a great way to authenticate subjects. There is just one minor problem with them: they fall down completely in the face of human nature.
Back in the halcyon days of computing when time-sharing computing was first invented, the need for a way to distinguish between users became apparent. The system needed a way to distinguish between Alice's data and Bob's data. Ideally, Bob should be able to prevent Alice from reading his data, although this was a loose requirement.
The solution was user accounts and passwords. We used to have one account on one computer system. And we generally had one password, which was usually one of the following:
- The name of one of your children
- Your spouse's first name
- Your pet's name
- "God" (if you were the superuser)
Fast-forward 30 or so years. We now have hundreds of accounts—on Web sites all over the Internet and on several computers. Every one of those systems tells us that we should not use the same password on any other system. You are advised to make it strong, not to write it down, and to change it every 30 to 60 days.
Because ordinary humans cannot change four passwords a day and actually remember any of them, the net result is that they, unfortunately, use the same password on all systems (or possibly two different passwords). And they are typically chosen from the following list of possibilities:
- The name of one of your children with the number 1 appended to it
- Your spouse's first name with the number 1 appended to it
- Your pet's name with the number 1 appended to it
- "GodGod11" (only if you are the superuser)
We have not advanced nearly as far as one would hope in the intervening 30 years. Passwords remain a fruitful area of research; for more information, see the PC World article "Too Many Passwords or Not Enough Brainpower" (pcworld.com/businesscenter/article/150874/too_many_passwords_or_not_enough_brain_power.html).
Clearly, passwords, as they are normally used, are a very weak form of security. Yet there are ways to use passwords securely. For instance, you can generate strong passwords and write them down—there really is nothing wrong with that. However, people are so inundated with bad security advice from everywhere and everyone that users actually think it is better to use the same password everywhere than to write down their passwords.
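Generating a strong password to write down takes only a cryptographically secure random source. A minimal sketch using Python's standard library (the length and character set are arbitrary choices, not a recommendation from the article):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from a CSPRNG; intended to be written
    down or kept in a password manager, not memorized."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```

The `secrets` module, unlike `random`, draws from the operating system's cryptographic random source, which is what makes the result suitable for this use.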
The fact is that a whole lot of security boils down to one weak point. Take corporate Virtual Private Network (VPN) access, for example. I have sat through numerous discussions of VPN technologies: vendor evaluations where the vendors extol the virtues of their incredibly strong, and incredibly slow, cryptography, and explain how they rotate the encryption keys to make sure an attacker can't sniff packets and cryptanalyze them.
But all of that completely misses the point. An attacker is not going to try to cryptanalyze a packet stream that will take 10 million years to break using currently available computing technology. Is there really any value in slowing down the network by an order of magnitude to obtain cryptography that will take 100 million years to break? Honestly, I don't particularly care if someone 100 million years from now (or even 10 million years from now) somehow manages to decrypt my work e-mail.
Is the cryptography really the interesting weak point? The attacker is far more likely to exploit the easy vulnerability—the fact that user passwords are often one of the passwords I just listed.
Incredibly strong cryptography doesn't really matter much if all of your users pick a six- or eight-character password. Consequently, we are now moving toward using stronger forms of authentication, such as smart cards and one-time PIN code generators.
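The arithmetic behind that point is easy to check. A short sketch comparing the search space of an eight-character password (assuming 94 printable ASCII characters) with that of a 128-bit key:

```python
import math

printable = 94                    # printable ASCII characters
password_space = printable ** 8   # every possible 8-character password
key_space = 2 ** 128              # a 128-bit symmetric key

# An 8-character password is equivalent to roughly a 52-bit key.
print(f"8-char password space: 2^{math.log2(password_space):.1f}")
print(f"128-bit key space:     2^128")
print(f"Ratio: 2^{128 - math.log2(password_space):.0f} keys per password")
```

In other words, the attacker who guesses passwords faces a search space dozens of orders of magnitude smaller than the one the cryptographers so carefully engineered.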
These offer a huge improvement, but they do not always improve security. Smart cards, for instance, are very easy to lose or leave at home. And the one-time PIN code generators fit perfectly on your badge holder, which looks great hanging around your neck. The next time you go out to grab a coffee at the shop across the street, see if you can spot the guy looking at your current one-time PIN, reading your user name off your badge, and trying to guess the little tiny piece of randomness that is all he now needs in order to connect as you to your corporate network.
Law 5 most certainly holds today, and it will continue to hold into the future. I think, however, that it can be significantly generalized: more than just weak passwords trump strong security. Weak authentication, or more broadly any weak point, trumps strong security.
IT security professionals at large are guilty of not stepping back and looking at the broader picture. We tend to focus on one small piece of the problem, one that we can comfortably solve with strong security technologies. Too often we fail to realize that there are systemic weak points that we have not mitigated, or even thought about, that render all the technology we are putting in place moot.
For instance, consider how many organizations try to regulate the removable devices that users can use, but permit outbound Secure Shell (SSH) and encrypted e-mail connections. How much data loss is really mitigated by limiting removable devices if you permit data to be transmitted with encryption so you can't even see the data? This is one of the major problems that we security professionals must solve if we are to survive and prosper.
Law #6: A computer is only as secure as the administrator is trustworthy.
I am amazed that, even today, we keep seeing reports of exploits that only work against administrators—even worse, exploits that only work if you are already an administrator. As I am writing this I am sitting in an airport on my way back from the Black Hat 2008 conference. Even there I caught a presentation that started out with the premise that "if you have root access, here is how you can take over the system."
On the one hand, it is comforting to know that the worst thing some people can come up with is how to modify the system, albeit a bit more stealthily, if they already have the right to modify the system. On the other hand, I find it frustrating that people do not seem to see how pointless that is and keep trying to invent new ways to do it—and more important, waste precious time and energy on protecting against it.
The fact is quite simple: any user who is an administrator (or root, or superuser, or whatever else you may call the role) is omnipotent within the world of that system. That user has the ability to do anything!
There are certainly noisier as well as subtler ways to do whatever it is a malicious user of this sort wants to do. But the fundamental fact remains exactly the same: you can't effectively detect anything that a malicious administrator does not want you to detect. Such a user has the means to hide his tracks and make anything appear to be done by someone else.
It is clear, then, that Law 6 still applies, at least at some level. If an individual that has been granted omnipotent powers over a computer turns bad, it really, truly, is not your computer any longer. In a very real sense, the computer is only as secure as the administrator is trustworthy.
There are some additional points to consider, however. First, the notion of administrator, from the computer's perspective, includes not only the person or persons granted that role. It also includes any software that is executed in the security context of that role. And by extension, then, the role of administrator also includes any author of any such software.
This is a critical point. When Law 6 states that the computer is only as secure as the administrator is trustworthy, the meaning is much broader than it first seems. Never forget that, as far as the computer is concerned, administrator means any process running within the security context of an administrative user. Whether that user intended to actually execute that piece of code or intended to do damage with it is irrelevant.
This is a critical point because it is only recently that an average user can feasibly operate a Windows-based computer as a non-administrator. This is a primary purpose of User Account Control (UAC) in Windows Vista. Even then, there is no security boundary between a user's administrative context and non-administrative context. Consequently, Law 6 applies to any user that has the potential to become an administrator, not just those who are administrators at the moment.
As a result, the only way not to be subject to Law 6 is to truly not be an administrator, but rather operate as a true standard user. Unfortunately, this is not the default even in Windows Vista, and many Original Equipment Manufacturers (OEMs) actually disable UAC altogether.
UAC, however, hints at the future to come. The most visible feature of UAC is the elevation prompt, shown in Figure 2. However, the most important strategic benefit is not elevation to administrator but the ability to operate a computer effectively without being an administrator in the first place. Windows Vista includes several improvements to make that possible. For example, unlike in earlier versions of Windows, you can change the time zone without being an administrator, permitting travelers to run as non-admins. Moving forward, there will likely be more improvements of this type.
Figure 2 The most visible, and possibly least important, feature of UAC is the elevation prompt
Law 6 holds today and will continue to hold. However, the switch toward a world where users can operate their computers as a non-admin is one of two major moderating factors on Law 6. The second factor is not very new: mandatory access control systems.
In mandatory access control systems, objects have labels and there are strict rules for how objects can be relabeled. Software applies security to an object consistent with its label, outside of the immediate control of the administrator. Strictly speaking, in current implementations, the administrator can typically take various illegitimate steps to override these controls.
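The principle can be sketched in a few lines. Assume labels are ordered by sensitivity and the reference monitor, not any user, owns the relabeling rule—here a simple no-downgrade rule; the label names and rule are illustrative, not drawn from any particular MAC implementation:

```python
# Labels ordered from least to most sensitive. In a real MAC system
# this table belongs to the reference monitor, not to any user account.
LEVELS = {"public": 0, "internal": 1, "secret": 2}

class LabeledObject:
    def __init__(self, name: str, label: str):
        self.name = name
        self._label = label  # not directly writable, even by administrators

    def relabel(self, new_label: str) -> None:
        """Enforce the rule: labels may never decrease in sensitivity."""
        if LEVELS[new_label] < LEVELS[self._label]:
            raise PermissionError(
                f"cannot downgrade {self.name} "
                f"from {self._label} to {new_label}")
        self._label = new_label

doc = LabeledObject("payroll.xls", "internal")
doc.relabel("secret")      # raising sensitivity is allowed
try:
    doc.relabel("public")  # denied, regardless of who asks
except PermissionError as e:
    print(e)
```

The point of the sketch is that the check lives in the object's own logic, outside the caller's discretion—which is exactly what distinguishes mandatory from discretionary access control.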
However, the principle is promising, and it may be possible one day to limit what administrators can do. But even if we were to get restrictions to limit what administrators could do, you could still argue that those users would no longer be administrators in the real sense of the term. Hence, Law 6 is decidedly immutable.
Law #7: Encrypted data is only as secure as the decryption key.
Law 7 is possibly the least controversial of all the laws. Encryption is far too often seen as the panacea for many security problems. The truth, however, is that while encryption is a valuable tool in the world of security, it is not, and never will be, a standalone solution to the majority of problems we face.
Encryption is everywhere you turn. In Windows, encryption is used for passwords, for files, for surfing the Web, and for authenticating. Not all encryption is designed to be reversible, but some of the more important examples of reversible encryption include the Encrypting File System (EFS) and the credential cache used for stored passwords and usernames, as illustrated in Figure 3.
Figure 3 The credential cache in Windows Vista is protected by encryption
Both EFS and the credential cache are protected by an encryption key derived from the user's password. This has several implications. First, if the user's password is reset (set to a new password without entering the old one), all of the data stored in these locations will be lost unless there is a recovery key designated.
But even more important for our discussion, while the encryption itself uses extremely strong keys and protocols, the security of the key is dependent on the user's password. In other words, the data is no more secure than the password is strong. The password, in effect, is a decryption key even though, in this particular instance, it is a secondary decryption key—meaning it decrypts another decryption key.
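This dependency chain can be sketched with nothing but the standard library. A strong random data key is stored wrapped under a key derived from the password—here the wrap is a simple XOR purely for illustration; real systems such as EFS use proper key-wrapping algorithms—so the data is exactly as safe as the password:

```python
import hashlib
import secrets

password = b"Rex1"                  # the weak link in the chain
salt = secrets.token_bytes(16)

# Derive a 32-byte key-encryption key from the password.
kek = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# The actual data key is strong and random ...
data_key = secrets.token_bytes(32)

# ... but it is stored wrapped under the password-derived key.
# (XOR stands in for a real key-wrap algorithm such as AES Key Wrap.)
wrapped = bytes(a ^ b for a, b in zip(data_key, kek))

# Anyone who can guess the password can unwrap the strong key:
guess_kek = hashlib.pbkdf2_hmac("sha256", b"Rex1", salt, 100_000)
recovered = bytes(a ^ b for a, b in zip(wrapped, guess_kek))
assert recovered == data_key        # 256-bit key, 4-character security
```

The 256-bit data key is cryptographically excellent; the attacker simply never needs to attack it, because the password-derived wrapping key is so much cheaper to guess.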
This is crucial. These types of dependency chains are everywhere in the IT world. Years ago, someone executed a social engineering attack against VeriSign and obtained two code-signing certificates in the name of Microsoft. A code-signing certificate is effectively a decryption (verification) key; it is used to verify that the entity named in the certificate holds the matching encryption (signing) key.
However, in this case, the person requesting the certificate was not the entity named in the certificate. In other words, the attacker now had signature keys in someone else's name. The keys may have been secure, but once you analyze the remainder of the dependency chain, you discover the fatal flaw.
All this serves to prove one point: the decryption key is critical to the security of data, but the decryption key itself may be protected by far weaker secrets. I have seen too many systems where the implementers built in the strongest encryption thinkable and protected the decryption key with some other security measure but failed to realize that this second layer had a significant hole. When implementing any cryptography, you must ensure that you analyze the entire chain of protection. Simply being encrypted does not by itself make the data secure.
Law 7 still holds, then. It is one of the most unassailable of the 10 laws; in fact, it is the closest we get to a law of physics in this business. It should also serve as a lesson to all of us, reminding us to analyze the entire protection chain of our sensitive data. It is acceptable to have keys that lack the strongest possible protection, provided those keys are used only to encrypt data that requires no more than that lesser level of protection.
So far, the immutable laws of security are 7 for 7. Each of the laws I have reviewed still holds true these many years later, and it looks unlikely that they will be significantly disproven anytime soon.
In fact, the laws have demonstrated an impressive bit of foresight. The only one that seems out of place so far is number 4, but, as mentioned previously, even that one should still be considered immutable.
Next month I'll wrap up this series with a look at Laws 8, 9, and 10. And I'll comment on where the laws may not cover all aspects of security.
Jesper M. Johansson is a Software Architect working on security software and is a contributing editor to TechNet Magazine. He holds a Ph.D. in Management Information Systems, has more than 20 years' experience in security, and is a Microsoft Most Valuable Professional (MVP) in Enterprise Security. His latest book is The Windows Server 2008 Security Resource Kit.