Deconstructing Common Security Myths
Jesper Johansson and Steve Riley
At a Glance:
- Patching, upgrades, and risk management
- Password strength and cracking
- NTLM and authentication
- Centralized versus distributed defenses
In our book Protect Your Windows Network, we wrote about "security myths"—things that many people believe are true about security, but which really are not.
We then started presenting some of these myths at various conferences around the world and people really seemed to appreciate the candid straight talk.
Our version of these myths is, of course, just our opinion. People are welcome to disagree with us, and sometimes do. Naturally, we will proceed to explain why we are right and they are wrong, but all in all this type of dialectic is crucial to advancing the state of the art in security. Unless we question the commonly held wisdom, we are not only doomed to repeat past mistakes, but also to keep building on them. We would then fail to do all we can to protect our networks and the information that resides on them.
Therefore, because we think it is fun and we never seem to run out of myths (or opinions, as some refer to them), we decided to revisit the topic with a new batch. Since people keep sending more myths to us, we will probably be back. If you have any good ones, go to our blogs and let us know. Jesper is at blogs.technet.com/jesper_johansson and Steve pontificates at blogs.technet.com/steriley.
Myth: It's Always Better to Wait for an Official Solution to a Problem
No saving the tough stuff for last—we’re diving headlong into controversy right from the start. Given that we’re from Microsoft, you might expect us to write an opposite myth: "It’s always better to install the first thing that looks like a patch, no matter where it came from." Thing is, we want you to consider your risk tolerance, your risk management paradigm, and your remediation procedures whenever you’re evaluating a solution. Depending on the vendor of the software and its history of releasing timely, quality updates, waiting for an official solution might be the correct approach. In other instances—especially when a vendor routinely puts you at risk by charging you for updates or waiting months (or years!) before providing them—looking for third-party solutions or developing them in-house might be your best choice.
The same thinking applies to guidance, too. Vendor Z releases a new application with some security enhancements, but which also includes many new features. Some of these features require you to think about how to secure them. Where will you learn about that? Vendor Z might not release a security guide at the same time the app hits the shelves. Do you take your best guess? What if your guess turns out to be wrong? Do you scour the Internet looking for hardening guides from anyone who seems credible, or do you think Vendor Z, being the developer, is in the best position to explain how to secure it?
How you manage risk will help you determine which approach to take. At Microsoft, we strive to release security guidance at the same time we release new versions of our software. Furthermore, we test all our security updates on systems that have been secured according to that guidance.
Myth: You Should Wait Before Deploying an OS or Service Pack
We constantly hear organizations arguing that they cannot deploy a particular piece of software. They usually have valid reasons to hold off upgrading, such as the cost, the training requirements, or a hardware refresh. However, we also hear a lot of not-so-valid reasons, particularly just about any reason based on perceived security requirements. There are two common myths surrounding OS upgrades. The first myth is that you should wait to deploy the new OS or service pack because it will have a lot of bugs that should be fixed, so you should let someone else find those bugs first.
It should be obvious why this argument is flawed: exactly who is going to find all those bugs if everyone follows this advice? There are other problems with the argument too. For example, it is often supported by stories of problems with Windows NT® 4.0 Service Pack 2 (SP2). While it is true that there were stability issues with that service pack, software testing has progressed significantly in the 10 years since it was released. It is not too likely that the same problems plaguing that release would ever recur.
Furthermore, the vast majority of problems with OS and service pack releases occur because of custom applications and for two general reasons: either those apps are broken already and security implementations in the new OS simply highlight the flaws, or the apps hit some obscure bug in the OS that would not be encountered without the app. In the former case, the problem is the app, and it should be replaced, not retained. In the latter case, it is even more important that you start testing the service pack or OS as soon as you can, preferably in the beta period before it is released! It is much simpler to get a problem fixed before the OS or service pack is released than afterward.
It’s impossible to foresee all the combinations that will reduce security and stability. Only by having customers partnering with the software vendor and participating in the beta program can we ensure that the software is as stable and secure as possible.
The second OS myth is that your deployment must wait until you have a defined security configuration standard or you will put your network at risk. The fact is that new OS releases are much more secure out of the box than the operating systems of yesteryear are even with the most stringent hardening guidelines applied.
The threat models and assumptions to which the older operating systems were designed are no longer valid. For example, Windows® 2000 was designed to a threat model that was valid in 1997, and the Windows NT 4.0 threat model was valid in 1993. In 1997, microsoft.com had just migrated from a server underneath someone’s desk to a datacenter. In 1993, gopher was the protocol of choice for transporting information across the Internet. The World Wide Web had just been invented, NetBEUI was considered a state-of-the-art protocol, and OS/2 reigned as the industrial-strength operating system of savvy corporations.
There is no comparing the security fundamentals of those operating systems to Windows XP SP2 and Windows Server™ 2003 SP1, which were developed under the Security Development Lifecycle and against a threat model that was valid as we entered 2004. The threats, the mitigations, the very nature of how we do business has transformed in ways we could not have imagined, even in 1997.
The common view holds that we put the network at risk when we upgrade without a security configuration standard. Quite the opposite: we put the network at risk when we fail to upgrade to an OS designed to a valid and current threat model.
Myth: Password Cracking Is a Valid Way to Ensure That We Have Strong Passwords.
We cannot tell you how many times we have been asked to help define a process for cracking passwords in order to ensure that they comply with organizational security requirements. It appears that a preponderance of security personnel have been sold on the myth that password cracking is a valid way to ensure that passwords are good. This is patently untrue, and password cracking is a bad idea for many reasons.
The main reason is that password cracking does nothing whatsoever to ensure that you have strong passwords. Think about it: password cracking can only detect bad passwords after the fact. Not only is it possible—in some cases even likely—that the passwords have already been compromised by the time you crack them, but it is impossible to prevent the bad passwords from being set in the first place if all you do is crack them afterward. Further, you can never ensure that you will find all bad passwords using a password cracker. It is possible that the bad guys just have better warez than you do and they may find bad passwords that your tools miss!
There are other reasons as well for not cracking passwords. First, it could be considered an invasion of privacy. People consider their password a personal piece of information and will set passwords they really do not want others to see. You would probably never rifle through your coworkers’ wallets. Going through their passwords is no different. Second, password cracking is really slow. The good attackers today will not waste their time on it. If they get their hands on password hashes, they will use tools that consume those hashes directly, not tools that crack them since that takes too long. The hashes are plaintext equivalent, meaning there is no reason to crack them anyway. It also means that if, for whatever reason, you need to verify passwords after they are set, you may as well store them reversibly encrypted so you do not have to waste time cracking them. It really does not compromise security to do so since, as we said, the hashes are already equivalent to plaintext.
The proper way to ensure that you have strong passwords is to not allow users to set bad ones in the first place. This can be accomplished through the use of solid password policies and custom password filters. There are a few commercial custom password filters, but you can also roll your own if you know how to write C code. Just be careful—password filters run within the Local Security Authority (LSA). That would be a really bad place to have a buffer overflow!
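Real password filters are C DLLs that export the PasswordFilter function and are loaded into the LSA, which is why a bug there is so dangerous. As a loose illustration only, here is a minimal Python sketch of the kind of length-first policy such a filter might enforce; the threshold and banned words are our own hypothetical choices, not anything Windows ships with:

```python
# Hypothetical illustration of the policy a custom password filter might
# enforce. Real Windows password filters are C DLLs loaded into the LSA;
# this sketch only shows the kind of check such a filter could perform
# before a password change is committed.

def password_acceptable(password: str,
                        min_length: int = 14,
                        banned_substrings: tuple = ("seattle", "password")) -> bool:
    """Reject short passwords and obvious dictionary stems up front,
    rather than trying to crack bad passwords after the fact."""
    if len(password) < min_length:
        return False
    lowered = password.lower()
    return not any(bad in lowered for bad in banned_substrings)

if __name__ == "__main__":
    print(password_acceptable("Seattle1"))  # short and dictionary-based
    print(password_acceptable("SeandialVickyandhorusbloomkendallWyoming"))
```

Unlike cracking, a check like this delivers its verdict before the bad password ever takes effect.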
Myth: Passwords Must Be Complex to Be Strong.
This is an interesting myth, and we bet you’re thinking we’ve lost it at this point. Of course passwords need to be complex to be strong. No, they do not! They need to be looooonnnngggg. In fact, a really, really long password, by its very nature, is often much stronger than a short but complex one.
To see why, consider the prototypical terrible password: Seattle1. It is complex according to most definitions. It is eight characters long, has three of the four character sets in it, and fulfills the complexity requirements in the operating system. It is also hopelessly weak.
Let’s try to make it a bit more complex: Se@ttle1. Did it get any better? Not really. This password now contains all four character types, but it will take only marginally longer to guess. You may want to try this password, or a slight variation, in a password complexity checker. The checker will probably claim this password is at least medium strength. Clearly, just because a password is complex does not make it strong. But then, that is not actually what the myth claimed either. It claimed that all strong passwords are complex.
Now consider this password: SeandialVickyandhorusbloomkendallWyoming. It is not complex by any measure. It contains only two character types and all of its components are words. They are, in fact, words picked from the Microsoft password strength checker’s dictionary, which includes 2,254 words. There are 40 characters in this password. The character set those characters are chosen from consists of uppercase and lowercase English letters, or 52 characters in total. That means there are a total of 4.45×10^68 possible 1- to 40-character passwords from that character set. If you use a brute force attack and can guess 600 passwords per second, it will take you 1.63×10^58 years to guess this password. But you may have captured a connection to a server and have the challenge-response sequence to crack it. In this case it will take you only 1.30×10^54 years, assuming you are a nation-state with access to nearly unlimited computing power.
Oh, but you may argue that these are all words, so we just try combinations of words. Fair enough. Let’s say you even know that it is picked from the password checker dictionary and that you know there are eight words in the password. That improves your ability to crack it significantly. It will now only take 1,948,790,798,336 years to crack. If we remember correctly from physics class, the universe is about 14,000,000,000 years old, so it will take you roughly 140 times longer than the universe has existed to crack this password, assuming you don’t have to restart your computer to apply a service pack before then. Since our policy forces us to change passwords every 90 days, there is a pretty good chance we will have changed passwords by the time you are finished cracking it. Now, we would consider that a pretty strong password even though it is not very complex!
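For the curious, the arithmetic behind these estimates can be sketched as follows. The exact years printed differ somewhat from the figures quoted above, since those depend on rounding and the assumed guessing rate, but the scale is the point:

```python
# Back-of-the-envelope arithmetic for the long-password example.
# These figures illustrate scale only; exact values depend on the
# assumed guessing rate and rounding.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
GUESSES_PER_SECOND = 600

# Brute force: every 1- to 40-character password over the 52 upper-
# and lowercase English letters.
charset = 52
search_space = sum(charset ** n for n in range(1, 41))  # ~4.45e68

brute_force_years = search_space / GUESSES_PER_SECOND / SECONDS_PER_YEAR

# Dictionary attack: the attacker knows it is exactly eight words drawn
# from a 2,254-word dictionary.
dictionary_space = 2254 ** 8  # ~6.7e26 combinations

print(f"{search_space:.2e} candidate passwords by brute force")
print(f"{brute_force_years:.2e} years at {GUESSES_PER_SECOND} guesses/sec")
print(f"{dictionary_space:.2e} eight-word combinations")
```

Even the "smart" dictionary attack leaves a search space twenty-odd orders of magnitude beyond any 90-day password lifetime.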
Clearly, complexity is not the only requirement for password strength. Jesper demonstrated in a TechNet online article that, in fact, length is much more important. Nothing adds strength to the password as much as adding characters to it.
Myth: You Can Always Roll Back Configuration Errors with Setup security.inf
The setup security.inf template is a security template created at setup that contains the security settings configured when the OS was installed. It is commonly believed that this template can be used to roll back security settings should you make a mistake. This myth is so pervasive that there is even Microsoft documentation that makes this claim. Unfortunately, it’s not true.
Setup security.inf is just a log file. The installer does apply a template during setup: defltwk.inf on workstations and defltsvr.inf on servers. Setup security.inf never gets read at all. The installer simply writes to it when a component calls particular APIs during setup to configure security. Components that do not call those APIs do not have their settings logged. Neither do any components that are installed after setup or any settings that are configured after setup. An example may help illustrate this point.
During Windows XP setup, the installer does not create any user profiles under %systemdrive%\Documents and Settings. The installer only creates the Default User directory. You only get a profile directory when a user logs on the first time. Furthermore, that profile directory does not inherit its access control list (ACL) from the parent directory. Instead, the operating system programmatically sets the ACL when the directory is created. Since these directories are created after setup has finished, the setup security.inf file does not contain a record of the ACL. Therefore, you cannot use setup security.inf to roll back those ACLs should you happen to destroy them. And since defltwk.inf only sees use during setup, it also lacks any record of what these ACLs are supposed to be and cannot be used to roll them back.
The fact is it’s nearly impossible to roll back security settings, particularly ACLs. Theoretically, a third-party program can shim the operating system and create a record of all security changes made on it, but unless it is also written to shim object creation and deletion, such a program will be unable to fully restore security configurations. It would need all that information to calculate what settings should be made if an object were created or deleted after the last time its security was modified. This is a very difficult problem to solve and currently Windows does not support the ability to roll back security. If you accidentally make security changes that break something, the only fully supported way to undo the changes is to format and reinstall.
Myth: NTLM Is Bad, and You Should Disable It.
Windows supports several authentication protocols: the ancient LAN Manager; the slightly more modern Windows NT LAN Manager (NTLM), introduced with Windows NT 3.1; a nameless variant of NTLM introduced in Windows NT 4.0 SP4; NTLM version 2, also introduced in Windows NT 4.0 SP4; and, of course, Kerberos, introduced in Windows 2000.
It is common knowledge that LAN Manager is really weak and that NTLM is only slightly better. However, does that mean you should categorically disable them, and that you are clearly not doing your job as a security professional unless you do? No, it does not.
The main reason not to disable NTLM or LAN Manager is that doing so breaks things. Disabling LAN Manager mostly just breaks Windows 95 and Windows 98 (which some may consider a security benefit). However, it may break other things as well, such as clusters and third-party devices, so be careful if you’re going to try it.
Disabling NTLM will break Windows NT 4.0 in many instances and, depending on how you disable it, may break a lot more than that. For example, NTLM is used to authenticate with Windows XP when you are not on the domain. In fact, even if you are on the domain you may use NTLM for initial authentication if the computer was just started. We call this fast logon and it is why you can log on to the system so fast after it starts. You are actually authenticating against cached credentials and then reauthenticating against the domain when the system finds the domain controller.
We once saw an organization that attempted to disable NTLM by—get this—removing the links to the MSV1_0 dynamic link library. This means you will never be able to authenticate against a local account, nor will you be able to use a domain account unless the machine has authenticated to the domain already. Needless to say, that particular tweak is not supported.
Couldn’t you disable NTLM everywhere by setting the LMCompatibilityLevel value? Well, yes, but it is not going to improve security in some instances. For instance, consider a Web server and a database server in the datacenter. They can only communicate with each other due to the IPsec rules, VLAN configuration, and router filters. The network cable connecting them is inside a secure facility and is about eight centimeters long.
What is the exact threat to using NTLM in that situation? None is the correct answer. In order to sniff any traffic between those two systems, you have to be inside the datacenter or at least inside one of the systems. If you are, the security has already been breached and it does not matter whether NTLM is used. Disabling NTLM only protects the authentication against man-in-the-middle attacks on an insecure network transport. If the network transport is already secure, disabling NTLM adds nothing.
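For reference, LMCompatibilityLevel is a REG_DWORD named LmCompatibilityLevel under HKLM\SYSTEM\CurrentControlSet\Control\Lsa. The sketch below paraphrases the documented levels; treat the wording as our summary, and verify against current Microsoft documentation before deploying, since raising the level can break older clients as described above:

```python
# Paraphrased summary of the documented LmCompatibilityLevel settings
# (REG_DWORD under HKLM\SYSTEM\CurrentControlSet\Control\Lsa).
# Descriptions are our own shorthand, not official text.

LM_COMPATIBILITY = {
    0: "Send LM and NTLM responses",
    1: "Send LM and NTLM; use NTLMv2 session security if negotiated",
    2: "Send NTLM response only",
    3: "Send NTLMv2 response only",
    4: "Send NTLMv2 response only; domain controllers refuse LM",
    5: "Send NTLMv2 response only; domain controllers refuse LM and NTLM",
}

if __name__ == "__main__":
    for level, meaning in sorted(LM_COMPATIBILITY.items()):
        print(f"{level}: {meaning}")
```

Note that even level 5 only governs which challenge-response protocols are offered and accepted; it does nothing for traffic on a link that is already physically secure.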
Myth: Don't Allow User Names to Display Because They Leak Half the Secret You Need to Log On.
Leaking half the secret means that you’ve leaked half your password or half your private key. This myth reflects a fundamental misunderstanding of identity and authentication. Logging onto a system requires that you provide two pieces of information: a public identity and a private authenticator. Your public identity is your user ID, which in Exchange also happens to be the part before the @ sign on your business card—a very public display indeed. But identity isn’t enough to log onto a system. Anyone can claim to be you. That’s why you have to combine it with an authenticator—a secret that proves you really are who you claim to be. Because only you and the system know the secret, the system grants you access.
Having knowledge of your identifiers is not really all that useful to an attacker. It only aids an attack if the corresponding authenticators are weak. Using strong authentication (like loooonnnnngggg passwords or digital certificates with private keys stored in smart cards or USB tokens) is how you keep accounts strong and resilient against attacks. In many organizations, identifiers can be deduced easily—steve.riley and jesperjo, for example. Even if we tried to keep them secret, deducing others from a known pattern would be trivial. If your security rests on keeping secret something that was never designed to be kept secret, then it’s only a matter of time before some bad guy works his way through your security by obscurity and starts causing you grief.
Myth: Let's Block Bad Stuff.
We’ve told the unicorn analogy before—it bears repeating here because it’s the perfect illustration. How do you prove there are no unicorns? Well, you’d have to go to every possible place that there might be a unicorn and observe that no unicorn exists in that place. Thing is, unicorns move. And they very likely will keep one step ahead of you because they prefer lives of secrecy. So you’d have to go to every possible place that there might be a unicorn all at the same time and simultaneously observe that every place lacks a unicorn. Have you mastered the difficult art of being in more than one place at a time? We didn’t think so. Therefore, you can’t prove there are no unicorns.
How do you prove that there are unicorns? Simple: find one.
It’s nearly the same when your security stance is to enumerate and block the bad stuff. How can you do it all? How can you possibly know everything that’s bad (including the bad things that don’t yet exist), block all of it without accidentally blocking good stuff, and somehow keep the list up to date? New malware emerges more rapidly than anyone can keep up with even now. Therefore, you can’t block all that’s bad.
However, you can define what’s good. Using tools like software restriction policies, Group Policy, IPsec, and Windows Firewall, you can create an environment where you define by policy what is permitted to install, run, and communicate. Anything that isn’t expressly permitted is therefore blocked. Now you don’t need to worry about enumerating and tracking all the bad stuff because you’ve made a policy statement (and are enforcing it with technology) that everything not permitted is, by definition, bad.
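As a toy illustration of that default-deny stance, here is a short Python sketch loosely modeled on the hash rules in software restriction policies. The file contents are hypothetical, and a real deployment would of course use the policy mechanisms named above rather than a script:

```python
# Minimal sketch of "define what's good": anything whose hash is not on
# the allow list is blocked. Loosely modeled on software restriction
# policy hash rules; file contents here are hypothetical.

import hashlib
import os
import tempfile

def file_sha256(path: str) -> str:
    """Hash the file's contents so the rule follows the binary, not its name."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def may_run(path: str, allowed_hashes: set) -> bool:
    """Default deny: anything not expressly permitted is blocked."""
    return file_sha256(path) in allowed_hashes

if __name__ == "__main__":
    approved = tempfile.NamedTemporaryFile(delete=False)
    approved.write(b"line-of-business app"); approved.close()
    toolbar = tempfile.NamedTemporaryFile(delete=False)
    toolbar.write(b"silly little toolbar"); toolbar.close()

    allowed = {file_sha256(approved.name)}   # only the approved binary
    print(may_run(approved.name, allowed))   # permitted
    print(may_run(toolbar.name, allowed))    # blocked by default
    os.unlink(approved.name); os.unlink(toolbar.name)
```

The inversion is the whole point: the allow list is finite and knowable, while the list of bad stuff never is.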
Your users may howl, scream, and generally carry on like children about how they can’t do their jobs anymore. Pfffft. No one needs to download silly little toolbars to do their jobs. You should, though, take advantage of organizational units (OUs) in Active Directory® to group computers (servers and workstations) by role. Define your security settings on OUs, and then simply assign each person’s computer to the OU that makes sense for that person’s job duties.
Myth: Security Controls Are Better When Centralized.
There is a lot of value in centralizing your security controls. For instance, rather than maintaining IPsec rules on 50,000 computers, maybe you can maintain 802.1x rules on 1,000 switches, centralizing control and simplifying management. There is some value in that. However, it leaves the individual nodes on the network without the ability to defend themselves. They have to rely on some other entity to protect them. The same holds true in many situations.
Let’s say we completely ignore people security. We try to shield them from making bad security decisions by using antivirus, antispyware, antirootkits, antispam, antiphishing, and any other antiwhatever technology you can think of. We block all the known bad Web sites at the proxy, we filter their e-mail, and we disable USB drives using unsupported hacks.
Now think about what happens when people leave the organization, even for just the evening. They have been shielded from security all along, and now they go home, passing the Internet café on the way. There they have no antitechnologies. They do, however, have a guy next to them with a really cool little thing that says "memory stick" on the front, and he claims it has pictures of naked dancing pigs on it. He only wants $9.95 for it too!
The next morning your coworker decides to show the naked dancing pigs to some other coworkers and shoves the memory stick into his computer. You now get to spend the next couple of months flattening and rebuilding the network, unless you have an updated resume ready to post.
The problem with this thinking is that when we only rely on centralized security, we leave all our assets without the ability to protect themselves. We really need to have both centralized and distributed security. Each asset needs to be empowered with the ability to protect itself. That goes for computers and information just as it goes for people.
Myth: I've Updated. I've Got Antimalware. I've Got a Firewall. I'm Safe.
No, we aren’t slamming the core steps to help protect your PC. Indeed, for many people, keeping up to date, running good antimalware (that is, antivirus and antispyware), and configuring a host firewall are critical components of keeping a computer safe from attack. But do you need all three? More importantly, are these all you need?
Think of an enterprise that followed our guidance and decided to use policy tools to limit a computer’s actions. In such an environment, is antimalware really necessary? If your network has deployed centrally managed host firewalls to every computer, controls what is allowed to install and run using software restriction policies, and limits who can communicate to whom using IPsec, how can malware enter and spread? Certainly ZoBigSter/w.32 variant zed-zed-9 plural-zed-alpha isn’t on your list of permitted apps, so if it tries to infiltrate your organization, it just ain’t gonna go nowhere.
Maybe, just maybe, you don’t need antimalware. It all depends on how you operate and manage your environment—including running as a non-administrator (which will be much easier in Windows Vista™). There is, however, no question about the efficacy of the other two items we mention here: updates and host firewalls. These are absolutely critical in an organization of any size.
We’ll continue to remind everyone—there’s no substitute for updating. However, you need to stop thinking of updating as a security exercise. It isn’t. Updating is a systems management and configuration control discipline. Stop looking at it as a security problem. Start planning how you will move updates out of the IT security role and into the systems management role. Ideally, the groups responsible for desktop, laptop, and server maintenance are the ones who should manage your updates. They are, after all, already skilled in other areas of system maintenance.
When computers were too big to carry around, network-level protection was sufficient to protect all the computers connected to the network. This is no longer true. Hundreds of organizations worldwide get infected by various worms not through e-mail, but from unmanaged portable computers that bring infections back to the corporate network. No amount of VLANs, router ACLs, or network firewalls will help control that. Indeed, it’s a safe bet that most corporate networks have become nearly as hostile and dangerous as the Internet (further evidence that the perimeter is eroding). Host firewalls not only stop the spread of malware by blocking inbound connections, but also help protect each individual computer from the rest of the network.
Myth: Host-Based Firewalls Must Filter Outbound Traffic to Be Safe.
Speaking of host firewalls, why is there so much noise about outbound filtering? Think for a moment about how ordinary users would interact with a piece of software that bugged them every time a program on their computer wanted to communicate with the Internet. What would such a dialog box look like? "The program NotAVirus.exe wants to communicate on port 34235/tcp to address 18.104.22.168 on port 2325/tcp. Do you want to permit this?" Ugh! How would your grandmother answer that dialog box? Thing is, your grandmother just got an e-mail with an attachment that promises some rather sexy naked dancing pigs. Then this crazy dialog box appears. We promise: when the decision is between being secure and watching some naked dancing pigs, the naked dancing pigs win every time.
The fact is, despite everyone’s best efforts, outbound filtering is simply ignored by most users. They just don’t know how to answer the question. So why bother with it? Outbound filtering is too easy to bypass, too. No self-respecting worm these days will try to communicate by opening its own socket in the stack. Rather, it’ll simply wait for the user to open a Web browser, then hijack that connection. You’ve already given the browser permission to communicate, and the firewall has no idea that a worm has injected traffic into the browser’s stream.
Outbound filtering is only useful on computers that are already infected. And in that case, it’s too late—the damage is done. If instead you do the right things to ensure that your computers remain free of infection, outbound filtering does nothing for you other than, perhaps, to give you a false sense of being more secure. Which, in our opinion, is worse than having no security at all.