By Jesper M. Johansson and Steve Riley
This is the second article based on Jesper and Steve's new book "Protect Your Windows Network." The book will be released in late May from Addison-Wesley. You may also pre-order the book from many online resellers.
Myth 5: All Environments Should at Least Use <Insert Favorite Guide Here>
One size does not fit all. Every environment has unique requirements and unique threats. If there truly were a guide for how to secure every single system out there, the settings in it would be the defaults. People who make such statements fail to take into account the complexity of security and system administration. As we mentioned in the first article, administrators get calls only when things break. Security breaks things; that is why some security-related settings are turned off by default. To protect an environment, you have to understand what that environment looks like, who is using it and for what, and which threats they have decided need to be mitigated. Security is about risk management, and risk management is about understanding and managing risks, not about making changes for their own sake, solely to justify one’s existence and paycheck.
At the very least, an advanced system administrator should evaluate the security guide or policy that will be used and ensure that it is appropriate for the environment. Certain tailoring to the environment is almost always necessary. These are not things that an entry-level administrator can do, however. Care is of the essence when authoring or tailoring security policies.
Myth 6: “High Security” Is an End-Goal for All Environments
High security, in the sense of the most restrictive security possible, is not for everyone. As we have said many times by now, security will break things. In some environments you are willing to break things in the name of protection that you are not willing to break in others. Had someone told you on September 10, 2001, that you needed to arrive at the airport three hours ahead of your flight to be all but strip searched and have your knitting needles confiscated, you would have told them they were insane. High security (to the extent that airport security is truly any security at all and not just security theater) is not for everyone, and in the world we lived in until about 08:00 EDT on September 11, 2001, it was not for us. Once planes took to the skies again, fewer people questioned the need for more stringent airport security.
The same holds true of information security. Some systems are subjected to incredibly serious threats. If these systems get compromised, people will die, nations and large firms will go bankrupt, and society as we know it will collapse. Other systems contain far less sensitive information and thus need not be subjected to the same level of security. The protective measures that are used on the former are entirely inappropriate for the latter; yet we keep hearing that “high security” is some sort of end-goal toward which all environments should strive. These types of statements are an oversimplification that contributes to the general distrust and disarray in the field of information security today.
Myth 7: Start Securing Your Environment by Applying a Security Guide
You cannot start securing anything by making changes to it. Once you start changing things, the environment changes and the assumptions you started with are no longer valid. To reiterate what we have said many times: Security is about risk management; it is about understanding the risks and concrete threats to your environment and mitigating those. If the mitigation steps involve taking a security guide and applying it, so be it, but you do not know that until you analyze the threats and risks.
Myth 8: Security Tweaks Can Fix Physical Security Problems
There is a fundamental concept in information security: if the bad guys have physical access to your computer, it is not your computer any longer! Physical access will always trump software security -- eventually. We have to qualify that statement, though, because there are valid software security steps that will prolong the time until physical access breaches all security. Encryption of data, for instance, falls into that category. However, many other software security tweaks are meaningless. Our current favorite is the debate over USB thumb drives. After the movie “The Recruit,” everyone woke up to the fact that someone can easily steal data on a USB thumb drive. Curiously, this only seems to apply to thumb drives. We have walked into military facilities that confiscated our USB thumb drives but let us in with 80-GB IEEE 1394 (FireWire) hard drives. Apparently, those are not as bad.
One memorable late evening, one author’s boss called him frantically, asking what to do about this problem. The response: Head on down to your local hardware store, pick up a tube of epoxy, and fill the USB ports with it. While you are at it, fill the IEEE 1394 (FireWire), serial, parallel, SD card, MMC, Memory Stick, CD/DVD-burner, floppy drive, and Ethernet jack with it too. You’ll also need to make sure nobody can carry off the monitor and photocopy it. You can steal data through any of those interfaces.
The crux of the issue is that as long as there are these types of interfaces on the system and bad guys have access to them, all bets are off. There is nothing about USB that makes it any different. Sure, the OS manufacturer could put a switch in that prevents someone from writing to a USB thumb drive. That does not, however, prevent the bad guy from booting to a bootable USB thumb drive, loading an NTFS driver, and then stealing the data.
In short, any software security solution that purports to be a meaningful defense against physical breach must persist even if the bad guy has full access to the system and can boot into an arbitrary operating system. Registry tweaks and file system ACLs do not provide that protection, but encryption does. Combined with proper physical security, all these measures are useful. As a substitute for physical security, they are usually not.
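The distinction can be sketched in a few lines of Python. This is a toy model, not the authors' example and not real cryptography: the keystream cipher below merely stands in for proper encryption (EFS, full-disk encryption, and the like), and the `acl` field stands in for NTFS ACL metadata, which only the running operating system consults.

```python
import hashlib

# Toy keystream cipher: a stand-in for real encryption.
# Do NOT use this construction to protect actual data.
def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so this function also decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

secret = b"payroll: confidential"

# An ACL is metadata that only the *running* OS enforces. An attacker who
# boots an arbitrary OS reads the raw bytes and never consults it.
acl_protected = {"bytes": secret, "acl": "deny everyone"}
assert acl_protected["bytes"] == secret  # offline attacker wins instantly

# Encryption ties protection to a key rather than to the enforcing OS.
ciphertext = toy_encrypt(b"owner-key", secret)
assert ciphertext != secret                               # raw bytes useless
assert toy_encrypt(b"owner-key", ciphertext) == secret    # key recovers data
assert toy_encrypt(b"guessed-key", ciphertext) != secret  # wrong key fails
```

The point the sketch makes is structural: the ACL check lives in the software that can be bypassed by booting something else, while the encryption key lives outside the stolen hardware entirely.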
Myth 9: Security Tweaks Will Stop Worms/Viruses
Because worms and viruses (hereinafter collectively referred to as “malware”) are designed to cause the maximum amount of destruction possible, they try to hit the largest number of vulnerable systems. Thus they tend to spread through one of two mechanisms: unpatched/unmitigated vulnerabilities and unsophisticated users. Although some security tweaks will stop particular malware (Code Red, for instance, could have been stopped by removing the Indexing Service extension mappings in IIS), the vast majority of it cannot be stopped that way because it spreads through the latter vector. Given the choice between dancing pigs and security, users will choose dancing pigs, every single time. Given the choice between pictures of naked people frolicking on the beach and security, roughly half the population will choose the naked people frolicking on the beach. Couple that with the fact that users do not understand our security dialogs and we have a disaster. If a dialog asking the user to make a security decision is the only thing standing between the user and the naked people frolicking on the beach, security does not stand a chance.
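The Code Red example is worth a quick illustration: the extension-mapping table itself is the attack surface. The sketch below is a toy model in Python, not IIS code; `vulnerable_isapi`, `handle_request`, and the 200-character "overflow" threshold are invented stand-ins for the real .ida/.idq ISAPI buffer overrun.

```python
# Toy model of IIS extension mappings: a request reaches a handler only if
# its extension is mapped. With the .ida mapping removed, the vulnerable
# code is never invoked and the worm's request dies as a plain 404.

def vulnerable_isapi(query: str) -> str:
    # Stand-in for the unchecked buffer in the Indexing Service ISAPI DLL.
    if len(query) > 200:
        raise RuntimeError("buffer overrun -> attacker code execution")
    return "200 OK"

def handle_request(path: str, query: str, mappings: dict) -> str:
    ext = "." + path.rsplit(".", 1)[-1]
    handler = mappings.get(ext)
    if handler is None:
        return "404 Not Found"  # unmapped: vulnerable code never runs
    return handler(query)

# Roughly the shape of the Code Red request (a long run of N's followed
# by %u-encoded shellcode).
codered_payload = "N" * 224 + "%u9090"

# With the .ida mapping present, the overflow is reachable:
#   handle_request("default.ida", codered_payload, {".ida": vulnerable_isapi})
#   -> RuntimeError

# With the mapping removed, the same request is harmless:
assert handle_request("default.ida", codered_payload, {}) == "404 Not Found"
```

Removing the mapping is attack-surface reduction: the vulnerable code path simply cannot be reached, patched or not. But as the paragraph above notes, this works only against malware that exploits a component; no mapping table stops a user who clicks yes.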
Myth 10: An Expert Recommended This Tweak as Defense in Depth
This myth has two parts. Let us deal with the second part first. Defense in depth is a reasoned security strategy applying protective measures in multiple places to prevent unacceptable threats. Unfortunately, far too many people today use the term “defense in depth” to justify security measures that have no other realistic justification. Typically, this happens because of the general belief in myth 3 (more tweaks are better). We make more changes to show the auditors that we are doing our job and to have them chalk us up as having done due diligence.
This shows an incredible immaturity in the field, much like what we saw in Western “medicine” in the Middle Ages. Medics would apply cow dung, ash, honey, beer, and any number of other things, usually in rapid succession, to wounds to show that they were trying everything. Today, doctors (more typically nurses, actually) clean the wound, apply a bandage and potentially an antibiotic of some kind, and then let it heal. Less is very often more, and using defense in depth as a way to justify unnecessary and potentially harmful actions is inappropriate.
The first part of this statement is one of our favorites. As a society we love deferring judgment to experts because, after all, they are experts and know more than we do. The problem is that the qualification process for becoming an expert can be somewhat lacking. We usually point out that the working definition of a security expert is “someone who is quoted in the press.” Based on the people we often see quoted, and our interactions with those people, that belief seems justified. It is no longer actions that define an expert, just reputation; and reputation can be assigned. Our friend Mark Minasi has a great statement that we have stolen for use in our own presentations. To be a security consultant, all you have to know is four words: “The sky is falling.” Having been security consultants and seen what has happened to the general competence level in the field, we find that this statement certainly rings true. There are indeed many good security consultants, but there are also many who do not know what they need to and, failing to recognize that, sometimes charge exorbitant amounts of money to impart their lack of knowledge and skills to unsuspecting customers.
This article has dealt with the things that you should avoid when managing security. In the book "Protect Your Windows Network," Jesper and Steve cover the things you should do, and you may be surprised at some of the conclusions.
As always, this column is for you. Let us know if there is something you want to discuss, or if there is a better way we can help you secure your systems. To send us a note, just click the “Contact Us” link below.