Security Watch
Principles of Quantum Security

Jesper M. Johansson

It can be a lot of fun to toss something unexpected, but valuable, into a conversation. For a while now, I've been using the Heisenberg Uncertainty Principle to explain security concepts. (Those who have heard my unicorn story know that I like to bring in fundamental theories from other disciplines to make a point.) Strange as it may seem, there do exist information security corollaries to some of the fundamental concepts in other branches of science.

The Heisenberg Uncertainty Principle, which comes from quantum physics, is based on the equation shown in Figure 1. The principle itself states that the position of a particle (x) and the momentum of a particle (p) are related such that if you increase your precision of measuring the position, you decrease the precision of measuring the momentum. The boundary on the precision of the two factors is a small fixed multiple of Planck's constant. In more comprehensible language, this principle says that you cannot observe certain pairs of facts about a single particle with full precision at the same time.

Figure 1 The Heisenberg Uncertainty Principle: Δx · Δp ≥ ħ/2

This relates directly to uncertainty bands, which express how uncertain you are about a pair of facts about a particle. While you cannot (yet) compute such bands for a network, the underlying insight carries over: you cannot predict with certainty how two variables on a single computer network will together affect the security of that network.

You also have to make trade-offs. While you may not be bounded by Planck's constant in information security, you are bounded nonetheless. The most salient trade-off you have to make is between security and usability or usefulness. If you ignore availability for a moment—and, quite frankly, I believe security personnel should not be responsible for availability other than in the face of denial of service attacks—the easiest way to secure something is to turn it off. That way, it can no longer be hacked. Usefulness takes a nose dive, though.

Likewise, while it costs significantly more, you can prevent far more confidential data leakage by implementing an expensive data loss prevention product than you can by sending an e-mail to all employees once a year reminding them that instant messaging applications are not for business use. But is your organization willing to pay for the data loss prevention tool? Now you can see where quantum physics is more relevant to security than you may have thought.

Schrödinger's Cat

A related, but different, quantum physics concept is explained in the parable of Schrödinger's Cat. The story goes that Erwin Schrödinger, a prominent 20th century physicist, had a long-running argument with Albert Einstein (another prominent 20th century physicist you may have heard of) about the concept of superposition. Schrödinger found the areas of quantum mechanics that had elements existing in a superposition of two states quite absurd.

To make his point, Schrödinger posed a thought experiment in which a cat was stored in a hermetically sealed box. The other item in the box was a container of poison, the release of which was controlled by means of some (imaginary) subatomic radioactive particle. If the radioactive particle decayed, the poison would be released, and the cat would die.

The subatomic particle, being subject to the laws of quantum mechanics, exists in a superposition of states. The decidedly atomic cat, its state being entirely dependent on the state of the subatomic particle, therefore also exists in this superposition of states. Only by actually observing the state of the cat do we finally put it into a specific state of aliveness or deadness.

Schrödinger, of course, intended for this to illustrate how absurd some of the laws of quantum mechanics are when applied to atomic systems. Nevertheless, his example is often credited with giving rise to what in layman's terms is called the "observer effect." The observer effect, while it does not apply to the subatomic systems of interest in quantum mechanics, is interesting to us as security professionals. Simply put, it states that by observing something, you change it.

The Observer Effect

To explain it a bit more, consider the example of a cup of tea. To find out how hot the tea is, you put a thermometer in the cup. Let's say the tea was 80 degrees Celsius when you put the thermometer in, and the thermometer itself was 22 degrees (room temperature). As soon as you put the thermometer in the tea, it starts absorbing heat from the tea, and the tea gives up some of its heat to heat up the thermometer. Eventually, the thermometer and the tea will have the same temperature. However, that temperature will not be 80 degrees, nor will it be 22 degrees—it will be something in between. Thus, by measuring the temperature of the tea, you changed it.
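
To see the effect in numbers, here is a minimal Python sketch of the tea example. The heat capacities are invented, illustrative values (the example above gives only the temperatures); the final temperature is simply the average of the two starting temperatures, weighted by heat capacity.

```python
# A back-of-the-envelope model of the thermometer example. The heat
# capacities below are made-up, illustrative numbers.
def equilibrium_temp(t_tea, c_tea, t_thermo, c_thermo):
    """Final shared temperature of two bodies that exchange heat until
    they match: a weighted average, weighted by heat capacity."""
    return (t_tea * c_tea + t_thermo * c_thermo) / (c_tea + c_thermo)

# Tea at 80 C (large heat capacity) and a small thermometer at 22 C.
print(equilibrium_temp(80.0, 1000.0, 22.0, 10.0))  # ~79.43: between 22 and 80
```

The measured value is close to 80 degrees, but it can never equal it: observing the temperature changed it.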

The observer effect is also relevant to infosec. Every time you do something to mitigate a security issue, you potentially modify your security posture since you modify the system. I'll call this the Security Posture Fluidity Effect (SPFE) for lack of a better term.

One of the simpler examples is the case of service account dependencies. In Chapter 8 of Protect Your Windows Network (protectyourwindowsnetwork.com), I discuss installing an Intrusion Detection System (IDS) on your systems to detect attacks. However, the IDS logs to a central system and requires highly privileged access to the monitored systems. Most of the time that means the service runs in a highly privileged service account.

If any one of those systems is compromised, the attacker can harvest the service account credentials, access all the other systems, and disable the IDS itself. This is a perfect example of the SPFE: by installing something to provide security in your environment, you create a new potential security issue.
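
A hypothetical back-of-the-envelope calculation makes the exposure concrete. The per-host compromise probability below is an assumed figure, chosen only for illustration; the point is that a shared privileged service account turns any single breach into a network-wide one.

```python
# Hypothetical sketch: blast radius of a shared privileged service account.
# The per-host compromise probability is an assumed, illustrative figure.
def p_any_host_compromised(p_per_host, n_hosts):
    """Probability that at least one of n independent hosts is breached."""
    return 1.0 - (1.0 - p_per_host) ** n_hosts

n = 50    # monitored systems running the IDS agent
p = 0.02  # assumed annual compromise probability per host

# Without the shared account, one breach affects one host. With it, one
# breach yields credentials that are valid on all n monitored systems.
print(f"P(a breach that exposes every host) = {p_any_host_compromised(p, n):.2f}")
# ~0.64: installing the agent made a single breach a network-wide event.
```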

Other examples of the SPFE abound. Steve Riley's description of why 802.1x has certain problems on wired networks (microsoft.com/technet/community/columns/secmgmt/sm0805.mspx) is another great example. Obviously, using host-based firewalls is very desirable in many environments. However, if used in conjunction with 802.1x, the combination makes a particular attack possible where it was not possible before. Again, a security technology modifies the security posture of the environment and enables an attack that was not otherwise possible.

None of this means that these measures are inherently useless and should be avoided. What it means is that you need to consider the implications of what you do in terms of security across the board. When you do risk management, you have to consider what your actions and preventions really mean for risk. When mitigating a problem, you cannot simply stop after you have implemented countermeasures. In a very real sense, security is a process. You need to use a defense-in-depth strategy, but consider how the defenses will change your security posture and how you will meet the new threats that emerge due to your strategy.

A Maturing Hardening Strategy

I have been working on hardening guidance for more than 10 years. Looking back at the first few guides I worked on, I find they were nothing more than lists of settings that my colleagues and I thought you should turn on. Back then, the general strategy behind security was naïve—basically, if something was intended to provide security, the logic was that it must be turned on. The fact that there may have been legitimate reasons for it to be turned off in the first place did not occupy much thought.

Eventually, I learned that to do hardening effectively, you have to put a lot of thought into the hardening and consider the threats that the system is subject to. This revelation led to the list of scenarios that you see in the Windows 2000 Security Hardening Guide (go.microsoft.com/fwlink/?LinkID=22380). The natural progression from there was to split the guidance into different threat levels, which is what you see in the Windows Server 2003 Security Guide (go.microsoft.com/fwlink/?linkid=14846).

All that time, I was nagged by the fact that it was still possible to break into a system even with all the guidance turned on and in place. This was because there were missing patches, some operational practice let me in, or some insecure third-party software was installed.

It turns out that the hardening in those two guides did not do much to help the aggregate security posture. It all came down to just a few key things. In fact, the most secure system I ever built (msdn2.microsoft.com/library/aa302370) employed only four or five of the tweaks in those security guides, and none of them actually stopped any of the attacks it was subjected to. Unfortunately, people just want a big blue "Secure Me Now" button that will harden systems automatically, but security is more complicated than that.

How to Approach Risk

To be protected (as opposed to reaching some finite state of "secure"), you must take a reasonable approach to risk. You must understand what risks you are facing, decide which of them you want to mitigate, and then understand how to mitigate them. That is seldom done by simply throwing switches.

Generally, these security switches are designed for specific scenarios and don't necessarily create the best configuration for your scenario. Instead, they create a good baseline starting point and are available mainly because much of the marketplace demands these quick hardening tweaks. Most are already on by default in modern software, and the ones that are not are likely to have significant side effects.

Moreover, throwing bunches of switches may produce an unsupportable and unstable system that may not perform the tasks you need it to perform—all to mitigate threats you have not enumerated yet. The SPFE, and quantum physics, tells us to reanalyze after we secure things, but too many organizations fail to do this. In fact, they fail to analyze the threats in the first place. If you start by actually analyzing the threats, you will find that the things that really make a difference are not restrictions on anonymous enumeration of account names and wholesale access control list changes.

Rather, what really makes a difference is determining whether your system should be providing a particular service and, if so, to whom, and then enforcing that policy. What makes a difference is ensuring that only those systems and users who absolutely need to communicate with you are allowed to do so. What makes a difference is ensuring that all applications and users run with the least possible privilege. In short, what makes a difference is taking a sensible approach to security and doing the difficult analysis so that you enable each system to take responsibility for its own security.

This is why the current approach to security now employs such tools as Server and Domain Isolation (microsoft.com/sdisolation), the Security Configuration Wizard (SCW) in Windows Server® products, and the Server Manager tools for managing roles in Windows Server 2008. These tools walk you through the process of understanding the scenarios you have to support and then help you lock down your systems appropriately. Granted, these tools don't offer the panacea everyone wants, but they produce a supportable configuration that actually performs the tasks for which you purchased the software.

Apply Security where It's Needed

What all this means is that you can't rely on other entities or simple tweaks for security. Each asset must be capable of defending itself because concepts such as "perimeter" and "internal network" are meaningless today.

The vast majority of organizational networks are at best semi-hostile. You need to understand that and take the appropriate steps, without relying solely on knee-jerk hardening guidance. To start with, you must understand your needs. And you should look for guidance from others who also understand your needs.

A guest on a webcast recently recommended that small-business users go to the Department of Homeland Security site and download a hardening guide for Internet Explorer®. Why should you expect a government agency that is charged with protecting the military and national security establishment to provide sensible guidance on how to secure a Web browser in a small business? This is a perfect example of some of the poor logic still prevalent in the field of computer security. The assumption is that since the guidance was issued by a three-letter government agency, the results must therefore be highly secure.

The choice is usually presented as a binary decision: you can either have "high security" or "low security." A better way to present this choice is between "high security" and "appropriate security." Security isn't one-size-fits-all.

In fact, high security is not for everyone—and it typically is not for most! It is not an end goal toward which you should strive. It is a specialized configuration for systems where people will die if the system is compromised. If that fits your risk profile and threat model, then use high security.

If that is not your risk profile and threat model, then you should use something more appropriate. More than likely, the tweaks you can configure are already set, by default, to a level appropriate for a moderate risk profile. This default state, used in most Microsoft products today, provides a reasonable trade-off between security and usability, usefulness, and performance. As far as hardening goes, it is typically done for you.

Now you need to consider other security steps. A good place to start would be the TechNet Security Center (microsoft.com/technet/security) and, of course, books such as the Windows Server 2008 Security Resource Kit.

It's All Risk Management

By now you have probably realized where I am going with this: risk management. The key message of the Heisenberg Uncertainty Principle is that you have to make trade-offs. That part of the principle is not rocket science.

The message from the Schrödinger's Cat story, however, is something that most people fail to take into account. Not only is it important for you to analyze all sides of the issue and make a decision on the trade-offs, you also have to consider the changes introduced by your solutions. A sound risk management strategy considers how those changes affect the system and whether those changes were implemented as part of a risk mitigation strategy or for any other reason.

Annualized Loss Expectancy

The common way to quantify risk, and the method you learn in many security certification courses, is the Annualized Loss Expectancy (ALE) equation, shown in Figure 2. The standard ALE is straightforward. You determine the probability of an incident and the cost of each occurrence and then multiply the two values. This gives you the amount you should expect to pay for security incidents per year. If the cost of the risk is high enough, you implement the mitigation.

Figure 2 ALE is the probability of a loss, multiplied by the cost of the loss per incident: ALE = P(incident) × Cost(incident)
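
As a quick sketch, the standard calculation is a one-liner; the probability and cost figures below are invented purely for illustration.

```python
def ale(annual_probability, cost_per_incident):
    """Standard Annualized Loss Expectancy: the expected number of
    incidents per year times the cost of each incident."""
    return annual_probability * cost_per_incident

# Hypothetical figures: a 10% chance per year of a successful
# password-guessing attack, at $50,000 per breach.
print(ale(0.10, 50_000))  # 5000.0 - expected annual loss in dollars
```

If a mitigation costs less than the ALE it eliminates, the standard model says to implement it. As you will see, that comparison is incomplete.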

The problem with this is that the standard ALE fails to do justice to the cost of the mitigation. Not only does the mitigation have a cost in and of itself, but many mitigations also have side effects, which have a cost associated with them as well.

Consider one of the most standard security tweaks: account lockout. Many organizations deploy account lockout, ostensibly to prevent attackers from guessing passwords. You can mathematically calculate the probability that an attacker will guess any single password. If you are creative, you can also figure an average cost of the breach resulting from an attacker successfully guessing a password. Based on those two numbers, many organizations have decided that the ALE is unacceptable and therefore implemented account lockout.

However, this particular mitigation has a cost that you may not be taking into account. There is the cost of implementing the mitigation itself, of course, though this is fairly minimal. To be accurate, the calculation should also include the costs of side effects resulting from the mitigation.

First, there is a cost associated with the help desk unlocking accounts for users. If an account is locked out only for a small period of time, say 15 minutes, there is still a cost of lost productivity in that time period. This factor is counted as the cost of each incident, multiplied by the likelihood of that incident occurring in the given time period. In this case, you would multiply the cost by the expected number of lockout events—a number that your logs may already be able to provide.

You also have to take into consideration the user aggravation factor. Anecdotal evidence indicates that users will use less-complicated passwords if they are subject to account lockout because those are less likely to be mistyped. Therefore, the probability of the incident we wished to avoid does not entirely go to zero after implementing the mitigation.

Finally, there may be vulnerabilities introduced by the mitigation that were not present in the system prior to the introduction of the mitigation. In the case of account lockout, an attacker can use that feature to disable all accounts in the network simply by repeatedly guessing their passwords incorrectly. There is a probability of that occurring as well, along with an associated cost of each incident.

Taking all of these factors into account, it is clear that you must modify your loss expectancy equation. First, you must modify the probability of the incident. The incident should now be less likely to occur than it was before, although the probability, as you saw above, may not have gone entirely to zero. The cost of each incident may also have changed. To that product you need to add the cost of the mitigation itself. That cost is composed of the implementation cost plus the sum of the annualized costs of all the side effects. Each of those annualized costs is the product of the probability of the side effect occurring and the cost of each incident of that side effect. Simplifying the presentation a bit, you now have the more accurate risk analysis equation shown in Figure 3.

The improved equation in Figure 3 offers a much more accurate way to analyze the risk of a particular issue. Many organizations have already analyzed their risks in this way. But too many still have a simplified view of risk that does not adequately point out the need to analyze the impact of mitigations. By using the modified ALE equation, you are placing that need front and center.

Figure 3 Considering the additional costs of mitigation: ALE = P′(incident) × Cost′(incident) + Cost(mitigation) + Σ P(side effect) × Cost(side effect)
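
Expressed as code, the modified equation might look like the following sketch. Every number is hypothetical, chosen only to show how the residual risk, the mitigation cost, and the side-effect terms from the account lockout example enter the calculation.

```python
def modified_ale(p_incident, cost_incident, cost_mitigation, side_effects):
    """Modified ALE per Figure 3: the residual risk of the original
    incident, plus the mitigation's own cost, plus the annualized cost
    of each side effect (annual rate or probability times cost)."""
    residual = p_incident * cost_incident
    side_effect_cost = sum(rate * cost for rate, cost in side_effects)
    return residual + cost_mitigation + side_effect_cost

# Account lockout example, with invented numbers:
risk = modified_ale(
    p_incident=0.02,        # reduced but not zero: simpler passwords persist
    cost_incident=50_000,   # cost per successful password-guessing breach
    cost_mitigation=2_000,  # deploying and maintaining the lockout policy
    side_effects=[
        (600, 25),          # 600 help desk unlock calls a year at $25 each
        (0.05, 100_000),    # 5% annual chance of a mass-lockout DoS attack
    ],
)
print(risk)  # 23000.0 - compare against the unmitigated ALE before deciding
```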

What It All Means

If you only do one thing after reading this article, it should be to question the conventional wisdom behind security strategies. Far too much of what we do in the field of information security is based on a flat stereotypical view of the world. And many of the assumptions are now outdated.

Attacks have changed. The attackers are now professionals, in it for the money, for national supremacy, and for ideology. You cannot afford to waste time and money on security tweaks that don't improve security. That also means that you must take a much more sophisticated approach to risk management.

First, it is important that you realize that everything you do has a trade-off. You cannot know everything with perfect certainty.

Second, you must understand that you operate in an interdependent system. By introducing security changes into that system, you modify the system itself, which means you must then reanalyze the system.

Preferably, this should be done prior to actually implementing those changes because far too often the changes have such a tremendous impact on the system that they do more harm than good. By using better analysis tools that remind you to analyze those changes, you will be much less likely to forget about them.

Finally, do not ever forget the human factor. Everything you do in information security is to enable the business, and the users within the organization, to operate as safely as possible.

In a recent presentation, I pointed out that not a single user today bought his computer so he could run an antivirus program. Security is your main objective, but it's not the company's main objective. Organizations merely tolerate security, and only because, at the moment, it is in their best interests. Never forget that the information security group is there to serve the business, not the other way around. If you ignore that fact, your users will do anything to get their jobs done, even if it means circumventing security controls that they do not understand or agree with.

Jesper M. Johansson is a Software Architect working on security software and a contributing editor to TechNet Magazine. He holds a PhD in Management Information Systems, has more than 20 years of experience in security, and is a Microsoft Most Valuable Professional (MVP) in Enterprise Security. His latest book is the Windows Server 2008 Security Resource Kit.

© 2008 Microsoft Corporation and CMP Media, LLC. All rights reserved; reproduction in part or in whole without permission is prohibited.