Jesper M. Johansson
I was recently contacted by the University of Minnesota in order to be interviewed for its magazine. Apparently, they wanted to run a feature on some successful alumnus and—not being able to find any—they settled for me instead. The interviewer asked what I work on and I went on for a few minutes trying to describe security infrastructure software. She then exclaimed, "That sounds so complicated! To me, security is about passwords and credit cards."
I pondered this reaction for a few minutes and then realized she had a really good point. Security really is about passwords and credit cards. At least, that is what it's about to the end user. Those of us in the business think security is about cryptographic algorithms, whether Kerberos is a better choice than TLS or NTLMv2, the merits of WS*, whether password hashes should be salted, and all the other esoteric topics we love to discuss. Sure, we have a very different and deeper perspective than end users, but we have, by and large, lost sight of one fundamental thing: security, to those whom we ought to be serving, is about passwords and credit cards.
Granted, all that esoterica we so love to argue about and the new technologies we so love to invent are all to protect the end user's data. Still, I think we have lost our way. We, the security subculture of the IT world, exist to serve a particular need of our constituents—the need to keep data safe. That, of course, includes ensuring that IT assets can be used safely. That's really it.
In previous columns, I have made the point that no one goes out and purchases a computer so he can run antivirus software. The user purchases a computer to do online banking, play computer games, write e-mail, do homework, or some other primary function. Likewise, no business has funded an IT security group simply so they could implement NTLMv2. Businesses fund IT security groups so these groups can protect the organization's assets, enabling the business at large to safely use its IT resources and achieve its business objectives.
We do not exist but to serve.
And so I must ask whether we are really doing a good job "serving" these days. Or are we, the security subculture, actually getting in the way more than helping? And are the legislators and regulators helping us to get in the way? I am not convinced that all this new technology we are putting in place really helps the end users. Therefore, I would like to explore a few areas where we, the IT providers of the world, are actually causing more harm than good.
Some days it feels like most of the security advice and many of the security technologies we inflict upon our users are inactionable, incorrect, incomprehensible, or (in many cases) some combination of the three. In this three-part series, I am going to look at some of the ways we confuse users by giving advice and deploying technologies that are guilty of one or more of these three I's.
Inactionable Security Advice
One of the best ways to confuse people is to give inactionable security advice. For bonus points, you can make it incorrect, as well. Figure 1 illustrates a popular piece of time-tested, theoretically sound, and utterly useless advice.
Figure 1 Inactionable security advice (Click the image for a larger view)
See where it says to use a different password for each online account that you have? Thirty years ago this recommendation made sense. The number of people on what eventually became the Internet numbered in the low hundreds, and they were all very smart people who were not picking particularly good passwords. Unfortunately, this advice has persisted and keeps being repeated over and over again, and there has been no apparent effort to reconcile this advice with how computers are used today.
How many online accounts do you have? Personally, I have 115, give or take a few that I don't keep track of. Not only does the advice in Figure 1 suggest that I should have 115 different passwords, but that I should also change these 115 passwords every 30 to 60 days. In other words, I should change 2 to 4 of my passwords every day. (Do the math: that also means I would have between 690 and 1380 passwords in one year.)
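The arithmetic behind those figures can be checked in a few lines, using the account count and change intervals from the text:

```python
# Back-of-the-envelope check of the numbers above: 115 accounts,
# each password changed every 30 to 60 days.
accounts = 115

changes_per_day_fast = accounts / 30   # ~3.8, i.e., roughly 4 per day
changes_per_day_slow = accounts / 60   # ~1.9, i.e., roughly 2 per day

# New passwords per year under each policy (full 30/60-day cycles):
per_year_fast = accounts * (365 // 30)  # 12 cycles -> 1380
per_year_slow = accounts * (365 // 60)  # 6 cycles  -> 690

print(per_year_slow, per_year_fast)  # 690 1380
```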
While the technical personnel at some sites offering this advice may be able to come up with 4 good passwords per day and keep 115 current passwords in short-term memory, you can be certain that 99.99 percent of the general Web-surfing public will be unable to do this.
The advice to use different passwords everywhere is actually correct and sound, from a purely theoretical perspective, as is the advice to change all your passwords every 30 to 60 days. But this advice is also inactionable. With the number of passwords users have these days, they simply cannot follow this advice without an aid of some sort, such as paper or software. Enter the next example.
Inactionable and Incorrect
The piece of advice shown in Figure 2, which comes from the Web site of one of the world's largest banks, is both inactionable and incorrect. The information provided under the heading of "Read our password advice" parrots the "different passwords everywhere" line. It also recommends that you should "never write them down."
Figure 2 Inactionable and incorrect advice (Click the image for a larger view)
So now I have 115 different passwords, and I have to create 4 new passwords each day, and I'm not allowed to write them down. For a while I thought I was just stupid since I couldn't remember all my passwords. Then I discovered that everyone else is exactly the same way. Human beings simply can't remember 115 passwords. We can remember and process about seven chunks of information—that's just how we're programmed. Following most of the password advice you will find on the Internet, that's actually not even enough for one password (see "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information" by George A. Miller, which is available at musanim.com/miller1956).
The unfortunate fact is that when it comes to password advice, our industry, on a very regular basis, misleads users. If the security subculture is going to tell users that they should use different passwords everywhere, then we need to also tell them how—and that is to record and store those passwords somewhere secure. Write them on a piece of paper, use a secured document, or use a specialized tool, such as PasswordSafe (sourceforge.net/projects/passwordsafe). Face it, we all either record our passwords, or we use the same password everywhere. In fact, a recent survey found that 88 percent of users have the same password on every system they need to authenticate to (see msnbc.msn.com/id/24162478).
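To make the "record your passwords" advice concrete, here is a minimal sketch of what a password tool automates for you: generating a strong, unique password per site and keeping the mapping in a store. The site names are hypothetical, and unlike a real tool such as PasswordSafe, this sketch does not encrypt the store.

```python
import secrets
import string

# One strong, random password per site, stored rather than memorized.
# (A real password manager also encrypts this store; this sketch does not.)
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Hypothetical site names, for illustration only:
vault = {site: generate_password() for site in ("bank.example", "mail.example")}
for site, password in vault.items():
    print(site, password)
```

The point of `secrets` (rather than `random`) is that the choices come from a cryptographically strong source, so the generated passwords are not guessable from one another.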
Advice against writing down your password is surely a contributing factor to this trend. What we need to do is teach users how to manage passwords (and other sensitive data) effectively, rather than teaching them not to manage this information at all. Then they will be a lot less likely to compromise their credentials.
The first two figures repeat age-old advice that worked well back in the 1960s, 70s, and 80s, in a time before mainstream users were going online and using Web-based services. Nobody has, as of yet, made loud complaints about this type of advice because it is generally accepted wisdom.
This type of advice confuses users and can make them feel guilty about having to store their passwords somewhere. By giving this type of advice, rather than helping users do what they have to do, security professionals cause harm. After all, it is not the user's responsibility to figure out how to manage their security. That is up to the security professionals. We need to figure out acceptable ways for our users to manage all their online accounts. But as most guidance just repeats the same, outdated advice, users resort to sticky notes and spreadsheets to track their many passwords.
As an authentication mechanism, passwords have a lot to offer. The primary problem with passwords, though, is that human beings are terrible when it comes to remembering them. Instead of trying to solve a problem of human nature, the IT world keeps inventing all kinds of new mechanisms to replace passwords—or, worse still, to augment them. This just confuses users even more.
Imagine my surprise the last time I accessed a certain financial services site and was presented with a logon screen that contained only a username textbox (see Figure 3).
Figure 3 Where do I type my password?
At first I thought I had landed on some malicious Web site. Then I quickly validated the site—that step was easy since the site presented a certificate—and realized that I was, in fact, in the right place. The problem is that people are used to seeing two text fields, username and password, presented together when logging into a site. This is from years of encountering the same common workflow. So when you are suddenly presented with a site that asks only for your username, things come to a halt. It turns out that in this case the provider had implemented a technology that uses pictures to identify the site to users in an attempt to prevent phishing attacks. When you fill in your username, the site pops up a screen with an identifiable picture, as shown in Figure 4.
Figure 4 Some sites now use images to authenticate the site to a user (Click the image for a larger view)
The theory is that you know which image goes with which site. If the correct image isn't shown, you can identify the site as being fake. This, in and of itself, is a sound concept. Under the assumption that the user knows which image goes with which site, this strategy makes some amount of sense.
Of course, the astute reader will have noticed the green address bar in Figure 4. This means that this site uses SSL and Extended Validation (EV) certificates, which is why the address bar is green. It also means that the entire premise of using the image to identify the site provides no additional value. Instead, the image does little more than add confusion for at least some end users. The site has already identified itself to the user—it has provided a certificate that contains the company name, the Web site address, and the name of the trusted issuer. And the fact that the address bar is green tells me that the company has even gone the extra step of paying three times as much for an EV certificate.
Then, of course, the pictures can also be faked. If the user can submit her own picture, there are ways in which the attacker can probably figure out what image is being used. There is also a good chance the user will use the same picture on every site, so all the attacker will have to do is create a site with content the user will value (I will let you come up with your own examples) and ask the user for a picture to use for site verification. If the user does use the same picture for all sites, this new site will now have access to the image that the user uses on, for example, her banking site.
While some sites let you pick your own image, there are others that use a library of stock photos. The site in Figure 4, for instance, has 318 stock photos to choose from. The previous trick doesn't work for sites that do not permit the user to submit her own photo. However, if the user cannot pick her own image, she is unlikely to remember which picture goes with which site, at least for sites she does not visit frequently. I honestly have no idea what picture I use on the site shown in Figure 4, though I can assure you that it's not the one shown in the screenshot.
The problem with this image-based approach is that an attacker can show just about any one of the 318 pictures, or simply pick a random shot off of Flickr, and many users will just assume the image is the correct one. If the majority of people could remember things like which picture goes with which site, we wouldn't have all the phishing- and security-related problems we have today.
So why use a picture to authenticate the site to the user when the site has already authenticated itself using a certificate? Why not just use that certificate and help users learn how to validate it? Certificates already prove the site's identity to the user.
The process for getting a certificate is certainly much more secure than the process of obtaining a user's site authentication image. If the certificate is an EV certificate and you are using Internet Explorer® 7 or Firefox 3, the browser will even highlight the relevant certificate information in the address bar. Unfortunately, the highlighting only works with the very expensive EV certificates.
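To illustrate how much identity information the certificate already carries, here is a sketch using Python's standard ssl module. It pulls the subject and issuer a site presents during the TLS handshake, the same data the browser verifies before the logon page even renders. The hostname in the usage comment is just an example.

```python
import socket
import ssl

def site_identity(hostname: str, port: int = 443):
    """Return (subject common name, issuer organization) from the certificate
    a site presents. This is the identity the browser already verifies."""
    context = ssl.create_default_context()  # validates the chain and hostname
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    subject = dict(pair[0] for pair in cert["subject"])
    issuer = dict(pair[0] for pair in cert["issuer"])
    return subject.get("commonName", ""), issuer.get("organizationName", "")

# Example (requires network access):
# print(site_identity("www.example.com"))
```

Note that `ssl.create_default_context` refuses the connection outright if the chain or hostname does not check out, so a fake site never even gets this far.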
The Pitfalls of Image-Based Site Authentication
The picture authentication technology has a number of problems. First, it becomes very easy to harvest usernames from a site. In fact, the site shown in Figure 4 will present the dialog shown in Figure 5 if you enter the wrong username. Once you type a correct username for a user who has chosen a secret image, you get to see that person's secret image. Obviously, that knowledge could be very valuable to an attacker trying to harvest information about a user.
Figure 5 Image-authenticated sites allow for easy username harvesting
The type of implementation shown here has virtually no security value. The attacker can simply duplicate the logon workflow on a fake site. The fake site asks for a username and passes it to the real site. You can even use AJAX on the client to update the form in real time for an extra slick look. Furthermore, if the legitimate site has not mitigated cross-site request forgery attacks on the logon form, the AJAX code can even submit the request to the real site directly unless the browser has mitigations for cross-site XML-HTTP requests.
Now, once the result comes back, the attacker can parse the data, pull out the picture, and display it to the end user. In other words, any attacker that can present a fake login site to the user can also display the user's secret picture. The net result is that there is absolutely no added value from using image-based site authentication. The picture is displayed prior to the user authenticating to the site and, therefore, the picture is available to an attacker that has, or can get, the user's username.
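The relay just described reduces to a few lines. The functions below are stand-ins, not real HTTP calls, and all names and data are invented, but they show why the secret image adds nothing: anything the real site reveals before authentication, a fake site can relay verbatim.

```python
# Stand-in for the real site's pre-authentication image lookup
# (all names and data here are hypothetical):
def real_site_image(username: str) -> str:
    stored_images = {"alice": "sailboat.jpg"}
    return stored_images.get(username, "error_page")

# The fake logon page simply forwards the username and echoes the result:
def fake_site_image(username: str) -> str:
    return real_site_image(username)

# The victim sees her "secret" image on the phishing page:
print(fake_site_image("alice"))  # sailboat.jpg
```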
Assuming the user was never taught to look for a certificate before submitting forms—and this is a safe assumption considering that the use of image-based site authentication is itself based on that same assumption—getting a username from the user is trivial. In addition, since many image-based site authentication schemes respond differently to a valid username versus an invalid username, username harvesting is trivial. The attacker can even do it out-of-band, before an attack is even started.
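Username harvesting against such a scheme is equally mechanical. In this sketch, `check_username` is a stand-in for the site's first logon step, which (like the site in Figure 5) responds differently for known and unknown usernames; the usernames are invented.

```python
# Stand-in for a logon step that, like the site in Figure 5, responds
# differently for known and unknown usernames (names are hypothetical):
def check_username(username: str) -> str:
    registered = {"alice", "bob"}
    return "show_secret_image" if username in registered else "unknown_user"

# An attacker simply replays a candidate list and keeps the hits:
candidates = ["alice", "mallory", "bob", "trent"]
valid = [u for u in candidates if check_username(u) == "show_secret_image"]
print(valid)  # ['alice', 'bob']
```

Because the two responses are distinguishable, the attacker learns which usernames are real without ever attempting a password.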
In the end, we have users who believe they are more secure or who are just more confused. We have spent significant amounts of shareholder equity implementing image-based site authentication and we have not made it any more difficult at all for malicious users to convince end users to submit their credentials to fake Web sites.
That's all I have space for in this installment of Security Watch. Please check back next month for part 2 of this series when I will be discussing more examples of misguided security practices and bad authentication implementations.