Security Watch: Passwords and Credit Cards, Part 2

Jesper M. Johansson

Contents

Pseudo Multifactor Logon Process
The Problem with Browser Add-Ins
The Authenticator Must Never Change
Downgrading to Less Secure Passwords
Bypassing the Pseudo Multifactor Logon
Problems with Compromised Passwords
Some Benefits
Misleading Users with Eye Candy
To Open or Not to Open
Providing Secure Communication
It's About Privacy

Welcome to the second installment of my three-part series on how the IT industry confuses consumers rather than helps them when it comes to security issues. In the July 2008 issue of TechNet Magazine, I discussed unactionable and incorrect security advice as well as confusing logon workflows. (If you haven't read Part 1 yet, you can find it at technet.microsoft.com/magazine/cc626076.) I made an argument for how extremely common yet poor security advice and bad logon workflows confuse consumers and undermine their efforts at protecting their personal information.

In this installment, I continue that discussion with more real examples taken from the world of consumer security. The final installment of this series, which will include a "call to arms" for the security professionals of the world, will appear in the next issue of TechNet Magazine.

Pseudo Multifactor Logon Process

In October 2005, the Federal Financial Institutions Examination Council (FFIEC) published its "Authentication in an Internet Banking Environment" guidelines (see www.ffiec.gov/press/pr101205.htm). The timeframe for implementation was a mere 14 months, and U.S. financial institutions were soon scrambling to figure out how to meet these new guidelines.

Most institutions failed to meet this timeframe. Many of those that eventually did meet the guidelines did so with measures that serve no purpose other than meeting the guidelines. In other words, the institutions took steps that do not in any way make customers more secure; they are merely exercises in "security theater." Some of the most interesting examples of unhelpful solutions are the ones that try to create multifactor authentication without actually being multifactor.

For instance, consider the technology that measures typing cadence when a user types the password. Used on a Web site, this solution presents a logon dialog that looks exactly the same as the old logon dialog. This dialog, however, is now an Adobe Flash object.

The Flash object records the user's typing cadence, including characteristics such as how long keys are pressed and how much time there is between key presses. This data is submitted to the Web site along with the password, where it is compared to stored values. As long as the typing cadence data falls within certain variances of the stored data, and the password matches, the logon is accepted. The general idea is that this permits the site to use a pseudo-biometric authentication method, without having to install third-party hardware on the client.
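To make the mechanics concrete, here is a minimal sketch, in Python, of how a server-side check along these lines might work. The dwell/gap representation, the tolerance value, and the scoring rule are my own assumptions for illustration, not any vendor's actual algorithm.

# Illustrative sketch only; the field names, tolerance, and scoring rule
# are assumptions, not a description of any shipping product.

from statistics import mean

def cadence_matches(stored, submitted, tolerance=0.25):
    """Return True if the submitted cadence stays within `tolerance`
    (average fractional deviation) of the stored profile.

    `stored` and `submitted` are lists of (dwell, gap) pairs:
    dwell = how long each key was held down (seconds),
    gap   = time between releasing one key and pressing the next.
    """
    if len(stored) != len(submitted):
        return False  # keystroke counts must match for a like-for-like comparison
    deviations = []
    for (s_dwell, s_gap), (u_dwell, u_gap) in zip(stored, submitted):
        deviations.append(abs(u_dwell - s_dwell) / max(s_dwell, 1e-3))
        deviations.append(abs(u_gap - s_gap) / max(s_gap, 1e-3))
    return mean(deviations) <= tolerance

def logon(password_ok, stored_profile, submitted_profile):
    # Both the password *and* the cadence must check out.
    return password_ok and cadence_matches(stored_profile, submitted_profile)

Note that everything this check consumes is gathered during a single password entry, which is why the scheme measures extra parameters of one factor rather than adding a second one.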

I will refer to this as pseudo multifactor authentication. It is not true multifactor authentication, which measures two or more of the following canonical factors:

  • Something the user has (such as a smart card)
  • Something the user knows (such as a password)
  • Something the user is (such as a fingerprint)

Instead of actually including multiple factors in the authentication process, pseudo multifactor authentication reads multiple parameters from just a single factor—the password. It then uses those additional parameters to make inferences about something the user is.

The technology used for pseudo multifactor authentication suffers from many flaws. Taken together, these flaws render the technology largely ineffective at solving the security problems it purports to solve.

The Problem with Browser Add-Ins

Pseudo multifactor authentication systems rely on browser add-ins to provide the sophisticated client-side processing that they need. All software, of course, has bugs, and some of those bugs cause vulnerabilities. Those vulnerabilities get to be a real problem when the software is difficult to update and the user is unsure whether an update has even been installed.

Browser add-ins are guilty on both counts: they are difficult to update, and they give the user little information about whether the installed version is up to date. As a result, the user must expose a piece of software to the Internet that might not otherwise be needed.

In some cases, keeping the add-in current will require regular visits to the add-in's Web site for updates. It goes without saying that this is unlikely to happen for most users. Any system that requires an end user to expose additional, otherwise unneeded software to the Internet simply to provide security functionality should be viewed with considerable skepticism.

The Authenticator Must Never Change

The "something you are" factor has a trait, perhaps even a shortcoming, that doesn't affect the other two factors. While you can change passwords (something you know) and manufacture new security tokens (something you have), the number of biometric authenticators you can use is fairly limited. With fingerprints, for instance, you get ten, typically for life. And should you lose one of those digits in an accident, you typically cannot replace it.

Now consider how this applies to the pseudo multifactor logon example. The typing-cadence metrics, which stand in for the "something you are" factor, cannot be easily replaced or modified. Certainly, when the password changes, the way the user types it will change, but the general way he presses the keys on the keyboard will not change. This is exactly what the technique relies upon. Should these metrics be compromised, it may be possible for an attacker to synthesize them. At the very least, an attacker can capture these metrics and replay them.

Downgrading to Less Secure Passwords

It is a good idea to use long passwords and to change your passwords often. And, as I discussed in Part 1 of this series, it is also a good practice (contrary to some advice) to record your passwords—in a secure manner, of course. However, these practices are not conducive to typing your password by hand. A long password is more difficult to type, and a recorded password is far more useful when you can copy and paste it. One of the best ways to keep track of a hundred or more passwords is to use a software utility such as Password Safe (at sourceforge.net/projects/passwordsafe). With a tool like this, you can generate fully random passwords, store them in encrypted form, and paste them right into logon dialogs. In fact, with that type of tool, you don't even need to know what your password is.
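As a rough sketch of what such a tool does, the following Python snippet generates a fully random password and stores it in encrypted form. This is not Password Safe's actual code; the use of the third-party cryptography package and the throwaway key (a real tool derives its key from a master passphrase) are my own assumptions.

# Minimal sketch of the idea behind a password manager; Password Safe itself
# works differently. Assumes the third-party "cryptography" package.
import secrets
import string

from cryptography.fernet import Fernet

ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 characters

def generate_password(length=15):
    """Return a fully random password; the user never needs to memorize it."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# A real tool derives this key from a master passphrase; here it is simply random.
key = Fernet.generate_key()
vault = Fernet(key)

password = generate_password()
stored = vault.encrypt(password.encode())          # what would land on disk
print(vault.decrypt(stored).decode() == password)  # retrieved for copy-and-paste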

But here's the rub: you cannot measure typing cadence unless the user types the password. And if you have to type the password into an object that measures typing cadence, this technique for managing passwords begins to break down. Typing a 15-character fully random password correctly is no simple task. And so users will generally opt to simplify their passwords when faced with this type of system. Yet a 15-character fully random password on its own is far more secure than a shorter password coupled with a pseudo multifactor authentication system. Drawn from the 94 printable ASCII characters, such a password represents roughly 98 bits of entropy (15 × log2 94), compared with roughly 52 bits for an 8-character password built from the same character set.

In effect, pseudo multifactor authentication, implemented exclusively, forces users to use less secure passwords. Unfortunately, as you will see shortly, accommodating users who follow proper password management techniques obviates virtually any value offered by pseudo multifactor authentication systems.

Bypassing the Pseudo Multifactor Logon

Even if a site does implement pseudo multifactor authentication, it must still support a way to bypass the pseudo multifactor system. First, few sites can require all of their users to install a "fourth-party" technology just to access the site. Some portion of users (certain TechNet Magazine authors, for instance) will always refuse to install this sort of software.

Second, typing cadence can change for many reasons, and sites must accommodate that. For example, say a user sprains her wrist and her typing cadence is temporarily altered. How will she then access the site if the site analyzes her cadence and concludes that a different person is trying to log on with her credentials? If the injury were permanent, the user could reset the stored cadence value. However, in the case of a temporary injury, the user probably will not want to reset the stored value, assuming her typing cadence will soon return to approximately what it originally was.

Finally, not all users are even capable of using pseudo multifactor authentication systems. For example, a disabled person who interacts with the computer through a speech recognition interface may not be able to fill in the dialog, depending on whether it prevents programmatic entry or not. In that case, an alternate system that accommodates disabled users will likely be required by law.

The simplest and most straightforward way to accommodate all these scenarios is to also support standard password-based authentication.

Problems with Compromised Passwords

Pseudo multifactor authentication systems are purported to address various problems associated with password-based authentication. However, these systems fail to fully address every significant way passwords can be compromised. There are five primary ways by which systems that rely on password-based authentication can be compromised, where "compromised" means an attacker has obtained and used another user's password:

  • Password guessing
  • Keystroke loggers
  • Phishing attacks
  • Asking the user
  • Breaking into any system that stores the password or a hash of it

Password guessing is not a particularly common attack method for criminals any longer, and it has become greatly overshadowed by keystroke loggers and phishing. Password guessing is also only partially mitigated by pseudo multifactor authentication. A conventional password guessing approach is unlikely to work against a pseudo multifactor authentication system since the attacker must guess the password as well as the typing cadence. While it may be theoretically possible to synthesize the typing metrics, it is rarely necessary to do so because the actual data can usually be captured by a phishing attack or a keystroke logger. In addition, if the system also provides a standard password-only logon interface, the attacker can just use that system for password guessing.

In addition, password guessing attacks are best defeated by strong passwords. If we can teach users to use stronger passwords, possibly with the aid of tools, pseudo multifactor authentication is not needed. (In contrast, the simpler passwords that are typically used with pseudo multifactor authentication systems actually make password guessing a more viable approach.) Thus, any mitigation of password guessing only adds value if the implementation does not also support password-only logons. And, as I pointed out, this is rarely if ever possible. In other words, if users choose weaker passwords in the presence of pseudo multifactor authentication systems, password guessing may become a viable attack method again; pseudo multifactor authentication would then reduce security rather than increase it. More research is needed into this question.

Pseudo multifactor authentication also fails to address the problem of keystroke loggers. While I am not aware of any keystroke loggers in common use today that capture typing cadence, there is absolutely no reason why one could not be designed to do this. A keystroke logger is a piece of hardware that sits between the keyboard and the computer or a piece of software that captures keystrokes. In either case, the logger has full access to all the data used by the pseudo multifactor authentication solution.

In fact, the keystroke logger can access even more data since an authentication solution runs in user mode while a keystroke logger sits far beneath that. Without a trusted path between the keyboard and the Web-based object used to capture the password, it is not possible to prevent this type of compromise. If pseudo multifactor authentication solutions become common enough, you can be sure someone will create such a keystroke logger.

Similarly, phishing attacks are not defeated by pseudo multifactor authentication. Instead of using just a fake logon screen, the attacker can use a Flash object to capture the password as well as the typing cadence.

Granted, pseudo multifactor authentication can help in some scenarios where the attacker's technique will only provide the user's password. For example, the easiest way to get a password from a user is to ask the user. Shockingly, this has proven to be an effective technique, whether asking people in person, over the phone, or through phishing messages.

Of course, asking a user for his typing cadence will be far less effective. Likewise, if a corporate password database is compromised, the attacker will likely only gain access to the passwords. But again, these attacks will only be mitigated if the system provides no standard password-only authentication.

I should also point out that if the password database itself has in fact been compromised, the attacker has probably compromised the system in a much deeper way than would be possible with a single, or even many, regular user account passwords. Thus, trying to mitigate what an attacker in possession of a password database can do is not a particularly meaningful goal for this technology.

Some Benefits

In all fairness, there are a couple of problems that pseudo multifactor authentication can help address (assuming that the system provides no standard password-only logon option). For example, it can be used to prevent password sharing. However, this can be a drawback for systems where it is legitimate for multiple users to share accounts, such as joint bank accounts.

In addition, a properly constructed interactive login dialog (not a Web site login) could force a user to go through additional authentication steps if her typing cadence failed to match. This can provide additional security against a compromised account in a highly sensitive environment.
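A minimal sketch of such a step-up policy might look like the following. The threshold and the flow are hypothetical, not taken from any shipping product.

# Illustrative policy only; the 0.7 threshold and the flow are hypothetical.

def interactive_logon(password_ok, cadence_score, second_factor_ok=None):
    """Step-up flow: a failed cadence match triggers an additional
    authentication step rather than an outright rejection."""
    if not password_ok:
        return "denied"
    if cadence_score >= 0.7:        # cadence resembles the enrolled profile
        return "granted"
    if second_factor_ok is None:    # e.g., prompt for a one-time code
        return "additional step required"
    return "granted" if second_factor_ok else "denied"

print(interactive_logon(password_ok=True, cadence_score=0.4))  # additional step required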

Misleading Users with Eye Candy

One of the best ways to confuse users is to give inaccurate indications of security. The most common is probably the padlock image displayed on a Web page, as shown in Figure 1. This page even goes as far as to display the word "Secure" by the padlock.

Figure 1 An example of padlock icon abuse, a worrisome trend

As you surely know, simply putting a padlock image and the word "Secure" on a page does not make it secure. Yet, the practice is disturbingly common, even among the most reputable, and most targeted, Web sites. The result is that many users are trained to look for these visual safety cues in the body of the Web page instead of looking where these cues actually mean something: in the address bar. (The W3C Web Security Context Wiki has an entry on this problem, available at w3.org/2006/WSC/wiki/PadlockIconMisuse.)

It's unfortunate that there are still so many examples of this misuse. Research has shown that users are unable to identify malicious Web sites even when the certificates are very obvious (see www.usablesecurity.org/papers/jackson.pdf). It comes down to the ability to easily tell fake from real, even when you don't have the real to compare with. This takes skill; misleading security eye candy on Web pages hinders development of that skill as users are drawn to the wrong information.

A particularly disturbing variant of this problem is shown in Figure 2. In this case, the page that displays the information is actually not secure. If you look at the address bar, you see "http" as the protocol, not "https". This site uses a very common optimization technique—rather than encrypting the page that contains the logon form, only the form submission is encrypted. The login is secure, just as the page states, as long as you equate "secure" with "encrypted." However—and this is critically important—the user has no way to verify where the credentials are being sent before they are sent! The site does not present a certificate authenticating itself to the user before the form is submitted. It's a game of trust, like falling backwards and assuming the person behind you will catch you. By the time the form is submitted, the damage may already be done.

Figure 2 Security eye candy on an insecure page

Secure Sockets Layer (SSL), the protocol that provides the security in HTTPS, serves two important purposes. First, it authenticates the server to the user. Second, it provides an easy mechanism to negotiate a session encryption key that can be used between client and server.
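To see what that first purpose provides, here is a small sketch, using only the Python standard library, that performs the SSL/TLS handshake and prints the certificate the server presents; this is essentially what a browser validates before showing the padlock in the address bar. The host name is just an example.

# Sketch using only the Python standard library; "example.com" is a stand-in
# for whatever site you want to inspect.
import socket
import ssl

def show_server_certificate(host, port=443):
    """Connect over SSL/TLS, validate the certificate chain and host name,
    and print the subject and issuer of the certificate the server presented."""
    context = ssl.create_default_context()  # verifies the chain and host name
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    print("subject:", cert["subject"])
    print("issuer: ", cert["issuer"])

show_server_certificate("example.com")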

When only the actual form submission is encrypted, the first and most important objective is not achieved. Sites employing this optimization are just using SSL as a way to negotiate keys. Ironically, they could do this simply by employing a standard key negotiation protocol, thereby avoiding the cost and overhead of SSL.

The site shown in Figure 2 is not a rarity. Many sites provide SSL protection for just the form submission, but not the form itself. This particular site, however, demonstrates an even more disturbing trait. If you type https://www.<site>.com (note the secure https indicator) into the browser address bar, the site will redirect you to the non-SSL version of the site! Even if you try to inspect a certificate before sending your credentials to the site, the site refuses to show you the certificate.
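You can test a site for this behavior yourself. The following sketch, which assumes the third-party requests package and uses a stand-in URL, asks for the HTTPS version of a page without following redirects and reports whether the site bounces the request back to plain HTTP.

# Sketch only; assumes the third-party "requests" package. "example.com" is a
# stand-in for the site you want to check.
import requests

def downgrades_to_http(url):
    """Return True if requesting the HTTPS URL redirects to a plain-HTTP URL."""
    response = requests.get(url, allow_redirects=False, timeout=10)
    location = response.headers.get("Location", "")
    return (response.status_code in (301, 302, 303, 307, 308)
            and location.startswith("http://"))

print(downgrades_to_http("https://example.com/"))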

Not all sites are that bad, but there are many that are. And two of the largest credit card issuers in the United States are among the offenders. In fact, of the three major credit card companies I use, only American Express provides a certificate on the logon form. American Express even redirects HTTP requests to HTTPS. Well done!

One final thought regarding the meaningless security eye candy and the lack of certificates on the logon form: you may be wondering why a site would do this. The reason is simple economics. Presenting a certificate requires encrypting the page, and encrypting the page creates processing overhead. Processing overhead means that more computers are required to service the same load. And more computers cost more money. Unfortunately, when it comes to a choice between protecting customer privacy and increasing profits, many organizations will opt for the increased profits.

To Open or Not to Open

Recently I received an astonishing e-mail message from my health insurance company. Anyone who has used a computer in the past 10 years knows never to open unsolicited e-mail attachments. So imagine my surprise when I received the message shown in Figure 3.

Figure 3 E-mail message containing a “secure attachment”

Apparently I had asked a question on the health insurance company's Web site (and I had forgotten about it by the time the suspect message arrived). This is how the company responded. At first I thought it was a clever phishing scheme. When I realized this was a legitimate message, the hairs on the back of my neck started to stand up.

The very first instruction is to double-click the attachment to begin decrypting the message. The security community and IT administrators at large have spent the past 10 years trying to teach people NOT to double-click attachments. And then a company comes along (my health care provider, by the way, is not the only organization using this approach) and says that I must click on the attachment for my security. How should the user act in this situation? What behavior does he learn or unlearn?

Next, I used the preview feature in Microsoft® Office Outlook® 2007 to view it. As you can see in Figure 4, Outlook thought this message might be an attack and warned me not to open it!

Figure 4 Outlook 2007 considers the secure document hostile

It is both ironic and sad that a health insurance company would trip the most basic security checks in the most popular e-mail client in the world. Considering that this company did not even bother to test its new security solution with Outlook, I have to wonder what else it is doing in the interest of protecting my private information. Or, put more bluntly, what other solutions is the company implementing solely to avoid being accused of not adequately protecting customer privacy? This is similar to the security theater performed by the financial institutions. In this case, I think accusation avoidance is the main objective of the solution—not actually protecting customers.

I wanted to see what the attachment really was. It turned out to be an ActiveX® control object. To see the attachment, I had to open it in Internet Explorer® and install the object. I was presented with the reassuring screen shown in Figure 5. As you can see, the designers went to great lengths to make the message look like a regular postal envelope; it even has a stamp that claims the envelope is trusted.

Figure 5 The Secure doc shows an envelope stamped “Trusted”

This type of technology is worrisome for many reasons. First, it reinforces a very bad behavior on the part of the user: opening unsolicited attachments. Second, the actual message gives the very bad guidance of configuring the system to always open attachments without prompting. Third, faced with conflicting messages from the computer, where the e-mail says it is trusted and Outlook says it is not, the message looks very suspicious. Finally, the attachment itself contains meaningless security eye candy to convince the user that it is trusted. If the user learns to trust these kinds of messages, it is only a small step to trusting malicious messages with a similar look.

Providing Secure Communication

Admittedly, the issue that this technology purports to address is very important. Communicating with customers in a trusted fashion is difficult. However, this particular solution is over-engineered. It will likely cause customer confusion and potentially lead to the very outcome it was meant to prevent: the customer's system becoming compromised.

Better-engineered Web sites now use a "message center." In this design, when the company needs to communicate with the customer, it sends an e-mail message that says something to the effect of: "You have a message in the message center. Please go to our Web site, log on, and click the Message Center link to view the message." A company gets bonus points if it uses Secure/Multipurpose Internet Mail Extensions (S/MIME) to sign all customer-facing e-mail messages so the user can authenticate the source. There's another bonus if the message does not include any "please click this link" items in the e-mail. The user should type the company's URL by hand to ensure she is going to the site she expects.
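For completeness, here is a hedged sketch of what the signing step could look like, assuming the third-party cryptography package; the certificate and key file names and the message text are placeholders, and a production mail system would wrap the result in a complete MIME message before sending.

# Sketch of S/MIME-signing a customer notification; assumes the third-party
# "cryptography" package. File names and the message body are placeholders.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs7

with open("mailer-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("mailer-key.pem", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

body = (b"You have a new message in the Message Center. "
        b"Please type our Web address into your browser and log on to read it.")

# Produce an S/MIME (PKCS #7) detached signature that the mail client can verify.
signed_message = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(body)
    .add_signer(cert, key, hashes.SHA256())
    .sign(serialization.Encoding.SMIME,
          [pkcs7.PKCS7Options.DetachedSignature, pkcs7.PKCS7Options.Text])
)
print(signed_message.decode())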

With a message center and a signed message, a company can achieve a trusted path to communicate with the customer. Everything the customer sees during the message workflow is authenticated, and the company does not promote poor security practices.

It's About Privacy

So far, I have spent the first two installments of this series describing how security professionals are actually doing a disservice to users. It is our job to maintain security. But many of the decisions being made and solutions being implemented are confusing users, teaching them to make bad decisions, and giving them a false sense of security. We should not be overwhelming users with these conflicting ideas.

As I pointed out previously, to mainstream users, security is simply about protecting passwords and credit card numbers. They want technology to work and be trustworthy. Unfortunately, they have to make decisions, and it is our job to make sure that they are informed decisions.

In the final installment of this series, I will discuss how some of the most important technologies available to consumers are not living up to expectations. I will also present my call to arms. So please check out the September 2008 Security Watch column.

Jesper M. Johansson is a Software Architect working on security software and is a contributing editor to TechNet Magazine. He holds a Ph.D. in Management Information Systems, has more than 20 years of experience in security, and is a Microsoft Most Valuable Professional (MVP) in Enterprise Security. His latest book is the Windows Server 2008 Security Resource Kit.

© 2008 Microsoft Corporation and CMP Media, LLC. All rights reserved; reproduction in part or in whole without permission is prohibited.