Why It Pays to Submit to Hackers

Every big online security breach seems to end in a big lecture. Use strong passwords, users are told. Make fresh logins for every website. Back up your data. Encrypt all your stuff.

That familiar chorus began again after our own Mat Honan suffered a hack attack in which three different internet accounts were seized and three computing devices wiped of their data. “Turn On Gmail’s 2-Step Verification Now,” wrote James Fallows on The Atlantic’s website, adding to similar security lectures from The New York Times, Lifehacker, TechCrunch, a Google engineer, etc.

If that advice sounds familiar, it should. Just this past June, after 6 million LinkedIn passwords were exposed to hackers, Fallows wrote about “the one step you must take today” — improving your login security — adding to similar security lectures from The New York Times, Lifehacker, TechCrunch, a Google engineer, etc.

That lecture, in turn, followed similar admonitions after Gawker Media let 1.3 million usernames and passwords fall into the hands of hackers in December 2010. And after Sony’s PlayStation Network exposed 77 million accounts in April 2011. And after 60 million users of the permission-marketing service Epsilon were hit with a phishing attack. Etc.
The lectures clearly aren’t working, and that, behavioral economists say, is because we already know how we should protect ourselves online; we just choose not to do so. Hardening your internet identity, whether through new passwords, a backup regimen, or other means, costs time and energy in the present, and pays dividends only in some far-off hypothetical future. Humans are already hard-wired to prefer small near-term pleasures over big long-term benefits; throw in the possibility you might not ever actually need a strong password or a computer backup, and it’s no wonder people are so lax about security.
“Most people are never hacked in their lives, and computers have become so good and stable that you [often] don’t even need to have backups,” says Stanford business professor Baba Shiv, who specializes in neuroeconomics, studying how the brain works as a way of understanding economic decisions.
“Imagine you live in a suburb, and there’s a stop sign on your way home. One day you say, ‘Wait a second, every time I arrive here, I have to slow down for no reason.’ And then one day you pass the stop sign without stopping. And then nothing happens. What happens when you do the same thing the next day? Nothing happens. And the third day? Nothing happens. Tenth day, 50th day, 100th day — nothing happens.
“That’s what lulls people into complacency — this regularity of nothing happening. Your computer getting hacked, or your computer completely crashing, is what they call a ‘black swan incident.’ They only happen once in a while, but when they do everything comes crashing down.”
It’s not only individuals who are susceptible to this kind of negative feedback loop around low-probability events. Dan Ariely, the Duke behavioral economist we interviewed in June, says organizations are lulled into complacency as well. Apple and Amazon, for example, appear to have routinely allowed customer-support callers to authenticate with minimal information, in some cases without correctly answering the security questions on the account. Ariely likens this to the driver who learns to run stop signs.
“I suspect what happens with companies is they don’t authenticate and nothing bad happens,” Ariely says. “So they say, ‘oh, it’s no problem.’ And they don’t authenticate again and again and again until something really bad happens.” Which, of course, is what hackers depend upon when they launch the sort of social engineering attacks that wiped out Honan’s digital identity.
People and organizations see “black swan” incidents happening to others, of course. They read about cases of identity theft, email account takeovers, and so on. But they aren’t moved to change their behavior; the marginal cost of more security in the present is calculated to be too large to offset the future risk of an attack. When asked to create a login on a new website, for example, people will default to making an easy, insecure password if they are allowed to do so.
What’s especially interesting, Shiv says, is that many of those same people make plans to create more complex passwords later, when they have more time. “Later,” of course, never comes. But they believe their future self will weigh the costs and benefits of online security more prudently than their present self.
This sort of thinking is a textbook case of “hyperbolic discounting,” Ariely says. The canonical example: offered a dollar today or three dollars tomorrow, many people take the dollar today. But offer those same people a dollar in a year or three dollars in a year and a day, and suddenly many of the dollar-now crowd make the more rational choice of waiting the extra day. The idealized future self discounts the future more reasonably than the all-too-real present self.
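The article doesn’t spell out the math, but a common formula for hyperbolic discounting is V = A / (1 + kD), where A is the amount, D the delay, and k a steepness parameter. The sketch below (the value k = 2.5 is purely illustrative, chosen to make the preference flip visible) reproduces the dollar-today reversal Ariely describes:

```python
def hyperbolic_value(amount, delay_days, k=2.5):
    """Perceived present value under hyperbolic discounting: V = A / (1 + k*D).
    k is an illustrative steepness parameter, not an empirical estimate."""
    return amount / (1 + k * delay_days)

# Offer A: $1 now vs. $3 tomorrow -- the near-term dollar wins.
print(hyperbolic_value(1, 0) > hyperbolic_value(3, 1))      # True

# Offer B: the same choice pushed a year out -- now waiting a day wins.
print(hyperbolic_value(1, 365) < hyperbolic_value(3, 366))  # True
```

The flip happens because the hyperbolic curve falls off steeply near zero delay and flattens out far in the future, so a one-day difference matters enormously today and almost not at all a year from now.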
This human tendency to make biased judgments about the future is exacerbated in the case of passwords, Ariely says. That’s because when we start using a new service and create a password, we have no idea how long we’re going to use it because we don’t fully understand the benefits of the service. “If there’s any experience where we learn about benefits over time, and on top of that where benefits increase as we use it more and more, those things will be particularly sensitive to hyperbolic discounting,” Ariely says.
So maybe what people need to secure themselves better isn’t information, which they seem to already have. Maybe instead they need ways to bridge the gap between what they think they should do and the choices they actually make when the future arrives.
“Anyone who believes simply educating people is going to solve the problem has been proven wrong, because everyone is educated about this,” says University of California, Berkeley professor Steven Webber, who recently taught a class on “Applied Behavioral Economics for Information Systems.”
Instead, he offered several suggestions on how tech companies and others can systematically use people’s distorted economic decision making to enhance security rather than undermine it. Shiv and Ariely did likewise. Here are some of their ideas:
Binding mechanisms: If people imagine themselves making better decisions in the future, they will often be willing to bind themselves to those decisions in the present. Just as Odysseus, as described by the ancient Greek poet Homer, had his sailors bind him to the mast before he came within earshot of the Sirens, users could check a box on a website’s account-creation form that would force them to upgrade to stronger passwords at some point — two weeks, say — in the future. “I could invest 10 minutes today in changing all my passwords so Steve [Webber] one year from now wouldn’t have to spend days dealing with identity theft,” Webber says. “But Steve can’t bind himself to do that on any given day.”
Secure defaults: The same laziness and complacency that keeps people from upgrading their security can also keep them from downgrading it. “We know that default options are very powerful,” Shiv says. “My sister lives in Wales. I go to visit Wales, and I ask my brother-in-law, ‘Can you give me the Wi-Fi password?’ You won’t believe this — he gives me a 16-digit alphanumeric password off the top of his head. This guy is a medical doctor [not in IT]. I asked him, ‘How on Earth did you create this password, and how on Earth do you even remember the password?’ He said, ‘When these guys set up the internet for me at home, they said this is the 16-digit string you need to use to create a password.’” Instead of setting up a new, easier password, the doctor had just memorized the original.
User friendliness: When passwords were invented as a form of computer authentication, sophisticated graphics of the sort available today did not exist. Decades later, authentication remains largely text based. Why not create a new system? “We need to do something that is easy for people but not easy for computers,” Ariely says. “One thing people are really good at for example is differentiating faces. We have a fantastic ability to see faces. Imagine what would happen if passwords were not about typing some code but looking at faces? … If I were to design passwords from the start, I would take what is available from computer technology now, and I would say, ‘What can people do well and intuitively?’”
People are bad at recall, reproducing something exactly from memory, compared to how good they are at recognition: seeing something and knowing you’ve seen it before. A new authentication system might show people an array of faces and ask which ones they’ve seen before, or seen the most, though Ariely is reluctant to get pinned down on specifics.
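Since Ariely declines to specify a design, the following is one hypothetical shape such a recognition-based scheme could take; every name and parameter here (the grid sizes, the face identifiers) is invented for illustration. A user enrolls with a small set of familiar faces, and each login shows a shuffled grid mixing those with decoys:

```python
import secrets

def make_challenge(enrolled, decoys, n_enrolled=3, n_decoys=6):
    """Build a recognition challenge: a shuffled grid mixing faces the
    user has enrolled with faces they have never seen."""
    rng = secrets.SystemRandom()
    shown = rng.sample(sorted(enrolled), n_enrolled) + rng.sample(sorted(decoys), n_decoys)
    rng.shuffle(shown)
    return shown

def verify(selection, shown, enrolled):
    """Pass only if the user picked exactly the enrolled faces on screen."""
    return set(selection) == set(shown) & set(enrolled)

enrolled = {"face_a", "face_b", "face_c", "face_d"}
decoys = {f"decoy_{i}" for i in range(20)}
grid = make_challenge(enrolled, decoys)
correct = [face for face in grid if face in enrolled]
print(verify(correct, grid, enrolled))  # True
```

The user never types anything; they only recognize, which is exactly the cognitive task people are good at. (Commercial systems along these lines have existed, though this sketch isn’t modeled on any particular one.)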
Incentives: Just because people make bad security tradeoffs doesn’t mean they are impervious to costs and rewards. Webber suggests offering medium-term incentives for locking down your logins. Your ISP, for example, might give you a discount if you enable strong security on your Wi-Fi router. Amazon.com might likewise give you some price breaks if you log in with a lengthy password, or if you can prove you bought a password manager. On the flip side, Webber says we should consider shifting fraud liability from credit card companies to consumers in some cases, especially when the consumer failed to take basic steps to prevent fraud.
In the end, changing the infrastructure of online security will probably do a whole lot more good than lecturing people about how they should change their behavior. After all, if a longtime tech writer, an enterprise social network, and an online media company all make bad computer security choices, why should we think those decisions have anything to do with technical ignorance?
And don’t think you’ll behave any better for having read this article: Knowing about the twisted economics of human behavior doesn’t elevate you above it. “I still use a password, in many cases, from 1992 that I created when I was an undergrad at Duke,” says Shiv, the neuroeconomist. “I’m guilty of this as well.”