Cypherpunk asks a) why phishing has gone beyond "don't click that link" and b) why we can't just educate the users.
A lot of what I wrote in The Year of the Snail is apropos of that first question. In economic terms, we would say that phishing is now institutionalised. In more general parlance, and in criminal terms, it would be better described as organised crime. Phishing is now a factory approach, if you like, with lots of different phases and different actors all working together. Which is to say that it is now very serious, it's not a simple net bug like viruses or spam, and that generally means that telling people to avoid it will be inadequate.
We can look at the second question much more scientifically. The notion of teaching people not to click has been tried for so long now that we have plenty of experience of just how effective the approach of 'user education' is. For example, see the research by Ye and Smith and also by Herzberg and Gbara, who tested users on user interface security questions. Bottom line: education is worse than useless.
Users defy every effort to be educated. They use common sense and their own eyes: and they click a link that has been sent to them. If they didn't do that, then we wouldn't have all these darn viruses and all this phishing! But viruses spread, and users get phished, so we know that they don't follow any instructions that we might give them.
So why does this silly notion of user education persist? Why is every security expert out there recommending that 'users be educated' with not the least blush of embarrassment at the inadequacy of their words?
I think it's a case of complexity, feedback and some fairly normal cognitive dissonance. It tends to work like this: a security expert obviously receives his training from some place, which we'll call received wisdom. Let's call him Trent, because he is trusted. He then goes out and employs this wisdom on users. Our user, Alice, hears the words of "don't click that link" and because of the presence of Trent, our trusted teacher, she decides to follow this advice.
Then, Alice goes out into the world and ... well, does productive work, something us Internet geeks know very little about. In her office every day she dutifully does not click, until she notices two things. Firstly, everyone else is clicking away like mad, and indeed sending lots of Word documents and photos of kids and those corny jokes that swill around the office environment.
And, secondly, she notices that nobody else seems to suffer. So she starts clicking and enjoying the life that Microsoft taught her: this stuff is good, click here to see my special message. It all becomes a blur and some time later she has totally forgotten *why* she shouldn't click, and cannot work out what the problem is anyway.
(Then of course a virus sweeps the whole office into the seas ...)
So what's going on here? Well, several factors.
Hence, cognitive dissonance. In this case, the security industry holds the unfounded view that education is a critical component of a security system. Out in the real world, though, that's not what happens: mostly the education doesn't happen at all, and when it does happen, it isn't effective.
Perhaps a better way to look at this is to use Microsoft as a barometer. What they do is generally what the user asks for. The user wants to click on mail coming in, so that's what Microsoft gives them, regardless of the wider consequences.
And, the user does not want to be educated, so eventually Microsoft took away that awful bloody paperclip. Which leaves us with the lesson of inbuilt, intuitive, as-delivered security. If you want a system to be secure, you have to build it so that it is intuitively secure to the user: each obvious action should be secure. And you have to deliver it so that it operates securely out of the box. (Mozilla have recently made some important steps in this direction by establishing a policy of delivering to the average user. It's a welcome first step which will eventually lead them to delivering a secure browser.)
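To make that concrete, here is a minimal sketch of the sort of check that secure-by-default software could run on the user's behalf. It is purely illustrative, not anything a real browser or mail client actually ships, and the domains in it are invented: compare the domain a link shows in its visible text with the domain its href actually points at, and flag the mismatch without asking the user to have learned a thing.

    # Illustrative sketch only: flag links whose visible text claims one
    # domain but whose href actually points somewhere else, a common
    # phishing trick. All domains below are invented for the example.
    from urllib.parse import urlparse

    def domain_of(url: str) -> str:
        """Return the lower-cased hostname of a URL, or '' if there is none."""
        return (urlparse(url).hostname or "").lower()

    def looks_deceptive(visible_text: str, href: str) -> bool:
        """True when the link text names a domain that differs from the real target."""
        text = visible_text.strip()
        shown = domain_of(text if "://" in text else "http://" + text)
        actual = domain_of(href)
        # Only complain when the visible text actually looks like a web address.
        return bool(shown) and "." in shown and shown != actual

    print(looks_deceptive("www.mybank.com", "http://203.0.113.7/login"))     # True: flag it
    print(looks_deceptive("www.mybank.com", "http://www.mybank.com/login"))  # False: text and target agree
    print(looks_deceptive("click here", "http://203.0.113.7/login"))         # False: no domain claimed

The point is not this particular heuristic; it is that the software does the suspicious-minded work, so the user's obvious action stays safe.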
If those steps aren't taken (build it secure, deliver it secure), then it doesn't help to say to the user, don't click there. Which brings me to the last point: why is user education *worse* than useless? Well, every time a so-called security expert calls for the users to be educated, he is avoiding the real problems, and he is shifting the blame away from the software to the users. In this sense, he is the problem, and until we can get him out of the way, we can't start thinking of the solutions.
Posted by iang at December 27, 2004 02:17 PM | TrackBack

But phishing links are of a specific kind. They are links to commercial sites which invite you to enter your password. (Right? Are there other kinds I'm overlooking?) People just need to learn that receiving an email with a link in it is not, in general, a reason to believe that you are talking to the legitimate site. It's just like meeting someone on the street who says, I'm a bank officer, please give me your money and I'll go deposit it in the bank for you. No one would fall for that; we've all learned better. In the same way, we will learn better than to believe similar claims in email.
Claiming that this won't happen and that people will continue to click on phishing links, giving away their account numbers and passwords, is tantamount to claiming that people can't learn from unpleasant experiences.
Posted by: Cypherpunk at December 27, 2004 06:46 PM

Second part: "people can't learn from unpleasant experiences." That's exactly what I'm claiming, in 4. above.
First part: well, right! It's just like that. It's also just like a bank discovering a security breach and emailing all its users to ask them to check their accounts in case there is a problem. Or it's like a bank notifying its customers that there is now a new site offering new services, one of which happens to be a fantastic new prize giveaway that requires you to log in.
In banking, there is a well known way in which banks make their presence felt: they rent or build a big fat impressive building in the centre of town, and they tell everyone that's where they are. Those pillars that you see in front of a bank had a purpose; they were there to impress on the users that only a bona fide bank would have the money to waste on such luxury, and it's a real bank inside.
Once we get onto the net, all that changes. Your claim that "we will learn better not to believe similar claims in email" is tantamount to banks not being able to use email. Well. That's not very satisfactory, is it? Just how is the bank going to notify the user when it has real problems? Real offers? Real anything?
Posted by: Iang at December 27, 2004 07:04 PM

Good article, I mostly agree with you. I'd append
7.: because of 4. (unclear who's responsible), Microsoft has no need to build more secure systems. Therefore, secure by default will not be done for a long time... There is no profit in it.
That leads me to the question: what _can_ we as IT experts do in the meantime?
Some ideas:
- Let the users run into the wall until they learn it, e.g. by using "fake" phishing mails that point to a trap: a site with explanations, a counter, and a gift for the first user who clicked on it (a rough sketch of such a trap follows after this list)? As an add-on, force winupdate ("browser-kidnapping" until the update is done)
- Force the use of Knoppix to do online banking?
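A rough sketch of what the trap behind such a "fake" phishing mail could look like, with a made-up port and wording: a tiny web server that counts the clicks and serves an explanation page instead of a login form.

    # Rough sketch of the trap idea above: the test mail's link points here,
    # and instead of a login form the visitor gets an explanation.
    # The port and the wording are made up for the example.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    click_count = 0  # how many people clicked the simulated phishing link

    class TrapHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            global click_count
            click_count += 1
            body = (
                "<h1>This was a phishing test</h1>"
                "<p>A real attacker would now have your password. "
                f"You are click number {click_count} on this link.</p>"
            ).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), TrapHandler).serve_forever()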
You know why user education is useless? Most people do not have any (cognitive or intuitive) frame of reference that they can use to understand the subtleties of identity and identification, naming, references, authenticity and authentication, and representation in a medium of bits. You can't blame them: there is no established de facto standard way of representing or naming on the internet, and it is doubtful whether there ever will be one used for everything. Besides, even people in the industry make serious errors here.
Our hope lies in limiting the "circle of friends". The key-signing party concept of PGP was the first attempt at this; linking the social/physical world into a bit representation requires first-hand knowledge of that relationship. And by necessity this circle of friends needs to be rather small. Say the size of an average tribe or extended family ;)
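As a sketch of what such a deliberately small circle could mean in practice (all names here are invented): accept a key only if someone you have verified first-hand has personally signed it, with no transitive trust beyond that one hop.

    # Sketch of a deliberately small "circle of friends": a key is accepted
    # only if someone I have verified face to face (say, at a key-signing
    # party) has personally signed it. No transitive web of trust beyond
    # that one hop. All names are invented.

    my_circle = {"alice", "bob", "carol"}   # people whose keys I verified first-hand

    signatures = {                          # who has signed which keys
        "alice": {"dave_key"},
        "bob": {"dave_key", "eve_key"},
        "mallory": {"eve_key"},             # mallory is outside my circle
    }

    def key_trusted(key_id: str) -> bool:
        """Trust a key only if a first-hand contact vouches for it."""
        return any(key_id in signatures.get(friend, set()) for friend in my_circle)

    print(key_trusted("dave_key"))  # True: alice and bob vouch for it
    print(key_trusted("eve_key"))   # True: bob vouches for it
    print(key_trusted("zoe_key"))   # False: nobody in my circle signed it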
Posted by: Twan at February 18, 2005 10:03 PM