March 10, 2007

Feelings about Security

In the ongoing saga of "what is security?" and, more importantly, "why is it such a crock?", Bruce Schneier weighs in with some ruminations on "feelings" or perceptions, leading to an investigation of psychology.

I think the perceptual face of security is a useful area to investigate, and the essay shines as a compendium of archetypal heuristics, backed up by experiments and written for a security audience. These ideas can all be fed into our security thinking, not to be taken for granted, but rather as potential explanations to be further tested. Recommended reading, albeit very long...

I would, however, urge some caution. I claim that buyers and sellers do not know enough about security to make rational decisions; the essay suggests a perceptual deviation as a second uncertainty. Can we extrapolate strongly from these two biases?

As it is a draft requesting comment, here are three criticisms, which suggest that the introduction of the essay seems unsustainable:

THE PSYCHOLOGY OF SECURITY -- DRAFT

Security is both a feeling and a reality. And they're not the same.

The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures.

Firstly, I'd suggest that "what security is" is not yet well defined, and has defied our efforts to come to terms with it. I say a bit about that in Pareto-secure, but there I'm only really looking at a single aspect: why cryptographers are so focussed on no-risk security.

Secondly, both maths and feelings are approximations, not the reality. Maths is just another model, based on some numeric logic as opposed to intuition.

What one could better say is that security can be viewed through a perceptual lens, and it can be viewed through a mathematical lens, and we can probably show that the two views look entirely different. Why is this?

Neither is reality though, as both take limited facts and interpolate a rough approximation, and until we can define security, we can't even begin to understand how far from the true picture we are.

We can calculate how secure your home is from burglary, based on such factors as the crime rate in the neighborhood you live in and your door-locking habits. We can calculate how likely it is for you to be murdered, either on the streets by a stranger or in your home by a family member. Or how likely you are to be the victim of identity theft. Given a large enough set of statistics on criminal acts, it's not even hard; insurance companies do it all the time.

Thirdly, insurance is sold, not bought. Actuarial calculations do not measure security to the user but instead estimate risk and cost to the insurer, or more pertinently, insurer's profit. Yes, the approximation gets better for large numbers, but it is still an approximation of the very limited metric of profitability -- a single number -- not the reality of security.

What's more, these calculations cannot be used to measure security. The insurance company is very confident in its actuarial calculations because it is focussed on profit; for the purpose of this one result, large sets of statistics work fine, as well as large margins (life insurance can pay out 50% to the sales agent...).
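The insurer's arithmetic here is simple enough to sketch. As a toy illustration (all figures invented, not drawn from any real actuarial table): the premium is just the expected payout, grossed up for sales commission and profit margin. It is a single-number model of the insurer's profitability, and says nothing about the policyholder's actual security.

```python
# Toy actuarial pricing sketch -- all numbers invented for illustration.
# The insurer only needs this to be right on average over a large book of
# policies; it measures the insurer's profit, not the victim's security.

def premium(p_claim, payout, commission=0.5, profit_margin=0.1):
    """Annual premium = expected loss, grossed up for commission and profit."""
    expected_loss = p_claim * payout                 # pure risk cost
    return expected_loss / (1 - commission - profit_margin)

# A 1-in-1000 annual claim probability on a 100,000 payout, with the
# 50% sales commission mentioned above plus a 10% profit margin:
print(round(premium(0.001, 100_000), 2))             # 250.0
```

Note how insensitive the result is to anything the policyholder does: the model has no term for the user's own losses or behaviour beyond the crude claim probability, which is exactly the point being made above.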

In contrast, security -- as the victim sees it -- is far harder to work out. Even if we stick to the mathematical treatment, risks and losses include factors that aren't amenable to measurement or to accurate dollar figures. E.g., if an individual is a member of the local hard-drugs distribution chain, not only might his risks go up and his losses go down (life expectancy is generally lowered in that profession), but also, how would we find out when and how to introduce this skewed factor into his security measurement?

While we can show that people can be sold insurance and security products, we can also show that the security they gain from those products has no particular closeness to the losses they incur (if it were close, then there would be more "insurance jobs").

We can also calculate how much more secure a burglar alarm will make your home, or how well a credit freeze will protect you from identity theft. Again, given enough data, it's easy.

It's easy to calculate some upper and lower bounds for a product, but again these calculations are strictly limited to the purpose of actuarial cover, or insurer's profit.

They say little about the security of the user, and they probably play as much to the feelings of the buyer as to any mathematical model of the seller's risks and losses.

It's my contention that these irrational trade-offs can be explained by psychology.

I think that's a tough call, on several levels. Here are some contrary plays:

  • Peer pressure explains a lot, and while that might feel like psychology, I'd suggest it is simple game theory.
  • Ignorance is a big factor (c.f., insufficient information theory).
  • Fear of being blamed also plays its part, which is more about agent/principal theory and incentives. It may matter less whether you judge the risk well than if you lose your job!
  • Transaction cost economics (c.f., Coase, Williamson) has a lot to say about some of the experiments (footnotes 16,17,51,52).
  • Marketing feeds into security, although perhaps marketing simply uses psychology -- and other tools -- to do its deeds.

If we put all those things together, a complex pattern emerges. I look at a lot of these elements in the market for silver bullets, and, leaning heavily on the Spencarian theory of symmetrically insufficient information, I show that best practices may emerge naturally as a response to the costs of public exposure, and not the needs of security. Some of the experiments listed (24,38) may actually augment that pattern, but I wouldn't go so far as to say that the heuristics described are the dominating factor.

Still, in conclusion, irrationality is a sort of code word in economics for "our models don't explain it, yet." I've read the (original) literature of insufficient information (Spence, Akerlof, etc) and found a lot of good explanations. Psychology is probably an equally rewarding place to look, and I found the rest of the article very interesting.

Posted by iang at March 10, 2007 12:20 PM
Comments

I must say that I completely agree with you on this one. I think that the rationality assumption is core to economics the same way as, for instance, conservation laws are core to physics. If there is some matter or energy missing from the balance, it doesn't challenge the conservation law but our model. Similarly, if people's behavior (on a large scale) seems to defy rationality, it does not challenge the rationality assumption but rather the model that has been applied.
I would argue with Schneier's repeated assertion that our cave-man psychology is not rational in today's world: people regularly make very rational decisions based on gut feeling, which seems irrational at first glance, but is actually not. The fact that humans do not understand what they are doing and why does not mean that it is irrational. If we were regularly and predictably irrational, we wouldn't be the most populous mammal on the planet (with rats being a distant second: there are about 2 billion of them, despite their fertility rates).

Posted by: Daniel A. Nagy at March 9, 2007 05:34 AM

It's a tough one, isn't it!?!

On the one hand, the Internet as a body rejected most of the major security models for one reason or another, and about the only one that survived in mass usage was the application-controlled username + password, with opportunistic methods (SSH, Skype) coming a very distant second.

On the other hand there is this huge list of excuses to get through: the users are irrational, they aren't trained, they don't take security seriously, the GUI people aren't cooperating, the CAs aren't doing checks properly, we should make the vendors liable, the developers don't understand... all of which turn out to be bogus when examined closely.

The users were right, and (Ir)rationality is just the latest excuse in a long line of them. Studying history would show that. Psychology may help us to understand why the users got it right and the security world got it wrong, but I doubt that such can be learnt without addressing the basic flaw: "we woz wrong!"

Posted by: Iang at March 9, 2007 06:16 AM

It's "Akerlof", not "Akerlov".

Posted by: Daniel A. Nagy at March 9, 2007 07:02 AM

thanks, fixed.

Posted by: Iang at March 9, 2007 08:36 AM

@Daniel:

Ah yes. Homo Economicus.

With advances in brain imaging and our understanding of the neurophysiology behind perceptions of risk, etc, we may be seeing the leading edge of an effective theoretical attack on some classical assumptions.

I suspect that so-called "neural economics" may induce involuntary muscular twitching among many economists (much as the research of, say, Thaler, Kahneman, or Tversky did back in the day), but the fact that this stuff is being picked up by Bruce is telling, I think.

Posted by: Chris at March 9, 2007 02:11 PM

The lack of insurance jobs relates to an overlay of regulatory efforts to limit competition, and I suggest the same applies to security overall. The standards are created to limit competition and protect the vested interests of those in the security industry, just as they protect those in the insurance industry. Security is a feeling and not a fact because the user, as in the case of insurance, has no venue to contest the claims and premiums other than a contrived, manipulated regulatory overlay called the government at large.

The issue is the inability of governmental entities to understand the complex nature of placing their foot in the pool of commerce. This total ignorance does not prevent their interest and efforts to do so. The NSA prohibition on the exportation of crypto did nothing to actually stop it. The oversight by interested governmental entities is a blind effort to placate the security industry's influence over them. This myopic relationship has real consequences that will result in the demise of private efforts at first, and then of the general public's ability to have confidence in uses that would advance their services and desires.

So who gets served by governmental entities? The constituents of the government are highly specific and serve no purpose to governance. The simple fact is that no objective review of products released for effectiveness has ever been successful. Governmental efforts in other areas fall far short; just look at the ability of the FDA to protect the public from poorly formulated uses of medicine. So we are faced with academic, governmental, and industry abuse of privilege that has been usurped using corrupt methods. How can it be secure when the purpose is designed to protect the industry rather than the public or user? The total corruption of the intent has brought us to a point where private efforts without governmental support are required, and can only be vetted by private users.

We are required to disintegrate from the institutions that were previously relied upon; this includes the corrupted academic efforts that are funded by governmental entities or corporate interests, in whole or in part.

Posted by: Jim N. at March 11, 2007 10:26 AM

This episode of Financial Cryptography has been truly entertaining. Men talking about feelings is somewhat rare. Men correlating those feelings to mathematical formulae is priceless. As a woman I feel obligated to clear this up. Feeling secure is like being in love, you are until you're not.

Posted by: Anon at March 12, 2007 02:03 PM