January 02, 2005

Security Signalling - the market for Lemmings

Adam continues to grind away at his problem: how to signal good security. It's a good question, as we know that the market for security is highly inefficient, some would say dysfunctional. E.g., we perceive that many security products are good but ignored, while others are bad but extraordinarily popular: despite repeated evidence of breaches, users flock to them, en masse, with lemming-like behaviour.

I think a large part of this is that the underlying question of just what security really is remains unstudied. So, what is security? Or, in more formal economic terms, what is the product that is sold in the market for security?

This is not such an easy question from an economist's point of view. It's a bit like the market for lemons, which was thought to be just anomalous and weird until some bright economist sat down and studied it. AFAIK, nobody's studied the market for security, although I admit to only having asked one economist, and his answer was "there's no definition for *that* product that I know of!"

Let's give it a go. Here's the basic issue: security as a product lacks good testability. That is, when you purchase your standard security product, there is no easy way to show that it achieves its core goal, which is to secure you against the threat.

Well, actually, that's not quite correct; there are obviously two sorts of security products, those that are testable and those that are not. Consider a gate that is meant to guard against dogs. You can install this in a fence, then watch the rabid canines try and beat against the gate. With a certain amount of confidence you can determine that the gate is secure against dogs.

But, now consider a burglar alarm. You can also install it with about the same degree of effort. You can conduct the basic workability tests, same as a gate. One opens and goes click on closing; the other sets and resets, with beeping.

But there the comparison gets into trouble, as once you've shown the burglar alarm to work, you still have no real way of determining that it achieves its goal. How do you know it stops burglars?

The threat that is being addressed cannot be easily simulated. Yes, you can pretend to be a burglar, but non-burglars are pretty poor at that. Whereas one doesn't need to be a dog to pretend to be a dog, and do so well enough to test a gate.

What then is one supposed to do? Hire a burglar? Well, let's try that: put an ad in the paper, or more digitally, hang around IRC and learn some NuWordz. And your test burglar gets in and ... does what? If he's a real burglar, he might tell you or he might just take the stuff. Or, both, it's not unreasonable to imagine a real burglar telling you *and* coming back a month later...

Or he fails to get in. What does that tell you? Only that *that* burglar can't get in! Or that he's lying.

Let's summarise. We have these characteristics in the market for security:

  • a test of the product by a simulated threat is:
    • expensive, and
    • the results cannot be relied upon to predict defence against a real threat;
  • a test of the product by a real threat is:
    • difficult to arrange,
    • could be destructive, and
    • the results cannot be relied upon to predict defence against any _other_ real threat.

Perhaps some examples might help. Consider a security product such as Microsoft's Windows operating system. Clearly they write it as well as they can, and then test it as much as they can afford. Yet it always ships with bugs in it, and in time those bugs are exploited. So their testing - their simulated threats - is unsatisfactory. And their ability to arrange testing by real threats is limited by the inefficient market for blackhats (another topic in itself, but one beyond today's scope).

Closer to (my) home, let's look at crypto protocols as a security product. We can see that it fits the pattern fairly closely as well: the simulated threat is the review by analysts, the open source cryptologists and cryptoplumbers who pore through the code and specs looking for weaknesses. Yet it's expensive to purchase review of crypto, which is why so many people go open source and hope that someone finds it interesting enough to review. And even when you can attract someone to review your code, it is never a complete review. It's just what they had time for; no amount of money buys a complete review of everything that is possible.

And if we were to have any luck in finding a real attacker, it would only be by deploying the protocol in vast numbers of implementations, or in a few implementations of such value that it would be worth his time to attack. By the time we have crossed that barrier, we are probably rather ill-placed to watch for his arrival as a threat, simply because of the time and effort already spent getting that far. (E.g., the protocol designers have long since transferred to other duties.) And almost by definition, the energy spent in cracking our protocol is an investment that can only be recouped by aggressively acquiring assets once the breach succeeds.

(Protocol design has always been known to have highly asymmetric security characteristics. It is for this reason that the last few years have shown a big interest in provability of security statements. But this is a relatively young art; if it is anything like the provability of coding that I did at university, it can be summarised as "showing great potential" for many decades to come.)

Having established these characteristics, we find that a whole bunch of questions is raised. What then can we predict about the market for Lemmings? (Or is it the market for Pied Pipers?) If we cannot determine its efficacy as a product, why is it that we continue to buy? What can we do to make this market respond more ... responsibly? And finally, we might actually get a chance to address Adam's original question, to wit: how do we go about signalling security, anyway?

Lucky we have a year ahead of us to muse on these issues.

Posted by iang at January 2, 2005 12:24 AM
Comments

In the realm of physical security, one obvious success story is position-broadcasting systems in cars. Perhaps that could be extended to less valuable goods as well, such as computers and monitors, so that they'd start squealing if they were moved more than a few feet from their intended location. After all, we don't know how a burglar might break in, but we can be pretty sure he's gonna try to make off with the computer.
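To sketch the idea, here's a toy version in Java. Everything in it is hypothetical: the class name, the two-metre threshold standing in for "a few feet", and the readPosition() stub, which real hardware would replace with GPS or similar fixes.

    // Toy sketch of a "squeal if moved" alarm. The position source
    // is a stub; a real device would read it from positioning hardware.
    public class MovementAlarm {
        static final double THRESHOLD_METRES = 2.0; // "a few feet"

        public static void main(String[] args) throws InterruptedException {
            double[] home = readPosition(); // fix taken at installation
            while (true) {
                double[] now = readPosition();
                if (distance(home, now) > THRESHOLD_METRES) {
                    // In hardware this would be the squeal.
                    System.out.println("ALARM: asset moved from its station!");
                }
                Thread.sleep(1000); // poll once a second
            }
        }

        // Hypothetical stub standing in for a real positioning receiver.
        static double[] readPosition() {
            return new double[] { 0.0, 0.0 };
        }

        // Straight-line distance between two {x, y} fixes, in metres.
        static double distance(double[] a, double[] b) {
            double dx = a[0] - b[0], dy = a[1] - b[1];
            return Math.sqrt(dx * dx + dy * dy);
        }
    }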

For computer security, one technique stands out which is so obvious, easy, and practical that people simply scoff at it: write your program in a language which has garbage collection, bignums, and bounds checking. The days of C being the only widely supported language are long gone, and anyone using C for anything vaguely security-related which doesn't really (and I mean really) need tremendous performance is simply being pig-headed.
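To make the contrast concrete, here is a minimal illustrative sketch (the class name and buffer sizes are invented for the example). In C, the same copy loop would silently overwrite whatever lies next in memory - the classic buffer overflow - while a bounds-checked language refuses the out-of-range write at runtime.

    // Minimal sketch of what bounds checking buys you: the copy loop
    // makes exactly the mistake behind most C buffer overflows, but
    // the Java runtime stops it at the buffer's edge.
    public class BoundsDemo {
        public static void main(String[] args) {
            byte[] buffer = new byte[8];
            byte[] input = "longer than eight bytes".getBytes();
            try {
                // No check of input.length against buffer.length.
                for (int i = 0; i < input.length; i++) {
                    buffer[i] = input[i];
                }
            } catch (ArrayIndexOutOfBoundsException e) {
                // The write past index 7 is refused instead of letting
                // an attacker's bytes land somewhere useful.
                System.err.println("overflow stopped: " + e.getMessage());
            }
        }
    }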

Posted by: Bram at January 2, 2005 02:16 AM

Very interesting topic, iang. Interesting in that it is weighted towards a technical analysis of security. What about the psychological analysis of security? Consider the feeling felt by Londoners in the Blitz when they heard the British anti-aircraft guns, especially those who knew that the anti-aircraft guns were a poor defence yet still felt some sort of security. In a similar vein, what about those who buy some sort of security product on the basis that _everyone else uses it_? Any links or comments in this regard would be of interest.

Posted by: Darren at January 2, 2005 06:47 AM

Bram, I'm curious what you think of OpenBSD? It is AFAIK "the secure OS" when it comes down to it. Yet, it's written in C...

(Just to clear the air here, I don't disagree, and use Java and Perl for security code partly for that reason!).

Darren, that's exactly where Adam and I are looking ... we know that the ack-ack didn't do much good, but the locals still bought it anyway. So why is that? In terms of signalling, it was a bad signal ("ackackackack..." :-) and in terms of economics, the appearance of a popular bad security product crowds out the potential for good ones to emerge.

What do we have to do to silence the guns?

Posted by: Iang at January 2, 2005 10:23 AM

Bram - sadly, the position tracking for cars appears to be easily defeated. Any steal-to-order thief who makes off with a high-value car should be able to disable Tracker in under a minute (see discussion threads on sites such as pistonheads.com).

And although I agree in principle that a language with more security built in is a good thing (tm) in general, training programmers to code securely is key no matter what the underlying language is.

Darren/Ian - sadly, it seems it takes a vast number of dead lemmings to alert the population. And then the swing moves the other way - look at Firefox. A sudden upswing because it is the 'secure alternative to IE.' Well, it appears to be more secure, and I generally use it because of that, but this may be down to more attacks being aimed at IE because it is MS or because it has market dominance, or to the early adopters of Firefox being more technically savvy and so patching better, or to many other reasons. The less savvy new switchers to Firefox appear to believe it is 'secure', so we could end up with unsuspecting victims who think they are now browsing securely but are still being exploited.

Rory

Posted by: frog51 at January 10, 2005 08:46 AM

Rory, thanks for posting! On the issue of position sensors, there was an article (lost the URL, sadly) that reported that a special Merc owned by one of the directors of the company, laden with all the new stuff including position broadcasting ... was nicked!

On Firefox - I wholeheartedly recommend that people switch. That's primarily because it is different, so they benefit. But the point you make is indeed valid. Will it maintain any difference once it becomes targeted?

I think that the answer here is that the Mozilla people do think a bit about security. They do accept it as their responsibility. As and when they see the attacks, especially the classical spoofing attacks, I think they will be more proactive about fixing them and, what's more important, more embarrassed!

It's been 2 years since Bill Gates' famous memo, and we've seen some evidence that the company is thinking about security. The shame and horror of it is that, given their position and liability and scrutiny and all that, it is only "some evidence"; he has completely failed to turn the company in this new direction.

That's where the difference lies, I believe. But I could be totally wrong in this. I have for example completely failed to convince Mozilla that phishing exists... And as far as I can tell, there is no Security Officer at Mozilla (wakeup call to Adam here...).

Posted by: Iang at January 10, 2005 09:14 AM

If I were to study internet cryptography, which language should I devote energy to: JavaScript or Java? What do banks use?

Posted by: bernie at February 25, 2005 05:03 PM

Crypto is done primarily in C, with some Java and other languages. But it makes less difference than one would think, as most crypto protocols are done with toolkits.
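To illustrate the toolkit style, here's a minimal sketch using Java's standard javax.crypto toolkit. The class name and plaintext are invented for the example, and a real protocol would also pick a cipher mode and IV explicitly; the point is that the designer calls the toolkit rather than implementing the primitives.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    // Minimal sketch: ask the JCE toolkit for a key and a cipher,
    // rather than writing the block cipher yourself.
    public class ToolkitDemo {
        public static void main(String[] args) throws Exception {
            // Generate a 128-bit AES key via the toolkit.
            KeyGenerator keygen = KeyGenerator.getInstance("AES");
            keygen.init(128);
            SecretKey key = keygen.generateKey();

            // The cipher implementation is the toolkit's problem;
            // "AES" here falls back to the provider's default mode.
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] ciphertext = cipher.doFinal("attack at dawn".getBytes());

            System.out.println(ciphertext.length + " bytes of ciphertext");
        }
    }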

Banks never touch anything but full systems, so languages don't come into it. If it isn't supported by IBM, it probably won't get in the door as far as a bank is concerned. Well, not quite 100% true, more like 90% true, but good enough for a rule of thumb.

Posted by: Iang at February 25, 2005 06:25 PM