September 14, 2010

Internet Intellectuals, Media Security Reporting, and other explorations in the market for silver bullets

Evgeny Morozov and a whole lot of other media-savvy people had a silver-bullets moment when analysing Haystack, a hopeful attempt at bypassing censorship for citizens in countries like Iran. The software was released, lauded by the press, and granted an export license by the US government.

By all media-validated expectations, Haystack should have been good to go on and wreak merry havoc against Iranian censorship. Until Jake Appelbaum and his team took a poke at it and discovered it permitted tracking of the dissidents. Then the media flipped and attacked. Familiar story, right?

I want to know why the media was so quick to push this tool. I want answers.

Morozov asks, in various ways, what went wrong? Here's a breakdown of what I think are his essential points, and my answers.

Why didn't the security community come in and comment? That's easy. The security community is mostly a commercially minded group of people who work for food. It includes a small adjunct rabble who make a lot of noise breaking things. Not for money, but for fun & media attention. Allegedly, Appelbaum said:

Haystack is the worst piece of software I have ever had the displeasure of ripping apart. Charlatans exposed. Media inquiries welcome.

If Jake's deep sarcasm isn't slapping you on the forehead, here it is in plain writing: we break the tools because it's cool, because the media write about it, and because it's fun. But the presence of the crowd-pleasing infosec vigilantes doesn't mean that anyone is going to fix the broken efforts. Or provide good advice. No, that costs money:

UPDATE #1: I just received information that "Haystack has been turned off as of ~19:00 PST, Sept 10/2010", with Austin Heap agreeing that "Haystack will not be run again until there is a solid published threat model, a solid peer reviewed design, and a real security review of the Haystack implementation."

Look, for a wider example, at the fabled OpenPGP encryption system. In its long history, the major providers emerged as PGP Inc and GnuPG. Both of these groups had substantial funding or business reasons to carry on, to build, at one time or another. Which meant that their programmers could eat. As an alternative case in point, my own efforts in Cryptix OpenPGP went up and down to the tune of money and business need, not to the tune of crackers or bugs or media attention. The hoi polloi took their best shot at these products, and a few cracks were found, but the real story is how the builders built, not how the cracks were found.

So in essence we have in the security community an asymmetric relationship with the world. We are happy to break your product for our fun; but we won't be fixing it. For that, put your money on the table. If you want to change that, get the media to make building secure apps more sexy than breaking them. Simple, but the opposite of how the Haystack story went.

Next. Why did the State Department endorse Haystack with a license to export? The best way of seeing this is as a case of "the enemy of my enemy is my friend." It has been evident since at least the start of the Bush administration that the US government has a policy of taunting the Iranians when and how it can. So, of course, the Haystack product fit the policy.

One could look at the technical merits of the product, and come to some sort of hopeful case. The license is not an endorsement of any strong security, it's actually the reverse. It is an endorsement that the security isn't strong enough to worry the USA. There is one further aspect: the exporting organization has a way to avoid any hard discussions: simply open source the product.

From that perspective, the State department has no benefit from not issuing the license, and every reason to issue it.

What is probably more interesting is to ask: what do we do about a product that puts Iranian lives at risk? The easy answer is: don't put lives at risk. That seems undeniable, right?

Wrong. It is wrong, at three levels.

Firstly, this is clearly against the policy and practice of the various governments in this space, who routinely put foreign lives at risk in order to pursue local objectives. (We already established some alignment there, above.) Going by the count of lives next door in Iraq, we're running at roughly a 100:1 ratio, order of magnitude, of their lives put at risk compared to ours. We might talk about the "undeniable value of human life," but the facts make that a difficult assumption.

We could simply say that we the Internet, we the Intellectuals shouldn't adopt the low tactics and cavalier attitude of our governments. We are better than them!

Except, that's unfounded as well. Secondly: consider the OpenPGP community. This community distributed encryption software that was frequently used by the same target audience as Haystack. I know because I was part of that community (proudly) and I heard some of the stories.

Stories of success, and stories of failure. People used OpenPGP products, and people disappeared, and people died.

So why this apparent contradiction? Why is OpenPGP so secure, but people still die, whereas we don't accept Haystack which is insecure and might lead to deaths? The answer is risk.

All security is risk-based. Adi Shamir put it best:

  • Absolutely secure systems do not exist.
  • To halve your vulnerability, you have to double your expenditure.
  • Cryptography is typically bypassed, not penetrated.
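Adi's second rule implies an exponential cost curve: each halving of vulnerability doubles the spend, so cost scales as the inverse of vulnerability. A minimal sketch of that arithmetic, with purely illustrative numbers of my own:

```python
# Sketch of Adi Shamir's second rule: halving vulnerability doubles expenditure.
# All numbers are illustrative, not drawn from any real system.

def cost(vulnerability, base_cost=1.0, base_vulnerability=1.0):
    """Expenditure implied by the rule: cost scales as base_vulnerability / vulnerability."""
    return base_cost * (base_vulnerability / vulnerability)

for v in [1.0, 0.5, 0.25, 0.125]:
    print(f"vulnerability {v:>6} -> relative cost {cost(v):>4}")

# Each halving of vulnerability doubles the cost; driving vulnerability
# to zero would need unbounded spend -- which is rule 1 ("absolutely
# secure systems do not exist") restated in money.
```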

Security is relative to everything, and the "black box" called Haystack or OpenPGP is only a part of that context. The security of Haystack may be sufficient in one context, OpenPGP may be hopeless in another context.

And it takes quite a lot of experience, and fairly difficult analysis of the overall context to establish whether the risk of a tool is worth taking. For example, we deep in the security community know that all OpenPGP products can be utterly defeated with equipment worth about five bucks.

Which should make the point: we can't easily say that the use of Haystack will be absolutely safer or less safe. We can only take on risk, or expose others to risk through our efforts, which is why Haystack may well have deserved the Entrepreneur award: the team went where others were too afraid to go, the true spirit of an Entrepreneur.

Finally, thirdly, and to close on risk, we must always consider the null option: we do nothing, therefore we cannot put lives at risk. Right?

No, wrong again. The null option, do nothing, doesn't work either. If we do not supply OpenPGP secure communications to the Iranian dissidents (or Haystack, or whoever, or whatever) then they will use less secure techniques. By limiting the availability of secure tools, our act of denial will increase the risk for someone else.

That's because we can assume that the dissidents will diss, and we can either help them by providing better tools, or stand idly by while they die for want of better tools. We have to negate the easy implication of "causality & responsibility"; there is no simple binary responsibility here. People die if we act, and they die if we don't act. Our risk might go down if we do nothing; theirs may go up.

What in summary do we have? How to answer the blogosphere angst of "How did this happen to us? Why can't you fix it? The government must do something?"

That's leading to the final question. Why is it that this is so hard, when it seems so easy? Who can we blame for the hype? Why have the expectations of the media been so truly flipped over in the blink of an eye?

The security market is a market in silver bullets.

In other words, in a silver-bullets market, there is an absence of well-agreed solid practice & theory. There are lots of producers, lots of products, lots of theories and lots of practices. But, within the security community, these theories are at war with one another, and for every apparently sustainable argument, you'll be able to find someone to trash it. And the data to prove it trash-worthy.

In this sense, security is about as well understood as freedom. Just to give a case in point: this article quotes the misnamed and misunderstood Kerckhoffs' Principle:

"Although we sincerely wish we could release Haystack under a free software license, revealing the source code at this time would only aide the authorities in blocking Haystack."

That's a statement in direct conflict with Kerckhoffs' Principle, a cornerstone of security philosophy. The Principle states that the only security worth doing is that which remains secure even if your enemy knows the totality of how it works. Haystack's refusal to publish the software is an enormous red-flag to security practitioners, suggesting strongly that some aspect of the security it provides somehow hinges on a parlour trick that - once known - becomes useless or potentially hazardous.

This is a reference to Kerckhoffs' six principles of secure communications, and it fails for a too-simple reading of one of them. It's a common problem.

Kerckhoffs' second principle states "It must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience;" Unfortunately, this is not strictly true. K2 remains a principle and not a law, and yes, when people talk about Kerckhoffs' law, they are wrong.
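For reference, the orthodox reading of K2 is that secrecy should live in the key alone, while the mechanism itself can safely be public. A minimal sketch of that orthodoxy, using HMAC-SHA256 as my own choice of public mechanism (nothing here is from Haystack):

```python
# Kerckhoffs' second principle, orthodox reading: the algorithm (HMAC-SHA256,
# fully published) may "fall into the hands of the enemy"; only the key is secret.
import hashlib
import hmac

def tag(key: bytes, message: bytes) -> str:
    # The construction is completely public; all the secrecy lives in `key`.
    return hmac.new(key, message, hashlib.sha256).hexdigest()

secret_key = b"only-this-needs-to-stay-secret"  # hypothetical key
t = tag(secret_key, b"meet at dawn")

# An enemy who knows the algorithm but not the key gets a different tag
# for any wrong key guess, so publishing the mechanism costs nothing.
assert tag(secret_key, b"meet at dawn") == t
assert tag(b"wrong-key-guess", b"meet at dawn") != t
```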

It's perhaps easier to show this by a hypothetical: if, for example, Haystack had been built as a Skype plugin, or had used RIM's Blackberry enterprise layer, etc., would we then be able to rely on it? Yes, remembering our risk discussion, because it would be better than the alternative. But these things are secret, breaking K2. Or, for a more real-world example, if the NSA were to mount Haystack, now with new-improved-secret-crypto!, do you think they would be publishing the source?

Why then does K2 work for us, or, as Shannon's maxim has it, "the enemy knows the system"? Because revealing the internal design generally makes it much harder to hide behind incompetence. And the silver-bullet aspect of the entire security world makes an incompetent result almost a given. In this, Haystack has proved the general incompetence principle of secrecy: a secret system is likely to hide a great deal of incompetence.

But, that can still be a good risk to take. It all depends. There is no absolute security, so where you draw the line depends. On everything. Now perhaps we see why Adi's words above, and Kerckhoffs' principles, *all of them*, have sustained over time. Knowing the principles and hypotheses of security engineering is a given; that's the job of a protocol engineer. What separates engineering from art is knowing when to breach a hypothesis.

All this by way of showing that one man's security wisdom might be another man's folly, and in such a world, a silver bullet is a seemingly valuable thing.

Should we support Haystack, knowing all the above? Yes. But maybe we needed hindsight to see the reasons, laid out more clearly. Look at the public lambasting that the participants have had!

Now, imagine you want to do a better job. Feel scared and queasy? Yup, in the climate generated by the media, the security folk and the political agenda today, there are relatively few incentives to take on this task. Instead, there are much greater incentives to build a social network and monetise the potential for massive abuses of privacy than to muck around with democracy and freedom of speech and all that.

Secondly, consider the open security community. We will break it for you, but we won't help you fix it. Like the media, our attention is slanted dramatically against you.

So, in practice, it should be no surprise that groups such as the Haystack team are few and far between. It's almost as if we face the devil's choice: a dodgy system or no system at all. A good security model is not a cheap option, nor a practical one, nor an economic one. Security will kill your dreams; the structure of the industry makes it so.

If your objective is to help freedom of speech, then delivering crypto systems will help, even ones with known leaks. That's assuming they will do some good, in the balance. There is one final advantage: it is a lot easier to fix broken tools than to fix absent tools. Contrary to accepted wisdom, writing the solid security model up front, with no customer base, is a fool's errand.

Posted by iang at September 14, 2010 10:00 PM | TrackBack

On the question of "improving systems," Jake works on Tor, which is a much more improvement-worthy system.

Posted by: Adam at September 17, 2010 10:22 AM

@ Iang,

With regards Adi Shamir's three rules,

1, Absolutely secure systems do not exist.
2, To halve your vulnerability, you have to double your expenditure.
3, Cryptography is typically bypassed, not penetrated.

I've never been very happy with them as anything other than a sound bite.

The first rule is a little trite, because "absolute" is unbounded. It's like saying a finite universe imposes no limitations...

Another way of looking at Adi's 2nd rule is as cost proportional to 1/(1-x), where x is your degree of security normalised towards 100%, or absolute, security. Thus it is consistent with his first rule.
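The arithmetic behind that reading can be sketched in a couple of lines; the numbers below are illustrative only:

```python
# Cost as 1/(1 - x), where x is the degree of security normalised to [0, 1).
# Illustrative only; no real system's figures are used.

def cost(x: float) -> float:
    """Relative expenditure implied as security x approaches the absolute (x -> 1)."""
    assert 0.0 <= x < 1.0, "x = 1 (absolute security) is unreachable -- rule 1"
    return 1.0 / (1.0 - x)

for x in [0.0, 0.5, 0.9, 0.99]:
    print(f"security {x:.2f} -> relative cost {cost(x):.0f}")

# Cost diverges as x -> 1, which is exactly Adi's first rule:
# absolute security would require infinite expenditure.
```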

But apart from my previous objection to the first rule I have a real objection to the second rule because it can be shown to be a poor assumption.

The vulnerability of a system increases with the number of attack vectors, which rises with the number of interactions between the parts (i.e. the complexity) of the system, not the number of its component parts.

At its simplest, the cost increase in systems is often better equated with the increase of complexity in a system, which at the lower bound tends to be (N^2 - N)/2 but is often a more significant power law.

So if you add one more component, the vulnerability does not rise by 1 but by the number of existing components in the system, unless you exercise skill in segregating components through controlled choke points etc. to limit the complexity (which is one of the main ways EmSec / TEMPEST gets around the side-channel issue with "clock the inputs and outputs").
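The arithmetic here can be checked quickly (a fully connected system is assumed; real systems with choke points grow more slowly):

```python
# Pairwise interactions in a fully connected system of n components:
# n*(n-1)/2, so adding one component adds n new interaction paths, not 1.

def interactions(n: int) -> int:
    """Number of pairwise interactions among n fully connected components."""
    return n * (n - 1) // 2

for n in range(2, 7):
    added = interactions(n + 1) - interactions(n)
    print(f"{n} parts: {interactions(n):>2} interactions; one more part adds {added}")

# Segregation through controlled choke points works by cutting this
# quadratic growth back toward something closer to linear.
```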

As for his third rule, it's the principle of low-hanging fruit, and it applies to the whole system, not just the crypto algorithm.

Code cutters assume from this third rule that one AES implementation is the same as any other except in terms of performance, and thus that the code does not require further scrutiny.

But the rule glibly deals only with the abstract algorithm and ignores the very real issue of implementing the algorithm. This blinds code cutters to the dangers of side channels, which is where the best attack vectors are in existing systems.

The way to higher-assurance systems is by managing complexity correctly in all parts of a system. The methods for achieving this are similar in ethos to those for increasing system availability. And although it has been known for many years to the likes of hardware and safety-system engineers, it does not appear to be a concept known to many software system designers and code cutters (or, for that matter, a large number of security gurus).

Sadly, it appears from their recent conference in Orlando that the NSA are pandering to this viewpoint and ignoring the work of their own alumni such as Brian Snow.

Posted by: Clive Robinson at September 18, 2010 12:50 PM