May 26, 2008

Firefox 3 and the new "make a security exception" (+ 1 bug)

Firefox 3 has reworked the ability to manage your certificates. After some thought back at Mozo central, they've introduced a clearer method for dealing with those sites that you know are good. For the people who are new to this: there are the sites that you know are good, and there are the sites that others tell you are good. People have spent decades arguing over which of these spaces owns the term "trust", so I won't bore you with that sad and sorry tale today.

Meanwhile, I've worked my way through the process and added the FC blog as an Exception. The process is OK: the language is better than before, as it now says that the site is not trusted. Before, it said "*you* don't trust this site!", which was so blatantly wrong as to be rude to me and confusing to everyone else. Now it just fudges the issue by saying the site is untrusted, without indicating by whom. Most people will realise this is a meaningless statement: trust comes from a person, it isn't something you can get from an advert.

There are multiple clicks, possibly intended to indicate that you should really know what you are doing. I don't think that will help much. What would help more is a colour. So far, it is white, indicating ... well, nothing. So there is a confusion between sites you trust and sites that have nothing; both are cast into the nothing bucket.

However, all this will come in time. There is no doubt that KCM, or Key Continuity Management, is the way to go, because users need to work with their sites, and when their communities install and use certs, that's it. KCM is needed to let the rest of the world outside Main Street, USA use the thing called SSL and secure browsing. So it will come in time, as the people at Firefox work out how to share the code between the two models.

One thing however: I did this around a week ago, carefully following the exception process. Now, just a moment before I started this post, Firefox suddenly lost its memory! As I was saving another blog post, it decided to blow away the https site. Suddenly, we were back to the untrusted regime, and I had to do the whole "I trust, I know I trust, I trust I know, I know I trust more than any blah blah trust blah!" thing, all over again. And then there was no post left ... luckily it was a minor change and the original was saved.

This could be an MITM. Oh, that I would be so important... oh, that someone would want to sneak into my editorial control over the influential Financial Cryptography blog and change the world-view of the thousands of faithful readers... well, fantasies aside, this isn't likely to be an MITM.

It could be a sysadm change, but the cert looks the same, although there is little info there to check. (OK, this is the fault of the servers: the only way to tell is to go to the server, and ... it doesn't give you any info worth talking about. SSH servers have the same problem.) And the sysadm would have told me.

So Occam's razor suggests this is a bug in Firefox. Well, we'll see. I cannot complain too loudly about that, as this is RC1. Release Candidates might have bugs. This is a big change to the way Firefox works, bugs are expected. One just bit me. As someone once said, the pioneers are the ones with the arrows in the back.

Posted by iang at 07:21 AM | Comments (3) | TrackBack

May 14, 2008

Case study in risk management: Debian's patch to OpenSSL

Ben Laurie blogs that downstream vendors like Debian shouldn't interfere with sensitive code like OpenSSL. Because they might get it wrong... And, in this case they did, and now all Debian + Ubuntu distros have 2-3 years of compromised keys. 1, 2.

Further analysis shows, however, that the failings are multiple, at several levels, and shared all around. As we identified in 'silver bullets', fingerpointing is part of the problem, not the solution, so let's work the problem, as professionals, and avoid the blame game.

First, the tech problem. OpenSSL has a trick in it that mixes uninitialised memory in with the randomness generated by the OS's formal generator. The standard idea here is that it is good practice to mix different sources of randomness into your own pool. Roughly following the designs from Schneier and Ferguson (Yarrow and Fortuna), modern operating systems take several random things like disk drive activity and net activity, mix the measurements into one pool, then run it through a fast hash to filter it.
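To make the pool-and-hash idea concrete, here is a minimal sketch in C, using OpenSSL's SHA-256 as the fast hash. The sources and structure here are illustrative assumptions of mine, not the internals of any actual OS generator:

    /* A minimal sketch of pool mixing: concatenate samples from several
     * independent sources and run them through a fast hash. Illustrative
     * only -- not any real generator's internals. Build: cc mix.c -lcrypto
     */
    #include <stdio.h>
    #include <time.h>
    #include <openssl/sha.h>

    int main(void)
    {
        unsigned char pool[SHA256_DIGEST_LENGTH] = {0};
        unsigned char sample[64];
        struct timespec ts;
        SHA256_CTX ctx;

        SHA256_Init(&ctx);
        SHA256_Update(&ctx, pool, sizeof pool);   /* previous pool state */

        /* Source 1: the OS's formal generator. */
        FILE *f = fopen("/dev/urandom", "rb");
        if (f != NULL) {
            if (fread(sample, 1, sizeof sample, f) == sizeof sample)
                SHA256_Update(&ctx, sample, sizeof sample);
            fclose(f);
        }

        /* Source 2: a high-resolution clock (weak alone, harmless mixed in). */
        clock_gettime(CLOCK_MONOTONIC, &ts);
        SHA256_Update(&ctx, &ts, sizeof ts);

        SHA256_Final(pool, &ctx);                 /* the filtered pool */
        return 0;
    }

The hash does the filtering: whatever structure the raw measurements have, the pool that comes out is uniformly spread.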

What is good practice for the OS is good practice for the application. The reason for this is that in the application, we do not know what the lower layers are doing, and especially we don't really know if they are failing or not. This is OK if it is an unimportant or self-checking thing like reading a directory entry -- it either works or it doesn't -- but it is bad for security programming. Especially, it is bad for those parts where we cannot easily test the result. And, randomness is that special sort of crypto that is very very difficult to test for, because by definition, any number can be random. Hence, in high-security programming, we don't trust the randomness of lower layers, and we mix our own [1].

OpenSSL does this, which is good, but may be doing it poorly. What it (apparently) does is mix uninitialised buffers in with the OS-supplied randoms, and little else (those who can read the code might confirm this). This is worth something, because there might be some garbage in the uninitialised buffers. The cryptoplumbing trick is to know whether it is worth the effort, and the answer is no: often, uninitialised buffers are set to zero by lower layers (the compiler, the OS, the hardware), and often they come from other deterministic places. So for the most part, there is no reasonable likelihood that they will be usefully random, and hence it takes us too close to von Neumann's famous state of sin.
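For illustration, the anti-pattern has roughly this shape. This is my own reconstruction of the idea, not the actual OpenSSL source:

    /* The anti-pattern, reconstructed for illustration -- NOT the actual
     * OpenSSL code. An uninitialised buffer is fed to the pool as if it
     * carried entropy.
     */
    #include <openssl/rand.h>

    void seed_from_garbage(void)
    {
        unsigned char buf[256];          /* never written to ...        */
        RAND_add(buf, sizeof buf, 0.0);  /* ... yet mixed into the pool */
    }

The third argument to RAND_add is the caller's estimate of the entropy contributed; even with an honest claim of zero, the read of an uninitialised buffer is exactly the sort of thing tools like Valgrind complain about, which is where this story goes next.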

Secondly, to take this next-to-zero source and mix it into the good OS source in a complex, undocumented fashion is not a good idea. Complexity is the enemy of security. It is for this reason that people study designs like Yarrow and Fortuna, and implement general-purpose PRNGs very carefully, in a good solid fashion, in clear APIs and files, with "danger, danger" written all over them. We want people to be suspicious, because the very idea is suspicious.

Next. Cryptoplumbing of necessity involves lots of errors, fixes, and patches. So bug-reporting channels are very important, and apparently this one was used: the Debian team "found" the "bug" with an analysis tool called Valgrind, and duly reported it up to OpenSSL, but the handover was muffed. Let's skip the fingerpointing here; the reason it was muffed was that it wasn't obvious what was going on. And the reason it wasn't obvious, it seems, is that the code was too clever for its own good. It tripped up Valgrind (and Purify), it tripped up the Debian programmers, and the fix did not alert the OpenSSL programmers. Complexity is our enemy, always, in security code.

So what in summary would you do as a risk manager?

  1. Listen for signs of complexity and cuteness. Clobber them when you see them.
  2. Know which areas of crypto are very hard, and which are "simple". Basically, public keys and random generation are both devilish areas. Block ciphers and hashes are not because they can be properly tested. Tune your alarm bells for sensitivity to the hard areas.
  3. If your team has to do something tricky (and we know that randomness is very tricky), then encourage them to do it clearly and openly. Firewall it into its own area: create a special interface, put it in a separate file, and paint "danger, danger" all over it (see the sketch after this list). KISS, which means Keep the Interface Stupidly Simple.
  4. If you are dealing in high-security areas, remember that only application security is good enough. Relying on other layers to secure you is only good for medium-level security, as you are susceptible to divide and conquer.
  5. Do not distribute your own fixes to someone else's distro. Do an application work-around, and notify upstream. (Or, consider 4. above)
  6. These problems will always occur. Tech breaks, get used to it. Hence, a good high-security design always considers what happens when each component fails in its promise. Defence in depth. Systems that fail catastrophically with the failure of one component aren't good systems.
  7. Teach your team to work on the problem, not the people. Discourage fingerpointing; shifting the blame is part of the problem, not the solution. Everyone involved is likely smart, so the issues are likely complex, not superficial (if it was that easy, we would have done it).
  8. Do not believe in your own superiority. Do not believe in the superiority of others. Worry about people on your team who believe in their own superiority. Get help and work it together. Take your best shot, and then ...
  9. If you've made a mistake, own it. That helps others to concentrate on the problem. A lot. In fact, it even helps to falsely own other people's problems, because the important thing is the result.
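As a sketch of what item 3 might look like in practice, here is a hypothetical header; the names and layout are my invention, not any particular library's:

    /* danger_rand.h -- a hypothetical illustration of item 3.
     * DANGER, DANGER: all randomness for the application comes through
     * this one stupidly simple interface, in this one file.
     */
    #ifndef DANGER_RAND_H
    #define DANGER_RAND_H

    #include <stddef.h>

    /* Fill buf with len bytes of cryptographic randomness.
     * Returns 0 on success, -1 if the generator cannot vouch for its
     * seeding. Callers MUST check the return value.
     */
    int danger_rand_bytes(unsigned char *buf, size_t len);

    #endif /* DANGER_RAND_H */

One function, in one file, with the warnings painted on: reviewers know exactly where to point their suspicion, and callers cannot get it subtly wrong.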

[1] A practical example is what Zooko and I did in SDP1 with the initialisation vector (IV). We needed different inputs (not random), so we added a counter, the time *and* some random from the OS. This is because we were thinking like developers, and we knew that it was possible for all three to fail in multiple ways. In essence we gambled that at least one of them would work, and if all three failed together, then the user deserved to hang.
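In code, that belt-and-braces construction might look like the sketch below. The 16-byte layout and field sizes are my illustrative assumptions, not SDP1's actual format:

    /* Sketch of the three-source IV: counter + time + OS random.
     * If any one source fails, the other two still differentiate the
     * IVs. Layout is illustrative, not SDP1's actual wire format.
     */
    #include <stdint.h>
    #include <string.h>
    #include <time.h>
    #include <openssl/rand.h>

    static uint64_t iv_counter;

    void make_iv(unsigned char iv[16])
    {
        uint64_t ctr = ++iv_counter;          /* unique per process */
        uint32_t now = (uint32_t)time(NULL);  /* unique per restart */
        unsigned char rnd[4];

        if (RAND_bytes(rnd, sizeof rnd) != 1) /* OS/library randomness */
            memset(rnd, 0, sizeof rnd);       /* degrade gracefully:
                                                 counter+time still differ */
        memcpy(iv,      &ctr, sizeof ctr);
        memcpy(iv + 8,  &now, sizeof now);
        memcpy(iv + 12, rnd,  sizeof rnd);
    }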

Posted by iang at 05:57 AM | Comments (8) | TrackBack

May 10, 2008

The Italian Job: highlights the gap between indirect and direct damage

If you've been following the story of the Internet and Information Security, by now you will have worked out that there are two common classes of damage done when data is breached: the direct damage to the individual victims, and the scandal damage to the victim organisation when the media get hold of it. From the Economist:


... Italians had learnt, to their varying dismay, amusement and fascination, that—without warning or consultation with the data-protection authority—the tax authorities had put all 38.5m tax returns for 2005 up on the internet. The site was promptly jammed by the volume of hits. Before being blacked out at the insistence of data protectors, vast amounts of data were downloaded, posted to other sites or, as eBay found, burned on to disks.

The uproar in families and workplaces caused by the revelation of people's incomes (or, rather, declared incomes) can only be guessed at. A society aristocrat, returning from St Tropez, found himself explaining to the media how he financed a gilded lifestyle on earnings of just €32,043 ($47,423). He said he had generous friends.

...Vincenzo Visco, who was responsible for stamping out tax dodging, said it promoted “transparency and democracy”. Since the 1970s, tax returns have been sent to town halls where they can be seen by the public (which is how incomes of public figures reach the media). Officials say blandly that they were merely following government guidelines to encourage the use of the internet as a means of communication.

The data-protection authority disagreed. On May 6th it ruled that releasing tax returns into cyberspace was “illicit”, and qualitatively different from making them available in paper form. It could lead to the preparation of lists containing falsified data and meant the information would remain available for longer than the 12 months fixed by law.

The affair may not end there. A prosecutor is investigating if the law has been broken. And a consumer association is seeking damages. It suggests €520 per taxpayer would be appropriate compensation for the unsolicited exposure.

An insight of the 'silver bullets' approach to the market is that these damages should be considered separately, not lumped together. The one that is the biggest cost will dominate the solution, and if the two damages suggest opposing solutions, the result may be at the expense of the weaker side.

What makes Information Security so difficult is that the public scandal part of the damage (the indirect component) is generally the greater damage. Hence, breaches have been classically hushed up, and the direct damages to the consumers are untreated. In this market, then, the driving force is avoiding the scandal, which not only means that direct damage to the consumer is ignored, it is likely made worse.

We then see more evidence of the (rare) wisdom of breach disclosure laws, even if, in this case, the breach was a disclosure by intention. The legal action mentioned above puts a number on the direct damage to the consumer victim. We may not agree with €520, but it's a number and a starting position that is only possible because the breach is fully out in the open.

Those, then, who oppose stronger breach laws, or wish to insert various weasel words such as "you're cool to keep it hush-hush if you encrypted the data with ROT13", should ask themselves this: is it reasonable to reduce the indirect damage of adverse publicity at the expense of making the direct damage to the consumer even worse?

Lots of discussion, etc etc blah blah. My thought is this: we need to get ourselves to a point, as a society, where we do not turn the organisation into more of a secondary victim than it already is through its breach. We need to not make matters worse; we should work to remove the incentives to secrecy, rather than counterbalancing them with opposing and negative incentives such as heavy-handed data protection regulators. If there is any vestige of professionalism in the industry, then this is one way to show it: let's close down the paparazzi school of infosec, and encourage and reward companies for sharing their breaches in the open.

Posted by iang at 10:24 AM | Comments (2) | TrackBack

May 07, 2008

H2.2 KISS -- Keep the Interface Stupidly Simple

Here's another in my little list of hypotheses, this one from the set known as "Divide and Conquer". The delay in publishing them is partly the cost of preparing the HTML, and partly that some of them really need a good editorial whipping. Today's is sparked by this comment made by Dan, at least as attributed by Jon:

... you can make a secure system either by making it so simple you know it's secure, or so complex that no one can find an exploit.
allegedly Dan Geer, as reported by Jon Callas.

#2.2 KISS

In the military, KISS stands for keep it simple, stupid, because soldiers are dumb by design (it is very hard to be smart when being shot at). Crypto often borrows from the military, but we always need to change things a bit. In this case, KISS stands for

Keep the Interface Stupidly Simple.

When you divide your protocol into parts, KISS tells you that even an ordinary programmer should find every part easy to understand. This is more than fortuitous, it is intentional, because the working programmer is your target audience. Remember that your hackers also have to understand all the other elements of good programming and systems building, so their attention span will be limited to around 1% of their knowledge space.

A good example of this is a block cipher. A smart programmer can take a paper definition of a secret-key cipher or a hash algorithm and code it up in a weekend, and know he has got it right.

Why is this? Three reasons:

  • The interface is simple enough. There's a secret key, there's a block of input, and there's a block of output. Each is generally defined as a series of bits, 128 being common these days. What else is there to say?
  • There is a set of numbers at the bottom of that paper description that provides test inputs and outputs. You can use those numbers to show basic correctness (see the sketch after this list).
  • The characteristic of cryptography algorithms is that they are designed to screw up fast and furiously. Wrong numbers will 'avalanche' through the internals and break everything. Which means, to your good fortune, you only need a very small amount of testing to show you got it very right.
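For instance, the published vector in FIPS-197 Appendix C.1 lets a programmer check an AES-128 implementation in a dozen lines of C; here it is run against OpenSSL's implementation, as a sketch:

    /* Check AES-128 against the FIPS-197 Appendix C.1 example vector.
     * Build with: cc aestest.c -lcrypto
     */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/aes.h>

    int main(void)
    {
        static const unsigned char key[16] = {
            0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,
            0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f };
        static const unsigned char plain[16] = {
            0x00,0x11,0x22,0x33,0x44,0x55,0x66,0x77,
            0x88,0x99,0xaa,0xbb,0xcc,0xdd,0xee,0xff };
        static const unsigned char expected[16] = {
            0x69,0xc4,0xe0,0xd8,0x6a,0x7b,0x04,0x30,
            0xd8,0xcd,0xb7,0x80,0x70,0xb4,0xc5,0x5a };
        unsigned char cipher[16];
        AES_KEY ek;

        AES_set_encrypt_key(key, 128, &ek);
        AES_encrypt(plain, cipher, &ek);
        puts(memcmp(cipher, expected, 16) == 0
             ? "got it right" : "got it WRONG");
        return 0;
    }

Thanks to the avalanche effect, a single matching block is strong evidence that the internals are right all the way through.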

An excellent protocol example of a simple interface is SSL. No matter what you think of the internals of the protocol or the security claims, the interface is designed to exactly mirror the interface to TCP. Hence the name, Secure Socket Layer. An inspired choice! The importance of this is primarily the easy understanding achieved by a large body of programmers who understand the metaphor of a stream.
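The mirroring shows up directly in code: once the handshake is done, SSL_read and SSL_write line up one-for-one with read and write on a plain socket. A sketch, using today's OpenSSL names, with all error handling and certificate checking elided:

    /* Sketch: the SSL calls mirror the plain-socket calls. The fd is
     * assumed to be an already-connected TCP socket; error handling
     * and certificate checking are elided.
     */
    #include <openssl/ssl.h>

    void hello_over_tls(int fd)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        SSL *ssl = SSL_new(ctx);
        char buf[256];

        SSL_set_fd(ssl, fd);             /* wrap the existing socket    */
        SSL_connect(ssl);                /* handshake                   */

        SSL_write(ssl, "hello\n", 6);    /* cf. write(fd, "hello\n", 6) */
        SSL_read(ssl, buf, sizeof buf);  /* cf. read(fd, buf, ...)      */

        SSL_shutdown(ssl);
        SSL_free(ssl);
        SSL_CTX_free(ctx);
    }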

Posted by iang at 08:39 AM | Comments (1) | TrackBack

May 05, 2008

USD reserve currency shift -- some numbers

Some figures on the decline of the dollar as reserve currency:

  • emerging-market countries shrank their dollar holdings from 71% (1997) to 61% (2007), while growing their foreign exchange holdings over fourfold.
  • the euro component of emerging market reserves grew from 19% (2000) to 28% (4th Q 2007).
  • Japan's exports are now often invoiced in Yen: 34% (2001) to 40% (now). Back in 1971 it was almost all in dollars.
  • dollars held outside the US: from 1.83% (2002) down to 1.22% (2006), measured as a percentage of world trade.

For the Americans, this is the maths behind the current too-good-too-bad economy in the USA. For the economists, it is interesting because the monetary tool is not helpful when all these dollars flood back unwanted. For the financial cryptographers, it's a great time to be in the currency business. For the gold community, it's their decade!

Note that we have predicted this for some time, but these are fundamental shifts and they move so slowly that any discussion is fraught until the numbers come in.

2003, 2004, 2006, 2006

Posted by iang at 10:48 AM | Comments (1) | TrackBack