Firefox 3 has reworked the ability to manage your certificates. After some thought back at Mozo central, they've introduced a clearer method for dealing with those sites that you know are good. For those who are new to this: there are those sites that you know are good, and there are those sites where others tell you they are good. People have spent decades deciding which of these spaces owns the term "trust", so I won't bore you with that sad and sorry tale today.
Meanwhile, I've worked my way through the process and added the FC blog as an Exception. The process is OK: the language is better than before, as it now says that the site is not trusted. Before, it said "*you* don't trust this site!", which was so blatantly wrong as to be rude to me and confusing to everyone else. Now it just fudges the issue by saying the site is untrusted, without indicating by whom. Most people will realise this is a meaningless statement, as trust comes from a person; it isn't something that you can get from an advert.
There are multiple clicks, possibly intended to indicate that you should really know what you are doing. I don't think that will help much. What would help more is a colour. So far, it is white, indicating that ... well, nothing. So there is confusion between sites you trust and those that have nothing; both are cast into the nothing bucket.
However, this will all come in time. There is no doubt that KCM, or Key Continuity Management, is the way to go, because users need to work with their sites, and when their communities install and use certs, that's it. KCM is needed to let the rest of the world outside Main Street, USA use the thing called SSL and secure browsing. So it will come in time, as the people at Firefox work out how to share the code between the two models.
One thing, however: I did this around a week ago, carefully following the exception process. Then, just a moment before I started this post, Firefox suddenly lost its memory! As I was saving another blog post, it decided to blow away the https site. Suddenly, we were back to the untrusted regime, and I had to do the whole "I trust, I know I trust, I trust I know, I know I trust more than any blah blah trust blah!" thing all over again. And then there was no post left ... luckily it was a minor change and the original was saved.
This could be an MITM. Oh, that I would be so important... oh, that someone would want to sneak into my editorial control over the influential Financial Cryptography blog and change the world-view of the thousands of faithful readers... well, fantasies aside, this isn't likely to be an MITM.
It could be a sysadm change, but the cert looks the same, although there is little info there to check (OK, this is the fault of the servers, because the only way to tell is to go to the server, and ... it doesn't give you any info worth talking about. SSH servers have the same problem.) And the sysadm would have told me.
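(For what it's worth, checking that "the cert looks the same" comes down to comparing fingerprints. Here's a rough Python sketch of pulling one to set against a note taken earlier; the host name is just a placeholder, not a description of this blog's setup.)

```python
import hashlib
import ssl

def cert_fingerprint(host, port=443):
    # Fetch the site's certificate and hash it, so it can be compared
    # against a fingerprint obtained some other way (e.g. from the sysadm).
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

print(cert_fingerprint("www.example.com"))   # placeholder host
```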
So Occam's razor suggests this is a bug in Firefox. Well, we'll see. I cannot complain too loudly about that, as this is RC1, and Release Candidates might have bugs. This is a big change to the way Firefox works, so bugs are expected. One just bit me. As someone once said, the pioneers are the ones with the arrows in their backs.
Ben Laurie blogs that downstream vendors like Debian shouldn't interfere with sensitive code like OpenSSL, because they might get it wrong... And, in this case, they did, and now all Debian + Ubuntu distros have 2-3 years of compromised keys. 1, 2.
Further analysis shows, however, that the failings are multiple, at several levels, and shared all around. As we identified in 'silver bullets', fingerpointing is part of the problem, not the solution, so let's work the problem as professionals and avoid the blame game.
First, the tech problem. OpenSSL has a trick in it that mixes uninitialised memory in with the randomness generated by the OS's formal generator. The standard idea here is that it is good practice to mix different sources of randomness into your own pool. Roughly following the designs from Schneier and Ferguson (Yarrow and Fortuna), modern operating systems take several random things like disk drive activity and net activity, mix the measurements into one pool, and then run it through a fast hash to filter it.
What is good practice for the OS is good practice for the application. The reason is that in the application, we do not know what the lower layers are doing, and especially we don't really know whether they are failing or not. This is OK for an unimportant or self-checking thing like reading a directory entry -- it either works or it doesn't -- but it is bad for security programming. Especially, it is bad for those parts where we cannot easily test the result. And randomness is that special sort of crypto that is very, very difficult to test for, because by definition any number can be random. Hence, in high-security programming, we don't trust the randomness of lower layers, and we mix our own [1].
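To make the principle concrete, here is a toy sketch in Python of an application folding several independent sources into its own pool through a hash, in the spirit of Yarrow/Fortuna. This is not OpenSSL's code, just an illustration of the idea; the class and the choice of extra sources are mine.

```python
import hashlib
import os
import time

class Pool:
    def __init__(self):
        self._state = b"\x00" * 32

    def mix(self, material):
        # New state = H(old state || new material): a weak or failed
        # source cannot subtract from what is already in the pool.
        self._state = hashlib.sha256(self._state + material).digest()

    def read(self, n=32):
        out = hashlib.sha256(b"out" + self._state).digest()[:n]
        self.mix(out)   # stir the pool forward after each read
        return out

pool = Pool()
pool.mix(os.urandom(32))                   # the OS's formal generator
pool.mix(str(time.time_ns()).encode())     # a timing measurement
pool.mix(str(os.getpid()).encode())        # something process-specific
print(pool.read().hex())
```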
OpenSSL does this, which is good, but may be doing it poorly. What it (apparently) does is mix in uninitialised buffers with the OS-supplied randoms, and little else (those who can read the code might confirm this). This is worth something, because there might be some garbage in the uninitialised buffers. The cryptoplumbing trick is to know whether it is worth the effort, and the answer is no: often, uninitialised buffers are set to zero by lower layers (the compiler, the OS, the hardware), and often they come from other deterministic places. So for the most part, there is no reasonable likelihood that they will be usefully random, and hence it takes us too close to von Neumann's famous state of sin.
Secondly, to take this next-to-zero source and mix it into the good OS source in a complex, undocumented fashion is not a good idea. Complexity is the enemy of security. It is for this reason that people study designs like Yarrow and Fortuna and implement general-purpose PRNGs very carefully, in a good solid fashion, with clear APIs and files, with "danger, danger" written all over them. We want people to be suspicious, because the very idea is suspicious.
Next: cryptoplumbing, by its nature, involves lots of errors and fixes and patches. So bug-reporting channels are very important, and apparently this one was used. The Debian team "found" the "bug" with an analysis tool called Valgrind. It was duly reported up to OpenSSL, but the handover was muffed. Let's skip the fingerpointing here; the reason it was muffed was that it wasn't obvious what was going on. And the reason it wasn't obvious, it seems, is that the code was too clever for its own good. It tripped up Valgrind (and Purify), it tripped up the Debian programmers, and the fix did not alert the OpenSSL programmers. Complexity is our enemy, always, in security code.
So what in summary would you do as a risk manager?
[1] A practical example is what Zooko and I did in SDP1 with the initialising vector (IV). We needed different inputs (not random), so we added a counter, the time, *and* some random from the OS. This is because we were thinking like developers, and we knew that it was possible for all three to fail in multiple ways. In essence, we gambled that at least one of them would work, and if all three failed together then the user deserved to hang.
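As a sketch only (the real SDP1 layout is defined in the spec and differs in detail; the function and field sizes here are my own for illustration), the shape of the idea is something like this:

```python
import os
import struct
import time

_counter = 0

def next_iv():
    # Counter, time, and OS randomness concatenated: the IV only has to
    # be different each time, and all three sources must fail together
    # before two packets could share one.
    global _counter
    _counter += 1
    return (struct.pack(">Q", _counter) +          # a counter
            struct.pack(">Q", time.time_ns()) +    # the time
            os.urandom(16))                        # some random from the OS
```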
If you've been following the story of the Internet and Information Security, by now you will have worked out that there are two common classes of damage that are done when data is breached: The direct damage to the individual victims and the scandal damage to the organisation victim when the media get hold of it. From the Economist:
... Italians had learnt, to their varying dismay, amusement and fascination, that—without warning or consultation with the data-protection authority—the tax authorities had put all 38.5m tax returns for 2005 up on the internet. The site was promptly jammed by the volume of hits. Before being blacked out at the insistence of data protectors, vast amounts of data were downloaded, posted to other sites or, as eBay found, burned on to disks.
The uproar in families and workplaces caused by the revelation of people's incomes (or, rather, declared incomes) can only be guessed at. A society aristocrat, returning from St Tropez, found himself explaining to the media how he financed a gilded lifestyle on earnings of just €32,043 ($47,423). He said he had generous friends.
...Vincenzo Visco, who was responsible for stamping out tax dodging, said it promoted “transparency and democracy”. Since the 1970s, tax returns have been sent to town halls where they can be seen by the public (which is how incomes of public figures reach the media). Officials say blandly that they were merely following government guidelines to encourage the use of the internet as a means of communication.
The data-protection authority disagreed. On May 6th it ruled that releasing tax returns into cyberspace was “illicit”, and qualitatively different from making them available in paper form. It could lead to the preparation of lists containing falsified data and meant the information would remain available for longer than the 12 months fixed by law.
The affair may not end there. A prosecutor is investigating if the law has been broken. And a consumer association is seeking damages. It suggests €520 per taxpayer would be appropriate compensation for the unsolicited exposure.
An insight of the 'silver bullets' approach to the market is that these damages should be considered separately, not lumped together. The one that is the biggest cost will dominate the solution, and if the two damages suggest opposing solutions, the result may be at the expense of the weaker side.
What makes Information Security so difficult is that the public scandal part of the damage (the indirect component) is generally the greater damage. Hence, breaches have classically been hushed up, and the direct damages to the consumers go untreated. In this market, then, the driving force is avoiding the scandal, which means not only that direct damage to the consumer is ignored, but that it is likely made worse.
We then see more evidence of the (rare) wisdom of breach disclosure laws, even if, in this case, the breach was a disclosure by intention. The legal action mentioned above puts a number on the direct damage to the consumer victim. We may not agree with €520, but it's a number and a starting position that is only possible because the breach is fully out in the open.
Those, then, who oppose stronger breach laws, or who wish to insert weasel words such as "you're cool to keep it hush-hush if you encrypted the data with ROT13", should ask themselves this: is it reasonable to reduce the indirect damage of adverse publicity at the expense of making the direct damages to the consumer even worse?
Lots of discussion, etc etc, blah blah. My thought is this: we need to get ourselves to a point, as a society, where we do not turn the organisation into more of a secondary victim than it already is through its breach. We need to not make matters worse; we should work to remove the incentives to secrecy, rather than counterbalancing them with opposing and negative incentives such as heavy-handed data protection regulators. If there is any vestige of professionalism in the industry, then this is one way to show it: let's close down the paparazzi school of infosec and encourage and reward companies for sharing their breaches in the open.
Here's another in my little list of hypotheses, this one from the set known as "Divide and Conquer." The delay in publishing them is partly the cost of preparing the HTML and partly that some of them really need a good editorial whipping. Today's is sparked by this comment made by Dan, at least as attributed by Jon:
... you can make a secure system either by making it so simple you know it's secure, or so complex that no one can find an exploit.
allegedly Dan Geer, as reported by Jon Callas.
In the military, KISS stands for "keep it simple, stupid", because soldiers are dumb by design (it is very hard to be smart when being shot at). Crypto often borrows from the military, but we always need to change things a bit. In this case, KISS stands for
Keep the Interface Stupidly Simple.
When you divide your protocol into parts, KISS tells you that even a programmer should find every part easy to understand. This is more than fortuitous, it is intentional, because the body programmer is your target audience. Remember that your hackers also have to understand all the other elements of good programming and systems building, so their attention span will be limited to around 1% of their knowledge space.
A good example of this is a block cipher. A smart programmer can take a paper definition of a secret-key cipher or a hash algorithm and code it up in a weekend, and know he has got it right.
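The clincher is the published test vectors: code it up, feed them in, and the answer is either right or it isn't. A trivial illustration in Python, using a hash rather than a cipher, but the principle is the same (the vector is the standard FIPS one for SHA-256):

```python
import hashlib

# The published test vector for SHA-256("abc"); if an implementation
# reproduces it, the programmer knows he has got it right.
expected = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
assert hashlib.sha256(b"abc").hexdigest() == expected
print("matches the published test vector")
```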
Why is this? Three reasons:
An excellent protocol example of a simple interface is SSL. No matter what you think of the internals of the protocol or the security claims, the interface is designed to exactly mirror the interface to TCP. Hence the name, Secure Socket Layer. An inspired choice! The importance of this is primarily the easy understanding achieved by a large body of programmers who understand the metaphor of a stream.
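You can see the effect in any modern socket library. As a small Python sketch (the host is a placeholder), the secured socket is driven with exactly the same stream metaphor as the plain one:

```python
import socket
import ssl

host = "www.example.com"   # placeholder host

context = ssl.create_default_context()
with socket.create_connection((host, 443)) as raw_sock:
    # Wrap the TCP socket, then read and write it like any other stream.
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        tls_sock.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        print(tls_sock.recv(200))
```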
Some figures on the decline of the dollar as reserve currency:
For the Americans, this is the maths behind the current too-good-too-bad economy in the USA. For the economists, it is interesting because the monetary tool is not helpful when all these dollars flood back unwanted. For the financial cryptographers, it's a great time to be in the currency business. For the gold community, it's their decade!
Note that we have predicted this for some time, but these are fundamental shifts and they move so slowly that any discussion is fraught until the numbers come in.