December 24, 2005
A new security metric?
I have a sort of draft paper on security metrics - things which I observe are positive in security projects. The idea is that I should be able to identify security projects, on the one hand, and on the other provide some useful tips on how to think past the press release. Another metric just leaped out and bit me from that same interview with Damien Miller:
Why did you increase the default size of new RSA/DSA keys generated by ssh-keygen from 1024 to 2048 bits?
Damien Miller: Firstly, increasing the default size of DSA keys was a mistake (my mistake, corrected in the next release) because unmodified DSA is limited by a 160-bit subgroup and SHA-1 hash, obviating most of the benefit of using a larger overall key length, and because we don't accept modified DSA variants with this restriction removed. There are some new DSA standards on the way that use larger subgroups and longer hashes, which we could use once they are standardized and included in OpenSSL.
We increased the default RSA keysize because of recommendations by the NESSIE project and others to use RSA keys of at least 1536 bits in length. Because host and user keys generated now will likely be in use for several years we picked a longer and more conservative key length. Also, 2048 is a nice round (binary) number.
Here it is again in bold:
Damien Miller: Firstly, increasing the default size of DSA keys was a mistake (my mistake, corrected in the next release) because [some crypto blah blah]
A mistake! Admitted in public! Without even a sense of embarrassment! If that's not a sign that security is more important than perception, then I don't know what is...
Still not convinced? When was the last time you ever heard anyone on the (opposing) PKI side admit a mistake?
Posted by iang at December 24, 2005 08:55 AM
Sure, that was an admission of mistake. But shouldn't one quantify a mistake as being bad/not-so-bad/for-the-better? It is not like the default DSA key size was set to 4 bits from 1024! It was a mistake which had no security implications other than maybe longer computational delays.
Not IMHO, because once you get into that game, you are drawn into avoiding the mistake.
My advice - admit the mistake and let others decide whether it was important or not.
Once the mistake is admitted, it then becomes *possible* to analyse and fix. But if the mistake is not admitted, then ... there is nothing wrong, nothing to analyse, nothing to fix...
(Bear in mind, though, that I preach rather than practise. It's much easier to admire and write about others' mistakes ;-)
What I was trying to convey was that the admission of mistake may have come about because the mistake was, after all, not detrimental in terms of security. If it had been a bigger blunder, would he have been equally forthcoming?
A good question! As we are all better off when these mistakes are admitted and dealt with in an open fashion, how then do we encourage a sense of open security, even when the mistakes are big blunders? Where saying "I did bad" isn't the kiss of death?
If you examine even this mistake critically, it's possible to make it look really bad, even though in this case no real harm was done. As a hypothetical example: DSA is nicely balanced and was designed not to be fiddled with. Cunningly, the designer recognised that adding in lots of flexibility to "improve" things was not a good idea, a hypothesis I call "the one true cipher suite". Now, people who've researched DSA know this. So if you didn't know that, what else don't you know? Shouldn't you be letting an expert do this?
And on and on....yadda yadda.
So it's really easy to take anything in security and point fingers at it. But this is ultimately wasteful and destructive as we actually want people to get in there and code up stuff - and make mistakes. Once the mistakes are cleaned out, we are left with good stuff. But if people are too afraid of making mistakes, then they do nothing, and we are left with ... nothing!
Of course, the whole idea with SSH is that it was done against the best advice of well-wishing, do-goody experts. So it is not so surprising that the spirit lives on. For that we are grateful!
Okay, let me use this forum to admit a mistake of mine, which I do not know how to correct:
In my pet project of web-based PGP encryption-decryption (pgp.epointsystem.org), I use iterated but not salted passphrase-to-key conversion for the decryption sub-key. This was supposed to thwart dictionary attacks, but it doesn't, because the dictionary can be calculated with all the iterations once and then reused many times (a similar mistake was made in Bruce Schneier's Password Safe).
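A minimal sketch of the difference (hypothetical code, not the pgp.epointsystem.org implementation; the iteration count and function names are made up): with an unsalted iterated conversion, the same passphrase always yields the same key, so an attacker can run the iterations over a dictionary once and reuse the table against every user. A per-key salt forces that work to be redone per target.

```python
import hashlib

ITERATIONS = 10_000  # illustrative count, not the real parameter

def unsalted_kdf(passphrase: bytes) -> bytes:
    # Iterated but unsalted: a precomputed dictionary of
    # (passphrase -> key) pairs works against everyone.
    key = passphrase
    for _ in range(ITERATIONS):
        key = hashlib.sha256(key).digest()
    return key

def salted_kdf(passphrase: bytes, salt: bytes) -> bytes:
    # Salted (e.g. with the main key's ID): the dictionary must be
    # recomputed for each salt, so precomputation no longer amortises.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, ITERATIONS)

# Same passphrase, two users: unsalted keys collide, salted keys differ.
k1 = unsalted_kdf(b"correct horse")
k2 = unsalted_kdf(b"correct horse")
s1 = salted_kdf(b"correct horse", b"keyid-aaaa")
s2 = salted_kdf(b"correct horse", b"keyid-bbbb")
print(k1 == k2, s1 == s2)  # -> True False
```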
Now, the main key's ID would be an excellent salt value, but the problem is that the system is already deployed and used by a number of people. There are several ways out:
a) recommend that people salt their passphrases themselves (by including the main key ID in the passphrase) and avoid weak passphrases in general or
b) add automatic salting and revert to the unsalted decryption, if decryption with salt fails or
c) add automatic salting and ask people to revoke old unsalted sub-keys, replacing them with salted ones.
Perhaps surprisingly, I am leaning towards a), because it allows the most flexibility (I know, Ian does not like flexible security). How would you deal with such a mistake?
How deliciously scary - a bug that might cause attacks!
The essence of the problem is that the keys are already out there and the servers and users are coordinating based on the iterated conversion? So if you just switch over to some other method, all those keys break? Do I have that right?
In terms of your choices above, I'd plump for b).
I would vote down a) and c) on the basis that they would lose too many users. As I understand the system, the goal is to give people access using passwords - your intention is to let them use passwords because that's what they find easiest. So you are already stretching convenience as far as it will go. In case a) you are asking users to do something complicated, and in c) you are asking users to re-do their initialisation process. Both of these are the sorts of costs you were trying to eliminate from the system.
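Option b) can be sketched as a try-then-fall-back decryption path (all names here are hypothetical, standing in for whatever the real site uses): new sub-keys use the salted derivation, while old unsalted sub-keys keep working without users noticing anything.

```python
def decrypt_with_upgrade(ciphertext, passphrase, key_id,
                         try_decrypt, salted_kdf, unsalted_kdf):
    # Option b) as a sketch: try the salted derivation first; if that
    # fails, fall back to the legacy unsalted derivation so existing
    # sub-keys keep working. try_decrypt is assumed to return the
    # plaintext on success and None on failure.
    plaintext = try_decrypt(ciphertext, salted_kdf(passphrase, key_id))
    if plaintext is not None:
        return plaintext, "salted"
    plaintext = try_decrypt(ciphertext, unsalted_kdf(passphrase))
    if plaintext is not None:
        return plaintext, "legacy-unsalted"
    raise ValueError("wrong passphrase")
```

The second attempt only runs when the salted attempt fails, so the hack costs nothing on new keys and quietly tolerates old ones.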
Mind you, you know the system best - I'm speculating on what I thought were the goals of the password-public-key conversion. (Something I'm definitely looking forward to adding if we can get it into a "toolkit" posture...)
> (I know, Ian does not like flexible security).
:-) This is my "one true cipher suite" mantra. Yes, I wouldn't fiddle around at the margins. Which is essentially what choice b) is, putting a hack in to allow a flexible upgrade of the protocol.
When I design systems these days, I'm preferring that the initial negotiation says "I'm using #1" and that's it. If there is a problem with #1, then I prefer to wrap all the problems into a #2, hopefully after many years.
So I prefer to run with less than perfect crypto for as long as possible. But to do that, the protocol has to allow a complete suite negotiation.
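The "one true cipher suite" idea can be shown in miniature (suite contents and names here are purely illustrative): negotiation is a single suite number, not a menu of interchangeable ciphers, MACs and key types.

```python
# Each numbered suite is a fixed, non-negotiable bundle of algorithms.
SUITES = {
    1: ("AES-128-CBC", "HMAC-SHA1", "RSA-2048"),    # the original suite
    2: ("AES-256-GCM", "HMAC-SHA256", "RSA-3072"),  # a wholesale replacement
}

def negotiate(ours: int, theirs: int):
    # No mix-and-match: either both sides speak the same numbered
    # suite, or the connection fails outright.
    if ours != theirs:
        raise ValueError(f"no common suite: {ours} vs {theirs}")
    return SUITES[ours]

print(negotiate(1, 1))  # both sides simply say "I'm using #1"
```

All the problems found in #1 get fixed together in #2, rather than patched piecemeal at the margins.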
Oddly enough, even if you do build some flexibility into the protocol, it can still take years to migrate people within the flexible regime, because of all the code that is out there and working.
I left out one possibility:
d) do nothing.
After all, a passphrase that is vulnerable to dictionary attack is vulnerable. Is any effort on my part justified at this point? I am not sure.
I will think more about it before taking any action. But your comment about the whole point of the exercise being to annoy as little as possible is a very valid one.
> d) do nothing.
I should have spotted that!
> After all, a passphrase that is vulnerable to dictionary attack is vulnerable. Is any effort on my part justified at this point? I am not sure.
There is only so far you can go, yes. I think the effort is somewhat justified in that it means a successful dictionary attack on your site isn't then exportable to other places where the same password is used, and our users are in the habit of using the same password everywhere. Which I don't think they are going to stop doing...
> I will think more about it before taking any action. But your comment about the whole point of the exercise being to annoy as little as possible is a very valid one.
One could suggest that we only get away with this many-eyeballs approach to the problem at hand because your site is not under attack. I wonder if we would modify our posture if you had an aggressive dictionary attacker lurking out there. I like to rationalise that we are still better off, as the attacker can work out the issues for himself anyway.