It may work and be adopted by others. I hope it solves the issues that were presented.
Posted by Jimbo at November 27, 2004 09:07 PM

As an aside, in the mid-80s I worked on a daughter card that did something similar; it needed dedicated hardware to be able to sustain several Mbytes/sec, with the possibility of changing the key on successive packets. Along the way I got into a disagreement with the inventor of DES; it took three months to win the argument, and it was something of a hollow victory ... since I was then told that we could produce any number of cards, but they all had to be shipped to an address in Maryland. That was just under 20 years ago; some things have changed since then.
Posted by Anne & Lynn Wheeler at November 29, 2004 06:25 PM

(copied from elsewhere)
It looks pretty good. One thing I'd point out is that there are two natural key sizes for HMAC-SHA1: as big as you need for security, or 512 bits, the SHA-1 block size. 160 bits is probably neither of these. I'd suggest 128 bits. AFAIK there is no square root attack against a MAC key. But hey, maybe if we studied some economics like the professionals do it would shed light on the issue.
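To make the suggestion concrete, a minimal Python sketch (the key and message are invented for illustration):

    import hashlib
    import hmac
    import os

    # HMAC-SHA1 zero-pads any key shorter than its 512-bit block size,
    # so a 128-bit key is a clean choice when it meets the security target.
    key = os.urandom(16)  # 128 bits, as suggested above
    tag = hmac.new(key, b"datagram body", hashlib.sha1).digest()  # 160-bit tag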
I also think you are a little fuzzy about the role of the MAC. You suggest that it is there only for DOS, and provides no protection against replay, ordering or tampering attacks. But in the context of datagrams, ordering is not guaranteed, so there is no such thing as an ordering attack. And the MAC should indeed defend against tampering attacks. One thing you didn't mention is that this of course fails to defend against message deletion, but again non-deletion (i.e. reliable delivery) is typically not a property of a datagram channel. The only real attack this fails to defend against, and which doesn't normally happen anyway on a datagram channel, is replay. That would be hard to defend against on a datagram channel with arbitrary out-of-order packet delivery and deletion, so it is properly kicked upstairs to the application, which presumably has its own ways of dealing with these problems.
I don't like the suggestion to pad with random bytes; that is expensive and unnecessary. Granted it's just a suggestion but I don't see the point. Maybe you're afraid of encrypting lots of zeros; but don't be. Trust your cipher. You certainly wouldn't advise your clients (callers) to structure their payload in some convoluted and unnatural way, to avoid blocks of zeros and other patterns in their plaintext, I hope! The same applies to your pad.
Posted by cypherpunk at December 3, 2004 05:54 AM

Hey, thanks for those comments.
MAC size: OK! Thinking about it some more, this could be one area where the size is made optional, but we've steered away from options of any form.
MAC purpose: You are right in your intuition that a lot of the protection is "kicked upstairs", and that's the impression I want to give. That is to some extent biased by our experience in financial protocols, where we can't rely on lower layers to "just get it right." Yes, the MAC will protect against tampering, but in the scheme of an overall application design, it might well be easier to ignore that factor.
Pad randomness: The idea of padding up front with random bytes is designed to make a unique IV. It's all too easy to say "make it unique" and much more difficult to code. We've taken the approach that we'll provide a Pad that is overkill but unique even when the protocol gets installed on misconfigured machines; in this sense we perceive that the risk of misconfigured machines is more costly than the extra randomness required.
Posted by Iang at December 3, 2004 07:08 AM

Let me back up a little bit on pad randomness. You are doing the IV in an unconventional manner. The normal way would be to just have a random 16 byte IV at the front of the message. Then you could put block padding in at the beginning or end, it doesn't really matter. But what you do instead is to make an IV by taking the Context IV (CIV), which is random but constant over the duration of the message stream, and xoring it with the DIV, the first 16 bytes of the pad. Then you require that the DIV be unique, which therefore guarantees that the IV = CIV xor DIV is unique during the current SA.
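In code, the derivation as I read it comes down to this (the names are mine, not the spec's):

    import os

    BLOCK = 16  # AES block size in bytes

    def message_iv(civ: bytes, pad: bytes) -> bytes:
        # IV = CIV xor DIV, where the DIV is the first BLOCK bytes of the
        # per-message pad; unique DIVs give unique IVs under the current SA.
        div = pad[:BLOCK]
        return bytes(c ^ d for c, d in zip(civ, div))

    civ = os.urandom(BLOCK)  # random, fixed for the life of the context
    pad = os.urandom(BLOCK)  # fresh per message; required to be unique
    iv = message_iv(civ, pad)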
I didn't see much analysis of this nonstandard construction so I may have jumped to some conclusions. Let me ask some questions instead.
1. Why do it like this, instead of a random 16 byte IV?
2. Why use the CIV at all, if the DIV is guaranteed to be unique? The first thing CBC does is encrypt the IV so there is no problem if the DIV has a simple structure like a counter, as long as it is in fact unique.
Part of my confusion is that you refer to the IV as being unguessable. Do you view that as a security requirement for CBC mode? If so, you'd better crack open that economics textbook.
I wanted to make one other point regarding DOS. I don't see the bulk encryption as being the place for DOS resistance. Problems are more likely to occur during the handshake phase. That's going to involve setting up state and probably some PK operations, which will be much more costly than AES and HMAC.
I don't even really understand what kind of DOS resistance your construction provides. What are we talking about here, clients which swamp a server by sending high-volume streams? And if so, what in your protocol stops this, the MAC? What if a client computes one packet with a MAC and sends a billion copies? That won't be detected at this layer. Or maybe they compute a thousand different packets with valid MACs and send them over and over again. Either way you are still adding the cost of a decryption and MAC verification. That's not much, but then, as I said, DOS using bulk packets probably isn't that big a threat anyway. I just don't see what problem you are trying to solve here.
Let me turn it around and ask, suppose you didn't have a MAC, but everything else was the same. Do you think the protocol would now suddenly be more vulnerable to DOS? How?
I'd say this would be a spectacularly bad idea for cryptographic reasons, nothing to do with DOS. It allows message modification attacks, which you're protected against now. Message modification can allow for reaction attacks against CBC, where a message gets modified and the recipient leaks information about the bad decryption it received. The MAC should be there for these valid cryptographic reasons. That makes more sense to me than the DOS concerns you appear to emphasize.
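To see why that matters, here is a quick demonstration of CBC malleability in Python, using the third-party cryptography package (the framing and messages are invented, not the protocol's):

    import os

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    # Flipping a bit in ciphertext block i garbles plaintext block i but
    # flips exactly the same bit in plaintext block i+1 -- so without a
    # MAC an attacker can make targeted edits to the decrypted message.
    key, iv = os.urandom(16), os.urandom(16)
    pt = b"block one here!!" + b"pay $100 to Bob."  # two full 16-byte blocks
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ct = bytearray(enc.update(pt) + enc.finalize())
    ct[4] ^= 0x01  # tamper with one bit in the first ciphertext block
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    out = dec.update(bytes(ct)) + dec.finalize()
    assert out[16 + 4] == pt[16 + 4] ^ 0x01  # same bit flipped in block 2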
Posted by Cypherpunk at December 3, 2004 04:31 PM

Sorry, I wrote too fast; my question 2 is unclear. Normal CBC XORs the IV directly into the first block of plaintext. In your method you effectively do CIV xor DIV and encrypt that, then XOR into the next plaintext block, which will be composed of 0-15 bytes of padding and 1-16 bytes of payload. What I meant was, do you really need the CIV if you're going to do this, or could you in effect set it to 0 and just use Encrypt(DIV) as the IV.
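For reference, the effective first two blocks as I understand them, written out with raw AES block calls from the cryptography package (all names are mine):

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def xor16(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def first_blocks(key: bytes, civ: bytes, div: bytes, p2: bytes):
        # The DIV rides in the first plaintext block, so CBC computes
        # C1 = E(CIV xor DIV) and chains C1 into the next block P2; with
        # civ = b"\x00" * 16 this reduces C1 to E(DIV), hence my question.
        e = Cipher(algorithms.AES(key), modes.ECB()).encryptor()  # raw blocks
        c1 = e.update(xor16(civ, div))
        c2 = e.update(xor16(c1, p2))
        return c1, c2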
Posted by Cypherpunk at December 3, 2004 04:42 PM

Hi Cp,
2. CIV. Taking these easy ones first: Yes, the CIV can quite happily be set to zeros, and as long as the DIV works, we are covered. The main reason for using the CIV *as well* is that it is cheap to set up, because it is done at the same time as the key setup. Adding another 32 bytes of random data is nothing, and it firewalls problems if for some reason we stuff up on the DIV.
(In fact, even though the protocol specifies a strong random CIV, there is nothing stopping an implementation being lazy there. But, implementation-wise, there is no point in being that lazy. Once you have the PRNG in swing for the key, just wind it on a bit more for the CIV.)
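Something like this sketch (names mine, illustrative only):

    import os

    def context_setup():
        # The CIV is drawn from the same generator, at the same time, as
        # the key -- "wind it on a bit more" -- so a strong CIV costs
        # essentially nothing over the key setup we must do anyway.
        key = os.urandom(16)  # cipher key from the running PRNG
        civ = os.urandom(16)  # context IV: a few more bytes of that stream
        return key, civ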
(MTF)
Posted by Iang at December 3, 2004 09:20 PM

3. DOS. A couple of observations, and a few answers.
(i) There are many things that can be done in the DOS world to attack a site. There is a trend to move further and further down the stack, from HTTP to TCP to IP, etc. That's just a trend, I'm not sure what to make of it.
(ii) DOS is a real threat. It is a happening threat. There are maybe half a dozen active extortion operators out there, and there are ongoing attacks on financial sites all the time. I kid you not, this is a real active issue. I wouldn't say it is as bad as phishing, but it is definitely more real than say MITM or eavesdropping, which can be classified as fantasy issues.
DOS is certainly costing small, medium and large businesses a big bucket of cash. This has been going on in a regular fashion for about 2 years, and it is fair to say that if you are not "in the know", you may never find out. (I only found this out a month or two back; it seems that practically all the money issuers have been hit over the last couple of years, and they've stayed mum.)
(a) So, under this threat, what can we do? We all know that DOS is terribly hard to deal with. Thinking about this, I came up with the notion that any new security protocol should be at least no worse under DOS than the same system without the security protocol. That's my current view of the target, and that's why I put the MAC in there. (Note that I speak for myself, not Zooko :-)
(b) Does this defend against DOS? Well, a flood of datagrams can only be defended against on the basis of ensuring that they are from a valid known party. What better way to do this than an encrypt-then-MAC approach? The MAC is on the outer layer, and it "authenticates" the source. My only regret is that the MAC is quite slow to calculate, so an attacker could happily flood us with junk packets carrying junk MACs and we would still pay the verification cost. I'm keen to improve that, but I think it is up to others to find the fastest ratios there, and it has to be balanced against practicality: HMAC-SHA1 is well known and well coded. This is simply "good enough" for now. Note that we take a pragmatic approach there, and defer real optimisation until SDP2, etc.
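In sketch form, the send side looks something like this (the framing and names are mine, not the spec's):

    import hashlib
    import hmac

    def seal(mac_key: bytes, ciphertext: bytes) -> bytes:
        # Encrypt-then-MAC: the HMAC-SHA1 tag is computed over the
        # already-encrypted body, so the outermost layer is what
        # "authenticates" the source of the datagram.
        tag = hmac.new(mac_key, ciphertext, hashlib.sha1).digest()  # 20 bytes
        return ciphertext + tag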
(c) Handshaking, etc. Well, sure, but that's out of scope, and frankly, the existing protocols I have to use here (SOX) will have to be retrofitted or rethought for this issue. Also, whatever happens there, we still have the datagram issue. To defend against DOS means everything has to be defended, and in more or less equal terms.
(d) "What if a client computes one packet with a MAC and send a billion copies?" Hmm, that's a good one. Yes, I don't see we have anything against that right now. Thought needed...
(e) "Or maybe they compute a thousand different packets with valid MACs and send them over and over again." Those would be done under a context, and as each context is "expensive" we can potentially block / clobber contexts.
Drifting into whiteboarding here, I think the way to address this is to move to dynamic load stacking. That is, under DOS conditions, the thing to do is to ramp up the client's load for valid packets. We can (conceivably) do this by ensuring contexts are fresh, and we can also use hashcash techniques and force the MAC to be calculated in a certain way. This is so far outside the scope of the current protocol that I've not really developed the notion, but I suspect that if a server breaks all existing contexts and then sets new contexts with work factors set in them, it could make DOS much harder. (Setting work factors is as easy as saying the first 8 bits of the MAC tag must equal 0.) Hence, ideally we desire a MAC that is fast to check and slow to create.
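For instance, the receiver's side of such a work factor could be as small as this (a sketch of the idea only, not anything in the spec):

    def meets_work_factor(tag: bytes, zero_bits: int = 8) -> bool:
        # Hashcash-style: require the first zero_bits of the MAC tag to
        # be zero, so a sender grinds roughly 2**zero_bits MAC attempts
        # per packet while the receiver's check stays a single comparison.
        full, rem = divmod(zero_bits, 8)
        if any(tag[:full]):
            return False
        return rem == 0 or (tag[full] >> (8 - rem)) == 0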
(f) "Let me turn it around and ask, suppose you didn't have a MAC, but everything else was the same. Do you think the protocol would now suddenly be more vulnerable to DOS? How?"
OK, here's one of my hidden assumptions sneaking through. This protocol is not designed to protect more than the confidentiality of the lower layer packet. It's really the basics here. That's because the application *I* have in mind is financial, in which case (by some blah blah that I will skip here) the application must provide its own integrity and other checks. E.g., briefly, digital signatures.
So there is no real trauma about forged packets. But, digital signatures are expensive, in general. So we don't want to rely on them for DOS protection. Also, not all packets in a financial protocol are so protected; some nominal packets sneak through and they could be DOSed. Hence it becomes highly convenient, if short of essential, to use the MAC.
(g) Last minor point :-) "Either way you are still adding the cost of a decryption and MAC verification." Hopefully not, as the MAC is done on the encrypted outer packet: if the MAC fails, we don't need to decrypt. This was intentional, for the DOS reasons, and also because the paper by Bellare and Namprempre seems to prefer this method.
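The receive path, again as a sketch with my own names:

    import hashlib
    import hmac

    TAG_LEN = 20  # HMAC-SHA1 output size

    def accept(mac_key: bytes, packet: bytes):
        # Verify the outer MAC first and drop the packet on mismatch, so
        # no decryption work is ever spent on an unauthenticated packet.
        body, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
        want = hmac.new(mac_key, body, hashlib.sha1).digest()
        if not hmac.compare_digest(want, tag):
            return None  # cheap rejection
        return body  # authenticated ciphertext, now worth decrypting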
1. "Why do it like this, instead of a random 16 byte IV?"
Last question, and this is the hard one. The simple answer is that the construction is designed to be implementation-wise robust and efficient, even at the expense of cryptographic weirdness.
There are several issues.
(i) A small one is that this design is easier to code. I claim. That's because it replaces two weird constructs (the IV and the padding) with one (the Pad) which does both. This is easier for coders to deal with, but not cryptographers. As our market is the former not the latter, we go for simplicity of coding.
(ii) A big issue is that the IV is supposed to be unique. Most designs stop there, leaving the programmer to figure out how to do that. And then begin the problems. Is it unique? Or random? Or is it some combination? Or what?
AFAIK we have to make it unique. But, conventional methods for doing that are all flawed. Time is flawed because we can send multiple packets. Sequence numbers are flawed because some software has a lot of trouble generating them (the software I am dealing with now cannot seriously do this AFAICS, and Zooko's analysis of crash recovery makes sequence numbers a very dodgy assumption). Random numbers can be a) slow, b) poorly random, c) blocking, and/or d) not random at all, like all zeros.
All of these are real practical problems that occur over and over again in implementation world, unlike the more esoteric fantasies that people talk about. A clagged PRNG is a real worry, and a protocol that says "must use strong random numbers" is basically one that I would say is non-robust in field conditions. I'd plan on something like 10% of field uses having a clagged PRNG.
So how do we deal with it? We construct a DIV that includes all three components. And we advise the implementor to watch out for *all* three. Hopefully one will still be working by the time the attacker turns up and tries to raid the family treasure.
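A sketch of such a DIV (the layout is my own guess, purely illustrative; the point is only that all three components are present):

    import os
    import struct
    import time

    _seq = 0  # per-context sequence; only as reliable as the host allows

    def make_div() -> bytes:
        # Combine time, a sequence number and randomness into 16 bytes, so
        # the DIV stays unique even if one or two components fail in the
        # field (stuck clock, crash-reset counter, clagged PRNG).
        global _seq
        _seq += 1
        return (struct.pack(">d", time.time())          # 8 bytes of time
                + struct.pack(">I", _seq & 0xFFFFFFFF)  # 4 bytes of sequence
                + os.urandom(4))                        # 4 bytes of random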
(iii) Practical defence is "defence in depth." Don't assume a perfect component; model it as if it fails. Hence, the MAC can fail, and the higher application has already been warned. Here, the DIV might fail totally, but we still have the CIV and we still have some latent Pad randoms.
(iv) This is why we run the random generator on a little more. Defence in depth: we know we have to pad out to the block size, so let's shove extra randomness in there up front. Randoms are cheap, and if the DIV is really stuffed up, this might help by giving us a second IV.
(v) There have been lots of padding attacks that allow packets to be fiddled with, partly because the padding is at the end. By putting the padding at the front, we make that harder. Yes, the MAC should cover it, but again, "defence in depth."
Posted by Iang at December 3, 2004 10:35 PM