February 13, 2008

H2.1 Protocols Divide Naturally Into Two Parts

Peter Gutmann made the following comment:

Hmm, given this X-to-key-Y pattern (your DTLS-for-SRTP example, as well as OpenVPN using ESP with TLS keying), I wonder if it's worth unbundling the key exchange from the transport? At the moment there's (at least):
  TLS-keying --+-- TLS transport
               |
               +-- DTLS transport
               |
               +-- IPsec (ESP) transport
               |
               +-- SRTP transport
               |
               +-- Heck, SSH transport if you really want

Is the TLS handshake the universal impedance-matcher of secure-session mechanisms?
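
To put that picture in code: a minimal sketch, in Python, of one keying mechanism feeding interchangeable transports. All names here are illustrative, not any real library's API; the point is that the only thing crossing the border is the session key.

    import os

    class TlsKeying:
        """stand-in for a TLS-style handshake; here it just mints a key"""
        def establish(self) -> bytes:
            return os.urandom(32)          # 256-bit session key

    class Transport:
        def __init__(self, key: bytes):
            self.key = key                 # "trust this key completely"
        def send(self, payload: bytes) -> bytes:
            raise NotImplementedError      # each transport frames and protects its own way

    class DtlsTransport(Transport): pass
    class EspTransport(Transport): pass
    class SrtpTransport(Transport): pass

    key = TlsKeying().establish()          # one keying mechanism ...
    channel = SrtpTransport(key)           # ... any transport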

Which reminds me to bring out another hypothesis in secure protocol design, being #2: Divide and conquer. The first part is close to the above remark:

#2.1 Protocols Divide Naturally Into Two Parts

Good protocols divide into two parts, the first of which says to the second,

trust this key completely!

Frequently we see this separation between a Key Exchange phase and a Wire Encryption phase within a protocol. Mingling these phases seems to result in excessive confusion of goals, whereas clean separation improves simplicity and stability.
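
As a concrete sketch of that hand-off, assuming the pyca/cryptography library (the text names no particular tools, so treat these primitives as stand-ins): part 1 agrees a key, part 2 encrypts the wire, and only the key crosses the border.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # part 1: key exchange (both parties shown in one place for brevity)
    alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    shared = alice.exchange(bob.public_key())
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"wire encryption").derive(shared)

    # the border: part 1 says to part 2, "trust this key completely!"

    # part 2: wire encryption, knowing nothing of how the key was agreed
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, b"payload", None)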

Note that this principle is recursive. That is, your protocol might separate around the key: part 1 for key exchange, part 2 for encryption. The first part might then separate into two more components, one based on public keys and the other on secret keys, which we can call 1.a and 1.b. Some protocols separate further still, for example into primary (or root) public keys and local public keys, or into session-negotiation secret keys and payload-encryption keys.
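
The recursion can be sketched as a key hierarchy, each level handing the next a key and nothing else. A stdlib-only Python sketch; the labels and the single-HMAC derivation step are illustrative stand-ins for a proper KDF:

    import hashlib, hmac, os

    def derive(parent: bytes, label: bytes) -> bytes:
        """one labelled HMAC step, standing in for a real KDF such as HKDF"""
        return hmac.new(parent, label, hashlib.sha256).digest()

    root_secret = os.urandom(32)                   # stands in for the primary/root level
    session_key = derive(root_secret, b"session")  # session-negotiation key
    payload_key = derive(session_key, b"payload")  # payload-encryption key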

As long as the borders are clean and simple, this game is good because it allows us to conquer complexity. But when the borders are breached, we add complexity, and that complexity adds insecurity.

A Warning. An apparent advantage of this hypothesis is that one can swap in a different Key Exchange, or upgrade the protection protocol, or indeed repair one part without doing undue damage to the other. But bear in mind #1: just because these natural borders appear doesn't mean you should slice, dice, cook and book like software people do.

Divide and conquer is #2 because of its natural tendency to turn one big problem into two smaller problems. More later...

Posted by iang at February 13, 2008 07:12 AM | TrackBack
Comments

I've always thought that sending the key separately from sending the "money message" is the logical solution, i.e. two distinct messaging networks.

Moreover, it always appeared logical that "soft authentication" involving geo-location is capable of supporting such a functional separation.

E.g. texting the key (backed up by geo-location of the mobile) and ensuring a match with some other geo-location for the PC/ATM/set-top box "ground station" from which the money message itself is transmitted.

We did a bit of work on that back in my Enfocast days: there was nothing technically difficult about it, although as I recall, it needed IPv6 addresses or something?

Posted by: Chris Cook at February 13, 2008 10:46 AM

part of the existing POS/ATM/etc environment involves a single round-trip transaction operation.

some of the internet implementations have ignored atomicity and transaction-commit principles.

i remember a sigmod conference circa 1992 in san jose where there was some discussion of the x.5xx stuff. one of the people present attempted to explain what was going on as a bunch of networking engineers attempting to re-invent 1960s database technology.

some of the past ecommerce specifications for the internet involved large amounts of protocol chatter ... on the internet side ... which was all then (effectively) thrown away as part of interfacing to the payment gateway to perform the actual transaction. in at least some cases, this represented a factor of one hundred increase in both payload and processing bloat ... compared to the actual transaction.

as a result there was enormous complexity and impedance mismatch between what was being done on the internet side ... and what was actually happening as part of the actual financial transaction.

for some topic drift ... this is a recent post referencing working out details of a distributed lock manager for massive parallel database scaleup
http://www.garlic.com/~lynn/2008c.html#81
and, to start drifting back to the subject ... this is an old post regarding a meeting on massive database scaleup
http://www.garlic.com/~lynn/95.html#13

two of the people mentioned in the above referenced meeting later show up in a small client/server startup responsible for something called the commerce server. we were brought in as consultants because they wanted to do payment transactions on the server. it turns out that the small client/server startup had this technology called SSL that they wanted to use for the process. The resulting effort included something called a payment gateway
http://www.garlic.com/~lynn/subnetwork.html#gateway

and is now frequently referred to as electronic commerce.

However, these recent posts mention the enormous inefficiency associated with layering SSL on top of TCP for transaction operations
http://www.garlic.com/~lynn/aadsm28.htm#21 Dutch Transport Card Broken
http://www.garlic.com/~lynn/aadsm28.htm#22 Dutch Transport Card Broken

for other topic drift ... this is a short thread about whether financial operations require five-nines availability
http://www.garlic.com/~lynn/2008d.html#21
http://www.garlic.com/~lynn/2008d.html#30

the issue here is that a lot of payment infrastructure has authorization occurring as part of real-time transactions ... but actual settlement occurs later in some sort of batch operation (frequently in something called the overnight batch window), requiring reconciliation between the separate authorization and settlement operations.

one of the emerging issues is that for the past 10-15 yrs or so, there have been numerous efforts to re-engineer the batch settlement and combine it with the authorization function, sometimes referred to as "straight-through processing".

Posted by: Lynn Wheeler at February 13, 2008 12:36 PM

"Good protocols divide into two parts, the first of which says to the second, trust this key completely! "

This might well be the basis of a better problem factorization than the layer factorization: divide the task by the way trust is embodied, rather than by the layers of communication.

Trust is an application level issue, not a communication layer issue, but neither do we want each application to roll its own trust cryptography - which at present web servers are forced to do. (Insert my standard rant against SSL/TLS)

Most web servers are vulnerable to attacks akin to the session-cookie fixation attack, because each web page reinvents session-cookie handling, and even experts in cryptography are apt to get it wrong.

The correct procedure is to generate and issue a strongly unguessable, random, https-only cookie on successful login, representing the fact that the possessor of this cookie has proven his association with a particular database record; but very few people, including very few experts in cryptography, actually do it this way. Association between a client request and a database record needs to be part of the security system. It should not be something each web-page developer is expected to build on top of the security system.
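
A minimal sketch of that procedure in Python. The token generation is stdlib; the set_cookie/cookies calls follow the common Flask-style signatures but are an assumption here, as is every name:

    import secrets

    SESSIONS = {}      # server-side map: token -> database record (user id)

    def on_login_success(user_id, response):
        token = secrets.token_urlsafe(32)      # strongly unguessable: 256 bits of randomness
        SESSIONS[token] = user_id              # the association lives server-side
        # secure = https-only on the wire; httponly = invisible to page scripts
        response.set_cookie("session", token, secure=True, httponly=True)

    def record_for(request):
        """map a client request back to its database record, or None"""
        return SESSIONS.get(request.cookies.get("session"))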

Posted by: James A. Donald at February 14, 2008 04:43 PM

Your "divide and conquer" rule is an example of "separation of concerns" .

Which, IMNSHO, better reflects why you want to separate the phases. It even forces you to be clear on the "concerns" (see http://en.wikipedia.org/wiki/Separation_of_concerns).

BTW, I still get questions from "security professionals" why one should have different keys for authentication and confidentiality.
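
One standard answer, as a sketch: derive independent keys for each concern from a single master secret, so a compromise or misuse of one cannot leak into the other. Stdlib Python; the labels are illustrative:

    import hashlib, hmac, os

    master = os.urandom(32)
    k_auth = hmac.new(master, b"authentication", hashlib.sha256).digest()
    k_conf = hmac.new(master, b"confidentiality", hashlib.sha256).digest()

    # the MAC never sees the confidentiality key, and vice versa
    tag = hmac.new(k_auth, b"message", hashlib.sha256).hexdigest()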

Posted by: Twan at February 15, 2008 09:20 AM