Comments: H2.1 Protocols Divide Naturally Into Two Parts

I've always thought that sending the key separately from sending the "money message" is the logical solution, i.e. two distinct messaging networks.

Moreover, it always appeared logical that "soft authentication" involving geo-location is capable of supporting such a functional separation.

e.g. texting the key (backed up by geo-location of the mobile) and ensuring a match with the geo-location of the PC/ATM/set-top-box "ground station" from which the money message itself is transmitted.
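The geo-location match described above can be sketched as follows. This is a minimal illustration only: the haversine great-circle formula and the 1 km acceptance threshold are my assumptions, not details of any deployed system.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in kilometres.
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def locations_match(mobile, terminal, threshold_km=1.0):
    # "Soft authentication": accept only if the phone that received the key
    # and the terminal sending the money message are physically close.
    return haversine_km(*mobile, *terminal) <= threshold_km
```

The threshold is the tunable "softness": tight enough to rule out a key texted to a phone in another city, loose enough to tolerate coarse cell-tower positioning.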

We did a bit of work on that back in my Enfocast days: there was nothing technically difficult about it, although as I recall, it needed IPv6 addresses or something?

Posted by Chris Cook at February 13, 2008 10:46 AM

part of the existing POS/ATM/etc environment involves a single round-trip transaction operation.

some of the internet implementations have ignored atomicity and transaction-commit principles.

i remember sigmod conference circa 1992 in san jose where there was some discussion of the x.5xx stuff. one of the people present attempted to explain what was going on as a bunch of networking engineers attempting to re-invent 1960s database technology.

some of the past ecommerce specifications for the internet involved large amounts of protocol chatter ... on the internet side ... which was all then (effectively) thrown away as part of interfacing to the payment gateway to perform the actual transaction. in at least some cases, this represented a factor-of-100 increase in both payload and processing bloat ... compared to the actual transaction.

as a result there was enormous complexity and impedance mismatch between what was being done on the internet side ... and what was actually happening as part of the actual financial transaction.

for some topic drift ... this is a recent post referencing working out details of a distributed lock manager for massively parallel database scaleup.
to start drifting back to the subject ... this is an old post regarding a meeting on massive databases.

two of the people mentioned in the above-referenced meeting later show up in a small client/server startup responsible for something called the commerce server. we were brought in as consultants because they wanted to do payment transactions on the server. it turns out that the small client/server startup had this technology called SSL that they wanted to use for the process. the resulting effort included something called a payment gateway

and is now frequently referred to as electronic commerce.

however, these recent posts mention the enormous inefficiency associated with layering SSL on top of TCP for transaction operation: Dutch Transport Card Broken and Dutch Transport Card Broken

for other topic drift ... this is a short thread about whether financial operations require five-nines availability

the issue here is that a lot of payment infrastructure has authorization occurring as part of real-time transactions ... but actual settlement occurs later in some sort of batch operation (frequently in something called the overnight batch window), and this requires reconciliation between the separate authorization and settlement operations.
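The reconciliation step between real-time authorization and overnight batch settlement can be sketched as a simple set comparison. The dict-of-transaction-ids representation and the three mismatch categories are my assumptions for illustration; real reconciliation systems are far more involved:

```python
def reconcile(authorizations, settlements):
    # authorizations / settlements: {txn_id: amount_in_cents} dicts, one from
    # the real-time authorization log, one from the overnight settlement batch.
    auths, setts = set(authorizations), set(settlements)
    settled_without_auth = setts - auths          # settled but never authorized
    authorized_unsettled = auths - setts          # authorized but never settled
    amount_mismatches = {t for t in auths & setts # present in both, amounts differ
                         if authorizations[t] != settlements[t]}
    return settled_without_auth, authorized_unsettled, amount_mismatches
```

Anything in the first or third bucket needs investigation; the second bucket is often legitimate (expired authorizations), which is part of why collapsing the two phases into "straight-through processing" is attractive.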

one of the emerging issues is that for the past 10-15 yrs or so, there have been numerous efforts to re-engineer the batch settlement and combine it with the authorization function, sometimes referred to as "straight-through processing".

Posted by Lynn Wheeler at February 13, 2008 12:36 PM

"Good protocols divide into two parts, the first of which says to the second, trust this key completely!"

This might well be the basis of a better problem factorization than the layer factorization - divide the task by the way trust is embodied, rather than on the basis of layered communication.

Trust is an application level issue, not a communication layer issue, but neither do we want each application to roll its own trust cryptography - which at present web servers are forced to do. (Insert my standard rant against SSL/TLS)

Most web servers are vulnerable to attacks akin to the session-cookie fixation attack, because each web page reinvents session-cookie handling, and even experts in cryptography are apt to get it wrong.

The correct procedure is to generate and issue a strongly unguessable random HTTPS-only cookie on successful login, representing the fact that the possessor of this cookie has proven his association with a particular database record. But very few people, including very few experts in cryptography, actually do it this way. Association between a client request and a database record needs to be part of the security system; it should not be something each web page developer is expected to build on top of the security system.
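The procedure described above can be sketched in a few lines. This is a minimal illustration, not a full session framework: the in-memory `SESSIONS` table and the function names are assumptions, and a real deployment would add expiry and persistent storage.

```python
import secrets

SESSIONS = {}  # server-side map: opaque random token -> database record id

def login(user_id):
    # Mint a fresh, strongly unguessable token on every successful login.
    # Never reusing any pre-login cookie is what defeats session fixation.
    token = secrets.token_urlsafe(32)  # 256 bits from the OS CSPRNG
    SESSIONS[token] = user_id
    return token  # issue as: Set-Cookie: sid=<token>; Secure; HttpOnly

def user_for(token):
    # The token carries no meaning by itself; the association with a
    # particular database record exists only in the server-side table.
    return SESSIONS.get(token)
```

The `Secure` flag is the "https only" property above; `HttpOnly` additionally keeps page scripts from reading the token. Because the token is random rather than derived from anything, possessing it is the only way to reach the associated record.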

Posted by James A. Donald at February 14, 2008 04:43 PM

Your "divide and conquer" rule is an example of "separation of concerns".

Which, IMNSHO, better reflects why you want to separate the phases. It even forces you to be clear on the "concerns" (see

BTW, I still get questions from "security professionals" asking why one should have different keys for authentication and confidentiality.
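One standard answer is that even when only a single master secret is available, per-purpose keys should be derived from it rather than using it directly for both jobs. A sketch of the idea, using a simplified single-block HKDF-Expand-style derivation (the labels and the all-zero placeholder master are illustrative assumptions):

```python
import hashlib
import hmac

def derive_subkey(master_key, label):
    # HMAC the purpose label with the master key. Distinct labels yield
    # independent subkeys, so the authentication key never doubles as the
    # encryption key and cross-protocol key-reuse attacks don't apply.
    return hmac.new(master_key, label + b"\x01", hashlib.sha256).digest()

master = bytes(32)  # placeholder; a real master secret comes from a key exchange or KDF
auth_key = derive_subkey(master, b"authentication")
enc_key = derive_subkey(master, b"confidentiality")
```

Separate keys also let the two concerns have separate lifetimes and separate compromise consequences, which is exactly the separation-of-concerns point above.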

Posted by Twan at February 15, 2008 09:20 AM