September 11, 2009

40 years on, packets still echo on, and we're still dropping the auth-shuns

It's terrifically clichéd to say it these days, but the net is one of the great engineering marvels of science. The Economist reports it as 40 years old:

Such contentious issues never dawned on the dozen or so engineers who gathered in the laboratory of Leonard Kleinrock (pictured below) at the University of California, Los Angeles (UCLA) on September 2nd, 1969, to watch two computers transmit data from one to the other through a 15-foot cable. The success heralded the start of ARPANET, a telecommunications network designed to link researchers around America who were working on projects for the Pentagon. ARPANET, conceived and paid for by the defence department’s Advanced Research Projects Agency (nowadays called DARPA), was unquestionably the most important of the pioneering “packet-switched” networks that were to give birth eventually to the internet.

Right, ARPA funded a network, and out of that emerged the net we know today. Bottom-up, not top-down like the European competitor, OSI/ISO. Still, it wasn't about doing everything from the bottom:

The missing link was supplied by Robert Kahn of DARPA and Vinton Cerf at Stanford University in Palo Alto, California. Their solution for getting networks that used different ways of transmitting data to work together was simply to remove the software built into the network for checking whether packets had actually been transmitted—and give that responsibility to software running on the sending and receiving computers instead. With this approach to “internetworking” (hence the term “internet”), networks of one sort or another all became simply pieces of wire for carrying data. To packets of data squirted into them, the various networks all looked and behaved the same.

I hadn't realised that this lesson is so old, but that makes sense. It is a lesson that will echo through time, doomed to be re-learnt over and over again, because it is so uncomfortable: The application is responsible for getting the message across, not the infrastructure. To the extent that you make any lower layer responsible for your packets, you reduce reliability.
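
To see what that responsibility looks like in practice, here is a minimal sketch in Python; the framing, the flaky channel and all the names are mine and purely illustrative, not anybody's actual protocol. The application adds its own integrity check and its own retry loop, and treats everything below it as a dumb wire that may mangle or drop the data:

    import hashlib
    import os

    def frame(payload: bytes) -> bytes:
        # The application adds its own end-to-end check; the network below is
        # treated as nothing more than a wire that may mangle or drop the data.
        return hashlib.sha256(payload).digest() + payload

    def deliver(frame_bytes: bytes):
        # Receiver side: accept the payload only if the end-to-end check passes.
        digest, payload = frame_bytes[:32], frame_bytes[32:]
        if hashlib.sha256(payload).digest() != digest:
            return None              # stay silent; the sender will retry
        return payload

    def send_until_acked(payload: bytes, channel, max_tries: int = 5) -> bool:
        # The sender, not the network, decides when the message has arrived:
        # it keeps retrying until its own acknowledgement comes back.
        for _ in range(max_tries):
            if deliver(channel(frame(payload))) == payload:
                return True
        return False

    def flaky_channel(data: bytes) -> bytes:
        # A deliberately unreliable "network": corrupts the last byte half the time.
        if data and os.urandom(1)[0] < 128:
            return data[:-1] + bytes([data[-1] ^ 0xFF])
        return data

    print(send_until_acked(b"hello, 1969", flaky_channel))

The question "did my message get across?" can only be answered at the ends, where the message has meaning; the most any lower layer can offer is a best effort.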

This subtlety -- knowing what you can push down into the lower layers, and what you cannot -- is probably one of those things that separates the real engineers from the journeymen. The wolves from the sheep, the financial cryptographers from the Personal-Home-Pagers. If you thought TCP was reliable, you may count yourself amongst the latter, the sheepish millions who believed in that myth and thereby partly got us to the security mess we are in today. (Relatedly, it seems that cloud computing has the same issue.)

Curiously, though, from the rosy-eyed view of today, it is still possible to make the same layering mistake. Gunnar reported the very same Vint Cerf as saying, just today (more or less):

Internet Design Opportunities for Improvement

There's a big Gov 2.0 summit going on, which I am not at, but at the event apparently John Markoff asked Vint Cerf the following question: "what would you have designed differently in building the Internet?" Cerf had one answer: "more authentication"

I don't think so. Authentication, or authorisation, or any of those other shuns, is again something that belongs in the application. We find it sits best at the very highest layer, because it is a claim of significant responsibility. At the intermediate layers you'll find lots of wannabe packages vying for your corporate bux:

* IP
* IP Password
* Kerberos
* Mobile One Factor Unregistered
* Mobile Two Factor Registered
* Mobile One Factor Contract
* Mobile Two Factor Contract
* Password
* Password Protected transport
* Previous Session
* Public Key X.509
* Public Key PGP
* Public Key SPKI
* Public Key XML Digital Signature
* Smartcard
* Smartcard PKI
* Software PKI
* Telephony
* Telephony Nomadic
* Telephony Personalized
* Telephony Authenticated
* Secure remote password
* SSL/TLS Client Authentication
* Time Sync Token
* Unspecified

and that's just in SAML! "Holy protocol hodge-podge Batman!" says Gunnar, and he's not often wrong.
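
For a sense of what one of those choices actually looks like on the wire, here is a rough sketch in Python, standard library only; the request ID and issuer URL are made up, but the namespaces and the context class URI are the standard SAML 2.0 ones. All the service provider is really saying is: please authenticate this person by password over a protected transport, whatever that turns out to mean at your end:

    import xml.etree.ElementTree as ET

    SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
    SAML = "urn:oasis:names:tc:SAML:2.0:assertion"
    PASSWORD_PROTECTED_TRANSPORT = (
        "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport")

    # Build a bare-bones AuthnRequest that asks for one particular
    # "authentication context" from the long menu above.
    req = ET.Element("{%s}AuthnRequest" % SAMLP,
                     {"ID": "_example-request", "Version": "2.0"})
    ET.SubElement(req, "{%s}Issuer" % SAML).text = "https://sp.example.com"
    rac = ET.SubElement(req, "{%s}RequestedAuthnContext" % SAMLP,
                        {"Comparison": "exact"})
    ET.SubElement(rac, "{%s}AuthnContextClassRef" % SAML).text = (
        PASSWORD_PROTECTED_TRANSPORT)

    print(ET.tostring(req, encoding="unicode"))

All that machinery, and the question that matters, namely what the relying application should actually accept, is still left to the relying application.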

Indeed, as Adam pointed out, the net works in part because it deliberately shunned the auth:

The packet interconnect paper ("A Protocol for Packet Network Intercommunication," Vint Cerf and Robert Kahn) was published in 1974, and says "These associations need not involve the transmission of data prior to their formation and indeed two associates need not be able to determine that they are associates until they attempt to communicate."

So what was Vint Cerf getting at? He clarified in comments to Adam:

The point is that the current design does not have a standard way to authenticate the origin of email, the host you are talking to, the correctness of DNS responses, etc. Does this autonomous system have the authority to announce these addresses for routing purposes? Having standard tools and mechanisms for validating identity or authenticity in various contexts would have been helpful.

Right. The reason we don't have standard ways to do this is that it is too hard a problem. There is no one answer to what it means:

people like me could and did give internet accounts to (1) anyone our boss said to and (2) anyone else who wanted them some of this internet stuff and wouldn't get us in too much trouble. (Hi S! Hi C!)

which therefore means, it is precisely and only whatever the application wants. Or, if your stack design goes up fully past layer 7 into the people layer, like CAcert.org, then it is what your boss wants. So, Skype has it, my digital cash has it, Lynn's X959 has it, and PGP has it. IPSec hasn't got it, SSL hasn't got it, and it looks like SAML won't be having it, in truck-loads :) Shame about that!

Digital signature technology can help here but just wasn't available at the time the TCP/IP protocol suite was being standardized in 1978.

(As Gunnar said: "Vint Cerf should let himself off the hook that he didn't solve this in 1978.") Yes, and digital signature technology is another reason why modern clients can now be designed with it built in and aligned to the application. But not "in the Internet" please! As soon as the auth stuff is standardised or turned into a building block, it has a terrible habit of turning into treacle. Messy brown sticky stuff that gets into everything, slows everyone down and gives young people an awful insecurity complex derived from pimples.
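
To be concrete about "built in and aligned to the application", here is a minimal sketch, assuming the third-party Python cryptography package; the message, the key handling and the accept() helper are purely illustrative. The client signs and checks its own messages, and the accept-or-reject decision sits right next to the meaning of the message, not down in the transport:

    # Assumes: pip install cryptography
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    signing_key = ed25519.Ed25519PrivateKey.generate()
    verify_key = signing_key.public_key()   # exchanged however the
                                            # *application* decides to trust it

    message = b"pay 10 units to Bob, ref 42"
    signature = signing_key.sign(message)

    def accept(msg: bytes, sig: bytes) -> bool:
        # The decision lives at the top of the stack, next to the meaning of
        # the message, not in TCP, IPSec or the routing layer.
        try:
            verify_key.verify(sig, msg)
            return True
        except InvalidSignature:
            return False

    print(accept(message, signature))                           # True
    print(accept(b"pay 1000000 units to Mallory", signature))   # False

The signature travels with the message, and only the application that understands the message gets to say what a good signature is worth.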

Oops, late addition of counter-evidence: "US Government to let citizens log in with OpenID and InfoCard?" You be the judge!

Posted by iang at September 11, 2009 12:57 PM | TrackBack
Comments

There are two issues which you are trying to address with authentication: the "who" and the "what".

I agree that the "who" is an open problem that I don't think will ever be fully addressed, due to "contexts within society" which will always be dynamic (roles change etc).

The "what" is a technical issue which can be solved relatively easily, and I'm always amazed that it has not been, except on a case by case basis.

I suspect the reason is that system architects are making the mistake of thinking they are either the same problem or directly related to each other and thus are "close coupled" in their designs.

The answer is that they are not, and thus need a coupling mechanism or "interface".

To use a mechanical analogy, it is the difference between a two-part rivet and a nut and bolt. They both achieve the same purpose (fixing two things together) but in very different ways.

The difference between the nut and bolt and the two-part rivet is the "screw thread" (the interface); it is what makes the significant functional difference, giving the nut and bolt many, many advantages over the rivet.

However, there was a problem with screw threads, which was that every engineer had his own, and they were all different and "hand made". This made engineering very, very expensive, so much so that sometimes you had a specific nut for a specific bolt.

A Victorian gentleman by the name of Whitworth realised that this was a significant problem and came up with a "standard" set of threads.

The result: we no longer even think about nuts and bolts; we just select the ones that will do the job and "buy them in" from wherever.

And this is really the issue that needs addressing. There needs to be a "standard" by which the various "who" and "what" systems can be joined together, without problem.

However, to avoid there being "many standards" (giving rise to the toothbrush issue), there should be a flexible framework into which various additions can be added and which, importantly, meets the expanding needs of individual systems.
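
To make the analogy concrete, a toy sketch in Python (every name in it is illustrative only): the "who" side and the "what" side each sit behind their own small interface, and the only thing that gets standardised is the thread that joins them, so either side can be swapped out without touching the other.

    from typing import Optional, Protocol

    class Who(Protocol):
        # The open-ended "who" problem, hidden behind one face of the joint.
        def identify(self, credential: str) -> Optional[str]: ...

    class What(Protocol):
        # The tractable "what" problem, hidden behind the other face.
        def permits(self, subject: str, action: str) -> bool: ...

    class PasswordWho:
        def __init__(self, table):
            self.table = table                # credential -> subject id
        def identify(self, credential: str) -> Optional[str]:
            return self.table.get(credential)

    class RoleWhat:
        def __init__(self, grants):
            self.grants = grants              # subject id -> allowed actions
        def permits(self, subject: str, action: str) -> bool:
            return action in self.grants.get(subject, set())

    def authorise(who: Who, what: What, credential: str, action: str) -> bool:
        # The "screw thread": any Who and any What fit together here.
        subject = who.identify(credential)
        return subject is not None and what.permits(subject, action)

    print(authorise(PasswordWho({"s3cret": "clive"}),
                    RoleWhat({"clive": {"comment"}}), "s3cret", "comment"))  # True
    print(authorise(PasswordWho({"s3cret": "clive"}),
                    RoleWhat({"clive": {"comment"}}), "s3cret", "admin"))    # False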

One of my "bugbears" about NIST is that it concentrates on the "how", not the "why", of doing things. It appears to be an American "mindset" issue; in Europe the standards bodies tend to concentrate on the "why", not the "how", with flexible frameworks where many different types of "how" (which might have their own sub-standards) can be used interchangeably.

Posted by: Regards, Clive at September 12, 2009 10:04 AM

On the WHO: yes, it is not solvable in tech.
On confusing WHO with WHAT: yes, happens all the time.
On substituting WHO for WHAT: I think this is an inevitable result of the confusion, and it results in some pretty ropey situations. Far too frequently systems are built on the basis that we know WHO when we really want to know WHAT.

On the WHAT as standardised nuts & bolts: on this I disagree. To restate, the reason we've only ever solved it on a case-by-case basis (e.g., financial instruments) is that the problem is very hard. And it only gets solved when the application gels. Not a good fit for standardisation. For financial instruments, e.g. (click below), I propose a very strong, very elegant solution; but it doesn't lend itself to anything other than contracts.

We all know what standardisation means. It works spectacularly well in nuts & bolts and protocols and seatbelts and so forth. That doesn't mean it applies to every problem. There are some areas of life that just don't get standardised so well: love, politics, competition, crime, internet arguments, war, ... and *security*.

Posted by: Iang (on solving the case of financial instruments) at September 12, 2009 01:21 PM