So much of what we do today is re-inventing what others have already done, but which has been lost in the noise. Today, Ian Brown said:
EIFFEL is a group of leading networking researchers funded by the European Commission to "provide a place for discussing and exchanging ideas and research trajectories on the future of the Internet architecture and governance building as a foundation of the future networked society." I'm proud they have asked me to speak on Tuesday at their second meeting, alongside such luminaries as MIT's Dr David "end-to-end principle" Clark. You can see my presentation below — any comments (before or after Tuesday morning) most welcome!
What caught my eye, other than the grandiose goal, was the mention of an end-to-end principle. Lo and behold, Ian linked to this over on Wikipedia:
The principle states that, whenever possible, communications protocol operations should be defined to occur at the end-points of a communications system, or as close as possible to the resource being controlled. According to the end-to-end principle, protocol features are only justified in the lower layers of a system if they are a performance optimization; hence, Transmission Control Protocol (TCP) retransmission for reliability is still justified, but efforts to improve TCP reliability should stop after peak performance has been reached.
Aha! Well, that makes sense. This is very close to what I say in this unpublished hypothesis, here:
Hypothesis #5 -- Security Begins at the Application and Ends at the Mind

The application must do most of the work [1]. For really secure systems, only application security is worthwhile. That is simply because most applications are delivered into environments where they have little control or say over how the underlying system is set up.
#5.1 Security below the application layer is unreliable

For security needs, there is little point in relying on (or specifying) IPSec, authentication capabilities, PKI, trusted rings, or similar devices. These things are fine under laboratory conditions, but you have no control once your system leaves your door. Out there in the real world, and even within your own development process, there is just too much scope for people to forget or re-implement parts in ways that break your model.
If you need it, you have to do it yourself. This applies as much to retries and replay protection as to authentication and encryption; in a full secure system you will find yourself dealing with all these issues at the high layer eventually, anyway, so build them in from the start.
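To make "do it yourself" concrete, here is a minimal sketch of application-layer message authentication using only the standard library. The key, the field format, and the `seal`/`unseal` names are all illustrative assumptions, not part of the hypothesis; the point is only that the check lives in the application, not in some layer below it.

```python
import hmac
import hashlib

APP_KEY = b"demo-key-not-for-production"  # illustrative only; real key management is its own problem

def seal(value: str) -> str:
    """Tag a field value at the application layer before it leaves the app."""
    tag = hmac.new(APP_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{tag}"

def unseal(stored: str) -> str:
    """Verify the tag when the value comes back; raise if it was modified."""
    value, _, tag = stored.rpartition("|")
    expected = hmac.new(APP_KEY, value.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("field modified outside the application")
    return value
```

Because both ends of the check are in the application's own code, it keeps working no matter how the underlying transport or storage is (mis)configured.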
Try these quick quizzes. It helps if you close your eyes and think of the question at the deeper levels.
- Is your firewall switched on? How do you know? Or, how do you know when it is not switched on?
- Likewise, is your VPN switched on? How do you know?
- Does TLS provide replay protection [2]?
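On that last quiz: TLS resists replay within a live session, but nothing stops the same request being re-submitted over a fresh connection, so replay protection ends up at the application layer. A hedged sketch follows; the `ReplayGuard` name and API are invented for illustration, and a real deployment would need persistent, shared state.

```python
import secrets
import time

class ReplayGuard:
    """Reject any message whose nonce has already been seen within the window."""

    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.seen = {}  # nonce -> time first seen

    def new_nonce(self):
        """Client side: attach a fresh nonce to each request."""
        return secrets.token_hex(16)

    def accept(self, nonce, now=None):
        """Server side: True the first time a nonce appears, False thereafter."""
        now = now if now is not None else time.time()
        # drop expired entries so memory stays bounded
        self.seen = {n: t for n, t in self.seen.items()
                     if now - t < self.window}
        if nonce in self.seen:
            return False
        self.seen[nonce] = now
        return True
```

The time window bounds how long nonces must be remembered; requests older than the window would also need a timestamp check, which is omitted here for brevity.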
These questions lead to some deeper principles.
Super hint! Now out there and published ... leaving only one more to go.
As an aside, the set of 7 hypotheses covering secure design is around 60% published. As time goes on, I find new information that I try to integrate, which slows down the process. Some of them are a little rough (or, a mess) and some of them fight for the right to incorporate a key point. One day, one day...
Posted by iang at February 15, 2009 01:34 PM | TrackBack

For the fun of it, something from yesterday on the cryptography mailing list:
http://www.garlic.com/~lynn/2009c.html#25 Crypto Craft Knowledge
that mentions several "end-to-end" audit issues ... including doing some reviews where trivial exploits were found ... and being told that they weren't part of the security protocol.
Posted by: Lynn Wheeler at February 15, 2009 02:20 PM

lol... I saw your post, as well as the various attempts to re-define CAs as mathematically provable functions. Actually I should have been working on the "audit" series of posts this evening, but got distracted. I haven't really considered audit from an end-to-end perspective, but it certainly fits. Defining things "inside" and "outside" scope until the model suits our sense of elegance seems to be the common design paradigm.
Posted by: Iang at February 15, 2009 02:29 PM

Having been involved in business-critical dataprocessing for a couple of decades ... clients typically cared whether there was any kind of fault/failure ... and only secondarily (if at all) cared about how/what.
Only focusing on some moderately trivial portion of end-to-end problems ... seemed frequently to be part of "hype" promoting some specific (possibly security) solution or market.
The start of the crypto craft thread was somewhat about the limited breadth of knowledge available to recognize all the possible ways things might fail (though still limited to crypto- or security-specific issues).
In business critical dataprocessing ... an issue was *ALL* possible kinds of faults or failures. *ALL* somewhat translates into "end-to-end" when approaching the subject from a communication or networking perspective.
I've mentioned several times in the past ... we had been asked to consult with a small client/server startup that wanted to do payments on their server; they had this technology they had invented called SSL that they wanted to use.
As part of that effort, we had to go around and look at several of these new things calling themselves Certification Authorities. One of the insights was that many of the principals in these early efforts had come from a mathematical or technical background. Most admitted they were surprised to learn that only about 5% of a CA is technical ... 95% of a CA is bookkeeping, administration, filing, ... traditional business operation.
Mathematical provability is a really small part of the overall operation, and ... raising that issue may sometimes even be misdirection.
For a slightly different perspective on Financial dataprocessing
http://www.garlic.com/~lynn/2008p.html#2 Father of Financial Dataprocessing
old email regarding when he was leaving SJR for Tandem and turning over a lot of his responsibilities to me:
http://www.garlic.com/~lynn/2009c.html#email801016
in this post
http://www.garlic.com/~lynn/2007.html#1
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
oops, finger slip that was:
http://www.garlic.com/~lynn/2007.html#email801016
Confused by the applicability of this particular hypothesis (and in general all of them)...
Are you primarily concerned with over the wire comms, especially outside the 'boundaries' (whatever they are) of an organisation's network?
If not, then would App-to-DB communication fall within its remit? H5 is pretty much broken in this space afaik, since such communication IS dependent on underlying infrastructure/protocols to a great extent ... unless you suggest that the app encrypt everything apart from the SELECT, WHERE, FROM etc. in the SQL ...
Posted by: AC2 at February 16, 2009 03:05 AM

The recent focus on application security is confusing.
You say 'only application security is worthwhile', yet the National Security Agency has observed that current security efforts suffer from the flawed assumption that adequate application-level security can be provided by existing security mechanisms running under insecure mainstream operating systems. They conclude that in reality secure applications require secure operating systems, and that all efforts to provide application-level security without secure operating systems are doomed to fail. This was stated in the following 1998 paper, links to which all seem to have dried up:
The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments, by Peter A. Loscocco, Stephen D. Smalley, Patrick A. Muckelbauer, Ruth C. Taylor, S. Jeff Turner, and John F. Farrell.
With this consideration, a preferred model would probably be along the lines of OS security that monitored into the application stack. The trick, then, would be to provide usability.
Posted by: Rob Lewis at February 17, 2009 11:17 AM