August 01, 2010
memes in infosec I - Eve and Mallory are missing, presumed dead
Things I've seen that are encouraging. Bruce Schneier in Q&A:
Q: We've also seen Secure Sockets Layer (SSL) come under attack, and some experts are saying it is useless. Do you agree?
A: I'm not convinced that SSL has a problem. After all, you don't have to use it. If I log on to Amazon without SSL the company will still take my money. The problem SSL solves is the man-in-the-middle attack with someone eavesdropping on the line. But I'm not convinced that's the most serious problem. If someone wants your financial data they'll hack the server holding it, rather than deal with SSL.
Right. The essence is that SSL solves the "easy" part of the problem, and leaves open the biggest part. Before the proponents of SSL say, "not our problem," remember that AADS did solve it, as did SOX and a whole bunch of other things. It's called end-to-end, and is well known as being the only worthwhile security. Indeed, I'd say it was simply responsible engineering, except for the fact that it isn't widely practiced.
OK, so this is old news, from around March, but it is worth declaring sanity:
Q: But doesn't SSL give consumers confidence to shop online, and thus spur e-commerce?
A: Well up to a point, but if you wanted to give consumers confidence you could just put a big red button on the site saying 'You're safe'. SSL doesn't matter. It's all in the database. We've got the threat the wrong way round. It's not someone eavesdropping on Eve that's the problem, it's someone hacking Eve's endpoint.
Which is to say, if you are going to do anything to fix the problem, you have to look at the end-points. The only time you should look at the protocol, and the certificates, is to ask how well they are protecting the end-points. Meanwhile, the SSL field continues to be one for security researchers to make headlines over. It's BlackHat time again:
"The point is that SSL just doesn't do what people think it does," says Hansen, a security researcher with SecTheory who often goes by the name RSnake. Hansen split his dumptruck of Web-browsing bugs into three categories of severity: About half are low-level threats, 10 or so are medium, and two are critical. One example...
Many observers in the security world have known this for a while, and everyone else has felt increasingly frustrated and despondent about the promise:
There has been speculation that an organization with sufficient power would be able to get a valid certificate from one of the 170+ certificate authorities (CAs) that are installed by default in the typical browser and could then avoid this alert ....
But how many CAs does the average Internet user actually need? Fourteen! Let me explain. For the past two weeks I have been using Firefox on Windows with a reduced set of CAs. I disabled ALL of them in the browser and re-enabled them one by one as necessary during my normal usage....
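The starting point of that experiment, the sheer size of the default trust list, is easy to see for yourself. A sketch using Python's standard-library `ssl` module; the count and the names vary by platform and trust store:

```python
import ssl

# Load the platform's default trust store and list the root CAs in it.
ctx = ssl.create_default_context()
ctx.load_default_certs()
cas = ctx.get_ca_certs()

print(len(cas), "root CAs trusted by default on this machine")
for cert in cas[:5]:
    # 'issuer' is a tuple of RDNs; each RDN is a tuple of (name, value) pairs.
    names = dict(pair for rdn in cert.get("issuer", ()) for pair in rdn)
    print(" ", names.get("organizationName") or names.get("commonName", "?"))
```

Any one of those roots can vouch for any site on the web, which is exactly the risk the quoted speculation is about.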
On the one hand, SSL is the brand of security. On the other hand, it isn't the delivery of security; it simply isn't deployed in secure browsing in a way that provides the user the security that was advertised: you are on the site you think you are on. Only as we moved from a benign world to a fraud world, around 2003-2005, has this been shown to matter. Bruce goes on:
Q: So is encryption the wrong approach to take?
A: This kind of issue isn't an authentication problem, it's a data problem. People are recognising this now, and seeing that encryption may not be the answer. We took a World War II mindset to the internet and it doesn't work that well. We thought encryption would be the answer, but it wasn't. It doesn't solve the problem of someone looking over your shoulder to steal your data.
Indeed. Note that comment about the World War II mindset. It is the case that the entire 1990s generation of security engineers was taught from the military textbook. The military assumes its nodes -- its soldiers, its computers -- are safe. And it so happens that when armies fight armies, they do real-life active MITMs against each other to gain local advantage. There are cases of this happening, and oddly enough, they'll even do it to civilians if they think they can (ask Greenpeace). And the economics is sane, sensible stuff, if we bother to think about it: in war, the wire is the threat, the nodes are safe.
However, adopting "the wire" as the weakness, with Mallory the Man-In-The-Middle and Eve the Eavesdropper as "the threat," was a mistake on the Internet. Even in the early 1990s, we knew that the node was the problem. Firstly, ever since the PC, nodes in commercial computing have been controlled by (dumb) users, not professionals (soldiers): users who download shit from the net, not operators of trusted military assets. Secondly, observation of known threats told us where the problems lay: floppy viruses were very popular, and phone-line attacks were about spoofing and gaining entry to an end-point. Nobody was bothering with "the wire," nobody was talking about snooping and spying and listening [*].
The military model was the precise reverse of the Internet's reality.
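For completeness, the one promise the certificate check does make mechanically, "you are on the site you think you are on", reduces to matching the visited hostname against the name in the certificate. A simplified, illustrative sketch; real clients follow the fuller rules of RFC 2818 / RFC 6125:

```python
def hostname_matches(cert_name: str, hostname: str) -> bool:
    """Simplified check of a certificate name against the visited hostname.

    Handles exact matches and single-label wildcards: '*.example.com'
    matches 'www.example.com' but not 'example.com'. Real browsers
    implement the fuller rules of RFC 2818 / RFC 6125.
    """
    cert_labels = cert_name.lower().split(".")
    host_labels = hostname.lower().split(".")
    if len(cert_labels) != len(host_labels):
        return False
    for c, h in zip(cert_labels, host_labels):
        if c == "*":
            continue  # a wildcard label covers exactly one label
        if c != h:
            return False
    return True

print(hostname_matches("*.example.com", "www.example.com"))  # True
print(hostname_matches("*.example.com", "example.com"))      # False
print(hostname_matches("paypal.com", "paypa1.com"))          # False
```

Note what the check does and does not do: it catches a mismatched name, but says nothing about whether the node at either end has been subverted.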
To conclude. There is no doubt about this in security circles: the SSL threat model was all wrong, and consequently the product was deployed badly.
Where the doubt lies is in how long it will take the software providers to realise that their world is upside down. It can probably only happen when everyone with credibility stands up and says it is so. For this, the posts shown here are very welcome. Let's hear more!
[*] This is not entirely true. There is one celebrated case of an epidemic of eavesdropping over ethernets, which was passwords being exchanged over telnet and rsh connections. A case-study in appropriate use of security models follows...
PS: Memes II - War! Infosec is WAR!
Posted by iang at August 1, 2010 04:33 PM
I would like to listen to your thoughts on Strict Transport Security - do you think it's a waste of time?
The WWII mindset is a bit of a cop-out; the problem arises from Shannon's original channel model, which is still taught today as part of information theory.
Shannon was only dealing with the channel in his work and deliberately left out the issue of end points.
And there are several reasons why he might have done this, but two are pertinent:
1, Where is the end point?
2, How can you prevent an "end run" around it?
Back in the 90s I was trying to get across to people that the end point is never where it actually should be, which is "inside your head".
The problem is: how do you know that something past the end point of the encrypted channel has not been subverted?
Back in 2000, with Win95 still the predominant OS, even security experts were having a real hard time accepting that IO drivers etc. could be easily subverted, and an even harder time accepting that IO hardware could be likewise subverted. 10 years later, however, it is taken as read that any mutable component in an end system is fair game for attackers.
After looking at the issue on and off for a number of years prior to that, I had concluded three things:
1, The end point really should be in the head.
2, All transactions (not sessions) should be authenticated.
3, The human brain was not equipped to do it.
Thus I looked at a way of putting the end point on the other side of the human brain.
That is the human became part of the communications channel and the end point was an "out of band" "side channel" most easily done by a token.
This brought forward a number of issues:
1, The token must be effectively immutable.
2, The token must be fully independent of the primary communications channel.
3, The token must be always available.
4, The token must be easy to use.
My original idea, due to all of the above (plus some other considerations), was to use "call back" on a mobile phone: the bank's IVR system would call you back, read out to you what you had typed in to be authenticated, and if you said yes would give you a confirmation number to type in at the PC.
I built a working prototype which worked as advertised; however, in the early part of 97 it became clear that IVR systems were not a solution that was going to be taken up.
So the second design used the Mobile Network Short Message Service, and again it worked in the prototype. However, a limited trial in 97 showed there were issues with SMS being a secondary service on the networks, and thus not a reliable or timely service.
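The core of both prototypes is the same: a short confirmation code cryptographically bound to the exact transaction details, carried over the out-of-band channel. A minimal modern sketch, assuming an HMAC construction with a hypothetical pre-shared per-customer key (not the original design):

```python
import hmac
import hashlib

def confirmation_code(token_key: bytes, transaction: str, digits: int = 6) -> str:
    """Derive a short decimal code bound to the exact transaction text.

    The out-of-band channel (IVR call-back, SMS, token display) carries
    this code; the user enters it only after checking the read-back
    details, so changing the amount or payee changes the code.
    """
    mac = hmac.new(token_key, transaction.encode("utf-8"), hashlib.sha256).digest()
    # Simplified HOTP-style truncation to a human-copyable decimal code.
    value = int.from_bytes(mac[:4], "big") % (10 ** digits)
    return str(value).zfill(digits)

key = b"pre-shared-token-key"  # hypothetical per-customer secret
code = confirmation_code(key, "PAY 100.00 GBP TO ACCT 12345678")
tampered = confirmation_code(key, "PAY 100.00 GBP TO ACCT 99999999")
print(code, tampered)  # with overwhelming probability the two codes differ
```

The point of authenticating the transaction, not the session, is visible here: the code is a function of the payee and amount, so a man-in-the-browser who rewrites the transaction cannot reuse the code the user confirmed.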
And due to issues with smart phones, I would advise anybody thinking about using mobile phones as side-channel tokens to walk away; you are entering a world of hurt (as some banks are starting to find out).
In late 97 / early 98 I switched my attention to "key ring" tokens, as these were becoming usable (think credit-card-sized calculator) and reasonably affordable (sub-10 USD).
These however had an issue in that they involved way too much typing by the user.
By the end of 98 I had started developing a system using "OCR-proof" graphics, which looked like it might significantly reduce the user entry problems. But as we now know, attackers outsource the problem to China and other countries, where people will do the conversion for a few cents in real time, no questions asked.
Thus the problem remains: any end point not in, or on the other side of, the human is going to be a vulnerable end point. Human limitations appear to limit the viability of moving the end point to the brain or beyond...
Shannon describes, in his seminal paper "A Mathematical Theory of Communication" [1948, Bell System Technical Journal 27, pp 379-423], a "channel" as (p381), quoting:
"3. The /channel/ is merely the medium used to transmit the signal from transmitter to receiver. It may be a pair of wires, a coaxial cable, a band of radio frequencies, a beam of light, etc."
This description of channel is broad enough to capture anything between the transmitter and /a/ receiver.
Also note that Shannon states that the information measure of a message is determined by the set of messages that could possibly be sent by the sender. In the information measure, or "entropy", of a message, the information available at the receiver's side doesn't play any role whatsoever. No concept of an "endpoint" there.
Moreover, in his paper "Communication Theory of Secrecy Systems" [same journal, vol. 28, 1949, pp 656-715], Shannon writes: "/Perfect Secrecy/ is defined by requiring of a system that after a cryptogram is intercepted by the enemy the /a posteriori/ probabilities of this cryptogram representing various messages be identically the same as the /a priori/ probabilities of the same messages before the interception."
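In symbols, that definition of perfect secrecy reads:

```latex
% Perfect secrecy: interception tells the enemy nothing about the message.
\[
  \Pr[M = m \mid C = c] \;=\; \Pr[M = m]
  \qquad \text{for every message } m \text{ and every cryptogram } c.
\]
```

Equivalently, in entropy terms, H(M | C) = H(M): seeing the cryptogram leaves the enemy's uncertainty about the message unchanged.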
Again, only the "nature" of the transmitter determines the information measure of the messages sent.
If information at a node can be accessed, in principle or practically, that in itself constitutes a channel by Shannon's definition!
This means that the rather narrow interpretation of a channel you describe doesn't fit with the generalised and abstract notion of a channel as defined by Shannon and doesn't do justice to his theory.
In my opinion, the problem you so eloquently describe is not about endpoints or channels, but about a crypto system attaining perfect secrecy or failing to do so.
A crypto system with perfect secrecy, by definition, defies myriads of parallel and simultaneous enemies. But if the message is of finite length, there is a nonzero probability that one of those enemies guesses the plain text message in a finite amount of time. But does that enemy then "know" the plain text message? Under perfect secrecy, guessing "THE ENEMY ATTACKS AT DAWN" is as probable as "ROMEO LOVES JULIET". It reminds me of Jorge Luis Borges's parable of "The Library of Babel".