The Grnch asks, in all honesty:
> What is the point of encrypting information that is publicly visible?
To which the answer is:
To remove the security weakness of the decision.
This weakness is the decision required to determine what is "publicly visible" or not. Or, more generally, what is sensitive or not.
Unfortunately, there is no algorithm to determine those two sets. Software can't make that decision, so it falls to humans. Unfortunately, humans have no good algorithm either -- they make quick value judgements, they make bad value judgements, or they turn the whole thing off. Worse still, the decision often falls to software engineers (e.g., whether to use HTTPS), and not only are engineers poor at making such judgements, they don't even have the local contextual information to inform them. That is, they have no clue what the user wants protected.
The only strategy then is to encrypt everything, all the time. This feeds into my third hypothesis:
There is only one mode, and it is secure.
I'm being very loose here in the use of the word "secure", but suffice it to say that we include encryption in the definition. It also feeds into another hypothesis of mine:
Only end-to-end security is secure.
For the same reasons ... if we introduce a lower layer security mechanism, we again introduce a decision problem. Following on from the above, we can't trust the software to decide whether to encrypt or not, because it has no semantic wisdom with which to decide. And we can't trust the user...
Which gives rise to the knowledge problem. Imagine a piece of software that has a binary configuration for own-security versus rely-on-lower-layers. A button that says "encrypt / no-encrypt" which you set according to whether the lower layer has its own security, for example. There is, so the theory goes, no point in encrypting if the lower layer does it.
But, how does it know? What can the software do to reliably determine whether the lower layer has encryption? Consider IPSec ... how do we know whether it is there? Consider your firewall sysadmin ... who came in on the weekend and tweaked the rules ... how do we know he didn't accidentally open something critical up? Consider online account access through a browser ... how do we know that the user has secured their operating system and browser before opening Internet Explorer or Firefox?
You can't. We can't, I can't, nobody can rely on these things. For these reasons, security models built on "just use SSL" or similar are somewhere between fatally flawed and nonsense; models that outsource security to some other layer just don't cut the mustard.
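The knowledge problem above can be put in concrete terms. A minimal sketch, assuming a hypothetical application config (the flag name is invented for illustration):

```javascript
// Hypothetical config: the flag asserts that some lower layer (IPSec, a VPN,
// an SSL-terminating proxy) already encrypts the traffic. The program has no
// way to verify the assertion -- it is pure operator belief, and it can go
// silently stale (the weekend firewall tweak, a dropped tunnel).
const config = {
  lowerLayerEncrypts: true, // unverifiable claim, set at deployment time
};

// The only policy that removes the decision problem is to ignore the flag.
function shouldEncrypt(_config) {
  return true; // one mode, and it is secure
}
```

The flag carries no evidence, only a claim, which is why the function ignores it.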
But, the infrastructure is in place, which counts for something. So are there some tweaks that we can put in place to at least make it reasonably secure, whatever that means? Yes, they include these fairly minor tweaks:
Spread the word! This won't stop phishing, but it will make it harder. And, it gets us closer to doing the hard re-engineering ... such as securing against MITB (man-in-the-browser) attacks.
PAIN security acronym:
P ... privacy (sometimes CAIN & confidentiality)
A ... authentication
I ... integrity
N ... non-repudiation
original (and possibly still one of the major) use for SSL was hiding account numbers as part of e-commerce ... long winded archeological reference
http://www.garlic.com/~lynn/2006u.html#56
a large part of the issue with account numbers is diametrically opposing requirements.
frequently just knowledge of account numbers can effectively be used in various kinds of replay attacks for fraudulent transactions ... resulting in the requirement for account numbers to be kept confidential and never divulged.
at the same time, account numbers are required in scores of business processes, and as such, are required to be readily available. my oft repeated comment is that as a result of the diametrically opposing requirements, the planet could be buried under miles of encryption and still be unable to prevent account number leakage.
somewhat related thread
http://www.garlic.com/~lynn/aadsm26.htm#6
http://www.garlic.com/~lynn/2006v.html#1
http://www.garlic.com/~lynn/2006v.html#2
as mentioned in the above, the x9.59 financial standard changed the paradigm ... eliminating the requirement for keeping account number confidential ... effectively by using (consistently applying end-to-end) authentication and integrity for security ... as part of "armoring" all transactions (instead of privacy/confidentiality to achieve security, authentication and integrity was used for security).
of course, part of this was studying what the threats were and why ... and creating countermeasures for the actual threats.
Posted by: Lynn Wheeler at November 24, 2006 05:36 PM
somewhat related news item, hot off the presses
Michigan Credit Card Mystery Deepens
http://www.consumeraffairs.com/news04/2006/11/mi_card_fraud.html
from above:
Numerous incidents involving breaches of bank security also demonstrate that there are major vulnerabilities at every level of a plastic transaction, from withdrawing money to buying goods online.
... snip ...
and/or can you say security proportional to risk?
http://www.garlic.com/~lynn/2001h.html#61
and then there are a couple recent posts about insider threats
http://www.garlic.com/~lynn/2006v.html#2 New attacks on the financial PIN processing
http://www.garlic.com/~lynn/aadsm26.html#7 Citibank e-mail looks phishy
I was busily coding away today using a little javascript and such, and found by accident that all browsers except MS IE don't warn users if you visit an https website and there is a script that points to an http URI...
So much for them all being more secure than MSIE :)
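The mixed-content check described here can be sketched as a pure function over script URIs, so the logic is testable outside a browser (the function name is invented):

```javascript
// Rough sketch: on an https page, flag any script loaded over plain http,
// since it silently downgrades the page's security.
function findInsecureScripts(pageScheme, scriptSrcs) {
  if (pageScheme !== 'https:') return []; // only https pages are downgraded
  return scriptSrcs.filter(src => src.startsWith('http:'));
}

// In a browser one would feed it the live document, e.g.:
//   findInsecureScripts(location.protocol,
//     Array.from(document.scripts, s => s.src));
```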
Posted by: Duane at November 26, 2006 11:04 AM
"Only end-to-end security is secure."
Not only that, but I think this can be made stronger: only application level end-to-end security can be secure. You mention IPsec, "Consider IPSec ... how do we know whether it is there?" which really raises a more critical question: how do we know it is there and working properly?
If you rely upon lower level security, you are trusting that it really provides security. How do you know that it is really AES-256 and not ROT-13 claiming to be IPsec? The sad, painful and expensive answer is that you cannot know without doing the exact same validation that you would need to implement your own application level code using AES-256. Which negates all the advantages of having the easy-to-use lower level security.
Building secure communications is painful and hard. It is essentially hard and there are no shortcuts.
Posted by: Pat Farrell at November 26, 2006 12:08 PM
To Duane: It's not a bug, it's a setting; usually people check "don't bother me with this again" when they see it.
http://www.pengdows.com/images/firefox.png
(Editor's note: Alaric's picture is above.)
Posted by: Alaric at November 26, 2006 02:25 PM
This is good stuff. I think it would be useful to have a documentary film, an educational film, to explain this. It could be done almost exactly like a powerpoint, with a written script, alternating between narration and some simple diagrams.
The larger trend in society is that people are increasingly turned off by technology. In the 1960s there were a lot more shop and technology classes in high schools, for example. But successive generations have learned that you're a chump to learn engineering, software, etc. It's a life of being laid off and outsourced, as everything valuable you do migrates to corporate ownership and the rest goes to low-cost countries.
So the result is that fewer and fewer people seem to have any familiarity with the idea of a software stack, etc. To the public, the computer as a whole is their only unit of analysis, and of course they don't trust it; they know it betrays their own interests in so many ways. This ignorance is the problem that the documentary clip would address.
Posted by: Todd at November 28, 2006 06:49 AM