Daniel wrote in comments a month or so back about the need to put the CA's brand on the chrome, so all can see who makes the statement:
Assume for the moment that there is a real interest in fixing this issue (there isn't, but I'll play along). Andy is right that it isn't going to do much good because, in essence, users don't care.
The fundamental problem with this security scheme is that it requires some action on the part of the consumer. But consumers aren't interested in the bother.
This is the accepted wisdom of the community that builds these tools. Unfortunately it is too simple, and the sad reality is that this view is dangerously wrong, but self-perpetuating. That users do not follow the advice is not evidence that they are stupid. For a longer discussion, see Cormac Herley's paper: "So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users." The title is perhaps self-referential; if it takes you a while to work out what it is saying, you'll appreciate how consumers feel :-)
In short, it is not that consumers aren't interested in the bother; it's that they reject bad advice. And they're right to do so.
So there are two paths here. One is to improve the advice /up to the point where it is rational for users to pay attention/, which you'll recognise is a very hard target. The other is to remove the advice entirely and fix the model so that it represents a better trade-off (e.g., there is only one mode, and it is secure). As far as the secure browser architecture goes, that second path is pretty much impossible because it relies on too many external components, ones which will not move unless we've also figured out how to start and stop earthquakes, volcanoes and tsunamis at will.
So we are left with improving the advice, itself a very hard target. Let's try that:
Imagine the following situation. You walk into your local bank, but in order to withdraw any money you need to do the following: interview the guard at the door to make sure he really works for the bank, interview the teller to make sure he really works for the bank, and then set at least 10% of the money you withdraw on fire so you can watch it burn and see whether or not it is fake.
Right, this is a common problem. The mistake you are making is treating the majority view as the design specification. In this case, even if the majority ignore the information, we don't need to follow their lead in order to redesign the product.
The reason is that the minority can have a sufficient effect to achieve the desired result. This is what we call Open Governance: the information is put out there, but only a small minority look at any particular subset. The crowd in aggregate looks at all of it, but individually, specialisation takes root and becomes the norm.
Let's step outside that context and try another. Consider a police officer's badge. It's got a number on it, and often a name as well. When the police officer busts some troublemaker, the perp likely does not notice the badge, nor the number. 99% likely, because the perp doesn't need to know: he's busted, and it matters little by whom. So what's the point?
The point is, 1% will notice the badge number! And that's enough to cause the police -- that officer and all others -- to be cautious. To follow the rules. They don't know beforehand who's noting these things down, or not, and they don't need to. They just need to know that bad behaviour can be spotted, and the closer behaviour gets to routinely bad, the more likely it is that the number will be noted.
Same with your bank guard. You don't have to interview him because the teller will. And if not, someone else in the branch. And if not them, some other customer will look.
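To put a rough number on why a small minority suffices, here is a toy calculation, a sketch in Python with illustrative figures of my own (the 1% looking rate and the crowd sizes are assumptions, not data from the post): if each of N observers independently checks with probability p, the chance that nobody checks collapses as N grows.

```python
# Toy deterrence arithmetic (illustrative numbers, not from the post):
# if each of N observers independently notes the badge number with
# probability p, the chance that *nobody* does shrinks fast as N grows.
p = 0.01  # assume only 1% of observers actually look
for n in (10, 100, 1000):
    nobody_looks = (1 - p) ** n
    print(f"N={n:4d}: P(no one notices) = {nobody_looks:.6f}")
# N=  10: P(no one notices) = 0.904382
# N= 100: P(no one notices) = 0.366032
# N=1000: P(no one notices) = 0.000043
```

At a 1% looking rate, a crowd of a thousand leaves essentially no chance of misbehaving unobserved, which is exactly the deterrent the badge number relies on.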
Welcome to Open Governance. This is a concept where the governance of the thing, whatever it be (a CA, a bank, a government, a copper) is done by all of us, the world, not by "some special agency." Each of us on the net has the same chance to play in this game -- to govern the big bad player -- but only a very few of us actually govern any particular thing in question.
Let's come back to our context and consider CAs. How are these governed? Well, they publish CPSs, they get audited by auditors, and the audit is checked over by third-party vendors.
For example, we've seen audit reports that totally exclude important issues from consideration. And nobody noticed beforehand! Which indicates that whatever is being done, whatever is being written, it isn't being verified or understood. Which more or less casts doubt on all the prior due diligence done over CAs.
This is one reason why Mozilla decided to bring in more open governance ideas. There was a recognition that the old mid-1990s CA audit model wasn't providing a reliably solid answer. There was at least some smoke and mirrors, there were criticisms of abuse, and those criticisms weren't getting answered. More was needed, but not more of the same: more alternative governance.
So Mozilla put in place an open list (you can join), published all new requests from CAs, and proposed them for open review (section 16 of the policy). A few people read these things. Not many, because it is hard work and takes a lot of time. But it's a start; we can't grow these things on trees. A forest starts with a single tree.
The brand name on the chrome is the same thing. We might predict that 99% of users won't look at it. But 1% will. And we also know that almost all computer users have someone experienced they turn to for help, and those people have a shot at knowing what the brand is about.
The power of the brand on the chrome as a security feature hangs on exactly that dynamic: the CA doesn't know who is looking, but it knows that its brand is now totally tied to the results in the minds of those who are. This is powerful. Any marketing person will tell you that a threat to the brand is far more important than a deviation from a policy. CAs will fiddle their policies and practices in a heartbeat, but they'll not fiddle their brand.
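For the technically curious, the datum in question already sits in every certificate the browser verifies. Here is a minimal sketch in Python (purely illustrative and my own construction: the function issuer_brand is hypothetical, and a real browser would read the issuer from its own TLS stack rather than reconnecting) of pulling out the issuing CA's name that a brand-on-the-chrome display would surface:

```python
# Minimal sketch: connect to a site over TLS and report the name of
# the CA that issued its certificate -- the "brand" in question.
import socket
import ssl

def issuer_brand(host: str, port: int = 443) -> str:
    """Return the organisation name of the CA that issued the site's certificate."""
    ctx = ssl.create_default_context()  # verifies the chain against the root store
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # the verified leaf certificate, as a dict
    # 'issuer' is a tuple of RDN tuples, e.g. ((('organizationName', 'Example CA'),), ...)
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    return issuer.get("organizationName") or issuer.get("commonName", "unknown")

print(issuer_brand("www.mozilla.org"))  # prints the issuing CA's brand name
```

No new infrastructure is needed; the brand is already in the issuer field of every certificate, waiting to be shown.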
There is an old saying "trust but verify". The problem is that this is a contradiction in terms. Trust means precisely that I don't have to verify. If I have to verify every transaction to see if the money is good, that's not trust. If I have to spy on my wife all the time to see if she's cheating, that's not trust.
Asking the user to verify, when what the user wants to do is trust, is a design failure that no amount of coding is going to fix.
Actually, the expression is dead right; trust can only come from verification, and repeated verifications at that. However, those verifications will have happened in the past; we might, for example, point to the fact that 99.999999% of all certificates issued have never caused a problem. That's a million verifications, right there.
When you say you don't have to verify, you're really saying you can take a risk this time. But there will come a time when that will rebound. Trust without verification is naïveté.
But what we can do is outsource and share who does the verifying. And that's what Brand on the Chrome is about: outsourcing and sharing the verification of the CAs' business practices to the crowd.