Daniel wrote in comments a month or so back about the need to put the CA's brand on the chrome, so all can see who makes the statement:
Assume for the moment that there is a real interest in fixing this issue (there isn't, but I'll play along). Andy is right that it isn't going to do much good because, in essence, users don't care. The fundamental problem with this security scheme is that it requires some action on the part of the consumer. But consumers aren't interested in the bother.
This is the accepted wisdom of the community that builds these tools. Unfortunately it is too simple, and the sad reality is that this view is dangerously wrong, but self-perpetuating. Absence of respect is not evidence that the actors are stupid. For a longer discussion, see this paper: "So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users." The title is maybe self-referential; if it takes you a while to work out what it is saying, you'll appreciate how consumers feel :-)
In short, it is not that consumers aren't interested in the bother, it's that they reject bad advice. And they're right to do so.
So there are two paths here, one is to improve the advice /up to the point where it is rational for users to pay attention/ which you'll recognise is a very hard target. Or, remove the advice entirely and fix the model so that it represents a better trade-off (e.g., there is only one mode, and it is secure). As far as the secure browser architecture goes, that second path is pretty much impossible because it relies on too many external components, ones which will not move unless we've also figured out how to start and stop earthquakes, volcanoes and tsunamis at whim.
So we are left with improving the advice, itself a very hard target. Let's try that:
Imagine the following situation. You walk into your local bank, but in order to withdraw any money you need to do the following: interview the guard at the door to make sure he really works for the bank, interview the teller to make sure he really works for the bank, and then set at least 10% of the money you withdraw on fire so you can watch it burn and see whether it is fake or not.
Right, this is a common problem. The mistake you are making is assuming that the majority view determines how to design the product. In this case, even if the majority ignore the information, we don't need to follow their view in order to redesign the product.
The reason for this is that the minority can have a sufficient effect to achieve the desired result. This is what we call Open Governance: the information is put out there, but only a small minority look at any particular subset. The crowd in aggregate looks at all of it, but individually, specialisation takes root and becomes the norm.
Let's step outside that context and try another. Consider a police officer's badge. It's got a number on it. Often a name, as well. When the police officer busts some troublemaker, the perp likely does not notice the badge, nor the number. 99% likely, because the perp doesn't need to know: he's busted, and it matters little by whom. So what's the point?
The point is, 1% will notice the badge number! And that's enough to cause the police -- that officer and all others -- to be cautious. To follow the rules. They don't know beforehand who's noting these things down, or not, and they don't need to. They just need to know that bad behaviour can be spotted, and as behaviour gets closer to routinely bad, it is more likely that the number will be noted.
Same with your bank guard. You don't have to interview him because the teller will. And if not, someone else in the branch. And if not them, some other customer will look.
Welcome to Open Governance. This is a concept where the governance of the thing, whatever it be (a CA, a bank, a government, a copper) is done by all of us, the world, not by "some special agency." Each of us on the net has the same chance to play in this game -- to govern the big bad player -- but only a very few of us actually govern any particular thing in question.
Let's go back closer into context and consider CAs. How are these governed? Well, they publish CPSs, they get audited by auditors, and the audit is checked over by third-party vendors.
For example, we've seen audit reports that totally exclude important issues from consideration. And, nobody noticed beforehand! Which indicates that whatever is being done, whatever is being written, it isn't being verified nor understood. Which more or less casts doubt on all the prior due diligence done over CAs.
This is one reason why Mozilla decided to bring in more open governance ideas. There was a recognition that the old mid-1990s CA audit model wasn't providing a reliably solid answer. There was at least some smoke and mirrors, some criticism of abuse, and these criticisms weren't getting answered. More was needed, but not more of the same; rather, more alternate governance.
So Mozilla put in place an open list (you can join), published all new requests from CAs, and proposed them for open review (section 16 of the policy). There are a few people who read these things. Not many, because it is hard work, and it takes a lot of time. But it's a start; we can't grow these things on trees. A forest starts with a single tree.
The brand name on the chrome is the same thing. We might predict that 99% of the users won't look at it. But 1% will. And, we also know that almost all computer users have someone experienced they turn to for help, and those people have a shot at knowing what the brand is about.
The brand on the chrome works as a security feature for exactly that reason: the CA doesn't know who is looking, but it knows that it is now totally tied, in the minds of those who are looking, to the results. This is powerful. Any marketing person will tell you that a threat to the brand is far more important than a deviation from a policy. CAs will fiddle their policies and practices in a heartbeat, but they'll not fiddle their brand.
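To make that concrete, here is a minimal sketch (Python, not any browser's actual code) of the one piece of information that "brand on the chrome" would surface: the organisation name of the CA that vouched for the site, pulled straight out of the certificate. The function name and output format are mine, purely for illustration.

    # Minimal sketch, not a browser implementation: extract the issuer's
    # organisation name -- the CA's "brand" -- from a site's certificate.
    import socket
    import ssl

    def issuer_brand(hostname, port=443):
        """Return the organisation name of the CA that signed the site's cert."""
        ctx = ssl.create_default_context()  # uses the platform's trusted roots
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()  # parsed dict, populated after validation
        # 'issuer' is a tuple of RDNs, e.g. ((('organizationName', 'Example CA'),), ...)
        issuer = dict(rdn[0] for rdn in cert.get("issuer", ()))
        return issuer.get("organizationName", "(unknown CA)")

    if __name__ == "__main__":
        print(issuer_brand("www.example.com"))

That string is all the "brand" is; the argument here is about who gets to see it, and how prominently.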
There is an old saying "trust but verify". The problem is that this is a contradiction in terms. Trust means precisely that I don't have to verify. If I have to verify every transaction to see if the money is good, that's not trust. If I have to spy on my wife all the time to see if she's cheating, that's not trust. Asking the user to verify, when what the user wants to do is trust, is a design failure that no amount of coding is going to fix.
Actually, the expression is dead-right; trust can only come from verification, and repeated verifications at that. However, those verifications will have happened in the past; we might for example point to the fact that 99.999999% of all certificates issued have never caused a problem. That's a million verifications, right there.
When you say you don't have to verify, you're really saying you can take a risk this time. But there will come a time when that will rebound. Trust without verification is naïveté.
But, what we can do is outsource and share who does the verifying. And that's what Brand on the Chrome is about; outsourcing and sharing the verification of the CAs' business practices to the crowd.
Posted by iang at May 18, 2010 06:49 AM

Interesting... reminds me a bit of herd immunity -- if some people are paying attention/vaccinated, everyone is protected.
Posted by: Adam at May 18, 2010 07:52 PM

I've been talking with some colleagues recently about concerns of the trustworthiness of CAs, and we found ourselves very ignorant of how CA breaches are or would be detected.
Suppose that a CA incorrectly issued a certificate because an attack successfully subverted domain validation and spoofed/captured the emails the CA used to authenticate the request. Would this compromise be detected in a routine audit? Would the proposal in this article detect such an attack? I worry that it would require the user to detect the bad cert at the time of use and may not help when the user finds a problem with a transaction days or weeks later.
This proposal would help detect a mitm appliance like the one in the Soghoian and Stamm paper that you cited a couple months ago. But if one's company is trying to detect data exfiltration this way, they may not be as worried about the brand or reputation image.
Posted by: Alan at May 21, 2010 03:37 PM

Alan writes:
> I've been talking with some colleagues recently about concerns of the trustworthiness of CAs, and we found ourselves very ignorant of how CA breaches are or would be detected.
A lot of this has to do with the hypothesis that the CA is an /Authority/ and therefore is completely trusted to do the right thing. By definition of being an Authority, it is assumed it won't do the wrong thing, so there is nothing built into the tech to detect or prevent a breach.
Of course, this is a hypothesis, or claim of the PKI concept, but it does explain why there is relatively little available to detect such breaches.
> Suppose that a CA incorrectly issued a certificate because an attack successfully subverted domain validation and spoofed/captured the emails the CA used to authenticate the request. Would this compromise be detected in a routine audit?
The only way an audit would detect such an issuance is if it tried the subversion itself, which probably implies it would be a gaping hole in the implementation. For example, there was a CA recently that was spotted doing no diligence on names, because it passed that across to the RA. When the RA didn't check, it was possible to get any cert.
Another way of thinking about this is that the audit checks that the disclosures match the implementation to a reasonable degree. It doesn't check every issuance, every cert.
> Would the proposal in this article detect such an attack?
It would, if the cert was used and spotted by a user as having switched from one CA to another.
At the moment, because the CA is considered to be an Authority, the browsers downgrade the "brand" of the CA to be more or less unreadable to users (without digging). So, showing the brand on the chrome would substantially change the riskiness of this attack.
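As a rough illustration of the kind of check a watchful user, or a tool acting on their behalf, could make: remember which CA brand vouched for a site last time, and raise an alarm when it changes. A switch is not proof of mis-issuance (sites do change CAs), but it is exactly the signal the 1% could act on. The pin file location and function name below are hypothetical; the brand string could come from something like the issuer_brand helper sketched above.

    # Rough illustration only: remember which CA brand signed a site's
    # certificate last time, and flag a change.
    import json
    import os

    PIN_FILE = os.path.expanduser("~/.ca_brand_pins.json")  # hypothetical location

    def check_brand(hostname, current_brand):
        pins = {}
        if os.path.exists(PIN_FILE):
            with open(PIN_FILE) as f:
                pins = json.load(f)
        previous = pins.get(hostname)
        if previous is not None and previous != current_brand:
            print(f"ALERT: {hostname} was vouched for by {previous!r}, now {current_brand!r}")
        pins[hostname] = current_brand
        with open(PIN_FILE, "w") as f:
            json.dump(pins, f, indent=2)

    # e.g. check_brand("www.mybank.example", issuer_brand("www.mybank.example"))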
> I worry that it would require the user to detect the bad cert at the time of use and may not help when the user finds a problem with a transaction days or weeks later.
Sure, this is an acknowledged limitation. The question really is whether it is 0% useful, 100% useful or somewhere in between.
Research seems to indicate that browser warnings are of low value, perhaps approaching 1% of utility. This is the comment about there being 100% false positives; the empirical evidence is that users ignore the alarms, because they know they are all "false".
So, we don't actually need to get 100% in order to solve this problem. We could very well be happy with 10% success. Which is to say, if 1 in 10 people spot a brand shift in the certificate, and ring the alarm, this might be way better than we have now ... with phishing for example.
Also, this is why we have to think in terms of BRAND. It isn't that the cert might or might not change in front of the user; it is that the user should be able to go to their bank website, check the SSL claim, and see the BRAND of the CA that made that claim. Users are very good at recognising brands, but hopeless at recognising technical arcana in certificates. In the fullness of time, we'd really want to see the CA's logo.
(This has wider ramifications, because without the CA's representation there, it is in fact the browser that makes the representation. So, if the user does lose a million, and then sues the browser, the browser probably is responsible, because the browser said who the site was. Browser vendors have never to my knowledge commented on their potential liability, but that can be read both ways...)
> This proposal would help detect a mitm appliance like the one in the Soghoian and Stamm paper that you cited a couple months ago. But if one's company is trying to detect data exfiltration this way, they may not be as worried about the brand or reputation image.
I'm not sure how data exfiltration comes into it... The comments here are only about the CA and its brand.
Thanks.
My comment about data exfiltration was a reference to a use case where a company may want to enforce a policy that blocks encrypted communications out of the company to try to prevent or detect proprietary data loss. One control for this policy may be to have the company's forward web proxy to the Internet also be a mitm for SSL connections. The forward proxy would dynamically generate a mitm SSL certificate that would be issued by a company root CA that all of the employees' computers trust. This use case seems to depend on the opaqueness that your proposal is trying to address because employees would be less likely to detect this surveillance scenario if the CA is not displayed in the browser chrome.
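For concreteness, here is a rough sketch of the minting step in that scenario, using the third-party 'cryptography' package for Python; the corporate root certificate and key are placeholders, not any real product's code. The point is that every leaf certificate such a proxy produces carries the company CA as its issuer, which is precisely the brand a chrome display would reveal to the employee.

    # Sketch of the dynamic certificate minting described above, under the
    # assumption of a company-internal root CA that employee machines trust.
    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    def mint_mitm_cert(hostname, corp_root_cert, corp_root_key):
        """Generate a leaf cert for `hostname`, signed by the corporate root."""
        leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])
        now = datetime.datetime.utcnow()
        cert = (
            x509.CertificateBuilder()
            .subject_name(subject)
            .issuer_name(corp_root_cert.subject)   # issuer = the company CA's brand
            .public_key(leaf_key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=1))
            .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                           critical=False)
            .sign(corp_root_key, hashes.SHA256())
        )
        return cert, leaf_key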
Posted by: Alan at May 24, 2010 12:12 AM