Interesting... reminds me a bit of herd immunity: if some people are paying attention/vaccinated, everyone is protected.
Posted by Adam at May 18, 2010 07:52 PM

I've been talking with some colleagues recently about the trustworthiness of CAs, and we found ourselves very ignorant of how CA breaches are, or would be, detected.
Suppose that a CA incorrectly issued a certificate because an attacker successfully subverted domain validation and spoofed or captured the emails the CA used to authenticate the request. Would this compromise be detected in a routine audit? Would the proposal in this article detect such an attack? I worry that it would require the user to detect the bad cert at the time of use, and may not help when the user finds a problem with a transaction days or weeks later.
This proposal would help detect a MITM appliance like the one in the Soghoian and Stamm paper that you cited a couple of months ago. But a company trying to detect data exfiltration this way may not be as worried about brand or reputation.
Posted by Alan at May 21, 2010 03:37 PM

Alan writes:
> I've been talking with some colleagues recently about the trustworthiness of CAs, and we found ourselves very ignorant of how CA breaches are, or would be, detected.
A lot of this has to do with the hypothesis that the CA is an /Authority/ and is therefore completely trusted to do the right thing. Being an Authority, by definition it won't do the wrong thing, so there is nothing built into the tech to prevent or detect it when it does.
Of course, this is a hypothesis, or claim of the PKI concept, but it does explain why there is relatively little available to detect such breaches.
> Suppose that a CA incorrectly issued a certificate because an attack successfully subverted domain validation and spoofed/captured the emails the CA used to authenticate the request. Would this compromise be detected in a routine audit?
The only way an audit would detect such an issuance is if it tried the subversion itself, and that would probably only work if there were a gaping hole in the implementation. For example, a CA was recently spotted doing no due diligence on names because it passed that check across to the RA; when the RA didn't check either, it was possible to get a cert for any name.
Another way of thinking about this is that the audit checks that the disclosures match the implementation to a reasonable degree. It doesn't check every issuance, every cert.
> Would the proposal in this article detect such an attack?
It would, if the certificate was used and a user spotted that it had switched from one CA to another.
At the moment, because the CA is considered to be an Authority, the browsers downgrade the "brand" of the CA to something more or less unreadable to users (without digging). So, showing the brand in the chrome would substantially change the riskiness of this attack.
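To make that "spotting the switch" idea concrete, here is a minimal sketch of what an automated version of the check could look like: connect to the site, read the issuer of the certificate actually served, and compare it against the issuer remembered from the last visit. The host name and cache-file path are placeholders; this is only an illustration of the idea, not part of the proposal itself.

    import json
    import os
    import socket
    import ssl

    CACHE = "issuer-cache.json"   # remembered issuer per host (placeholder path)

    def current_issuer(host, port=443):
        """Fetch the certificate the site serves and return its issuer organisation."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # 'issuer' is a tuple of RDNs; flatten into a simple dict of attributes
        issuer = dict(rdn[0] for rdn in cert["issuer"])
        return issuer.get("organizationName") or issuer.get("commonName", "unknown")

    def check(host):
        cache = {}
        if os.path.exists(CACHE):
            with open(CACHE) as f:
                cache = json.load(f)
        previous = cache.get(host)
        now = current_issuer(host)
        if previous and previous != now:
            print(f"WARNING: {host} switched CA: {previous} -> {now}")
        else:
            print(f"{host}: certificate issued by {now}")
        cache[host] = now
        with open(CACHE, "w") as f:
            json.dump(cache, f)

    if __name__ == "__main__":
        check("www.example.com")   # e.g. your bank's site

A brand-in-chrome display is the human-friendly version of the same comparison: the user's memory of which CA brand their bank normally shows plays the role of the cache.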
> I worry that it would require the user to detect the bad cert at the time of use and may not help when the user finds a problem with a transaction days or weeks later.
Sure, this is an acknowledged limitation. The question really is whether it is 0% useful, 100% useful, or somewhere in between.
Research seems to indicate that browser warnings are of low utility, perhaps approaching 1%. This is the point about there being 100% false positives: the empirical evidence is that users ignore the alarms because they know they are all "false".
So we don't actually need 100% in order to solve this problem; we could very well be happy with 10% success. Which is to say, if 1 in 10 people spot a brand shift in the certificate and ring the alarm, that might be way better than what we have now ... with phishing, for example.
Also, this is why we have to think in terms of BRAND. It isn't just that the cert might or might not change in front of the user; it is that the user should be able to go to their bank website, check the SSL claim, and see the BRAND of the CA that made it. Users are very good at recognising brands, but hopeless at recognising technical arcana in certificates. In the fullness of time, we'd really want to see the CA's logo.
(This has wider ramifications, because without the CA's representation there, it is in fact the browser that makes the representation. So, if the user does lose a million, and then sues the browser, the browser probably is responsible, because the browser said who the site was. Browser vendors have never to my knowledge commented on their potential liability, but that can be read both ways...)
> This proposal would help detect a MITM appliance like the one in the Soghoian and Stamm paper that you cited a couple of months ago. But a company trying to detect data exfiltration this way may not be as worried about brand or reputation.
I'm not sure how data exfiltration comes into it... the comments here are only about the CA and its brand.
Thanks.
My comment about data exfiltration was a reference to a use case where a company may want to enforce a policy that blocks encrypted communications leaving the company, to try to prevent or detect loss of proprietary data. One control for this policy may be to have the company's forward web proxy to the Internet also act as a MITM for SSL connections: the proxy dynamically generates a MITM SSL certificate issued by a company root CA that all of the employees' computers trust. This use case seems to depend on the opaqueness that your proposal is trying to address, because employees would be less likely to detect this surveillance scenario if the CA is not displayed in the browser chrome.
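To illustrate the dynamic issuance step (the library choice, file names, and function name here are my own assumptions, not anything from the article), a sketch using Python's cryptography package, assuming the proxy already holds the company root key and certificate:

    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Placeholder paths: the company root that every employee machine trusts.
    CA_CERT_FILE = "corp-root.pem"
    CA_KEY_FILE = "corp-root-key.pem"

    def mint_mitm_cert(hostname):
        """Mint a short-lived leaf cert for `hostname`, signed by the corporate root."""
        ca_cert = x509.load_pem_x509_certificate(open(CA_CERT_FILE, "rb").read())
        ca_key = serialization.load_pem_private_key(
            open(CA_KEY_FILE, "rb").read(), password=None)

        leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])
        now = datetime.datetime.utcnow()

        cert = (
            x509.CertificateBuilder()
            .subject_name(subject)
            .issuer_name(ca_cert.subject)      # issued by the company CA, not the real one
            .public_key(leaf_key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=1))
            .add_extension(
                x509.SubjectAlternativeName([x509.DNSName(hostname)]), critical=False)
            .sign(ca_key, hashes.SHA256())
        )
        return cert, leaf_key

    cert, key = mint_mitm_cert("www.example-bank.com")
    print(cert.issuer.rfc4514_string())   # shows the company CA, not the site's real CA

The point for this thread is that the only visible difference from the genuine site is the issuer field, which is exactly what showing the CA brand in the chrome would surface.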
Posted by Alan at May 24, 2010 12:12 AM