This Ars Technica article explores what happens when a CA-supplied certificate is used to mount an MITM attack on an SSL connection protecting online banking or the like. In the secure-lite model that emerged after the real-estate wars of the mid-1990s, consumers were told to click on their tiny padlock to check the cert:
Now, a careful observer might be able to detect this. Amazon's certificate, for example, should be issued by VeriSign. If it suddenly changed to be signed by Etisalat (the UAE's national phone company), this could be noticed by someone clicking the padlock to view the detailed certificate information. But few people do this in practice, and even fewer people know who should be issuing the certificates for a given organization.
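The padlock check described in the quote can be automated. As a rough sketch (Python standard library only; the function names and the sample hostname are my own illustration, not anything from the article), one can pull a server's certificate over TLS and read off the issuing organization:

```python
import socket
import ssl

def issuer_organization(cert: dict) -> str:
    """Extract the issuer's organizationName from a certificate dict
    in the format returned by ssl.SSLSocket.getpeercert()."""
    for rdn in cert.get("issuer", ()):
        for key, value in rdn:
            if key == "organizationName":
                return value
    return "unknown"

def fetch_issuer(host: str, port: int = 443) -> str:
    """Connect to a TLS server and report who issued its certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return issuer_organization(tls.getpeercert())

# e.g. fetch_issuer("www.amazon.com") would name the issuing CA
```

This only tells you who issued today's certificate; as the article points out, you would still need to know who *should* have issued it, and almost nobody does.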
Right, so where does this go? Well, people don't notice because they can't. Put the CA on the chrome and people will notice. What then?
A switch in CA is a very significant event. Jane Public might not be able to do anything about it, but if a customer of Verisign's was MITM'd by a cert from Etisalat, that is something that affects Verisign. We might reasonably expect Verisign to be interested in it. As it affects the chrome, and as customers might get annoyed, we might even expect Verisign to treat this as an attack on its good reputation.
And that's why putting the brand of the CA onto the chrome is so important: it's the only real way to bring pressure to bear on a CA to get it to lift its game. Security, reputation, sales: these things are all on the line when the public has a handle to grasp.
When the public has no handle on what is going on, the deal falls back into the shadows. There is no security there; in the shadows we find audit, contracts, outsourcing. Got a problem? Shrug. It doesn't affect our sales.
So, what happens when a CA MITM's its own customer?
Even this is limited; if VeriSign issued the original certificate as well as the compelled certificate, no one would be any the wiser. The researchers have devised a Firefox plug-in that should be released shortly that will attempt to detect particularly unusual situations (such as a US-based company with a China-issued certificate), but this is far from sufficient.
Arguably, this is not an MITM, because the CA is the authority (not the subscriber) ... but exotic legal arguments aside, we clearly don't want it. While it goes on, what we need is software like whitelisting, like Conspiracy, and like the other ideas floating around, to deal with it.
And, we need the CA-on-the-chrome idea so that the responsibility aspect is established. CAs shouldn't be able to MITM other CAs. If we can establish that, with teeth, then the CA-against-itself case is far easier to deal with.
Posted by iang at March 29, 2010 11:20 PM | TrackBack

An OpenSSH-like warning when the certificate changes would be a welcome addition. The warning could depend on the exact change, with a bigger warning for a CA change. Of course, normal users would click through it, but at least security-savvy users could be protected and potentially raise an alert.
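The OpenSSH-style idea in the comment above is essentially trust-on-first-use pinning: remember the certificate fingerprint and issuer the first time a host is seen, and escalate the warning when the issuer changes. A minimal sketch in Python (the pin-file format, function names, and verdict strings are my own assumptions, not the commenter's design):

```python
import hashlib
import json
import os

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def check_pin(host: str, der_cert: bytes, issuer: str,
              pin_file: str = "cert_pins.json") -> str:
    """Compare this host's certificate against what we saw last time.
    Trust on first use; warn loudest when the issuing CA changes."""
    pins = {}
    if os.path.exists(pin_file):
        with open(pin_file) as f:
            pins = json.load(f)

    seen = pins.get(host)
    new = {"fingerprint": fingerprint(der_cert), "issuer": issuer}

    if seen is None:
        verdict = "first-seen"                            # trust on first use
    elif seen == new:
        verdict = "ok"                                    # unchanged
    elif seen["issuer"] != issuer:
        verdict = "WARNING: issuer changed"               # the loudest alarm
    else:
        verdict = "notice: certificate changed, same issuer"

    # A real tool would prompt before re-pinning after a warning;
    # this sketch records the new certificate unconditionally.
    pins[host] = new
    with open(pin_file, "w") as f:
        json.dump(pins, f)
    return verdict
```

As the comment concedes, most users would click through even this; the point is that the change becomes observable at all.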
Posted by: Miguel Lourenco at March 30, 2010 05:19 AM

We have a student who surveyed this last year. He implemented something very similar to what you are talking about here. The paper, published at SBSEG'09, is here:
http://dl.dropbox.com/u/142722/Papers/sbseg09/wticgST2a3.pdf
Posted by: Jean Martina at March 30, 2010 07:36 AM

Do you think this *gives* the user a chance to detect it, or do you think it will actually lead to a reduction in the problem?
Much has been written about user security indicators, and without going crazy with references, a quick search through the literature shows that your idea probably/definitely won't work, because user-visible security indicators just don't work this way.
Posted by: Andy Steingruebl at March 30, 2010 10:38 AM

Assume for the moment that there is a real interest in fixing this issue (there isn't, but I'll play along). Andy is right that it isn't going to do much good because, in essence, users don't care.
The fundamental problem with this security scheme is that it requires some action on the part of the consumer. But consumers aren't interested in the bother.
Imagine the following situation. You walk into your local bank, but in order to withdraw any money you need to do the following: interview the guard at the door to make sure he really works for the bank, interview the teller to make sure he really works for the bank, and then set at least 10% of the money you withdraw on fire so you can watch it burn and see whether it is fake.
There is an old saying "trust but verify". The problem is that this is a contradiction in terms. Trust means precisely that I don't have to verify. If I have to verify every transaction to see if the money is good, that's not trust. If I have to spy on my wife all the time to see if she's cheating, that's not trust.
Asking the user to verify, when what the user wants to do is trust, is a design failure that no amount of coding is going to fix.
There have been a number of ongoing discussions. The latest:
In SSL We Trust? Not Lately
http://www.darkreading.com/blog/archives/2010/04/trust_in_ssl_st.html
recent comments:
http://www.garlic.com/~lynn/2010g.html#79
as to "trust but verify" ... an old (financial cryptography) Audits VII post:
http://www.garlic.com/~lynn/2009s.html#45
also here:
http://financialcryptography.com/mt/archives/001131.html
mentioning a relative who spent a decade at DTRA in treaty compliance and on-site inspection.
a more recent DTRA post:
http://www.garlic.com/~lynn/2010b.html#47
yesterday a new treaty was signed (it still has to be ratified)
Posted by: Lynn Wheeler at April 9, 2010 10:29 AM

Client-side certificate verification is all I have to say.
Posted by: Anon at April 21, 2010 10:42 AM

You should check out the Monkeysphere project (http://web.monkeysphere.info), which aims to re-engineer this problem around a decentralized and distributed trust model.
Posted by: micah at April 21, 2010 12:19 PM