July 24, 2004

In Search of Eve - the upper boundary on Mallory

The primary reason for looking at threats is to develop a threat model [1]. This model then feeds into a security model, which in turn decides which threats we can afford to deal with, and how.

But I have a secondary reason for looking at threats. An ulterior motive, as it were. I'm looking for reports of eavesdropping. Here's why. It's not the normal reason.

For the sake of today's polemic, on-the-wire threats divide into a few broad classes: denial of service, traffic analysis, eavesdropping, and the man-in-the-middle attack (MITM). Let's ignore the first two [2] and look at how the HTTPS protocol addresses eavesdropping, committed by Eve, and the MITM, committed by Mallory.

To defend itself, browsing has three major possibilities open to it: open HTTP, self-signed certificates and Certificate Authority (CA) signed certificates [3].

Of those, HTTP defends merely by the presence of masses of other traffic. Hiding in the noise of the net, as it were, isn't much protection if someone is looking at you directly, but it's better than nothing, and it's free.

CA-signed certs, on the other hand, protect by comprehensively negotiating secure connections between known parties. Sounds great, but they are expensive both to buy and to install [4]. This is the reason why HTTPS is not widely deployed [5]: it's simply too darn expensive for an Internet that was built on "free".

Somewhere in the middle sits the self-signed cert. It can be generated within software, for free. There's no technical reason why a server can't deploy a self-signed cert automatically, and no good business reason why not, either [6]. It's a matter of a few seconds to create, and the SSL protocol handles it just fine.
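For concreteness, here is one common way such a cert gets minted; a sketch using the stock openssl tool, where the key size, lifetime, and hostname are all placeholder choices, not recommendations:

```shell
# Generate a 2048-bit RSA key and a self-signed certificate in one step.
# "example.com" is a placeholder; substitute the server's real name.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout key.pem -out cert.pem \
    -days 365 -subj "/CN=example.com"

# Inspect the result: subject and issuer are the same party,
# which is exactly what "self-signed" means.
openssl x509 -in cert.pem -noout -subject -issuer
```

That really is all there is to it: no payment, no paperwork, a second or two of CPU time, and SSL is ready to negotiate.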

It would seem to be a no-brainer that HTTPS would deploy much more widely, more rapidly, and more conveniently if it could bootstrap with self-signed certs.

Self-signed certs provide complete, 100% protection from eavesdropping. But they do have one weakness - the MITM. Mallory can sneak in on the first connection and take control. And this weakness is touted as the major reason why, de facto, browsers force the use of CA-signed certs by discriminating against self-signed certs with popup warnings, dialogs and complicated security displays.

So the question comes down to this: How much security do we get by choosing CA-signed certs over self-signed certs? That question comes down to two things: how costly is an MITM, and how frequent is it.

Declaring my hand immediately, I don't think the MITM exists [7]. Or, Mallory is so rare that he will never be a threat. But I can't prove a negative. Also, say some, surely there is some merit in the argument that he isn't there because we deployed HTTPS? But that is to confuse cause and effect - we still need to show that Mallory is present in some sense before we can show that we have to protect against him.

There was little or no evidence of Mallory before HTTPS. And there is precious little now, in any field of computing endeavour [8].

There is another way, however - instead of measuring Mallory, we can measure a *proxy* for Mallory. This is quite a sensible approach, and is widely used in statistics and risk measurement.

Eve is a great proxy. Here's why: Anyone likely to launch a man-in-the-middle attack is almost certainly eavesdropping already. And anyone who can't eavesdrop has no chance of running an MITM, skills-wise. Plus, Eve is a low risk attack, whereas Mallory takes risks by sending out packets.

So, the set of all MITMs is a subset of all eavesdroppers, at least for statistical purposes. We find Eve and we find something about Mallory - an upper bound.

We can't measure Mallory, he's simply too rare. But we could set at least an upper bound with a measure of Eve, his naughty cousin. And if that upper bound comes in at serious levels, we might deal with it. If not, then not.

The search for Eve is on! I've been looking for a year, by scanning all the news of threats [9], and last month I kept the list and posted a summary [10]. Yup, there's Eve, right up there, first on the list (obviously):

"Via eavesdropping, terror suspects nabbed" / Intelligence officials use cellphone signals to track Al Qaeda operatives, as number of mid-level arrests rises [11].

The attacker (in security terms) is listening to the victim (again, in security terms) using the cell phone network. That's over in Iraq, where they have armies and killing and destruction and things, so you'll pardon me if I present that as an example that has only hypothetical relevance to the Internet.

Which remains blissfully peaceful, and also apparently unlistened to, certainly in the last month. And that's been the pattern on the net, since whenever.

What can we conclude about MITMs? Here's a total stab in the dark: MITMs will be about 2 orders of magnitude less common than eavesdropping attacks (that's my guess, you make your own). So if there are 100 cases every month of eavesdropping - which might happen, and we'd not know about it - then there could be one MITM every month.
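The back-of-the-envelope bound above can be written out explicitly; note that both the Eve count and the two-orders-of-magnitude ratio are the article's guesses, not measured data:

```python
# Upper-bound estimate for MITM frequency, derived from an eavesdropping count.
# Both input numbers are guesses from the text, not measurements.

EVE_PER_MONTH = 100   # hypothesised eavesdropping cases per month
MITM_RATIO = 100      # MITMs assumed ~2 orders of magnitude rarer than Eve

def mallory_upper_bound(eve_per_month: float, ratio: float = MITM_RATIO) -> float:
    """Return an upper bound on MITM cases per month, given an Eve count.

    Since every plausible MITM attacker is already an eavesdropper, the set
    of MITMs is a subset of the set of eavesdroppers: Eve's count bounds
    Mallory's from above, and the assumed ratio scales that bound down.
    """
    return eve_per_month / ratio

print(mallory_upper_bound(EVE_PER_MONTH))   # -> 1.0 MITM per month
```

Swap in your own numbers for the two constants; the shape of the argument - subset, therefore upper bound, therefore scaled estimate - stays the same.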

100 Eves a month might happen - the Internet's a big place [12]. I'd personally be suspicious of a claim of 1000 per month; that's too high to not be noticed [13].

So, I conclude that there might be about one Mallory per month, and we simply can't find him. Mallory gets away with his MITM.

Should we then protect against him? Sure, if you want to, but it's definitely optional if it costs money. One case - hypothetical or otherwise - can never create enough cost and risk to justify society-wide costs. Remember, the Internet's a big place, and we only worry about things that happen to a lot of people.

The MITM is thus ruled out as a threat that simply must be protected against. Therefore, there is no foundation to the treatment of self-signed certificates as being too weak to promote!

That's the logic. Tricky, wasn't it? Eve is quite rare, and measurably so. Therefore Mallory is even rarer. Which means we don't need to pay any money to worry about him.

When doing scientific analysis of complex systems, especially for security purposes, we often have to make do with proxies (approximations) instead of the real measures. The same thing happens with economics; we can't easily test our designs against reality, we have to deploy and wait for the crooks to find us. This means we always have to keep our feet close to the ground, and take very careful steps, otherwise we start floating away on our assumptions.

Then, we end up with systems that are widely exploitable, and we just don't know how we got there. Or how to deal with it. (Well-read readers will recognise that phishing has that floating feeling of "just how did this happen?")

The logic above shows why a vote against self-signed certs is actually a vote *against* eavesdropping protection [14]. The fact that we got there by measuring eavesdropping and finding it low is interesting and perverse [15]. Nonetheless, the question remains whether we want to protect against eavesdropping: if so, then the answer is to deploy the self-signed cert.

In the meantime, keep your eye out for more reports of Eve. She will slip up one day; she's very canny, but not perfect.

[1] As an example, here is some work on a threat model for secure browsing, done for the Mozilla effort.
http://iang.org/ssl/browser_threat_model.html and http://iang.org/maps/
[2] I'm ignoring DOS and traffic analysis just for this argument. In a real security analysis, they should be better treated.
[3] There is another way: anonymous Diffie-Hellman (ADH), or ephemeral Diffie-Hellman. Again, I'll ignore that, because it is deprecated in the RFC for TLS and also because most browser / server combos fail to implement it.
[4] All this cost falls on merchants, but they simply pass it on to consumers, so, yes, it costs us all.
[5] http://iang.org/ssl/how_effective.html or this month's embarrassment at http://www.securityspace.com/s_survey/sdata/200406/certca.html which gives 177k certs across 14 million servers. About 1%.
[6] There are many known bad reasons. I won't address those today.
[7] http://iang.org/ssl/mallory_wolf.html
[8] I've actually been scanning news reports for rumour of stolen credit cards - off the wire - since HTTPS started. But, no joy, not even amongst the many smaller merchants that don't use SSL. The credit card companies confirm that they have never ever seen it.
[9] There are some anecdotes. For the record, we generally rule out demos and techie boasts as threats.
[10] http://www.financialcryptography.com/mt/archives/000183.html
[11] By Faye Bowers June 02, 2004 http://www.csmonitor.com/2004/0602/p02s01-usmi.html
[12] By way of example, about 10,000 to 20,000 Linux servers get hacked every month, and people notice. Meanwhile about 200 BSD boxes get hacked a month, and that's so small that nobody notices. See the mi2g figures.
[13] Again, you pick your own numbers and justify your assumptions. It's called science.
[14] Voting against self-signed certs is also a vote against protecting against Mallory. For this even more perverse result, think of the stalled deployment, and check out the marketing angle.
[15] For the record, I believe that eavesdropping will increase dramatically in the future. But that's another story.

Posted by iang at July 24, 2004 09:50 AM | TrackBack

IMO, you've clarified well how SSL addresses the risks of eavesdropping versus MITM.

MITM may be a low risk; however, in order for self-signed certificates to become widely used, their use needs to be made easy and clear for users who have no idea what a certificate is. Of course, this is a big challenge.

Perhaps this wouldn't be such a big problem if, during the SSL handshake, an unknown_ca error were not fatal. Correct me if I'm wrong in how I understand this: If the connection remained open for this error, the host would be able to guide the user (via good web design) through the process of installing the certificate. This process could work something like, "Welcome, this is your first time here, in order to use our site you need to install the certificate or accept it for this session." This could be made better with some thought.
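The "accept it on first visit" flow described here is essentially what later came to be called trust-on-first-use, or key continuity. A sketch of the client-side bookkeeping such a browser would need, with every name here illustrative rather than taken from any real implementation:

```python
# Sketch of trust-on-first-use (TOFU) bookkeeping for self-signed certs.
# On first contact the certificate's fingerprint is remembered; on later
# visits a changed fingerprint is the signal that Mallory may have stepped
# in. All names are hypothetical, not from any real browser.

import hashlib

_pin_store = {}   # hostname -> hex SHA-256 fingerprint of the DER cert


def fingerprint(der_cert: bytes) -> str:
    """Hash the raw certificate bytes into a short, comparable pin."""
    return hashlib.sha256(der_cert).hexdigest()


def check_certificate(host: str, der_cert: bytes) -> str:
    """Return 'first-use', 'match', or 'MISMATCH' for a presented cert."""
    fp = fingerprint(der_cert)
    pinned = _pin_store.get(host)
    if pinned is None:
        _pin_store[host] = fp     # first visit: remember it, warn mildly
        return "first-use"
    if pinned == fp:
        return "match"            # same cert as before: Eve is locked out
    return "MISMATCH"             # cert changed: possible MITM, warn loudly
```

A browser doing this could show the mild "first-use" state in its chrome rather than raising a fatal error, which is the non-intimidating behaviour asked for above.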

Short of that functionality (under the current model), the user is going to be presented with a warning that is too intimidating for many average users to accept.

Posted by: Will at July 24, 2004 11:54 AM


Yes, Will, you've hit on precisely the thing. Self-signed certificates are given short shrift by the browsers - this derives from the old flawed security assumption that they are bad.

How the self-signed certificate is presented I believe to be a browser GUI issue and I'm not that fussed as there is competition warming up between Mozilla and IE, as well as others.

But here's my suggestion: it should be presented in the branding box - which would normally present a colour logo of the CA in glorious cached credibility. Instead of our well-branded CA logos, the browser would present something like black & white with the words "self-signed" in basic boring type.

In this way, the user's intro to the security protocol is enhanced, something we know has to be done to address phishing. Now the user can see the self-signed one as boring, and the CA one as comforting. Not only is the security model improved, the CAs preserve their franchise through stronger incentives.

Basically, displaying the certs & CAs in a branding box (Amir Herzberg calls this a local trusted area) is a win for all parties.

(Detail - yes, the popups have to go. They destroy the security by what I call click-thru-syndrome. Also, I don't think this is an SSL issue. It's up to the application to decide what quality of cert to accept.)

Posted by: Iang at July 24, 2004 12:15 PM

> But here's my suggestion: it should be presented in the branding box - which would normally present a colour logo of the CA in glorious cached credibility. Instead of our well-branded CA logos, the browser would present something like black & white with the words "self-signed" in basic boring type.

I understand now how you want to use branding for certs. This makes perfect sense to me, and it reminds me of how the founding of the UL dramatically improved safety for consumers. That is, consumers didn't need to know a thing about the mystery subject of electricity, all they needed to know was how to look for the UL label. Labels can be a simple, yet powerful tool for end users -- just thinking about labels causes the "look for the union label" tune to start in my head.

> In this, the user's intro to the security protocol is enhanced, something we know has to be done to address phishing. Now the user can see the self-signed one as boring, and the CA one as comforting. Not only is the security model enhanced, the CAs preserve their franchise by having enhanced incentives.

True. In the current context, the end user has no idea who or what Verisign is, nor do they need to.

>(Detail - yes, the popups have to go. They destroy the security by what I call click-thru-syndrome. Also, I don't think this is an SSL issue. It's up to the application to decide what quality of cert to accept.)

So, I'm guessing that if you were to design a browser, its settings would default to "always accept a self-signed certificate on first connection." For self-signed connections, the browser would display the boring "self-signed" words in the application gutter or on the toolbar.

Posted by: Will at July 24, 2004 01:05 PM

there are two separate threats ... one carried over from the real world ... and one ... somewhat unique to the internet.

in the real world ... people that are dealing with unknown entities, don't really care who they are ... they care whether they can be trusted. this is the better business bureau model. certificates for this purpose didn't really fly on the internet because almost all transactions are done with known entities ... repeat business and/or extremely well known operations. They didn't need a certificate attesting to whether they could be trusted. The other five percent of the transactions are spread over millions of commercial websites ... and it was/is difficult to drum up a revenue model that would cover reputation certificates. ebay is somewhat addressing this with reputation scoring.

the other issue is that you are dealing with somebody you know .... but because the communication is thru an untrusted and possibly hostile network ... can you really be sure that it hasn't been subverted? PGP has a model that covers this .... you acquire the public key and validate with some out-of-band process. This handles all of the entities that you frequently deal with &/or know.

This is effectively what has been done for some number of commercial entities called certification authorities, which have had their public keys preloaded in browsers (before they are shipped) .... it is the PGP model ... but the out-of-band mechanism is having the public keys preloaded by the browser manufacturer.

For commercial entities .... a PGP browser could be preloaded with a better business bureau public key. The better business bureau could maintain a table of registered & trusted commercial entities ... along with their public keys and URLs. You can periodically download the latest table from the better business bureau website.

So one question ... is it easier for the general public to understand ... if instead of describing the paradigm with analogies to the certificate model .... is it easier to describe it as a PGP model where the person loads other entities' public keys (i.e. their signature verification mechanism) and uses an out-of-band process to guarantee that they have the correct signature verification mechanism.

It might be easier for the general public to relate to a signature and a signature verification methodology. That is much closer to what they currently experience. Trying to get the general public to relate to a certificate description paradigm .... (including self-signed certificate description) that has a couple layers of (unnecessary) indirection ... would seem to be a much harder task.

Posted by: lynn at July 24, 2004 11:12 PM

Hey, Lynn!

we can slice and dice models until they do the things we want, but we have to be careful to keep them close to some sort of reality.

In the real world, people make trust decisions through a network of sources. The BBB model is but one input. When approaching a store, chances are it is already known, there is a history, a trail of good transactions, and a pile of anecdotes to support any trust decision.

It's relatively rare that people enter places they haven't already trusted. Even when in a very strange place, people search for the brands they know. In fact, entering a place without any reference to a prior relationship is a very unusual event. When was the last time you did this?

It is this property that some of the better ideas attempt to exploit: Amir's logo stuff, my branding box, cert counts, the various URL redesign proposals (YURLs, etc).

In contrast, the existing web PKI has an *explicit* assumption that there is no prior relationship. So it tries to take on the whole burden of trust decisions on one metric and one metric alone - the tenuous ring of merchant/CA/browser/consumer as laid out in the chain of signatures.

It's no surprise that this can't be used for trust decisions, and it's no surprise that users easily (and IMO reasonably) ignore it for trust decisions.

The same can be said for PGP. It's not that the model can or can't be explained, it's that users - humans - don't do trust solely in that way. PGP works better because it lets in more relationships in its web of trust, but it is still only part of any model. For the most part, users don't trust the PGP web of trust even though they say they do. What they trust is the combined sum of all the little checks they do, of which, the PGP parts are some subset.

(Still, having said all that, the browser PKI infrastructure is in place, so it does give us a layer over which to place relationship information. The PKI layer is relatively strong, it's just mostly meaningless. The question is whether we can use it to add in enough meaning - at the browser level. It remains an open question.)

Posted by: Iang at July 27, 2004 06:44 AM

If people only went to sites/businesses they already trusted, they could never build a group of businesses that they trusted in the first place. When new services arise the public is forced to be introduced to new and unfamiliar names, and eventually they acquire trust in the brand. Several factors come into play, such as "trusted brand" failure (IE is a good example of people increasing their trust in Mozilla), Martha Stewart as well... some brands give you an excuse to leave your comfort zone...

Posted by: Scott at July 27, 2004 09:43 AM

Long ago and far away we were asked to work with this small client/server startup that wanted to do payments on the server. In the year that we worked with them, they moved from menlo park to mountain view and changed their name from mosaic to netscape.

out of this work came something called a payment gateway and electronic commerce. along the way we had to do various kinds of detailed due diligence on some number of the major things called certification authorities.

While the certification authorities were doing all sorts of integrity things ... it basically came down to making sure that the domain name you typed into your browser in some way related to the domain name of the server you were talking to.

This is a significantly different "trust" issue from merchants taking credit card payments. For a merchant to take credit card payments, a merchant/acquiring financial institution has to take financial liability for the merchant ... and typically every transaction passes thru some process related to that merchant's financial institution before it gets to the consumer's bank.

With respect to the earlier post on this topic, we actually spent quite a bit of effort on certificates that had more meaning than whether the domain names match ... and weren't able to come up with a viable business scenario.

In theory, there is a requirement for trust for entities that have never dealt before. BBB, gov. & non-gov. licensing agencies provide some of this in the physical world.

In fact, various BBB and licensing agencies have looked at providing online, realtime trust information (as opposed to offline stale, static certificate oriented solutions) ... aka moving the paper certificates (hung on some wall) paradigm into an online 1990s paradigm ... as opposed to the PKI model that wants to preserve the pre-1990 offline paradigm with simple substitution of electronic bits for the paper.

The issue of the paper certificates in the pre-1990, real world ... was that there really wasn't a practical way of doing a realtime check with the authoritative agency issuing the certificate (hanging on the wall). To some extent, the PKI model is emulating the pre-1990, offline real world paradigm ... but substituting electronic bits for paper. However, with the emerging 1990s, online world .... it is now frequently possible to go to agency web sites and check realtime status.

to some extent this is the ebay model ... maintaining realtime information/history about parties active on ebay.

Posted by: lynn at July 27, 2004 10:59 AM

The comment about repeat business vis-a-vis first time business was based on paying for a trust infrastructure with something like transaction charges.

However, it turns out that something like 90-95 percent of the transactions are repeat transactions and/or with a small number of extremely well known operations.

That leaves millions of commerce sites and possibly five percent (or less) of the transactions that are most in need of a trust infrastructure. So each of these millions of commerce sites is making a couple of bucks a month off their commerce site ... and, as a result, for each of these commerce sites there isn't a very large value proposition for paying for a trust operation.

The issue isn't whether or not a trust infrastructure is required for such a market segment ... but working out a value proposition to cover the costs of a trust infrastructure.

Posted by: Lynn at July 27, 2004 11:09 AM

> If people only went to sites/businesses
> they already trusted they could actually
> never build a group of businesses that
> they trusted in the first place. When
> new services arise the public is forced
> to be introduced to new and unfamiliar
> names, and eventually they acquire trust
> with the brand.

Scott, it's not really the case that there is never any relationship. When new businesses start, there is no forcing about it. Instead, every new business does the same thing: it uses its existing relationships to bootstrap to new customer relationships.

Consider restaurants: in the week before opening, they get all the staff's family, friends, suppliers, and whoever else through. This creates buzz. Then in opening week they try and get all the restaurant critics through. More buzz. In the next weeks they start tramping the streets, doing leaflets, offers, prizes, two-for-ones, happy hours.

All this is about using existing relationships to build new relationships. Every net business does it the same way - they get all their people using the service, and then they send them out to find customers. They pay magazines to write about them. Word of mouth is far and away the most important marketing tool, so they often pay their first customers to find more customers.

(This is all by way of saying that the PKI logic that "they can do trust coz there is no prior relationship" is spectacularly wrong in every detail.)

Posted by: Iang at July 27, 2004 11:12 AM

Kevin Mitnick once tried an MITM attack against Hewlett Packard. He sent an e-mail, purporting to be from HP, to a customer (or perhaps it was vice versa) and the e-mail contained a PGP public key, ostensibly belonging to the named counterparty, but actually controlled by Mitnick.

Hm. I don't remember if he invented the e-mail or if he somehow intercepted a legitimate e-mail. I suspect the former, so I guess this doesn't really count as a good MITM example. Ask Jon Callas if you want details -- he is my source for this story.

I think Dug Song's dsniff, or some such tool, had the facility to do MITM against ssh.

Ross Anderson's "Security Engineering" contained a fascinating anecdote called "MIG in the Middle". In the on-line errata, Prof. Anderson says that this story turned out to be untrue, but that his military sources assure him that equivalent MITM attacks *have* happened.



Posted by: Zooko at August 10, 2004 03:15 PM

Ian, I've really appreciated your rants against worrying about MITM attacks. I've agreed, and I myself have fussed that we worry too much about them.

I think protecting against a MITM attack is like protecting against a car being hotwired. Yeah, it's a security threat, yeah, I'm sure it happens, but making it a basic design goal of a car that it not be *possible* to hotwire is just a bit over the top.

MITM attacks are in general hard to pull off, and it's easier to pull off something else, like an impersonation attack that is impossible to prevent and difficult to defend against. What Mitnick did to us at DEC and what the armies of phishers do is much easier than a full-blown MITM attack.


Posted by: Jon Callas at August 12, 2004 06:26 AM