CAcert has just approved rules for dispute resolution which, in brief, put all disputes before its own arbitration. (Disclosure: I was involved!)
The key in this process is the provision in the user agreement that asserts the agreement to arbitrate disputes, and the lock that matches the key is the Arbitration Act in most countries. To make it work, the Act generally says that courts must respect the intent to arbitrate. From the US:
Under the FAA, on the motion of a party, a court must stay proceedings and order the parties to arbitrate the dispute if the court finds that the parties have agreed in writing to do so. A party seeking to compel arbitration must show (1) that a valid agreement to arbitrate exists between the parties and (2) that the specific dispute falls within the scope of the agreement.
E.g., the courts will kick you back to Arbitration. But there are some exceptions, and I took the above quote from one such, being Bragg v. Second Life, wherein Judge Robreno decided to kick out the Arbitration clause, not the parties. As VB writes, this is a big deal. So it is useful to check his logic, and find out whether CAcert has made any of the same mistakes.
Bear in mind this is not legal writing; if you want the real story, you have to read the full transcripts linked above. To stress that: I've stripped out the references, etc., so as to maintain the readability rather than the reliability.
Having said that, onwards! With some legal musing, the Court arrives at this:
Bragg claims that the arbitration agreement itself would effectively deny him access to an arbitrator, because the costs would be prohibitively expensive, a question that is more appropriately reserved for the Court to answer.
To answer the question, the Court decided to look at the procedural and substantive components of unconscionability (a get-out card generally written into Arbitration Acts) and construct a balanced view from those components. Here's a quick summary:
Contract of adhesion. The Second Life agreement is a contract of adhesion, because there is no chance to negotiate: it's take-it-or-leave-it. Therefore, the contract meets a standard of procedural unconscionability.
"Surprise," meaning that the Arbitration intent is hidden. Again, SL has met the Court's standard of surprise, by (a) using an opaque heading and (b) not setting out the costs clearly. This is a second leg of procedural unconscionability.
"One-sidedness of the contract terms." This seems to ride on several issues:
The Court asserted that "the arbitration remedy must contain a 'modicum of bilaterality.'" It also quoted a PayPal case, which is likely as close as it gets in industry similarity. In short, PayPal was able to control the entire assets within, by freezing them, restricting them, taking ownership, and changing the TOS, whereas the user could only (presumably) arbitrate. Linden Labs had (has?) the same power:
The TOS proclaim that “Linden has the right at any time for any reason or no reason to suspend or terminate your Account, terminate this Agreement, and/or refuse any and all current or future use of the Service without notice or liability to you.” Whether or not a customer has breached the Agreement is “determined in Linden’s sole discretion.” Linden also reserves the right to return no money at all based on mere “suspicions of fraud” or other violations of law. Finally, the TOS state that “Linden may amend this Agreement . . . at any time in its sole discretion by posting the amended Agreement [on its website].”
Ouch! Which brings us to the tricky issue of costs. For some reason, Linden Labs chose the ICC for Arbitration, with three Arbitrators. The Court estimated costs at $17,250 for an action to recover $75,000. However, the ICC rules say that costs must be shared by the parties, and that is apparently sufficient to make Arbitration unenforceable under California law. The trick here appears to be that the existence of a fee, imposed in excess of a similar court process, supports a finding of unconscionability:
California law has often been applied to declare arbitration fee-sharing schemes unenforceable. Such schemes are unconscionable where they “impose[] on some consumers costs greater than those a complainant would bear if he or she would file the same complaint in court.” ... Here, even taking Defendants’ characterization of the fees to be accurate, the total estimate of costs and fees would be $7,500, which would result in Bragg having to advance $3,750 at the outset of arbitration. See Dfts.’ Reply at 11. The court’s own estimates place the amount that Bragg would likely have to advance at $8,625, but they could reach as high as $13,687.50. Any of these figures are significantly greater than the costs that Bragg bears by filing his action in a state or federal court. Accordingly, the arbitration costs and fee-splitting scheme together also support a finding of unconscionability.
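To see the Court's arithmetic in one place, here's the fee-sharing test as a back-of-the-napkin sketch in Python. The dollar figures come from the opinion above (the high-end total is implied by the $13,687.50 advance); the court filing fee is my own assumed round number, purely for illustration.

```python
# The Court's test, roughly: fee-sharing is suspect when the consumer's share
# of arbitration costs exceeds what filing the same claim in court would cost.
# Dollar totals are from the opinion; the filing fee is an assumed figure.

COURT_FILING_FEE = 350.0   # assumed approximate filing fee, for illustration

for total in (7_500.0, 17_250.0, 27_375.0):   # defendants' estimate, Court's, Court's high end
    advance = total / 2                        # ICC rules: costs shared equally
    verdict = "supports unconscionability" if advance > COURT_FILING_FEE else "ok"
    print(f"total ${total:>9,.2f} -> advance ${advance:>9,.2f}: {verdict}")
```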
As well as that, the Court found that all these factors together suggested that Arbitration was an attempt to shield liability rather than resolve disputes.
OK, so the Court really went to town in striking down the Arbitration clause. When I read their agreement a couple of weeks ago, I came to the same conclusion, without the Court's care, and the tip-off was the choice of the ICC (a big, expensive French body?!) and three, that's THREE, arbitrators. The ICC has to be expensive just from the name, and with Linden Labs choosing three arbitrators at three times the price, it doesn't take a PhD in maths to realise this was a barrier, not an aid.
It may be that Linden Labs have learnt their lesson, as the TOS has just been changed, which is what sparked this blog post. Benjamin of VB writes:
the new terms also create a special class of claims under $10,000 that are to be handled via non-appearance arbitration. This change is very good for users, as the new clause replaces one that required a full-blown arbitration proceeding before a three-person panel, which could easily cost more than $10,000 itself (that is essentially why the clause was declared unconscionable in the Bragg case). Non-appearance arbitration can actually be quite inexpensive, and, notably, it could even be conducted in Second Life. The arbitrator must be an established ADR provider, must have published guidelines for dispute resolution, and must be a “retired judge or attorney with legal expertise in the subject matter of the dispute.”
Two caveats: it seems to stop around the $10k mark, and I haven't looked at the new terms.
Now, to get back to CAcert and their new arbitration system. We can run the Court's ruler over CAcert's new user agreement (albeit still in DRAFT). It's maybe a little premature, as the experience is new and only one case has been heard. But let's see what we can find:
Now, with a nod to the other elements of the Court's ruling, and to the Appeals Court which needs to affirm the ruling, it should be borne in mind that this is a back-of-the-napkin calculation. Still, it's instructive. I'd say cautiously that CAcert made none of the mistakes that the Court found. Indeed, CAcert bent over backwards and tied itself in knots in order to present itself as approximately equal to the registered users.
(As I say, I had something to do with the process. Indeed, I have been hammering the desk for this policy, or any other, to be approved for more than a year now. The more excellent result of last week's conference, which I attended, is that CAcert is now firmly back on the rails.)
Dave asks over on DigitalMoney, perhaps in wobbly exasperation as he tries to walk the logic of the ECB's level playing field:
One of the ECB's points is that they want to create a level playing field of payments. This is a good idea: so is there a single or simple action that could be taken to do this?
I know we've played out all these arguments in the past, but on the off chance that anyone really wants to know the answer:
Yes, there is:
separate banking from payment systems!
The reason for the silly limits on anonymous prepaid cards is that banks don't issue them; their business is about identifying people for borrowing and lending purposes, so they need to know who you are.
However, issuers of pre-paid products do not want to know who you are, just what you do; what you are buying is all they need to know. For them, a traceable-but-anonymous product works perfectly, because it solves the privacy issues and gives them the marketing data to offer you precise deals that are likely to be a win-win for both parties.
Then why do the banks care about these products that they don't issue? Simple: a pre-paid card is a payment instrument, and a payment handled by the retailer without resort to the bank is a payment lost to the bank. So we can see the lost fees as one issue (ask a bank what proportion of its income comes from payment fees).
That's bad, but it gets worse: the payment is not only a payment (for the customer), it is also a loan (to the retailer). One of the inside secrets of pre-paid cards is that on the balance sheet, they appear as ... customer-provided financing! Which means that the retailer has cut out the bank.
Now do we see why the ECB is going loopy trying to fit restrictions on emerging payment instruments into contortions labelled "the level playing field?"
And, if you think that's not good, prepare for double-plus-ungood: consider the *cost* of the financing. On paper, it looks like a zero-percent loan from the customer to the business. That is, the cash put into the pre-paid card this month comes back to the customer at face value when they buy goods. Zero percent!
Can it get worse? Oh, yes. For various reasons to do with abandoned funds and expiry conditions, the actual expected interest rate is less than zero. Because cards also get lost or go unredeemed, some of that money the retailer never needs to give back at all.
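To put a number on it, here's a minimal back-of-the-napkin sketch in Python; the 5% breakage rate is an assumed figure for illustration, not industry data.

```python
# The pre-paid "loan", on the back of a napkin. The breakage rate below is
# an assumed, illustrative figure.

float_loaded  = 100.00   # customer loads the card: a loan to the retailer
breakage_rate = 0.05     # share never redeemed: lost cards, expiry, abandonment
redeemed      = float_loaded * (1 - breakage_rate)

# The retailer repays 95 in goods against 100 borrowed:
effective_rate = (redeemed - float_loaded) / float_loaded
print(f"effective interest rate paid to the customer: {effective_rate:.0%}")
# -> -5%: the customer pays the retailer for the privilege of lending to them.
```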
That's right: the customer gives a negative-interest-rate loan to the business. The basic result is going to be that the one and only chance the banks have of surviving this is if the Central Banks declare prepaid cards to be totally illegal. We are talking pure economics here; this is a slam-dunk.
But, the CB cannot simply declare them illegal. Not without asserting some form of jurisdiction over retail processes, and coming up with an argument that will appease the consumer. And that's the rub. As its mission has some semblance of helping the consumer, it is hard to convince the consumer that you are helping them by taking something from them.
The best the CB can currently do is declare them essential tools for money launderers, etc., etc., and put lots of restrictions on them because they are "tools too dangerous to be let loose on the innocent public." That's what Dave talks about, and ridicules, in his post:
If there is going to be a level playing field, then lightening the regulatory burden on e-cash might be an obvious place to begin. One source of costs is the requirement to verify the identity of e-cash users. There is a simplified due diligence procedure for a limited set of circumstances:
- nonrechargeable... no more than 150 euro; or
- rechargeable... a limit of 2,500 euro in a calendar year, ...
These limits seem low to me. I think...
Obviously, they are stupidly low, but Dave has yet to consider how much damage Al-Qaeda can do if they get into M&S with a pocketful of these cards.
Will it work? No. Even with these limits in place, it still won't be enough to save banking (again, resort to the economics argument to see why). The regulators can even afford to squeeze the limits lower, and the retailers will still be on top. This is part and parcel of why I predict that the next 1-2 decades will see the end of Central Banking as we know it.
I know what I would do if I were one of those players: a bank, or a CB, or a retailer. But that's not interesting. What's interesting is to watch, as negative-interest loans become more "compelling" to the public, how much more wobbly the ECB can make the playing field before people start sliding off.
Over on Second Life, they (LL) are trying to solve a problem by outsourcing identity verification to a company called Integrity. This post puts it in context (warning, it's quite long, longer even than an FC post!):
So now we understand better what this is all about. In effect, Integrity does not really provide “just a verification service”. Their core business is actually far more interesting: they buy LL’s liability in case LL gets a lawsuit for letting minors see “inappropriate content”. Even more interesting is that LL does not need to worry about what “inappropriate content” means: this is a cultural question, not a philosophic one, but LL does not need to care. Whatever lawsuits come LL’s way, they will simply get Integrity to pay for them.

Put into other words: Integrity is an insurance company. In this day and age where parents basically don’t care what their children are doing, and blame the State for not taking care of a “children-friendly environment” by filing lawsuits against “the big bad companies who display terrible content”, a new business opportunity has arisen: selling insurance against the (albeit remote) possibility that you get a lawsuit for displaying “inappropriate content”.
(Shorter version maybe here.)
Over on Perilocity, which is a blog about the insurance world, John S. Quarterman points at the rise of insurance to cover identity theft, from a company called LifeLock.
I have to give them credit for honesty, though: LifeLock admits right out that the four main preventive things they do, you could do for yourself. Beyond that, the main substance they seem to offer is essentially an insurance package:
"If your Identity is stolen while you are our client, we’re going to do whatever it takes to recover your good name. If you need lawyers, we’re going to hire the best we can find. If you need investigators, accountants, case managers, whatever, they’re yours. If you lose money as a result of the theft, we’re going to give it back to you."For $110/year or $10/month, is such an insurance policy overpriced, underpriced, or what?
It's possibly easier for the second provider to be transparent and open. After all, they are selling insurance for stuff that is a validated disaster. The first provider is trying to cover a problem which is not yet a disaster, so there is a sort of nervousness about baring all.
How viable is this model? The first thing would be to ask: can't we fix the underlying problem? For identity theft, apparently not: Americans want their identity system because it gives them their credit system, and there aren't too many Americans out there who would give up the right to drive their latest SUV off the forecourt.
On the other hand, a potential liability issue within a game would seem to be something that could be solved. After all, the game operator has all the control, and all the players are within their reach. Tonight's pop-quiz: Any suggestions on how to solve the potential for large/class-action suits circling around dodgy characters and identity?
(Manual trackbacks: Perilocity suggests we need identity insurance in the form of governments taking the problem more seriously and dealing with identity thefts more proactively when they occur.)
In the long-running threatwatch theme of how much a set of identity documents will cost you, Dave Birch spots new data:
Other than data breaches, another useful rule-of-thumb figure, I reckon, might come from identity card fraud, since an identity card is a much better representation of a person's identity than a credit card record. Luckily, one of the countries with a national smart ID card just had a police bust: in Malaysia, the police seized fake MyKad, foreign workers' identity cards, work permits and Indonesian passports, and said that they thought the fake documents were sold for between RM300 and RM500 (somewhere between $100 and $150) each. That gives us a rule-of-thumb of $20 for a "credit card identity" and $100, say, for a "full identity". Since we don't yet have ID cards in the U.K., I thought that fake passports might be my best proxy. Here, the police say that 1,800 alleged counterfeit passports recovered in a raid in North London were valued at £1m. If we round it up to 2,000 fakes, then that's £500 each. This, incidentally, was the largest seizure of fake passports in the U.K. so far, and included 200 U.K. passports, which, according to police, are often considered by counterfeiters to be too difficult to reproduce. Not!

The point I actually wanted to make is not that these figures are very variable, which they are, but that they're not comparing apples with apples. Hence the simplistic "what's your identity worth?" question cannot be answered with a simple number.
OK, that's consistent with my long-standing estimate of 1000 (in the major units: pounds, dollars, euros) to get a set of docs. It is important to track this, because if you are building a system based on identity, this gives you a solid number on which to base your economic security. E.g., don't protect much more than 1000 on the basis of identity alone.
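In code, the rule of thumb looks something like this; the seizure figures come from the quote above, and the helper function is purely my own illustrative construction.

```python
# The rule of thumb: identity alone protects only up to roughly the street
# price of a forged document set. Seizure figures are from Dave's quote;
# the helper below is a hypothetical illustration, not anyone's API.

fake_passports_seized = 2000        # rounded, from the North London raid
valuation_gbp         = 1_000_000   # police valuation of the haul
print(f"per-document price: ~£{valuation_gbp / fake_passports_seized:.0f}")

DOC_SET_COST = 1000  # long-standing estimate, in the major units

def identity_alone_is_enough(value_at_risk: float) -> bool:
    # If forging the credentials costs less than what they protect,
    # the attacker is in profit: demand more than identity.
    return value_at_risk <= DOC_SET_COST
```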
As a curious footnote, I recently acquired a new high-quality document from the proper source, and it cost me around 1000, once all the checking, rechecking, couriered documents and double-phase costs were added up. If a data set of one could be extrapolated, this would tell us that it makes no difference to the user whether she goes for a fully authentic set or not!
Luckily, my experiences are probably an outlier, but we can see a fairly damning data point here: the cost of an "informal" document is far too similar to the cost of a "formal" document.
Postscript: It turns out that there is no way to go through FC archives and see all the various categories, so I've added a button at the right which allows you to see (for example) the cost of your identity, in full posted-archive form.
This blog frequently presses the case that security is a dysfunctional family, and even presents evidence. So much so that we've gone beyond the evidence and the conclusion, and are now more interested in the why.
Today we have insights from the crypto layer. Neal Koblitz provides his thoughts in an article named "The Uneasy Relationship Between Mathematics and Cryptography" published in something called the Notices of the AMS. As perhaps everyone knows, it's mostly about money, and Koblitz identifies several threads:
We've certainly seen the first three, and Koblitz disposes of them well. Definitely recommended reading.
But the last thread takes me by surprise. I would have said that cryptographers have done precisely the reverse, by meddling in areas outside their competence. To leap to computer science's defence, then, permit me to turn Koblitz's evidence around:
Here the word “protocol” means a specific sequence of steps that people carry out in a particular application of cryptography. From the early years of public key cryptography it has been traditional to call two users A and B of the system by the names “Alice” and “Bob.” So a description of a protocol might go as follows: “Alice sends Bob..., then Bob responds with..., then Alice responds with...,” and so on.
I don't think so! If you think that is a protocol, you are not a computer scientist. Indeed, if you look at the designs for protocols by cryptographers, that's what you get: provable security and a "protocol" that looks like Alice talking to Bob.
Computer science protocols are not about Alice and Bob; they are about errors. The errors happen to include a bunch of things, Alice and Bob among them, and also Mallory and Eve, but many, many other things besides. As the number of things that can go wrong far exceeds the number of cryptographic friends on the planet, we would generally suggest that computer scientists should write the protocols, so as to avoid the Alice-Bob effect.
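To illustrate the difference, here's a minimal sketch, in Python, of one receive step of a computer-science protocol (not any particular protocol; the pre-shared key is an assumption for brevity). The Alice-to-Bob arrow is the final line; everything above it is the part the cryptographers' "protocol" leaves out.

```python
# A minimal sketch of one receive step once errors are admitted.
import hashlib
import hmac
import time

SHARED_KEY = b"demo-key"   # assumption: a pre-shared key, just for the sketch
seen_nonces = set()        # replay-protection state

def handle(packet: dict) -> str:
    """One receive step; returns a disposition rather than raising."""
    # 1. Transport errors: truncation, wrong types, plain garbage.
    if not isinstance(packet, dict) or "body" not in packet or "nonce" not in packet:
        return "drop: malformed"
    # 2. Timeliness: stale messages are as suspect as forged ones.
    if abs(time.time() - packet.get("ts", 0)) > 30:
        return "drop: stale or clock-skewed"
    # 3. Replay: Mallory re-sending an old but valid message.
    if packet["nonce"] in seen_nonces:
        return "drop: replay"
    # 4. Integrity: only now the cryptographic part, one check among many.
    mac = hmac.new(SHARED_KEY, packet["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, packet.get("mac", "")):
        return "drop: bad MAC"
    seen_nonces.add(packet["nonce"])
    return f"accept: {packet['body']}"   # the one line the Alice-Bob story covers
```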
Just to square that circle with yesterday's post: it is OK to talk about Alice-Bob protocols, in order to convey a cryptographic idea. But it should be computer scientists who put it into practice, and as early as possible. Some like to point out that cryptography is complex, and that therefore you should employ cryptographers for this part. I disagree, and quote Adi Shamir's 3rd misconception: eventually your crypto will be handled by developers, and you had better give them the simplest possible constructions they can deal with.
Back to Koblitz's drive-by shooting on computer science:
Cryptography has been heavily influenced by the disciplinary culture of computer science, which is quite different from that of mathematics. Some of the explanation for the divergence between the two fields might be a matter of time scale. Mathematicians, who are part of a rich tradition going back thousands of years, perceive the passing of time as an elephant does. In the grand scheme of things it is of little consequence whether their big paper appears this year or next. Computer science and cryptography, on the other hand, are influenced by the corporate world of high technology, with its frenetic rush to be the first to bring some new gadget to market. Cryptographers, thus, see time passing as a hummingbird does. Top researchers expect that practically every conference should include one or more quickie papers by them or their students.
Ouch! OK, that's a fair point. However, the underlying force here is the market, and computer science itself is not to blame; rather, the product of its work happens to sit a lot closer to the market than, say, that of mathematics. Koblitz goes on to say:
There’s also a difficulty that comes from the disciplinary culture of cryptography that I commented on before. People usually write papers under deadline pressure — more the way a journalist writes than the way a mathematician does. And they rarely read other authors’ papers carefully. As a result even the best researchers sometimes publish papers with serious errors that go undetected for years.
That certainly resonates. When you spot an error and it is embedded in an academic paper, it stays for years. Consider a random paper on my desktop today, one by Anderson and Moore, "On Information Security -- and beyond." (Announced here.) It is exceedingly well-written, and quite a useful summary of current economic thought in academic security fields. Its pedigree is the best, and it seems unassailable.
Let us then assail. It includes an error: the "lemons" myth, which has been around for years now. Indeed, someone has pointed out the flaw, but we don't know who:
In some cases, security is even worse than a lemons market: even the vendor does not know how secure its software is. So buyers have no reason to pay more for protection, and vendors are disinclined to invest in it.
In detail, "lemons" only applies when the seller knows more than the buyer, this being one of the two asymmetries referred to in asymmetrical information theory. Other things apply in the other squares of the information matrix, and as with all such matrices, we have to be careful not to play hopscotch. The onus would be on Anderson and Moore, and other fans of citric security, to show whether the vendors even know enough to qualify for lemon sales!
Which all supports Koblitz's claims, at least partially. The question then is: why is the academic world of cryptography so divorced from its subject? Again, Anderson and Moore hint at the answer:
Economic thinkers used to be keenly aware of the interaction between economics and security; wealthy nations could afford large armies and navies. But nowadays a web search on ‘economics’ and ‘security’ turns up relatively few articles.
Ironically, their very care and attention is reflected in the list of cited references at the end of the paper: one hundred and eight references! Koblitz would ask whether they had read all those papers. Assuming yes, they must have covered the field, right?
I scanned through quickly and found three references that were not from reputable conferences, university academics and the like. (These 3 lonely references were from popular newspapers.)
What does a references section that only references the academic world mean? Are Anderson and Moore ignoring everything outside their own academic world?
I hasten to add, this is not a particular criticism of those authors. Many if not all academic authors, conferences, and peer-review committee chairs would plead guilty to the same crime; proudly, even. By way of example, I have on my desk a huge volume called Phishing and Countermeasures, edited by Jakobsson and Myers. The list of contributors reads like a who's who of universities, with the occasional slippage to Microsoft, RSA Laboratories, etc.
Indeed, Koblitz himself might be bemused by this attack. Why is the science-wide devotion to academic rigour a criticism?
Because it's security, that's why. Security against real threats is the point where scientific integrity, method and rigour unravel.
Consider a current threat, phishing. The academic world turned up to phishing relatively late, later than practically everyone else. Since arriving, they've waltzed around as if they'll solve it soon, just give them time to get up to speed, and proceeded to mark out the territory. The academics have presented stuff that is sometimes interesting, but rarely valuable. They've pretty much ignored all the work that was done beforehand, and they've consequently missed the big picture.
Why is this? One reason is above: academic work is only serious if it quotes other academic work. The papers above are reputable because they quote, only and fulsomely, other reputable work. And the work is only rewarded to the extent that it is quoted ... again, by academic work. The academics are caught in a trap: work outside academia, and be rejected or, perhaps worse, ignored. Or work with academic references, and work with an irrelevant rewarding base.
And be ignored, at least by those who are monetarily connected to the field. By way of thought experiment, consider how many peer-review committees of security conferences include the real experts in the field. If you scan the lists, you don't find names like "Ivan Trotsky, millionaire phisher" or perhaps "John Smith, a.k.a. Bob Jones and 32 other aliases, wanted for tera-spamming in 6 states." Would we find "Mao Tze Ling, architect of last year's Whitehouse network shakedown?"
OK, so we can't talk to the actual crooks, but it's just a matter of duplicating their knowledge, right? In theory, it's just a matter of time before the academics turn the big guns onto the threat, and duplicate the work done outside academia. They can catch up, right?
Unfortunately not. Consider another of Koblitz's complaints, above, where cryptography is
"influenced by the corporate world of high technology, with its frenetic rush to be the first to bring some new gadget to market."
Actually, there are two forces at work here: the market and the attacker. In short, a real attack in this field migrates within a month. Even with an accelerated paper cycle of 3 months to get the work peer-reviewed and into the next party of student quickies, to use Koblitz's amusing imagery, attacks migrate faster.
The academic world of security is simply too far away from its subject. Do they know it? No, it seems not. By way of example, those in the academic world of security and economics claim that the field started only recently, in the last few years. Nonsense! It has always been there, and it received massive boosts from the work of David Chaum, the cypherpunks, Digicash, the Amsterdam hackers and many others.
What was not done was to convert this work into academic papers. The literature is there, but not in conference proceedings.
What was done was to turn the academic thought process into hugely beneficial contributions to security and society. All of the work mentioned above has led directly, traceably, to the inventions we now know as PayPal, eBay, WebMoney, the gold community, and is slowly moving through to the mass of society in the form of NFC, mobile and so forth. (The finance and telco sectors move slowly to accommodate these innovations, but do they move more slowly than the academic sector?)
The problem is that since the rise of the net, the literature for fast-paced work has moved out of the academic sphere into other means of distribution: shorter essays, private circulations over email, business plans, open source experiments, focussed maillists and ... of course, blogs. If you are limiting yourself to conference proceedings on security, you are dead in the water, to take a phrase from naval security of a bygone age.
So much so that, when you add up all these factors, the suggested conclusion is that the academic world may never be able to deal with security as a science. Not if they stick to their rules, that is. Is it possible we now live in a world where today's academia cannot practically and economically contribute to an entire sector of science? That's the claim; let's see how it pans out.
Some criticisms also apply to Koblitz. Maybe mathematics is so conveniently peaceful as to support an academic tradition, but why is it that, on the first page, he waxes longingly and generously on the invention of public key cryptography, without mentioning the authors of the paper? He might say that Diffie and Hellman did not include any mathematics in their paper, but I would say "tosh!"
I've been using S/MIME for around a year now for encrypted comms, and I can report that the overall process is easier than OpenPGP. The reasons are twofold:
Sadly, S/MIME sucks. I reported previously on Thunderbird's most-welcome improvements to its UI (from unworkable to woeful) and also its ability to encrypt-not-sign, which catapulted the tool into legal sensibility. Recall, we don't know what a signature means, and the lawyers say "don't sign anything you don't read" ... I'd defy you to read an S/MIME signed email.
The problem that then occurs is that the original S/MIME designers (early 1990s?) used an unfortunate trick which is now revealed as truly broken: the keys are distributed with signing. Your certificate travels along with each signed message, and that is how your correspondents get it.
Oops. Worse, the keys are only distributable with signing, as far as I can see, which uncovers the drastic failings of tools designed by cryptographers and not software engineers. This sort of failure derives from such claims as "you must sign everything to be trusted" ... which we disposed of above.
So, as signing is turned off, we now need to distribute the keys some other way. This occurs by a 2-part protocol that works like this: first, Alice sends Bob a once-off signed message, which carries her certificate along with it; second, Bob stores her certificate and replies with a once-off signed message of his own, carrying his. From then on, each can encrypt to the other.
With various error variations built in. OK, our first communal thought was that this would be workable, but it turns out not to scale.
Consider that we change email clients every 6 months or so, and there appears to be no way to export your key collection. Consider that we use other clients, that we go on holidays every 3 months (or vacations every 12 months), and that we lose our laptops or our clients trash our repositories. Some of us even care about cryptographic sanitation, and insist on locking our private keys in our secured laptop, in the home vault, with guards outside. Which means we can't read a thing from our work account.
Real work is done with a conspiracy of more than 2. It turns out that with around 6 people in the ring, someone is AFK ("away from keys") all the time. So someone cannot read and/or write. This means either that some are tempted to write in clear text (shame!), or that we are all running around Alice-Bobbing each other. All the time.
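The arithmetic behind that observation fits in a few lines; the availability figure p below is an assumed, generous one.

```python
# If each of n ring members has their keys to hand with probability p at any
# given moment, the whole ring can exchange encrypted mail with probability
# p**n. p = 0.9 is an assumed, generous figure.

p, n = 0.9, 6
print(f"everyone has their keys: {p**n:.0%}")   # ~53%: half the time, someone is AFK
```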
Now, of course, we could simply turn on signing. This requires (a) a definition of signing, (b) written somewhere like a CPS, (c) which is approved and sustainable in agreements, (d) advised to the users who receive the different signature meanings, and (e) acceptance of all the preceding points as meaningful. These are very tough barriers, so don't hold your breath, if we are talking about emails that actually mean something (kid sister, knock yourself out...).
Turning on signing also doesn't solve the core problem of key management; it just smothers it somewhat by distributing the keys at every chance we get. It still doesn't solve the problem of how to get the keys when you lose your repository, as you are then locked out of posting until you have everyone's keys. In every conspiracy, there's always one important person who's notoriously shy of being called Alice.
This exposes the core weakness of key management. Public key cryptography is an engineering concept for 2 people, and beyond that it scales badly. S/MIME's digsig-distro is just a hack, and something like OpenPGP's key server mechanism would be far more sensible, far more scalable. However, I wonder if we can improve even on OpenPGP, as the mere appearance of a centralised server reduces robustness by definition (TTPs, CVP, central points of attack, etc.).
If an email can be used to send the key (signed), then why can't an email be used to request a key? Imagine that we added an email convention, a little like those old maillist conventions, that did this:
Subject: GETSMIME fc@example.com
and send it off. A mailclient like Thunderbird could simply reply by forwarding the key. (How this is done is an exercise for the reader. If you can't think of 3 ways in the next 3 minutes, you need more exercise.)
Now, the interesting thing about that is that if Tbird could respond to the GETSMIME, we wouldn't need key servers. That is, Alice would simply mail Bob with "GETSMIME Carol@example.com" and Bob's client could respond, perhaps even without asking because Bob already knows Alice. Swarm key distro, in other words. Or, Dave could be a key server that just sits there waiting for the requests, so we've got a key server with no change to the code base.
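As a sketch of what such a responder might look like, here's a minimal Python auto-responder. Everything in it is hypothetical: the GETSMIME convention itself, the cert store, the addresses; nothing here is an existing Thunderbird feature.

```python
# A sketch of the hypothetical GETSMIME convention: watch for
# "GETSMIME <addr>" subjects and mail back the matching certificate.
import email
import smtplib
from email.message import EmailMessage

CERT_STORE = {  # assumed local store: address -> PEM certificate
    "fc@example.com": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
}

def answer_getsmime(raw_msg: bytes, smtp_host: str = "localhost") -> bool:
    msg = email.message_from_bytes(raw_msg)
    subject = msg.get("Subject", "")
    if not subject.startswith("GETSMIME "):
        return False                      # not a key request: ignore
    wanted = subject.split(None, 1)[1].strip()
    cert = CERT_STORE.get(wanted)
    if cert is None:
        return False                      # we don't hold that key: stay silent
    reply = EmailMessage()
    reply["To"] = msg.get("Reply-To", msg.get("From", ""))
    reply["From"] = "keybot@example.com"  # hypothetical responder address
    reply["Subject"] = f"SMIMECERT {wanted}"
    reply.set_content(cert)               # forward the public cert, nothing private
    with smtplib.SMTP(smtp_host) as s:
        s.send_message(reply)
    return True
```

Note the swarm property: any client running something like this is a key server, with no central point to attack.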
In closing, I'll just remind you that the opinion of this blog is that the real solution to the almost infinite suckiness of S/MIME is for the clients to generate the keys opportunistically, and enable the use of crypto as and when possible.
This solution will never be ideal, and that's because we have to deal with email's legacy. But the goal with email is to get to some crypto, some of the time, for some of the users. Our current showing is almost no crypto, almost none of the time, for almost none of the users. Pretty dire results, and nothing a software engineer would ever admit to.