August 22, 2006

Identity v. anonymity -- that is not the question

An age-old debate has sprung up around something called Identity 2.0. David Weinberger related it to transparency (and thus to open governance). David argues that transparency is good, but that it has its limits as an overarching framework:

So, all hail transparency... except...

...Except it's important that we preserve some shadows. Opaqueness in the form of anonymity protects whistleblowers and dissidents, women being beaten by their husbands, girls looking for abortion advice, people working through feelings of shame about who they are, and more. Anonymity and pseudonymity allow people to participate on the Web who perhaps aren't as self-confident as the loudest voices we hear there. It's even been known to enable snarky bloggers to comment archly on their industry, even if sometimes they play too rough.

Where he's heading is the suggestion that anonymity is a good thing in and of itself, and something that should be built into Identity 2.0. I'll leave you to wonder whether this is a political point or a savvy marketing angle aimed at early adopters.

Ben Laurie made a more incisive plea for anonymity:

But the point is this: unless you have anonymity as your default state, you don’t get to choose where on that spectrum you lie.

Eric Norlin says

Further, every “user-centric” system I know of doesn’t seek to make “identity” a default, so much as it seeks to make “choice” (including the choice of anonymity) a default.

as if identity management systems were the only way you are identified and tracked on the ‘net. But that’s the problem: the choices we make for identity management don’t control what information is gathered about us unless we are completely anonymous apart from what we choose to reveal.

Unless anonymity is the substrate, choice in identity management gets us nowhere. This is why I am not happy with any existing identity management proposal - none of them even attempt to give you anonymity as the substrate.

There is a bit of an insiders' secret here that Ben almost gets to the nub of (my emphasis). Unfortunately, nobody will believe the secret until they've lost their shirt for not believing it, and even then, it's optional. FWIW, I'll reveal it here; you get to keep your shirt on and your beliefs intact, though.

Identity systems fail and fail again. From the annals of financial cryptography (Rights layer), the reason is that the design starts from a position of "identity," an assumption that is embedded as a meta-requirement from the very earliest days. I sometimes call this the "one true identity problem," meaning there isn't one true identity. That is, identity is simply too soft a concept to be called a requirement, and is thus a flawed foundation on which to build a large scale project. (And, if your concept calls for identity, assume large scale from the start.)

In order to accommodate different views of what identity is -- something necessary because it's not only soft but contentious in more ways than intimated by David above -- we need a flexible system. A very flexible system, indeed a system so exceedingly flexible that it will work without any identity at all.

In fact, it turns out that the best way to build an identity system is to build an identity-free system, and then to layer your favourite identity flavour over the top. The system should aim to work well without any identity, so it can support all. To develop this further, by far the best base for such a system is a public key based pseudonym system. It is easy to put at least one identity system over the top of a standard pseudonym system, and it is plausible to put most identity concepts over a really good pseudonym system.
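To make the layering concrete, here is a toy sketch (Python, using Ed25519 signatures from the "cryptography" package; the names are purely illustrative and not any particular system's design). The base layer knows only keys and signatures; an identity layer, if wanted, is nothing more than another signed statement binding a claim to a pseudonym's key.

    # Toy sketch: an identity-free pseudonym base, with identity as an optional layer on top.
    # Requires the third-party "cryptography" package; all names are illustrative only.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
    from cryptography.exceptions import InvalidSignature

    class Pseudonym:
        """A pseudonym is nothing but a keypair; make as many as you like, discard at will."""
        def __init__(self):
            self._key = Ed25519PrivateKey.generate()
            self.public = self._key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

        def sign(self, message: bytes) -> bytes:
            return self._key.sign(message)

    def verify(public: bytes, message: bytes, signature: bytes) -> bool:
        """Base layer: the only question it can answer is 'same key as before?'"""
        try:
            Ed25519PublicKey.from_public_bytes(public).verify(signature, message)
            return True
        except InvalidSignature:
            return False

    # Optional identity layer: some authority signs a binding between a claim and a
    # pseudonym's key. The base layer neither knows nor cares whether a binding exists.
    def issue_binding(authority: Pseudonym, claim: bytes, subject_public: bytes) -> bytes:
        return authority.sign(claim + b"|" + subject_public)

    nym = Pseudonym()
    sig = nym.sign(b"pay 10 units to bob")
    assert verify(nym.public, b"pay 10 units to bob", sig)

    registrar = Pseudonym()       # plays the role of an identity provider, if one is wanted
    binding = issue_binding(registrar, b"alice@example.com", nym.public)
    assert verify(registrar.public, b"alice@example.com|" + nym.public, binding)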

Yet, it is somewhere between traumatic and impractical to do it any other way. As an example, look at what Kim Cameron writes:

You can call this anonymity, or you can call this “not needlessly blabbing everything about yourself”.

Sites should only ask for identifying information when there is some valid and defensible reason to do so. They should always ask for the minimum possible. They should keep it for the shortest possible time. They should encrypt it so it is only available to systems that must access it. They should ensure as few parties as possible have access to such systems. And if possible, they should only allow it to be decrypted on systems not connected to the internet. Finally, they should audit their conformance with these best practices.

Once you accept that release of identifying information should be proportionate to well-defined needs - and that such needs vary according to context - it follows that identity must “be a spectrum”.

In order to keep his identity architecture intact, Kim Cameron proposes a set of guidelines. Yet, these are implausible on the face of it, and only serve to surface the trap that Kim is in -- "Identity" is his meta-requirement, and having written it up as a series of "Identity Laws", he's pretty much bound to it. "It's the Law," he might say!

In order to maintain his concept of identity in the face of an apparent requirement for anonymity, Kim is therefore tempted into flirting with extra-system guidelines that will either be ignored or will impose intolerable costs on the system. It's from twister-grade designs like these that we see the steady series of failures of identity systems.

The short answer to the debate is, if you want your system to work, make pseudonymity the base (and thus the default, if that's how you want to characterise it). An identity system that is built without using pseudonymity as the base or substrate simply has a much lower chance of working, because you won't be able to flexibly adjust the design when you discover the shortfalls in your concept of identity.

On the positive side, what is interesting about this debate is that people are debating the relative benefits of either approach in a high-level, strategic sense. They seem to have moved on from the old cypherpunk v. state debate, both of those arguments having lost their credibility over time. Perhaps this is because this time someone (Microsoft?) is building it for commercial purposes, and that focusses the debate much more clearly. It's much easier to build a system when someone pays for it, and is willing to pay for their mistakes.

I would be remiss if I didn't close with some examples of strong (a.k.a. public key) pseudonymous systems: AADS (Anne and Lynn Wheeler) and Ricardo (Howland and Grigg) both proved the concept in the hardest of fields -- hard assets. Skype for VoIP and SSH for communications show it outside finance. Soft systems include most chat systems, which allow you to create pseudonyms and dispose of them after the weekend, and any website that creates a login for you.

Failed systems include X.509, which includes an absolute assumption of Identity, and thus fails to work as an identity system. PGP deliberately does not assume identity. It succeeds in providing Identity, but has trouble in other areas.

For those interested in the wider debate, David includes a useful summary.

Posted by iang at August 22, 2006 01:16 PM | TrackBack
Comments

in the mid-90s the x9a10 financial standard working group was given the requirement to preserve the integrity of the financial infrastructure for all retail payments. the resulting protocol was x9.59 which we claimed was privacy agnostic
http://www.garlic.com/~lynn/x959.html#x959
http://www.garlic.com/~lynn/subpubkey.html#x959

a transaction had an account number and was digitally signed ... and provided for end-to-end authentication and integrity. any binding between an account number and an identity ... was outside the x9.59 protocol.
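roughly, the shape in code terms is something like this toy sketch (python with ed25519 signatures from the "cryptography" package ... purely illustrative, not the actual x9.59 message format): the account authority holds a public key registered against the account number, the transaction is signed by the account holder, and any identity binding sits outside the protocol.

    # toy illustration only -- NOT the real x9.59 message format.
    # the account authority keeps a public key per account number; a transaction is the
    # account number plus payment details, digitally signed; identity is outside the protocol.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    registered_keys = {}                 # account number -> public key, held by the authority

    def open_account(account_no: str) -> Ed25519PrivateKey:
        key = Ed25519PrivateKey.generate()
        registered_keys[account_no] = key.public_key()
        return key                       # private key stays with the account holder

    def make_transaction(key: Ed25519PrivateKey, account_no: str, details: str):
        payload = f"{account_no}|{details}".encode()
        return payload, key.sign(payload)

    def authorise(payload: bytes, signature: bytes) -> bool:
        account_no = payload.split(b"|", 1)[0].decode()
        try:
            registered_keys[account_no].verify(signature, payload)
            return True
        except (KeyError, InvalidSignature):
            return False

    # a data breach that exposes the account number (or even a whole signed transaction)
    # does not let the attacker author a *new* fraudulent transaction without the key.
    holder_key = open_account("12345678")
    payload, sig = make_transaction(holder_key, "12345678", "pay 10.00 to merchant-42")
    assert authorise(payload, sig)
    assert not authorise(b"12345678|pay 999.00 to attacker", sig)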

part of this was the realization by numerous institutions in the mid-90s that the x.509 "identity" certificates represented enormous privacy and liability exposures. this led to some of the abortions that attempted to preserve the x.509 infrastructures while eliminating most identity features ... at the time, periodically referred to as relying-party-only certificates
http://www.garlic.com/~lynn/subpubkey.html#rpo

and we were repeatedly able to show that such relying-party-only certificates were redundant and superfluous.

the theme about end-to-end authentication and integrity was also somewhat the theme of the naked transaction threads here:
https://financialcryptography.com/mt/archives/000744.html
https://financialcryptography.com/mt/archives/000745.html
https://financialcryptography.com/mt/archives/000747.html
https://financialcryptography.com/mt/archives/000749.html

a lot of SSL use has been hiding transactions during internet transit ... not so much for privacy purposes but for integrity purposes and fraud prevention. however, that left the transactions exposed at thousands of other points. this is somewhat my old post about security proportional to risk
http://www.garlic.com/~lynn/2001h.html#61

for instance, lots of the data breaches in the news involve attackers being able to use the exposed information for fraudulent transactions. Neither SSL nor x9.59 addresses such data breaches. however, x9.59 eliminates the possibility that the attackers can use the acquired information for generating fraudulent transactions

from the perspective of the x9a10 working group effort ... a lot of the other activities from the period had extremely poorly defined threat models and even less linkage between their possible efforts and exactly how such efforts represented specific countermeasures for specific threats.

recent postings in another forum related to some SSL use:
http://www.garlic.com/~lynn/aadsm25.htm#17 Hamiltonian path as protection against DOS
http://www.garlic.com/~lynn/aadsm25.htm#19 Hamiltonian path as protection against DOS

Posted by: Lynn Wheeler at August 22, 2006 11:10 AM

for other drift ... we had been called in to help wordsmith the cal. state electronic signature legislation (and later federal electronic signature legislation)
http://www.garlic.com/~lynn/subpubkey.html#signature

one of the participating industry organizations was also involved in various privacy regulation and legislation efforts. they had done a survey regarding the primary driving factors behind privacy regulations and legislation and found them to be

1) identity theft
2) denial-of-service (by institutions to individuals)

more recently there have been efforts by various organizations to further clarify identity theft ... into

* account fraud (using stolen information to perform fraudulent transactions against existing accounts)

* Identity fraud (using stolen information to open new accounts in the name of the victim)

a large portion of the data breaches involve the "account fraud" flavor of identity theft.

as mentioned in the previous comment
http://www.garlic.com/~lynn/aadsm25.htm#20 Identity v. anonymity -- that is not the question

x9.59 doesn't have a countermeasure for data breaches ... again the old posting about security proportional to risk:
http://www.garlic.com/~lynn/2001h.html#61

however x9.59 does have a countermeasure against account fraud ... i.e. being able to use the information from data breaches (or other mechanisms) as part of performing fraudulent transactions against existing accounts.

and for other privacy drift ... i was co-author for the financial industry x9.99 PIA privacy standard ... and as part of that did work on a merged privacy taxonomy and glossary
http://www.garlic.com/~lynn/index.html#glosnote

which includes sources from EUDPD, FTC, GLBA, HIPAA, OECD and OMB.

Posted by: Lynn Wheeler at August 22, 2006 12:32 PM

Ironically, a few of the blogs discussing anonymity do not permit anonymous or even plausibly efficient pseudonymous commentary. So much for that.

One comment-shy blog, "Ruminations on Identity", mentioned "Eric Norlin" on another comment-shy blog saying:

http://bderidder.wordpress.com/2006/08/15/do-you-really-think-you-are-anonymous/
http://blogs.zdnet.com/digitalID/?p=61

"All of that nasty, real-world talk aside — the question now becomes: Should the online world reflect the real-world, or not?"

Of course not, and here's why: anonymity is not a principle, it is a design, based on the physics and economics of the environment. Anonymity strictly means you can write something on a piece of paper and not leave any identifying marks. That is, consider pamphlets distributed on some political issue. We can relatively easily create "anonymity" by distributing pamphlets with no author's name on them, thus ensuring that the relationship between the pamphlet and the person who expressed the opinion is obscured.

Trying to do that on the net is hard -- very hard. Better to align one's designs with the capabilities of the net. As a further, contrasting example, when David Chaum was trying to create digital cash, what he did was try to duplicate the untraceability of coins as they moved from hand to hand. The result was elegant, one of the most interesting designs of modern cryptography, but mostly impractical, because it fought against the net's natural topography.

The same goes for anonymity. Far better to think in terms of pseudonymity, which is not anonymity, as it allows linking between different actions.

Posted by: Iang at August 24, 2006 04:22 AM

Hello Ian,

CardSpace allows for the exact same "pseudonymity" as AADS and Ricardo: nothing more than a smoke-and-mirrors version of genuine pseudonymity. In fact, the approach is the stereotypical "trivial" approach that scientists use to explain why getting genuine strong pseudonymity is a hard problem:-). With the smoke-and-mirrors approach, the link between issuance and presentation of "tokens" is not broken; as a result, issuers can track and link all of a user's actions by simply comparing notes with relying parties (unless issuers do not know the users themselves to whom they issue "pseudonyms", but then there is no or at best very little security benefit in having issuers - unless one solves the problem of having genuine strong pseudonymity!). As such, the "trivial" attempts provide no more "pseudonymity" than third-party cookies do - in fact, they are worse because the identifiers (crypto keys) that can be used to trace and link all "pseudonymous" actions are universally unique. Saying that the approach provides pseudonymity is the same as saying that having a central party assign a random alias to the author of a book suffices to make the author pseudonymous - that only holds if the assigner is in essence an agent of the book author that is fully trusted by the book author (in essence, the case of self-assignment).

Chaum's original proposals, in contrast, and improved proposals in the spirit of his breakthrough work (such as Credentica's Digital Credentials and IBM's IDEMIX), DO provide for genuine strong pseudonymity. A pseudonymous transaction is accomplished by simply reusing the same untraceable token. Anonymity occurs when pseudonymity is avoided by showing the same token no more than once (or, in IDEMIX, proving knowledge of ALL of its cryptographic data constituents using a "zero-knowledge" proof of knowledge).
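In code terms, the blinding idea looks roughly like this toy sketch (textbook RSA with tiny parameters and no padding or hashing - purely illustrative, nothing like the real Credentica or IDEMIX protocols): the issuer signs a blinded value and never sees the token that is later presented, so issuance and presentation cannot be linked by comparing notes.

    # Toy sketch of Chaum-style blinding with textbook RSA (tiny, insecure parameters).
    import math, random

    # Issuer's RSA key (illustrative only).
    p, q = 1009, 1013
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))        # modular inverse (Python 3.8+)

    def issuer_sign(blinded: int) -> int:
        """The issuer signs whatever blinded value it is handed; it never sees the token."""
        return pow(blinded, d, n)

    # User side: blind the token, have it signed, then unblind.
    token = random.randrange(2, n)           # the token the user will later present
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    blinded = (token * pow(r, e, n)) % n     # what the issuer actually sees
    signature = (issuer_sign(blinded) * pow(r, -1, n)) % n   # now an ordinary signature on token

    # Relying-party side: a valid issuer signature on a token the issuer has never seen.
    assert pow(signature, e, n) == token
    # Reuse the same token for pseudonymity; show each token only once for anonymity.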

- Stefan

Posted by: Stefan at October 23, 2006 03:15 PM

Stefan, Thanks for the comment. I'll have to read it carefully to find something to disagree with ;-)

But, yes, I do agree that CardSpace allows for the basic form(s) of pseudonymity. In a sense I'm hoping that if we can keep poking Microsoft on this point, they'll leave that opportunity open, leaving some room for further developments and to save their own bacon when their preferred path collapses.

What then is "genuine" strong pseudonymity? Pseudonymity always outsources the problem of tracking, which is another way of saying it is definitely possible. (Pseudonymity is a half measure if your goal is privacy and nothing else, but a full measure if your goal is robustness.)

Can something do better than that?

Posted by: Iang at October 23, 2006 04:00 PM