June 12, 2016

Where is the Contract? - a short history of the contract in Financial Cryptography systems

(Editor's note: Dates are approximate. Written in May of 2014 as an educational presentation to lawyers, notes forgotten until now. Freshened 2016.09.11.)

Where is the contract? This is a question that has bemused the legal fraternity, bewitched the regulator, and sent the tech community down the proverbial garden path. Let's track it down.

Within the financial cryptography community, we have seen the discussion of contracts in approximately these ways:

  1. Smart Contracts, as performance machines with money,
  2. the Ricardian Contract, which captures the writings of an agreement,
  3. Compositions: of elements such as the "offer and acceptance" agreement into a Russian Doll Contracts pattern, or of clause-code pairs, or of split contract constructions.

Let's look at each in turn.

a. Performance

a(i) Nick Szabo theorised the notion of smart contracts as far back as 1994. His design postulated the ability of our emerging financial cryptography technology to automate the performance of human agreements within computer programs that also handled money. That is, they are computer programs that manage the performance of a contract with little or no human intervention.

At an analogous level at least, smart contracts are all around. So much of the performance of contracts is now built into the online services of corporations that we can't even count them anymore. Yet these corporate engines of performance were written once then left running forever, whereas Szabo's notion went a step further: he suggested smart contracts as more of a general service to everyone: your contractual-programmer wrote the smart contract and then plugged it into the stack, or the service or the cloud. Users would then come along and interact with this machine, to get services.

a(ii). Bitcoin. In 2009 Bitcoin deployed a limited form of Smart Contracts in an open service or cloud setting called the blockchain. This capability was almost a side-effect of the versatile scripting built into its payment transactions. After its author, Satoshi Nakamoto, left, the power of these smart contracts was reduced in scope somewhat due to security concerns.

To date, success has been limited to simple uses such as Multisig, which provides a separation-of-concerns governance pattern by allowing multiple signers to release funds.

If we look at the above graphic we can see a fairly complicated story that we can now reduce into one smart contract. In a crowd funding, a person will propose a project. Many people will contribute to a pot of money for that project until a particular date. At that date, we have to decide whether the pot of money is enough to properly fund the project and if so, send over the funds. If not, return the funds.

To code this up, the smart contract has to do these steps:

  1. describe the project, including a target value v and a strike date t.
  2. collect and protect contributions (red, blue, green boxes)
  3. on the strike date t, count the total, and decide on option 1 or 2:
    1. if the contributions reach the target v, pay the total over to the owner (green arc), else
    2. if the contributions fall short of the target v, pay them all back to the funders (red and blue arcs).
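
The logic is small enough to sketch in a few lines. This is not Bitcoin script or any particular platform's contract language, just the escrow logic in Python; the class name and the pay() callback are illustrative assumptions:

    # Sketch of the crowdfunding logic above -- illustrative only,
    # not Bitcoin script or any real platform's contract language.
    class Crowdfund:
        def __init__(self, owner, target_v, strike_t, pay):
            self.owner = owner            # step 1: describe the project
            self.target_v = target_v      #         target value v
            self.strike_t = strike_t      #         strike date t
            self.pay = pay                # callback that moves funds
            self.contributions = {}       # step 2: collected contributions

        def contribute(self, funder, amount, now):
            if now < self.strike_t:
                self.contributions[funder] = self.contributions.get(funder, 0) + amount

        def settle(self, now):
            if now < self.strike_t:
                return                    # too early to decide
            total = sum(self.contributions.values())
            if total >= self.target_v:
                self.pay(self.owner, total)          # option 1: fund the project
            else:
                for funder, amount in self.contributions.items():
                    self.pay(funder, amount)         # option 2: refund everyone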

A new service called Lighthouse now offers crowdfunding, but keep your eyes open for crowdfunding in Ethereum, as its smart contracts are more powerful.

b. Writings of the Contract

Back in 1996, as part of a startup doing bond trading on the net, I created a method to bring a classical 'paper' contract into touch with a digital accounting system such as a cryptocurrency. The form, which became known as the Ricardian Contract, was readily usable for anything that you could put into a written contract, beyond its original notion of bonds.

In short: write a standard contract such as a bond. Insert some machine-readable tags that would include the parties, amounts, dates, etc. that the program also needed to display. Then sign the document using a cleartext digital signature, one that preserves the essence as a human-readable contract. OpenPGP works well for that. This document can be seen on the left of this bow-tie diagram.



Then - hash the document using a cryptographic message digest function that creates a one-for-one identifier for the contract, as seen in the middle. Put this identifier into every transaction to lock in which instrument we're paying back and forth. The transactions start from one genesis transaction and fan out to many transactions with many users, all of them including the Ricardian hash; this fan-out is shown in the right-hand part of the bow-tie.
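
A minimal sketch of that locking step in Python: hash the clearsigned contract text once, then carry the digest in every transaction. The field names are illustrative, not any particular system's format:

    import hashlib

    def ricardian_hash(clearsigned_contract: bytes) -> str:
        """One-for-one identifier for the human-readable, signed contract text."""
        return hashlib.sha256(clearsigned_contract).hexdigest()

    def make_transaction(contract_hash: str, payer: str, payee: str, amount: int) -> dict:
        # Every transaction carries the contract's hash, locking in which
        # instrument the value refers to.
        return {
            "contract_hash": contract_hash,
            "payer": payer,
            "payee": payee,
            "amount": amount,
        }

    contract_text = b"""-----BEGIN PGP SIGNED MESSAGE-----
    ... the human-readable bond contract, its machine-readable tags, and signature ...
    -----END PGP SIGNED MESSAGE-----"""
    h = ricardian_hash(contract_text)
    tx = make_transaction(h, "issuer", "alice", 100)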

See the 2004 paper and the Wikipedia page on the Ricardian Contract. We then have a contract form that is readable by person and machine, and can be locked into every transaction - from the genesis transaction, value trickles out to all the others.

The Ricardian Contract is now emerging in the Bitcoin world. Enough businesses are looking at the possibilities of doing settlement and are discovering what I found in 1996 - we need a secure way to lock tangible writings of a contract onto the blockchain. A highlight might be NASDAQ's recent announcements, and Coinprism's recent work with the OpenAssets project [1, 2, 3]; some of the 2nd generation projects have incorporated it without much fuss.

c. Composition

c(i). Around 2006 Chris Odom built OpenTransactions, a cryptocurrency system that extended the Ricardian Contract beyond issuance. The author found:

"While these contracts are simply signed-XML files, they can be nested like russian dolls, and they have turned out to become the critical building block of the entire Open Transactions library. Most objects in the library are derived, somehow, from OTContract. The messages are contracts. The datafiles are contracts. The ledgers are contracts. The payment plans are contracts. The markets and trades are all contracts. Etc.

I originally implemented contracts solely for the issuing, but they have really turned out to have become central to everything else in the library."

In effect Chris Odom built an agent-based system using the Ricardian Contract to communicate all its parameters and messages within and between its agents. He also experimented with Smart Contracts, but I think they were a server-upload model.

c(ii). CommonAccord constructs small units containing matching smart code and prose clauses, and then composes these into full contracts in the browser. Once composed, the result can be read, verified and hashed a la Ricardian Contracts, and performed a la smart contracts.
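
As I understand the approach - and this is my sketch, not CommonAccord's actual code - each unit pairs a small prose template with the parameters the code needs, so composing is little more than filling the templates and hashing the result:

    import hashlib

    # Hypothetical clause templates; real CommonAccord units are richer and layered.
    clauses = [
        "This NDA is made between {party1} and {party2}.",
        "{party1} agrees to keep the Confidential Information of {party2} secret.",
    ]

    params = {"party1": "Acme", "party2": "Quake"}              # the machine-readable side
    prose = "\n\n".join(c.format(**params) for c in clauses)    # the human-readable side

    # Once composed, the text can be hashed a la Ricardian Contracts...
    contract_id = hashlib.sha256(prose.encode()).hexdigest()
    # ...and the same params handed to whatever smart-contract code performs it.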

c(iii) Let's consider person-to-person trading. With face-to-face trades, the contract is easy. With mail order it is harder, as we have to identify each component, follow a journey, and keep the paperwork. With the Internet it is even worse, because there is no paperwork; it's all pieces of digital data that might be displayed, might be changed, might be lost.

Shifting forward to 2014, OpenBazaar decided to create a version of eBay or Amazon and put it onto the Bitcoin blockchain. To handle the formation of the contract between people who are distant and anonymous, they make each component into a Ricardian Contract, and place each one inside the succeeding component until we get to the end.

Let's review the elements of a contract in a cycle:

✓ Invitation to treat is found on the blockchain, similar to a web page.
✓ offer by buyer
✓ acceptance by merchant
✓ (performance...)
✓ payment (multisig partner controls the money)

The Ricardian Contract finds itself as individual elements in the formation of the wider contract around a purchase. In each step, the prior step is included within the current contractual document. Like Lego blocks, we can create a bigger contract by building on top of smaller components, thus implementing the trade cycle in Chris Odom's vision of Russian Dolls.
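
A sketch of that nesting in Python, under my own assumptions about the format: each step's document embeds the hash of the prior step's document, so the final contract carries the whole chain of formation:

    import hashlib, json

    def wrap(step_name, body, prior_doc=None):
        """Each contractual step includes the hash of the step it builds on."""
        doc = {
            "step": step_name,          # e.g. listing, offer, acceptance, payment
            "body": body,
            "prior_hash": hashlib.sha256(prior_doc).hexdigest() if prior_doc else None,
        }
        return json.dumps(doc, sort_keys=True).encode()

    listing    = wrap("invitation_to_treat", {"item": "bicycle", "price": 120})
    offer      = wrap("offer",      {"buyer": "alice", "price": 115}, listing)
    acceptance = wrap("acceptance", {"merchant": "bob"},              offer)
    payment    = wrap("payment",    {"multisig": ["alice", "bob", "notary"]}, acceptance)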


Conclusion

In conclusion, the question of the moment was:

Where is the contract?

So far, as far as the technology field sees it, in three areas:

  • as performance - the Smart Contract
  • as writing - the Ricardian Contract
  • as composition - elements packaged into Russian Dolls, clause-code pairs and convergence as split contracts.

I see the future as convergence of these primary ideas: the parts or views we call smart & legal contracts will complement each other and grow together, being combined as elements into fuller agreements between people.

For those who think nothing much has changed in the world of contracts for a century or more, I say this: We live in interesting times!

(Editor's reminder: Written in May of 2014, and the convergence notion fed straight into "The Sum of all Chains".)

Posted by iang at 07:35 PM | Comments (0)

June 28, 2015

The Nakamoto Signature

The Nakamoto Signature might be a thing. In 2014, the Sidechains whitepaper by Back et al introduced the term Dynamic Membership Multiple-party Signature or DMMS -- because we love complicated terms and long impassable acronyms.

Or maybe we don't. I can never recall DMMS nor even get it right without thinking through the words; in response to my cognitive poverty, Adam Back suggested we call it a Nakamoto signature.

That's actually about right in cryptology terms. When a new form of cryptography turns up and it lacks an easy name, it's very often called after its inventor. Famous companions to this tradition include RSA, for Rivest, Shamir and Adleman; and Schnorr, the name of the signature that Bitcoin wants to move to. Rijndael, our most popular secret-key algorithm, also comes from its inventors' names, although you might know it these days as AES. In the old days of blinded formulas to do untraceable cash, the frontrunners were signatures named after Chaum, Brands and Wagner.

On to the Nakamoto signature. Why is it useful to label it so?

Because, with this literary device, it is now much easier to talk about the blockchain. Watch this:

The blockchain is a shared ledger where each new block of transactions - the 10 minutes thing - is signed with a Nakamoto signature.

Less than 25 words! Outstanding! We can now separate this discussion into two things to understand: firstly: what's a shared ledger, and second: what's the Nakamoto signature?

Each can be covered as a separate topic. For example:

the shared ledger can be seen as a series of blocks, each of which is a single document presented for signature. Each block consists of a set of transactions built on the previous set. Each succeeding block changes the state of the accounts by moving money around; so given any particular state we can create the next block by filling it with transactions that do those money moves, and signing it with a Nakamoto signature.
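
A minimal sketch of that structure in Python, assuming nothing more than what is described above (a block is a set of transactions plus a link to the previous block):

    import hashlib, json

    def block_hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def next_block(prev_block: dict, transactions: list) -> dict:
        """Each block is a single document built on the previous one."""
        return {
            "prev_hash": block_hash(prev_block),   # chains the ledger together
            "transactions": transactions,          # the money moves for this block
        }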


Having described the shared ledger, we can now attack the Nakamoto signature:

A Nakamoto signature is a device to allow a group to agree on a shared document. To eliminate the potential for inconsistencies aka disagreement, the group engages in a lottery to pick one person's version as the one true document. That lottery is effected by all members of the group racing to create the longest hash over their copy of the document. The longest hash wins the prize and also becomes a verifiable 'token' of the one true document for members of the group: the Nakamoto signature.
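
To make the "longest hash" race concrete: in Bitcoin it is implemented as hunting for a nonce that pushes the block's hash below a difficulty target, i.e. a hash with a long enough run of leading zeros. A minimal, self-contained sketch in Python (the difficulty and field names are illustrative):

    import hashlib, json

    def nakamoto_sign(block: dict, difficulty: int = 4):
        """Race for a hash with `difficulty` leading zero hex digits; the winning
        (nonce, hash) pair is the verifiable token -- the 'Nakamoto signature'."""
        payload = json.dumps(block, sort_keys=True).encode()
        nonce = 0
        while True:
            h = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
            if h.startswith("0" * difficulty):
                return nonce, h
            nonce += 1

    nonce, sig = nakamoto_sign({"prev_hash": "...", "transactions": []})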

That's it, in a nutshell. That's good enough for most people. Others, however, will want to open that nutshell up and go deeper into the hows, whys and whethers of it all. You'll note I left plenty of room for argument above. Economists will look at the incentive structure in the lottery, and ask whether a prize in exchange for proof-of-work is enough to encourage an efficient agreement, even in the presence of attackers. Computer scientists will ask 'what happens if...' and search for ways to make it not so. Entrepreneurs might be more interested in what other documents can be signed this way. Cryptographers will pounce on that longest hash thing.

But for most of us, we can now move on to the real work. We haven't got time for minutiae. The real joy of the Nakamoto signature is that it breaks what was one monolithic incomprehensible problem into two more understandable ones. Divide and conquer!

The Nakamoto signature needs to be a thing. Let it be so!



NB: This article was kindly commented on by Ada Lovelace and Adam Back.

Posted by iang at 09:38 AM | Comments (1)

June 17, 2015

Cash seizure is a thing - maybe this picture will convince you

There are many, many people who do not believe that US police seize cash from people and use it to fund their budgets. The system is set up for the benefit of the police - budgetary plans are laid, you have no direct recourse to the law because the case is brought against the cash itself, and the proceeds are carved up.

Maybe this will convince you - if cash seizure by police wasn't a 'thing' we wouldn't need this chart:

Posted by iang at 08:00 PM | Comments (1)

May 12, 2015

Using CommonAccord to build "First Class Persons"

Paraphrasing from James' Twitter storm response to The Sum of All Chains:

A way to organize similarity of "First Class Person" is to build them from objects.

Andrea -> ID_She -> ID_Individual - ID

Andrea -> 55 -> Broadway -> Cambridge -> MA -> USA

which permits an example such as an NDA:

P1-> Acme -SIGN-> Andrea

P2-> Quake -SIGN-> Colleen

(in graph notation). The text is codified in layers as with CSS, as per an example template including work by #CooleyGo and @Emperor_Chan, and stored in GitHub with history.

The proof of the transaction is k=v pairs on the blockchain. The proof of the boilerplate is either on chain or off, as you wish.

If corporations have rights, maybe your fridge should assert some too? Here below are the three tabs of the CommonAccord interface, in this case asserting the identity of an Internet of Things device (ID-IoT).

Which leaves open - how to manage identity?

Posted by iang at 07:11 AM | Comments (0)

October 19, 2009

Denial of Service is the greatest bug of most security systems

I've had a rather troubling rash of blog comment failures recently. Not on FC, which seems to be OK ("to me"), but everywhere else. At about four failures in the last couple of days, I'm starting to get annoyed. I like to think that my time in writing blog comments for other blogs is valuable, and sometimes I think for many minutes about the best way to bring a point home.

But more than half the time, my comment is rejected. The problem is on the one hand overly sophisticated comment boxes that rely on exotica like javascript and SSO through some place or other ... and spam on the other hand.

These things have destroyed the credibility of the blog world. If you recall, there was a time when people used blogs for _conversations_. Now, most blogs are self-serving promotion tools. Trackbacks are dead, so the conversational reward is gone, and comments are slow. You have to be dedicated to want to follow a blog and put a comment on there, or stupid enough to think your comment matters, and you'll keep fighting the bl**dy javascript box.

The one case where I know clearly "it's not just me" is John Robb's blog. This was a *fantastic* blog where there was great conversation, until a year or two back. It went from dozens to a couple in one hit by turning on whatever flavour of the month was available in the blog system. I've not been able to comment there since, and I'm not alone.

This is denial of service. To all of us. And this denial of service is the greatest evidence of the failure of Internet security. Yet it is easy, theoretically easy, to avoid. Here, it is avoided by the simplest of tricks; maybe one spam per month comes my way, but if I got spam like others get spam, I'd stop doing the blog. Again, denial of service.

Over on CAcert.org's blog they recently implemented client certs. I'm not 100% convinced that this will eliminate comment spam, but I'm 99.9% convinced. And it is easy to use, and it also (more or less) eliminates that terrible thing called access control, which was delivering another denial of service: the people who could write weren't trusted to write, because the access control system said they had to be access-controlled. Gone, all gone.

According to the blog post on it:

The CAcert-Blog is now fully X509 enabled. From never visited the site before and using a named certificate you can, with one click (log in), register for the site and have author status ready to write your own contribution.

Sounds like a good idea, right? So why don't most people do this? Because they can't. Mostly they can't because they do not have a client certificate. And if they don't have one, there isn't any point in the site owner asking for it. Chicken & egg?

But actually there is another reason why people don't have a client certificate: it is because of all sorts of mumbo jumbo brought up by the SSL / PKIX people, chief amongst which is a claim that we need to know who you are before we can entrust you with a client certificate ... which I will now show to be a fallacy. The reason client certificates work is this:

If you only have a WoT unnamed certificate you can write your article and it will be spam controlled by the PR people (aka editors).

If you had a contributor account and haven’t posted anything yet you have been downgraded to a subscriber (no comment or write a post access) with all the other spammers. The good news is once you log in with a certificate you get upgraded to the correct status just as if you’d registered.

We don't actually need to know who you are. We only need to know that you are not a spammer, and you are going to write a good article for us. Both of these are more or less an equivalent thing, if you think about it; they are a logical parallel to the CAPTCHA or turing test. And we can prove this easily and economically and efficiently: write an article, and you're in.

Or, in certificate terms, we don't need to know who you are, we only need to know you are the same person as last time, when you were good.
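
That continuity test needs no names and no CA; it is nothing more than remembering the certificate fingerprint from last time. A minimal sketch in Python, under the assumption that the web server hands us the client certificate's DER bytes; the reputation states are illustrative:

    import hashlib

    known_commenters = {}   # fingerprint -> reputation, built up over time

    def fingerprint(client_cert_der: bytes) -> str:
        """Identity here is just 'the same key as last time', not a name."""
        return hashlib.sha256(client_cert_der).hexdigest()

    def classify(client_cert_der: bytes) -> str:
        fp = fingerprint(client_cert_der)
        if known_commenters.get(fp) == "good":
            return "author"        # wrote a good article last time: let them in
        known_commenters.setdefault(fp, "unknown")
        return "moderated"         # new or unproven key: the PR people check the post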

This works. It is an undeniable benefit:

There is no password authentication any more. The time taken to make sure both behaved reliably was not possible in the time the admins had available.

That's two more pluses right there: no admin de-spamming time lost to us and general society (when there were about 290 in the wordpress click-delete queue) and we get rid of those bl**dy passwords, so another denial of service killed.

Why isn't this more available? The problem comes down to an inherent belief that the above doesn't work. Which is of course a complete nonsense. Two weeks later, zero comment spam, and I know this will carry on being reliable because the time taken to get a zero-name client certificate (free, it's just your time involved!) is well in excess of the trick required to comment on this blog.

No matter the *results*, because of the belief that "last-time-good-time" tests are not valuable, the feature of using client certs is not effectively available in the browser. That which I speak of here is so simple to code up that it can actually be triggered from any website (which is how CAs get certificates into your browser in the first place: some simple code that causes your browser to do it all). It is basically the creation of a certificate key pair within the browser, with a no-name in it. Commonly called the self-signed certificate or SSC, these things can be put into the browser in about 5 seconds, automatically, on startup or on absence or whenever. If you recall that aphorism:

There is only one mode, and it is secure.

And if we contrast it to SSL, we can see what went wrong: there is an *option* of using a client cert, which is a completely insane choice. The choice of making the client certificate optional within SSL is a decision not only to allow insecurity in the mode, but also a decision to promote insecurity, by practically eliminating the use of client certs (see the chicken & egg problem).

And this is where SSL and the PKIX deliver their greatest harm. It denies simple cryptographic security to a wide audience, in order to deliver ... something else, which it turns out isn't as secure as hoped because everyone selects the wrong option. The denial of service attack is dominating, it's at the level of 99% and beyond: how many blogs do you know that have trouble with comments? How many use SSL at all?

So next time someone asks you, why these effing passwords are causing so much grief in your support department, ask them why they haven't implemented client certs? Or, why the spam problem is draining your life and destroying your social network? Client certs solve that problem.

SSL security is like Bismarck's sausages: "making laws is like making sausages, you don't want to watch them being made." The difference is, at least Bismarck got a sausage!

Footnote: you're probably going to argue that SSCs will be adopted by the spammers' brigade once there is widespread use of this trick. Think for a minute before you post that comment; the answer is right there in front of your nose! Also, you are probably going to mention all these other limitations of the solution. Think for another minute and consider this claim: almost all of the real limitations exist because the solution isn't much used. Again, chicken & egg, see "usage". Or maybe you'll argue that we don't need it now that we have OpenID. That's specious, because we don't actually have OpenID as yet (some few do, not all), and also, the presence of one technology rarely argues that another is not needed; only marketing argues like that.

Posted by iang at 10:47 AM | Comments (6) | TrackBack

August 06, 2008

_Electronic Signatures in Law_, Stephen Mason, 2007

Electronic signatures are now present in legal cases to the extent that while they remain novel, they are not without precedent. Just about every major legal code has formed a view in law on their use, and many industries have at least tried to incorporate them into vertical applications. It is then exceedingly necessary that there be an authoritative tome on the legal issues surrounding the topic.

Electronic Signatures in Law is such a book, and I'm now the proud owner of a copy of the recent 2007 second edition, autographed no less by the author, Stephen Mason. Consider this a review, although I'm unaccustomed to such. Like the book, this review is long: intro, stats, a description of the sections, my view of the old digsig dream, and finally 4 challenges I threw at the book to measure its paces. (Shorter reviews here.)

First the headlines: This is a book that is decidedly worth it if you are seriously in the narrow market indicated by the title. For those who are writing directives or legislation, architecting software of reliance, involved in the Certificate Authority business of some form, or likely to find themselves in a case or two, this could well be the essential book.

At £130 or so, I'd have to say that the Financial Cryptographer who is not working directly in the area will possibly find the book too much for mild Sunday afternoon reading, but if you have it, it does not dive so deeply and so legally that it is impenetrable to those of us without an LLB up our sleeves. For us on the technical side, there is welcome news: although the book does not cover all of the failings and bugs that exist in the use of PKI-style digital signatures, it covers the major issues. Perhaps more importantly, those bugs identified are more or less correctly handled, and the criticism is well-grounded in legal thinking that itself rests on centuries of tradition.

Raw stats: Published by Tottel Publishing. ISBN 978-1-84592-425-6. At over 700 pages, it includes comprehensive indexes of statutory instruments, legislation and cases that run to 55 pages, by my count, and a further 10 pages on United Kingdom orders. As well, there are 54 pages on standards, correspondents, resources and glossary, followed by a 22-page index.

Description. Mason starts out with serious treatments on issues such as "what is a signature?" and "what forms a good signature?" These two hefty chapters (119 pages) are beyond comprehensive but not beyond comprehension. Although I knew that the signature was a (mere) mark of intent, and it is the intent which is the key, I was not aware of how far this simple subject could go. Mason cites case law where "Mum" can prove a will, where one person signs for another, where a usage of any name is still a good signature, and, of course, where apparent signatures are rejected due to irregularities, and others accepted regardless of irregularities.

Next, there is a fairly comprehensive (156 pages) review of country and region legal foundations, covering the major Anglo countries, the European Union, and Germany in depth, with a chapter on international comparisons covering approaches, presumptions, liabilities and other complexities, and a handful of other countries. Then Mason covers electronic signatures comprehensively, taking them through parties and risks, liability, non-contractual issues, and evidence (230 pages). Finally, he wraps up with a discussion of digital signatures (42 pages) and data protection (12 pages).

Let me briefly summarise the financial cryptography view of the history of Digital Signatures: The concept of the digital signature had been around since the mid-1970s, firstly in the form of the writings by the public key infrastructure crowd, and secondly, popularised to a small geeky audience in the form of PGP in the early 1990s. However, deployment suffered as nobody could quite figure out the application.

When the web hit in 1994, it created a wave that digital signatures were able to ride. To pour cold water on a grand fire-side story, RSA Laboratories managed to convince Netscape that (a) credit cards needed to be saved from the evil Mallory, (b) the RSA algorithm was critical to that need, and (c) certificates were the way to manage the keys required for RSA. VeriSign was a business created by (friends of) RSA for that express purpose, and Netscape was happily impressed on the need to let other friends in. For a while everything was mom's apple pie and we'd all be rich: alongside VeriSign and friends, business plans claiming that all citizens would need certificates for signing purposes were floated around Wall Street, and this would set Americans back $100 a pop.

Neither the fabulous b-plans nor the digital signing dream happened, but to the eternal surprise of the technologists, some legislatures put money down on the cryptographers' dream to render evidence and signing matters "simpler, please." The State of Utah led the way, but the politicians' dream is now more clearly seen in the European Directive on Electronic Signatures, and especially in the Germanic attitude that digital signatures are as strong by policy as they are weak in implementation terms. Today, digital signatures are relegated to either tight vertical applications (e.g., Ricardian contracts), cryptographic protocol work (TLS-style key exchanges), or being unworkable misfits lumbered with the cross of law and the shackles of PKI. These latter embarrassments only survive in those areas where (a) governments have rolled out smart cards for identity on a national basis, and/or (b) governments have used industrial policy to get some of that certificate love to their dependencies.

In contrast to the above dream of digital signatures, attention really should be directed to the mere electronic signature, because they are much more in use than the cryptographic public key form, and arguably much more useful. Mason does that well, by showing how different forms are all acceptable (Chapter 10, or summarised here): Click-wrap, typing a name, PINs, email addresses, scanned manuscript signatures, and biometric forms are all contrasted against actual cases.

The digital signature, and especially the legal projects of many nations, get criticised heavily. According to the cases cited, the European project of qualified certificates, with all its CAs, smart cards, infrastructure, liabilities, laws, and costs ad infinitum ... is just not needed. A PC, a word processor program and a scan of a hand signature should be fine for your ultimate document. Or a typewritten name, or the words "signed!" Nowhere does this come out more clearly than in the chapter on Germany, where the results deviate from the rest of the world.

Due to the German Government's continuing love affair with the digital signature, and the backfired attempt by the EU to regularise the concept in the Electronic Signature Directives, digital and electronic signatures are guaranteed to provide for much confusion in the future. Germany especially mandated its courts to pursue the dream, with the result that most of the German case results deal with rejecting electronic submissions to courts when they are not accompanied by a qualified signature (6 of 8 cases listed in Chapter 7). The end result would be simple if Europeans could be trusted to use fax or paper, but consider this final case:

(h) Decision of the BGH (Federal Supreme Court, 'Bundesgerichtshof') dated 10 October 2006,...: A scanned manuscript signature is not sufficient to be qualified as 'in writing' under §130 VI ZPO if such a signature is printed on a document which is then sent by facsimile transmission. Referring to a prior decision, the court pointed out that it would have been sufficient if the scanned signature was implemented into a computer fax, or if a document was manually signed before being sent by facsimile transmission to court.

How deliciously Kafkaesque! And how much of a waste of time is being imposed on the poor, untrustworthy German lawyer. Mason's book takes on the task of documenting this confusion, and pointing some of the way forward. It is extraordinarily refreshing to find that the first two chapters, over 100 pages, are devoted to simply describing signatures in law. It has been a frequent complaint that without an understanding of what a signature is, it is rather unlikely that any mathematical invention such as digsigs would come even close to mimicking it. And it didn't, as is seen in the 118-page romp through the act of signing:

What has been lost in the rush to enact legislation is the fact that the function of the signature is generally determined by the nature and content of the document to which it is affixed.

Which security people should have recognised as a red flag: we would generally not expect to use the same mechanism to protect things of wildly different values.

Finally, I found myself pondering these teasers:

Authenticate. I found myself wondering what the word "authenticate" really means, and from Mason's book I was able to divine an answer: to make an act authentic. What then does "authentic" mean, and what then is an "act"? Well, they are both defined as things in law: an "act" is something that has legal significance, and it is authentic if it is required by law and is done in the proper fashion. Which, I claim, is curiously different to whatever definition the technologists and security specialists use. OK, as a caveat, I am not the lawyer, so let's wait and see if I get the above right.

Burden of Liability. The second challenge was whether the burden of liability in signing has really shifted. As we may recall, one of the selling points of digital signatures was that once properly formed, they would enable a relying party to hold the signing party to account, something which was sometimes loosely but unreliably referred to as non-repudiation.

In legal terms, this would have shifted the burden of proof and liability from the recipient to the signer, and was thought by the technologists to be a useful thing for business. Hence, a selling point, especially to big companies and banks! Unfortunately the technologists didn't understand that burden and liability are topics of law, not technology, and for all sorts of reasons it was a bad idea. See that rant elsewhere. Still, undaunted, laws and contracts were written on the advice of technologists to shift the liability. As Mason puts it (M9.27 pp270):

For obvious reasons, the liability of the recipient is shaped by the warp and weft of political and commercial obstructionism. Often, a recipient has no precise rights or obligations, but attempts are made using obscure methods to impose quasi-contractual duties that are virtually impossible to comply with. Neither governments nor commercial certification authorities wish to make explicit what they seek to achieve implicitly: that is, to cause the recipient to become a verifying party, with all the responsibilities that such a role implies....

So how successful was the attempt to shift the liability / burden in law? Mason surveys this question in several ways: presumptions, duties, and liabilities directly. For a presumption that the sender was the named party in the signature, 6 countries said yes (Israel, Japan, Argentina, Dubai, Korea, Singapore) and one said no (Australia) (M9.18 pp265). Britain used statutory instruments to give a presumption to herself, the Crown only, that the citizen was the sender (M9.27 pp270). Others were silent, which I judge an effective absence of a presumption, and a majority for no presumption.

Another important selling point was whether the CA took on any especial presumption of correctness: the best efforts seen here were that CAs were generally protected from any liability unless shown to have acted improperly, which somewhat undermines the entire concept of a trusted third party.

How then are a signer and recipient to share the liability? Australia states quite clearly that the signing party is only considered to have signed, if she signed. That is, she can simply state that she did not sign, and the burden falls on the relying party to show she did. This is simply a restatement of the principle in the English common law, and in effect it states that digital signatures may be used, but they are not any more effective than others. Then, the liability is exactly as before: it is up to the relying party to check beforehand, to the extent reasonable. Other countries say that reliance is reasonable, if the relying party checks. But this is practically a null statement, as not only is it already the case, it is the common-sense situation of caveat emptor deriving from Roman times.

Although murky, I would conclude that the liability and burden for reliance on a signature is not shifted in the electronic domain, or at least governments seem to have held back from legislating any shift. In general, it remains firmly with the recipient of the signature. The best it gets in shiftyville is the British Government's bounty, which awards its citizens the special privilege of paying for their Government's blind blundering; same as it ever was. What most governments have done is a lot of hand-waving, while permitting CAs to utilise contract arrangements to put the parties in the position of doing the necessary due diligence. Again, same as it ever was, and decidedly no benefit or joy for the relying party is seen anywhere. This is no more than the normal private right to a contract or arrangement, and no new law nor regulation was needed for that.

Digital Signing, finally, for real! The final challenge remains a work-in-progress: to construct some way to use digital signatures in a signing protocol. That is, use them to sign documents, or, in other words, what they were sold for in the first place. You might be forgiven for wondering if the hot summer sun has reached my head, but we have to recall that most of the useful software out there does not take OpenPGP; rather, it takes PKI and x.509-style cryptographic keys and certificates. Some of these things offer to do things called signing, but there remains a challenge to make these features safe enough to be recommended to users. For example, my Thunderbird now puts a digital signature on my emails, but nobody - not it, not Mozilla, not CAcert, not anyone - can tell me what my liability is.

To address this need, I consulted the first two chapters, which lay out what a signature is, and by implication what signing is. Signing is the act of showing intent to give legal effect to a document; signatures are a token of that intention, recorded in the act of signing. In order, then, to use digital certificates in signing, we need to show a user's intent. Unfortunately, certificates cannot do that, as is repeatedly described in the book: mostly because they are applied by the software agent in a way mysterious and impenetrable to the user.

Of course, the answer to my question is not clearly laid out, but the foundations are there: create a private contract and/or arrangement between the parties, indicate clearly the difference between a signed and unsigned document, and add the digital signature around the document for its cryptographic properties (primarily integrity protection and confirmation of source).

The two chapters lay out the story for how to indicate intention in the English common law: it is simple enough to add the name, and the intention to sign, manually. No pen and ink is needed, nor more mathematics than that of ASCII, as long as the intention is clear. Hence, it suffices for me to write something like signed, iang at the bottom of my document. As the English common law will accept the addition of merely one's name as a signature, and the PKI school has hope that digital signatures can be used as legal signatures, it follows that both are required to be safe and clear in all circumstances. For the champions of either school, the other method seems like a reduction to futility, as neither seems adequate nor meaningful, but the combination may ease the transition for those who can't appreciate the other language.
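
In practice that combination is easy to mechanise. A sketch, assuming GnuPG is installed with a usable default signing key and using its standard --clearsign mode; the wrapper function is my own, not anyone's standard interface. The words carry the legal intent, and the digital signature is wrapped around them for its cryptographic properties:

    import subprocess

    def sign_with_intent(document: str, name: str) -> str:
        # The words carry the intent; the digsig carries the cryptography.
        text = document + f"\n\nSigned By: {name}\n"
        result = subprocess.run(
            ["gpg", "--clearsign"],      # OpenPGP cleartext signature
            input=text.encode(),
            capture_output=True,
            check=True,
        )
        return result.stdout.decode()

    print(sign_with_intent("I agree to the terms of the attached bond.", "iang"))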

Finally, I should close with a final thought: how does the book affect my notions as described in the Ricardian Contract, still one of the very few strong and clear designs in digital signing? I am happy to say that not much has changed, and if anything Mason's book confirms that the Ricardo designs were solid. Although, if I were upgrading the design, I would add the above logic. That is, as the digital signature remains impenetrable to the court, it behoves us to add the words seen below somewhere in the contract. Hence, no more than a field name-change, the tiniest tweak only, is indicated:

Signed By: Ivan


Posted by iang at 10:44 AM | Comments (0) | TrackBack

June 30, 2008

Cross-border Notarisations and Digital Signatures

My notes of a presentation by Dr Ugo Bechini at the Int. Conf. on Digital Evidence, London. As it touches on many chords, I've typed it up for the blog:

The European or Civil Law Notary is a powerful agent in commerce in the civil law countries, providing a trusted control of a high value transaction. Often, this check is in the form of an Apostille, which is (loosely) a stamp by the Notary on an official document asserting that the document is indeed official. Although it sounds simple, and similar to common law Notaries Public, behind the simple signature is a weighty process that may be used for real estate, wills, etc.

It works, and as Eliana Morandi puts it, writing in the 2007 edition of the Digital Evidence and Electronic Signature Law Review:

Clear evidence of these risks can be seen in the very rapid escalation, in common law countries, of criminal phenomena that are almost unheard of in civil law countries, at least in the sectors where notaries are involved. The phenomena related to mortgage fraud is particularly important, which the Mortgage Bankers Association estimates to have caused the American system losses of 2.5 trillion dollars in 2005.

OK, so that latter number came from Choicepoint's "research" (referenced somewhere here) but we can probably agree that the grains of truth sum to many billions.

Back to the Notaries. The task that they see ahead of them is to digitise the Apostille, which with some simplification is seen as a small text with a (dig)sig, which they have tried and tested. One lament common to all European tech adventures is that the Notaries, split along national lines, use many different systems: 7 formats indicating at least 7 different software packages, frequent upgrades, and of course, ultimately, incompatibility across the Eurozone.

To make notary documents interchangeable, there are (posits Dr Bechini) two solutions:

  1. a single homogeneous solution for digsigs; he calls this the "GSM" solution, whereas I thought of it as a potential new "directive failure".
  2. a translation platform; one-stop shop for all formats

A commercial alternative was notably absent. Either way, IVTF (or CNUE) has adopted and built the second solution: a website where documents can be uploaded and checked for digsigs; the system checks the signature, the certificate and the authority and translates the results into 4 metrics:

  • Signed - whether the digsig is mathematically sound
  • Unrevoked - whether the certificate has not been reported compromised or revoked
  • Unexpired - whether the certificate is still within its validity period
  • Is a notary - whether the signer is part of a recognised network of TTPs

In the IVTF circle, a notary can take full responsibility for a document from another notary when there are 4 green boxes above, meaning that all 4 things check out.
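
Schematically, the green-box decision is just the conjunction of those four checks. A sketch in Python with the actual signature, revocation and directory lookups left out; the names are placeholders, not the IVTF platform's real interface:

    from dataclasses import dataclass

    @dataclass
    class CheckResult:
        signed: bool       # digsig mathematically sound
        unrevoked: bool    # certificate not reported compromised or revoked
        unexpired: bool    # certificate still within its validity period
        is_notary: bool    # signer belongs to the recognised network of TTPs

    def four_green_boxes(r: CheckResult) -> bool:
        """A relying notary takes responsibility only if all four boxes are green."""
        return r.signed and r.unrevoked and r.unexpired and r.is_notary

    # Example: an expired certificate turns one box red, so responsibility is refused.
    print(four_green_boxes(CheckResult(signed=True, unrevoked=True, unexpired=False, is_notary=True)))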

This seems to be working: Notaries are now big users of digsigs, 3 million this year. This is balanced by some downsides: although they cover 4 countries (Deutschland, España, France, Italy), every additional country creates additional complexity.

Question is (and I asked), what happens when the expired or revoked certificate causes a yellow or red warning?

The answer was surprising: the certificates are replaced 6 months before expiry, and the signed messages themselves are only good for a few hours. So, instead of the document being archived with its digsig and then shared, a relying Notary goes back to the originating Notary to request a new copy. The originating Notary goes to his national repository, picks up his *original*, which was registered when the document was created, adds a fresh new digsig, and forwards it. The relying notary checks the fresh signature and moves on to her other tasks.

You can probably see where we are going here. This isn't digital signing of documents, as it was envisaged by the champions of same, it is more like real-time authentication. On the other hand, it does speak to that hypothesis of secure protocol design that suggests you have to get into the soul of your application: Notaries already have a secure way to archive the documents, what they need is a secure way to transmit that confidence on request, to another Notary. There is no problem with short term throw-away signatures, and once we get used to the idea, we can see that it works.

One closing thought I had was the sensitivity of the national registry. I started this post by commenting on the powerful position that notaries hold in European commerce, the presenter closed by saying "and we want to maintain that position." It doesn't require a PhD to spot the disintermediation problem here, so it will be interesting to see how far this goes.

A second closing thought is that Morandi cites

... the work of economist Hernando de Soto, who has pointed out that a major obstacle to growth in many developing countries is the absence of efficient financial markets that allow people to transform property, first and foremost real estate, into financial capital. The problem, according to de Soto, lies not in the inadequacy of resources (which de Soto estimates at approximately 9.34 trillion dollars) but rather in the absence of a formal, public system for registering property rights that are guaranteed by the state in some way, and which allows owners to use property as collateral to obtain access to the financial capital associated with ownership.

But, Latin America, where de Soto did much of his work, follows the Civil Notary system! There is an unanswered question here. It didn't work for them, so either the European Notaries are wrong in their assertion that this is the reason for no fraud in this area, or de Soto is wrong in his assertion as above. Or?

Posted by iang at 08:02 AM | Comments (1) | TrackBack

June 16, 2008

Digital Signing: new category for FC

A comment by Stephen Mason this morning caused me to realise that there is no digital signing category in FC. I have now added it, and you can see the link in the 'menu' section to the right of the main page, under Governance. Or click here.

Here's an older reference to an apparent application of digital signing of serious documents, spotted in the wild:

Great Lakes Educational Loan Services used an external e-signature service from DocuSign to help it deal with the flood of loan requests it gets around this time each year. It combined the service with its loan application system on its Web site. In the first two months, 80 percent of its 72,000 applicants used e-signatures, which cut its costs in this area by 75 percent, Musser said.

The rest of the article seems off-topic.

Posted by iang at 08:51 AM | Comments (0) | TrackBack

April 17, 2008

On the search for the perfect Identity Biometric: scratch Iris

Our world is obsessed with determining who you are. Part of this is working out who you are now, and then determining whether you were the same person 10 or 20 years back. Reliably. And that includes in the presence of some nasty attacker, who wants your money or your cooperation or worse.

In the race for the future biometric, here are today's favourites: fingerprints (criminals and visitors to America), facial pictures (passports), and eyes (some systems). As Richard Clayton points out, the latter dark horse has some disadvantages:

...Here is a summary of how I [John Daugman] established for the Society that the above portraits show the same person, by running my Iris Recognition algorithms on magnified images of the eye regions in the 1984 and 2002 photographs.

First I computed IrisCodes (see the mathematical explanation on this website) from both of her eyes as photographed in 1984. A processed portion of the 1984 photograph is shown here. (The superimposed graphics show the automatic localization of the iris and its boundaries, the scrubbing of some specular reflections from the eye, and a representation of the IrisCode.)

Then I computed IrisCodes from both of her eyes in the 2002 photograph. Those processed images can be seen here for her left and right eyes.

When I ran the search engine (the matching algorithm) on these IrisCodes, I got a Hamming Distance of 0.24 for her left eye, and 0.31 for her right eye. As may be seen from the histogram that arises when DIFFERENT irises are compared by their IrisCodes, these measured Hamming Distances are so far out on the distribution tail that it is statistically almost impossible for different irises to show so little dissimilarity. The mathematical odds against such an event are 6 million to one for her right eye, and 10-to-the-15th-power to one for her left eye.


Which unfortunately means that the biometric is too copyable. Scratch your Iris. If we can measure it from a photograph, that means I can photograph you and then insert your Iris into some other situation. Although a concrete attack is not yet clear, the theoretical possibility is strong, and as we do not have many systems using this process yet, the attacks won't be clear either.
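
For reference, the Hamming Distance that Daugman quotes is simply the fraction of bits on which two IrisCodes disagree (his real algorithm also masks out eyelids and reflections, which this sketch ignores):

    def hamming_distance(code_a: str, code_b: str) -> float:
        """Fraction of disagreeing bits between two equal-length IrisCode bit strings."""
        assert len(code_a) == len(code_b)
        disagreements = sum(a != b for a, b in zip(code_a, code_b))
        return disagreements / len(code_a)

    # e.g. 0.24 and 0.31, as in the comparison above, sit far below the roughly
    # 0.45-0.5 expected when two *different* irises are compared.
    print(hamming_distance("0110101001", "0111101011"))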

I wonder how long it will be before we get Iris cameras that can work in the open?

And, as Philipp Güring points out, it rather raises a bit of a difficulty for the perfect biometric. If a machine can grab it, how do we stop ... any machine ... grabbing all of them?

Posted by iang at 02:13 AM | Comments (4) | TrackBack

March 20, 2008

World's biggest PKI goes open source: DogTag is released

One of the frequently lamented complaints about PKI is that it simply didn't scale (IT talk for not delivering enough grunt to power a big user base), and there was no evidence to the contrary. Well, that's not quite true: there is one large-scale PKI on the planet:

Red Hat has teamed up with August Schell to run and support the U.S. Department of Defense’s (DoD) public key infrastructure (PKI). The DoD PKI is the world’s largest and most advanced PKI installation, supporting all military and civilian personnel throughout the DoD worldwide.

Red Hat Certificate Authority (CA) and Lightweight Directory Access Protocol (LDAP) are used to operate the DoD identity management infrastructure with August Schell providing hands-on support for the source-code-level implementation. The Red Hat Certificate System has issued more than 10 million digital credentials, many of which were issued immediately preceding conflicts in Iraq and Afghanistan.

It seems that its success or growth rode the wave of the recent Iraq military expedition (or war or whatever they call it these days). The reason for stressing the military context is that in such circumstances, things can be pushed out which would never evolve of their own pace in the government or private economy. This may make it a trendsetter, because it finally breached the barriers for the rest of the world. Or it may simply underline the reasons why it won't set any trends; only future history will tell us which it is.

Today's news is that the code behind the above PKI has just gone open source under the name of DogTag (a reference to the 2 metal identity tags that US servicemen and women wear around their necks). Bob Lord announces:

In December of 2004, Red Hat purchased several technologies and products from AOL. The most prominent of those products were the Directory Server and the Certificate System. Since then we opened the source code to the Directory Server (see http://directory.fedoraproject.org/ for all the details). However a number of factors kept the work to release the Certificate System on the back burner. That's about to change.

Today, I'm extremely happy to announce the release of the Certificate System source code to the world.

This isn't a “Lite” or demo version of the technology, but the entire source code repository, the one we use to build the Red Hat branded version of the product. It's the real deal.

Another barrier breached! This could change the map for small, boutique or startup CAs. For those who think that this will lead to an explosion of interest in crypto and CAs and PKIs, I cannot resist adding some thoughts from Frank Hecker, who commented:

It's ... the end of an era: When I was working at Netscape ten years ago we were dealing with patented crypto algorithms (RSA), classified crypto algorithms (FORTEZZA), proprietary crypto libraries (RSA BSafe) and crypto applications (Netscape Communicator, Netscape Enterprise Server, Netscape Certificate Management System), and crypto export control. All gone now, or at least gone for most practical purposes (e.g., export control).

Of course, now that people have all the open-source patent-free no-export-hassle crypto that they could possibly want, they're realizing that crypto in and of itself wasn't nearly the panacea they thought it was :-)

I cannot say it better!

Posted by iang at 04:08 AM | Comments (1) | TrackBack

March 06, 2008

Microsoft acquires Stefan Brands (patents and friends)

Interesting news: According to the posts over at identity corner, Microsoft is picking up (some of? all of?) Credentica's patent portfolio, and Stefan Brands himself will join the team.

Quick summary: Stefan Brands' patent portfolio is the "extension" of David Chaum's better-known blinding and privacy work into a comprehensive claims-based framework. I don't know how it works, but it does things like reveal age, preferences, gender, and so forth without breaching privacy. That is, it can reveal these things in a way that means no other information can be divined. (It can also do digital cash, and it has some advantages over Chaum's original blinding patents, but I don't recall why.)

Now, it has been pretty clear that this discussion has been going on for a long time. The reason? It's never easy to speculate on how big companies plan to do things, but I would say it like this: Brands' technology is being viewed as the next-generation CardSpace/InfoCard.

What that means in branding, packaging, and timeline is anyone's guess. The important point here is that CardSpace/InfoCard is Microsoft's first generation of framework where individual / commercial identities can live (nymous versus sold). Later on we'll see Brands' technology being used to add interesting things to that.

The significance of this is either huge or it's not; we'll know in a few years whether Microsoft are able to do something interesting here. This is not a given. Not because they haven't got the people and other assets to make this happen, but because they've got *all* the people and *all* the assets facing them, and those people and assets don't move easily.

By way of example, you probably saw my mention of nymous identities above. Yes, Microsoft. You already know what that means and don't believe it ... or you don't want it to happen because you don't understand it or you think the sky will fall in. Resistance is huge to new technologies, and, as we saw earlier today, the market for silver bullets is no respecter of sources.

Stefan's technology plus Microsoft's challenge takes this to the point where even *I* don't know what it means, so we are talking about really cracking up the equilibrium, not just shifting a few clicks like EV. (This also makes that Microsoft team the hottest place to work for the next few years in Rights, if not all of FC. Brush up on your CVs, guys.)

Posted by iang at 02:16 PM | Comments (0) | TrackBack

November 11, 2007

Oddly good news week: Google announces a Caps library for Javascript

Capabilities is one of the few bright spots in theoretical computing. In Internet software terms, caps can be simply implemented as nymous public/private keys (that is, ones without all that PKI baggage). The long and deep tradition of capabilities is not seriously challenged in the theory literature.

It is heavily challenged in the practical world in two respects: the (human) language is opaque and the ideas are simply not widely deployed. Consider this personal example: I spent many years trying to figure out what caps really was, only to eventually discover that it was what I was doing all along with nymous keys. The same thing happens to most senior FC architects and systems developers, as they end up re-inventing caps without knowing it: SSH, Skype, Lynn's x9.59, and Hushmail have all travelled the same path as Gary Howland's nymous design. There's no patent on this stuff, but maybe there should have been, to knock over the ivory tower.

These real world examples only head in the direction of caps, as they work with existing tools, whereas capabilities is a top-down discipline. Now Ben Laurie has announced that Google has a project to create a Caps approach for Javascript (hat tip to JPM, JQ, EC and RAH :).

Rather... than modify Javascript, we restrict it to a large subset. This means that a Caja program will run without modification on a standard Javascript interpreter - though it won’t be secure, of course! When it is compiled then, like CaPerl, the result is standard Javascript that enforces capability security. What does this mean? It means that Web apps can embed untrusted third party code without concern that it might compromise either the application’s or the user’s security.

Caja also means box in Spanish, which is a nice cross-over, as capabilities is like the old sandbox idea of the Java applet days. What does this mean, other than the above?

We could also point to the Microsoft project CardSpace (formerly InfoCard) and claim parallels, as that, at a simplistic level, implements a box for caps as well. Also, the HP research labs have a constellation of caps fans, but it is not clear to me what application channel exists for their work.

There are then at least two of the major, well-financed developers pursuing a path guided by theoretical work in secure programming.

What's up with Sun, Mozilla, Apple, and the application houses, you may well ask! Well, I would say that there is a big difference in views of security. The first-mentioned group, a few tiny isolated teams within behemoths, are pursuing designs, architectures and engineering that is guided by the best of our current knowledge. Practically everyone else believes that security is about fixing bugs after pushing out the latest competitive feature (something I myself promote from time to time).

Take Sun (for example, as today's whipping boy, rather than Apple or Mozo or the rest of Microsoft, etc).

They fix all their security bugs, and we are all very grateful for that. However, their overall Java suite becomes more complex as time goes on, and has seen few or no changes in the direction of security. Specifically, they've ignored the rather pointed help provided by the caps school (c.f., E, etc). You can see this in J2EE, which is a melting pot of packages and add-ons, so any security it achieves is limited to what we might call bank-level or medium-grade security: only secure if everything else is secure. (Still no IPC on the platform? Crypto still out of control of the application?)

Which all creates three views:

  1. low security, which is characterised by the coolness world of PHP and Linux: shove any package in and smoke it.
  2. medium security, characterised by banks deploying huge numbers of enterprise apps that are all at some point secure as long as the bits around them are secure.
  3. high security, where the applications are engineered for security, from ground up.

The Internet as a whole is stalled at the 2nd level, and everyone is madly busy fixing security bugs and deploying tools with the word "security" in them. Breaking through the glass ceiling and getting up to high security requires deep changes, and any sign of life in that direction is welcome. Well done Google.

Posted by iang at 09:51 AM | Comments (4) | TrackBack

September 10, 2007

Threatwatch - more data on cost of your identity

In the long-running threatwatch theme of how much a set of identity documents will cost you, Dave Birch spots new data:

Other than data breaches, another useful rule-of-thumb figure, I reckon, might come from identity card fraud since an identity card is a much better representation of a person's identity than a credit card record. Luckily, one of the countries with a national smart ID card just had a police bust: in Malaysia, the police seized fake MyKad, foreign workers' identity cards, work permits and Indonesian passports and said that they thought the fake documents were sold for between RM300 and RM500 (somewhere between $100 to $150) each. That gives us a rule-of-thumb of $20 for a "credit card identity" and $100, say, for a "full identity". Since we don't yet have ID cards in the U.K., I thought that fake passports might be my best proxy. Here, the police say that 1,800 alleged counterfeit passports recovered in a raid in North London were valued at £1m. If we round it up to 2,000 fakes, then that's £500 each. This, incidentally, was the largest seizure of fake passports in the U.K. so far and included 200 U.K. passports, which, according to police, are often considered by counterfeiters to be too difficult to reproduce. Not!

The point I actually wanted to make is not that these figures are very variable, which they are, but that they're not comparing apples with apples. Hence the simplistic "what's your identity worth?" question cannot be answered with a simple number.

OK, that's consistent with my long-standing estimate of 1000 (in the major units, pounds, dollars, euros) to get a set of docs. It is important to track this because if you are building a system based on identity, this gives you a solid number on which to base your economic security. E.g., don't protect much more than 1000 on the basis of identity, alone.
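To make the economics concrete, here is a toy policy check - a sketch only, the 1000 threshold being my estimate above rather than any standard:

    # If the value at risk exceeds roughly what a forged document set costs,
    # identity alone is not enough security.
    FORGED_IDENTITY_COST = 1000   # in major units: pounds, dollars, euros

    def identity_alone_is_enough(value_at_risk):
        return value_at_risk < FORGED_IDENTITY_COST

    print(identity_alone_is_enough(500))     # True  -- below the forgery cost
    print(identity_alone_is_enough(25000))   # False -- needs more than identity alone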

As a curious footnote, I recently acquired a new high-quality document from the proper source, and it cost me around 1000, once all the checking, rechecking, couriered documents and double phase costs were all added up. If a data set of one could be extrapolated, this would tell us that it makes no difference to the user whether she goes for a fully authentic set or not!

Luckily my experiences are probably an outlier, but we can see a fairly damning data point here: the cost of an "informal" document is far too similar to the cost of a "formal" document.

Postscript: It turns out that there is no way to go through FC archives and see all the various categories, so I've added a button at the right which allows you to see (for example) the cost of your identity, in full posted-archive form.

Posted by iang at 05:27 AM | Comments (1) | TrackBack

June 07, 2007

What is the DRM problem?

A Digital Rights Management system is a system to manage digital rights. But, if you read some news blogs, you get the impression that Apple has stopped managing the digital rights of its music sales (a.k.a. iTunes). E.g., 1, 2, 3.

No such. Managing digital rights can be done by putting an email address into a song. There is nothing in the business requirements of DRM that says it can't be broken, unless it was put there by an over-zealous cryptographer who has never owned a compact cassette recorder.

Breaking DRM is not what it is about. DRM is about creating a system to distribute the content to those who will pay, and make it hard for those who pay to avoid the system.

Note that this is not the same as stopping the distribution of content to those who won't pay. We don't care about them, as they won't pay. What we do care about is whether those that won't pay (a) get access to the content and (b) make it easy for those who will pay to get access to it. It's that second part that is the important part.

In all the history of MP3s, and indeed content of all forms, we have (a) in dumper-loads, and little or none of (b). Apple have come closest as a commercial enterprise, but they are still a long way from (b). If you think that this is wrong, help me (please!) with this little test: Tell me where the button is on my iTunes to get access to the paid content, unpaid?

Why is this so? It's simple to write but harder to grasp: it is because marketing is driven by economic laws, and in this case by one known as price discrimination. The DRM problem is to create a discrimination between those that will pay and those that will not. Sticking an email address in a song sounds like a way to do that, presuming that other things are going on too.

Posted by iang at 11:33 AM | Comments (1) | TrackBack

June 05, 2007

Identity resurges as a debate topic

RAH pointed last week to a series of blog postings on Identity. This seems to be a discussion between Ben Laurie, Kim Cameron and Stefan Brands. In contrast, Hasan points out the same debate is happening on slashdot. I'd vote for slashdot, this time. They say it's hard to do. They're right.

Why? As slashdot people suggest, this is a typical bottom-up versus top-down approach to a question you shouldn't be asking.

Drilling down, it's the same old story. High level managers say "we need to know who the consumer is." Or, as Dave says,

What is it about smart cards and health? Health ought to be one of the places where getting someone's identity right -- and being able to authenticate them quickly and efficiently -- is a driver.

Engineers in the space then address that problem, with varying degrees of modification of the original requirement. Note that the temptation introduced by David Chaum -- to add privacy architectures so as to address the perceived harms of things like linkability -- continues.

The people over at Microsoft, Credentica, and probably Google are trying to build the toolkit. What they are not doing is establishing a clear user-driven set of requirements. That's because they can't: they are platform providers, and they are trying to establish a one-size-fits-all approach to the Rights space. And then impose that on the users.

Instead, we should address the business problem at its core. Why do you need to know who the consumer is? Stefan Brands' techniques go *some* way towards suggesting this by pushing the notion of a claims-based toolkit based on sophisticated cryptography, but it is still only a suggestion, it's still a toolkit, and it still imposes bottom-up thinking on a top-down world.

The debate will rumble on, because the big(gest) corporations and governments are going to invest capital in this. In the direct FC space, the same thing happened in the 90s between Netscape, Sun and Microsoft. Then, as now, the business case was flawed.

A flawed case doesn't mean a failed business, necessarily. Instead, it suggests that the real battle is going on in the business strategy space, not in the FC space. This means we can be relatively relaxed about the various claims batting back and forth in the rights layer, and instead keep our eyes on the business battle.

Posted by iang at 11:31 AM | Comments (1) | TrackBack

April 02, 2007

The One True Identity -- cracks being examined, filled, and rotted out from the inside

Preaching to the converted again, but we all know that the critical flaw in the Identity push is that there isn't One True Identity. If we assume Identity is One and True in our designs, our systems or our society, this will come back to haunt us.

Now, in response to wide-scale evidence of failure of Identity-centric systems, there are emerging signs of people starting to realise that this is one of the foundations of America's Crisis of Identity. Here's one from an influential report by The Royal Academy of Engineering:

One of the issues that Dilemmas of Privacy and Surveillance - challenges of technological change looks at is how we can buy ordinary goods and services without having to prove who we are. For many electronic transactions, a name or identity is not needed; just assurance that we are old enough or that we have the money to pay. In short, authorisation, not identification should be all that is required. Services for travel and shopping can be designed to maintain privacy by allowing people to buy goods and use public transport anonymously. "It should be possible to sign up for a loyalty card without having to register it to a particular individual - consumers should be able to decide what information is collected about them," says Professor Nigel Gilbert, Chairman of the Academy working group that produced the report. "We have supermarkets collecting data on our shopping habits and also offering life insurance services. What will they be able to do in 20 years' time, knowing how many donuts we have bought?"

One of the technical people who know this -- that names and identity are no good for systems design -- is Gunnar Peterson. He points to someone called Mike who jumped on the Bandwagon:

Over the last few weeks, I’ve made an effort to become an OpenID power user. OK, ok, so maybe I’m just responding to the sound and the fury over this deceptively simple technology. But OpenID caught my imagination because it’s ostensibly something I get to own for myself—not something handed to me by the federal-industrial complex.

So, centralised, big-company naming schemes are bad, right? Therefore we need an open source, decentralised, every-ones-in-control alternative, right?

But don’t get me wrong: OpenID isn’t the problem here.

Bite the bullet. Copying the bad guy won't work.

OpenID simply calls into sharp focus something I’ve believed for years. It’s a kind of axiom, so I’d like to give it a name. I’ll call it, “identifiers.axiom.neunmike’s.axiomproxy.info”—that way you can easily refer to it unambiguously from anywhere. Here it is:
There are no identifiers, only attributes

Names are slippery. Most people have many more than one legal name, none of which are unique. They also have several dozen nicknames. There’s no practical way to get any of these every-day-use names onto a global namespace. And what’s a name after all but a synthetic attribute—a foreign key that we hope the receiving party stores somewhere so we can remember them later? Names are invaluable communication aids, but they have little to do with recognition, which is what’s at issue in most identity management contexts. Biologically, creatures don’t recognize others based on names but rather the confluence of attributes appearing within a certain context.

Right. And the unavoidable conclusion is that names don't cut it. Names are wrong for the job. What is right for the job is ... something else, but let's settle for that simple message.

"Names are no good." We need a catchy meme to invade the mindspace of the public on this one. Mike suggests:

Lao Tzu (who goes by several dozen names) had a pretty good post on this idea over 2000 years ago. In a section called “Ineffability,” he writes:
The Way that can be told of is not an Unvarying Way;
The names that can be named are not unvarying names.
It was from the Nameless that Heaven and Earth sprang;
The named is but the mother that rears the ten thousand creatures, each after its kind. (chap. 1, tr. Waley)

Not that it is encouraging to know that this bug has been in existence for as long as history has recorded names ... So the new European project of collecting info so as to breach nyms and find our One True Identity should come as no surprise:

In privacy-conscious Europe, some governments seek stricter rules on online anonymity

February 23, 2007 - 4:20AM

The cloak of online anonymity could be lifted in parts of Europe as some governments seek to make it easier to identify people who use fake names to set up e-mail accounts and Web sites.

The German and Dutch governments have taken the lead, writing proposals that would make the use of false or fake information illegal in opening a Web-based e-mail account and require phone companies to save detailed records, including when customers make calls, where and to whom.

The measures, none of which have yet become law, would not outlaw having false or misleading names on e-mail or other Internet addresses -- only providing false information to Internet service providers.

The aim, analysts say, is to make it easier for law enforcement officials to get information when they investigate crimes or terrorist attacks.

The Dutch! Tell me it ain't so! On the good news front, there is no truth to the rumour that privacy activists will be out of a job due to lack of interest.

Posted by iang at 07:09 PM | Comments (5) | TrackBack

January 09, 2007

Cat's Credit Card

Please read the following story, writes guest quizmeister Philipp, about Cat's Credit Card, and afterwards please answer the Puzzling Identity Question:

Bank of Queensland issues credit card to cat

Australia's Bank of Queensland has apologised for issuing a credit card to a customer's cat after its owner decided to test the bank's identity screening system.

The bank issued a credit card to Messiah the cat after its owner, Katherine Campbell from Melbourne, applied for a secondary card on her account under its name.

According to local press reports the cat was issued a Visa credit card with a A$4200 limit.

Campbell told reporters that the bank requested identification from Messiah but later sent a credit card without receiving any proof of ID. To make matters worse Campbell - who is the primary credit card holder - says she was not notified that a secondary credit card attached to her account had been issued.

The bank has apologised for the error but stated that people who apply for credit cards must sign to confirm the information provided is true. The bank says it will not be taking any legal action against Campbell in this instance.

And here is the question: Think twice!

Who is to blame?
[ ] The bank
[ ] The people working in the bank
[ ] The bad security technology; they should have used XYZ
[ ] The risk manager in the bank
[ ] The risk analysis of the bank
[ ] The credit card standards
[ ] The identity standards in that particular country
[ ] The general identity model
[ ] The missing species rights, allowing cats to have credit cards
[ ] The woman
[ ] The cat
[ ] Nobody
[ ] The one who didn't limit their liability beforehand
[ ] I don't know
[ ] All of the above
[ ] ________________


Best regards,
Philipp Gühring
PS: (;-)-CC (Smiley that shows a smiling cat with a creditcard)


Posted by iang at 12:13 PM | Comments (3) | TrackBack

October 10, 2006

NZ on Identity

It is almost but not quite a truism that if you make identity valuable, then you make identity theft economic, amongst other things. Here's New Zealand's take on the issue, at the end of a long article on government reform:

Let me share with you one last story: The Department of Transportation came to us one day and said they needed to increase the fees for driver's licenses. When we asked why, they said that the cost of relicensing wasn't being fully recovered at the current fee levels. Then we asked why we should be doing this sort of thing at all. The transportation people clearly thought that was a very stupid question: Everybody needs a driver's license, they said. I then pointed out that I received mine when I was fifteen and asked them: "What is it about relicensing that in any way tests driver competency?" We gave them ten days to think this over. At one point they suggested to us that the police need driver's licenses for identification purposes. We responded that this was the purpose of an identity card, not a driver's license. Finally they admitted that they could think of no good reason for what they were doing - so we abolished the whole process! Now a driver's license is good until a person is 74 years old, after which he must get an annual medical test to ensure he is still competent to drive. So not only did we not need new fees, we abolished a whole department. That's what I mean by thinking differently.

The rest of the article is very well worth reading, for a summary of NZ's economics successes.

Posted by iang at 06:28 AM | Comments (4) | TrackBack

August 22, 2006

Identity v. anonymity -- that is not the question

An age-old debate has sprung up around something called Identity 2.0. David Weinberger related it to transparency (and thus to open governance). David indicates that transparency is good, but it has its limits as an overarching framework:

So, all hail transparency... except...

...Except it's important that we preserve some shadows. Opaqueness in the form of anonymity protects whistleblowers and dissidents, women being beaten by their husbands, girls looking for abortion advice, people working through feelings of shame about who they are, and more. Anonymity and pseudonymity allow people to participate on the Web who perhaps aren't as self-confident as the loudest voices we hear there. It's even been known to enable snarky bloggers to comment archly on their industry, even if sometimes they play too rough.

Where he's heading is the suggestion that anonymity is a good thing in and of itself, and that it is something that should be in Identity 2.0. I'll leave you to wonder whether this is a political point or a savvy marketing angle aimed at early adopters.

Ben Laurie made a more incisive plea for anonymity:

But the point is this: unless you have anonymity as your default state, you don’t get to choose where on that spectrum you lie.

Eric Norlin says

Further, every “user-centric” system I know of doesn’t seek to make “identity” a default, so much as it seeks to make “choice” (including the choice of anonymity) a default.

as if identity management systems were the only way you are identified and tracked on the ‘net. But that’s the problem: the choices we make for identity management don’t control what information is gathered about us unless we are completely anonymous apart from what we choose to reveal.

Unless anonymity is the substrate, choice in identity management gets us nowhere. This is why I am not happy with any existing identity management proposal - none of them even attempt to give you anonymity as the substrate.

There is a bit of an insiders' secret here that Ben almost gets to the nub of (my emphasis). Unfortunately, nobody will believe the secret until they've lost their shirt for not believing it, and even then, it's optional. FWIW I'll reveal it here; you get to keep your shirt on and your beliefs intact, though.

Identity systems fail and fail again. From the annals of financial cryptography (Rights layer), the reason is that the design starts from a position of "identity," an assumption that is embedded as a meta-requirement from the very earliest days. I sometimes call this the "one true identity problem," meaning there isn't one true identity. That is, identity is simply too soft a concept to be called a requirement, and is thus a flawed foundation on which to build a large scale project. (And, if your concept calls for identity, assume large scale from the start.)

In order to accommodate different views of what identity is -- something necessary because it's not only soft but contentious in more ways than intimated by David above -- we need a flexible system. A very flexible system, indeed a system of such exceedingly flexible quality that it will work without any identity at all.

In fact, it turns out that the best way to build an identity system is to build an identity-free system, and then to layer your favourite identity flavour over the top. The system should aim to work well without any identity, so it can support all. To develop this further, by far the best base for such a system is public key based pseudonym systems. It is easy to put at least one identity system over the top of a standard pseudonym system, and it is plausible to put most identity concepts over a really good pseudonym system.
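As a minimal sketch of what pseudonymity-as-substrate looks like in code (using Ed25519 from the pyca/cryptography package; the claim format is invented for illustration): the pseudonym is nothing more than a keypair, the base layer works without any identity at all, and an identity layer, if wanted, is just a claim that some issuer signs over the same key.

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # The pseudonym: a keypair with no name attached.
    alice = ed25519.Ed25519PrivateKey.generate()
    nym = alice.public_key()
    nym_bytes = nym.public_bytes(serialization.Encoding.Raw,
                                 serialization.PublicFormat.Raw)

    # Base layer: prove control of the nym -- sufficient for most transactions.
    instruction = b"pay 10 units from this nym"
    nym.verify(alice.sign(instruction), instruction)   # raises if forged

    # Optional identity layer: an issuer binds an attribute to the same nym.
    issuer = ed25519.Ed25519PrivateKey.generate()
    claim = b"holder-of:" + nym_bytes + b":is-over-18"
    endorsement = issuer.sign(claim)
    issuer.public_key().verify(endorsement, claim)     # trusting the issuer is the
                                                       # relying party's choice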

Yet, it is somewhere between traumatic and impractical to do it any other way. As an example, look at what Kim Cameron writes:

You can call this anonymity, or you can call this “not needlessly blabbing everything about yourself”.

Sites should only ask for identifying information when there is some valid and defensible reason to do so. They should always ask for the minimum possible. They should keep it for the shortest possible time. They should encrypt it so it is only available to systems that must access it. They should ensure as few parties as possible have access to such systems. And if possible, they should only allow it to be decrypted on systems not connected to the internet. Finally, they should audit their conformance with these best practices.

Once you accept that release of identifying information should be proportionate to well-defined needs - and that such needs vary according to context - it follows that identity must “be a spectrum”.

In order to keep his identity architecture intact, Kim Cameron proposes a set of guidelines. Yet, these are implausible on the face of it, and only serve to surface the trap that Kim is in -- "Identity" is his meta-requirement, and as he wrote it up in a series of "Identity Laws" he's pretty much bound to that. "It's the Law," he might say!

In order to maintain his concept of identity in the face of an apparent requirement for anonymity, Kim is therefore tempted into flirting with extra-system guidelines that will either be ignored or will impose intolerable costs on the system. It's from twister-grade designs like these that we see the steady series of failures of identity systems.

The short answer to the debate is, if you want your system to work, make pseudonymity the base (and thus the default, if that's how you want to characterise it). An identity system that is built without using pseudonymity as the base or substrate simply has a much lower chance of working because you won't be able to flexibly adjust the design when you discover the shortfalls in your concept of identity.

On the positive side, what is interesting about this debate is that people are debating the relative benefits of either approach in a high-level strategic sense. They seem to have moved on from the old cypherpunk v. state debate, both of those arguments having lost their credibility over time. Perhaps this is because this time someone (Microsoft?) is building it for commercial purposes, so that focusses the debate much more clearly. It's much easier to build a system when someone pays for it, and is willing to pay for their mistakes.

I would be remiss if I didn't close with some examples of strong (a.k.a. public key) pseudonymous systems: AADS (Anne and Lynn Wheeler) and Ricardo (Howland and Grigg) both proved the concept in the hardest of fields -- hard assets. Skype for VoIP and SSH for communications show it outside finance. Soft systems include most chat systems which allow you to create pseudonyms and dispose of them after the weekend, and any website that creates a login for you.

Failed systems include x.509, which includes an absolute assumption of Identity, and thus fails to work as an identity system. PGP deliberately does not assume identity. It succeeds in providing Identity, but has trouble in other areas.

For those interested in the wider debate, David includes a useful summary.

Posted by iang at 01:16 PM | Comments (5) | TrackBack

June 25, 2006

FC++3 - Dr Mark Miller - Robust Composition: Towards a Unified Approach to Access Control and Concurrency Control

Financial cryptographer Mark Miller has finished his PhD thesis, formally entitled Robust Composition: Towards a Unified Approach to Access Control and Concurrency Control.

This is a milestone! The thrust of Dr Miller's work could be termed as "E: a language for financial cryptography applications." Another way of looking at it could be "Java without the bugs" but that is probably cutting it short.

If this stuff was in widespread usage, we would probably reduce our threat surface area by 2 orders of magnitude. You probably wouldn't need to read the other two papers at all, if.

http://www.erights.org/talks/thesis/
http://www.caplet.com/

Abstract

When separately written programs are composed so that they may cooperate, they may instead destructively interfere in unanticipated ways. These hazards limit the scale and functionality of the software systems we can successfully compose. This dissertation presents a framework for enabling those interactions between components needed for the cooperation we intend, while minimizing the hazards of destructive interference.

Great progress on the composition problem has been made within the object paradigm, chiefly in the context of sequential, single-machine programming among benign components. We show how to extend this success to support robust composition of concurrent and potentially malicious components distributed over potentially malicious machines. We present E, a distributed, persistent, secure programming language, and CapDesk, a virus-safe desktop built in E, as embodiments of the techniques we explain.

Advisor: Jonathan S. Shapiro, Ph.D.

Readers: Scott Smith, Ph.D., Yair Amir, Ph.D.

Presented here as the lead article in FC++. Mark's generous licence:

Copyright © 2006, Mark Samuel Miller. All rights reserved.
Permission is hereby granted to make and distribute verbatim copies of this document without royalty or fee. Permission is granted to quote excerpts from this document provided the original source is properly cited.
Posted by iang at 01:00 PM | Comments (2) | TrackBack

June 24, 2006

Identity 7, watchlist error rate, $300 to get off the watchlist

I love this article, it's cracker-jack full of interesting stuff about a crime family who have industrialised identity document production in the US.

The dominant forgery-and-distribution network in the United States is allegedly controlled by the Castorena family, U.S. Immigration and Customs Enforcement officials say. Its members emigrated from Mexico in the late 1980s and have used their printing skills and business acumen to capture a big piece of the booming industry.

Nice colour, there. Actually the entire article is full of colour, well worth reading. We'll just do the dry facts here:

Federal authorities said that calculating the financial scope of document forgery is virtually impossible but that illicit profits easily amount to millions of dollars, if not billions. One investigation of CFO operations in Los Angeles alone resulted in *the seizure of 3 million documents with a street value of more than $20 million.*

"We've hit them pretty hard, but have we shut down the entire operation? I don't think we can say that yet," said Scott A. Weber, chief of the agency's Identity and Benefit Fraud Unit. "We know there are many different cells out there, and they are still providing documents."

Ouch. $20 million divided by 3 million documents is about $7 apiece. Identity 7, here we come.

Illegal immigrants are often given packages of phony documents as part of a $2,000 smuggling fee. Others can easily make contact with vendors who operate on street corners or at flea markets in immigrant communities in virtually every city. ... A typical transaction includes key papers such as a Social Security card, a driver's license and a "green card" granting immigrants permanent U.S. residency. Fees range from $75 to $300, depending on quality.

Identity is a throw-in for a $2000 package tour sold out of Mexico. Say no more. Obviously, these numbers are all screwed up as there is a big difference between $75 and $7. But, consider. Even at $300, it would be more cost-effective for the average American business traveller to travel on false documentation than to do the following:

Currently, individuals who want to clear their names have to submit several notarized copies of their identification. Then, if they're lucky, TSA might check their information against details in the classified database, add them to a cleared list and provide them with a letter attesting to their status.

More than 28,000 individuals had filed the paperwork by October 2005, the latest figures available, according to TSA spokeswoman Amy Kudwa. She says the system works. "We work rigorously to resolve delays caused by misidentifications," Kudwa says.
...
The TSA's lists are only a subset of the larger, unified terrorist watch list, which consists of 250,000 people associated with terrorists, and an additional database of 150,000 less-detailed records, according to a recent media briefing by Terrorist Screening Center director Donna Bucella. The unified list is used by border officials, embassies issuing visas and state and local law enforcement agents during traffic stops.

This programme is of interest because its identity keystone drives other programmes. We are looking at a 7% error rate as a minimum (28,000 clearance filings against the roughly 400,000 names on the combined lists), which should come as no surprise - of course, there are unlikely to be more than 100 people on the list that really qualify as "terrorists who are likely to do some damage on a plane" so if the error rate is anything less than 99% then we should probably be stopping the planes right now. About the best we can conclude is that the strategy of stopping terrorists by identifying them doesn't seem worth emulating in financial cryptography.

And Darren points out the statistical unwisdom of relying on such programmes:

Suppose that NSA's system is really, really, really good, really, really good, with an accuracy rate of .90, and a misidentification rate of .00001, which means that only 3,000 innocent people are misidentified as terrorists. With these suppositions, then the probability that people are terrorists given that NSA's system of surveillance identifies them as terrorists is only p=0.2308, which is far from one and well below flipping a coin. NSA's domestic monitoring of everyone's email and phone calls is useless for finding terrorists.
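Darren's arithmetic reconstructs in a few lines. The population of 300 million and the figure of roughly 1,000 actual terrorists are assumptions needed to reproduce his numbers, not figures from his text:

    population = 300000000          # assumed: implied by 3,000 misidentifications
    terrorists = 1000               # assumed: needed to reproduce p = 0.2308
    sensitivity = 0.90              # "an accuracy rate of .90"
    false_positive_rate = 0.00001   # "a misidentification rate of .00001"

    true_positives = sensitivity * terrorists
    false_positives = false_positive_rate * (population - terrorists)

    p = true_positives / (true_positives + false_positives)
    print(round(p, 4))              # 0.2308 -- worse than flipping a coin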

Sure. But the NSA are not using the databases to find terrorists. Instead, when other leads come in, they look to see what they have in their databases -- to add to the lead they already have. Simple. With this strategy, clearly, the more data, the more databases, the better this works.

But, again, it doesn't seem a strategy that we'd emulate in FC.

Posted by iang at 12:29 PM | Comments (2) | TrackBack

May 25, 2006

Is VeriSign's buyout of GeoTrust anti-competitive?

In the rather slack and happy marketplace for certificates, I mentioned that Verisign had decided to buy out GeoTrust for wads of cash. Nice news for the owners.

Dan Gillmor opposes this, saying

This deal would be great for VeriSign, but terrible for the marketplace. It would consolidate one company's control over an essential part of the Internet infrastructure.

I realize the Bush administration doesn't enforce antitrust laws very often anymore. But this buyout should be blocked. It's anticompetitive, period.

Should it? Are VeriSign's actions anti-competitive? Or is the entire market anti-competitive? In the market for CAs and certificates, it seems to me the greater problem is that the market itself is structurally unsound. IMO, it is best modelled as a franchise, and not considered in any value or product terms.

So much so that it makes no difference whether it is one company ruling or many. Changing the market by blocking companies while leaving the structure unchanged is just fiddling while Rome burns. It's a textbook example of a Porterian nightmare; it deserves its final resting place in a Kennedy case study. What do we get by pretending that congressmen know something about institutional economics when in practice they simply vote with their wallets in a favour market?

Better to keep them out of it, is my call. Let the marketplace bypass certs as it is doing.

Posted by iang at 06:33 AM | Comments (0) | TrackBack

May 17, 2006

CA market consolidates - Verisign to buy Geotrust

The certificate market is moving back towards one large player - Verisign is buying Geotrust for $125 million (PDF only so far.)

VeriSign Inc, the leading provider of intelligent infrastructure services for the Internet and telecommunications networks, today announced it has entered into a definitive agreement to purchase Needham, MA-based GeoTrust, Inc., a leading supplier of SSL and other solutions to secure e-business transactions, for approximately $125 million in cash. The acquisition is subject to regulatory approvals and other conditions and is expected to close in the second half of this year.

The acquisition of GeoTrust extends VeriSign's mission to enable and protect all forms of networked interactions, and addresses the needs of an evolving SSL market. GeoTrust's well developed channel of more than 9,000 direct resellers in more than 140 countries will complement VeriSign's direct-sales SSL business, currently serving more than 3,000 enterprises worldwide.

I had predicted that Verisign would get out of the market for certs - I guess this says I'm wrong in the prediction, but the consolidation bears out the fundamentals. What we are looking at is a market that doesn't sell enough to make sense, so it is either "get out" or "monopolise it."

Over on SecuritySpace's CA survey, this takes the combined group of GeoTrust, Thawte and Verisign to a total of 110k servers, or 164k on their new market share report. Maybe more if they own some more CAs we don't know about... (Also, Verisign comments in their annual report something like 462k sales per annum.)

If you take those numbers and multiply them out, you have a problem. If it is a vibrant, dog eat dog market in the $20-$40 area, we are looking at numbers under 10 million total revenues. That won't do. The only way to get the market up into the hundreds of dollars area that Verisign would like is to eliminate the low priced competition.
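Multiplying it out - a rough sketch using the survey count above and the dog-eat-dog price band, since the real sales figures are anyone's guess:

    certs_in_survey = 164000         # combined GeoTrust/Thawte/VeriSign share
    price_low, price_high = 20, 40   # dollars per cert in the competitive band

    print(certs_in_survey * price_low)    # 3,280,000
    print(certs_in_survey * price_high)   # 6,560,000 -- well under $10m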

GeoTrust made a market in the last couple of years by aggressively expanding. Scuttlebutt had it that they were the driving force behind much stuff like the High Assurance programme, and they had already taken the number one slot in raw numbers.

So maybe the structure of the market is shifting to one of "startup, get into browsers, issue like mad, cash out in the sale to Verisign." Nothing wrong with that, for the owners, but as always it has little to do with the risks faced by the average user.

Posted by iang at 02:51 PM | Comments (2) | TrackBack

March 15, 2006

Just another day in the office of Identity Control

Over at CeBIT I spent some time at the CAcert booth checking out what they were up to. Lots of identity checking, it seems. This process involves staring hard at government Id cards and faces, which gets a bit tricky when the photo is a decade or two out of date. What do you say to the heavy-set matron about the cute skinny teenager on her identity card?

One artist chap turned up and wanted to sign up as an artist. This turns out to be a 'right' in Germany. Lo and behold, on his dox there is a spot for his artist's identity name. Much debate ensued - is CAcert supposed to be upstream or downstream of the pseudonymous process?

In this case, the process was apparently resolved by accepting the artist's name, as long as the supporting documentation provided private clarification. Supporting nymous identities is, I think, a good idea - scholars of democracy have long pointed out the importance of the right to speak without fear and to distribute pamphlets anonymously.

CAcert is probably downstream, and over in Taiwan (which might be representative of China or not) we discover more government-supported nymous identities: passport holders can pick their own first names for their passport. Why? The formal process of transliterating Han (kanji) characters into Latin for passports - (bopomofo?) pinyin! - so mangles the real sense that letting a person pick a new name in Latin letters restores some humanity to the process.

This development is not without perverse results. It places CAcert in the position of supporting nyms only if they are government-approved yet frowning on nyms that are not. Hardly the result that was intended - should CAcert apply to sponsor the Big Brother awards - for protecting privacy - or to receive one - for supporting government shills?

Most people in most countries think Identity is simple, and this was evident in spades at the booth. For companies, one suggestion is to take the very strong German company scheme and make it world standard. This won't work, simply because it is built on certain artifacts of the German corporate system. So there is a bit of a challenge to build a system that makes companies look the same the world over.

Not that individuals are any easier. Some of the Assurers - those are the ones that do the identity verification - are heading for the Philippines to start up a network. There, the people don't do government-issued identity, not like we do in the West. In fact, they don't have much Id at all it seems, and to get a passport takes 4 visits to 4 different offices and 4 different pieces of paper (the last visit is to the President's Office in Manila...).

The easy answer then is to just do that - but that's prohibitively expensive. One of the early steps is visiting a notary, which possibly yields a reliable document and a proxy for a government ID, but even that costs some substantial fraction of the average monthly wage (which is only about 40 Euros to start with).

A challenge! If you have any ideas about how to identify the population of a country that isn't currently on the identity track, let us know. Pseudonymously, of course.

Posted by iang at 04:07 PM | Comments (1) | TrackBack

February 27, 2006

Identity on the move III - some ramblings on "we'll get it right this time, honest injun!"

Kim Cameron - who writes for Microsoft over on his Identity blog - published an interview where Bill Gates says:

This is very simple. There are statements like, “I, the employer of this person, have given them a secret” – either a password or even better a big number, a key. So I, Intel, say if they present this secret back to me, I, Intel vouch that they are an employee. Then we at Microsoft collaborate with Intel, and we decide do we accept statements of that type to decide who can get into various collaborative websites for joint projects.

The statement! That is something that has been lacking from just about all of the popular designs, and is at the root of the harm of identity theft on the net. If Microsoft are heading in this direction, this is an encouraging development. However, when we take it a bit further:

That’s called federation, where we take their trust statement and we accept it, within a certain scope. So they don’t have to get another user account password. There’s no central node in this thing at all, there never can be. Banks are a key part of it, governments can be part of it. The US, probably not as much.

That's scary. If the point of the system is to allow corporates to exchange statements about you, do we really believe that, just because they say the statements are limited, users' privacy isn't being shredded? James also questions Federated Identity, the sum of which seems to be too many people with too many acronyms and too much reliance on adoption and users' blind religious trust.

In contrast there is this tantalising snippet in another interview that suggests that the system might be sort of maybe usable for nyms:

Cameron: I think people will be offering InfoCard-enabled services by the time Vista ships. I’m at a disadvantage because I can’t tell you who we are working with. What I can say is there are thought leaders around this in each industry. Those are the guys who we will be working with and who will have these applications that are InfoCard ready.

You can get not just identity but sort of very interesting semi-anonymous things that are very privacy-friendly. One of the things we have been doing with this project is to work with the privacy advocates and have them as colleagues in the design of the thing. This is not one of those things where a bunch of nerds get in to a garage and come up with something that is going to gross out the privacy advocates.

Who are these shy thought leaders, and what do they mean by semi-anonymous?

If you read the (first) entire interview with Bill Gates, you, like I, might get the impression that Bill Gates remains a wolf in sheep's clothing. Kim Cameron says "A number of people have confided that they worry the commitment to privacy and openness I make in my work can’t “possibly” reflect the ideas of the “official Microsoft juggernaut”' but is he trying on the same suit? Some of these comments read pretty thin, when we factor in Microsoft's history (which, again, shouldn't be taken to mean that any other company is any more concerned about privacy). Even their recent history isn't encouraging:

BG: No, no, it’s not even worth going back to that. We partly didn’t know what it was, and certainly what the press said it was wasn’t what we thought it was, but even what we thought it was we didn’t end up doing all of that. That’s old history.

Only the blindly religious would see Bill Gates' dismissal of past errors as anything but a warning sign. So, now we are here in not-old-history. What is it that is being said that gives us confidence that old history isn't just around the corner, yet again? Not only does he decline to simply say "Passport was wrong," he's inviting everyone to trust him, this time. In Passport V3, we'll get it right, honest injun! Being blind and religious might help, but even that has limits.

The curious thing about this is that regardless of how Microsoft is going to get parts of this wrong, we now have a re-emerging competition in security. These ideas will be put into play in the Microsoft suite of software, and the few that work will be copied. Yes, some of them are going to work. The ones that won't work will end up in the dust heap (but not before being re-named mid-programme).

Is that the best we can do? To paraphrase Churchill, competition is a terrible way to do security, but it's better than all the other ways. So maybe we no longer care what Microsoft says, only what they succeed at.

Posted by iang at 02:09 PM | Comments (1) | TrackBack

Identity on the move II - Microsoft's "Identity Metasystem" TM, R, Passport-redux

A commercial presentation on Microsoft's Infocard system is doing the rounds. (Kim Cameron's blog.) Here's some highlights and critiques. It is dressed up somewhat as an academic paper, and includes more of a roadmap and analysis view, so it is worth a look.

The presentation identifies The Mission as "a Ubiquitous Digital Identity Solution for the Internet."

By definition, for a digital identity solution to be successful, it needs to be understood in all the contexts where you might want to use it to identify yourself. Identity systems are about identifying yourself (and your things) in environments that are not yours. For this to be possible, both your systems and the systems that are not yours – those where you need to digitally identify yourself – must be able to speak the same digital identity protocols, even if they are running different software on different platforms.

In the case of an identity solution for the entire Internet, this is a tall order...

Well, at least we can see a very strong thrust here, and as a mission-oriented person, I appreciate getting that out there in front. Agreeing with the mission is however an issue to discuss.

Many of the problems facing the Internet today stem from the lack of a widely deployed, easily understood, secure identity solution.

No, I don't think so. Many of the problems facing the Internet today stem from the desire to see systems from an identity perspective. This fails in part because there is no identity solution (and won't be), in part because an identity solution is inefficient, and in part because the people deploying these systems aren't capable of thinking of the problem without leaning on the crutch of identity. See Stefan Brands' perspective for thinking outside the tiny cramped box of identity.

A comparison between the brick-and-mortar world and the online world is illustrative: In the brick-and-mortar world you can tell when you are at a branch of your bank. It would be very difficult to set up a fake bank branch and convince people to do transactions there. But in today’s online world it’s trivial to set up a fake banking site (or e-commerce site …) and convince a significant portion of the population that it’s the real thing. This is an identity problem. Web sites currently don’t have reliable ways of identifying themselves to people, enabling imposters to flourish. One goal of InfoCard is reliable site-to-user authentication, which aims to make it as difficult to produce counterfeit services on the online world as it is to produce them in the physical world.

(My emphasis.) Which illustrates their point nicely - as well as mine. There is nothing inherent in access to a banking site that necessitates using identity, but it will always be an identity-based paradigm simply because that's how that world thinks. In bricks-and-mortar contrast, we all often do stuff at branches that does not involve identity. In digital contrast, a digital cash system delivers strength without identity, and people have successfully mounted those over web sites as well.

That aside, what is this InfoCard? Well, that's not spelt out in so many words as yet:

In the client user interface, each of the user’s digital identities used within the metasystem is represented by a visual “Information Card” (a.k.a. “InfoCard”, the source of this technology’s codename). The user selects identities represented by InfoCards to authenticate to participating services. The cards themselves represent references to identity providers that are contacted to produce the needed claim data for an identity when requested, rather than claims data stored on the local machine. Only the claim values actually requested by the relying party are released, rather than all claims that the identity possesses (see Law 2).

References to providers begin to sound like keys managed in a wallet, and this is suggested later on. But before we get to that, the presentation looks at the reverse scenario: the server provides the certificate:

To prevent users from being fooled by counterfeit sites, there must be a reliable mechanism enabling them to distinguish between genuine sites and imposters. Our solution utilizes a new class of higher-value X.509 site certificates being developed jointly with VeriSign and other leading certificate authorities. These higher-value certificates differ from existing SSL certificates in several respects.

Aha. Pay attention, here comes the useful part...

First, these certificates contain a digitally-signed bitmap of the company logo. This bitmap is displayed when the user is asked whether or not they want to enter into a relationship with the site, the first time that the site requests an InfoCard from the user.

Second, these certificates represent higher legal and fiduciary guarantees than standard certificates. In many cases, all that having a standard site certificate guarantees is that someone was once able to respond to e-mail sent to that site. In contrast, a higher-value certificate is the certificate authority saying, in effect, “We stake our reputation on the fact that this is a reputable merchant and they are who they claim to be”.

Users can visit sites displaying these certificates with confidence and will be clearly warned when a site does not present a certificate of this caliber. Only after a site successfully authenticates itself to a user is the user asked to authenticate himself or herself to the site.

Bingo. This is just the High Authentication proposal written about elsewhere. What's salient here is that second paragraph, my emphasis added. So, do they close the loop? Elsewhere there has been much criticism of the proposals made by Amir and myself, but it is now totally clear that Microsoft have adopted this.

The important parts of the branding proposal are there:

  • The site is identified
  • the statement is made by the verifier of the site:
  • the verifier is named, and
  • the verifier's logo is present.

The loop is closed. Now, finally, we have a statement with cojones.

There remain some snafus to sort out. It is not actually the browser that does this; it is the InfoCard system, which may or may not be available and may or may not survive as this year's Microsoft Press Release. Further, it only extends to the so-called High Assurance certs:

To help the user make good decisions, what’s shown on the screen varies depending on what kind of certificate is provided by the identity provider or relying party. If a higher-assurance certificate is provided, the screen can indicate that the organization’s name, location, website, and logo have been verified, as shown in Figure 1. This indicates to a user that this organization deserves more trust. If only an SSL certificate is provided, the screen would indicate that a lower level of trust is warranted. And if an even weaker certificate or no certificate at all is provided, the screen would indicate that there’s no evidence whatsoever that this site actually is who it claims to be. The goal is to help users make good decisions about which identity providers they’ll let provide them with digital identities and which relying parties are allowed to receive those digital identities.

The authors don't say it but they intend to reward merchants who pay more money for the "high-assurance". That's in essence a commercial response to the high cost of the DD that Geotrust/RSA/Identrus are trying to float. This also means that they won't show the CA as the maker of a "lower assurance" statement, which means the vast bulk of the merchants and users out there will still be phishable, and Microsoft will be liable instead of the statement provider. But that's life in the risk shifting business.

(As an explanatory note, much of the discussion recently has focussed on the merchant's logo. That's less relevant to the question of risk. What is more relevant is VeriSign's name and logo. They are the one that made the statement, and took money for it. Verisign's brand is something that the user can recognise and then realise the solidity of that statement: Microsoft says that Verisign says that the merchant is who they are. That's solid, because Microsoft can derive the Verisign logo and name from the certificate path in a cryptographically strong fashion. And they could do the same with any CA that they add into their root list.)
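For the curious, pulling the CA's name out of a certificate is routine. Here is a sketch with the Python cryptography package (the file name is invented, and this shows only the naming step - validating the chain and the signed logo extension is the harder part Microsoft is describing):

    from cryptography import x509
    from cryptography.x509.oid import NameOID

    with open("site_cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # Who made the statement (the CA), and who it was made about (the site).
    issuer = cert.issuer.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)
    subject = cert.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)
    print("statement by:", issuer[0].value if issuer else "(no CA organisation)")
    print("statement about:", subject[0].value if subject else "(no site organisation)")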

Finally, the authors have not credited prior work. Why they have omitted this is obscure to me - this would be normal with a commercial presentation, but in this case the paper looks, reads and smells like an academic paper. That's disappointing, and further convinces people to simply not trust Microsoft to implement this as written; if Microsoft does not follow centuries-old academic customs and conventions then why would we trust them in any other sense?

That was the server side. Now we come to the user-centric part of the InfoCard system:

2.7. Authenticating Users to Sites
InfoCards have several key advantages over username/password credentials:
  • Because no password is typed or sent, by definition, your password can not be stolen or forgotten.
  • Because authentication is based on unique keys generated for every InfoCard/site pair (unless using a card explicitly designed to enable cross-site collaboration), the keys known by one site are useless for authentication at another, even for the same InfoCard.
  • Because InfoCards will resupply claim values (for example, name, address, and e-mail address) to relying parties that the user had previously furnished them to, relying parties do not need to store this data between sessions. Retaining less data means that sites have fewer vulnerabilities. (See Law 2.)

What does that mean? Although it wasn't mentioned there, it turns out that there are two possibilities: Client side key generation and relationship tracking, as well as "provider generated InfoCards" written up elsewhere:

Under the company's plan, computer users would create some cards for themselves, entering information for logging into Web sites. Other cards would be distributed by identity providers -- such as banks or governmental agencies or online services -- for secure online authentication of a person's identity.

To log in to a site, computer users would open the InfoCard program directly, or using Microsoft's Internet Explorer browser, and then click on the card that matches the level of information required by the site. The InfoCard program would then retrieve the necessary credentials from the identity provider, in the form of a secure digital token. The InfoCard program would then transmit the digital token to the site to authenticate the person's identity.

Obviously the remote provision of InfoCards will depend on buy-in, which is a difficult pill to swallow as that means trusting Microsoft in oh so many ways - something they haven't really got to grips with. But then there are also client-generated tokens. Are they useful?

If they have client-side key generation and relationship caching, then these are two of the missing links in building a sustainable secure system. See my emphasis further above for a hint on relationship tracking and see Kim Cameron's blog for this comment: "Cameron: A self-issued one you create yourself." Nyms (as per SSH and SOX) and relationship tracking (again SSH, and these days TrustBar, Petname and other recent suggestions) are strong. These ideas have been around for a decade or more; as a school, we call it opportunistic cryptography.
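A rough sketch of those two missing links, with invented names - this is the nymous-key pattern as a school, not Microsoft's actual InfoCard code: one keypair per site so that keys known to one site are useless at another, plus SSH-style key continuity so that a site whose key changes gets noticed.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    master_secret = os.urandom(32)   # held only by the user's agent
    known_sites = {}                 # site name -> site key seen on first contact

    def key_for_site(site):
        # Derive a distinct signing key per site; nothing is shared across
        # sites, so there is nothing to correlate.
        seed = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=site.encode()).derive(master_secret)
        return ed25519.Ed25519PrivateKey.from_private_bytes(seed)

    def site_key_unchanged(site, presented_key_bytes):
        # Relationship tracking: trust on first use, then compare forever after.
        if site not in known_sites:
            known_sites[site] = presented_key_bytes
            return True
        return known_sites[site] == presented_key_bytes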

Alternatively, notice how the credentials term is slipped in there. That's not how Stefan Brands envisages it (from Identity on the move I - Stefan Brands on user-centric identity management), but they are using his term. What that means is unclear (and see Identity on the move III - some ramblings on "we'll get it right this time, honest injun!" for more).

Finally, one last snippet:

3.6. Claims != “Trust”
A design decision was to factor out trust decisions and not bundle them into the identity metasystem protocols and payloads. Unlike the X.509 PKIX [IETF 05], for example, the metasystem design verifies the cryptography but leaves trust analysis for a higher layer that runs on top of the identity metasystem.

Hallelujah! Trust is something users do. Crypto systems do claims about relationships.

Posted by iang at 01:45 PM | Comments (0) | TrackBack

Identity on the move I - Stefan Brands on user-centric identity management

Stefan Brands has moved over to the podcasting world, with an interview on user-centric identity management. Here are my notes from listening (my comments injected in parentheses, my errors everywhere):

Stefan relates cash payments as ways to transfer information, Hayekian style, for user-centric transactions. This time, instead of doing an application like digital cash, he is concentrating on an engine to wrap the data for user-controlled privacy.

Consider the large-scale distributed system of medical data management. Specialists across domain boundaries have only their own information on you. How do we allow the doctor to get access to the information in other domains? (Here's a canonical example where we assume a priori that such access is a good thing.)

The classical approach would be to put it all in a central database and shove some access control on it. This we don't do, because the access control doesn't work well enough - hackers and insiders get widespread access. This was the Passport approach.

The second approach is the federated identity management approach of Liberty Alliance. Here, we hook up all the silos of data together through a centralised party - could be a hospital. The doctor contacts the centralised party and asks for access to the info on the patient. What it gets is the access keys, and can then get the data it needs from the various sources.

The ability to move the data has to be facilitated by a central party - which means the party now knows what you are up to. (Insurance of course wants to know who is visiting which doctor...) It's similar to the credit card model with users, merchants and VISA in the middle. The centre might not know what you are "purchasing" but it does see the amount and the merchant. Does this give the user payment privacy? No, not like cash.

My Doctor typically knows me, and a merchant might as well - but should there be a central party that also knows me? The central party would also know all the clients of a doctor - breaching the patient-doctor autonomy, something that doctors in Quebec rejected when the province tried to roll out a PKI.

There are economic, privacy and security rights, and they are all interlinked. You think you are disclosing just the pertinent information for the local decision, but this can easily jump to a wider scope. When competitive intelligence comes into play, lots of parties want to know all of these details. Governments are included on both sides, where for example different provinces or states are paranoid about handing their data over to centralised federal parties, as this will result in loss of autonomy in dealing with their own citizens.

A centralised approach also cannot cope with dynamic queries, it has no flexibility.

The engine of Credentica is the third approach. Model 3 is a credential system in its purest form, and is closer to how people and society functioned before the computer age. When the relying party wants richer information, the user is asked for the information. The relying party - the doctor - just wants to get the data, and it wants it from a source that it trusts. What it does not care about is how it gets there.

We can put together a device - already available - that holds credentials. Instead of saying, here is my identity, and expecting a centralised party or silo to deal with it, the user provides the credentials and gets the data herself. The user carries a token - in effect a key of some form - that allows the patient and doctor together to get the data.

The default scenario is a user with a smart card or PDA holding her credentials. It can hold the data itself - as a copy of the data held at a service provider. This data can also be accessed directly from the service provider by the user.

A relying party such as the principal doctor relies directly on the user. But now the question is: can the user be trusted with the integrity of the information, such as with prescriptions? In this case, the relying party has to go to the source of that prescription, and the user now becomes the person in the middle.

Simple digital signatures aren't good enough, as this information is private and substitutable. How do we stop the user substituting in a prescription? The same applies to credit and employment questions. How does the asker know the information really applies to Alice the user?

Which brings us to the crypto engine Credentica has developed. It allows the source to take data, make assertions, and package that data for the user's token. Some sort of smart card or USB token is needed, but that is an implementation detail.

Depending on the threat model, the protection is variable (???). Account information for example could be wrapped by the security engine. The relying party could contact the source to verify the information.

Now, what we want to do is give the user the same capability to answer the questions that the source knows authoritatively. Yet the relying party does not trust the user as much as the source. On this "cryptographically certified data" the engine can answer questions like "are you over 18?" We could do this by revealing the birthdate, as certified by some provider, but that reveals more information than needed.

The engine allows that question to be asked and answered - are you over 18? The statement returned is self-certifying, it can reveal its own identity.
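
As an interface sketch only - the shape of the question and answer, not Stefan's cryptography - the point is that the verifier asks a predicate and never sees the birthdate. The real engine returns a self-certifying proof bound to the issuer; this stand-in just returns the bare answer.

from datetime import date

class CredentialHolder:
    def __init__(self, birthdate, issuer):
        self._birthdate = birthdate          # stays on the user's token
        self.issuer = issuer                 # who certified the attribute

    def prove_over(self, years, today):
        # a real system would return a cryptographic proof, not a boolean
        age = today.year - self._birthdate.year - (
            (today.month, today.day) < (self._birthdate.month, self._birthdate.day))
        return age >= years

alice = CredentialHolder(date(1985, 5, 1), issuer="provider-X")
print(alice.prove_over(18, date(2006, 2, 14)))   # True, and the birthdate is never disclosed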

Correlation is still an issue over handles. Imagine a question of male versus female. Normally, Alice reveals she is female, but she is also generally revealing data that is matchable to other events. If every piece of data is doled out with these identifiers, this has terrible privacy implications. (At this point the interview was running out of time, so we did not hear how the engine deals with correlation and identifier matching.)

What are the business opportunities? Health data is a big area, and European and Canadian agencies are pushing more and more towards rejecting the panopticon approach. Credentica would likely partner with others in such a complicated supply chain; the engine is literally a component in the vehicle.

Interview. Stefan's blog.

Posted by iang at 12:56 PM | Comments (1) | TrackBack

February 14, 2006

SSL phishing, Microsoft moves to brand, and nyms

fm points to Brian Krebs who documents an SSL-protected phishing attack. The cert was issued by Geotrust:

Now here's where it gets really interesting. The phishing site, which is still up at the time of this writing, is protected by a Secure Sockets Layer (SSL) encryption certificate issued by a division of the credit reporting bureau Equifax that is now part of a company called Geotrust. SSL is a technology designed to ensure that sensitive information transmitted online cannot be read by a third-party who may have access to the data stream while it is being transmitted. All legitimate banking sites use them, but it's pretty rare to see them on fraudulent sites.

(skipping details of certificate manufacturing...)

Once a user is on the site, he can view more information about the site's security and authenticity by clicking on the padlock located in the browser's address field. Doing so, I was able to see that the certificate was issued by Equifax Secure Global eBusiness CA-1. The certificate also contains a link to a page displaying a "ChoicePoint Unique Identifier" for more information on the issuee, which confirms that this certificate was issued to a company called Mountain America that is based in Salt Lake City (where the real Mountain America credit union is based.)

The site itself was closed down pretty quickly. For added spice beyond the normal, it also had a ChoicePoint Unique Identifier in it! Over on SANS - something called the Internet Storm Center - a handler investigates why malware became a problem and chooses phishing. He has the ChoicePoint story nailed:

I asked about the ChoicePoint information and whether it was used as verification and was surprised to learn that ChoicePoint wasn't a "source" of data for the transaction, but rather was a "recipient" of data from Equifax/GeoTrust. According to Equifax/GeoTrust, "as part of the provisioning process with QuickSSL, your business will be registered with ChoicePoint, the nation's leading provider of identification and credential verification services."

LOL... So now we know that the idea is to get everyone to believe in trusting trust and then sell them oodles of it. Quietly forgetting that the service was supposed to be about a little something called verification - the sort of forgetting that happens when there is no reason to defend the brand to the public.

Who would'a thunk it? In other news, I attended an informal briefing on Microsoft's internal security agenda recently. The encouraging news is that they are moving to put logos on the chrome of the browser, negotiate with CAs to get the logos into the certificates, and move the user into the cycle of security. Basically, Trustbar, into IE. Making the brand work. Solving the MITM in browsers.

There are lots of indicators that Microsoft is thinking about where to go. Their marketing department is moving to deflect attention with 10 Immutable Laws of Security:

Law #1: If a bad guy can persuade you to run his program on your computer, it's not your computer anymore
Law #2: If a bad guy can alter the operating system on your computer, it's not your computer anymore
Law #3: If a bad guy has unrestricted physical access to your computer, it's not your computer anymore
Law #4: If you allow a bad guy to upload programs to your website, it's not your website any more
Law #5: Weak passwords trump strong security
Law #6: A computer is only as secure as the administrator is trustworthy
Law #7: Encrypted data is only as secure as the decryption key
Law #8: An out of date virus scanner is only marginally better than no virus scanner at all
Law #9: Absolute anonymity isn't practical, in real life or on the Web
Law #10: Technology is not a panacea

Immutable! I like that confidence, and so do the attackers. #9 is worth reading - as Microsoft are thinking very hard about identity these days. Now, on the surface, they may be thinking that if they can crack this nut about identity then they'll have a wonderful market ahead. But under the covers they are moving towards that which #9 conveniently leaves out - the key is the identity is the key, and it's called pseudonymity, not anonymity. Rumour has it that Microsoft's Windows operating system is moving over to a pseudonymous base but there is little written about it.
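
To be clear about the word: a nym in this sense is nothing more than a key. Here is a hedged sketch, using the pyca/cryptography package rather than anything Microsoft has shipped, of what a pseudonym amounts to - a locally generated signing key, named by the fingerprint of its public half.

import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

key = ed25519.Ed25519PrivateKey.generate()        # made on the client, registered nowhere
pub = key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
nym = hashlib.sha256(pub).hexdigest()[:16]        # "who I am" to this relying party

statement = b"same visitor as last week"
signature = key.sign(statement)                   # proves continuity, not legal identity
ed25519.Ed25519PublicKey.from_public_bytes(pub).verify(signature, statement)

The relying party learns that the same key came back, and nothing else unless the user chooses to say more - which is the pseudonymity that #9 conveniently leaves out.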

There was lots of other good news, too, but it was an informal briefing, so I informally didn't recall all of it. Personally, to me, this means my battle against phishing is drawing to a close - others far better financed and more powerful are carrying on the charge. Which is good because there is no shortage of battles in the future.

To close, deliciously, from Brian (who now looks like he's been slashdotted):

I put a call in to the Geotrust folks. Ironically, a customer service representative said most of the company's managers are presently attending a security conference in Northern California put on by RSA Security, the company that pretty much wrote the book on SSL security and whose encryption algorithms power the whole process. When I hear back from Geotrust, I'll update this post.

That's the company that also recently ditched SSL as a browsing security method. At least they've still got the conference business.

Posted by iang at 06:21 AM | Comments (1) | TrackBack

October 13, 2005

Ben Laurie on Identity

Ben Laurie enters the world of blogs in typical style ("Anyone who knows me knows I hate blogs") and also shows that the feeling's mutual ( ... unprintable!).

More apropos, there are some interesting posts on how to turn the MD5 collision attack into a useful attack involving primes. John Kelsey suggested one and several posts pursue it. Start here.

Even more useful are Ben's Laws of Identity, and a paper to describe them better. Systems must be:

  • Verifiable. There’s often no point in making a statement unless the relying party has some way of checking it is true. Note that this isn’t always a requirement - I don’t have to prove my address is mine to Amazon, because it’s up to me where my goods get delivered. But I may have to prove I’m over 18 to get the porn delivered.
  • Minimal. This is the privacy preserving bit - I want to tell the relying party the very least he needs to know. I shouldn’t have to reveal my date of birth, just prove I’m over 18 somehow.
  • Unlinkable. If the relying party or parties, or other actors in the system, can collude to link together my various assertions, then I’ve blown the minimality requirement out of the water.

Which is looking good, and it is nice to see some critical attention to Kim Cameron's Laws on Identify(ing Microsoft's Future Customers). (See also Stefan Brands' blog for more on Identity.)

Mind you, Ben claims that X.509 is not suitable because "standard X.509 statements are verifiable, but not minimal nor unlinkable." I'm troubled by that word "verifiable." Either an X.509 cert points to somewhere else, and therefore is not in itself verifiable, just a reliable pointer to somewhere else, or the somewhere else is included, in which case we are no longer talking about X.509.

Still, this is one of those debates where words twist their meaning faster than the average security guy can think, so let's save that for the bar.

Welcome!

Posted by iang at 03:03 PM | Comments (2) | TrackBack

October 12, 2005

The Mojo Nation Story - Part 2

[Jim McCoy himself writes in response to MN1] Hmmm..... I guess that I would agree with most of what Steve said, and would add a few more datapoints.

Contributing to the failure was a long-term vision that was too complex to be implemented in a stepwise fashion. It was a "we need these eight things to work" architecture when we were probably only capable of accomplishing three or four at any one time. Part of this was related to the fact that what became Mojo Nation was originally only supposed to be the distributed data storage layer of an anonymous email infrastructure (penet-style anonymous mailboxes using PIR combined with a form of secure distributed computation; your local POP proxy would create a retrieval ticket that would bounce around the network and collect your messages using multiple PIR calculations over the distributed storage network....yes, you can roll your eyes now at how much we underestimated the development complexity...)

As Bram has shown, stripping MN down to its core and eliminating the functionality that was required for persistent data storage turned out to create a pretty slick data distribution tool. I personally placed too much emphasis on the data persistence side of the story, and the continuing complexity of maintaining this aspect was probably our Achilles heel. If we had not focused on persistence as a design goal and had instead let it develop as an emergent side-effect, things might have worked; instead it became an expensive distraction.

In hindsight, it seems that a lot of our design and architecture goals were sound, since most of the remaining p2p apps are working on adding MN-like features to their systems (e.g. combine Tor with distributed-tracker-enabled BitTorrent and you are 85% of the way towards re-creating MN...) but the importance of keeping the short-term goal list small and attainable while maintaining a compelling application at each milestone was a lesson that I did not learn until it was too late.

I think that I disagree with Steve in terms of the UI issues though. Given the available choices at the time, we could have either created an application for a single platform or used a web-based interface. The only cross-platform UI toolkit available to us at the time (Tk) was kinda ugly and we didn't have the resources to put a real UI team together. If we were doing this again today our options would include wxWidgets for native UI elements or AJAX for a dynamic web interface, but at the time a simple web browser interface seemed like a good choice. Of course, if we had re-focused on file-sharing instead of distributed persistent data storage we could have bailed on Linux & Mac versions and just created a native win32 UI...

The other point worth mentioning is that like most crypto wonks, we were far too concerned with security and anonymity. We cared about these features so we assumed our users would as well; while early adopters might care, the vast majority of the potential user base doesn't really care as much as we might think. These features added complexity, development time, and a new source of bugs to deal with.

Jim

Back to Part 1 by Steve.

Posted by iang at 01:19 PM | Comments (8) | TrackBack

October 08, 2005

On Digital Cash-like Payment Systems

Just presented (slides) at ICETE2005 by Daniel Nagy: On Digital Cash-like Payment Systems:

Abstract. In present paper a novel approach to on-line payment is presented that tackles some issues of digital cash that have, in the author's opinion, contributed to the fact that despite the availability of the technology for more than a decade, it has not achieved even a fraction of the anticipated popularity. The basic assumptions and requirements for such a system are revisited, clear (economic) objectives are formulated and cryptographic techniques to achieve them are proposed.
Introduction. Chaum et al. begin their seminal paper (D. Chaum, 1988) with the observation that the use of credit cards is an act of faith on the part of all concerned, exposing all parties to fraud. Indeed, almost two decades later, the credit card business is still plagued by all these problems and credit card fraud has become a major obstacle to the normal development of electronic commerce, but digital cash-like payment systems similar to those proposed (and implemented) by D. Chaum have never become viable competitors, let alone replacements for credit cards or paper-based cash.

One of the reasons, in the author's opinion, is that payment systems based on similar schemes lack some key characteristics of paper-based cash, rendering them economically infeasible. Let us quickly enumerate the most important properties of cash:

1. "Money doesn't smell." Cash payments are -- potentially -- anonymous and untraceable by third parties (including the issuer).

2. Cash payments are final. After the fact, the paying party has no means to reverse the payment. We call this property of cash transactions irreversibility.

3. Cash payments are _peer-to-peer_. There is no distinction between merchants and customers; anyone can pay anyone. In particular, anybody can receive cash payments without contracts with third parties.

4. Cash allows for "acts of faith" or naive transactions. Those who are not familiar with all the antiforgery measures of a particular banknote or do not have the necessary equipment to verify them, can still transact with cash relying on the fact that what they do not verify is nonetheless verifiable in principle.

5. The amount of cash issued by the issuing authority is public information that can be verified through an auditing process.

The payment system proposed in (D. Chaum, 1988) focuses on the first characteristic while partially or totally lacking all the others. The same holds, to some extent, for all existing cash-like digital payment systems based on untraceable blind signatures (Brands, 1993a; Brands, 1993b; A. Lysyanskaya, 1998), rendering them unpractical.
...

[bulk of paper proposes a new system...]

Conclusion. The proposed digital payment system is more similar to cash than the existing digital payment solutions. It offers reasonable measures to protect the privacy of the users and to guarantee the transparency of the issuer's operations. With an appropriate business model, where the provider of the technical part of the issuing service is independent of the financial providers and serves more than one of the latter, the issuer has sufficient incentives not to exploit the vulnerability described in 4.3, even if the implementation of the cryptographic challenge allowed for it. This parallels the case of the issuing bank and the printing service responsible for printing the banknotes.

The author believes that an implementation of such a system would stand a better chance on the market than the existing alternatives, none of which has lived up to the expectations, precisely because it matches paper-based cash more closely in its most important properties.

Open-source implementations of the necessary software are being actively developed as parts of the ePoint project. For details, please see http://sf.net/projects/epoint

Posted by iang at 01:25 PM | Comments (3) | TrackBack

September 06, 2005

IP on IP

One class of finance applications that is interesting is that of developing new Intellectual Property (IP) over the net (a.k.a. IP). Following on from Ideas markets and Task markets, David points to ransomware, a concept where a piece of art or other IP is freed into the public domain once a certain sum is reached.

People seem ready for this idea. I've seen a lot of indications that there is readiness to try this from the open source (Horde) and arts communities. The tech is relatively solvable (I can say that because I built it way back when...) but the cultural issues and business concepts surrounding IP in a community setting are still holding back the push.

Meanwhile, the CIA has decided to open up a little and build something like what we do on the net:

Opening up the CIA
Porter Goss plans to launch a new wing of the CIA that will sort through non-secret data
By TIMOTHY J. BURGER Aug. 15, 2005

In what experts say is a welcome nod to common sense, the CIA, having spent billions over the years on undercover agents, phone taps and the like, plans to create a large wing in the spookhouse dedicated to sorting through various forms of data that are not secret - such as research articles, religious tracts, websites, even phone books - but yet could be vital to national security. Senior intelligence officials tell TIME that CIA Director Porter Goss plans to launch by Oct. 1 an "open source" unit that will greatly expand on the work of the respected but cash-strapped office that currently translates...

The reason this is interesting is their obscure reference to translation, which we can reverse engineer with a little intelligence: the way that the spooks get things translated is to farm out paragraphs to different people and then combine them. They do this so that nobody knows the complete picture and therefore the translators can't easily spy on them.

Now, this farming out of packets is something that we know how to do using FC over the net. In fact, we can do it well with the tech we have already built (authenticated, direct-cash-settled, pseudonymous, reliable, traceable or untraceable) which would support remote secret-but-managed translators so well it'd be scary. That they haven't figured it out as yet is a bit of a surprise. Hmm, no, apparently it's scary, says Eric Umansky.
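
For flavour, here is a rough sketch of the farming-out step itself - all names illustrative, and none of the settlement or nym machinery shown: deal the paragraphs around so that no single translator sees the whole document, and keep the ordering needed to reassemble it.

import itertools

def deal_out(paragraphs, translators):
    """Round-robin paragraphs across translators so nobody sees the whole."""
    assignments = {t: [] for t in translators}
    for (index, para), t in zip(enumerate(paragraphs), itertools.cycle(translators)):
        assignments[t].append((index, para))
    return assignments

def reassemble(translated):
    """translated: list of (index, translated_paragraph) collected from everyone."""
    return [text for _, text in sorted(translated)]

doc = ["para one ...", "para two ...", "para three ...", "para four ..."]
jobs = deal_out(doc, ["nym-A", "nym-B", "nym-C"])   # each nym gets paid per paragraph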

Unfortunately, they didn't open up enough to publish their article for free, and on one page at least Time were asking $$$ for the rest. More found here:

... foreign-language broadcasts and documents like declarations by extremist clerics. The budget, which could be in the ballpark of $100 million, is to be carefully monitored by John Negroponte, the Director of National Intelligence (DNI), who discussed the new division with Goss in a meeting late last month. "We will want this to be a separate, identifiable line in the CIA program so we know precisely what this center has in terms of investment, and we don't want money moved from it without [Negroponte's] approval," said a senior official in the DNI's office. Critics have charged in the past that despite the proven value of open-source information, the government has tended to give more prominence to reports gained through cloak-and-dagger efforts. One glaring example: the CIA failed in 1998 to predict a nuclear test in India, even though the country's Prime Minister had campaigned on a platform promising a robust atomic-weapons program.

"If it doesn't have the SECRET stamp on it, it really isn't treated very seriously," says Michael Scheuer, former chief of the CIA's Osama bin Laden unit. The idea of an open-source unit didn't gain traction until a White House commission recommended creating one last spring. Utilizing it will require "cultural and attitudinal changes," says the senior DNI official. Sure, watching TV and listening to the radio may not sound terribly sexy, but, says Scheuer, "there's no better way to find out what Osama bin Laden's going to do than to read what he says." --By Timothy J. Burger



So what's this got to do with Intellectual Property? Well, all the systems that will work to distribute IP over IP (and especially what is being discussed at the moment) also look uncannily like systems designed to pass intelligence around. It's no wonder - they are both combining small parts from many places and creating greater works from it. Content management is not the exclusive domain of the recording world.

Posted by iang at 06:36 PM | Comments (3) | TrackBack

August 22, 2005

Buying ID documents

Another structural beam just rusted and fell out of the foundations of the Identity Society. There is now a (thriving?) offshore market in forged secondary documents. Things like electricity bills, phone bills, credit documents, council or city papers can be drawn up for cash.

Beverley Young of Cifas, a fraud advice service set up by the credit industry, said: 'It is hard to prevent criminals using the internet to find false documentation, which can then be used to steal people's identities.
'We can warn people about false documents, but the sophisticated techniques used by the fraudsters mean that in most cases we cannot stop them. The regulators are powerless, too. ...

Cifas said the replica documents looked authentic because the people operating many of the websites had bought up printing equipment that was used by the companies whose documents they fake.

Fraud investigating company CPP of York accessed one of these websites and paid £200 for false statements from British Gas, Barclaycard, Barclays bank and Revenue & Customs.

As a reminder - what's left to establish Rights? There is a wealth of other techniques in Rights.

From a technology pov we have no issue. But for society at large this is yet more evidence of a looming clash between the unstoppable machine of Identity and the unimpressible rock of reality.

Posted by iang at 06:17 AM | Comments (2) | TrackBack

June 26, 2005

Nick Szabo - Scarce Objects

Nick Szabo is one of the few people who can integrate contracts into financial cryptography. His work with smart contracts echoes around the net, and last year he gave the keynote presentation at the Workshop on Electronic Contracts. In this paper he seeks to integrate scarcity and property constructs with the object oriented model of programming.

Scarce Objects

Scarce objects, a.k.a. conserved objects, provide a user and programmer friendly metaphor for distributed objects interacting across trust boundaries. (To simplify the language, I will use the present tense to describe architectures and hypothetical software). Scarce objects also give us the ability to translate user preferences into sophisticated contracts, via the market translator described below. These innovations will enable us for the first time to break through the mental transaction cost barrier to micropayments and a micromarket economy.
A scarce object is a software object (or one of its methods) which uses a finite and excludable resource -- be it disk space, network bandwidth, a costly information source such as a trade secret or minimally delayed stock quotes, or a wide variety of other scarce resources used by online applications. Scarce objects constrain remote callers to invoke methods in ways that use only certain amounts of the resources and do not divulge the trade secrets. Furthermore, scarce object wrappers form the basis for an online economy of scarce objects that makes efficient use of the underlying scarce resources.
Scarce objects are also a new security model. No security model to date has been widely used for distributing objects across trust boundaries. This is due to their obscure consequences, their origins in single-TCB computing, or both. The security of scarce objects is much more readily understood, since it is based on duplicating in computational objects the essential security features of physical objects. This architecture is "affordable" in Donald Norman's sense, since human brains are designed to reason in much more sophisticated ways about physical objects than about computational objects. It is thus also "affordable" in terms of mental transaction costs, which are the main barrier to sophisticated small-scale commerce on the Net. Finally, it will solve for the first time denial of service attacks, at all layers above the primitive scarce object implementation.

full paper
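
To make the metaphor concrete, here is a toy of my own - nothing from Nick's paper - showing the flavour: the wrapper meters a finite allowance of the underlying resource, and handing part of the allowance to someone else conserves the total.

class ScarceObject:
    def __init__(self, resource_fn, units):
        self._fn = resource_fn
        self._units = units            # a finite, excludable allowance

    def invoke(self, *args):
        if self._units <= 0:
            raise PermissionError("allowance exhausted")
        self._units -= 1               # every call consumes one unit
        return self._fn(*args)

    def split(self, units):
        # conservation: what is handed on can no longer be spent here
        if units > self._units:
            raise ValueError("cannot split more than is held")
        self._units -= units
        return ScarceObject(self._fn, units)

quotes = ScarceObject(lambda sym: "quote for " + sym, units=3)
quotes.invoke("IBM")          # ok, two units left
one_shot = quotes.split(1)    # a one-use capability to pass to someone else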

Comments below please!

Posted by iang at 07:39 PM | Comments (2) | TrackBack

Marc Stiegler - An Introduction to Petname Systems

Petnames evolved out of hard-won experience in the Electric Communities project, and went on to become a staple within the capabilities school of rights engineering. But it wasn't until Bryce 'Zooko' Wilcox made his dramatic claims of naming that petnames really discovered their place in the financial cryptographer's armoury.

Petnames were previously undocumented in any formal sense and disseminated by word of mouth and tantalising references such as Mark Miller's Pet Names Markup Language. Marc Stiegler, visiting researcher at HP Labs, has now stepped up and collected the community knowledge into one introductory piece, "An Introduction to Petname Systems." He has done us a grand service and deserves our thanks for putting petnames onto an academically sound footing:

An Introduction to Petname Systems

Zooko's Triangle argues that names cannot be global, secure, and memorable, all at the same time. Domain names are an example: they are global, and memorable, but as the rapid rise of phishing demonstrates, they are not secure.

Though no single name can have all three properties, the petname system does indeed embody all three properties. Human beings have been using petname systems since long before the advent of the computer. Informal experiments with Petname-like systems in computer-oriented contexts suggest that petnames are intuitive not only in the physical world but also in the virtual world. Proposals already exist for simple extensions to existing browsers that could alleviate (possibly dramatically) the problems with phishing. As phishers gain sophistication, it seems compelling to experiment with petname systems as part of the solution.

full paper


We seek comments below. Petnames is a very important concept in naming and if you design complicated financial cryptography systems it will be well worth your while to be at least familiar with the arguments within.




Marc Stiegler

Posted by iang at 07:35 PM | Comments (15)

June 10, 2005

Virus-safe Computing - HP Labs article

Mark Miller reports on a nice easy article from HP Labs. A must read!



Check out the current cover story on the HP Labs web page!

(For archival purposes, I also include here a direct link to Virus-safe computing.)

Research on how to do safe operating systems is concentrated in the capabilities field, and this article is worth reading to get the flavour. The big challenge is how to get capabilities thinking into the mainstream product area, and the way the team has chosen to concentrate on securing Microsoft Windows programs has to be admired for the chutzpah!

Allegedly, they've done it, at least in lab conditions. Can they take the capabilities and POLA approach and put it into the production world? I don't know, but the research is right up there at the top of the 'watch' list.

Posted by iang at 08:01 AM | Comments (0) | TrackBack

May 21, 2005

To live in interesting times - open Identity systems

As the technical community is starting to realise the dangers of the political move to strong but unprotected ID schemes, there is renewed interest in open Internet-friendly designs to fill the real needs that people have. I've written elsewhere about CACert's evolving identity network, and here's another that just popped up: OpenId. Eagle eyed FCers will have noticed that Dani Nagy's paper in FC++ on generating RSA keys from passphrases speaks directly to this issue of identity (and I'm holding back from nagging him about the demo .... almost...). Further, and linked more closely again, there is renewed interest in things like the difficult problem of privacy blogging. Oh, and Stefan says it's not as easy as just using keys.

In a sense this is feeling a little like the 90s. Then, the enemy was the US government as it tried to close down the Pandora's box of crypto, and the net with it. This war sparked bursts of innovation epitomized by the hard lonely fight of PGP and the coattails corporate rise of SSL's bid for PKI dominance.

President Clinton signed that war into the history books in January 2000 when he gave permission for Americans to export free and open source crypto software **. Now the battle lines are being sketched out again with the signing into law last month of the REAL ID national identity card in the United States. Other anglo holdouts (UK, Canada, Australia, NZ, ...) will follow in time, and that would put most of the OECD under one form of hard identity token or another.

The net effect for the net is likely bad. The tokens being suggested in the US will not be protected in the way the Europeans know how to do; one only needs to look at phishing and the confusion called the social security number to see that. Which predicts that the ID schemes will actually be only mildly useful to Internet ventures, and probably dangerous given the distance and the propensity of Identity thieves (a.k.a. fraudulent impersonators).

Yet something will be needed to stand in the place of the identity cards. It isn't good enough to point out the flaws in a system; one must have something one can claim - hopefully honestly - is better. So I feel the scene is set for a lot of experimentation in the field of strong authentication and identity systems. Hopefully, more along the lines of Stefan's privacy preserving notions and Dave Birch's frequent writings, and less along the lines of corporate juggernauts like Passport and Liberty Alliance. Those schemes look more like camouflaged hoovers for data mining than servants of you, the user.

We may yet get to live again in interesting times, to use the Chinese parable.

Postscript **: But not closed source, not paid for, not hardware and not collaboration or teaching! Gilmore and others report signs that the United States Government is perhaps again renewing its War on Mathematics by clamping down on foreign students in Universities.

Posted by iang at 09:07 AM | Comments (3) | TrackBack

April 24, 2005

PKI News

Whitfield Diffie is again interviewed, and this time the interviewer gave him the full benefit of a leading question:

A running joke is that whatever year we're in is "The Year of PKI," meaning the technology has yet to live up to its hype. Do you believe there will ever be a true year of PKI?

Diffie: No. One day we will look around and start trying to figure out what year in the past was the year of PKI. Widespread use of PKI is inevitable. But there has been a standardization problem that isn't helped by the number of competitors in the field. It's fundamentally a capital development problem. Growth is slow now but it'll pick up later. Did I think it would develop more quickly? Yes. Am I surprised there's so little of it? No. The government uses quite a bit of it. And it's hard to say PKI hasn't had tremendous market penetration. It just seems there's not enough of it given the security needs out there.

Now, Whitfield Diffie is one of the brightest of the brights; he and his co-authors practically invented public key cryptography, and a few other things. So it may seem strange that I dare disagree with him, but that's what I'm doing:

The layout of public cryptography in diagrams does not a system make; PKI as we know it was created from ideas not from needs, and what we got was a paper infrastructure. What do you expect when you cut out the shapes from an academic paper and construct an edifice, other than a house of cards?

As a system, PKI simply did not serve a purpose that we need. Systems engineering, applications and the world at large simply don't work that way. No more time is needed to show that, and I'm glad we can now say that its "Year" has passed and we can now get on and build systems that serve purposes.

In other PKI News, the much watched Mozilla policy project has proposed its draft policy to "staff" which is their term for the executive. I've since discovered that other browser groups like Konqueror and Opera simply adopt Mozilla's lead in issues like this; so it's a thrice welcome development to get the draft out to a wider audience.

For those who don't follow these things, the policy creates an ascendency path for Certificate Authorities to be added into Mozilla's root set. As the Authority for all Authorities, within the world of Firefox, Thunderbird and other applications, the process by which Mozilla judges, accepts and rejects is thus important. Frank Hecker has crafted some quite broad and useful ways to let non-commercial and non-traditional CAs get in there and serve.

Given the actual attempt to address this policy in an open process, and come up with something thought out and stakeholder driven, I wouldn't be surprised if it becomes a secret input into Microsoft's future deliberations.

(I'll leave the reader to sort through the contradictions in these two pieces of news!)

Posted by iang at 09:15 AM | Comments (10) | TrackBack

March 10, 2005

Identity Theft exists because Identity is Valuable

Security writer Bruce Schneier said recently:

"Every credential has been forged. As you make a credential more valuable, there is more impetus to forge it. The reason identity theft is so nasty now is that your identity is so much more valuable than it used to be. By putting in the infrastructure, we have made the crime more common. That's scary."

In further good news for the economic analysis of security, he goes on to say:

"... ID theft will only be solved when banks are given responsibility to prevent it. "As soon as it becomes the banks' problem, it will be solved. The entity that is responsible for the risk will mitigate the risk."

To which I demur somewhat. The banks already took on that risk and passed it on to consumers. Bouncing it back to banks will simply encourage them to bounce it back again.

Far better to empower the individuals to look after and protect their own identity. We can do this quite successfully and quite easily - more easily than alternates - by simply reducing the identity in the infrastructure. (If you don't like my work in this area, have a look at Stefan's work or the capabilities crowd.)

But, the dangerous misperception that "identity equals security" is so deeply embedded in the minds of, well, most everyone, statistically, that these efforts are stalled. What do we need to overcome this? A disaster? A revolution?

Posted by iang at 10:08 AM | Comments (3) | TrackBack

March 09, 2005

On Quintessenz and the Biometric Consortium

Stefan blogs on the Quintessenz effort to datamine the NSA's biometric Consortium, which is excellent, as I didn't have time when I was there.

Posted by iang at 03:21 PM | Comments (0) | TrackBack

February 24, 2005

Choicepoint - 700 identities attacked

As predicted, the states ganged up on Choicepoint and forced them to agree to notify all the victims of the identity thefts. Of the 145k or so identified sets of identity, 700 ... 750 have now been identified as having been (ab)used. Which leaves me confused - is the identity stolen when the data is acquired, or when the data is used to commit a fraud?

Adam points to widening ripples of turmoil as the thought permeates that remote, inattentive and disinterested parties are amassing massive databases on everyone. My guess: the US will suddenly have a love affair with European-style data privacy directives, as they deal with the risks and interests. Whoops: spoke too soon.

Also see this nice open governance site that monitors Choicepoint and other companies.

Posted by iang at 10:30 AM | Comments (0) | TrackBack

February 18, 2005

New-look passports - The Economist stands before the Identity Juggernaut

Identity fits squarely in the Rights layer, as it establishes a way to get access to assets and resources. There are other ways, all methods having their pros and cons. The problem - the cons - with identity is that while it may well be fine for humans, it is a simply hopeless, intractable way to deal with computers and networks. To make matters worse, it is the only method that non-technical people understand, so we have a dichotomy between those who understand ... and those who really understand it. Or, at least, between those who use it and those who implement it.

All this notwithstanding, the identity project of national governments is rolling forward like a juggernaut, and rather than politicise this forum, I've tried to keep away from it. That project is much too much like "you're either with us or against us" and technical issues are swept aside in such debates. I fear that's a line in the sand, though, and the tide is rolling in.

Last December, The Economist weighed in against the juggernaut, and copies are circulating around the net. As a reputable listing of the dangers of one Identity project, this one is worth preserving. If I find a reputable listing for the benefits of the Identity project, I'll do the same.

Friday February 18th 2005

Border controls

New-look passports

Feb 17th 2005
From The Economist print edition

High-tech passports are not working

IN OLDEN days (before the first world war, that is) the traveller simply
pulled his boots on and went. The idea that he might need a piece of paper
to prove to foreigners who he was would not have crossed his mind. Alas,
things have changed. In the name of security (spies then, terrorists now),
travellers have to put up with all sorts of inconvenience when they cross
borders. The purpose of that inconvenience is to prove that the passport's
bearer is who he says he is.

The original technology for doing this was photography. It proved adequate
for many years. But apparently it is no longer enough. At America's
insistence, passports are about to get their biggest overhaul since they
were introduced. They are to be fitted with computer chips that have been
loaded with digital photographs of the bearer (so that the process of
comparing the face on the passport with the face on the person can be
automated), digitised fingerprints and even scans of the bearer's irises,
which are as unique to people as their fingerprints.

A sensible precaution in a dangerous world, perhaps. But there is cause for
concern. For one thing, the data on these chips will be readable remotely,
without the bearer knowing. And - again at America's insistence - those data
will not be encrypted, so anybody with a suitable reader, be they official,
commercial, criminal or terrorist, will be able to check a passport holder's
details. To make matters worse, biometric technology - as systems capable of
recognising fingerprints, irises and faces are known - is still less than
reliable, and so when it is supposed to work, at airports for example, it
may not. Finally, its introduction has been terribly rushed, risking further
mishaps. The United States want the thing to start running by October, at
least in those countries for whose nationals it does not demand visas.

Your non-papers, please

In theory, the technology is straightforward. In 2003, the International
Civil Aviation Organisation (ICAO), a UN agency, issued technical
specifications for passports to contain a paper-thin integrated
circuit - basically, a tiny computer. This computer has no internal power
supply, but when a specially designed reader sends out a radio signal, a
tiny antenna draws power from the wave and uses it to wake the computer up.
The computer then broadcasts back the data that are stored in it.

The idea, therefore, is similar to that of the radio-frequency
identification (RFID) tags that are coming into use by retailers, to
identify their stock, and mass-transit systems, to charge their passengers.
Dig deeper, though, and problems start to surface. One is interoperability.
In mass-transit RFID cards, the chips and readers are designed and sold as a
package, and even in the case of retailing they are carefully designed to be
interoperable. In the case of passports, they will merely be designed to a
vague common standard. Each country will pick its own manufacturers, in the
hope that its chips will be readable by other people's machines, and vice
versa.

That may not happen in practice. In a trial conducted in December at
Baltimore International Airport, three of the passport readers could manage
to read the chips accurately only 58%, 43% and 31% of the time, according to
confidential figures reported in Card Technology magazine, which covers the
chip-embedded card industry. (An official at America's Department of
Homeland Security confirmed that "there were problems".)

A second difficulty is the reliability of biometric technology.
Facial-recognition systems work only if the photograph is taken with proper
lighting and an especially bland expression on the face. Even then, the
error rate for facial-recognition software has proved to be as high as 10%
in tests. If that were translated into reality, one person in ten would need
to be pulled aside for extra screening. Fingerprint and iris-recognition
technology have significant error rates, too. So, despite the belief that
biometrics will make crossing a border more efficient and secure, it could
well have the opposite effect, as false alarms become the norm.

The third, and scariest problem, however, is one that is deliberately built
into the technology, rather than being an accident of its present
inefficiency. This is the remote-readability of the chip, combined with the
lack of encryption of the data held on it. Passport chips are deliberately
designed for clandestine remote reading. The ICAO specification refers quite
openly to the idea of a "walk-through" inspection with the person concerned
"possibly being unaware of the operation". The lack of encryption is also
deliberate - both to promote international interoperability and to encourage
airlines, hotels and banks to join in. Big Brother, then, really will be
watching you. And others, too, may be tempted to set up clandestine
"walk-through inspections where the person is possibly unaware of the
operation". Criminals will have a useful tool for identity theft. Terrorists
will be able to know the nationality of those they attack.

Belatedly, the authorities have recognised this problem, and are trying to
do something about it. The irony is that this involves eliminating the
remote readability that was envisaged to be such a crucial feature of the
system in the first place.

One approach is to imprison the chip in a Faraday cage. This is a
contraption for blocking radio waves which is named after one of the
19th-century pioneers of electrical technology. It consists of a box made of
closely spaced metal bars. In practice, an aluminium sheath would be woven
into the cover of the passport. This would stop energy from the reader
reaching the chip while the passport is closed.

Another approach, which has just been endorsed by the European Union, is an
electronic lock on the chip. The passport would then have to be swiped
through a special reader in order to unlock the chip so that it could be
read. How the European approach will interoperate with other countries'
passport controls still needs to be worked out. Those countries may need
special equipment or software to read an EU passport, which undermines the
ideal of a global, interoperable standard.

Sceptics might suggest that these last-minute countermeasures call into
doubt the reason for a radio-chip device in the first place. Frank Moss, of
America's State Department, disagrees. As he puts it, "I don't think it
questions the standard. I think what it does is it requires us to come up
with measures that mitigate the risks." However, a number of executives at
the firms who are trying to build the devices appear to disagree. They
acknowledge the difficulties caused by choosing radio-frequency chips
instead of a system where direct contact must be made with the reader. But
as one of them, who preferred not to be named, put it: "We simply supply all
the technology-the choice is not up to us. If it's good enough for the US,
it's good enough for us."

Whether it actually is good enough for the United States, or for any other
country, remains to be seen. So far, only Belgium has met America's
deadline. It introduced passports based on the new technology in November.
However, hints from the American government suggest that the October
deadline may be allowed to slip again (it has already been put back once)
since the Americans themselves will not be ready by then. It is awkward to
hold foreigners to higher standards than you impose on yourself. Perhaps it
is time to go back to the drawing board.
----------------

Related Items

From The Economist
Biometrics Dec 4th 2003
http://www.economist.com/science/displayStory.cfm?Story_id=2266290

Websites
America's State Department has information on the machine-readable passport requirement <http://www.state.gov/r/pa/ei/rls/36114.htm>. The Enhanced Border Security Act <http://thomas.loc.gov/cgi-bin/query/z?c107:H.R.3525.ENR:> set the timetable for the introduction of the passports
<http://europa.eu.int/idabc/en/document/3669/194>. The EU has information on its own plans to introduce machine-readable passports. Ari Juels <http://www.rsasecurity.com/rsalabs/node.asp?id=2029> is a security expert at RSA laboratories.
----------------

Copyright © The Economist Newspaper Limited 2005. All rights reserved.
http://www.economist.com/science/displaystory.cfm?story_id=3666171

Posted by iang at 10:18 AM | Comments (0) | TrackBack

February 15, 2005

Plans for Scams

Gervase Markham has written "a plan for scams," a series of steps for different module owners to start defending. First up, the browser, and the list will be fairly agreeable to FCers: Make everything SSL, create a history of access by SSL, notify when on a new site! I like the addition of a heuristics bar (note that Thunderbird already does this).

Meanwhile, Mozilla Foundation has decided to pull IDNs - the international domain names that were victimised by the Shmoo exploit. How they reached this decision wasn't clear, as it was taken on insiders' lists, and minutes aren't released (I was informed). But Gervase announced the decision on his blog and the security group, and the responses ran hot.

I don't care about IDNs - that's just me - but apparently some do. Axel points to Paul Hoffman, an author of IDN, who pointed out that he had balanced solutions to IDN spoofing. Like him, I'm more interested in the process, and I'm also thinking of the big security risks to come, and the meta-risks. IDN is a storm in a teacup, as it is no real risk beyond what we already have (and no, the digits 0,1 in domains have not been turned off).

Referring this back to Frank Hecker's essay on the foundation of a disclosure policy does not help, because the disclosure was already done in this case. But at the end he talks about how disclosure arguments fell into three classes:

  • Literacy: “What are the words?”
  • Numeracy: “What are the numbers?”
  • Ecolacy: “And then what?”

"To that end [Frank suggests] to those studying the “economics of disclosure” that we also have to study the “politics of disclosure” and the “ecology of disclosure” as well."

Food for thought! On a final note, a new development has occurred in certs: a CA in Europe has issued certs with the critical bit set. What this means is that if the verifier does not (nominally) have the code to deal with that part of the cert, the cert is meant to be rejected. And Mozilla's crypto module follows the letter of the RFC in this.

IE and Opera do not, it seems (see #17 in bugzilla), and I'd have to say, they have good arguments for rejecting the RFC and not the cert. Too long to go into tonight, but think of the crit ("critical bit") as an option on a future attack. Also, think of the game play that can go on. We shall see, and coincidentally, this leads straight back into phishing because it is asking the browser to ... display stuff about the cert to the user!
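
Roughly, the letter of the RFC (RFC 3280, section 4.2) comes down to this - data structures simplified, and nothing like Mozilla's actual NSS code:

UNDERSTOOD = {"keyUsage", "basicConstraints", "extKeyUsage"}

def acceptable(cert_extensions):
    """cert_extensions: list of (extension_name, critical_flag) pairs."""
    for name, critical in cert_extensions:
        if critical and name not in UNDERSTOOD:
            return False        # unknown and critical: reject the whole cert
    return True                 # unknown but non-critical may be ignored

# a cert carrying, say, a qcStatements extension the browser doesn't know, marked critical
print(acceptable([("keyUsage", True), ("qcStatements", True)]))   # False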

What stuff? In this case, the value of the liability in Euros. Now, you can't get more FC than that - it drags in about 6 layers of our stack, which leaves me with a real problem allocating a category to this post!

Posted by iang at 10:05 PM | Comments (0) | TrackBack

February 12, 2005

Passport/Liberty leads to convergence with privacy community

Stefan Brands postulates that the efforts of Passport/Liberty Alliance are leading to a convergence of thought between those who have been warning about privacy all these years, and those who want to build identity systems that share data across different organisations. Probably, this is a good thing, in that only the failures of these systems can lead these institutions to understanding that people won't support them unless they also deliver benefits with lowered risks to themselves.

Should the 'privacy nuts' just stand back and let them make mistakes? I don't know about that; I'd say the privacy community would be better off building their own systems.

Meanwhile, Stefan's musings and his view on the "7 laws of identity" postulated by Kim Cameron of the Passport project lead Stefan to postulate some design principles. First up:

"The technical architecture of an identity system should minimize the changes it causes to the legacy trust landscape among all system participants."

Sounds good to me. On two counts, it technically has a much better survival probability. First, it's a principle, and not a law. Laws don't just suddenly appear on blog entries; they are founded in much more than that. Secondly, it says "should" and so recognises its own frailty. There are cases where these things can be wrong.

For the other 9 design principles, we have to wait until Stefan writes them down!

Posted by iang at 01:44 PM | Comments (0) | TrackBack

February 11, 2005

US approves National Identity Card

Yesterday, the US House of Representatives approved the national identity card.

This was first created in December 2004's Intelligence bill, loosely called the Patriot II act because it snuck in provisions like this without the Representatives knowing it. The deal is basically a no-option offer to the states: either you issue all your state citizens with nationally approved cards, or all federal employees are instructed to reject access. As 'public' transport (including flying) falls under federal rules now, that means ... no travel, so it's a pretty big stick.

If this doesn't collapse, then America has a national identity card. That means that Australia, Canada and the UK will follow suit. Other OECD countries - those in the Napoleonic code areas - already have them, and poorer countries will follow if and when they can afford them.

This means that financial cryptography applications need to start assuming the existence of these things. How this changes the makeup of financial cryptography and financial applications is quite complex, especially in the Rights and Governance layers. Some good, some bad, but it all requires thought.

http://tinyurl.com/4futv

Posted by iang at 09:54 AM | Comments (5) | TrackBack

February 08, 2005

A hybrid Nym / Centralised Identity?

Whoops - spoke too soon. A bit later on, Stefan introduces some sort of hybrid nym / centralised identifier which he claims gives the best of both worlds: the nymous identifier (what we use) and the fully delivered centralised one (which governments would probably like to use in any national ID scheme). No details are forthcoming as yet, and it looks like a commercial product (probably based on Stefan's grab bag of patents), but certainly it will be interesting to see if there are new applications made possible by it.

Posted by iang at 07:42 PM | Comments (0) | TrackBack

    4 Corners in Identity

    Adam & Michael discovered Stefan Brands' new blog called the Identity Corner. Stefan is of course the cryptographer who picked up from David Chaum and created a framework of mathematical formulas to deliver privacy control. Stefan's formulas as described in his book are rather complete, way too complete for all except talented mathematicians to understand, but his introductory 1st chapter remains a landmark in privacy literature.

    Stefan points at another Identity blog, which postulates something called the Laws of Identity. For my money, there are too many MUSTs in the list to be reasonable.

    In particular, I think a description like Zooko's Triangle is a much clearer starting point. Zooko says that we can have any two of the following three: decentralised, secure and human meaningful. But we can't have all three; he put forth a challenge to prove him wrong, and to date nobody has managed to do that.

    Which of course brings us to Ricardo, which chooses secure and decentralised as its requirements (as have a lot of other systems, see the link for a list). That's because each client can then do a more or less transparent mapping for the user, although the problems that occur with this have sparked some thoughts and designs. In Ricardo we have generally told the user what contracts are named and let the user choose their account names, but there is a more powerful way to solve this: use pet names.

    These are words or phrases that the user chooses herself to label things her software agent knows about. Because the user invented the name, by herself, she and her agent are the only ones who know it. So when a secure global name turns up trying to phish her, the hope is that the absence of any familiar pet name will resolve the security conundrum left by Zooko above.
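
    To make the pet name idea concrete, here is a minimal sketch in Python. It assumes the "secure global name" is something like a key fingerprint; the class and the names in it are invented for illustration, and are not taken from Ricardo or from any existing petname tool.

        from typing import Optional

        class PetNameTable:
            """User-private mapping from secure global names to familiar labels."""

            def __init__(self) -> None:
                self._pets = {}   # secure global name -> pet name the user chose

            def assign(self, global_name: str, pet_name: str) -> None:
                # The user labels a key or contract she has already decided to trust.
                self._pets[global_name] = pet_name

            def label_for(self, global_name: str) -> Optional[str]:
                # Return her familiar label, or None if this name is a stranger.
                return self._pets.get(global_name)

        table = PetNameTable()
        table.assign("key:ab12cd34", "my gold issuer")

        # A phisher's key is globally valid but has no pet name in her table.
        incoming = "key:ff99ee88"
        if table.label_for(incoming) is None:
            print("Unknown key - do not treat this as 'my gold issuer'.")

    The security signal is the absence of the familiar label, not the presence of a globally valid name.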

    So far so good, and if you've kept pace with the phishing season, this is analogous to what Amir & Ahmad propose for site logos - except with logos and images rather than words. It's *not* analogous to what I propose with branded CAs (also adopted in A&A's paper) but that's because the CAs exist in a centralised, not global space - we don't need to drop the meaningful name then. (Put the two together and we have a quite powerful solution. It's the best I've seen so far at least.)

    Which brings us full circle, back to Stefan, whose proposals have little to do with the basic nymous technique that is pervasive in these concepts. It will be interesting to see how he integrates his proposals with those behemoths of PKI and the replacement, Federated Identity.

    Posted by iang at 07:08 PM | Comments (0) | TrackBack

    January 14, 2005

    Dr. Ron Paul understands the forces behind identity theft

    It seems that no sooner had I got the polemic on Why Hollywood has to take one for the team off my chest than Dr Ron Paul, a Representative in the US Congress, proposed legislation to ban the issue of uniform and universal identifiers.



    Your number, in other words, won't exist in Dr Paul's world. This is a good thing, as we've written many times that the one true number is a Rights mechanism that is a disaster in the making. His proposal is called the "Identity Theft Prevention Act", and it further aims to prevent the theft of Identity by repealing the national ID card in the oddly named "intelligence reform bill", forcing the re-issue of all social security numbers, making social security numbers only usable for ... Social Security, and repealing exceptions to privacy protections that permitted the IRS and the FBI to conduct various agenda-related attacks in recent times. Among other things.

    Dr Paul's Act is a frontal attack on a behemoth that will not permit it. This will not succeed of course, but Dr Paul will at least have secured his own peace of mind by standing up and being counted.

    Posted by iang at 11:49 AM | Comments (2) | TrackBack

    January 09, 2005

    Identity Theft: Why Hollywood has to take one for the team.

    The Year of the Phish has passed us by, and we can relax in our new life swimming in fear of the net. Everyone now knows about the threats, even the users, but what they don't know is what happens next. My call: it's likely to get a lot worse before it gets better. And how it gets better is not going to be life as we knew it. But more on that later.

    First... The Good News. There is some cold comfort for those not American. A recent report had British phishing losses under the millions. Most of the rich pickings are 'over there', where credit rules and identity says 'ok'. And even there, the news could be construed as mildly positive for those in need of good cheer. A judge recently awarded a billion-dollar payout against spammers who are identified in name, if not in face. We might never see their faces, but at least it feels good. AOL reported spam down by 75% but didn't say how they did it.

    Also, news that Microsoft is to charge extra for security must make us believe they have found the magic pixie dust of security, and can now deliver an OS that's really, truly secure, this time! Either that, or they've cracked the conundrum of how to avoid the liability when the masses revolt and launch the class action suit of the century.

    All this we could deal with, I guess, in time, if we could as an industry get our collective cryptographic act together and push the security models over to protecting users (one month's coding in Mozilla should do it, but oh, what a long month it's been!). But there is another problem looming, and it's ...

    The Bad News: the politicians are now champing at the bit, looking for yet another reason to whip today's hobby horse of 'identify everyone' along into more lather. Yes, we can all mangle metaphors, just as easily as we can mangle security models. Let me explain.

    The current project to identify the humanity of the world will make identity theft the crime of the century. It's really extraordinarily simple. The more everything rests on Identity, the more value Identity will have. And the more value it has, the more it will be worth to steal.

    To get a handle on why it is more valuable, put yourself in the shoes of an identity thief. Imagine our phisher is three years old, and has a sweet tooth for data.

    How much sugar is to be found in a thousand cooperating databases? Each database perfectly indexed with your one true number and bubbling over with personal details, financial details, searchable on demand. A regulatory regime that creates shared access for a thousand agencies, and that's before they start sharing with other countries?

    To me, it sounds like the musical scene in the sweets factory of Chitty Chitty Bang Bang, where the over-indulgent whistle of our one true identity becomes our security and dentistry nightmare. When the balance is upset, pandemonium ensues. (I'm thinking here of the Year of the Dogs, and if you've seen the movie you will understand!)

    Now, one could ask our politicians to stop it, and at once. But it's too late for that; they have the bits of digital identity between their teeth, and they are going to do it to us to save us from phishing! So we may as well be resigned to the fact that there will be a thousand interlinked identity databases, and 100 times that number of people who have the ability to browse, manipulate, package, steal and sell that data. (This post is already too long, so I'm going to skip the naivete of asking the politicians to secure our identity, ok?)

    A world like that means credit will come tumbling down, as we know it. Once you know everything about a person, you are that person, and no amount of digital hardware tokens or special biometric blah blahs will save the individual from being abused. So what do people do when their data becomes a phisher's candyfest?

    People will withdraw from the credit system and move back to cash. This will cost them, but they will do it if they can. Further, it means that net commerce will develop more along the lines of cash trading than credit trading. In ecommerce terms, you might know this better as prepaid payment systems, but there are a variety of ways of doing it.

    But the problem with all this is that a cash transaction has no relationship to any other event. It's only just tractable for one transaction: experienced FCers know that wrapping a true cash payment into a transaction, when you have no relationship to fall back to in the event of a hiccup, is quite a serious challenge.

    So we need a way to relate transactions, without infecting that way with human identity. Enter the nym, more fully known as the pseudonymous identifier. This little thing can relate a bunch of things together without needing any special support.

    We already use them extensively in email, and in chat. There are nyms like iang, which are short and rather tricky to use because there is more than one of us. We can turn it into an email address, and that allows you to send a message to me using one global system, email. But spam has taught us a lesson with the email address, by wiping out the ease and reliability of the email nym ... leading to hotmail and the throw-away address (for both offense and defense), and now the private email system.

    Email has other problems (I predict it is dying!), which takes us to Instant Messaging (or chat or IM). The rise of the peer-to-peer (p2p) world has taken nyms to the next level: disposable, and evolutionary.

    This much we already know. P2P is the buzzword of the last 5 years. It's where the development of user activity is taking place. (When was the last time you saw an innovation in email? In browsing?)

    Walking backwards ... p2p is developing the nym. And the nym is critical for creating the transactional framework for ecommerce. Which is getting beaten up badly by phishing, and there's an enveloping pincer movement developing in the strong human identity world.

    But - and here's the clanger - when and as the nymous and cash-based community develops and overcomes its little difficulties, those aforementioned forces of darkness are going to turn on it with a vengeance. For different reasons, to be sure. For an obvious example, the phishers are going to attack looking for that lovely cash. They are going to get rather rabid rather quickly when they work out what the pickings are.

    Which means the mother of all security battles is looming for p2p. And unfortunately, it's one that we have to win, as otherwise the ecommerce thing that they promised us in the late nineties is looking a bit more like those fairy tales that don't have a happy ending. (Credit's going to be squeezed, remember.)

    The good news is that I don't see why it can't be won. The great thing about p2p is the failure of standards. We aren't going to get bogged down by some dodgy 80's security model pulled out of the back pages of a superman comic, like those Mr Universe he-man kits that the guy with the funny name sold. No, this time, when the security model goes down in flames (several already have) we can simply crawl out of the wreckage, dust off and go find another fighter to fly into battle.

    Let's reel off those battles already fought and won and lost. Napster, Kazaa, MNet, Skype, BitTorrent. There are a bunch more, I know, I just don't follow them that closely. Exeem this week, maybe I do follow them?

    They've had some bad bustups, and they've had some victories, and for those in the systems world, and the security world, the progress is quite encouraging. Nothing looks insurmountable, especially if you've seen the landscape and can see the integration possibilities.

    But - and finally we are getting to the BIG BUT - that means whoever these guys are defeating ... is losing! Who is it? Well, it's the music industry. And Hollywood.

    And here's where it all comes together: ecommerce is going to face a devastating mix of over-rich identity and over-rich phishers. It'll shift to cash-based and nym-based, on the back of p2p. But that will shift the battle royale into p2p space, which means the current skirmishes are ... practice runs.

    And now we can see why Hollywood is in such a desperate position. If the current battle doesn't see Hollywood go down for the count, that means we are in a world of pain: a troubling future for communication, a poor future for ecommerce, and a pretty stark world for the net. It means we can't beat the phisher.

    Which explains why Hollywood and the RIAA have found it so difficult to get support in their fight: everyone who is familiar with Internet security has watched and cheered, not because they like to see someone robbed, but because they know this fight is the future of security.

    I like Hollywood films. I've even bought a few kilograms of them. But the notion of losing my identity, losing my ability to trade and losing my ability to communicate securely with the many partners and friends I have over the net fills me with trepidation. I and much of the academic and security world can see the larger picture, even if we can't enunciate it clearly. I'd gladly give up another 10 years of blockbusters if I can trade with safety.

    On the scales of Internet security, we have ecommerce on one side and Hollywood on the other. Sorry, guys, you get to take one for the team!


    Addendum: I've just stumbled on a similar essay that was written 3 weeks before mine: The RIAA Succeeds Where the Cypherpunks Failed by Clay Shirky.

    Posted by iang at 05:22 PM | Comments (6) | TrackBack

    December 26, 2004

    Nyms sighted in authentication software

    The use of the nym is seeing a little bit of a revival, driven by the onslaught of chat as the future means of communication for anyone under 30. Over on Adam's more prolific blog there is word of a company called WikID that is pushing nyms from phones.

    A short explanation of the cryptographic nym as a rights technique. A public/private key pair is created, and the public key is registered with a server. The private key is kept private! Now that arrangement can be used for the private key holder to authenticate, or sign for, any action so permitted in the software. (You can also simulate the thing with a name and a password, which is good enough as long as there is no serious value involved.)
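
    As a minimal sketch of that arrangement, here's how a nym might be created and exercised in Python with the pyca/cryptography library. The action string and the "server side" step are invented for illustration; this is not a description of any particular product's protocol.

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives import serialization
        from cryptography.hazmat.primitives.asymmetric import ed25519

        # 1. Create the nym: a fresh keypair, seconds of otherwise wasted CPU time.
        private_key = ed25519.Ed25519PrivateKey.generate()

        # 2. Register only the public half with the server.
        registered_nym = private_key.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)

        # 3. The holder signs for an action permitted in the software.
        action = b"open trade #42"
        signature = private_key.sign(action)

        # 4. The server verifies against the registered nym; it learns nothing
        #    about the human behind it, only that the same nym acted again.
        server_key = ed25519.Ed25519PublicKey.from_public_bytes(registered_nym)
        try:
            server_key.verify(signature, action)
            print("action authorised by this nym")
        except InvalidSignature:
            print("rejected")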

    Nyms are identities. They are not like IDentities, the ones that politicians assure us we'd be better off if we had more of. Nyms are instead short-lived, lightweight identities that allow one action to relate to another. That is, if you used a nym to chat with some dude today, and also tomorrow, he would have no trouble relating the two as the same person. Bob recognises Alice from a couple of days ago, by the light touch of her nym.

    Cryptographically, these are really strong. But they are also creatable on demand; making one of these things takes ... oh, seconds of otherwise wasted CPU time. Good nymous software makes them on demand and lets the users dictate what to do with them. Nyms are very very powerful, and can be used for chat, payments, trades and the like, as long as the designer recognises their asset characteristics. It is only the accident of the Certificate Authority model - less powerful, more IDentifying, and securing only a tiny part of the security needs of ecommerce - that has overshadowed nyms in popular software.

    Nyms can also be dumbed down and turned into singular authentication tokens, like certs in the CA model, and this seems to be what the WikID company has done. They have a client that downloads and installs on a phone or PDA, and creates a nym for purposes of gaining entry to a web site or other server asset.

    Any exposure for the concept would be good, so I wish them well. It's been a hard lonely road for us nym developers for the last decade or so.

    Posted by iang at 04:39 PM | Comments (5) | TrackBack

    November 24, 2004

    DIY fingerprint idea thwarts ID thieves

    More from The Register! A man has chosen to use identity biometrics to block fraudsters. He's done this by putting a "Correction" notice into his credit report, thus alerting potential credit suppliers to his imposed condition: get a thumbprint. How elegant, how innovative :)

    DIY fingerprint idea thwarts ID thieves
    By John Leyden
    Published Wednesday 24th November 2004 07:59 GMT

    The Home Office is touting ID cards as a solution to ID theft in today's Queen's Speech but a Yorkshire man has taken matters into his own hands. Jamie Jameson, a civil servant from Scarborough in North Yorkshire, insists that credit can only be extended in his name on production of a thumbprint.

    Jameson hit on the idea of writing to the UK's three main credit reference agencies - Equifax, Experian and Call Credit - and requesting that they put a 'Notice of Correction' on his file stating that a print must be offered with applications for loans or credit cards issued in his name. At the same time he submitted his fingerprint.

    This Notice of Correction is the first thing a prospective lender will see when it calls up his records. Normally this facility provides a way for individuals to explain why they have a county court judgement against their name or other qualifications to their credit history. Jameson is using it to do a cheap security check.

    Although uncommon in the UK, thumbprints are often used as an audit mechanism for people cashing cheques in US banks. A similar scheme was trialled in Wales. Jameson takes a little ink pad similar to that used in US banks around with him all the time just in case he might need it.

    If an application for credit is accepted without a thumbprint - against Jameson's express instructions - then he will not be liable for losses. If a would-be fraudster gives a false print on an application then it makes it easier for them to be traced by the police. "Lenders don't have to match prints. Using prints just establishes an audit trail if anything goes wrong," Jameson explained. "It's not so much me proving who I am as preventing someone else being me."

    Jameson has been using the idea successfully for over a year. He concedes that the scheme isn't foolproof and that it's possible to fake fingerprints ("nothing's perfect," as he puts it). As far as Jameson knows he's the only person who's using the technique in the UK. The scheme delays the issuing of credit, which could be a problem with people who apply for multiple accounts, but this is a minor inconvenience for Jameson. "This is driven by the individual so there are no data protection issues. It's a real deterrent to ID theft," he told El Reg. ®

    Posted by iang at 12:19 PM | Comments (0) | TrackBack

    November 15, 2004

    A further challenge to Strong Identity - Nerve Coupled Cooperating Humans

    While we are on this Identity thread, one of the more interesting talks at the Digital Identity forum was Kevin Warwick's keynote on his experiences as a Cyborg. Professor Warwick from Reading University spends his idle experimental time inventing special chips designed to interface into nerves and poking them into himself.

    By coupling himself up to a computer, at the nerve level rather than the fingertip level, he has made himself into a Cyborg, at least some of the time. His standard quip - a reference to a documentary made 20 years or so back with a well known politician, called Terminator - gets a nervous laugh.

    Cutting to the chase, what has he done? Well, he's connected his nerves to a mechanical arm and got it clutching in sync with his arm. There's a delay of about a second between his hand clutch and the mechanical clutch, and a loss of reliability, as every second clutch seems to get lost. But impressive nonetheless. The article below refers to shenanigans where he extended the link over the Internet. Predictably the journo grasped the newsworthiness of hackers taking control of his extended limbs, but readers will quickly see the solution there. Get some cyborg crypto, Kevin!

    Better yet is the experiment to tap into an excitement or tension nerve, and use the input to change colours on a necklace his wife wore. Somewhat disconcertingly, the change in colour from blue to red was indicative of Professor Warwick getting excited about something that his wife could not see - as they were coupled with a radio link that let them drift apart.

    The latest experiment, and the most interesting to date, was when he impressed on his wife the need to more actively participate. In a moment of madness, she agreed to have a probe inserted into her nerves so she also could communicate in the new cyborgian fashion. What he didn't tell her (and claimed he didn't know) was that these things get inserted without anaesthetic, and as they strike the nerve, the pain is something beyond. Something about needing a good contact, the surgeon said...

    But anyway, to the experiment. Once all the technical bugs had been ironed out, the couple were now able to communicate both ways using the clutching signal. That is, each could feel the other clutch at their fist. (That is, Kevin could feel his wife clutching her own fist, and vice versa.) Hence, once we get past the breach in English language grammar, the two have a rudimentary capability of communicating with Morse code or the like. This link could be happily extended over wireless (ain't 802.11 grand!), allowing for what may well be the first remote grope. Perhaps the remote slap is next on the experimental agenda.

    What they achieved is tiny in terms of actual results, but the implications are huge. We now have feedback at the nerve level between humans! A binary signal, yes, but a signal and a closed loop even. This is reminiscent of the early experiments in radio transmission, and the telephone. Yes, the results then were silly, too, but the implications were immense.

    So how do we drag the games of two nervously coupled people back to the question of Identity? There are two things. Firstly, it is now somewhat unclear where a person who is also a cyborg is going to be. What mechanical limbs do they control, and how remote are they?

    Secondly, there is some possibility of confusing just what an identity is. If a person can link at the cognitive level to a computer, and two people can link at the nervous level, and these entities start to merge their capabilities, what happens when you have a cell of half a dozen people across the planet and the odd 2 or 3 racks' worth of computing power, all working together?

    Are we at the cusp of losing cohesive identity altogether, or do we need to create some new legal fiction such as the Limited Liability Multiple Brain Multiple CPU Cyborg? And, just because there are some propertarians chewing their fingernails on this essential point, just who gets to own the output of our new LLMBMCCs?

    Anyway, back to lightness and mirth. Here's the article.

    Virus warning: Cyborgs at risk
    By Jo Best

    Story last modified November 12, 2004, 5:06 PM PST


    Kevin Warwick, professor of cybernetics at Reading University in England, is looking forward to becoming a cyborg again.

    But the academic, who has wired his nervous system up to a computer and put an RFID chip in his arm, is also warning that the day will come when computer viruses can infect humans as well as PCs.

    Speaking this week at Consult Hyperion's fifth Digital Identity Forum in London, Warwick spoke of a future when those who aren't cyborgs will be considered the odd ones.

    "For those of you that want to stay human...you'll be a subspecies in the future," he said.

    Warwick said he believes there are advantages for a human being networked to a computer. It would mean an almost "infinite knowledge base," he said, adding that it would be akin to upgrading humans.

    The security problems that dog modern computing won't be much different from those that could plague the cyborgs of the future. "We're looking at software viruses and biological viruses becoming one and the same," Warwick said. "The security problems (will) be much, much greater."

    If humans were networked, the implications of being hacked would be far more serious, and attitudes toward hackers would be radically changed, he added. At the moment, hackers' illegal input into a network is tolerated, he claimed. But if humans were connected to the Internet and hacks carried out, that would push the realms of tolerance, he said.

    In Warwick's own networking experiments, in which he used his body's connectivity to operate a mechanical arm in the United States, the scientist didn't publicize the IP address of his arm in case someone hijacked it.

    While networked humans may be a significant way off, Warwick's experiments are intended to have a practical purpose. He has been working with Stoke Mandeville Hospital in the United Kingdom on the possible implications of a networked nervous system for those with spinal injuries. Researchers are exploring, for example, whether people might be able to control a wheelchair through their nervous system.

    Nevertheless, Warwick said the idea of marrying humanity and technology isn't currently a popular one. Talking of his RFID experiments, he said, "I got a lot of criticism, I don't know why."

    Putting RFID chips in arms is now more than a novelty. Partygoers at one club in Spain can choose to have RFID chips implanted in their arms as a means of paying for their drinks. Some Mexican law enforcement officials had the chips implanted to fend off attempted kidnappings.

    The U.S. Food and Drug Administration has also recently approved the use of RFID in humans. One potential application would be allowing medical staff to draw information on a patient's health from the chip.

    Jo Best of Silicon.com reported from London.

    Posted by iang at 07:48 PM | Comments (1) | TrackBack

    First time a digital signature has been affirmed by court?

    The previous story tipped me to this court case last year, where digital signatures as signatory evidence were disputed and then confirmed. This may be the first precedent! As far as I know this is the first time the signatures themselves were challenged (our own Ricardian contracts appeared in court in 2001, 2002, but their authenticity was not disputed).

    26.06.2003 Digital Signature Found to be Valid in Estonian Court System

    The process of court disputes becomes more convenient for participants: according to a ruling taken last week, a district court declared that documents may be sent to court by e-mail if they have a digital signature according to laws. This is the first such case in Estonia.

    The reason why the argument about digital signature came to Tallinn district court was a case between Estonian Railways, Estonian Competition Board and Valga Depot (Valga Külmutusvagunite Depoo). Andres Hallmägi, a lawyer representing the depot, sent a digitally signed document to court by e-mail. Tallinn administrative city court claimed that they are not able to read the document and thus rejected it.

    "I did not do this because of technological arrogance or bullying," said Hallmägi. "But this was a matter of principle - if you do not push the bureaucrats, they will not start innovating on their own."
    A European precedent

    The case was taken to district court, where it was ruled that digital signatures are equivalent to handwritten ones in Estonia and therefore the court should not have claimed that they cannot use it.

    The district court ruling claims: "The reception of a digitally signed document was not obstructed by the lack of appropriate software - it was and still is possible to immediately install such software at courts when necessary."

    Read on...

    Posted by iang at 10:11 AM | Comments (1) | TrackBack

    Surprise and Shock! Identity smart cards that work on a national level!

    In surprise and indeed shock, someone has made smart card identity systems work on a national scale: the Estonians. Last week, I attended Consult Hyperion's rather good Digital Identity forum, where I heard the details from one of their techies.

    There are 1.35 million Estonians, and between them they hold 650,000 issued cards. Each card carries the normal personal data, a nationally established email address, a photo, and two private keys. One key is used for identification, and the other is used for signing.

    Did I say signing? Yes, indeed, there is a separate key there for signing purposes. They sign all sorts of things, right up to wills, which are regularly excluded from other countries' lists of valid uses, and they seem to have achieved this fairy tale result by ensuring that the public sector is obliged to accept documents so signed.

    I can now stop bemoaning that there are no extant signatory uses of digital signatures other than our Ricardian Contracts. Check out OpenXades for their open source signatory architecture.

    When asked, presenter Tarvi Martens from operator AS Sertifitseerimiskeskus (SK) claimed that the number of applications was in the thousands. How is this possible? By the further simple expedient of not having any applications on the card, and asking the rest of the country to supply the applications. In other words, Estonia issued a zero-application smart card, and banks can use the basic tools as well as your local public transport system.

    Anybody who's worked on these platforms will get what I am saying - all the world spent the last decade talking about multi-application cards. This was a seller's dream, but to those with a little structural and markets knowledge this was never going to fly. But even us multi-blah-blah skeptics didn't make the jump to hypersmartspace by realising that the only way to roll out massive numbers of applications is to go for zero-application smart cards.

    A few lucky architectural picks written up in the inevitable whitepaper will not however make your rollout succeed, as we've learnt from the last 10 years of financial cryptography. Why else did this work for the Estonians, and why not for anyone else?

    Gareth Crossman, presenter from Liberty, identified one huge factor: the Napoleonic Code countries have a legal and institutional basis that protects the privacy of the data. They've had this basis since the days of Napoleon, when that enlightened ruler conquered much of mainland Europe and imposed a new and consistent legal system.

    Indeed, Tarvi Martens confirmed this: you can use your card to access your data and to track who else has accessed your data! Can we imagine any Anglo government suggesting that as a requirement?

    Further, each of the countries that has had success (Sweden was also presented) in national smart card rollouts has a national registry of citizens. Already. These registries mean that the hard work of "enrollment" is already done, and the user education issues shrink to a change in form factor from paper to plastic and silicon.

    Finally, it has to be said that Estonia is small. So small that things can be got done fairly easily.

    These factors don't exist across the pond in the USA, nor across the puddle in the UK. Or indeed any of the Anglo world based on English Common Law. The right to call yourself John Smith today and Bill Jones tomorrow is long established in common law, and it derives from regular and pervasive interference in basic rights by governments and others. Likewise, much as they might advertise to the contrary, there are no national databases in these countries that can guarantee to have one and only one record for every single citizen.

    The lesson is clear; don't look across the fence, or the water, and think the grass is greener. In reality, the colour you are seeing is a completely different ecosystem.

    Posted by iang at 09:35 AM | Comments (7) | TrackBack

    July 01, 2004

    Electronic Money is Traceable Money

    Curious how history turns out. Back in 1990 or so, David Chaum launched a revolution with his invention of untraceable digital cash tokens. For much of the 90s, there was a pervasive expectation that electronic money would be untraceable. Yet, it didn't turn out that way. Almost all systems were traceable. Even DigiCash's offerings were only loosely private. And now, a fairly tough analysis of the Baghdad Greenbacks mini-scandal has this engaging quote:

    A former consultant to the Senate Banking Committee opined after the Kelly hearings: "It is a monumental act of stupidity for Congress and the Treasury to allow the Fed to ship large amounts of physical currency around the world and then act shocked when the customers turn out to be drug dealers and money launderers. The U.S. government should promote the electronic use of dollars as a means of exchange, but not the use of physical currency [1]."


    Electronic Money is Traceable Money. So we rely on the Issuer to be careful with the data, by one means or another. In order to evaluate the privacy of the data, evaluate the policies and weaknesses of the Issuer. You are going to have to do that anyway, to evaluate the safety of the Issuer, and it adds maybe 5% to the workload to check out their privacy practices.

    I am reminded of my informal research into the heavy narcotics world. Being dragged to ever more frequent social gatherings in the "tolerant and open" Amsterdam party scene, I found myself a fish out of water. So I constructed a little plan to gain some benefit from the experiences on offer: I mixed financial cryptography with the available experience in movement of product, and verbally trialled the new untraceable digital cash products on the street dealers.

    The results were a surprise, even to me. Your average hard-core dealer would not use a digital cash product that was touted as being a privacy product. Reason? Because if it was touted as a privacy money, then that was probably a lie.

    That is, if it is claimed to be untraceable, or anonymous, it probably isn't. Whereas, if there is no such claim, at least you can examine the system and draw what benefits there are plainly on view.

    The fears of these dealers seemed to me to be overly paranoid at the time (and recall, the mid 90s was a time when everyone was paranoid!). But they have been confirmed empirically. At least two high profile money systems that claimed to be private were lying (these were the cases I *know* about). Literally, they were misleading the public and their own people by keeping secret their transactional tracing capabilities.

    Why was this? In both cases, the original aim was safety of the float. And, this is a good reason - hard analysis of an untraceable money system such as blinded token money (sometimes wrongly called anonymous money) leads one to the overwhelming conclusion that it will fall to the bank robbery problem. But, in the 90s, everyone believed that these money systems should be private, so that was what they claimed.

    Now, I guess, if people in Congress believe that electronic money is traceable money, then we can get over the myth of untraceable digital cash and get back to serious architecture. It was a nice myth while it lasted, but ultimately, a huge distraction, and a waste of the resource of countless programmers who believed it could be done.

    [1] Is Alan Greenspan a Money Launderer?
    http://www.insightmag.com/news/2004/06/11/National/Investigative.Reportis.Alan.Greenspan.A.Money.Launderer-690804.shtml

    Posted by iang at 03:06 AM | Comments (1) | TrackBack

    May 07, 2004

    "How is a capability different to an object?"

    A discussion on cap-talk on the definition of capabilities seems to have erupted... Here is one capability fan's interpretation of what a capability is, and how it relates to objects, in the sense of Java and other OO languages.

    -------- Original Message --------
    Subject: "How is a capability different to an object?"
    Date: 6 May 2004 20:45:04
    From: zooko@zooko.com
    To: iang@systemics.com

    You asked me that after MarkM's talk at FC'99, and I didn't know.

    Nowadays, I would say this:

    Think of a graph with circles connected by arrows. (I really like thinking in these terms. If you don't like thinking in terms of graphs, this probably isn't the best explanation for you.)

    Now, let any "thing" in the system under consideration, whether that thing be a person, computer, process, chunk of data, computational object, etc. be one of those circles.

    Now say that there are only three ways that one circle can get an arrow pointing to another circle:

    1. The first circle creates the second circle (e.g., a process spawns another process, a Java object constructs another Java object, etc.). In this case, the second circle begins life with only a single incoming arrow, coming from the first circle.

    2. There was a third circle who already had an arrow pointing to the first circle and an arrow pointing to the second circle. This third circle gives to the first circle an arrow which points to the second circle. This is the event captured in the famous Granovetter Diagram [1].

    3. A link between two objects comes in from outside of this world. Please ignore this case for now, and revisit it after you understand what capabilities are.

    Okay, now suppose you want to do some access control. You're writing a program, or a policy, or something that wants to specify who can touch what. To be concrete, let's say that you want to specify whether Alice can or cannot read a certain file. If you were never going to change your mind, and if you were not going to allow other people to make their own access control decisions while interoperating with yours, then this would be easy -- just write down "Alice, File, Yes", or "Alice, File, No". That is the basis of the Access Control List approach to access control.

    The Object Capabilities approach to access control is to draw a graph with a circle labelled "Alice" and an arrow pointing to a circle labelled "File". Or leave the arrow off if you don't want to grant Alice that access.

    Okay, now where are we? Well, the three rules (ignoring the 3rd) above tell you how the access control state can evolve over time. The basic ACL approach that we sketched above doesn't include this notion of evolving over time, so assuming that your access control decisions evolve over time, we would have to add it.

    [End of General Definition]

    Okay, now I wrote this in as general a manner as I could because I know that your interests include things outside of a specific thing like "this one virtual machine running on my computer". However, to make the notion of capabilities concrete, suppose you have a Java Virtual Machine, and the circles are Java objects and the arrows are Java references. Now suppose one of the objects is under the control of Alice and can be used by her to read files. Another object represents a file.

    There, now you are using capabilities for access control in that JVM.

    I'll stop for now!

    --Zooko

    [1] "Alice captured in the instant of giving Bob an arrow to Carol." Making this image required high speed photography by professional National Geographic photographers.

    Posted by iang at 07:03 PM | Comments (3) | TrackBack

    May 02, 2004

    Definition of Capabilities

    There are several models of rights out there - nyms, capabilities, bearer, account. One observation that has been made by Jeroen van Gelderen is that the nym model (especially, SOX) is a case of capabilities. What that means, beyond the superficial, has always been up in the air. The rough presumption was that SOX is a subset, or an implementation, of capabilities. Or, that SOX is capabilities hard-coded, whereas E, by contrast, is capabilities in the language.

    The capabilities people (them) and the nym people (us) haven't really seen eye to eye on the lucidity of each other's documentation, so distance remained. Now, Jed Donnelley has broken ranks and cast his view of a definition of an Internet capability model.

    With such a definition in hand, it's now possible to compare SOX, and any other nymous system, against the capabilities model. Best case, we'll show the original observation was right, and we can get on with the life of us and them. Worst case, we'll show it as being wrong, and we'll be forced to write our own definition.

    That, I'll defer. For now, here's Jed's definition:

    -------- Original Message --------
    Subject: [cap-talk] Re: "capabilities" as data vs. as descriptors - OS security discussion, restricted access processes, etc.
    Date: Thu, 29 Apr 2004
    From: Jed Donnelley <jed@nersc.gov>
    To: cap-talk@mail.eros-os.org

    [big snip]

    1. Definition of what you might call an Internet capability model. This
    could be something along the lines of:

    http://www.webstart.com/jed/papers/Managing-Domains/#s13

    though I think modern encryption technology would suggest a
    rework. The basic idea would be to define a protocol for sending
    blocks of bits that:

    a. Can securely represent the right to do anything that a service
    (server) process might choose to make available.

    b. Can be communicated securely - hopefully without contacting
    the service process except of course when it is the source or
    destination of the rights communication directly.

    c. Is safe from eavesdropping. That is, the form that the capability takes
    when it's in, say, a processes memory space or in an email message,
    cannot be used by any entity other than the owner of the memory
    space (a process) or the email (presumably a person).

    d. Extra points for including a rights reduction mechanism that doesn't
    require permission from the server.

    [another big snip]

    Can we agree on that much?

    --Jed http://www.nersc.gov/~jed/

    _______________________________________________
    cap-talk mailing list
    cap-talk@mail.eros-os.org
    http://www.eros-os.org/mailman/listinfo/cap-talk
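
    Jed's outline is a protocol sketch, not code, but the flavour of points (a), (b) and (d) can be shown with a chained-HMAC token in Python - my own illustration, not Jed's design, and with every name invented. Point (c) would still need an encrypted channel around the block of bits.

        import hashlib
        import hmac

        SERVER_SECRET = b"known only to the service process"   # illustrative

        def mint(rights: str) -> bytes:
            # (a) A block of bits that securely represents a right the server offers.
            return hmac.new(SERVER_SECRET, rights.encode(), hashlib.sha256).digest()

        def attenuate(cap: bytes, restriction: str) -> bytes:
            # (d) Rights reduction done entirely by the holder, no server contact.
            return hmac.new(cap, restriction.encode(), hashlib.sha256).digest()

        def verify(cap: bytes, rights: str, restrictions: list) -> bool:
            # (b) Only the server, knowing its secret, can recompute and accept.
            expected = mint(rights)
            for r in restrictions:
                expected = attenuate(expected, r)
            return hmac.compare_digest(cap, expected)

        full = mint("read+write:/reports/2004")
        read_only = attenuate(full, "read-only")              # handed on to a friend
        print(verify(read_only, "read+write:/reports/2004", ["read-only"]))  # True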

    Posted by iang at 02:21 PM | Comments (1) | TrackBack

    March 19, 2004

    "Micropayments for Peer-to-Peer Systems"

    "Emerging economic P2P applications share the common need for an efficient, secure payment mechanism. In this paper, we present PPay, a micropayment system that exploits unique characteristics of P2P systems to maximize efficiency while maintaining security properties. We show how the basic PPay protocol far out performs existing micropayment schemes, while guaranteeing that all coin fraud is detectable, traceable and unprofitable. We also present and analyze several extensions to PPay that further improve efficiency."

    I wouldn't normally bother with a micropayments scheme, as the economics are generally woeful. This design would be in the same basket of microeggs ("even if they hatch, they ain't worth counting"), as it has a fairly impractical hack in it to achieve scalability where other micropayment schemes allegedly fall short (!!).

    Ricardo does much better, as it scales down to all known applications, as well as up. But there is one aspect in there that Ricardo aficionados will recognise, and, dare I say it, drool over, so I'm adding it FTR ("for the record").

    Posted by iang at 07:02 AM | Comments (2) | TrackBack

    December 05, 2003

    The First IEEE International Workshop on Electronic Contracting (WEC)

    http://tab.computer.org/tfec/cec04/cfpWEC.html

    Real world commerce is largely built on a fabric of contracts. Considered abstractly, a contract is an agreed framework of rules used by separately interested parties to coordinate their plans in order to realize cooperative opportunities, while simultaneously limiting their risk from each other's misbehavior. Electronic commerce is encouraging the growth of contract-like mechanisms whose terms are partially machine understandable and enforceable.

    The First IEEE International Workshop on Electronic Contracting (WEC) is the forum to discuss innovative ideas at the interface between business, legal, and formal notions of contracts. The target audiences will be researchers, scientists, software architects, contract lawyers, economists, and industry professionals who need to be acquainted with the state of the art technologies and the future trends in electronic contracting. The event will take place in San Diego, California, USA on July 6 2004. IEEE SIEC 2004 will be held in conjunction with The International Conference on Electronic Commerce (IEEE CEC 2004).

    Topics of interest include but are not limited to the following:

    * Contract languages and user interfaces
    * Computer aided contract design, construction, and composition
    * Computer aided approaches to contract negotiation
    * "Smart Contracts"
    * "Ricardian Contracts"
    * Electronic rights languages
    * Electronic rights management and transfer
    * Contracts and derived rights
    * Relationship of electronic and legal enforcement mechanisms
    * Electronic vs legal concepts of non-repudiation
    * The interface between automatable terms and human judgement
    * Kinds of recourse, including deterrence and rollback
    * Monitoring compliance
    * What is and is not electronically enforceable?
    * Trans-jurisdictional commerce & contracting
    * Shared dynamic ontologies for use in contracts
    * Dynamic authorization
    * Decentralized access control
    * Security and dynamism in Supply Chain Management
    * Extending "Types as Contracts" to mutual suspicion
    * Contracts as trusted intermediaries
    * Anonymous and pseudonymous contracting
    * Privacy vs reputation and recourse
    * Instant settlement and counter-party risk

    Submissions and Important Dates:

    Full papers must not exceed 20 pages printed using at least 11-point type and single spacing. All papers should be in Adobe portable document format (PDF) format. The paper should have a cover page, which includes a 200-word abstract, a list of keywords, and author's e-mail address on a separate page. Authors should submit a full paper via electronic submission to boualem@cse.unsw.edu.au. All papers selected for this conference are peer-reviewed. The best papers presented in the conference will be selected for special issues of a related computer science journal.

    * Submissions must be received no later than January 10, 2004.
    * Authors will be notified of their submission's status by March 2, 2004
    * Camera-Ready versions must be received by April 2, 2004

    Posted by graeme at 03:58 AM | Comments (0) | TrackBack

    November 04, 2003

    Infinite Bandwidth

    What can you do if everyone has scads of bandwidth?

    http://www.cnn.com/2003/TECH/internet/10/15/internet.speed.reut/index.html

    I've been thinking about this a bit, and decided that there's only one thing you can do. Video.

    Obviously, streaming of movies, because that's what the Hollywood guys have been told will save their bacon.

    But, also, calls home to Mum. Watching little daughter when she's at preschool. Video tutoring, Virtual holidays. One-on-one private therapy sessions. Alarm systems, etc etc. Teenagers being teenagers with each other because they like to compete with daddy's bandwidth bill.

    Lots of apps.

    Now, to get scads of video means that there really needs to be some way to pay for it. There will never be enough at a price of zero. So some feedback loop is required, otherwise I'll just leave the camera on mama's plant growth experiment while I go on vacations.

    Metered access is very hard though. ISPs can't do it because they can only do their local bit. Telcos can't do it because they bicker too much and charge too much and they never deliver more than a pile of pretty brochures about what you could do if... Plus, they keep going broke.

    So some sort of way to market-acquire the bandwidth is required, which means having a money to pay for it, and having an asset to acquire.

    Going back to the first application, movies on demand, we can add that there is the fairly regular DRM requirement. That is, you want to show it, not lose it. This means the whole stream needs to be encrypted, and the device that decrypts is close to the monitor. In a nutshell.

    If we assume that's all true - DRM is not being debated today, thanks very much! - we can now see that if we are feeding video in gratuitous amounts into people's homes, we need a box.

    A box that handles the DRM, and buys the pipe to the other end. A vonage-with-a-video-camera. Or, an Internet Nutshell, because it has everything you need.

    Now, institutional-wise, this is all fairly obvious. You can bet your bottom dollar that Microsoft are building one right now. In fact they probably have three nutshells already and have bought out another couple of companies growing them.

    But the problem they will face is that everyone is scared of them.

    So, other people will build. And try and not be as scary as Microsoft, which is pretty easy, it seems. These new people are more likely to succeed because they can negotiate with Hollywood without having to hire actors to play out the script. And, like it or not, the seller of nutshells probably needs Hollywood as much as Hollywood needs him.

    The problem they will have is that they will need a payments solution.

    For various reasons they don't want to go with credit cards. Banks can't hack it either, just like telcos.

    Which leaves them a choice: pick a payment system that's out there, or build their own. Now, doing their own is technically easy (for the likes of us here, experienced payment system builders, at least), but it does have the disadvantage that it makes the company that is bringing the box to the market somewhat stretched.

    Hollywood will want an independent money, so that they can audit the flows. Auditing of film revenues is one of the scarier things on the other side of the celluloid. Plenty of games go on there, and it would make a nice script for a movie, except that they don't want to encourage copycats.

    (Did you ever see a movie where the producer got stabbed, and the director was put through the woodchipper? They just don't make those sorts of movies for some reason... OTOH, in films about films, the accountant always comes to a sticky end.

    Why is this?)

    Imagine how hard it gets when the company selling the film to the consumer is also the company running the payment, and it's all encrypted and protected behind this blah blah handwaving black box thingie?

    It won't be scary enough to send Hollywood running off to Bill Gates, but it might send them running out the door.

    So we need a payment system, independent of the nutshell.

    That immediately means several things. Firstly, it has to be many payment systems. That's because one is never enough for the consumer, and no big rollout of kit is ever going to rely on one little company. This is a group that is investing hundreds of millions on a huge risky play. The payment system must not be strategically risky.

    Which means we need multiple payment systems.

    Which means we need one standard to talk to them all.

    Which means something like XML-X!

    (Yeah, you knew I was going to say XML-X, didn't you :-)

    More than that, it needs to be solid. Those boxes need to work for up to five years with minimal patches. So you need the full kit. The full development cycle, the real McCoy when it comes to payments. Standards and testing and conformity.

    Now, this story is going to be repeated time and time again to our noble Issuers. They will walk into a big hot sale. And they'll walk out again knowing they can't make the credibility grade.

    It's for that reason that a common payment standard is needed. So you can go in and say "We will support you all the way to the vault on your project, but if the impossible happens, all your systems can switch across to our preferred backup payments processor..."
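
    To show the shape of that argument in code - a sketch only, not the XML-X schema, and with every class name invented for illustration - the box talks to one payment interface, and the Issuer behind it can be switched for the backup processor without touching the rest of the stack.

        from abc import ABC, abstractmethod

        class PaymentProcessor(ABC):
            """The one standard the nutshell box speaks."""
            @abstractmethod
            def pay(self, payee: str, amount: int, currency: str) -> str:
                """Make a payment and return a receipt identifier."""

        class PrimaryIssuer(PaymentProcessor):
            def pay(self, payee: str, amount: int, currency: str) -> str:
                return f"primary:{payee}:{amount}:{currency}"

        class BackupIssuer(PaymentProcessor):
            def pay(self, payee: str, amount: int, currency: str) -> str:
                return f"backup:{payee}:{amount}:{currency}"

        class NutshellBox:
            def __init__(self, processor: PaymentProcessor) -> None:
                self.processor = processor          # chosen at rollout

            def buy_stream(self, studio: str, price: int) -> str:
                return self.processor.pay(studio, price, "USD")

        box = NutshellBox(PrimaryIssuer())
        print(box.buy_stream("studio-x", 5))
        box.processor = BackupIssuer()              # the promised switch-over
        print(box.buy_stream("studio-x", 5))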

    Us lot over here on the XML-X team have done our bit, by writing a standard and letting people use it. We even wrote a couple of language bindings for it - the ones we needed.

    Pecunix picked it up and used it, and so far it seems to have worked for them.

    It's really up to the Issuers if they want to drive this forward, and if they don't, personally, I don't have an issue with that, because there's more:

    Think a bit more about the real layout of this new society where everyone has a vonage-with-a-video-camera. This nutshell thing.

    Your teenage daughter is doing her chat thing with some anonymous punk on the west coast, and decides to get a little bit bare with it all.

    Or, hubby is bored with you, and he needs some ... lift in his life. Or, you are conducting your new revolution against political tyranny by daring to use free speech in a rich democratic country, an increasingly popular crime. Or planning the next Greenpeace mission against the USS Enterprise.

    Or, you're just using the system to check your stocks.

    What you want is privacy. Which means crypto.

    Actually, it means nymous crypto, but I'll not explain why in these brief words (the payments could be bearer or nymous or whatever).

    So, the really obvious way to do this box is to pack the video camera, the display board, the DRM decrypt chip and the fat fibre pipe connection into the box, with the whole nymous infrastructure enabled.

    It's not that clear to me how this would all pan out, as I don't know anyone who's in the biz of growing nutshells, and big companies that have already written their biz plan tend to be rather convinced that they know what's what.

    But it is clear that what is needed in the payments field is a diversity approach.

    Either the Issuers can create this commonality-within-diversity environment, with a common standard for the merchant users of their payment systems, or they will live a fractured, hard and mean life, picking up each and every merchant one by one, at excruciating cost to each merchant.

    And they'll miss out on the big future showing on a big screen in a big living room near them, tomorrow.

    iang

    PS: I'm also personally somewhat agnostic as to whether XML-X is used. We wrote it partly to help, and partly to solve a real inhouse problem. That problem is solved. So we got our money's worth out of it. Thanks for all the fish, guys :-)

    Posted by iang at 09:05 PM | Comments (0) | TrackBack