Steven J. Murdoch and Ross Anderson have released a paper entitled "Security Protocols and Evidence: Where Many Payment Systems Fail," to be presented in a few weeks at the Financial Cryptography conference in Barbados. It is very welcome to see people pointed in the direction of dispute resolution, because it is indeed a make-or-break area for payment systems.
The paper itself is a light read, with some discussion of failures, and some suggestions of what to do about it. Where it gets interesting is that the paper tries to espouse some Principles, a technique I often use to get my thoughts in order. Let's look at them:
Principle 1: Retention and disclosure. Protocols designed for evidence should allow all protocol data and the keys needed to authenticate them to be publicly disclosed, together with full documentation and a chain of custody.
Principle 2: Test and debug evidential functionality. When a protocol is designed for use in evidence, the designers should also specify, test and debug the procedures to be followed by police officers, defence lawyers and expert witnesses.
Principle 3: Open description of TCB. Systems designed to produce evidence must have an open specification, including a concept of operations, a threat model, a security policy, a reference implementation and protection profiles for the evaluation of other implementations.
Principle 4: Failure-evidentness. Transaction systems designed to produce evidence must be failure-evident. Thus they must not be designed so that any defeat of the system entails the defeat of the evidence mechanism.
Principle 5: Governance of forensic procedures. The forensic procedures for investigating disputed payments must be repeatable and be reviewed regularly by independent experts appointed by the regulator. They must have access to all security breach notifications and vulnerability disclosures.
I have done these things in the past, in varying degrees and fashions, so they are pointing in the right direction, but I feel that, /as principles/, they fall short.
Let's work through them. With P1, public disclosure immediately raises an issue. This is similar to the Bitcoin mentality that the blockchain should be public, something which has become so tantalising that regulators are even thinking about mandating it.
But we live in a world of crooks. Does this mean that a new attack is now about to become popular -- using the courts to force the publication of one's victim's secrets? The reason for financial privacy is to stop scumbags knowing where the loot is, and that is a good reason. As we enter a world that is more transparent to crooks, because of such innovations as Internet data tracking, economic intelligence harvesting, drugs-ML, AML, sharing of seized value by government agencies, monolithic banks incentivised to cross-sell and compete, etc, the need for financial privacy goes up, not down.
If you look at M&A's paper, the frustration in the courts that they faced was that the banks argued they couldn't disclose the secrets. Yet, courts readily deal with this already. Lawyers know how to keep secrets, it's their job. So we're really facing a different problem, which is that the banks snowed the judge with bluff and bluster, and the judge didn't blink. As Stephen Mason writes in "Debit Cards, ATMs and the negligence of the bank and customer," in Butterworths Journal of International Banking and Financial Law, March 2012, also Appendix 10, When Bank Systems Fail:
"The only reason the weaknesses have been revealed in some instances, as discussed in this article, is because the banks were required to cooperate with the investigating authorities and explain and provide evidence of such weaknesses before the criminal courts. In civil actions, the banks have no incentive to reveal such weaknesses. The banks will deny that their systems suffer from any weaknesses, placing the blame squarely on the customer."
The real problem here is that banks do not want to provide the evidence; for them, suppression of the evidence is part of their business process, a feature not a bug. Hence, Principle 1 above is not sufficient, and it could be written more simply:
P1. Payment protocols should be designed for evidence.
which rules out the banks' claims. But even that doesn't quite work. Now, I'm unsure how to make this point in words, so I'll simply slam it out:
P1. Payment protocols should be designed to support dispute resolution.
This is a more subtle, yet more comprehensive, principle. To a casual outside observer it might appear the same, because people typically see dispute resolution as the presentation of evidence, and our inner techie sees our role as the creation of that evidence.
But dispute resolution is far more important than that. How are you filing a dispute? Who is the judge? Where are you, and what is your law? Who holds the burden of proof? What is the boundary between testimony and digital evidence? In the forum you have chosen, what are the rules of procedure? How do they affect your case?
These are seriously messy questions. Take the recent British case of Shojibur Rahman v Barclays Bank PLC, as reported (judgement, appeal) in Digital Evidence and Electronic Signature Law Review, 10 (2013). In this case, a fraudster apparently tricked the victim into handing over a card and also the PIN. This encouraged Barclays to claim no liability for the frauds that followed.
Notwithstanding this claim, the bank is still required to show that it authenticated the transactions. In each of the two major transactions conducted by the fraudster, the bank failed to show that it had authenticated the transaction correctly. In the first, Barclays presented no evidence one way or the other, and the card was not used for that transaction, so the bank simply failed to meet its burden of proof, as well as its own standards of authentication, as it was undisputed that the fraudster initiated the transaction. In the second, secret questions were asked by the bank because the transaction was suitably huge, and wrong answers were accepted.
Yet, in district court and on appeal the judges held that because the victim had failed in his obligation to keep the card secure, defendant Barclays was relieved of its duty to authenticate the transactions. This is an outstanding blunder of justice -- if the victim makes even one mistake then the banks can rest easy.
Knowing that the banks can refuse to provide evidence, knowing that the systems are so complex that mistakes are inevitable, knowing that the fraudsters conduct sophisticated and elegant social attacks, and knowing that the banks prepared the systems in the first place, this leaves the banks in a pretty position. They are obviously encouraged to hold back from supporting their customer as much as possible.
What is really happening here is a species of deception, and/or fraud, sometimes known as liability shifting or dumping. The banks are actually making a play to control and corral the dispute resolution into the worst place possible for you, and the best place for them -- their local courts. Meanwhile, they are telling you the innocent victim, that they've got it all under control, and your rights are protected.
In terms of P1 above, they are actually designing their system to make dispute resolution tilted in their favour, not yours. They should not.
Then, let's take Principle 2, testing the evidence functionality. The problem with this is that, in software production, testing is always the little lost runt of the litter. Everyone says they will look after her, and promise to do their best, but when it matters, she's just the little squealing nuisance underfoot. Testing always gets left behind, locked in the back room with the aunt that nobody wants to speak to.
But we can take a more systemic view. What we financial cryptographers do in this situation is flip it around. Instead of repeating the marketing blather of promises of more testing, we make the test part of the protocol. In other words, the only useful test is one that is done automatically as part of the normal routine.
P2. Evidence is part of the protocol.
You can see this with backups. Most backup problems occur because the backups were never actually tested at the time they were created. So a good backup routine opens up its product and compares it back to what was saved. That is, the test is part of the cycle.
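To pin that down, here is a minimal sketch in Python of what such a self-testing backup cycle might look like; the file layout and helper names are mine and purely illustrative, not anyone's production system.

    import hashlib
    import shutil
    import tempfile
    from pathlib import Path

    def digest(path: Path) -> str:
        # SHA-256 of the file's contents
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def backup_and_verify(source: Path, archive_dir: Path) -> Path:
        # Copy the source into the archive, then prove the copy by restoring
        # it to a scratch location and comparing digests. The test is not an
        # optional afterthought -- it runs every time the backup runs.
        archive = archive_dir / source.name
        shutil.copy2(source, archive)
        with tempfile.TemporaryDirectory() as scratch:
            restored = Path(scratch) / source.name
            shutil.copy2(archive, restored)
            if digest(restored) != digest(source):
                raise RuntimeError(f"backup of {source} failed verification")
        return archive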
But we can go further. When we start presenting this evidence to the fraternity of dispute resolution we immediately run into another problem highlighted by the above words: "the designers should also specify, test and debug the procedures to be followed by police officers, defence lawyers and expert witnesses."
M&A were aware of cases such as the one discussed above, and seek to make the evidence stronger. But, the flaw in their proposal is that the process so promoted is *expensive* and it therefore falls into the trap of raising the costs of dispute resolution. Which make them commensurately less effective and less available, which breaches P1.
And to segue, Principle 3 above also fails the same economic test. If you do provide all that good open TCB stuff, you now need to pull in expert witnesses to attest to the model. And one thing we've learnt over the years is that TCBs are fertile ground for two opposing expert witnesses to disagree entirely, both be right, and both be exceedingly expensive. As before, this approach increases the cost, and therefore reduces the availability of dispute resolution, and thus breaches P1. And it should be noted that a developing popular theme is that standards and TCBs and audits and other costly, complicated solutions are used as much to clobber the user as they are to achieve some protection. The TCB is always prepared in advance by the bank, so no prizes for guessing where that goes; the institution-designed TCB is antithetical to the principle of protecting the user, so it can have no place in these principles.
Now, combining these points, it should be clear that we want to get the costs down. I can now introduce a third principle:
P3: The evidence is self-evident.
That is, the evidence must be self-proving, and it must be easily self-proving to the judge, who is no technical wizard. This standard is met if the judge can look at it and know instantly what it is, and, likewise, so can a jury. This also covers Principle 5. For an example of P3, look at the Ricardian Contract, which has met this test before judges.
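As a toy illustration of self-evident evidence -- and it is only an illustration of the idea, not the actual Ricardian Contract format -- consider evidence that *is* the human-readable text the judge reads, bound to a signature anyone can check mechanically. The sketch below assumes Python and the third-party 'cryptography' package.

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    CONTRACT = """\
    I, the Issuer, promise to redeem one unit of this instrument
    for one troy ounce of silver on presentation at my office.
    """

    issuer_key = Ed25519PrivateKey.generate()

    # The document's identifier is simply the hash of the readable text,
    # so what the judge reads is exactly what was signed.
    contract_id = hashlib.sha256(CONTRACT.encode()).hexdigest()
    signature = issuer_key.sign(CONTRACT.encode())

    # Anyone holding the issuer's public key can check the binding without
    # an expert: verify() raises an exception if the text was tampered with.
    issuer_key.public_key().verify(signature, CONTRACT.encode())
    print("contract", contract_id[:16], "verifies")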
Principle 4 is likewise problematic. It assumes so much! Being able to evidence a fraud but not stop it is a sort of two-edged sword. Indeed, it assumes so much understanding of how the system is attacked that we can also say that if we know that much about the fraud, we should be able to eliminate it anyway. Why bother to be evidence-protected when we can stop it?
So I would prefer something like:
P4: The system is designed to reduce the attack surface areas, and where an attack cannot be eliminated, it should be addressed with a strong evidence trail.
In other words, let's put the horse before the cart.
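For the evidence-trail half of that, one hypothetical shape -- my own sketch, not anything proposed in the paper -- is an append-only log in which every record commits to the one before it, so that a record cannot later be altered or dropped without the break being evident to both parties.

    import hashlib
    import json
    import time

    def entry_hash(entry):
        # Canonical hash of a log entry
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

    class EvidenceLog:
        def __init__(self):
            self.entries = []

        def append(self, record):
            # Each new entry carries the hash of its predecessor
            prev = entry_hash(self.entries[-1]) if self.entries else "genesis"
            entry = {"ts": time.time(), "prev": prev, "record": record}
            self.entries.append(entry)
            return entry

        def verify(self):
            # Recompute the chain; any edit to an earlier entry breaks it
            prev = "genesis"
            for entry in self.entries:
                if entry["prev"] != prev:
                    return False
                prev = entry_hash(entry)
            return True

    log = EvidenceLog()
    log.append({"txn": 1, "amount": "10.00", "auth": "PIN verified"})
    log.append({"txn": 2, "amount": "25.00", "auth": "secret questions"})
    assert log.verify()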
Finally, another point I would like to bring out, which might now be evident from the foregoing, is this:
P5: The system should be designed to reduce the costs to both parties, including the costs and benefits of dispute resolution.
It's a principle because that is precisely what the banks are not doing; without taking this attitude, they will also go on to breach P1. As correctly pointed out in the paper, banks fight these cases for their own profit motive, not out of concern for their customers' costs. Regulation is not the answer, as raising the regulatory barriers plays into their hands and allows them to raise prices, but we are well out of scope here, so I'll drift no further into competition. As an example of how this has been done, see this comparison between the systems designed by CAcert and by Second Life. And Steve Bellovin's "Why the US Doesn't have Chip-and-PIN Credit Cards Yet" might be seen as a case study of P5.
In conclusion, it is very encouraging that the good work that has been done in dispute resolution for payment systems now has a chance of being recognised.
But it might be too early for the principles as outlined, and as can be seen above, my efforts scratched out over a day are somewhat different. What is going to be interesting is to see how the Bitcoin space evolves to deal with the question, as it has already mounted some notable experiments in dispute resolution, such as Silk Road. Good timing for the paper then, and I look forward to reports of lively debate at FC in Barbados.
Posted by iang at February 18, 2014 05:41 AM | TrackBack

Thanks for your comments on the paper. We will certainly be taking these and others into account when producing the final version.
There's a lot to take on in your discussion, but I'm particularly interested in your P3. Do you think this is an achievable goal for payments protocols? With your example of the Ricardian contract, the human-readable part could be, but what about questions as to whether the digital signature was correct, and questions as to what the presence of a digital signature means (e.g. when was the document signed; does it imply the user saw the contract/agreed to the contract/had an opportunity to amend the contract)?
At the moment, disputes which make it to court are expensive because experts do need to get involved to answer such questions. I would however argue that this is because of a failure to follow our principles 2 and 5. There are no procedures so experts have to be creative and deal with building consensus with the opposing side on a case-by-case basis.
If there were widely agreed procedures to follow when resolving disputes, it would be much quicker and cheaper to resolve disputes, possibly avoiding the need for an expert to get involved at all if some of the steps can be automated.
Posted by: Steven Murdoch at February 19, 2014 05:50 PM

Interesting and relevant thoughts.
The world clearly needs to have complete trust in the basic infrastructure that we all use - including communication networks and payment gateways.
But let's make no mistake: for this to happen we have to rely on something stronger than today's standards (network protocols, encryption algorithms, random number generators), or use a provably-safe layer to wrap these provably-unsafe standards.
So far, the only purpose of those standards has been to comfort the masses in a false sense of security to better abuse them - the point you are trying to address.
The only way to gain legitimacy in this process is to start using systems that can be proven unbreakable, and to stop using systems whose vulnerability is a by-design feature.
Posted by: Ern at February 21, 2014 04:36 AM