At the Fed's Payment Systems conference this October, Roger W. Ferguson provided this summary to save us the trip.
Nothing much on security, but these remarks caught my attention:
"Information integration and standards. A key point made at the conference was that large businesses, in particular, want payments system providers to understand that information about transactions (such as invoice numbers and shipping information) is critical to their use of electronic payments, for both domestic and global commerce. Many speakers identified a need for common standards to enable significant operational improvements in integrating payment and related transaction information in order to enable greater straight-through processing of electronic payments and automated reconciliation procedures."
What does this mean, in detail? Here's what I can guess at:
We have always provided an open memo field in Ricardo transactions, which can be used for any purpose, and we duplicated that in XML-X. Open XML is a possibility. And it's within the realm of possibility to add order numbers and other identifiers, either into the Memo or into the actual packets.
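As a sketch of what that might look like, here's a minimal way to pack reconciliation identifiers into an open memo field as XML, so the payee's accounting system can match the payment to its invoice automatically. The element and attribute names here (remittance, invoice, shipment) are my own invention, not any actual Ricardo or XML-X schema:

```python
# Hypothetical sketch: carrying invoice/order identifiers in a payment's
# open memo field as a small XML fragment. Names are illustrative only.
import xml.etree.ElementTree as ET

def build_memo(invoice_no, order_no, tracking_no=None):
    """Return an XML string carrying reconciliation identifiers."""
    root = ET.Element("remittance")
    ET.SubElement(root, "invoice").text = invoice_no
    ET.SubElement(root, "order").text = order_no
    if tracking_no:
        ET.SubElement(root, "shipment", tracking=tracking_no)
    return ET.tostring(root, encoding="unicode")

def parse_memo(memo):
    """Recover the identifiers for automated reconciliation."""
    root = ET.fromstring(memo)
    return {child.tag: (child.text or child.get("tracking"))
            for child in root}

memo = build_memo("INV-1041", "PO-778", "1Z999AA1")
print(parse_memo(memo))  # {'invoice': 'INV-1041', 'order': 'PO-778', 'shipment': '1Z999AA1'}
```

The point is less the encoding than the agreement: straight-through processing only works if payer and payee share one schema for these fields.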
Question is, what could be done here to make a difference? Any clues?
"A particularly important topic is how electronic payments systems can better meet the needs of business users. Business people frequently report that, from their perspective, a payment is only one part of an overall transaction or relationship with a counterparty. Other parts include orders, confirmations, shipping documents, invoices, and a variety of accounting and other information that supports a transaction or relationship. The complexity of this situation has created challenges for businesses as they integrate corporate information systems with electronic payment capabilities, and this complexity has likely slowed the adoption of electronic payments for a wide range of business purposes. I hope this conference will help underscore the need for businesses, financial institutions, technology vendors, and payments system operators to find common approaches and standards for addressing this issue."
OK, so Ferguson is worried about "orders, confirmations, shipping documents, invoices, and a variety of accounting and other information that supports a transaction or relationship..." So there's nothing wrong with the payment, it's just everything else that is wrong?
Invisiblog - anonymous weblog publishing (added: 15-Dec-2003)
invisiblog.com lets you publish a weblog using GPG and the Mixmaster anonymous remailer network. You don't ever have to reveal your identity - not even to us. You don't have to trust us, because we'll never know who you are.
File-Exchange - File dump and public key retrieval mechanism (added: 15-Dec-2003)
File-Exchange allows you to exchange files with other people without giving away your identity or harming your privacy by just using a web browser.
XCA (added: 14-Dec-2003)
This application is intended for creating and managing X.509 certificates and RSA keys (DSA keys may be supported in a later release, since they are not widely used in PKI cryptography). Everything that is needed for a CA is implemented. All CAs can sign sub-CAs recursively. These certificate chains are shown clearly in a list-view. For easy company-wide use there are customisable templates that can be used for certificate or request generation. All crypto data is stored in a local Berkeley database.
Workshop on Electronic Contracting (WEC) (added: 22-Dec-2003)
http://tab.computer.org/tfec/cec04/cfpWEC.html
Real world commerce is largely built on a fabric of contracts. Considered abstractly, a contract is an agreed framework of rules used by separately interested parties to coordinate their plans in order to realize cooperative opportunities, while simultaneously limiting their risk from each other's misbehavior. Electronic commerce is encouraging the growth of contract-like mechanisms whose terms are partially machine understandable and enforceable.
Digital Money Forum (DM7) (added: 22-Dec-2003)
http://www.consult.hyperion.co.uk/digmon7.html
The programme will cover the key aspects surrounding the implementation and use of digital money: the regulatory, technical, social, and economic.
Financial Cryptography '04 (added: 22-Dec-2003)
Money and trust in the digital world. Dedicated to the relationship between cryptography and data security and cutting-edge financial and payment technologies and trends... Emerging financial instruments and trends, legal regulation of financial technologies and privacy issues, encryption and authentication technologies, digital cash, and smartcard payment systems...
Financial Cryptography Payments Events Circuit (added: 22-Dec-2003)
http://www.financialcryptography.com/content/circuit/
A list of events in the Financial Cryptography space, including ones going back to the birth of the field.
OpenMoney (added: 22-Dec-2003)
http://www.openmoney.org/omp/brief.html
OpenMoney's Brief on how community currencies can communicate. By way of a slide presentation.
GoldMoney (added: 22-Dec-2003)
More staid, stable and regular than the others. Users tend to be "store of value" people. Holders of the US patent on fully reserved digital gold currencies. Strong governance, fully implemented 5PM (5 parties model).
Pecunix (added: 22-Dec-2003)
Small, offshore, technically adept DGC. Notable for many related businesses such as integrated securities exchange and real time gold exchange. Strong governance model, well in excess of size.
eBullion (added: 22-Dec-2003)
Has cool cryptocard security widget to stop others from stealing value.
1mdc (added: 22-Dec-2003)
Derivative gold system, reserved in e-gold rather than physical metal. 1mdc is effectively a layered DGC providing protected and private claims on another DGC. This model is representative of future developments, where one Issuer varies and improves the offerings of another.
e-gold (added: 22-Dec-2003)
The market leader, with some $20m in reserves, and 50k transactions per day (mostly tiny). The only one with decent statistics, but governance model is incomplete.
PayPal (added: 22-Dec-2003)
PayPal are the biggest splash in the FC world. Historically based on First Virtual, they are a bank/credit-card derivative dollar currency. As they charge retailers high fees, they are a retail payment system rather than a true money, but they do permit user-to-user payments (not permitting these has been the mistake of many a retail system).
May Scale (added: 23-Dec-2003)
http://www.interestingsoftware.com/mayscale.html
A simple chart showing how different monies achieve different hardnesses. The May Scale puts digital monies into a perspective for loading and retail considerations.
WIN is not WASTE (added: 29-Dec-2003)
WINW is a small worlds networking utility. It was inspired by WASTE, a P2P darknet product released by Nullsoft in May 2003 and then withdrawn a few days later. The WINW project has diverged from its original mission to create a clean-room WASTE clone. Today, the WINW feature set is different from that of WASTE, and its protocol is incompatible with WASTE's protocol. However, WINW and WASTE achieve similar goals: they allow people who trust each other to communicate securely.
Phonebook - Linux Virtual Disk (added: 15-Dec-2003)
http://www.freenet.org.nz/Wiki/PhoneBook
Phonebook is an encrypted Linux filesystem (virtual disk) with unique 'plausible deniability' and 'disinformation' features.
COIN-OR - Computational Infrastructure for Operational Research (added: 30-Dec-2003)
http://www-124.ibm.com/developerworks/opensource/coin/
The Computational Infrastructure for Operations Research (COIN-OR, or simply COIN) project is an initiative to spur the development of open-source software for the operational research community.
Computer Programs for Social Network Analysis (added: 30-Dec-2003)
http://www.sfu.ca/~insna/INSNA/soft_inf.html
Comprehensive and up to date list of SNA software.
Making reliable distributed systems in the presence of software errors (added: 12-Dec-2003)
http://www.sics.se/~joe/thesis/armstrong_thesis_2003.pdf
This thesis presents Erlang, together with a design methodology and set of libraries for building robust systems (called OTP). The central problem addressed by this thesis is the problem of constructing reliable systems from programs which may themselves contain errors. I argue that certain of the requirements necessary to build a fault-tolerant system are solved in the language, and others are solved in the standard libraries. Together these form a basis for building fault-tolerant software systems.
In a debate on the use of "non-repudiation" over on the cryptography list, Carl Ellison raises the point that only people can repudiate.
I have to agree with Carl and stress that the issue is not that the definition for "non-repudiation" is bad or whatever, but the word is simply out of place. Repudiation is an act of a human being. So is the denial of that or any other act, to take a word from one definition.
We can actually learn a lot more from the legal world here, in how they solve this dilemma. Apologies in advance, as what follows is my untrained understanding, derived from a legal case I was involved with in recent years [1]. It is an attempt to show why the use of the word "repudiation" will never help us and will always hinder us.
The (civil) courts resolve disputes. They do *not* make contracts right, or tell wrong-doers to do the right thing, as is commonly thought.
Dispute resolution by definition starts out with a dispute, of course. That dispute, for sake of argument, is generally grounded in a denial, or a repudiation.
One party - a person - repudiates a contract or a bill or a something.
So, one might think that it would be in the courts' interest to reduce the number of repudiations. Quite the reverse - the courts bend over backwards, sideways, and tie themselves in knots to permit and encourage repudiations. In general, the rule is that anyone can file *anything* into a court.
The notion of "non-repudiation" is thus anathema to the courts. From a legal point of view, we, the crypto community, will never make headway if we use this term [2]. What terms we should use, I suggest below, but to see that, we need to get the whole process of the courts in focus.
Courts encourage repudiations so as to encourage all the claims to get placed in front of the forum [3]. The full process that is then used to resolve the dispute is:
1. filing of claims, a.k.a. "pleadings",
2. presentation of evidence,
3. application of law to the evidence,
4. a reasoned ruling on 1 is delivered, based on 2 and 3.
Now, here's where cryptographers have made the mistake that has led us astray. In the mind of a cryptographer, a statement is useless if it cannot be proven beyond a shred of doubt.
The courts don't operate that way - and neither does real life. In this, it is the cryptographers that are the outsiders [4].
What the courts do is to encourage the presentation of all evidence, even the "bad" stuff. (That's what hearings are, the presentation of evidence.)
Then, the law is applied - and this means that each piece of evidence is measured and filtered and rated. It is mulled over, tested, probed, and brought into relationship with all the other pieces of evidence.
Unlike in no-risk cryptography, there isn't such a thing as bad evidence. There is, instead, strong evidence and weak evidence. There is stuff that is hard to ignore, and stuff that doesn't add much. But even the stuff that adds little is not discriminated against, at least in the early phases.
And this is where the cryptography field can help: a digital signature, prima facie, is just another piece of evidence. In the initial presentation of evidence, it is neither weak nor strong.
It is certainly not "non-repudiable." What it is is another input to be processed. The digsig is as good as all the others, first off. Later on, it might become stronger or weaker, depending.
We cryptographers can help by assisting in the process of determining the strength of the evidence. We can do this in three ways, I think:
Firstly, the emphasis should switch from the notion of non-repudiation to the strength of evidence. A digital signature is evidence - our job as crypto guys is to improve the strength of that evidence, with an eye to the economic cost of that strength, of course.
Secondly, any piece of evidence will, we know, be scrutinised by the courts, and assessed for its strength. So, we can help the process of dispute resolution by clearly laying out the assumptions and tests that can be applied. In advance. In as accessible a form as we know how.
For example, a simple test might be that a receipt is signed validly if:
a. the receipt has a valid hash,
b. that hash is signed by a private key,
c. the signature is verified by a public key, paired with that private key.
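In code, that mechanical test might look like the toy sketch below, using textbook RSA with tiny numbers (a real system would use a vetted library, proper padding, and full-size keys; everything here is illustrative only). Note that it checks only (a) through (c), and says nothing at all about the murkier caveats that follow:

```python
# Toy illustration of tests (a)-(c): a "valid signature" is a purely
# mechanical check. Textbook RSA with tiny numbers, for demonstration only.
import hashlib

# Tiny textbook RSA keypair (p=61, q=53): modulus n, public e, private d.
n, e, d = 3233, 17, 2753

def digest(receipt: bytes) -> int:
    # (a) hash the receipt; reduced mod n only because the toy key is tiny
    return int.from_bytes(hashlib.sha256(receipt).digest(), "big") % n

def sign(receipt: bytes) -> int:
    # (b) sign the hash with the private exponent d
    return pow(digest(receipt), d, n)

def verify(receipt: bytes, sig: int) -> bool:
    # (c) the public exponent e recovers the hash iff the keys are a pair
    return pow(sig, e, n) == digest(receipt)

receipt = b"paid: 100 units, alice -> bob"
sig = sign(receipt)
print(verify(receipt, sig))  # True
```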
Now, as cryptographers, we can see problems, which we can present as caveats, beyond the strict statement that the receipt has a valid signature from the signing key:
d. the public key has been presented by the signing party (person) as valid for the purpose of receipts,
e. the signing party has not lost the private key,
f. the signature was made based on best and honest intents...
That's where it gets murky. But, the proper place to deal with these murky issues is in the courts. We can't solve those issues in the code, and we shouldn't try. What we should do is instead surface all the assumptions we make, and list out the areas where further care is needed.
Thirdly, we can create protocols that bear in mind the concept of evidence. That means we use various techniques such as signed receipts, logs, sharing of records and chains of signatures to create pieces of evidence.
We use the careful techniques of protocol design to marshal sufficient evidence of strength to make it easy to resolve any questions; before they become disputes, and ideally, before they leave the protocol!
And, when these questions do become disputes, we try and make it easy (read: cheap) to present strong evidence to those resolving any dispute.
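As one small illustration of the "chains" technique mentioned above, here is a hash-chained log in which each record commits to its predecessor, so any later alteration of history is cheap to detect and to demonstrate to a third party (a minimal sketch, not any particular deployed protocol):

```python
# Sketch of one evidence technique: a hash-chained log. Each entry's
# link hashes the entry together with the previous link, so a record
# cannot be quietly altered without breaking every later link.
import hashlib

def chain(entries):
    """Return [(entry, link)] where each link commits to all prior entries."""
    links, prev = [], b"genesis"
    for entry in entries:
        prev = hashlib.sha256(prev + entry.encode()).digest()
        links.append((entry, prev))
    return links

def verify_chain(links):
    """Recompute every link; any tampered entry breaks the chain."""
    prev = b"genesis"
    for entry, link in links:
        prev = hashlib.sha256(prev + entry.encode()).digest()
        if link != prev:
            return False
    return True

log = chain(["alice pays bob 10", "bob pays carol 4"])
print(verify_chain(log))                    # True
log[0] = ("alice pays bob 99", log[0][1])   # tamper with history
print(verify_chain(log))                    # False
```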
iang
[1] It was highly instructive, and I'd almost recommend that everyone get in trouble with the courts at least once in their lives, if only it weren't so darn destructive of one's life!
[2] It's even worse than the signature. At least there is some resemblance between the process and result of a digital signature and a legal signature. With (non)-repudiation, however, cryptographers are saying that the entire meta-concept of the court is wrong.
[3] Courts actually have a rule, that, only claims made up front can be heard - so you had better get your repudiations up there in the beginning!
[4] This is a characteristic of the no-risk school of cryptography, but even economic cryptographers fall into this trap with regularity.
The game, "Six Degrees of Kevin Bacon," is explained by Albert-László Barabási in his new book, Linked. For a synopsis, see Gene Callahan's review.
The game is a fascinating one to computer scientists and financial cryptographers. If we were actors, we'd be only two degrees from Kevin Bacon.
Anguilla is the home of Financial Cryptography, and it is a favourite haunt of Kevin and his brother Michael.
FC really took root in the tiny island, through the first four conferences known by the same name. There, the very odd mix of geeks and academics blended effortlessly into the relaxed island life of one of the Caribbean's misfits.
Anguilla was historically populated by escaped slaves, and escaped the plantation history of the Caribbean, as nothing much can grow there. She is thus fiercely independent. But she has little of the baggage of suppression borne by the other islands, and no need to lord it over the white visitors as revenge for centuries of slavery and plantations.
From abject poverty, this slightly open view allowed her, with a little help from British tax breaks and Canadian airport grants, to create an island devoted to unspoiled tourism. By a quirk of fate, the locals banned franchises like McDonald's and cruise ships; only high-end, expensive resorts were permitted.
By universal acclaim, the Dune Preserve is the favourite place of visitors seeking that special beach therapy that other islands have on the posters only. Bankie Banks, who runs the Dune, also hosts the Moonsplash festival where the Bacon Brothers play most Marches, under a full moon.
And, Bankie Banks, who is in trouble as much as he is out of it, has almost certainly played in a movie sometime. If not, somebody will put him in one sometime!
Which leaves us all, in the FC community, 2 degrees from Kevin Bacon, if only we can get ourselves into a movie.
http://www.wws.princeton.edu/~dkarlan/downloads/impactperils.pdf
Microfinance Impact Assessments: The Perils of Using New Members as a Control Group
Microfinance institutions aim to reduce poverty. Some assess their impact through a cross-sectional impact methodology which compares veteran to new participants, and then calls any difference between these two groups the "impact" of the program. Such studies have risen recently in popularity because they are cheap, easy to implement, and often encouraged by donors. USAID, through its AIMS project, encourages this methodology with its SEEP/AIMS practitioner-oriented tools. This paper intends to inform practitioners about the perils of using such a strategy, and suggests a couple of solutions to some of the larger problems with this approach.
This approach makes many assumptions that are untested, and others that are tested and false. For example, this approach assumes that dropouts have, on average, identical income and consumption levels to those who remain. Furthermore, this approach assumes that dropouts are not made worse off by participating in the program. This approach also assumes that when lending groups form they do not sort themselves by economic background. These assumptions are not only theoretically brave, but are contradicted by existing empirical research. This paper suggests a method to address the attrition biases, and suggests further research be conducted on the other implicit assumptions before expending resources on a plausibly unreliable assessment methodology.
Over on the Nuclear Blog , Pelle maps the NeuClear system on to the FC7 model:
"I remember when Ian Grigg presented his paper Financial Cryptography in 7 Layers at FC2000. I thought it was the single most useful paper presented in a conference dominated by academic non deployable variants of Chaumian Blinded Cash. The paper has inspired me ever since."
Go there to see the model in a table, but I've also copied the text below (with some editorial fixes):
The 7 layers is a way of separating out and understanding the various parts that manage risk and ensure safety in a financial cryptography system like NeuClear.
In his abstract he describes it like this:
"Financial Cryptography is substantially complex, requiring skills drawn from diverse and incompatible, or at least, unfriendly, disciplines. Caught between Central Banking and Cryptography, or between accountants and programmers, there is a grave danger that efforts to construct Financial Cryptography systems will simplify or omit critical disciplines.
This paper presents a model that seeks to encompass the breadth of Financial Cryptography (at the clear expense of the depth of each area). By placing each discipline into a seven layer model of introductory nature, where the relationship between each adjacent layer is clear, this model should assist project, managerial and requirements people."
So it's due time, really, to try to put the NeuClear architecture into perspective using his model.
7. Finance
NeuClear provides a flexible model for financial applications. Transactions can be either simple one way transactions like transfers or more advanced bidirectional transactions such as exchange transactions. The general requirements here are:
- Cheap
- Standardized across Asset Types
- Able to be generalized enough to be used for all kinds of securities
- Non-repudiable (you cannot go back on a transaction)
- Realtime (no clearing period)
6. Value
Within NeuClear everything that carries value is an Asset. This is similar to what Ian calls a Ricardian contract. An asset can be a single item or a whole electronic currency. The only real definition is that it is backed by something of value.
5. Governance
Governance models are very important. Separation of control is key to NeuClear. Ian has defined the 5 parties model, which should be applied to any Asset system within NeuClear. The parties are as follows:
a. Issuer: Essentially the originator or promoter of an asset. To promote trust, he contracts with the 3 following parties and does little else.
b. Mint: The mint issues assets into the NeuClear system. This could be a trust company, but should be independent from the Issuer. They verify that the value implied in the above layer is actually entered into the system.
c. Operator: The operator is like an Application Service Provider. He maintains the site and database in such a manner that neither of the other parties can interfere with or otherwise modify the underlying accounting of the value system.
d. Manager: The manager is contracted by the Issuer to do the day-to-day management of the asset - for example, requesting that the Mint add more assets to the system, and customer service.
e. Users: The users are in many ways the auditors of the system. As long as at least one of the 3 parties directly above is honest, they can monitor the running of the Asset live and instantly, to see if anything goes wrong.
4. Accounting
The accounting of the Asset maintains a constant real-time view of who owns what within an Asset. It is managed by an AssetController, a piece of software run on a server somewhere by an operator. The core bookkeeping is done by NeuClear Ledger, a general-purpose library for bookkeeping.
An Asset can be configured to be sent to various AssetControllers at the same time, one of them being the main one for real-time purposes, the others being 3rd-party auditing asset controllers. This is possible because each transaction is digitally signed and thus can be verified elsewhere. Regular end-of-day (or end-of-hour) statements can be exchanged and verified, to make sure that none of the operators are modifying the data.
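The statement-exchange check described above can be sketched very simply: if each party computes a digest over a canonical form of its books, the digests match exactly when the books do. The function and variable names below are illustrative, not NeuClear's actual API:

```python
# Sketch of cross-checking replicated ledgers: each operator publishes
# an end-of-period digest of its books; auditors compare digests to
# detect a tampering operator. Names here are illustrative only.
import hashlib
import json

def statement_digest(ledger):
    """Canonicalise the ledger (sorted account -> balance) and hash it."""
    canonical = json.dumps(sorted(ledger.items())).encode()
    return hashlib.sha256(canonical).hexdigest()

primary = {"alice": 90, "bob": 110}
auditor = {"bob": 110, "alice": 90}     # same books, different order
print(statement_digest(primary) == statement_digest(auditor))  # True

auditor["alice"] = 9_000                # a dishonest operator edits a balance
print(statement_digest(primary) == statement_digest(auditor))  # False
```

Canonicalisation is the design point: without a fixed ordering and encoding, two honest copies of the same books would hash differently.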
3. Rights
The key to rights within NeuClear is the NeuClear ID framework. It provides a universal Identity system for all parties within NeuClear and is universal across asset types. Thus each Identity can be thought of as not only an identifier, but an account that works across all assets. The Identity model is flexible and supports many different kinds of Identities, including "Ticket" identities that can be issued as part of a cash transaction at a physical agency, as well as SmartCard-based Identities that are controlled by the holder of a SmartCard.
2. Software Engineering
Our main software engineering infrastructure is based entirely on open common use standards such as:
+ HTTP
+ XML
+ XML Signatures
The current implementation uses Java as the main programming language. However it would be relatively straightforward to port it to Microsoft's .NET platform as well.
1. Cryptography
RSA Public Key Cryptography via Sun's JCE Spec
The Financial Cryptography 2004 conference has quietly (!) announced their accepted papers:
http://fc04.ifca.ai/program.htm
Click above, or read on for the full programme....
The Ephemeral Pairing Problem
Jaap-Henk Hoepman
Efficient Maximal Privacy in Voting and Anonymous Broadcast
Jens Groth
Practical Anonymity for the Masses with MorphMix
Marc Rennhard and Bernhard Plattner
Call Center Customer Verification by Query-Directed Passwords
Lawrence O'Gorman, Amit Bagga, and John Bentley
A Privacy-Friendly Loyalty System Based on Discrete Logarithms over Elliptic Curves
Matthias Enzmann, Marc Fischlin, and Markus Schneider
Identity-based Chameleon Hash and Applications
Giuseppe Ateniese and Breno de Medeiros
Selecting Correlated Random Actions
Vanessa Teague
Addressing Online Dictionary Attacks with Login Histories and Humans-in-the-Loop
S. Stubblebine and P.C. van Oorschot
An Efficient and Usable Multi-Show Non-Transferable Anonymous Credential System
Pino Persiano and Ivan Visconti
Electronic National Lotteries
Elisavet Konstantinou, Vasiliki Liagkou, Paul Spirakis, Yannis C. Stamatiou, and Moti Yung
Mixminion: Strong Anonymity for Financial Cryptography
Nick Mathewson and Roger Dingledine
Interleaving Cryptography and Mechanism Design: The Case of Online Auctions
Edith Elkind and Helger Lipmaa
The Vector-Ballot E-Voting Approach
Aggelos Kiayias and Moti Yung
Microcredits for Verifiable Foreign Service Provider Metering
Craig Gentry and Zulfikar Ramzan
Stopping Timing Attacks in Low-Latency Mix-Based Systems
Brian N. Levine, Michael K. Reiter, and Chenxi Wang
Secure Generalized Vickrey Auction without Third-Party Servers
Makoto Yokoo and Koutarou Suzuki
Provable Unlinkability Against Traffic Analysis
Ron Berman, Amos Fiat, and Amnon Ta-Shma
http://www.usnews.com/usnews/issue/031222/usnews/22secrecy.htm
The above article talks a lot about how secrecy is the hallmark of the current USA administration. It includes this one snippet on page 8:
"Secret evidence of a different kind comes into play through a little-noticed effect of the U.S.A. Patriot Act. A key provision allows information from surveillance approved for intelligence gathering to be used to convict a defendant in criminal court."
Skipping past the rhetoric, and without examining the provisions of that act in detail, this signals a fairly significant shift in the threat models faced by ordinary civilians in the jurisdictions concerned. (By this, we include of course financial cryptography.)
In the past, it was possible to treat one's transmissions as protected from the average plausible attacks by people similar to oneself. Encrypted email to your attorney was secure against, say, a bribed or nosy system administrator. An encrypted spreadsheet of one's hotel bills was secure against an ex-spouse's divorce attorney.
In addition to that, you took reasonable care of your own machine, and hey presto, we had a security model for everyone. The closing statements of such a model said something like "secure against the threats we know about and expect. Does not include attacks by the intelligence services..."
In practical economic terms, this was grand. The common view amongst most practitioners was that if you were up against the spooks, then that was a whole different ballgame (and, we charged more...). We, as a society, relied on a shared understanding with the spooks that if they shared their product with civilians, it would weaken their effectiveness against real national security threats. In exchange for giving the spooks carte blanche with their activities, we also reduced society's costs in protecting against over-empowered public officials.
Now, we are seeing a progressive blurring of the lines of demarcation. This will make threat assessment much harder in the future. It will no longer be trivially possible to draw a line between civilian and military means, and say, for example, that "national technical means" are excluded from our threat model. It may now be necessary, for all sorts of civilian cryptography scenarios, to consider attacks by intelligence agencies operating under the direction of civilian agencies.
Take the office of the Leader of the Opposition. In the past, it was plausible to chortle and scoff at the notion that you needed to protect against politically inspired attacks. We can no longer take for granted that a self-respecting intelligence agent would keep their information and activities out of politics. A rational analysis would now show there are just too many ways for spook material to be drafted into the re-election campaign.
Whether this means that cryptography practitioners should insist on very high standards in all crypto (as the no-risk cryptography school has it) or whether we should insist on lowering standards to increase adoption rates (as the economic cryptography school has it) is, I consider, an orthogonal issue.
What is clear is that the demand for crypto, and lots of it, will get stronger. More and more people will demand more and more cryptographic protection, as a routine feature in more and more applications. That's the inevitable consequence of more and more politicians pushing more and more power out to bureaucrats with private agendas.
iang
PS: in practice, this separation may have been illusory, as it appears to have only been maintained in "rich" countries, and only some of those. (Coincidentally, the ones that pioneered the Internet, it seems.) One of the threats that groups like cryptorights.org consider routine is that of the local security services working in concert with economic interests.
More and more people are thinking about payments being done with RFIDs. Here's another article on it:
http://www.cbsnews.com/stories/2003/12/12/tech/printable588346.shtml
I feel another repeat of the smart card money story coming on... RFIDs fall in the "cool and visible" category, not the "solves many problems" category. Still, there's nothing like paying for stuff that people can hold.
RFIDs amount to numbers that can't be easily copied. To do that, they have to be physical, and the production process is very expensive at the level of one RFID, but cheap at the level of many RFIDs.
In any physical payments situation, this could make it possible to easily identify a particular person's account, or a paper note purporting to be worth a dollar [1].
But, let's get relativistic here: the advantage over existing techniques is strictly marginal. We can already print difficult-to-forge paper notes - check out the recent Euros as one not-so-leading example. And credit cards are difficult to copy until one gets access to a database of them [2].
In practice, RFIDs may make a marginal difference to these characteristics. But, make no mistake - the real differences will come in the overall architecture that employs these techniques.
RFIDs will be sugar-coated journo-food - an important role in itself for establishing the momentum for adoption, but not as critical to the security as people would like.
Think of RFIDs as a repeat of the smart card money story [3]. Anybody who's too focused on the tech will fail. Look for systems to have real architectures behind them.
iang
[1] Retail is the obvious one, but not the most useful. In fact, these devices have more likelihood of making headway in the transport sector - driver-based payments for truckers, petrol, tolls purchases.
[2] If you want to stop credit cards being copied, stop making databases of them!
[3] Next year will be the year of the smartcard, I promise!
Over on Slashdot [1], there was a long debate about the NASDAQ market's decision to cancel a dud series of orders that sent shock waves through the US trading markets, based on this NYT article.
The core problem underlying this event is that the trade - and all trades - are not real. That is, they are promises to make good on the orders, not trades in and of themselves. It's like imagining that everyone in the system, including the system itself, works with credit cards with really high limits on them.
This leads to two problems. Firstly, the trader might max out his credit card, and not make good on a promise. That's called settlement risk [2].
Secondly, any other party might decide to stick their wick in and futz with the order, because their own credit card gets maxed out. In this case, the other parties included
- the computer program that sent the repeating series of unreal orders,
- NASDAQ, which cancelled these unreal orders after other traders had made yet further unreal orders, and
- Archipelago, which re-opened without alerting traders to the potential unwinding of the unreal state.
That's called operational risk. It's what happens when the systems make mistakes on you, and leave you with the mess to clean up [3].
Mistakes happen [4]. As time goes on, routine mistakes get caught and exotic ones slip through. Then, as they are discovered, these exotics are turned into routines by new patches. So, over time, more and more complexity marches together with more and more exotic mistakes.
Within any system, mistakes will be caught, or not, as the case may be. This mistake was not caught before it sent the shock waves through the system. Systems-wise, it's fairly clear that most mistakes will be caught, and an occasional one will get through and cause havoc. This will cause a rethink and repatching, which will catch that very mistake next time.
Until the next exotic mistake slips through, which by definition is more exotic, and more damaging [5].
As each ensuing trade is based, one after the other, on the original promise, the result is that the financial system works both as a breeding ground for bigger and more exotic mistakes, and as an amplifier of the exotic mistake. As is intuitively seen by the slashdotters (but glossed over by the financial participants) the original order is now lost in the confusion of responses by various agents trying to establish the "right" thing to do.
Say hello to systemic risk!
This is the risk that a shock cascades through the system, causing the collapse of entities unrelated to the original cause, and thus further propagating and amplifying the original shock.
Systemic risk in today's financial system rests primarily on the core premise that the whole system is based on promises to pay. It's as if every participant has a credit card, and too many of them are maxed out.
There is an easy solution to this. It's called real time gross settlement, and it means making every trade real. This isn't to eliminate credit, it instead amounts to taking away the credit cards from the systems and institutions in the center. That is, NASDAQ, Archipelago, etc, who are not traders, but are part of the system.
Which would then make them into sinks of systemic risk, rather than amplifiers of it. Mistakes would be smothered one step away from the source, rather than cause waves of systemic failures through the global system.
But, don't hold your breath waiting for this solution. Keen students of financial cryptography will know that it is both a real solution, and a cost effective one. In fact, too cost-effective, as it will reduce the opportunities to make money by those with lots of credit cards.
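To make the credit-card analogy concrete, here's a toy sketch in Python - purely illustrative, with made-up names and balances, nothing to do with any real exchange's plumbing - contrasting a promise-based system with real time gross settlement:

```python
# Toy model (illustrative only): deferred promises vs real time gross settlement.
# In the promise-based system a trade is accepted now and may fail later;
# under RTGS a trade either moves real balances immediately or is rejected.

class PromiseSystem:
    def __init__(self, balances):
        self.balances = dict(balances)
        self.promises = []          # (payer, payee, amount), to settle later

    def trade(self, payer, payee, amount):
        self.promises.append((payer, payee, amount))   # accepted on credit
        return True

    def settle(self):
        failures = []
        for payer, payee, amount in self.promises:
            if self.balances[payer] >= amount:
                self.balances[payer] -= amount
                self.balances[payee] += amount
            else:
                failures.append((payer, payee, amount))  # settlement risk bites
        self.promises = []
        return failures

class RTGS:
    def __init__(self, balances):
        self.balances = dict(balances)

    def trade(self, payer, payee, amount):
        if self.balances[payer] < amount:
            return False            # unfunded trade never enters the system
        self.balances[payer] -= amount
        self.balances[payee] += amount
        return True

promise = PromiseSystem({"alice": 100, "bob": 0})
promise.trade("alice", "bob", 500)   # accepted - it's only a promise
failures = promise.settle()
print(failures)                      # [('alice', 'bob', 500)] - fails later

rtgs = RTGS({"alice": 100, "bob": 0})
ok = rtgs.trade("alice", "bob", 500)
print(ok)                            # False - smothered at the source
```

The point of the sketch: in the promise system the bad trade is accepted, and other trades can be built on top of it before it fails at settlement; under RTGS it never enters the system at all.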
iang
[1] Slashdot discussion
[2] definition: "Settlement risk is the risk that a settlement in a transfer system does not take place as expected. Generally, this happens because one party defaults on its clearing obligations to one or more counterparties." The BIS calls this credit risk.
[3] It's also related to Herstatt Risk, after a German bank was suspended leaving counterparties sandwiched between such one-sided trades (see URL in [2]), and to legal risk, as some contracts got breached, but others got enforced, arbitrarily.
[4] vadelais reports on slashdot:
"Mistakes have been made in market trading before by other companies.
In May last year, London's FTSE 100 index dropped by more than 2%, after a trader typed 300m, instead of 30m, while selling a parcel of shares.
In 1998 a Salomon Brothers trader mistakenly sold 850m-worth of French government bonds by LEANING ON HIS KEYBOARD.
And at the end of 2001, shares in Exodus, a bankrupt internet firm, jumped by 59,000% when a trader accidentally bid $100 for its shares, at a time when its value was 17 cents."
[5] Lawrence_Bird on slashdot suggests an advantageous and more exotic mistake:
"To this day, I do not believe the order was entered in error. I believe a hedge fund or other firm had a need to buy, and buy large. What better way to get filled then to trigger a large move in a closely correlated market (Dow futures) where you know the majority of the trades will be cancelled, and just sit on the bid in the other market (CME) as people panic sell. And do not rule out collusion."
In a rare burst of journalistic research, the Economist has a good article on the state of viruses and similar security threats.
It downplays terrorism, up-plays phishing, agrees that Microsoft is a monoculture, but disagrees with any conclusions promoted.
Even better, the Economist goes on to be quite perceptive about the future of anonymity, pseudonymity, and how to protect privacy. It's almost as if they did their homework! Well done, and definitely worth reading.
In a recent Declan misinterpretation, discussed over on Larry Lessig's Blog, the definition of pseudonymity was raised [1].
Larry Lessig proposes that pseudonymity is a mechanism that hides a real identity, short of presentation of (say) a "warrant". I disagree with this definition, and it's certainly not the one that I've seen having dealt with these things in the last decade or so.
Pseudonymity is achieved when an entity is created that can clearly tie its activities together into one individuality. A Pseudonym, or nym for short, has individuality, distinction, a personality, and a name.
In contrast, Anonymity lacks these facets. It is achieved when it is not possible to tie one activity to another of the same entity.
In further contrast, we have the Real Person, who is fully identified down to the single human being [2]. That means we have three states for source of any event: anonymity, pseudonymity, and identified. And, there are two entities: nyms, and real persons [3].
Larry's definition presupposes the existence of the real person behind every nym.
In practice, that's not what happens - a pseudonymous account at Hotmail or Yahoo includes a bunch of details that may or may not point at a person behind that account. The existence of that information doesn't imply whether there are zero, one, or many people behind that account, and it doesn't imply that last week's people are next week's people. But, it does imply that a single email from the account can be related to all other emails from the same nym.
For example, it has been the case that groups of people have shared the same nym in order to maintain a prolific personality. In a sense, the nym is the set of all emails from that same entity. Their personality is derived solely from the words in those emails, whereas the personality of a real person includes more information outside their words.
In contrast, when an entity (either nymous or real) writes two different emails anonymously, there is no connection between them. A crucial feature of anonymity is that it lacks "distinctive character, individuality, factors of recognition, personality."
Nyms have all that. Nyms have names, they have personalities, built over many actions. They just lack a handle back to any single human.
Why is it crucial to get the distinction clear? Because of systems design: Systems built without a full appreciation of nymity migrate into real identity systems, and repeat the existing failures. Systems built with nymity at their core are capable of solving new and interesting challenges that real identity systems will always fail at.
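As a rough illustration of the distinction - and it is only an illustration, using a toy HMAC "signature" and a hypothetical nym called Rachel, not any real nym scheme - think of a nym as nothing more than a signing key:

```python
# Toy illustration (not real crypto engineering): a nym is a signing key.
# Every message under that key ties together into one individuality,
# while an anonymous message carries nothing linking it to any other.
import hashlib
import hmac
import os

class Nym:
    """One individuality, built from its own actions - no handle to a human."""
    def __init__(self, name):
        self.name = name                    # the nym's public handle
        self._key = os.urandom(32)          # held by zero, one, or many people

    def sign(self, message):
        tag = hmac.new(self._key, message, hashlib.sha256).hexdigest()
        return (self.name, message, tag)

def same_nym(msg_a, msg_b):
    # Linkability: both messages carry the same nym's handle.
    return msg_a[0] == msg_b[0]

rachel = Nym("Rachel")                      # maybe one person, maybe a committee
m1 = rachel.sign(b"first post")
m2 = rachel.sign(b"second post")
anon = (None, b"unsigned leaflet", None)    # anonymous: no handle at all

print(same_nym(m1, m2))     # True  - one personality, zero known humans
print(same_nym(m1, anon))   # False - nothing ties the leaflet to the nym
```

The key holder(s) can change from week to week without the nym losing its individuality, which is exactly the Hotmail-account point above; the anonymous act, by contrast, has no handle to link it to anything.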
iang
[1] How does one organise this trackback ping thing then?
[2] What is the proper term for a Real Person and their singular Identity in this context? I don't know.
[3] For sake of brevity, I skip over the interesting application of this to "legal persons" such as corporations.
Real world commerce is largely built on a fabric of contracts. Considered abstractly, a contract is an agreed framework of rules used by separately interested parties to coordinate their plans in order to realize cooperative opportunities, while simultaneously limiting their risk from each other's misbehavior. Electronic commerce is encouraging the growth of contract-like mechanisms whose terms are partially machine understandable and enforceable.
Workshop on Electronic Contracting
http://tab.computer.org/tfec/cec04/cfpWEC.html
The First IEEE International Workshop on Electronic Contracting (WEC) is the forum to discuss innovative ideas at the interface between business, legal, and formal notions of contracts. The target audiences will be researchers, scientists, software architects, contract lawyers, economists, and industry professionals who need to be acquainted with the state of the art technologies and the future trends in electronic contracting. The event will take place in San Diego, California, USA on July 6 2004. IEEE SIEC 2004 will be held in conjunction with The International Conference on Electronic Commerce (IEEE CEC 2004).
Topics of interest include but are not limited to the following:
Contract languages and user interfaces
Computer aided contract design, construction, and composition
Computer aided approaches to contract negotiation
"Smart Contracts"
"Ricardian Contracts"
Electronic rights languages
Electronic rights management and transfer
Contracts and derived rights
Relationship of electronic and legal enforcement mechanisms
Electronic vs legal concepts of non-repudiation
The interface between automatable terms and human judgement
Kinds of recourse, including deterrence and rollback
Monitoring compliance
What is and is not electronically enforceable?
Trans-jurisdictional commerce & contracting
Shared dynamic ontologies for use in contracts
Dynamic authorization
Decentralized access control
Security and dynamism in Supply Chain Management
Extending "Types as Contracts" to mutual suspicion
Contracts as trusted intermediaries
Anonymous and pseudonymous contracting
Privacy vs reputation and recourse
Instant settlement and counter-party risk
Submissions and Important Dates:
Full papers must not exceed 20 pages printed using at least 11-point type and single spacing. All papers should be in Adobe portable document format (PDF) format. The paper should have a cover page, which includes a 200-word abstract, a list of keywords, and author's e-mail address on a separate page. Authors should submit a full paper via electronic submission to boualem@cse.unsw.edu.au. All papers selected for this conference are peer-reviewed. The best papers presented in the conference will be selected for special issues of a related computer science journal.
Submissions must be received no later than January 10, 2004.
Authors will be notified of their submission's status by March 2, 2004
Camera-Ready versions must be received by April 2, 2004
General Chair
Ming-Chien Shan, Hewlett Packard, USA
Program Co-Chairs
Boualem Benatallah, University of New South Wales, Sydney, Australia <boualem@cse.unsw.edu.au>
Claude Godart, INRIA-LORIA, Nancy, France <Claude.Godart@loria.fr>
Program Committee:
to be added later
Mark S. Miller, Hewlett Packard Laboratories, Johns Hopkins University
Alan Karp, Hewlett Packard Laboratories
For those of you following the US finance news, there is a developing scandal surrounding mutual funds, based on a regulatory or governance failure. Which it would be - regulatory failure or governance failure - is a matter for individual choice.
It doesn't seem to be widely explained, but I have what might be called an inside scoop, having watched this develop over the last 6 months, so here's the story (not that I know any secrets).
Mutual funds within the US are technically unregulated. There is no designated regulator, and any regulation comes from "general jurisdiction" such as NASD's SRO status (SRO is self-regulatory-organisation) applying to the licensed broker/dealers within its ambit, or to the State Attorneys General. More about this below.
Mutual funds themselves are poorly governed. They have a bit of basic separation of roles, in that the assets held within are often held in the name of an independent party. So, that's one role of the 5PM, that which we would call the repository. But, there generally is no distinct co-signatory, and the digital side - the units in the funds - have a hotchpotch of arrangements.
IOW, governance that makes the average top-notch DGC blush, as it were. One could equate the mutual fund sector to the less well governed DGCs, the ones that follow the "we don't do that, you have to trust us" mantra.
One more little thing needs to be explained: Mutual funds are not real time. Unlike DGCs, they settle in decidedly archeological time. Settlement can take months, and as long as 18 months has been seen...
In essence, these issues are investments and you are meant to be "in" for a long time, to allow your manager to "do his thing" for you.
Of course, at some stage, the poorly liquid fund gets hit with a big redemption order. In order to satisfy redemptions, with some level of immediacy, what the mutual funds have been doing is taking on clients that can take up the slack in the short term.
This is called "capacity." That is, if a big redemption hits, capacity is injected by a known big player, until the assets can be sold and monetarised. At which time, the big player is bought out again.
For a profit, of course. This involves some sort of good deal for the big capacity player, which is only fair, his capital is being used. In a sense, it is a bridging loan, but, as it occurs in the units of the fund, it's a bridge with a nice curve to it.
Nice, good, clean finance.
Until, that is, the managers realised that they can also benefit from letting the big players come in anytime, not just when they need the big money. Which is easy enough, as commissions are paid on the big amounts, and kickbacks are always plausible.
And this resulted in delito #1: "Market Timing."
This was a process whereby a big player and a mutual fund would negotiate a special deal to place large lumps over short periods of time, generating *two* commissions for the manager, and some big gain for the player.
In essence, it was a quantity discount, negotiated in advance.
But, in practice, it was egregious as it deliberately took money from the *other* unit holders: When the big money man saw a shift in the market, he could put a big lump in, wait a few days, and then pull it out again, knowing that others in the fund could not move as fast, and that the fund prices themselves were sticky.
However, it wasn't strictly illegal. There were no rules in the mutual fund agreement against it... So, market timing wasn't in essence something for anyone to get upset about, it was just one of those insider secrets.
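The sums involved are easy to sketch. Here's a toy worked example in Python - the numbers are entirely hypothetical - showing why a sticky fund price rewards the timer at the other unit holders' expense:

```python
# Toy numbers (hypothetical) for the market timing play.
# The underlying assets have jumped 5%, but the fund's sticky price
# hasn't caught up yet: the timer buys stale and sells fresh.
stale_nav = 10.00           # today's fund price, yet to reflect the move
fresh_nav = 10.50           # where the assets are really trading (+5%)

stake = 1_000_000
units = stake / stale_nav               # big lump in, at the stale price
profit = units * fresh_nav - stake      # out again once the NAV catches up

print(round(profit, 2))   # 50000.0 - paid for by the slower unit holders
```

The timer's gain isn't conjured from nowhere: the fund sold him units cheap, diluting everyone who stayed put.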
Let's move on! One of the things that a mutual fund does is to mark-to-market its assets. Then, at a fixed time every day, it would declare the value of the fund, and the price of the units, based on that time and asset base.
And, as a consequence, it would also set a time by which all orders had to be in by. 3pm was common. If the order arrived after the timeslot, it got tomorrow's price.
Now comes delito #2: "Late Trading."
Unsatisfied with their bulk discounts, some players also negotiated an ability to trade *after* the cutoff time.
Which led to the perfect arbitrage. By watching a market in, say, Japan, they could see what the opening price was, after the cutoff, and if it moved favourably (to them), they could trade the fund at what was to become yesterday's price.
This was the perfect rort. Fortunes were being made. And, investors were being raped. If the trade looked good, it was confirmed at what was to be yesterday's price, and the manager would shift back the order in time. If it looked bad, it was forgotten.
All upside, no downside.
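The "free option" nature of late trading shows up clearly in a toy simulation - again, purely hypothetical Python of my own, with invented numbers - comparing an honest trader who commits before the cutoff against one who peeks first:

```python
# Toy sketch (hypothetical): why trading after the cutoff is a free option.
# The honest trader commits before 3pm, blind to the overnight move.
# The late trader peeks at (say) Tokyo's open, backdates the order when
# the move is favourable, and "forgets" it otherwise.
import random
random.seed(7)

def honest(today, tomorrow, stake):
    units = stake / today               # committed before the cutoff
    return units * tomorrow - stake     # wins or loses with the market

def late_trader(today, tomorrow, stake):
    if tomorrow > today:                # peeked after the cutoff
        units = stake / today           # order shifted back in time
        return units * tomorrow - stake
    return 0.0                          # bad day? order never happened

honest_pnl = late_pnl = 0.0
for _ in range(10_000):
    p0 = 10.0
    p1 = p0 * (1 + random.gauss(0, 0.01))   # overnight move, +/- 1% or so
    honest_pnl += honest(p0, p1, 1000)
    late_pnl += late_trader(p0, p1, 1000)

print(late_pnl > honest_pnl)   # True: the late trader keeps only the wins
```

Over many rounds the honest trader's gains and losses roughly cancel, while the late trader pockets every up-move and dodges every down-move, all at the expense of the fund's other holders.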
Into this fray swaggers the Attorney General of New York State. David Brown is the lead guy there, although most people will know the Attorney General as Eliot Spitzer, who is now challenging for top spot as Governor of the State of New York.
Somewhere along the end of 2002 or beginning of 2003, the AG caught a whiff of crazy money being made in the mutual fund industry, and did some digging. Eventually, David Brown and his team managed to understand what was going on, and also found a case that they could take on.
That case was "Canary Capital." It was launched about mid 2003, and settled within two months for a $40 million fine and sundry banishments.
That's blinding speed in the regulatory business!
Which might indicate several things: The company got off very lightly, it was caught dead to rights, and it gave up some more info. Further, it set a precedent.
Funnily enough, Canary was an investor, and thus did not (presumably) do more than conspire to participate (and, as the settlement didn't admit blame, it maybe didn't even do that). But, even as an investor, it was sufficient to give the AG his lead case.
And now, all the big names in Wall Street are being hit with the same thing. Shocked by the crash of the first toppled domino, the whole lot are wobbling away from the vertical.
Security Trust Corp., a 13 billion dollar trust company, is being closed up by the federal government, and three executives are under indictment, facing long jail sentences (25 years, etc). A whole gaggle of famous names in the finance industry are under subpoena, and lots of hurried house cleaning operations are in progress.
It seems that almost everyone was doing it, or being done to.
Worse, big Wall Street firms were (it has been rumoured) front-running their own mutual funds! That is, one division would invest into the fund and pull out in a couple of days, generating more fees and profits for both sides.
Class action attorneys are foaming at the mouth with glee - because the investor never gets a dime of the fines, and thus, he still has a complaint! There has already been one big case launched, and the speed of it indicates that it is a strategic defence case, not a real aggressive play by injured parties.
And, of course, Eliot Spitzer has added another notch on his revolver grip, and his deputy David Brown is odds-on favourite to take the AG position.
The total fines that will be collected off Wall Street firms are rumoured to be in the order of 10 times more than the last scandal. That one set the record at 1.3 billion, so 13 billion sounds like a pretty nice haul.
Nice work if you can get it. What does it all mean to the world of DGCs? One thing is that the governance record of the unregulated mutual fund industry was pretty spotty.
To cut a long story short, the normal path for a badly governed finance sector is
1. success,
2. complaints, concerns, handwringing,
3. scandal,
4. regulation.
Mutual funds are well into phase 3. Bringing this back to DGCs, they are in phase 2. Does this mean phase 3 - scandal - follows inevitably? No, there is at least the possibility that the lesser-governed DGCs clean up their acts.
But, unless they do, any scandal will create the forces necessary to lead to regulation. So, it's not inevitable, but one has to say that it is likely, unless there is a clean up.
I personally do not favour regulation.
(NB., that's an actual opinion, signalled explicitly, due to the rarity of such!)
To my mind, the addition of regulation causes the following sequence:
1. regulation
2. scandal
3. add more regulation, go back to 2.
But, the state of the world pertains. Which means that unless we provide a sector that can and does govern itself well, and removes the underlying (but poor) rationale for regulation, then we may well end up walking that treadmill.
iang
PS: all of the above is hearsay. I have taken no time to check the facts, and it could very well be that I've made some gross errors. These would be all mine and mine alone. I'm not "in" the MF industry, nor "in" a privileged position, so think of the above as no more than gossip. Its value is in seeing the MFs predict our own futures in advance, not in understanding the sordid details of their scandals.
PS: an earlier version of this rant was posted on the dgcchat list.
Addendum: It appears that some academic interest in the problem has occurred, with a paper by Eric Zitzewitz entitled Who Cares About Shareholders? Arbitrage Proofing Mutual Funds
See also an article at Stanford b-school The Blind Spot in Mutual Fund Investing
http://www.enhyper.com/content/erlangrisk.pdf
In this paper, some experiences of using the concurrent functional language Erlang to implement a classical vertical application, a risk management information system, are presented. Due to the complex nature of the business logic and the interactions involved in the client/server architecture deployed, traditional development techniques are unsatisfactory. First, the nature of the problem suggests an iterative design approach. The use of abstractions (functional patterns) and compositionality (both functional and concurrent composition) have been key factors to reduce the amount of time spent adapting the system to changes in requirements. Despite our initial concerns, the gap between classical software engineering and the functional programming paradigm has been successfully filled.