This is a great piece of attack research on a SWIFT-using payment institution (likely a British bank that allowed the research to be conducted), from Oliver Simonnet.
But I was struck by how the architectural flaw leapt out and screamed HIT ME HERE! 1/17
@mikko We were able to demonstrate a proof-of-concept for a fraudulent SWIFT payment message to move money. This was done by manually forging a raw SWIFT MT103 message. Research by our Oliver Simonnet: labs.f-secure.com/blog/forging-swift-mt-payment-messages
Here’s the flaw: SAC is a flag that says signature verification and RMA (Relationship Management Application) authorisation and verification were successful.
Let me say that more clearly: SAC says verification is done. 2/17
The flaw is this: the SAC isn’t the authorisation - it’s a flag saying there was an auth. Which means, in short, SWIFT messages do not carry any role-based authorisations.
They might be authorised, but it’s like they slapped a sticker on to say that.
Not good enough. 3/17
It’s hard to share just how weird this is. Back in the mid nineties when people were doing this seriously on the net, we invented many patterns to make digital payments secure, but here we are in the ‘20s - and SWIFT and their banks haven’t got the memo. 4/17
They’ve done something: LUAs, MACs, CHKs.
But these are not authorisations over messages; they’re handshakes that carry the unchanged message to the right party. These are network-security mechanisms, not application-level authorisations - they don’t cover rogue injection. 5/17
Let’s talk about a solution. This is the pattern that Gary and I first explored in Ricardo in the mid-nineties, and Chris Odom put more flesh on the design and named it:
Russian Dolls
It works like this: 6/17
Every node is an agent. And has a signing key.
The signing key says this message is authorised - not access controlled, not merely authentically from me, but authorised by me, and I take the consequences if it goes wrong. 7/17
Every node signs their messages going out at the application level.
(as an aside - to see this working properly, strip out LUAs and encryption, they’re just a distraction. This will work over the open net, over raw channels.) 8/17
Every node receives inbound messages, checks they are authorised, and only then proceeds to process authorised messages. At that point it takes over responsibility - this node is now responsible for the outcomes, but only for its own processing, because it has an authorised instruction. 9/17
The present node now wraps the inbound message into its outbound message, and signs the whole message, including the received inbound.
The inbound is its authorisation to process, the outbound is an authorisation to the next node.
Sends, to the next node. 10/17
Repeat. Hence the message grows each time. Like Russian Dolls, it recursively includes each prior step. So, to verify:
take the message you received, check it is authorised.
THEN, pull out its inner message, and verify that.
Repeat all the way to the smallest message. 11/17
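To make the pattern concrete, here's a minimal sketch in Python. It assumes Ed25519 keys via PyNaCl and a JSON encoding; the node names, field names and layout are illustrative, not the SWIFT or Ricardo wire format.

```python
import json
from nacl.signing import SigningKey   # pip install pynacl

def wrap_and_sign(signing_key, payload, inbound=None):
    """Wrap the (already verified) inbound message inside our outbound one, then sign the lot."""
    body = {"payload": payload, "inbound": inbound}            # the doll inside the doll
    blob = json.dumps(body, sort_keys=True).encode()
    return {"body": body,
            "sig": signing_key.sign(blob).signature.hex(),
            "signer": signing_key.verify_key.encode().hex()}

def verify_all(msg, trusted):
    """Check the outermost signature, then peel and check each inner doll in turn."""
    while msg is not None:
        key = trusted.get(msg["signer"])                        # trusted: hex key -> VerifyKey
        if key is None:
            return False                                        # unknown signer => not authorised
        key.verify(json.dumps(msg["body"], sort_keys=True).encode(),
                   bytes.fromhex(msg["sig"]))                   # raises BadSignatureError on forgery
        msg = msg["body"]["inbound"]                            # recurse into the prior step
    return True

# Customer -> bank, each hop wrapping the previous message:
customer, bank = SigningKey.generate(), SigningKey.generate()
m1 = wrap_and_sign(customer, {"pay": "GBP 100 to Bob"})
m2 = wrap_and_sign(bank, {"mt103": "..."}, inbound=m1)
trusted = {k.verify_key.encode().hex(): k.verify_key for k in (customer, bank)}
print(verify_all(m2, trusted))   # True; an injected or altered layer fails verification
```

Each hop verifies the whole nest before signing its own layer, so a forged injection fails at the first signature check rather than somewhere deep inside the bank.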
This solves the SWIFT design weakness: firstly there is no message injection possible - at the message layer.
You could do this over the open net and it would still be secure (where, secure means nobody can inject messages and cause a bad transaction to happen). 12/17
2- as the signing key is now responsible as a proxy for the role/owner of that node, the security model is much clearer - PROTECT THAT KEY!
3- compromise of any random key isn’t enough because the attacker can’t forge the inbound.
For top points, integrate back into the customer. 13b/17
OK, if you’ve followed along & actually get the pattern, you’re likely saying “Why go to all that bother? Surely there is a better, more efficient way…”
Well, actually no, it’s an end to end security model. That should be the beginning and end of the discussion. 14/17
But there is more: The Russian Dolls pattern of role-based authorisation (note, this is not RBAC) is an essential step on the road to full triple entry:
"I know that what you see is what I see."
15/17
In triple entry, the final signed/authorised message, the Big Babushka, becomes the 'receipt'. You:
(a) reflect the Babushka back to the sender as the ACK, AND (b) send it to SWIFT and on to the receiving bank.
Now, the receiving bank sees what the sending client sees. 16/17
And you both know it. Muse on just what this does to the reconciliation and lost payments problem - it puts designs like this in sight of having reliable payments.
(But don’t get your hopes up. There are multiple reasons why it’s not coming to SWIFT any time soon.)
17/17
"Access to Cash Review" confirms much that we have been warning of as UK walks into its future financial gridlock. From WolfStreet:
Transition to Cashless Society Could Lead to Financial Exclusion and System Vulnerability, Study Warns
by Don Quijones • Mar 14, 2019
“Serious risks of sleepwalking into a cashless society before we’re ready – not just to individuals, but to society.”
Ten years ago, six out of every ten transactions in the UK were done in cash. Now it’s just three in ten. And in fifteen years’ time, it could be as low as one in ten, reports the final edition of the Access to Cash Review. Commissioned as a response to the rapid decline in cash use in the UK and funded by LINK, the UK’s largest cash network, the review concludes that the UK is not nearly ready to go fully cashless, with an estimated 17% of the population – over 8 million adults – projected to struggle to cope if it did.
Although the amount of cash in circulation in the UK has surged in the last 10 years from £40 billion to £70 billion and British people as a whole continue to value it, with 97% of them still carrying cash on their person and another 85% keeping some cash at home, most current trends — in particular those of a technological and generational bent — are not in physical money’s favor:
Over the last 10 years, cash payments have dropped from 63% of all payments to 34%. UK Finance, the industry association for banks and payment providers, forecasts that cash will fall to 16% of payments by 2027.
...
Curiously, several factors are identified which speak to current politics:
I have always thought that Brexit was not a vote against Europe but a vote against London. The population of Britain split, somewhere around the 2008 crisis, into rich London and poor Britain. After the crash, London got a bailout and the poor got poorer.
By 2016-2017 the feeling in the countryside was palpably different to London. Even 100km out, people were angry. Lots of people living without decent jobs, and no understanding as to why the good times had not come back again after the long dark winter.
Of course the immigrants got blamed. And it was all too easy to believe the silver-tongued lie of Londoners that EU regulation was the villain.
But London is bank territory. That massive bailout kept it afloat, and on to bigger and better things. E.g., a third of all European fintech startups are in London, and that only makes sense because the goal of a fintech is to be sold to ... a bank.
Meanwhile, the banks and the regulators have been running a decade-long policy of financial exclusion:
"And it’s not all going in the right direction – tighter security requirements for Know Your Customer (KYC) and Anti-Money Laundering) (AML), for example, actually make digital even harder to use for some.
Note the politically cautious understatement: KYC/AML excludes millions from digital payments. The AML/KYC system as designed gives the banks the leeway to cut out all low-end unprofitable accounts by raising barriers to entry, and gives them carte blanche to close accounts at any time. Onboarding costs are made very high by KYC, and 'suspicion' is mandated by AML: there is no downside for the banks if they are suspicious early and often, and serious risk of huge fines if they miss one.
Moving to a cashless, exclusionary society is designed for London's big banks, but risks society in the process. Around the world, the British are the most slavish in implementing this system, and thus denying millions access to bank accounts.
And therefore jobs, trade, finance, society. Growth comes from new business and new bank accounts, not existing ones. Placing the UK banks as the gatekeepers to growth is thus a leading cause of why Britain-outside-London is in a secular depression.
Step by painful step we arrive at Brexit. Which the report wrote about, albeit in roundabout terms:
Government, regulators and the industry must make digital inclusion in payments a priority, ensuring that solutions are designed not just for the 80%, but for 100% of society.
But one does not keep one's job in London by stating a truth disagreeable to the banks or regulators.
7 years after we called the cancer that is criminal activity in Bitcoin-like cryptocurrencies, here comes a report suggesting that over 4.3% of Monero has been mined illicitly by criminals.
A First Look at the Crypto-Mining Malware Ecosystem: A Decade of Unrestricted Wealth
Sergio Pastrana, Universidad Carlos III de Madrid, spastran@inf.uc3m.es
Guillermo Suarez-Tangil, King's College London, guillermo.suarez-tangil@kcl.ac.uk
Abstract — Illicit crypto-mining leverages resources stolen from victims to mine cryptocurrencies on behalf of criminals. While recent works have analyzed one side of this threat, i.e. web-browser cryptojacking, only white papers and commercial reports have partially covered binary-based crypto-mining malware. In this paper, we conduct the largest measurement of crypto-mining malware to date, analyzing approximately 4.4 million malware samples (1 million malicious miners), over a period of twelve years from 2007 to 2018. Our analysis pipeline applies both static and dynamic analysis to extract information from the samples, such as wallet identifiers and mining pools. Together with OSINT data, this information is used to group samples into campaigns. We then analyze publicly-available payments sent to the wallets from mining pools as a reward for mining, and estimate profits for the different campaigns. Our profit analysis reveals campaigns with multi-million earnings, associating over 4.3% of Monero with illicit mining. We analyze the infrastructure related with the different campaigns, showing that a high proportion of this ecosystem is supported by underground economies such as Pay-Per-Install services. We also uncover novel techniques that allow criminals to run successful campaigns.
This is not the first time we've seen confirmation of the basic thesis in the paper Bitcoin & Gresham's Law - the economic inevitability of Collapse. Anecdotal accounts suggest that in the period of late 2011 and into 2012 there was a lot of criminal mining.
Our thesis was that criminal mining begets more criminal mining, and eventually pushes out honest business of all forms, from mining to trade.
Testing the model: Mining is owned by Botnets
Let us examine the various points along an axis from honest to stolen mining: 0% botnet mining to 100% saturation. Firstly, at 0% of botnet penetration, the market operates as described above, profitably and honestly. Everyone is happy.
But at 0%, there exists an opportunity for near-free money. Following this opportunity, one operator enters the market by turning his botnet to mining. Let us assume that the operator is a smart and careful crook, and therefore sets his mining limit at some non-damaging minimum value such as 1% of total mining opportunity. At this trivial level of penetration, the botnet operator makes money safely and happily, and the rest of the Bitcoin economy will likely not notice.
However we can also predict with confidence that the market for botnets is competitive. As there is free entry in mining, an effective cartel of botnets is unlikely. Hence, another operator can and will enter the market. If a penetration level of 1% is non-damaging, 2% is only slightly less so, and probably nearly as profitable for the both of them as for one alone.
And, this remains the case for the third botnet, the fourth and more, because entry into the mining business is free, and there is no effective limit on dishonesty. Indeed, botnets are increasingly based on standard off-the-shelf software, so what is available to one operator is likely visible and available to them all.
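As a toy illustration of the free-entry argument (the 1% step and the hard 100% ceiling are assumptions for illustration, not data), the dynamic looks like this:

```python
# Each entrant takes a small, individually "safe" share; free entry means
# the shares simply accumulate until honest mining is crowded out.
def botnet_saturation(step_pct=1, max_pct=100):
    share_pct, operators = 0, 0
    while share_pct + step_pct <= max_pct:   # nothing stops the next entrant
        share_pct += step_pct
        operators += 1
    return operators, share_pct

print(botnet_saturation())   # -> (100, 100): full saturation, one careful crook at a time
```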
What stopped it from happening in 2012 and onwards? Consensus is that ASICs killed the botnets. Serious mining firms moved to large custom rigs of ASICs, and as these were so much more powerful than any home computer, they effectively knocked the criminal botnets out of the market. Which the new paper acknowledges:
... due to the proliferation of ASIC mining, which uses dedicated hardware, mining Bitcoin with desktop computers is no longer profitable, and thus criminals’ attention has shifted to other cryptocurrencies.
Why is botnet mining back with Monero? Presumably because Monero uses an ASIC-resistant algorithm that is best served by GPUs. And is also a heavy privacy coin, which works nicely for honest people with privacy problems but also works well to hide criminal gains.
So says NIST...
10 years ago I annoyed the entire crypto-supply industry:
Hypothesis #1 -- The One True Cipher Suite
In cryptoplumbing, the gravest choices are apparently on the nature of the cipher suite. To include the latest fad algo or not? Instead, I offer you a simple solution. Don't.
There is one cipher suite, and it is numbered Number 1.
Ciphersuite #1 is always negotiated as Number 1 in the very first message. It is your choice, your ultimate choice, and your destiny. Pick well.
The One True Cipher Suite was born of watching projects and groups wallow in the mire of complexity, as doubt caused teams to add multiple algorithms - a complexity that easily doubled the cost of the protocol with consequent knock-on effects & costs & divorces & breaches & wars.
It - The One True Cipher Suite as an aphorism - was widely ridiculed in crypto and standards circles. Developers and standards groups like the IETF just could not let go of crypto agility, the term that was born to champion the alternative. This sacred cow led the TLS group to field something like 200 standard suites in SSL, and to radically reduce them to 30 or 40 over time.
Now, NIST has announced that AES as a single standard algorithm is worth $250 billion in economic benefit over the 20 years of its project lifetime - from 1998 to now.
h/t to Bruce Schneier, who also said:
"I have no idea how to even begin to assess the quality of the study and its conclusions -- it's all in the 150-page report, though -- but I do like the pretty block diagram of AES on the report's cover."
One good suite based on AES allows agility within the protocol to be dropped. Entirely. Instead, upgrade the entire protocol to an entirely new suite, every 7 years. I said, if anyone was asking. No good algorithm lasts less than 7 years.
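A minimal sketch of the idea, assuming nothing about any real protocol: the suite is pinned by the protocol version, never negotiated, and an upgrade means a whole new protocol version.

```python
# One suite per protocol version; the contents below are illustrative choices,
# not a recommendation from the post.
SUITES = {
    1: {"kex": "X25519", "cipher": "AES-256-GCM", "hash": "SHA-256"},   # The One True Cipher Suite
    # Suite 2 arrives as a whole-protocol upgrade in ~7 years, not as a negotiation option.
}

def open_session(peer_version):
    suite = SUITES.get(peer_version)
    if suite is None:
        raise ValueError("unknown protocol version: upgrade the protocol, don't negotiate")
    return suite                       # your choice, your ultimate choice, and your destiny

print(open_session(1))
```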
Crypto-agility was a sacred cow that should have been slaughtered years ago, but maybe it took this report from NIST to lay it down: $250 billion of benefit.
In another footnote, we of the Cryptix team supported the AES project because we knew it was the way forward. Raif built the Java test suite and others in our team wrote and deployed contender algorithms.
Because it's Sunday, I thought I'd write a missive on that favourite depressing topic, the current trend of the British nation to practice economic self-immolation. I speak of course of the FCA's letter to the CEO.
Some may be bemused and think that this letter shows signs of progress. No such thing. What the letter literally is, is the publication of a hitherto secret order sent to CEOs of banks to not bank crypto.
The instruction was delivered some time ago. Verbally. And deniably. The banks knew it, the FCA knew it, but all denied it unless under condition of confidentiality or alcohol.
Which put the FCA, the banks, and Britain into considerable difficulties - the FCA could neither move forward to adjust because there was no position to adjust, the British banks could not bank crypto because they were under instruction, and the British crypto-using public either got screwed by the banks or they left the country for Europe and places further afield.
(Oddly, it turns out that Berlin is the major beneficiary here, but that's a salacious distraction.)
After some pressure on the 'star chamber' process of business policy, it transpired a week ago that the FCA has come out of the cold and issued the actual instruction in writing. This is now sufficient to replace the secret instruction, so it represents a movement of sorts. We can now at least openly debate it.
However, the message has not changed. To understand this, one has to read the entire letter and also has to know quite a lot about banks, compliance and the fine art of nay-saying while appearing to be encouraging.
The tone is set early on:
CRYPTOASSETS AND FINANCIAL CRIME
As evidence emerges of the scope for cryptoassets to be used for criminal purposes, I am writing regarding good practice for how banks handle the financial crime risks posed by these products.
There are many non-criminal motives for using cryptoassets.
It's all about financial crime. The assumption is that crypto is probably used for financial crime, and the banks' job is to stop that happening. Which would logically set a pretty high bar because (like money, which is also used for financial crime) the use of crypto (money) by customers is generally opaque to the bank. But banks are used to being the frontline soldiers in the war on finance, so they will not notice a change here.
The problem is of course that it simply doesn't work. The bank can't see what you are doing with crypto, so what to do? When the banks ask customers what they do with the money, costs skyrocket, but the quality of the information doesn't increase.
There is therefore only one equilibrium in this mathematical puzzle that the FCA has set the banks: Just say no. Shut down any account that uses crypto. Because we will hold you responsible for it, and we will insist on no failures.
Now, if it was just that, a dismal letter conflating crime with business, one could argue the toss. But the letter goes on. Here's the smoking gun:
Following a risk-based approach does not mean banks should approach all clients operating in these activities in the same way. Instead, we expect banks to recognise that the risk associated with different business relationships in a single broad category can vary, and to manage those risks appropriately.
The risk-based approach!
Once that comes into play, what the bank reads is that they are technically permitted to use crypto, but they are strictly liable if it goes wrong. Because they've been told to do their risk analysis, and they've done their risk analysis, and therefore the risks are analysed - which gets conflated with the risks being reduced to zero.
Which means they can only touch crypto if they *know*. This knowledge being full knowledge, not that other long-running joke called KYC. Banks must know that the crypto is all and totally bona fide.
But they generally have no such tool over customers, because (a) they are bankers and if they had such a tool (b) they wouldn't be bankers, they would be customers.
(NB., to those who know their history, anglo-world banks used to know their customers' business quite well. But that went the way of the local branch manager and the dodo. It was all replaced by online, national computer networks, and now AIs that do not manifestly know anything other than how to spit out false positives. NB2 - this really only refers to the anglo world being UK, USA, AU, CAN, NZ and the smaller followers. Continental banking and other places follow a different path.)
Back to today. The upshot of the relationship for regulator-bank in the anglo world is this: "risk-based analysis" is code for "if you get it wrong, you're screwed." Which latter part is code for fines and so forth.
So what is the significance of this, other than the British policy of not doing crypto as a business line (something that Berlin, Gibraltar, Malta, Bermuda and others are smiling about)? It's actually much more devastating than one would think.
It's not about just crypto. As a society, we don't care about crypto more than as a toy for rich white boys to play at politics and make money without actually doing anything. Fun for them, soap opera for observers but the hard work is elsewhere.
It's not in the crypto - it's in the compliance. As I wrote elsewhere, banks in the anglo world and their regulators are locked in a deadly embrace of compliance. It has gotten to the point where all banking product is now driven by compliance, and not by customer service, or by opportunity, or by new product.
In that missive on Identity written for R3's member banks, I made a startling claim:
FITS – the Financial Identity Trilemma Syndrome
If you're suffering from FITS, it might be because of compliance. As discussed at length in the White Paper on Identity, McKinsey has called out costs of compliance as growing at 20% year on year. Meanwhile, the compliance issue has boxed in and dumbed down the security, and reduced the quality of the customer service. This is not just an unpleasantness for customers, it's a danger sign – if customers desert because of bad service, or banks shed customers to reduce the risk of fines, then banks shrink, and in the current environment of delicate balance sheets, rising compliance costs and a recessionary economy, reduction in bank customer base is a life-threatening issue.
For those who are good with numbers, you'll pretty quickly realise that 20% is great if it is profits or revenues or margin or one of those positive numbers, but it's death if it is cost. And, compliance is *only a cost*. So, death.
Which led us to wonder how long this can go on?
(youtube, transcript) Cost of compliance now is about 30% I've heard, but you pick your own number. Who works at a bank? Nobody, okay. That's probably good news. [laughs] Who's got a bank account? I've got bad news: if you do the compounding, in seven years you won't have a bank account, because all the banks will be out of money. If you compound 30% forward by 20%, in seven years all of the money is consumed on compliance.
As I salaciously said at a recent Mattereum event on Identity, the British banks have 7 years before they die. That's assuming a 30% compliance cost today and a 20% year-on-year increase. You could pick 5% now and 10% by 2020, as Duff & Phelps did, which perversely seems more realistic: if the load really is 30% now, then the banks are already dead, and we don't need to wait until it reaches 100%. Either way, there is gloom and doom however society looks at it.
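For the numerate, here's the back-of-envelope compounding behind the seven-year claim (the 30% base and 20% growth rate are the figures quoted above, not independent data):

```python
cost = 0.30            # compliance as a share of bank revenue today (assumed)
growth = 0.20          # year-on-year growth in compliance cost
years = 0
while cost < 1.0:      # stop when compliance consumes all of the money
    cost *= 1 + growth
    years += 1
print(years, f"{cost:.0%}")   # -> 7 107%
```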
Now these are stupid numbers and bloody stupid predictions. The regulators are going to kill all the banks? Nah... that can only be believed with compelling evidence.
The FCA letter to the CEO is that compelling evidence. The FCA has doubled down on compliance. They have now, I suspect, achieved their 20% increase in compliance costs for this year, because now the banks must double down and test all accounts against crypto-closure policy. More.
We are one year closer to banking Armageddon - thanks, FCA.
What can we see here? Two things. Firstly, it cannot go on. Even Mark Carney must know that burning the banking village to save it is going to put Britain into the mother of all recessions. So they have to back off sometime, but when will that be? If I had to guess, it will be when major clients in the City of London desert the British banks, because no other message is heard in Whitehall or Threadneedle St. C.f., Brexit. Business itself isn't really listened to, or in the immortal words of the British Foreign Secretary,
Secondly, is it universal? No. There is a glimmer of hope. The major British banks cannot do crypto because the costs are asymmetric. But for a crypto bank that is constructed from the ground up, within the FCA's impossible recipe, the costs are manageable. Such a bank would basically impose the crypto-compliance costs over all its customers, so all must be in crypto. An equilibrium available to a specialist bank, not any high street dinosaur.
You might call that a silver lining. I call it a disaster. The regulators are hell bent on destroying the British economy by adding 20% extra compliance burden every year - note that they haven't begun to survey their costs, nor the costs to society.
After the collapse there will be green shoots, and we should cheer? Shoot me now.
Frankly, the broken window fallacy would be better than that process - we should just topple the banks with crypto right now and get the pain over with, because a fast transition is better than the FCA smothering the banks with compliance love.
But, obviously, the mandarins in Whitehall think differently. Watch the next seven years to learn how British people live in interesting times.
Stephen Mason has released the 4th edition of Electronic Evidence at
Electronic Evidence (4th edition, Institute of Advanced Legal Studies for the SAS Humanities Digital Library, School of Advanced Study, University of London, 2017)
http://ials.sas.ac.uk/digital/humanities-digital-library/observing-law-ials-open-book-service-law/electronic-evidence
Note that this is free to download, and it's a real law book, so we're entering a new world of accessibility to this information. The following are also freely available:
Electronic Signatures in Law (4th edition, Institute of Advanced Legal Studies for the SAS Humanities Digital Library, School of Advanced Study, University of London, 2016)
http://ials.sas.ac.uk/digital/humanities-digital-library/observing-law-ials-open-book-service-law/electronic-signatures
Free journal: Digital Evidence and Electronic Signature Law Review
http://journals.sas.ac.uk/deeslr/
(also available in the LexisNexis and HeinOnline electronic databases)
For those of us who spend a lot of time travelling around, free digital downloads are a gift from heaven!
Courtesy of statistical analysis over this site by Uyen Ng.
If you've ever wondered what that Market for Silver Bullets paper was about, here's Pete Herzog with the easy version:
When the Security Community Eats Its Own
BY PETE HERZOG
http://isecom.org
The CEO of a Major Corp. asks the CISO if the new exploit discovered in the wild, Shizzam, could affect their production systems. He says he doesn't think so but, just to be sure, they will analyze all the systems for the vulnerability.
So his staff is told to drop everything, learn all they can about this new exploit and analyze all systems for vulnerabilities. They go through logs, run scans with FOSS tools, and even buy a Shizzam plugin from their vendor for their AV scanner. They find nothing.
A day later the CEO comes and tells him that the news says Shizzam likely is affecting their systems. So the CISO goes back to his staff to have them analyze it all over again. And again they tell him they don’t find anything.
Again the CEO calls him and says he’s seeing now in the news that his company certainly has some kind of cybersecurity problem.
So, now the CISO panics and brings on a whole incident response team from a major security consultancy to go through each and every system with great care. But after hundreds of man hours spent doing the same things they themselves did, they find nothing.
He contacts the CEO and tells him the good news. But the CEO tells him that he just got a call from a journalist looking to confirm that they’ve been hacked. The CISO starts freaking out.
The CISO tells his security guys to prepare for a full security upgrade. He pushes the CIO to authorize an emergency budget to buy more firewalls and secondary intrusion detection systems. The CEO pushes the budget to the board who approves the budget in record time. And almost immediately the equipment starts arriving. The team works through the nights to get it all in place.
The CEO calls the CISO on his mobile – rarely a good sign. He tells the CISO that the NY Times just published that their company allegedly is getting hacked Sony-style.
They point to the newly discovered exploit as the likely cause. They point to blogs discussing the horrors the new exploit could cause, and what it means for the rest of the smaller companies out there who can’t defend themselves with the same financial alacrity as Major Corp.
The CEO tells the CISO that it's time they bring in the FBI. So he needs him to come explain himself and the situation to the board that evening.
The CISO feels sick to his stomach. He goes through the weeks of reports, findings, and security upgrades. Hundreds of thousands spent and - nothing! There's NOTHING to indicate a hack or even a problem from this exploit.
So wondering if he’s misunderstood Shizzam and how it could have caused this, he decides to reach out to the security community. He makes a new Twitter account so people don’t know who he is. He jumps into the trending #MajorCorpFail stream and tweets, "How bad is the Major Corp hack anyway?"
A few seconds later a penetration tester replies, "Nobody knows xactly but it’s really bad b/c vendors and consultants say that Major Corp has been throwing money at it for weeks."
Read on for the deeper analysis.
Here's what Greg Maxwell said about asset issuance in sidechains:
So the idea behind the issued assets functionality in Elements is to explicitly tag all of the tokens on the network with an asset type, and this immediately lets you use an efficient payment verification mechanism like bitcoin has today. And then all of the accounting rules can be grouped by asset type. Normally in bitcoin your transaction has to ensure that the sum of the coins that comes in is equal to the sum that come out to prevent inflation. With issued assets, the same is true for the separate assets, such that the sum of the cars that come in is the same as the sum of the cars that come out of the transaction or whatever the asset is. So to issue an asset on the system, you use a new special transaction type, the asset issuance transaction, and the transaction ID from that issuance transaction becomes the tag that traces them around. So this is early work, just the basic infrastructure for it, but there's a lot of neat things to do on top of this. *This is mostly the work of Jorge Tímon*.
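As a minimal sketch of the accounting rule Maxwell describes (the list-of-tuples representation is an assumption for illustration, not the Elements data model): for every asset tag, the sum that comes in must equal the sum that goes out.

```python
from collections import Counter

def balanced(inputs, outputs):
    """inputs/outputs are lists of (asset_tag, amount); the tag is the issuance tx ID."""
    sums_in, sums_out = Counter(), Counter()
    for tag, amount in inputs:
        sums_in[tag] += amount
    for tag, amount in outputs:
        sums_out[tag] += amount
    return sums_in == sums_out          # no inflation of any asset type

# 5 "cars" in and 5 out, 2 coins in and 2 out -> balanced
print(balanced([("cars-tx", 5), ("coin-tx", 2)],
               [("cars-tx", 3), ("cars-tx", 2), ("coin-tx", 2)]))   # True
```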
Jorge documented all this back in 2013 in FreiMarkets with Mark Friedenbach. Basically he's adding issuance to the blockchain, which I also covered in principle back in that talk in January at PoW's Tools for the Future. As he covers issuance above, here's what I said about another aspect, being the creation of a new blockchain:
Where is this all going? We need to make some changes. We can look at the blockchain and make a few changes. It sort of works out that if we take the bottom layer, we've got a bunch of parameters from the blockchain, these are hard coded, but they exist. They are in the blockchain software, hard coded into the source code.
So we need to get those parameters out into some form of description if we're going to have hundreds, thousands or millions of block chains. It's probably a good idea to stick a smart contract in there, whose purpose is to start off the blockchain, just for that purpose. And having talked about the legal context, when going into the corporate scenario, we probably need the legal contract -- we're talking text, legal terms and blah blah -- in there and locked in.
We also need an instantiation, we need an event to make it happen, and that is the genesis transaction. Somehow all of these need to be brought together, locked into the genesis transaction, and out the top pops an identifier. That identifier can then be used by the software in various and many ways. Such as getting the particular properties out to deal with that technology, and moving value from one chain to another.
This is a particular picture which is a slight variation that is going on with the current blockchain technology, but it can lead in to the rest of the devices we've talked about.
The title of the talk is "The Sum of all Chains - Let's converge" for a reason. I'm seeing the same thinking in variations go across a bunch of different projects all arriving at the same conclusions.
It's all the same stuff.
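As an editor's sketch of the genesis construction described in that talk (the field names and the hashing scheme are assumptions, purely for illustration): hash the chain parameters, the bootstrap smart contract and the legal prose together, and out the top pops an identifier.

```python
import hashlib, json

def genesis_identifier(params, smart_contract, legal_prose):
    genesis = {
        "params": params,                  # consensus parameters, no longer hard-coded
        "smart_contract": smart_contract,  # the contract whose purpose is to start off the chain
        "legal_contract": legal_prose,     # the prose terms, locked in
    }
    blob = json.dumps(genesis, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

chain_id = genesis_identifier({"block_time": 600, "asset": "GBP-demo"},
                              "def bootstrap(): ...",
                              "This deed witnesses that ...")
print(chain_id)    # the identifier the software then uses to find the chain's properties
```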
Here's today's "legathon" at UBS, at which the banking chaps tried to figure out how to handle a swap of GBP and Canadian Dollars that goes sour - because of a bug, or a lapse in understanding, or a hack, who knows? The two traders end up in court trying to resolve their difference in views.
In the court, the judge asks for the evidence -- where is the contract? So it was discovered that the two traders had entered into a legal contract that was composed of the prose document (top black box) and the smart contract thingie (lower black box). Then some events had happened, but somehow we hadn't got to the end. At this point there is a deep dive into the tech and the prose and the philosophy, law, religion and anything else people can drag in.
However, there's a shortcut. On that prose on the whiteboard, the lawyer chap who's name escapes me wrote 3 clauses. Clause (1) said:
(1) longest chain is correct chain
See where this is going? The judge now asks which is the longest chain. One side looks happy, the other looks glum.
Let's converge; on the longest chain, *if that's what you wrote down*.
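As a toy reading of clause (1) as executable dispute resolution (the chain representation is an assumption):

```python
def correct_chain(chain_a, chain_b):
    """Clause (1): the longest chain is the correct chain."""
    return chain_a if len(chain_a) >= len(chain_b) else chain_b
```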
Editor's Preamble! Back in 1997 I gave a paper on crowdfunding - I believe the first ever proper paper, although there was one "lost talk" earlier by Eric Hughes - at Financial Cryptography 1997. Now, this conference was the first polymath event in the space, and probably the only one in the space, but that story is for another day. Because this was a polymath event, law professor Michael Froomkin stood up and asked why I hadn't analysed the crowdfunding system from the point of view of transaction economics.
I blathered - because I'd not heard of it! But I took the cue, went home and read the Ronald Coase paper, and some of his other stuff, and ploughed through the immensely sticky earth of Williamson. Who later joined Coase as a Nobel Laureate.
The prof was right, and I and a few others then turned transaction cost discussion into a cypherpunk topic. Of course, we were one or two decades too early, and hence it all died.
Now, with gusto, Vinay Gupta has revived it all as an explanation of why the blockchain works. Indeed, it's as elegant a description of 'why triple entry' as I've ever heard! So here goes my Saturday writing out Coase's first half block, or the first 5 minutes of Gupta's talk.
This is the title of the talk - Coase's Blockchain. Does anyone in the audience know who Ronald Coase was? No? Ok. He got the Nobel Prize for Economics in 1991. Coase's question was, why does the company exist as an institution? Theoretically, if markets are more efficient than command economies, because of a better distribution of information, why do we then recreate little pockets of command economy in the form of a thing you call a company?
And, understanding why the company exists is a critical thing if you want to start companies or operate companies because the question you have to ask is why are we doing this rather than having a market of individual actors. And Coase says, the reason that we don't have seas of contractors, we've got structures like say IBM, is because making good decisions is expensive.
Last time you bought a telephone or a television or a car you probably spent 2 days on the Internet looking at reviews trying to make a decision, right? Familiar experience? All of that is a cost, that in a company is made by purchasing. Same thing for deciding strategy, if you're a small business person you spend a ton of time worrying about strategy, and all of those costs in a large company are amortised across the whole company. The company serves to make decisions and then amortise the costs of the decisions across the entire structure.
This is all essentially about transaction costs. Now, move onto Venture Capital.
Paul Graham's famous essay "Black Swan Farming." What they basically say is venture capitalists have no idea what will or won't work, we can't tell. We are in the business of guessing winners, but it turns out that our ability to successfully predict is less than one in a hundred. Of a hundred companies we fund, 90 will fail, 10 will make about 10 times what we put into them, and one in about a thousand will make us a ton of money. One thousand to one returns are higher, but we actually have no way of telling which is which, so we just fund everything.
Even with their very large sample size, they are unable to successfully predict what will or will not succeed. And if this is true for venture capitalists, how much truer is it for entrepreneurs? As a venture capitalist, you have an N of maybe 600 or 1000, as an entrepreneur you've got an N of 2 or 3. All entrepreneurs are basically guessing that their thing might work with totally inadequate evidence and no legitimate right to assume their guess is any good because if the VCs can't tell, how the heck is the entrepreneur supposed to tell?
We're in an environment with extremely scarce information about what will or will not succeed and even the people with the best information in the world are still just guessing. The whole thing is just guesswork.
History of Blockchains in a Nutshell, and I will bring all this back together in time.
In the 1970s the SQL database was basically a software system that was designed to make it possible to do computation using tape storage. You know how in databases, you have these fixed field lengths, 80 characters 40 characters, all this stuff, it was so that when you fast-forwarded the tape, you knew that each field would take 31 inches and you could fast forward by 41 and a half feet to get to the record you needed next. The entire thing was about tape.
In the 1990s, we invent the computer network, at this point we're running everything across telephone wires, basically this is all pre-Ethernet. It's really really early stuff and then you get Ethernet and standardisation and DNS and the web, the second generation of technology.
The bridges between these systems never worked. Anybody that's tried to connect two corporations across a database knows that it's just an absolute nightmare. You get hacks like XML-EDI or JSON or SOAP or anything like that but you always discover that the two databases have different models of reality and when you interconnect them together in the middle you wind up having to write a third piece of software.
The N-squared problem. So the other problem is that if we've got 50 companies that want to talk to 50 companies you wind up having to write 50-squared interconnectors which results in an unaffordable economic cost of connecting all of these systems together. So you wind up with a hub and spoke architecture where one company acts as the broker, everybody writes a connector to that company, and then that company sells all of you down the river because it has absolute power.
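The N-squared point as a two-line calculation (a sketch; the 50-company figure is from the talk):

```python
def connectors(n):
    pairwise = n * (n - 1)      # every pair of companies writes its own adapter (~N squared)
    hub_and_spoke = n           # everyone writes one connector to the broker instead
    return pairwise, hub_and_spoke

print(connectors(50))           # -> (2450, 50): hence the hub, and hence the hub's absolute power
```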
As a result, computers have had very little impact on the structure of business even though they've brought the cost of communication and the cost of knowledge acquisition down to a nickel.
This is where we get back to Coase. The revolution that Coase predicted that computers should bring to business hasn't yet happened, and my thesis is that blockchains is what it takes to get that to run.
Editor again: That was Vinay's first 5m after which he took it across to blockchains. Meanwhile, I'll fork back a little to triple entry.
Recall the triple entry concept as being an extension of double entry: The 700 year old invention of double entry used two recordings as a redundant check to eliminate errors and surface fraud. This allowed the processing of accounting to be so reliable that employed accountants could do it. But accounting's double entries were never accepted outside the company, because as Gupta puts it, companies had "different models of reality."
Triple entry flips it all upside down by having one record come from an independent center, and then that record is distributed back to the two companies of any trade, making for 3 copies. Because we used digital signatures to fix one record, triply recorded, triple entry collapses the double vision of classical accounting's worldview into one reality.
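A minimal sketch of that single signed record, again using PyNaCl for the signature; the party names and fields are illustrative assumptions, not the original Ricardo receipt format.

```python
import json
from nacl.signing import SigningKey   # pip install pynacl

issuer_key = SigningKey.generate()     # the independent centre

def make_receipt(payer, payee, amount, asset):
    record = json.dumps({"payer": payer, "payee": payee,
                         "amount": amount, "asset": asset}, sort_keys=True).encode()
    return issuer_key.sign(record)     # one signature fixes the single shared record

receipt = make_receipt("Alice", "Bob", 100, "GBP")
# The same signed bytes are held in all three sets of books:
alice_books = bob_books = issuer_books = receipt
assert alice_books == bob_books == issuer_books   # "I know that what you see is what I see."
```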
We built triple entry in the 1990s, and ran it successfully, but it may have been an innovationary bridge too far. It may well be that what we were lacking was that critical piece: to overcome the trust factor we needed the blockchain.
On that note, here's another minute of the talk I copied before I realised my task was done!
The blockchain, regardless of all the complicated stuff you've heard about it, is simply a database that works just like the network. A computer network is everywhere and nowhere, nobody really owns it, and everybody cooperates to make it work, all of the nodes participate in the process, and they make the entire thing efficient.
Blockchains are simply databases updated to work on the network. And those databases are ones with different properties than the databases made to run on tape. They're decentralised, you can't edit anything, you can't delete anything, the history is stored perfectly, if you want to make an update you just republish a new version of it, and to ensure the thing has appropriate accountability you use digital signatures.
It's not nearly as complicated as the tech guys in blockchainland will tell you. Yes it's as complicated as the inside of an SQL database. All of your companies run SQL databases, none of you really know how they work, it's going to be just like that with blockchains. Two years you'll forget the word blockchain, you'll just hear database, and it'll mean the same thing. Probably.
If there's any news worth blogging about, it is this:
Breaking: Kenya's M-Kopa Solar Closes $12.45 million Fourth Funding Round
M-KOPA Solar has today closed its fourth round of investment through a $12.45 million equity and debt deal, led by LGT Venture Philanthropy. The investment will be used to expand the company's product range, grow its operating base in East Africa and license its technology to other markets.
Lead investor LGT Venture Philanthropy has backed M-KOPA since 2011 and is making its biggest investment yet in the fourth round, which also includes reinvestments from Lundin Foundation and Treehouse Investments (advised by Imprint Capital) and a new investment from Blue Haven Initiative.
In less than two and a half years since launch, M-KOPA Solar has installed over 150,000 residential solar systems in Kenya, Uganda and Tanzania, and is now connecting over 500 new homes each day. The company plans to further expand its distribution and introduce new products to reach an even larger customer base.
Jesse Moore, Managing Director and Co-Founder of M-KOPA Solar, says, "Our investors see innovation and scale in what M-KOPA does. And we see a massive unmet market opportunity to provide millions of off-grid households with affordable, renewable energy. We are just getting started in terms of the scale and impact of what we will achieve."
Oliver Karius, Partner, LGT Venture Philanthropy, says, "We believe that we are at the dawn of a multi-billion dollar 'pay-as-you-go' energy industry. LGT Venture Philanthropy is a long-term investor in M-KOPA Solar because they've proven to be the market leaders, both in terms of innovating and delivering scale. We have also seen first-hand what positive impacts their products have on customers' lives - making low-income households wealthier and healthier."
This deal follows the successful $20 million (KES1.8 billion) third round funding closed in December 2013 - which featured a working capital debt facility, led by the Commercial Bank of Africa.
The reason this is real news in the "new" sense is that indigenous solutions can work because they are tailored to the actual events and activities on the ground. In contrast, the western aid / poverty agenda typically doesn't work and does more harm than good, because it is an export of western models to countries that aren't aligned to those assumptions. Message to the west: Go away, we've got this ourselves.
As predicted on the first day we saw it (and published on this blog much later), mass transit payment systems are failing in Kenya:
Payment cards rolling back gains in Kenya's public transport sector
by Bedah Mengo, NAIROBI (Xinhua) -- "Cash payment only", reads a sticker in a matatu plying the city center-Lavington route in Nairobi, Kenya. The vehicle belonging to a popular company was among the first to implement the cashless fare payment system that the Kenya government is rooting for.
And as it left the city center to the suburb on Monday, some passengers were ready to pay their fares using payment cards. However, the conductor announced that he would not accept cashless payments.
"Please pay in cash," he said. "We have stopped using the cashless system."When one of the passengers complained, the conductor retorted as he pointed at the sticker pasted on the wall of the minibus. "This is not my vehicle, I only follow orders."
All passengers paid their fares in cash as those who had payment cards complained of having wasted money on the gadgets. The experience in the vehicle displays the fate of the cashless fare payment in the East African nation. The system is fast-fading from the country's transport barely a month after the government made frantic efforts to entrench it. ...
It's probably too late for them now, but I think there are ways to put such a system out through Kenya's mass transit. You just don't do it that way because the market will not accept it. Rejection was totally obvious, and not only because asymmetric payment mechanisms don't succeed if both sides have a choice:
The experience in the vehicle displays the fate of the cashless fare payment in the East African nation. The system is fast-fading from the country's transport barely a month after the government made frantic efforts to entrench it. The Kenya government imposed a December 1, 2014 deadline as the time when all the vehicles in the nation should be using payment cards. A good number of matatu operators installed the system as banks and mobile phone companies launched cards to cash in on the fortune.
Commuters, on the other hand, bought the cards to avoid being inconvenienced. However, the little gains that were made in that period are eroding as matatu operators who had embraced the system go back to the cash.
"We were stopped from using the cashless system, because the management found it cumbersome. They said they were having cash-flow problems due to the bureaucracies involved in getting payments from the bank. None of our fleet now uses the system," explained the conductor.A spot on various routes in the city indicated that there are virtually no vehicles using the cashless system.
"When it failed to take off on December 1 last year, many vehicles that had installed the system abandoned it. They have the gadgets, but they are not using them," said Manuel Mogaka, a matatu operator on the Umoja route.
As I pointed out, the root issue they missed here was the incentives of, well, just about everyone on the bus!
If someone is serious about this, I can help, but I take the real cash, not fantasy plastic. I spent some time working the real issues in Kenya, and they have more latent potential waiting to be tapped than just about any country on the planet. Our designs were good, but that's because they were based on helping people with problems they wanted solved and were willing to work to solve.
The ditching of the payment cards means that the Kenya government has a herculean task in implementing the system.
"It is not only us who are uncomfortable with the system, even passengers. Mid December last year, some passengers disembarked from my vehicle when I insisted I was only taking cashless fare. I had to accept cash because passengers were not boarding," said Mogaka.
Not handwavy bureaucratic agendas like "stamp out corruption." Yes, you! Read this on corruption and not this:
Kenya's Cabinet Secretary for Transport and Infrastructure Joseph Kamau said despite the challenges, the system will be implemented fully. He noted that the government will only renew licences of the vehicles that have installed the system. But matatu operators have faulted the directive, noting that it would be pointless to have the system when commuters do not have payment cards.
"Those cards can only work with people who have regular income and are able to budget a certain amount of money for fare every month. But if your income is irregular and low, it cannot work," said George Ogundi, casual laborer who stays in Kayole on the east of Nairobi.Analysts note that the rush in implementing the system has made Kenya's public transport sector a "graveyard" where the cashless payment will be buried.
"The government should have first started with revamping the sector by coming up with a well-organized metropolitan buses like those found in developed world. People would have seen their benefits and embraced cashless fare payment," said Bernard Mwaso of Edell IT Solutions in Nairobi.
(Obviously, if the matatu owners don't install, government resistance will last about a day after the deadline.)
Paul of Moonbase has put a plea onto kickstarter to fund a run of open RNGs. As we all know, having good random numbers is one of those devilishly tricky open problems in crypto. I'd encourage one and all to click and contribute.
For what it's worth, in my opinion, the issue of random numbers will remain devilish & perplexing until we seed hundreds of open designs across the universe and every hardware toy worth its salt also comes with its own open RNG, if only for the sheer embarrassment of not having done so before.
OneRNG is therefore massively welcome:
About this project
After Edward Snowden's recent revelations about how compromised our internet security has become, some people have worried about whether the hardware we're using is compromised - is it? We honestly don't know, but like a lot of people we're worried about our privacy and security.
What we do know is that the NSA has corrupted some of the random number generators in the OpenSSL software we all use to access the internet, and has paid some large crypto vendors millions of dollars to make their software less secure. Some people say that they also intercept hardware during shipping to install spyware.
We believe it's time we took back ownership of the hardware we use day to day. This project is one small attempt to do that - OneRNG is an entropy generator, it makes long strings of random bits from two independent noise sources that can be used to seed your operating system's random number generator. This information is then used to create the secret keys you use when you access web sites, or use cryptography systems like SSH and PGP.
Openness is important, we're open sourcing our hardware design and our firmware, our board is even designed with a removable RF noise shield (a 'tin foil hat') so that you can check to make sure that the circuits that are inside are exactly the same as the circuits we build and sell. In order to make sure that our boards cannot be compromised during shipping we make sure that the internal firmware load is signed and cannot be spoofed.
OneRNG has already blasted through its ask of $10k. It's definitely still worth contributing more, because it ensures a bigger run and brings much more attention to this project. As well, we signal to the world:
*we need good random numbers*
and we'll fight aka contribute to get them.
So, sitting in a pub idling till my 5pm, I thought I'd do a quick check on my mail. Someone likes my post on yesterday's rare evidence of MITMs, posts a comment. Nice, I read all comments carefully to strip the spam, so, click click...
Boom, Firefox takes me through the wrestling trick known as MITM procedure. Once I've signalled my passivity to its immoral arrest of my innocent browsing down mainstreet, I'm staring at the charge sheet.
Whoops -- that's not financialcryptography.com's cert. I'm being MITM'd. For real!
Fully expecting an expiry or lost exception or etc, I'm shocked! I'm being MITM'd by the wireless here in the pub. Quick check on twitter.com who of course simply have to secure all the full tweetery against all enemies foreign and domestic and, same result. Tweets are being spied upon. The horror, the horror.
On reflection, the false positive result worked. One reason for that, on the skeptical side, is that I'm one of the 0.000001% of the planet that has wasted significant years on the business of protecting the planet against the MITM, otherwise known as the secure browsing model (cue acronyms like CA, PKI, SSL here...), so I know exactly what's going on.
How do I judge it all? I'm annoyed, disturbed, but still skeptical as to just how useful this system is. We always knew that it would pick up the false positive; that's how Mozilla designed their GUI -- overdoing their approach. As I intimated yesterday, the real problem is whether it works in the presence of a flood of false positives -- claimed attacks that aren't really attacks, just normal errors where you should carry on.
Secondly, to ask: Why is a commercial process in a pub of all places taking the brazen step of MITMing innocent customers? My guess is that users don't care, don't notice, or their platforms are hiding the MITM from them. One assumes the pub knows why: the "free" service they are using is just raping their customers with a bit of secret datamining to sell and pillage.
Well, just another data point in the war against the users' security.
The real MITMs are so rare that protocols designed around them fall to the Bayesian impossibility syndrome (*). In short, false positives cause the system to be ignored, and when the real attack indicator turns up, it is treated as just another false alarm. Ignored. Fail.
Here's some evidence of that with Tor:
... I tested BDFProxy against a number of binaries and update processes, including Microsoft Windows Automatic updates. The good news is that if an entity is actively patching Windows PE files for Windows Update, the update verification process detects it, and you will receive error code 0x80200053..... If you Google the error code, the official Microsoft response is troublesome.
If you follow the three steps from the official MS answer, two of those steps result in downloading and executing a MS 'Fixit' solution executable. ... If an adversary is currently patching binaries as you download them, these 'Fixit' executables will also be patched. Since the user, not the automatic update process, is initiating these downloads, these files are not automatically verified before execution as with Windows Update. In addition, these files need administrative privileges to execute, and they will execute the payload that was patched into the binary during download with those elevated privileges.
(*) I'd love to hear a better name than Bayesian impossibility syndrome, which I just made up. It's pretty important: it explains why the current SSL/PKI/CA MITM protection can never work, relying on Bayesian statistics to show why infrequent real attacks cannot be defended against when overshadowed by frequent false positives.
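To put rough numbers on it (all three rates below are assumptions for illustration, not measurements): even with a very accurate detector, a warning is almost never a real attack when real attacks are rare.

```python
p_attack = 1e-6              # prior: chance a given connection is a real MITM (assumed)
p_warn_given_attack = 0.99   # the detector catches nearly every real attack (assumed)
p_warn_given_benign = 0.01   # expired certs, captive portals, misconfigurations... (assumed)

p_warn = p_warn_given_attack * p_attack + p_warn_given_benign * (1 - p_attack)
p_attack_given_warn = p_warn_given_attack * p_attack / p_warn
print(f"{p_attack_given_warn:.4%}")   # ~0.0099%: so users learn to click through every warning
```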
A recent IMF report on shadow banking places it in excess of $70 trillion.
"Shadow banking can play a beneficial role as a complement to traditional banking by expanding access to credit or by supporting market liquidity, maturity transformation and risk sharing," the IMF said in the report. "It often, however, comes with bank-like risks, as seen during the 2007-08 global financial crisis."
It's a concern, say the bankers, it keeps the likes of Jamie Dimon up at night. But, what is it? What is this thing called shadow banking? For that, the IMF report has a nice graphic:
Aha! It's the new stuff: securitization, hedge funds, Chinese 'wealth management products' etc. So what we have here is a genie that is out of the bottle. As described at length, the invention of securitization allows a shift from banking to markets which is unstoppable.
In theoretical essence, markets are more efficient than middlemen, although you'd be hard pressed to call either the markets or banking 'efficient' from recent history.
Either way, this genie is laughing and dancing. The finance industry had its three wishes, and now we're paying the cost.
What follows is the clearest exposition of the doublethink surrounding the word 'trust' that I've seen so far. This post by Jerry Leichter on the Crypto list doesn't actually solve the definitional issue, but it does map out the minefield nicely. Trustworthy?
On Jul 20, 2014, at 1:16 PM, Miles Fidelman <...> wrote:
>> On 19/07/2014 20:26 pm, Dave Horsfall wrote:
>>>
>>> A trustworthy system is one that you *can* trust;
>>> a trusted system is one that you *have* to trust.
>>
> Well, if we change the words a little, the government
> world has always made the distinction between:
> - certification (tested), and,
> - accreditation (formally approved)
The words really are the problem. While "trustworthy" is pretty unambiguous, "trusted" is widely used to mean two different things: We've placed trust in it in the past (and continue to do so), for whatever reasons; or as a synonym for trustworthy. The ambiguity is present even in English, and grows from the inherent difficulty of knowing whether trust is properly placed: "He's a trusted friend" (i.e., he's trustworthy); "I was devastated when my trusted friend cheated me" (I guess he was never trustworthy to begin with).
In security lingo, we use "trusted system" as a noun phrase - one that was unlikely to arise in earlier discourse - with the *meaning* that the system is trustworthy.
Bruce Schneier has quoted a definition from some contact in the spook world: A trusted system (or, presumably, person) is one that can break your security. What's interesting about this definition is that it's like an operational definition in physics: It completely removes elements about belief and certification and motivation and focuses solely on capability. This is an essential aspect that we don't usually capture.
When normal English words fail to capture technical distinctions adequately, the typical response is to develop a technical vocabulary that *does* capture the distinctions. Sometimes the technical vocabulary simply re-purposes common existing English words; sometimes it either makes up its own words, or uses obscure real words - or perhaps words from a different language. The former leads to no end of problems for those who are not in the field - consider "work" or "energy" in physics. The latter causes those not in the field to believe those in it are being deliberately obscurantist. But for those actually in the field, once consensus is reached, either approach works fine.
The security field is one where precise definitions are *essential*. Often, the hardest part in developing some particular secure property is pinning down precisely what the property *is*! We haven't done that for the notions surrounding "trust", where, to summarize, we have at least three:
1. A property of a sub-system a containing system assumes as part of its design process ("trusted");
2. A property the sub-system *actually provides* ("trustworthy").
3. A property of a sub-system which, if not attained, causes actual security problems in the containing system (spook definition of "trusted").
As far as I can see, none of these imply any of the others. The distinction between 1 and 3 roughly parallels a distinction in software engineering between problems in the way code is written, and problems that can actually cause externally visible failures. BTW, the software engineering community hasn't quite settled on distinct technical words for these either - bugs versus faults versus errors versus latent faults versus whatever. To this day, careful papers will define these terms up front, since everyone uses them differently.
There's a fascinating signalling opportunity going on in US politics. As we all know, 99% of the seats in the US Congress are paid for by contributions from corporate funders, through a mechanism called PACs, or political action committees. Typically, the well-funded campaigns win the seats, and for that you need a big fat PAC with powerful corporate wallets behind it.
Lawrence Lessig decided to do something about it.
"Yes, we want to spend big money to end the influence of big money... Ironic, I get it. But embrace the irony."
So, fighting fire with fire, he started the Mayday PAC:
"We’ve structured this as a series of matched-contingent goals. We’ve got to raise $1 million in 30 days; if we do, we’ll get that $1 million matched. Then we’ve got to raise $5 million in 30 days; if we do, we’ll get that $5 million matched as well. If both challenges are successful, then we’ll have the money we need to compete in 5 races in 2014. Based on those results, we’ll launch a (much much) bigger effort in 2016 — big enough to win."
They got to their first target; the second, of $5m, closes on the 30th of June. Larry claims to have been inspired by Aaron Swartz:
“How are you ever going to address those problems so long as there’s this fundamental corruption in the way our government works?” Swartz had asked.
Something much at the core of the work I do in Africa.
The signalling opportunity is the ability to influence total PAC spending by claiming to balance it out. If MayDay PAC states something simple such as "we will outspend the biggest spend in the US Congress today," then how do the backers of the #1-financed candidate respond to the signal?
As the backers know that their money will be balanced out, it will no longer be efficacious to buy their decisions *with the #1 candidate*. They'll go elsewhere with their money, because to back their big man means to also attract the MayDay PAC.
Which will then leave the #2 paid seat in Congress at risk ... who will also commensurately lose funds. And so on ... A knock-on effect could rip the funding rug from many top campaigns, leveraging Lessig's measly $12m way beyond its apparent power.
A fascinating experiment.
The challenge of capturing people’s attention isn’t lost on Lessig. When asked if anyone has told him that his idea is ludicrous and unlikely to work, he answers with a smile: “Yeah, like everybody.”
Sorry, not this anybody. This will work. Economically speaking, signalling does work. Go Larry!
It's an article of faith that accounting is at the core of cryptocurrencies. Here's a nice story along those lines, h/t to Graeme:
Ilya Grigorik provides a ground-up technologist's description of Bitcoin called "The Minimum Viable Blockchain." He starts at bartering, goes through triple-entry and the replacement of the intermediary with the blockchain, and then on to explain how all the perverse features strengthen the blockchain. It's interesting to see how others see the nexus between triple-entry and bitcoin, and I think it is going to be one of future historians' puzzles to figure out how it all relates.
Both Bob and Alice have known each other for a while, but to ensure that both live up to their promise (well, mostly Alice), they agree to get their transaction "notarized" by their friend Chuck. They make three copies (one for each party) of the above transaction receipt indicating that Bob gave Alice a "Red stamp". Both Bob and Alice can use their receipts to keep account of their trade(s), and Chuck stores his copy as evidence of the transaction. Simple setup but also one with a number of great properties:
- Chuck can authenticate both Alice and Bob to ensure that a malicious party is not attempting to fake a transaction without their knowledge.
- The presence of the receipt in Chuck's books is proof of the transaction. If Alice claims the transaction never took place then Bob can go to Chuck and ask for his receipt to disprove Alice's claim.
- The absence of the receipt in Chuck's books is proof that the transaction never took place. Neither Alice nor Bob can fake a transaction. They may be able to fake their copy of the receipt and claim that the other party is lying, but once again, they can go to Chuck and check his books.
- Neither Alice nor Bob can tamper with an existing transaction. If either of them does, they can go to Chuck and verify their copies against the one stored in his books.
What we have above is an implementation of "triple-entry bookkeeping", which is simple to implement and offers good protection for both participants. Except, of course you've already spotted the weakness, right? We've placed a lot of trust in an intermediary. If Chuck decides to collude with either party, then the entire system falls apart.
Grigorik then uses public key cryptography to ensure that the receipt becomes evidence that is reliable for all parties; which is how I built it, and I'm pretty sure that was what was intended by Todd Boyle.
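For the flavour of how a signed receipt carries its own evidence, here is a minimal sketch -- hypothetical names, the Python 'cryptography' package standing in, and not Ricardo's nor Grigorik's actual code:

    # Minimal signed-receipt sketch (illustrative only).
    # The issuer signs the receipt; Alice and Bob each hold a copy and can verify
    # it independently, so the signed receipt itself becomes the evidence.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    issuer_key = Ed25519PrivateKey.generate()
    issuer_pub = issuer_key.public_key()

    receipt = b"Bob pays Alice: 1 Red stamp, ref 42"
    signature = issuer_key.sign(receipt)

    # Any holder of the receipt, the signature and the issuer's public key can check it.
    try:
        issuer_pub.verify(signature, receipt)
        print("receipt verifies: the entry is the transaction")
    except InvalidSignature:
        print("receipt does not verify")

Once signed, neither Alice, Bob nor the issuer can quietly alter the record; the dominant evidence is the receipt itself.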
However, he walks a different path and uses the signed receipts as a way to drop the intermediary and have Alice and Bob keep separate, independent ledgers. I'd say this is more a means to an end, as Grigorik is trying to explain Bitcoin, and the central tenet of that cryptocurrency was the famous elimination of a centralised intermediary.
Moral of the story? Be (very) careful about your choice of the intermediary!
I don't have time right now to get into the rest of the article, but so far it does seem like a very good engineer's description. Well worth a read to sort your head out when it comes to all the 'extra' bits in the blockchain form of cryptocurrencies.
The separation of payments from banks is accelerating. News from Haiti:
The past year in Haiti has been marked by the slow pace of the earthquake recovery. But the poorest nation in the hemisphere is moving quickly on something else - setting up "mobile money" networks to allow cell phones to serve as debit cards. The systems have the potential to allow Haitians to receive remittances from abroad, send cash to relatives across town or across the country, buy groceries and even pay for a bus ride all with a few taps of their cell phones.
Using phones to handle money payments is something we know works. It works so well that some 35% of the economy in Kenya moves this way (I forget the exact numbers). It works so well that Kenya doesn't care about the banks freezing up the economy any more, because they have an alternate system; they have resilience in payments. It works so well that everyone can do mPesa, even the unbanked, which is most of them, since bank accounts cost the same in Kenya as they do in the west.
It works so well that mPesa has been the biggest driver of new bank accounts...
Yet mPesa hasn't been followed around the world. The reason is pretty simple -- regulatory interference. Central banks, I'm looking at you. In Kenya, the mission of "financial inclusion" won the day; in other countries around the world, central banks worked against wealth for the poorest by denying them payments on the mobile.
Is it that drastic? Yes. Were the central banks well-minded? Sure, they thought they were doing the right thing, but they were wrong. Mobile money equals wealth for the poor and there is no way around that fact. Stopping mobile money means taking money from the poor, in the big picture. Everything else is noise.
So when the poorest of the poor -- the Haitian earthquake victims -- were left in the mud, there were no banks left to serve them (sell to them?), and the only way to get value out there turned out to be the mobile phone.
That included giving the users free mobile phones.
Can you see an important value point here? The value to society of getting mobile money to the poor is in excess of the price of the mobile phone.
Well, this only happens in poor countries, right? Wrong again. The financial costs that are placed on the poor of every country by the decisions of the central banks are common across all countries. Now comes Walmart, for that very same reason:
In a move that threatens to upend another piece of the financial services industry, Walmart, the country’s largest retailer, announced on Thursday that it would allow customers to make store-to-store money transfers within the United States at cut-rate fees. This latest offer, aimed largely at lower-income shoppers who often rely on places like check-cashing stores for simple transactions, represents another effort by the giant retailer to carve out a space in territory that once belonged exclusively to traditional banks.
...
Lower-income consumers have been a core demographic for Walmart, but in recent quarters those shoppers have turned increasingly to dollar stores.
...
More than 29 percent of households in the United States did not have a savings account in 2011, and about 10 percent of households did not have a checking account, according to a study sponsored by the Federal Deposit Insurance Corporation. And while alternative financial products give consumers access to services they might otherwise be denied, people who are shut out of the traditional banking system sometimes find themselves paying high fees for transactions as basic as cashing a check.
See the common thread with Bitcoin? Message to central banks: shut the people out, and they will eventually respond. The tide is now turning, and banks and central banks no longer have the credibility they once had to stomp on the poor. The question for the future is, which central banks will break ranks first, and align themselves with their countries and their peoples?
Stephen Mason, the world's foremost expert on the topic, writes (edited for style):
The entire Digital Evidence and Electronic Signature Law Review is now available as open source for free here: Current Issue, Archives. All of the articles are also available via university library electronic subscription services, which require accounts: EBSCO Host, HeinOnline, v|lex (has abstracts).
If you know of anybody who might have the knowledge to consider submitting an article to the journal, please feel free to let them know of the journal.
This is significant news for the professional financial cryptographer! For those who are interested in what all this means, this is the real stuff. Let me explain.
Back in the 1980s and 1990s, there was a little thing called the electronic signature, and its RSA cousin, the digital signature. Businesses, politicians, spooks and suppliers dreamed that they could inspire a world-wide culture of digitally signing your everything with a hand wave, with the added joy of non-repudiation.
They failed, and we thank our lucky stars for it. People do not want to sign away their life every time some little plastic card gets too close to a scammer, and thankfully humanity had the good sense to reject the massively complicated infrastructure that was built to enslave them.
However, a suitably huge legacy of that folly was the legislation around the world to regulate the use of electronic signatures -- something that Stephen Mason has catalogued here.
In contrast to the nuisance level of electronic signatures, a separate and far more significant development transpired in parallel: the increasing use of digital techniques to create trails of activity, which led to the rise of digital evidence, and its eventual domination in legal affairs.
Digital discovery is now the main act, and the implications have been huge, if little understood outside legal circles, perhaps because of the persistent myth in technology circles that without digital signatures, evidence was worth less.
Every financial cryptographer needs to understand the implications of digital evidence, because without this wisdom, your designs are likely crap. They will fail when faced with real-world trials, in both senses of the word.
I can't write the short primer on digital evidence for you -- I'm not the world's expert, Stephen is! -- but I can /now/ point you to where to read. That's just one huge issue, hitherto locked away behind a hugely dominating paywall. Browse away at all 10 issues!
In the long running saga of the Snowden revelations, another fact is confirmed by Ashkan Soltani. It's the last point on this slide showing some nice redaction minimisation.
In words:
(U) The CCP expects this Project to accomplish the following in FY 2013:
Confirmed: the NSA manipulates the commercial providers of cryptography to make it easier to crack their product. When I said, avoid American-influenced cryptography, I wasn't joking: the Consolidated Cryptologic Program (CCP) is consolidating access to your crypto.
Addition: John Young forwarded me the original documents (Guardian and NYT) and their blanket introduction makes it entirely clear:
(TS//SI//NF) The SIGINT Enabling Project actively engages the US and foreign IT industries to covertly influence and/or overtly leverage their commercial products' designs. These design changes make the systems in question exploitable through SIGINT collection (e.g., Endpoint, MidPoint, etc.) with foreknowledge of the modification. ....
Note also that the classification for the goal above differs in that it is NF -- No Foreigners -- whereas most of the other goals listed are REL TO USA, FVEY which means the goals can be shared with the Five Eyes Intelligence Community (USA, UK, Canada, Australia, New Zealand).
The more secret it is, the more clearly important this goal is. The only other goal with this level of secrecy was the one suggesting an actual target of sensitivity -- fair enough. More confirmation:
(U) Base resources in this project are used to:
- (TS//SI//REL TO USA, FVEY) Insert vulnerabilities into commercial encryption systems, IT systems, networks and endpoint communications devices used by targets.
- ...
and in goals 4, 5:
- (TS//SI//REL TO USA, FVEY) Complete enabling for [XXXXXX] encryption chips used in Virtual Private Network and Web encryption devices. [CCP_00009].
- (TS//SI//REL TO USA, FVEY) Make gains in enabling decryption and Computer Network Exploitation (CNE) access to fourth generation/Long Term Evolution (4G/LTE) networks via enabling. [CCP_00009]
Obviously, we're interested in the [XXXXXX] above. But the big picture is complete: the NSA wants backdoor access to every chip used for encryption in VPNs, wireless routers and the cell network.
This is no small thing. There should be no doubt now that the NSA actively looks to seek backdoors in any interesting cryptographic tool. Therefore, the NSA is numbered amongst the threats, and so are your cryptographic providers, if they are within reach of the NSA.
Granted that other countries might behave the same way. But the NSA has the resources, the will, the market domination (consider Microsoft's CAPI, Java's Cryptography Engine, Cisco & Juniper on routing, FIPS effect on SSL, etc) and now the track record to make this a more serious threat.
Dave Cohen says: "I wonder if I have what it takes to make presentations at the NSA."
H/t to Jeroen. So I wonder if the Second World Cryptowars are really on?
Our Mission: To bring the world our unique end-to-end encrypted protocol and architecture that is the 'next-generation' of private and secure email. As founding partners of The Dark Mail Alliance, both Silent Circle and Lavabit will work to bring other members into the alliance, assist them in implementing the new protocol and jointly work to proliferate the world's first end-to-end encrypted 'Email 3.0' throughout the world's email providers. Our goal is to open source the protocol and architecture and help others implement this new technology to address privacy concerns against surveillance and back door threats of any kind.
Could be. In the context of the new google sniffing revelations, it may now be clearer how the NSA is accessing all of the data of all of the majors. What do we think about the NSA? Some aren't happy, like Kenton Varda:
If the NSA is indeed tapping communications from Google's inter-datacenter links then they are almost certainly using the open source protobuf release (i.e. my code) to help interpret the data (since it's almost all in protobuf format). Fuck you, NSA.
What about google? Some outrage from the same source:
I had to admit I was shocked by one thing: I'm amazed Google is transmitting unencrypted data between datacenters.
is met with Varda's comment:
We're (I think) talking about Google-owned fiber between Google-owned endpoints, not shared with anyone, and definitely not the public internet. Physically tapping fiber without being detected is pretty difficult and a well-funded state-sponsored entity is probably the only one that could do it.
Ah. So google did some risk analysis and thought this was one risk they could pass on. Google's bad. A bit of research turns up this, from BlackHat in 2003:
Commercially available taps are readily available that produce an insertion loss of 3 dB which cost less than $1000! Taps currently in use by state-sponsored military and intelligence organizations have insertion losses as low as 0.5 dB!
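For a sense of scale, insertion loss converts directly to the fraction of light removed from the fibre. A back-of-envelope conversion (my arithmetic, treating all loss as diverted light, which slightly overstates the tap):

    # Convert optical insertion loss (dB) to the fraction of power removed.
    # Illustrative arithmetic only.
    def tapped_fraction(loss_db: float) -> float:
        remaining = 10 ** (-loss_db / 10)   # power that continues down the fibre
        return 1 - remaining

    for loss in (3.0, 0.5):
        print(f"{loss} dB insertion loss -> ~{tapped_fraction(loss):.0%} of the light removed")
    # 3 dB   -> ~50% (crude, detectable if anyone is watching the power levels)
    # 0.5 dB -> ~11% (easy to miss)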
That BlackHat document points to published accounts from 2001 of the NSA tapping fibre, and I found somewhere a hint that it was first publicly revealed in 1999. I'm pretty sure we knew about the USS Jimmy Carter back then, although my memory fades...
So maybe Google thought it hard to tap fibre, but actually we've known for over a decade that is not so. Google's bad, they are indeed negligent. Jeroen van Gelderen says:
Correct me if I'm wrong but you promise that "[w]e restrict access to personal information to Google employees, contractors and agents who need to know that information in order to process it for us, and who are subject to strict contractual confidentiality obligations and may be disciplined or terminated if they fail to meet these obligations."
Indeed, as a matter of degree, I would say google are grossly negligent: the care that they show for physical security at their data centers, and all the care they profess in other security matters, was clearly not shown once the fiber left their house.
Meanwhile, given the nature of the NSA's operations, some might ask (as Jeroen does):
Now that you have been caught being utterly negligent in protecting customer data, to the point of blatantly violating your own privacy policies, can you please tell us which of your senior security people were responsible for downplaying the risk of your thousands of miles of unsecured, easily accessible fibers being tapped? Have they been fired yet?
Chances of that one being answered are pretty slim. I can imagine Facebook being pretty relaxed about this. I can sort of see Apple dropping the ball on this. I'm not going to spare any time with Microsoft, who've been on the contract teat since time immemorial.
But google? That had security street cred? Time to call a spade a spade: if google are not analysing and revealing how they came to miss these known and easy threats, then how do we know they aren't conspirators?
In the ongoing tit-for-tat between the White House and the world (Western cyberwarriors versus Chinese cyberspies; Obama and will-he-won't-he scramble his forces to intercept a 29 year old hacker who is-flying-isn't-flying; the ongoing search for the undeniable Iranian casus belli; the secret cells in Apple and Google that are too secret to be found but no longer secret enough to be denied), one does wonder...
Who can we believe on anything? Here's a data point. This must be the loudest YOU-ARE-STOOPID response I have ever seen from a government agency to its own populace:
The National Security Agency lacks the technology to conduct a keyword search of its employees’ emails, even as it collects data on every U.S. phone call and monitors online communications of suspected terrorists, according to NSA’s freedom of information officer. “There’s no central method to search an email at this time with the way our records are set up, unfortunately,” Cindy Blacker told a reporter at the nonprofit news website ProPublica.
Ms. Blacker said the agency’s email system is “a little antiquated and archaic,” the website reported Tuesday.
One word: counterintelligence. The NSA is a spy agency. It has a department that is mandated to look at all its people for deviation from the cause. I don't know what it's called, but more than likely there are actually several departments with this brief. And they can definitely read your email. In bulk, in minutiae, and in ways we civilians can't even conceive.
It is standard practice at most large organizations — not to mention a standard feature of most commercially available email systems — to be able to do bulk searches of employees’ email as part of internal investigations, discovery in legal cases or compliance exercises.
The claim that the NSA cannot look at its own email system is either a) a declaration of materially aiding the enemy by not completing its necessary and understood role of counterintelligence (in which case it should be tried in a military court, being wartime, right?), or b) a downright lie to a stupid public.
I'm inclined to think it's the second (which leaves a fascinating panoply of civilian charges). In which case, one wonders just how STOOPID the people governing the NSA are. Here's another data point:
The numbers tell the story — in votes and dollars. On Wednesday, the House voted 217 to 205 not to rein in the NSA’s phone-spying dragnet. It turns out that those 217 “no” voters received twice as much campaign financing from the defense and intelligence industry as the 205 “yes” voters..... House members who voted to continue the massive phone-call-metadata spy program, on average, raked in 122 percent more money from defense contractors than those who voted to dismantle it.
.... Lawmakers who voted to continue the NSA dragnet-surveillance program averaged $41,635 from the pot, whereas House members who voted to repeal authority averaged $18,765.
So one must revise one's opinion lightly in the face of overwhelming financial evidence: Members of Congress are financially savvy, anything but stupid.
Which makes the voting public...
Eighteenth International Conference
March 3–7, 2014
Accra Beach Hotel & Spa
Barbados
Financial Cryptography and Data Security is a major international forum for research, advanced development, education, exploration, and debate regarding information assurance, with a specific focus on financial, economic and commercial transaction security. Original works focusing on securing commercial transactions and systems are solicited; fundamental as well as applied real-world deployments on all aspects surrounding commerce security are of interest. Submissions need not be exclusively concerned with cryptography. Systems security, economic or financial modeling, and, more generally, inter-disciplinary efforts are particularly encouraged.
Topics of interests include, but are not limited to:
Anonymity and Privacy
Applications of Game Theory to Security
Auctions and Audits
Authentication and Identification
Behavioral Aspects of Security and Privacy
Biometrics
Certification and Authorization
Cloud Computing Security
Commercial Cryptographic Applications
Contactless Payment and Ticketing Systems
Data Outsourcing Security
Digital Rights Management
Digital Cash and Payment Systems
Economics of Security and Privacy
Electronic Crime and Underground-Market Economics
Electronic Commerce Security
Fraud Detection
Identity Theft
Legal and Regulatory Issues
Microfinance and Micropayments
Mobile Devices and Applications Security and Privacy
Phishing and Social Engineering
Reputation Systems
Risk Assessment and Management
Secure Banking and Financial Web Services
Smartcards, Secure Tokens and Secure Hardware
Smart Grid Security and Privacy
Social Networks Security and Privacy
Trust Management
Usability and Security
Virtual Goods and Virtual Economies
Voting Systems
Web Security
Important Dates
Workshop Proposal Submission: July 31, 2013
Workshop Proposal Notification: August 20, 2013
Paper Submission: October 25, 2013, 23:59 UTC
(19:59 EDT, 16:59 PDT) -- FIRM DEADLINE, NO EXTENSIONS WILL BE GRANTED
Paper Notification: December 15, 2013
Final Papers: January 31, 2014
Poster and Panel Submission: January 8, 2014
Poster and Panel Notification: January 15, 2014
Conference: March 3-7, 2014
[ snip ... more at CFP ]
In an extraordinary clean sweep of disclosure from the Washington Post and the Guardian:
The National Security Agency and the FBI are tapping directly into the central servers of nine leading U.S. Internet companies, extracting audio and video chats, photographs, e-mails, documents, and connection logs that enable analysts to track foreign targets, according to a top-secret document obtained by The Washington Post.
The process is by direct connection to the servers, and requires no intervention by the companies:
Equally unusual is the way the NSA extracts what it wants, according to the document: “Collection directly from the servers of these U.S. Service Providers: Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, Apple.”....
Dropbox, the cloud storage and synchronization service, is described as “coming soon.”
From outside, direct access might appear unusual, but it is actually the best way as far as the NSA is concerned. Not only does it give them access at Level Zero, and probably better access than the company has itself, it also provides the victim supplier plausible deniability:
“We do not provide any government organization with direct access to Facebook servers,” said Joe Sullivan, chief security officer for Facebook. ....“We have never heard of PRISM,” said Steve Dowling, a spokesman for Apple. “We do not provide any government agency with direct access to our servers, ..." ....
“Google cares deeply about the security of our users’ data,” a company spokesman said. “We disclose user data to government in accordance with the law, and we review all such requests carefully. From time to time, people allege that we have created a government ‘back door’ into our systems, but Google does not have a ‘back door’ for the government to access private user data.”
Microsoft also provided a statement: “.... If the government has a broader voluntary national security program to gather customer data we don’t participate in it.”
Yahoo also issued a denial. “Yahoo! takes users’ privacy very seriously,” the company said in a statement. “We do not provide the government with direct access to our servers, systems, or network.”
How is this apparent contradiction possible? It is generally done via secret arrangements not with the company, but with the employees. The company does not provide back-door access, but the people do. The trick is to place people with excellent tech skills and dual loyalties into strategic locations in the company. These 'assets' will then execute the work required in secret, and spare the company and most all of their workmates the embarrassment.
Patriotism and secrecy are the keys. The source of these assets is easy: retired technical experts from the military and intelligence agencies. There are huge numbers of these people exiting out of the armed forces and intel community every year, and it takes only a little manipulation to present stellar CVs at the right place and time. Remember, the big tech companies will always employ a great CV that comes highly recommended by unimpeachable sources, and leapfrogging into a stable, very well paid civilian job is worth any discomfort.
Everyone wins. It is legal, defensible and plausibly deniable. Such people are expert at keeping secrets -- about their past and about their current work. This technique is age-old, refined and institutionalised for many a decade.
Questions remain: what to do about it, and how much to worry about it? Once it has started, this insertion tactic is rather difficult to stop and root out. At CAcert, we cared and a programme developed over time that was strong and fair -- to all interests. Part of the issue is dealing with the secrecy of it all:
Government officials and the document itself made clear that the NSA regarded the identities of its private partners as PRISM’s most sensitive secret, fearing that the companies would withdraw from the program if exposed. “98 percent of PRISM production is based on Yahoo, Google and Microsoft; we need to make sure we don’t harm these sources,” the briefing’s author wrote in his speaker’s notes.
But for the big US companies, is it likely that they care? Enough? I am not seeing it, myself, but if they are interested, there are ways to deal with this. Fairly, legally and strongly.
How much should we worry about it? That depends on (a) what is collected, (b) who sees it, and (c) who's asking the question?
There has been “continued exponential growth in tasking to Facebook and Skype,” according to the PRISM slides. With a few clicks and an affirmation that the subject is believed to be engaged in terrorism, espionage or nuclear proliferation, an analyst obtains full access to Facebook’s “extensive search and surveillance capabilities against the variety of online social networking services.” According to a separate “User’s Guide for PRISM Skype Collection,” that service can be monitored for audio when one end of the call is a conventional telephone and for any combination of “audio, video, chat, and file transfers” when Skype users connect by computer alone. Google’s offerings include Gmail, voice and video chat, Google Drive files, photo libraries, and live surveillance of search terms.
Firsthand experience with these systems, and horror at their capabilities, is what drove a career intelligence officer to provide PowerPoint slides about PRISM and supporting materials to The Washington Post in order to expose what he believes to be a gross intrusion on privacy. “They quite literally can watch your ideas form as you type,” the officer said.
Live access to everything, it seems. So who sees it?
My rule of thumb was that if the information stayed in the NSA, then that was fine. Myself, my customers and my partners are not into "terrorism, espionage or nuclear proliferation." So as long as there is a compact with the intel community to keep that information clean and tight, it's not so worrying to our business, our privacy, our people or our legal situation.
But there is no such compact. Firstly, they have already engaged the FBI and the US Department of Justice as partners in this information:
In exchange for immunity from lawsuits, companies such as Yahoo and AOL are obliged to accept a “directive” from the attorney general and the director of national intelligence to open their servers to the FBI’s Data Intercept Technology Unit, which handles liaison to U.S. companies from the NSA. In 2008, Congress gave the Justice Department authority for a secret order from the Foreign Intelligence Surveillance Court to compel a reluctant company “to comply.”
Anyone with a beef with the Feds is at risk of what would essentially be a corrupt bypass of the justice system's fair discovery process (see this for the start of that process).
Secondly, their credibility is zero: The NSA has lied about their access. They have deceived most if not all employees of the companies they have breached. They've almost certainly breached the US constitution and the US law in gaining warrant-free access to citizens. Dismissively. From the Guardian:
"Fisa was broken because it provided privacy protections to people who were not entitled to them," the presentation claimed. "It took a Fisa court order to collect on foreigners overseas who were communicating with other foreigners overseas simply because the government was collecting off a wire in the United States. There were too many email accounts to be practical to seek Fisas for all."
The FISA court that apparently governs their access is evidently ungovernable, as even members of Congress cannot breach its secrecy.
And that's within their own country -- the NSA feels that it is under no such restrictions or niceties outside their own country.
A reasonable examination of the facts and the record of the NSA (1, 2, 3) would therefore conclude that they cannot be trusted to keep the information secret. American society should therefore be worried. Scared, even. The risk of corruption of the FBI is by itself enough to pull the plug on not just this programme, but the system that allowed it to arise.
What does it mean to foreign society, companies, businesses, and people? Not a lot different, as all of this was reasonable anyway. Under the history-not-rules of foreign espionage, anything goes. The only difficulty likely to be experienced is when the NSA conspires with American companies to benefit them both, or when American assets interfere with commercial businesses that they've targeted as assisting enemies.
One thing that might now get a boost is the Internet in other countries:
The presentation ... noted that the US has a "home-field advantage" due to housing much of the internet's architecture.
Take note, the rest of the world!
It's confirmed -- Skype is revealing traffic to Microsoft.
A reader informed heise Security that he had observed some unusual network traffic following a Skype instant messaging conversation. The server indicated a potential replay attack. It turned out that an IP address which traced back to Microsoft had accessed the HTTPS URLs previously transmitted over Skype. Heise Security then reproduced the events by sending two test HTTPS URLs, one containing login information and one pointing to a private cloud-based file-sharing service. A few hours after their Skype messages, they observed the following in the server log:
65.52.100.214 - - [30/Apr/2013:19:28:32 +0200] "HEAD /.../login.html?user=tbtest&password=geheim HTTP/1.1"
[Utrace map: the access is coming from systems which clearly belong to Microsoft.]
They too had received visits to each of the HTTPS URLs transmitted over Skype from an IP address registered to Microsoft in Redmond. URLs pointing to encrypted web pages frequently contain unique session data or other confidential information. HTTP URLs, by contrast, were not accessed. In visiting these pages, Microsoft made use of both the login information and the specially created URL for a private cloud-based file-sharing service.
Now, the boys & girls at Heise are switched-on, unlike their counterparts on the eastern side of the pond. Notwithstanding, Adam Back of hashcash fame has confirmed the basics: URLs he sent to me over skype were picked up and probed by Microsoft.
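Anyone can rerun the Heise test for themselves. The idea, in a minimal sketch (the domain and log path are assumptions, not Heise's setup): mint a URL that exists nowhere else, send it over the channel under test, then watch your web server's log for visitors you never told about it.

    # Canary-URL sketch: does a messaging service quietly fetch links you send?
    # The domain and log path are illustrative assumptions.
    import secrets

    token = secrets.token_hex(16)
    canary = f"https://example.org/canary/{token}"
    print("Send this URL over the channel under test:", canary)

    # ...later, see who came looking for it:
    with open("/var/log/nginx/access.log") as log:
        for line in log:
            if token in line:
                print("canary fetched:", line.strip())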
What's going on? Microsoft commented:
In response to an enquiry from heise Security, Skype referred them to a passage from its data protection policy: "Skype may use automated scanning within Instant Messages and SMS to (a) identify suspected spam and/or (b) identify URLs that have been previously flagged as spam, fraud, or phishing links."
A spokesman for the company confirmed that it scans messages to filter out spam and phishing websites.
Which means Microsoft can scan ALL messages to ANYONE. Which means they are likely fed into Echelon, either already, or just as soon as someone in the NSA calls in some favours. 10 minutes later they'll be realtimed to support, and from thence to datamining because they're pissed that google's beating the hell out of Microsoft on the Nasdaq.
Game over?
Or exaggeration? It's just fine and dandy, as all the NSA are interested in is matching the URLs to jihadist websites. I don't care so much for the jihadists. But, from the manual of citizen control comes this warning:
First they came for the jihadists,
and I didn't speak out because I wasn't a jihadist.
Then they came for the cypherpunks,
and I didn't speak out because I wasn't a cypherpunk.
Then they came for the bloggers,
and I didn't speak out because I wasn't a blogger.
Then they came for me,
and there was no one left to speak for me.
Skype, game over.
Bitcoin has surged in price in the post-Cyprus debacle. To put the finger on the issue, the Troika recommended that the bank deposit holders be hit, and thus the people should directly participate in the pain of bank reconstruction. This is the right solution and the wrong solution, as I pointed out at the time.
Now we see why it is wrong. The faith of the people in banks is undermined, and rightly so. The "only this time" solution of haircutting the depositors is nonsense, and the people know it. (For those who don't follow this sort of arcana, the FDIC has already rolled out the plan for hitting the depositors in the USA, and the other countries are following suit.)
Then there is the growing momentum of ordinary people putting their faith elsewhere. With the recent surge of post-Cyprus purchases of Bitcoin, it would seem that more and more people are piling in. I counted 3 independent pings in the last 2 days: journos, geeks and my mother.
This tells me that ordinary people are getting involved in Bitcoin. The old adage about bubbles is that you should get out when the delivery boy gives you a stock tip when riding up in the lift (elevator). And bubble is what we are seeing, as Bitcoin price is purely driven by supply and demand, and right now the surge in demand is outstripping the supply.
But let's take a step back and ponder where we are going. The big picture. A year ago I wrote with Philipp Güring about the effect of Gresham's Law and criminal elements on Bitcoin, and opined that this would limit the Bitcoin unit in the long term. Gresham's Law is simply that, a law, and is mostly fixed by the mining algorithm (which will end) and the block-agreement algorithm (which continues).
However, the criminal effect is an artifact of people and is an effect that is reversible.
If the mass of people get into it, then this can swamp the bad elements and reverse the effect of the disease. And, this isn't abnormal, in fact it is the normal new market growth process: early adopters are replaced by the mass market. (People familiar with the VHS story will see rhymes here.) If mass adoption reduces the nasty element to the point where it is below some threshold of cancer, then the unit can and should survive.
Right now, it feels like that - everyone is positive about Bitcoin. We can predict that the mood will reverse rapidly when the current bubble bursts and the stories of loss circulate. But maybe Bitcoin will survive that too; this is its second bubble, and may not be its last.
Which leads us to consider the regulatory angle. I would say the regulators have a problem. A big problem! If we analogise the Bitcoin market along the lines of say MP3 music, back in 1997, we could be at the cusp of a revolution which is going to undermine the CBs in a big way. Bitcoin could survive, be mostly illegal in the eyes of the regulator, and mostly acceptable in the eyes of the people.
It's game on. The regulators are starting to issue new regs, new guidance notes, etc. But that isn't going to do it, and may even backfire, because while the regulators are looking to attack Bitcoin from the front, they are still working to undermine the credibility of banks from behind. A credible message is lacking.
And the timing couldn't be better, as the European crisis gathers steam. Spain -- which is where the post-Cyprus Bitcoin surge started -- and Italy are likely both in for another bailout. Slovenia was in the news this week. No analyst I've read believes that Europe can survive more than one big bailout, nor that the USA can survive Europe.
Which leads to that old Chinese curse -- may you regulators live in interesting times. This would be a fantastic time to be there, debating and crafting solutions and new world orders. Good luck, but a word of advice: your challenge is to find a credible message -- from the point of view of the people.
We've all seen the various rumours of digital and electronic attacks carried out over the years by the USA on those countries it targets. Pipelines in Russia, fibre networks in Iraq, etc. And we've all watched the rise of cyber-sabre rattling in Washington DC, for commercial gain.
What is curious is whether there are any limits on this behaviour. Sigint (listening) and espionage are one thing, but outright destruction takes things to a new plane.
Which Stuxnet evidences. Reportedly, it destroyed some 20% or so of the Iranian centrifuge capacity (1, 2). And, the tracks left by Stuxnet were so broad, tantalising and insulting that the anti-virus community felt compelled to investigate and report.
But what do other countries think of this behaviour? Is it isolated? Legal? Does the shoe fit for them as well?
Now comes NATO to opine that the attack was “an act of force”:
The 2009 cyberattack by the U.S. and Israel that crippled Iran’s nuclear program by sabotaging industrial equipment constituted “an act of force” and was likely illegal under international law, according to a manual commissioned by NATO’s cyber defense center in Estonia. “Acts that kill or injure persons or destroy or damage objects are unambiguously uses of force,” according to “The Tallinn Manual on the International Law Applicable to Cyber Warfare.”
Michael N. Schmitt, the manual’s lead author, told The Washington Times that “according to the U.N. charter, the use of force is prohibited, except in self-defense.”
That's fairly unequivocal. What to make of this? Well, the USA will deny all and seek to downgrade the report.
James A. Lewis, a researcher at the Center for Strategic and International Studies, said the researchers were getting ahead of themselves and there had not been enough incidents of cyberconflict yet to develop a sound interpretation of the law in that regard. “A cyberattack is generally not going to be an act of force. That is why Estonia did not trigger Article 5 in 2007,” he said, referring to the coordinated DDoS attacks that took down the computer networks of banks, government agencies and media outlets in Estonia that were blamed on Russia, or hackers sympathetic to the Russian government.
Cue all the normal political tricks to call white black and black white. But beyond the normal political bluster and management of the media?
Under the U.N. charter, an armed attack by one state against another triggers international hostilities, entitling the attacked state to use force in self-defense, and marks the start of a conflict to which the laws of war, such as the Geneva Conventions, apply.
What NATO might be suggesting is that if the USA and Israel have cast the first stone, then Iran is entitled to respond. Further, although this conclusion might be more tenuous, if Iran does respond, this is less interesting to alliance partners. Iran would be within its rights:
[The NATO Manual] makes some bold statements regarding retaliatory conduct. According to the manual's authors, it's acceptable to retaliate against cyberattacks with traditional weapons when a state can prove the attack led to death or severe property damage. It also says that hackers who perpetrate attacks are legitimate targets for a counterstrike.
Not only is Iran justified in targeting the hackers in Israel and the USA, but NATO allies might not ride to the rescue. Tough words!
Now is probably a good time to remind ourselves what the point of all this is. We enter alliances which say:
Article 5 of the NATO treaty requires member states to aid other members if they come under attack.
Which leads to: Peace. The point of NATO was peace in Europe, and the point of most alliances (even the ones that trigger widespread war such as WWI) is indeed peace in our time, in our place.
One of the key claims of alliances of peace is that we the parties shall not initiate. This is another game theory thing: we would not want to ally with some other country only to discover they had started a war, to which we are now dragged in. So we all mutually commit to not start a war.
And therefore, Stuxnet must be troubling to the many alliance partners. They see peace now in the Middle East. And they see that the USA and Israel have initiated first strike in cyber warfare.
This is no Pearl Harbour scenario. It's not even an anticipatory self-defence, as, bluster and goading aside, no nation that has developed nuclear weapons has ever used them because of the mechanics of MAD - mutually assured destruction. Iran is not stupid, it knows that use of the weapons would result in immediate and full retaliation. It would be the regime's last act. And, as the USA objective is regime change, this is a key factor.
So it is entirely welcome and responsible of NATO -- in whatever guise it sees fit -- to stand up and say, NO, this is not what the alliance is about. And it can't really be any other way.
News over the weekend has it that Cyprus has agreed to a bailout, but in exchange for the most terrible of conditions: Cypriot depositors are to be taxed at rates from 6.75% to 9.9% of their deposits.
This is utter madness, and the reasons are legion. Speaks the Economist:
EVERYONE agrees that taxpayers should be protected from the cost of bailing out failing banks. But imposing blanket losses on creditors is still taboo. Depositors have escaped the financial crisis largely unscathed for fear of sparking panic, which is why the idea of hitting uninsured depositors in Cypriot banks has caused policymakers angst.
You muck around with deposit holders or your own people at your peril. There is now a fair chance of a bank run in Cyprus, and a non-trivial chance of riots.
Further, the bond holders don't get hit. Not even the unprotected ones!
Worse yet, the status of deposits is enshrined in a century of law, decisions and custom. It is not going to be clear for years whether this levy will survive legal challenge. Consider the mess over Greek bonds in London, and the fact that allegedly big, powerful Russian oligarchs are involved: a legal challenge is a dead certainty.
Finally, and what is the worst reason of all - the signal has been sent. What happened to the Cypriots can and will happen to the Spanish. And the Italians. And if them, the French. And finally, those safe in the north of Europe will now see that they are not safe.
The point is not whether this will happen or not: the point is whether you as an individual saver wish to gamble your money in your bank that it won't happen?
The direction of efforts to improve banks’ liquidity position is to encourage them to hold more deposits; the aim of bail-in legislation planned to come into force by 2018 is to make senior debt absorb losses in the event of a bank failure. The logic behind both of these reform initiatives is that bank deposits have two, contradictory properties. They are both sticky, because they are insured; and they are flighty, because they can be pulled instantly. So deposits are a good source of funding provided they never run. The Cyprus bail-out makes this confidence trick harder to pull off. Other than that, it is a really good deal.
In short, the Cyprus bail-out means: start a run on European banks. Only time will tell how this plays out.
Where to take solace? Perversely, there is an element of justice in this decision. Moral hazard is the problem that has pervaded the corpus bankus for a decade now, and has laid low the financial system.
Moral hazard has it that if you fully insure the risk, then nobody cares. And indeed, nobody in the banking world cares, it seems, since they've all acquired TBTF status. None of the people care, either, as they happily deposited at those banks, even knowing that the financial sector of Cyprus was many times larger than the economy itself.
Go figure ... here comes a financial crisis, and our banks are bigger than our country? What did the Cypriot people do? Did they join the dots and wind back their risk?
However the figures are massaged down, the nub of the problem will remain: a country with a broken banking model. Unlike Greece, brought low by its unsustainable public finances, Cyprus has succumbed to losses in its oversize banks. By mid-2011 the Cypriot banking sector was eight times as big as GDP; its three big commercial banks were five times as large.
No. Moral hazard therefore has it that the stakeholders must be punished for their errors. And the stakeholders of last resort are the Cypriot people, or at least their depositors. And their pensioners, it seems:
In practice the main answer will be to dragoon Cyprus’s pension funds and domestic banks into financing the €4.5 billion of government bonds due to be redeemed over the next three years.
It is highly likely that Cypriot pensioners will lose the lot, as happened in Spain.
Which does nothing to obviate the other arguments listed above. Regardless of this sudden and surprising display of backbone by the Troika, it is still madness. While we may actually be on the cusp of a cure for the disease, the patient might die anyway.
European leaders could at long last bite the bullet and insist on a bail-in of bank creditors to cover expected losses. The snag is that any such action would set alarm-bells ringing for investors with serious money at stake in banks elsewhere in the euro area. Mario Draghi, the ECB’s president, said on March 7th that “Cyprus’s economy is a small economy but the systemic risks may not be small.”
Watch Cyprus with interest, as if your future depends on it. It does.
Human Resources is one of those areas that seemed fatally closed to the geek world. Warning to reader: if you do not think Human Resources displays the highest volatility in ROI of any decision you can make, you're probably not going to want to read anything more of this rant. However, if you are bemused about oddball questions asked at interviews, maybe there is something here for you.
A rant in three parts (I, II, III).
Let's talk about google, which leads the world in infamous recruiting techniques. So much so that an entire industry of soothsayers, diviners and astrologers has sprung up around companies like it, in order to prepare willing victims with recipes of puzzlers, newts' eyes and moon dances.
Why is this? Well, one of the memes in the press is about strange interview questions, and poking sly fun at google in the process:
- "Using only a four-minute hourglass and a seven-minute hourglass, measure exactly nine minutes--without the process taking longer than nine minutes,"
- "A man pushed his car to a hotel and lost his fortune. What happened?"
These oddball questions are all very cute and the sort of teasers we all love to play as children. More. But what do they have to do with google?
To be fair to them, it looks like google don't ask these questions at all and indeed may have banned them entirely, but we need a foil to this topic, so let's play along as if they do spin some curveballs for the fun of it.
Let's answer the implied question of "what's the benefit?" by reference to other so-called oddball questions:
- "If Germans were the tallest people in the world, how would you prove it?" -- Asked at Hewlett-Packard, Product Marketing Manager candidate
- "Given 20 'destructible' light bulbs (which break at a certain height), and a building with 100 floors, how do you determine the height that the light bulbs break?" -- Asked at Qualcomm, Engineering candidate
- "How do you feel about those jokers at Congress?" -- Asked at Consolidated Electrical, Management Trainee candidate
The first one is straight marketing, understanding how to segment the buyers. The second is straight engineering, and indeed every computer scientist should know the answer: binary search. Third one? How to handle a loaded question.
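For the bulb question, here is a minimal sketch of that answer -- the breaks() predicate stands in for actually dropping a bulb, and with 20 bulbs to spare, plain binary search never runs out:

    # Binary search for the lowest floor (1..100) at which a bulb breaks.
    # Assumes a bulb does break somewhere at or below the top floor.
    def lowest_breaking_floor(breaks, top=100):
        lo, hi = 1, top
        while lo < hi:
            mid = (lo + hi) // 2
            if breaks(mid):      # bulb lost, but the answer is at mid or below
                hi = mid
            else:                # bulb survives, the answer is above mid
                lo = mid + 1
        return lo

    # Example: bulbs break from floor 37 upward; about 7 drops, at most 7 broken bulbs.
    print(lowest_breaking_floor(lambda floor: floor >= 37))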
So, all these have their explanation. Oddball questions might have merit. They are searching... but more than that, they are *directly related to the job*. But what about:
- "How would you cure world hunger?" -- Asked at Amazon.com, Software Developer candidate
A searching question, I'll grant! But this question has flaws. Other than discovering one's knowledge of modern economics (cf. Yunus, de Soto) or politics or entrepreneurship or farming, how does it relate to ... software? Amazon? Retail markets? It doesn't, I'll say (and I'll leave what it does relate to, and how dangerous that is, as an exercise to the reader after having read all 3 posts).
Now back to google's alleged questions. The first above (hourglasses) was a straight mathematical teaser, but maths has little to do with software. OK, so there is a relationship, but these days *any discipline on the planet* is about as related as mathematics, and some are far more relevant. We'll develop this theme in the next post.
The second question above, about pushing cars to a hotel: what's that about? Actually, the real implied question is, "did you grow up with a certain cultural reference?" Again, nothing to do with software (which I think google still does) and bugger all to do with anything else google might get access to. Also rather discriminatory, but that's life.
In closing: asking or being asked oddball questions does not correlate with a great place to work. Indeed, chances are, it is inversely correlated, but I'll leave the proof of that for part 2.
From Facebook:
The company identifies three types of accounts that don’t represent actual users: duplicate accounts, misclassified accounts and undesirable accounts. Together, they added up to just over 7 percent of its worldwide monthly active users last year. Facebook disclosed the figures in its annual report filed with the U.S. Securities & Exchange Commission on Friday.
Duplicate accounts, or those maintained by people in addition to their principal account, represent 53 million accounts, or 5 percent of the total, Facebook said.
Misclassified accounts, including those created for non-human entities such as pets or organizations, which instead should have Facebook Pages, accounted for almost 14 million accounts, or 1.3 percent of the total.
And undesirable accounts, such as those created by spammers, rounded out the tally with 9.5 million accounts, or 0.9 percent of users.
Context - systems that maintain user accounts and expect each account to map universally and uniquely to one person and only one person are in for a surprise -- people don't do that. The experience from CAcert is that even with a system that is practically only there for the purpose of mapping a human, for certificates attesting to some aspect of their Identity, there are a host of reasons why some people have multiple accounts. And most of those are good reasons, including ones laid at the door of the system provider.
There is no One True Account, just as there is no One True Name or One True Identity. Nor are there any One True Statistics, but what we have can help.
In the closing days of 2012, another CA was caught out making mistakes:
2012.5 -- A CA issued 2 intermediate roots to two separate customers on 8th August 2011 (Mozilla mail / Mert Özarar). The process that allowed this to happen was discovered later on, fixed, and one of the intermediates was revoked. On 6th December 2012, the remaining intermediate was placed into an MITM context and used to issue an unauthorised certificate for *.google.com (DarkReading). These certificates were detected by Google Chrome's pinning feature, a recent addition. "The unauthorized Google.com certificate was generated under the *.EGO.GOV.TR certificate authority and was being used to man-in-the-middle traffic on the *.EGO.GOV.TR network" (Wired). Actions: vendors revoked the intermediates (Microsoft, Google, Mozilla). Damages: Google will revoke Extended Validation status on the CA in January's distro, and Mozilla froze a new root of the CA that was pending inclusion.
I collect these stories for a CA risk history, which can be useful in risk analysis.
Beyond that, what is there to say? It looks like this CA made a mistake that let some certs slip out. It caught one of them later, not the other. The owner/holder of the cert at some point tried something different, including an MITM. One can see the coverup proceeding from there...
Mistakes happen. This so far is somewhat distinct from issuing root certs for the deliberate practice of MITMing. It is also distinct from the overall risk equation that says that because all CAs can issue certs for your browser, only one compromise is needed, and all CAs are compromised. That is, brittle.
But what is now clear is that the trend started in 2011 is confirmed in 2012 - we have 5+ incidents in each year. For many reasons, the CA business has reached a plateau of aggressive attention. It can now consider itself under attack, after 15 or so years of peace.
In what is stunning news - because we never believed it would happen - it transpires that, as reported by The Economist:
In 2002 the Sarbanes-Oxley act limited what kind of non-audit services an American accounting firm can offer to an audit client. But contrary to what many people believe, it did not forbid all of them. In its last full proxy statement before being bought by JPMorgan, Bear Stearns reported paying Deloitte in 2006 not only $20.8m for audit, but $6.3m for other services. The perception that auditors and clients are hand-in-glove, fair or not, is a reason why shareholders of Bear Stearns sued Deloitte along with the defunct bank. (JPMorgan and Deloitte settled in June. Deloitte paid out $20m, denying any wrongdoing.)
So, when anybody ever asks "did any auditor get taken to task over a failed bank?" the answer is YES. In the global financial crisis, Mark 1, which cost the world a trillion or two, we can now authoritatively state that Deloitte paid out $20m and denied any wrongdoing.
In related news, the same article reports:
IT IS hardly news that the “Big Four” accounting firms get bigger nearly every year. But where they are growing says a lot about how they will look like in a decade, and the prospects worry some regulators and lawmakers. On September 19th Deloitte Touche Tohmatsu was the first to report revenues for its 2012 fiscal year, crowing of 8.6% growth, to $31.3 billion. Ernst & Young, PwC and KPMG will soon report their revenues (as private firms the Big Four choose not to report profits).
OK, does anyone know what the margin on audit & consulting is typically thought to be?
OK, so I edited the title, to fit in with an old Audit cycle I penned a while ago (I, II, III, IV, V, VI, VII).
Here's the full unedited quote from Avivah Litan, who comments on the latest 1.5m credit card breach in US of A:
What’s the takeaway on PCI? The same one that’s been around for years. Passing a PCI compliance audit does not mean your systems are secure. Focus on security and not on passing the audit.
Just a little emphasis, so audit me! PCI is that audit imposed by the credit card industry on processors. It's widely criticised. I imagine it does the same thing as most mandated and controlled audits - sets a very low bar, one low enough to let everyone pass if they've got the money to pay to enter the race.
For those wondering what happened to the audits of Global Payments, DigiNotar, Heartland, and hell, let's invite a few old friends to the party: MFGlobal, AIG, Lehman Brothers, Northern Rock, Greece PLC, the Japanese Nuclear Industry disaster recovery team and the Federal Reserve.... well, here's Avivah's hint:
In the meantime, Global Payments who was PCI compliant at the time of their breach is no longer PCI compliant – and was delisted by Visa – yet they continue to process payments.
That's a relief! So PCI comes with a handy kill-switch. If something goes wrong, we kill your audit :)
Problem solved. I wonder what the price of the kill-switch is, without the audit?
Over on New School, my threat-modelling-is-for-birds rant last month went down like a lead balloon.
Now, I'm gonna take the rap for this one. I was so happy to have finally found the nexus between threat modelling and security failure that has been bugging me for a decade, that I thought everyone else would get it in a blog post. No such luck, schmuck. Have another go!
Closest pin to the donkey tail went to David, who said:
Threat modeling is yet another input into an over all risk analysis.
Right. And that's the point. Threat modelling by itself is incomplete. What's the missing bit? The business. Look at this gfx, risk'd off some site. This is the emerging ISO 31000 risk management typology (?) in slightly cut down form.
The business is the 'context' as shown by the red arrow. When you get into "the biz" you discover it's a place of its own, a life, a space, an entire world. More pointedly, the biz provides you with (a) the requirements and (b) a list of validated threats. E.g., history, as we already deal with them.
The biz provides the foundation and context for all we do - we protect the business, without which we have no business meddling.
(Modelling distraction: As with the graphic, the source of requirements is often painted at the top of the diagram, and requirements-driven architecture is typically called top-down. Alternatively and depending on your contextual model, we can draw it as a structural model: our art or science can sit on top of the business. We are not lifting ourselves up by our bootstraps; we are lifting the business to greater capabilities and margins. So it may be rational and accurate to call the business a bottom-up input.)
Either way, business is a mess, and one we can't avoid. We have to dive in and absorb, and in our art we filter out the essence of the problem from the business into a language we call 'requirements'.
Then, the "old model" of threat modelling is somewhere in that middle box. For sake of this post, turn the word risk into threat. Follow the blue arrow, it's somewhere in there.
The point then of threat modelling is that it is precisely the opposite of expected: it's perfect. In more particular words, it lives without a context. Threat modelling proceeds happily without requirements that are grounded in a business problem or a customer base, without a connection to the real world.
Threat modelling is perfect by definition, which we achieve by cutting the scope off at the knees.
Bring in the business and we get real messy. Human, tragic. And out of the primordial swamp of our neighbors crawl the living, breathing, propagating requirements that make a real demand on us - survival, economy, sex, war, whatever it is that real people ask us for.
Adam talks about Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service and Elevation of Privilege... which sounds perfect. And I'd love to play the game :)
But real life is these things: insider trading, teenager versus teenager versus parent versus parent, revolutions of colour, DVD sharing, hiding in plain sight, movement tracking of jealous loved ones, don't ask, don't tell, whistleblowing, revenge,...
Which philosophy, business context first and foremost, also explains a lot of other things. Just by way of one example, it gives an essential clue as to why only end-to-end security is worth anything. Application security automatically has a better chance of including the business; point-to-point designs like IPSec, SSL, DNSSec, etc have no chance. They've already handwaved anything past the relevant nodes into utter obscurity.
Chris points to:
Google once considered issuing its own currency, to be called Google Bucks, company Chairman Eric Schmidt said on stage in Barcelona at the Mobile World Congress Tuesday. At the end of his keynote speech, Schmidt hit on a wide array of topics in response to audience questions. "We've had various proposals to have our own currency we were going to call Google Bucks," Schmidt said.
The idea was to implement a "peer-to-peer money" system. However, Google discovered that the concept is illegal in most areas, he said. Governments are typically wary of the potential for money laundering with such proposals. "Ultimately we decided we didn't want to get into that because of these issues," Schmidt said.
Offered without too much analysis. This confirms what we suspected - that they looked at it and decided not to. Technically, this is a plausible and expected decision that will be echoed by many conventional companies. I would expect Apple to do this too, and Microsoft know this line very well.
However we need to understand that this result is intentional, the powers that be want you to think this way. Banks want you to play according to their worldview, and they want you to be scared off their patch. Sometimes however they don't tell the whole truth, and as it happens, p2p is not illegal in USA or Europe - the largest markets. You are also going to find surprising friends in just about any third world country.
Still, google did their own homework, and at least they investigated. As a complicated company with many plays, they and they alone must do their strategy. Still, as we move into GFC-2 with the probability of mass bank nationalisations in order to save the payments systems, one wonders how history will perceive their choice.
As an aside to the old currency market currently collapsing in GFC-2, the movie now showing universally on your screens, some people have commented that perhaps online currencies and LETS and so forth will fill the gap. Unlikely; they won't fill the gap, but they will surge in popularity. From a business perspective, it is then some fun to keep an eye on them. An article on Facebook Credits by George Anders is probably the one to watch:
Facebook’s 27-year-old founder, Mark Zuckerberg, isn’t usually mentioned in the same breath as Ben Bernanke, the 58-year-old head of the Federal Reserve. But Facebook’s early adventures in the money-creating business are going well enough that the central-bank comparison gets tempting.
Let's be very clear here: the mainstream media and most commentators will have very little clue what this is about. So they will search for easy analogues such as a comparison with national units, leading to specious comparisons of Zuckerberg to Bernanke. Hopeless and complete utter nonsense, but it makes for easy copy and nobody will call them on it.
Edward Castronova, a telecommunications professor at Indiana University, is fascinated by the rise of what he calls “wildcat currencies,” such as Facebook Credits. He has been studying the economics of online games and virtual worlds for the better part of a decade. Right now, he calculates, the Facebook Credits ecosystem can’t be any bigger than Barbados’s economy and might be significantly smaller. If the definition of digital goods keeps widening, though, he says, “this could be the start of something big.”
This is a little less naive and also slightly subtle. Let me re-write it:
If you believe that Facebook will continue to dominate and hold its market size, and if you believe that they will be able to successfully walk the minefield of self-issued currencies, then the result will be important. In approximate terms, think about PayPal-scaled importance, order of magnitude.
Note the assumptions there. Facebook have a shot at the title, because they have massive size and uncontested control of their userbase. (Google, Apple, Microsoft could all do the same thing, and in a sense, they already are...)
The more important assumption is how well they avoid the minefield of self-issued currencies. The problem here is that there are no books on it, no written lore, no academic seat of learning, nothing but the school of hard-knocks. To their credit, Facebook have already learnt quite a bit from the errors of their immediate predecessors. Which is no mean feat, as historically, self-issuers learn very little from their forebears, which is a good predictor of things to come.
Of the currency issuers that spring up, 99% are destined to walk on a mine. Worse, they can see the mine in front of them, they successfully aim for it, and walk right onto it with aplomb. No help needed at all. And, with 15 years of observation, I can say that this is quite consistent.
Why? I think it is because there is a core dichotomy at work here. In order to be a self-issuer you have to be independent enough to not need advice from anyone, which will be familiar to business observers as the entrepreneur-type. Others will call it arrogant, pig-headed, too darned confident for his own good... but I prefer to call it entrepreneurial spirit.
*But* the issuance of money is something that is typically beyond most people's ken at an academic or knowledge level. Usage of money is something that we all know, and all learnt at age 5 or so. We can all put a prediction in at this level, and some players can make good judgements (such as Peter Vodel's Predictions for Facebook Credits in 2012).
Issuance of money however is a completely different thing to usage. It is seriously difficult to research and learn; by way of benchmark, I wrote in 2000 you need to be quite adept at 7 different disciplines to do online money (what we then called Financial Cryptography). That number was reached after as many years of research on issuance, and nearly that number working in the field full time.
And, I still got criticised by disciplines that I didn't include.
Perhaps fairly...
You can see where I'm heading. The central dichotomy of money issuance then is that the self-issuer must be both capable of ignoring advice, and putting together an overwhelming body of knowledge at the same time; which is a disastrous clash as entrepreneurs are hopeless at blindspots, unknowns, and prior art.
There is no easy answer to this clash of intellectual challenges. Most people will for example assume that institutions are the way to handle any problem, but that answer is just another minefield:
If Facebook at some point is willing to reduce its cut of each Credits transaction, this new form of online liquidity may catch the eye of many more merchants and customers. As Castronova observes: “there’s a dynamic here that the Federal Reserve ought to look at.”
Now, we know that Castronova said that for media interest only, but it is important to understand what really happens with the Central Banks. Part of the answer here is that they already do observe the emerging money market :) They just won't talk to the media or anyone else about it.
Another part of the answer is that CBs do not know how to issue money either; another dichotomy easily explained by the fact that most CBs manage a money that was created a long time ago, and the story has changed in the telling.
So, we come to the really difficult question: what to do about it? CBs don't know, so they will definitely keep the stony face up, because their natural reaction to any question is silence.
But wait! you should be saying. What about the Euro?
Well, it is true that the Europeans did indeed successfully manage to re-invent the art and issue a new currency. But, did they really know what they were doing? I would put it to you that the Euro is the exception that proves the rule. They may have issued a currency very well, but they failed spectacularly in integrating that currency into the economy.
Which brings us full circle back to the movie now showing on media tonight and every night: GFC-2.
And so it came to pass that, after my aggressive little note on GFC-1's causes found in securitization (I, II, III, IV), I am asked to describe the current, all new with extra whitening Global Financial Crisis - the Remix, or GFC-2 to those who love acronyms and the pleasing rhyme of sequels.
Or, the 2nd Great Depression, depending on how it pans out. Others have done it better than I, but here is my summary.
Part 1. In 2000, European countries joined together in the EMU or European Monetary Union. A side-benefit of this was the Bundesbank's legendary and robust control of inflation and stiff conservative attitude to matters monetary. Which meant other countries more or less got to borrow at Bundesbank's rates, plus a few BPs (that's basis points, or hundredths of percentage points for you and I).
Imagine that?! Italy, who had been perpetually broke under the old Lira, could now borrow at not 6 or 7% but something like 3%. Of course, she packed her credit card and went to town, as 3% on the CC meant she could buy twice as much stuff, for the same regular monthly payments. So did Ireland, Portugal, Greece and Spain. Everyone in the EMU, really.
The problem was, they still had to pay it back. Half the interest with the same serviceable monthly credit card bill means you can borrow twice as much. Leverage! It also means that if the rates move against you, you're in it twice as deep.
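To put rough numbers on that (purely illustrative rates, nothing to do with actual Italian debt figures), here is the arithmetic of "same monthly bill, twice the debt" in a few lines:

```python
# Illustrative arithmetic only. With an interest-only view, the annual bill is
# principal * rate, so the principal you can carry for the same bill scales
# inversely with the rate.
annual_bill = 30.0                     # billions, say: whatever you can service
for rate in (0.07, 0.035):             # old Lira-style rate vs EMU-style rate
    principal = annual_bill / rate
    print(f"at {rate:.1%} the same bill services {principal:.0f}bn of debt")
# Halve the rate, double the debt. And double the pain when rates snap back.
```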
And the rates, they did surely move. For this we can blame GFC-1 which put the heebie-jeebies into the market and caused them to re-evaluate the situation. And, lo and behold, the European Monetary Union was revealed as no more than a party trick because Greece was still Greece, banks were still banks, debt was still debt, and the implicit backing from the Bundesbank was ... not actually there! Or the ECB, which by charter isn't allowed to lend to governments nor back up their foolish use of the credit card.
Bang! Rates moved up to the old 6 or 7%, and Greece was bankrupt.
Now we get to Part 2. It would have been fine if it had stopped there, because Greece could just default. But the debt was held by (owed to) ... the banks. Greece bankrupt ==> banks bankrupt. Not just or not even the Greek ones but all of them: as financing governments is world-wide business, and the balance sheets of the banks post-GFC-1 and in a non-rising market are anything but 'balanced.' Consider this as Part 0.
Now stir in a few more languages, a little contagion, and we're talking *everyone*. To a good degree of approximation, if Greece defaults, USA's banking system goes nose deep in it too.
So we move from the countries, now the least of our problems because they can simply default ... to the banks. Or, more holistically, the entire banking system. Is bankrupt.
In its current form, there is the knowledge that the banks cannot deal with the least hiccup. Every bank knows this, knows that if another bank defaults on a big loan, they're in trouble. So every bank pulls its punches, liquidity dries up, and credit stops flowing ... to businesses, and the economy hits a brick wall. Internationally.
In other words, the problem isn't that countries are bankrupt, it is that they are not allowed to go bankrupt (clues 1, 2).
We saw something similar in the Asian Financial Crisis, where countries were forced to accept IMF loans ... which paid out the banks. Once the banks had got their loans paid off, they walked, and the countries failed (because of course they couldn't pay back the loans). Problem solved.
This time however there is no IMF, no external saviour for the banking system, because we are it, and we are already bankrupt.
Well, there. This is as short as I can get the essentials. We need scholars like Kevin Dowd or John Maynard Keynes, those whose writing is so clear and precise as to be an intellectual wonder in their own lifetimes. And, they will emerge in time to better lay down the story - the next 20 years are going to be a new halcyon age of economics. So much to study, so much new raw data. Pity they'll all be starving.
As we all know by now, MF Global crashed with many billions of losses, filing for bankruptcy on 31st October. James Turk wonders aloud:
First of all investors should be concerned because everything is so inter-connected today. People call it contagion and this contagion is real because the MF Global bankruptcy is going to have a knock on effect, just like Lehman Brothers had a knock on effect.”
The point being that we know there is a big collapse coming, but we don't know what it is that will trigger it. James is making the broad point that a firm collapsing on the west side of the Atlantic could cause collapse in Europe. But wait, there's more:
So the contagion is the first reason for concern. The second reason for concern is it’s taking so long for them to find this so called missing money, which I find shocking. It’s been three weeks now since the MF Global bankruptcy was declared and they started talking about $600 million of missing funds. So I’m not too surprised that now they are talking about $1.2 billion of missing customer funds. I think they are just trying to delay the inevitable as to how bad the situation at MF Global really is.
And more! Chris points to an article by Bloomberg / Jonathan Weil:
This week the trustee for the liquidation of its U.S. brokerage unit said as much as $1.2 billion of customer money is missing, maybe more. Those deposits should have been kept segregated from the company’s funds. By all indications, they weren’t.
Jonathan zeroes in on the heart of the matter:
Six months ago the accounting firm PricewaterhouseCoopers LLP said MF Global Holdings Ltd. and its units “maintained, in all material respects, effective internal control over financial reporting as of March 31, 2011.” A lot of people who relied on that opinion lost a ton of money.
So when I asked:
Let's check the record: did any audit since Sarbanes-Oxley pick up any of the problems seen in the last 18 months to do with the financial crisis?
we now know that PricewaterhouseCoopers LLP will not be stepping up to the podium with MF Global! Jonathan echoes some of the questions I asked:
What’s the point of having auditors do reports like this? And are they worth the cost? It’s getting harder to answer those questions in a way the accounting profession would favor.
But now that we have a more cohesive case study to pick through, some clues are emerging:
“Their books are a disaster,” Scott O’Malia, a commissioner at the Commodity Futures Trading Commission, told the Wall Street Journal in an interview two weeks ago. The newspaper also quoted Thomas Peterffy, CEO of Interactive Brokers Group Inc., saying: “I always knew the records were in shambles, but I didn’t know to what extent.” Interactive Brokers backed out of a potential deal to buy MF last month after finding discrepancies in its financial reports.
That's a tough start for PricewaterhouseCoopers LLP. Then:
For fiscal 2007, MF Global paid Pricewaterhouse $17.1 million in audit fees. By fiscal 2011, that had fallen to $10.9 million, even as warning signs about MF’s internal controls were surfacing publicly. In 2007, MF and one of its executives paid a combined $77 million to settle CFTC allegations of mishandling hedge-fund clients’ accounts, as well as supervisory and record-keeping violations. In 2009, the commission fined MF $10 million for four instances of risk-supervision failures, including one that resulted in $141 million of trading losses on wheat futures. Suffice it to say, Pricewaterhouse should have been on high alert.
On top of that, Pricewaterhouse’s main regulator, the Public Company Accounting Oversight Board, released a nasty report this week on the firm’s audit performance. The agency cited deficiencies in 28 audits, out of 75 that it inspected last year. The tally included 13 clients where the board said the firm had botched its internal-control audits. The report didn’t name the companies. One of them could have been MF, for all we know.
In a response letter to the board, Pricewaterhouse’s U.S. chairman, Bob Moritz, and the head of its U.S. audit practice, Tim Ryan, said the firm is taking steps to improve its audit quality.
Ha! Jonathan asks the pointed question:
The point of having a report by an independent auditor is to assure the public that what a company says is true. Yet if the reports aren’t reliable, they’re worse than worthless, because they sucker the public with false promises. Maybe, just maybe, we should stop requiring them altogether.
Exactly. This was what I was laying out for the reader in my Audit cycle. But I was doing it from observation and logic, not from knowing about any particular episode. One however was expected to follow from the other...
The Audit brand depletes. Certainly time to start asking hard questions. Is there value in using a big 4 auditor? Could a firm get by on a more local operation? Are there better ways?
And, what does a big N auditor do in the new world? Well, here's one suggestion: take the bull by the horns and start laying out the truth! KPMG's new Chairman seems to be keen to add on to last week's revelation with some more:
KPMG International LLP’s global chairman, Michael Andrew, said fraud was evident at Olympus Corp. (7733) and his firm met all legal obligations to pass on information related to Olympus’s 2008 acquisition of Gyrus Group Ltd. before it was replaced as the camera maker’s auditor. “We were displaced as a result of doing our job,” Andrew told reporters at the Foreign Correspondents’ Club in Hong Kong today. “It’s pretty evident to me there was very, very significant fraud and that a number of parties had been complicit.”
Now, if I was a big N auditor, that's exactly what I'd do. Break the cone of silence and start revealing the dirt. We can't possibly make things any worse for audit, so let's shake things up. Go, Andrew.
Google radically expanded Tuesday its use of bank-level security that prevents Wi-Fi hackers and rogue ISPs from spying on your searches. Starting Tuesday, logged-in Google users searching from Google’s homepage will be using https://google.com, not http://google.com — even if they simply type google.com into their browsers. The change to encrypted search will happen over several weeks, the company said in a blog post Tuesday.
We have known for a long time that the answer to web insecurity is this: There is only one mode, and it is secure.
(I use the royal we here!)
This is evident in breaches led by phishing, as the users can't see the difference between HTTP and HTTPS. The only solution at several levels is to get rid of HTTP. Entirely!
Simply put, we need SSL everywhere.
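For the server operator, the mechanics are not the hard part. Here is a minimal sketch of the "one mode" rule, with a placeholder hostname: answer every plain-HTTP request with a permanent redirect to HTTPS, and have the HTTPS side add a Strict-Transport-Security header so the browser stops asking over HTTP at all.

```python
# A minimal sketch: a listener whose only job is to bounce plain HTTP to HTTPS.
# (The HTTPS server itself should send Strict-Transport-Security; browsers
# ignore that header when it arrives over plain HTTP.) Hostname is a placeholder.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHTTPS(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)                               # permanent redirect
        self.send_header("Location", "https://example.com" + self.path)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()
```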
Google are seemingly the only big corporate that have understood and taken this message to heart.
Google has been a leader in adding SSL support to cloud services. Gmail is now encrypted by default, as is the company’s new social network, Google+. Facebook and Microsoft’s Hotmail make SSL an option a user must choose, while Yahoo Mail has no encryption option, beyond its initial sign-in screen.
EFF and CAcert are small organisations that are doing it as and when we can... Together, security-conscious organisations are steadily migrating all their sites to SSL and HTTPS.
It will probably take a decade. Might as well start now -- where's your organisation's commitment to security? Amazon, Twitter, Yahoo? Facebook!
Long term readers will know that I have often written of the failure of the browser vendors to provide effective security against phishing. I long ago predicted that nothing would change until the class-action lawsuits came. Now signs are appearing that this is coming to pass:
That's changing rapidly. Recently, Sony faced a class action lawsuit for losing the private information of millions of users. And this week, it was reported that Dropbox is already being sued for a recent security breach of its own. It's too early to know if these particular lawsuits will get anywhere, but they're part of a growing trend. As online services become an ever more important part of the American economy, the companies that create them increasingly find that security problems are hitting them where it really hurts: the bottom line.
See also the spate of lawsuits against banks over losses; although it isn't the banks' direct fault, they are complicit in pushing weak security models, and a law will come to make them completely liable. Speaking of laws:
Computer security has also been an area of increasing activity for the Federal Trade Commission. In mid-June, FTC commissioner Edith Ramirez testified to Congress about her agency's efforts to get companies to beef up their online security. In addition to enforcing specific rules for the financial industry, the FTC has asserted authority over any company that makes "false or misleading data security claims" or causes harm to consumers by failing to take "reasonable security measures." Ramirez described two recent settlements with companies whose security vulnerabilities had allowed hackers to obtain sensitive customer data. Among other remedies, those firms have agreed to submit to independent security audits for the next 20 years.
Skip over the sad joke at the end. Timothy B. Lee and Ars Technica, author of those words, did more than just recycle other stories, they actually did some digging:
[Ars talked to] Alex Halderman, a computer science professor at the University of Michigan, to help us evaluate these options. He argued that consumer choice by itself is unlikely to produce secure software. Most consumers aren't equipped to tell whether a company's security claims are "snake oil or actually have some meat behind them." Security problems therefore tend not to become evident until it's too late. But he argued the most obvious regulatory approach—direct government regulation of software security practices—was also unlikely to work. A federal agency like the FTC has neither the expertise nor the manpower to thoroughly audit the software of thousands of private companies. Moreover, "we don't have really widely regarded, well-established best practices," Halderman said. "Especially from the outside, it's difficult to look at a problem and determine whether it was truly negligent or just the kind of natural errors that happen in every software project."
And when an agency found flaws, he said, it would have trouble figuring out how urgent they were. Private companies might be forced to spend a lot of time fixing trivial flaws while more serious problems get overlooked.
(Buyers don't know. Sellers don't know.)
So what about liability? I, like others, have recognised that liability will eventually arise:
This is a key advantage of using liability as the centerpiece of security policy. By making companies financially responsible for the actual harms caused by security failures, lawsuits give management a strong motivation to take security seriously without requiring the government to directly measure and penalize security problems. Sony allegedly laid off security personnel ahead of this year's attacks. Presumably it thought this would be a cost-saving move; a big class action lawsuit could ensure that other companies don't repeat that mistake in future.
But:
Still, Halderman warned that too much litigation could cause companies to become excessively security-conscious. Software developers always face a trade-off between security and other priorities like cost and time to market. Forcing companies to devote too much effort to security can be as harmful as devoting too little. So policymakers shouldn't focus exclusively on liability, he said.
Actually, it's far worse. Figure out some problem, and go to a company and mention that this issue exists. The company will ignore you. Mention liability, and the company will immediately close ranks and deny-by-silence any potential liability. Here's a variation written up close by concerning privacy laws:
...For everything else, the only rule for companies is just “don’t lie about what you’re doing with data.” The Federal Trade Commission enforces this prohibition, and does a pretty good job with this limited authority, but risk-averse lawyers have figured out that the best way to not violate this rule is to not make explicit privacy promises at all. For this reason, corporate privacy policies tend to be legalistic and vague, reserving rights to use, sell, or share your information while not really describing the company’s practices. Consumers who want to find out what’s happening to their information often cannot, since current law actually incentivizes companies not to make concrete disclosures.
Likewise with liability: if it is known of beforehand, it is far easier to slap on a claim of gross negligence. Which means in simple layman's terms: triple damages. Hence, companies have a powerful incentive to ignore liability completely. As above with privacy: companies are incentivised not to do it; and so it comes to pass with security in general.
Try it. Figure out some user-killer problem in some sector, and go talk to your favourite vendor. Mention damages, liability, etc, and up go the shutters. No word, no response, no acknowledgement. And so, the problem(s) will never get fixed. The fear of liabilities is greater than the fear of users, competitors, change, even fear itself.
Which pretty much guarantees a class-action lawsuit one day. And the problem still won't be fixed, as all thoughts are turned to denial.
So what to do? Halderman drifts in the same direction as I've commented:
Halderman argued that secure software tends to come from companies that have a culture of taking security seriously. But it's hard to mandate, or even to measure, "security consciousness" from outside a company. A regulatory agency can force a company to go through the motions of beefing up its security, but it's not likely to be effective unless management's heart is in it.
It's completely meaningless to mandate, which is the flaw behind the joke of audit. But it is possible to measure. Here's an attempt by yours truly.
What's not clear as yet is how it is possible to incentivise companies to pursue that lofty goal, even if we all agree it is good.
It's been a long time since I wrote up my Security Top Tips, and things have changed a bit since then. Here's an update. (You can see the top tips about half way down on the right menu block of the front page.)
Since then, browsing threats have got a fair bit worse. Although browsers have done some work to improve things, their overall efforts have not really resulted in any impact on the threats. Worse, we are now seeing MITBs being experimented with, and many more attackers get in on the act.
To cope with this heightened risk to our personal node, I experimented a lot with using private browsing, cache clearing, separate accounts and so forth, and finally hit on a fairly easy method: Use another browser.
That is, use something other than the browser that one uses for browsing. I use Firefox for general stuff, and for a long time I've been worried that it doesn't really do enough in the battle for my user security. Safari is also loaded on my machine (thanks to Top Tip #1). I don't really like using it, as its interface is a little bit weaker than Firefox (especially the SSL visibility) ... but in this role it does very well.
So for some time now, for all my online banking and similar usage, I have been using Safari. These are my actions:
I don't use bookmarks, because that's an easy place for a trojan to look (I'm not entirely sure of that technique but it seems like an obvious hint).
"Use another browser" creates an ideal barrier between a browsing browser and a security browser, and Safari works pretty well in that role. It's like an air gap, or an app gap, if you like. If you are on Microsoft, you could do the same thing using IE and Firefox, or you could download Chrome.
I've also tested it on my family ... and it is by far the easiest thing to tell them. They get it! Assume your browsing habits are risky, and don't infect your banking. This works well because my family share their computers with kids, and the kids have all been instructed not to use Safari. They get it too! They don't follow the logic, but they do follow the tool name.
What says the popular vote? Try it and let me know. I'd be interested to hear of any cross-browser threats, as well :)
The Economist summarises who the Financial Crisis Inquiry Commission of USA's Congress would like to blame in three tranches. For the Democrats, it's the financial industry and the de-regulation-mad Republicans:
The main report, endorsed by the Democrats, points to a broad swathe of failures but pins much of the blame on the financial industry, be it greed and sloppy risk management at banks, the predations of mortgage brokers, the spinelessness of ratings agencies or the explosive growth of securitisation and credit-default swaps. To the extent that politicians are to blame, it is for overseeing a quarter-century of deregulation that allowed Wall Street to run riot.
For the Republicans:
A dissenting report written by three of the Republicans could be characterised as the Murder on the Orient Express verdict: they all did it. Politicians, regulators, bankers and homebuyers alike grew too relaxed about leverage, helping to create a perfect financial storm. This version stresses broad economic dynamics, placing less emphasis on Wall Street villainy and deregulation than the main report does.
Finally, one lone dissenter:
A firmer (and, at 43,000 words, longer) rebuttal of the report by the fourth Republican, Peter Wallison, puts the blame squarely on government policies aimed at increasing home ownership among the poor. Mr Wallison argues that the pursuit of affordable-housing goals by government and quasi-government agencies, including Fannie Mae and Freddie Mac, caused a drastic decline in loan-underwriting standards. Over 19m of the 27m subprime and other risky mortgages created in the years leading up to the crisis were bought or guaranteed by these agencies, he reckons. These were "not a cigarette butt being dropped in a tinder-dry forest" but "a gasoline truck exploding" in the middle of one, Mr Wallison says.
Yessss..... That's getting closer. Not exactly a gasoline truck, as that would have one unfortunate spark. More like several containers, loaded with 19m fully-loaded zippo lighters driven into the forest of housing finance one hot dry summer, and distributed to as many needy dwellers as could be found.
Now, who would have driven that truck, and why? Who would have proposed it to the politicians? Ask these questions, and we're almost there.
To understand what's happening today in the economy, we have to understand what banking is, and by that, I mean really understand how it works.
This time it's personal, right? Let's start with what Niall Ferguson says about banking:
To understand why we have come so close to a rerun of the 1930s, we need to begin at the beginning, with banks and the money they make. From the Middle Ages until the mid-20th century, most banks made their money by maximizing the difference between the costs of their liabilities (payments to depositors) and the earnings on their assets (interest and commissions on loans). Some banks also made money by financing trade, discounting the commercial bills issued by merchants. Others issued and traded bonds and stocks, or dealt in commodities (especially precious metals). But the core business of banking was simple. It consisted, as the third Lord Rothschild pithily put it, "essentially of facilitating the movement of money from Point A, where it is, to Point B, where it is needed."
As much as the good Prof's comments are fruitful, we need more. Here's what banking really is:
Banking is borrowing from the public on demand, and lending those demand deposits to the public at term.
Sounds simple, right? No, it's not. Every one of those words is critically important, and change one or two of them and we've broken it. Let's walk it through:
Banking is borrowing from the public ..., and lending ... to the public.
Both from the public, and to the public. The public at both ends of banking is essential to ensure a diversification effect (A to B), a facilitation effect (bank as intermediary), and ultimately a public policy interest in regulation (the central bank). If one of those conditions isn't met, if one of those parties isn't "the public", then it's not banking.
So now we can see that there is actually a reason why the Central Banks are concerned about banks, but less so about funds, S&Ls, etc. Back to the definition:
Banking is borrowing ... on demand, and lending those demand deposits ... at term.
On demand means you walk into the bank and get your money back. Sounds quite reasonable. At term means you don't. You have to wait until the term expires. Then you get your money back. Hopefully.
The bank has a demand obligation to the public lender, and a (long) term promise from the public borrower. This is quaintly called a maturity mismatch in the trade. What's with that?
The bank is stuck between a rock and a hard place. Let's put more meat on these bones: if the bank borrows today, on demand, and lends that out at term, then in the future, it is totally dependent on the economy being kind to the people owing the money. That's called risk, and for that, banks make money.
This might sound a bit dry, but Mervyn King, the Governor of the Bank of England, also recently took time to say it in even more dry terms (as spotted by Hasan):
3. The theory of banking
Why are banks so risky? The starting point is that banks make heavy use of short-term debt. Short-term debt holders can always run if they start to have doubts about an institution. Equity holders and long-term debt holders cannot cut and run so easily. Douglas Diamond and Philip Dybvig showed nearly thirty years ago that this can create fragile institutions even in the absence of risk associated with the assets that a bank holds. All that is required is a cost to the liquidation of long-term assets and that banks serve customers on a first-come, first-served basis (Diamond and Dybvig, 1983).
This is not ordinary risk. For various important reasons, banking risk is extraordinary risk, because no bank, no matter where we are talking, can deal with unexpected risks that shift the economy against it. Which risks manifest themselves with an increase in defaults, that is, when the long term money doesn't come back at all.
Another view on this same problem is when the lending public perceive a problem, and decide to get their money out. That's called a run; no bank can deal with unexpected shifts in public perception, and all the lending public know this, so they run to get the money out. Which isn't there, because it is all lent out.
(If this is today, and you're in Ireland, read quietly...)
A third view on this is the legal definition of fraud: making deceptive statements, by entering into contracts that you know you cannot meet, with an intent to make a profit. By this view, a bank enters into a fraudulent contract with the demand depositor, because the bank knows (as does everyone else) that the bank cannot meet the demand contract for everyone, only for around 1-2% of the depositors.
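To make that 1-2% concrete, here is a toy balance sheet in a few lines of Python (all numbers invented): deposits are owed on demand, the loans come back at term, and cash on hand covers only a sliver of the demand obligation.

```python
# A toy balance sheet to make the maturity mismatch concrete. Numbers invented.
deposits = 100_000_000           # owed to the public, today, on demand
reserve_ratio = 0.02             # the ~1-2% the bank actually keeps as cash
cash = deposits * reserve_ratio  # 2,000,000 on hand
loans = deposits - cash          # 98,000,000 out at term, not callable today

def withdraw(amount, cash):
    """Return remaining cash, or None if the bank cannot pay on demand."""
    return cash - amount if amount <= cash else None

cash = withdraw(1_500_000, cash)   # an ordinary day: fine, 500,000 left
cash = withdraw(5_000_000, cash)   # a nervous public: the run has begun
print("bank still standing" if cash is not None else "bank run: cash exhausted")
```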
Historically, however, banking was very valuable. Recall Mr Rothschild's goal of "facilitating the movement of money from Point A, where it is, to Point B, where it is needed." It was necessary for society because we simply had no other efficient way of getting small savings from the left to large and small projects on the right. Banking was essential for the rise of modern civilisation, or so suggests Mervyn King, in an earlier speech:
Writing in 1826, under the pseudonym of Malachi Malagrowther, [Sir Walter Scott] observed that: "Not only did the Banks dispersed throughout Scotland afford the means of bringing the country to an unexpected and almost marvellous degree of prosperity, but in no considerable instance, save one [the Ayr Bank], have their own over-speculating undertakings been the means of interrupting that prosperity".
Banking developed for a fairly long period, but as a matter of historical fact, it eventually settled on a structure known as central banking [1]. It's also worth mentioning that this historical development of central banking is the history of the Bank of England, and the Governor is therefore the custodian of that evolution.
Then, the Central Bank was the *lender of last resort* who would stop the run.
Nevertheless, there are benefits to this maturity transformation - funds can be pooled allowing a greater proportion to be directed to long-term illiquid investments, and less held back to meet individual needs for liquidity. And from Diamond's and Dybvig's insights, flows an intellectual foundation for many of the policy structures that we have today - especially deposit insurance and Bagehot's time-honoured key principle of central banks acting as lender of last resort in a crisis.
Regulation and the structure we know today therefore rest on three columns:
That which we know today as banking is really central banking. Later on, we find refinements such as the BIS and their capital ratio, the concept of big strong banks, national champions, coinage and issuance, interest rate targets, non-banking banking, best practices and stress testing, etc etc. All these followed in due course, often accompanied with a view of bigger, stronger, more diversified.
Which sets half of the scene for how the global financial crisis is slowly pushing us closer to our future. The other half in a future post, but in the meantime, dwell on this: Why is Mervyn King, as the Guv of the Old Lady of Threadneedle Street (a.k.a. Bank of England), spending time teaching us all about banking?
Iran's Bushehr nuclear power plant in Bushehr Port:
"An error is seen on a computer screen of Bushehr nuclear power plant's map in the Bushehr Port on the Persian Gulf, 1,000 kms south of Tehran, Iran on February 25, 2009. Iranian officials said the long-awaited power plant was expected to become operational last fall but its construction was plagued by several setbacks, including difficulties in procuring its remaining equipment and the necessary uranium fuel. (UPI Photo/Mohammad Kheirkhah)"
Compliant? Minor problem? Slight discordance? Conspiracy theory?
(spotted by Steve Bellovin)
I'm reading a govt. security manual this weekend, because ... well, doesn't everyone?
To give it some grounding, I'm building up a cross-reference against my work at the CA. I expected it to remain rather dry until the very end, but I've just tripped up on this Risk in the section on detecting incidents:
2.5.7. An agency constructs a honeypot or honeynet to assist in capturing intrusion attempts, resulting in legal action being taken against the agency for breach of privacy.
My-oh-my!
Another metaphor (I) that has gained popularity is that infosec is much like war. There are some reasons for this: there is an aggressive attacker out there who is trying to defeat you, which tends to muck up a lot of statistical, error-based thinking in IT, a lot of business process, and as well, most economic models (e.g., asymmetric information assumes a simple two-party model). Another reason is the current beltway push for an essential cyberwarfare divisional budget, although I'd hasten to say that this is not a good reason, just a reason. Which is to say, it's all blather, FUD, and one-upmanship against the Chinese, same as it ever was with Eisenhower's nemesis.
Having said that, infosec isn't like war in many ways. And knowing when and why and how is not a trivial thing. So, drawing from military writings is not without dangers. Consider these laments about applying Sun Tzu's The Art of War to infosec from Steve Tornio and Brian Martin:
In "The Art of War," Sun Tzu's writing addressed a variety of military tactics, very few of which can truly be extrapolated into modern InfoSec practices. The parts that do apply aren't terribly groundbreaking and may actually conflict with other tenets when artificially applied to InfoSec. Rather than accept that Tzu's work is not relevant to modern day Infosec, people tend to force analogies and stretch comparisons to his work. These big leaps are professionals whoring themselves just to get in what seems like a cool reference and wise quote.
"The art of war teaches us to rely not on the likelihood of the enemy's not coming, but on our own readiness to receive him; not on the chance of his not attacking, but rather on the fact that we have made our position unassailable." - The Art of War
Sun Tzu's Art of War is not for literal quoting and a mad rush to build the tool. Art of War was written from the context of a successful general talking to another hopeful general on the general topic of building an army for a set-piece nation-to-nation confrontation. It was also very short.
Art of War tends to interlace high level principles with low level examples, and dance very quickly through most of its lessons. Hence it was very easy to misinterpret, and equally easy to "whore oneself for a cool & wise quote."
However, Sun Tzu still stands tall in the face of such disrespect, as it says things like know yourself FIRST, and know the enemy SECOND, which the above essay actually agreed with. And, as if it needs to be said, knowing the enemy does not imply knowing their names, locations, genders, and proclivities:
Do you know your enemy? If you answer 'yes' to that question, you already lost the battle and the war. If you know some of your enemies, you are well on your way to understanding why Tzu's teachings haven't been relevant to InfoSec for over two decades. Do you want to know your enemy? Fine, here you go. your enemy may be any or all of the following:
- 12 y/o student in Ohio learning computers in middle school
- 13 y/o home-schooled girl getting bored with social networks
- 15 y/o kid in Brazil that joined a defacement group
- ...
Of course, Sun Tzu also didn't know the sordid details of every soldier's desires; "knowing" isn't biblical, it's about capability. Knowing their capabilities can be done, and we call it risk management. As Jeffrey Carr said:
The reason why you don't know how to assign or even begin to think about attribution is because you are too consumed by the minutia of your profession. ... The only reason why some (OK, many) InfoSec engineers haven't put 2+2 together is that their entire industry has been built around providing automated solutions at the microcosmic level. When that's all you've got, you're right - you'll never be able to claim victory.
Right. Almost all InfoSec engineers are hired to protect existing installations. The solution is almost always boxed into the defensive, siege mentality described above, because the alternative, as Dan Geer apparently said, is this:
When you are losing a game that you cannot afford to lose, change the rules. The central rule today has been to have a shield for every arrow. But you can't carry enough shields and you can run faster with fewer anyhow. The advanced persistent threat, which is to say the offense that enjoys a permanent advantage and is already funding its R&D out of revenue, will win as long as you try to block what he does. You have to change the rules. You have to block his success from even being possible, not exchange volleys of ever better tools designed in response to his. You have to concentrate on outcomes, you have to pre-empt, you have to be your own intelligence agency, you have to instrument your enterprise, you have to instrument your data.
But, at a corporate level, that's simply not allowed. Great ideas, but only the achievable strategy is useful, the rest is fantasy. You can't walk into any company or government department and change the rules of infosec -- that means rebuilding the apps. You can't even get any institution to agree that their apps are insecure; or, you can get silent agreement by embarrassing them in the press, along with being fired!
I speak from pretty good experience of building secure apps, and of looking at other institutional or enterprise apps and packages. The difference is huge. It's the difference between defeating Japan and defeating Vietnam. One was a decision of maximisation, the other of minimisation. It's the difference between engineering and marketing; one is solid physics, the other is facade, faith, FUD, bribes.
It's the difference between setting up a world-beating sigint division, and fixing your own sigsec. The first is a science, and responds well to adding money and people. Think Manhattan, Bletchley Park. The second is a societal norm, and responds only to methods generally classed by the defenders as crimes against humanity and applications. Slavery, colonialism, discrimination, the great firewall of China: if you really believe in stopping these things, then you are heading for war with your own people.
Which might all lead the grumpy anti-Sun Tzu crowd to say, "told you so! This war is unwinnable." Well, not quite. The trick is to decide what winning is; to impose your will on the battleground. This is indeed what strategy is, to impose one's own definition of the battleground on the enemy, and be right about it, which is partly what Dan Geer is getting at when he says "change the rules." A more nuanced view would be: to set the rules that win for you; and to make them the rules you play by.
And, this is pretty easily answered: for a company, winning means profits. As long as your company can conduct its strategy in the face of affordable losses, then it's winning. Think credit cards, which sacrifice a few hundred basis points for the greater good. It really doesn't matter how much of a loss is made, as long as the customer pays for it and leaves a healthy profit over.
Relevance to Sun Tzu? The parable of the Emperor's Concubines!
In summary, it is fair to say that Sun Tzu is one of those texts that are easy to bandy around, but rather hard to interpret. Same as infosec, really, so it is no surprise we see it in that world. Also, war is a very complicated business, and Art of War was really written for that messy discipline ... so it takes somewhat more than a passing familiarity with both to relate them successfully beyond the level of simple metaphor.
And, as we know, metaphors and analogues are descriptive tools, not proofs. Proving them wrong proves nothing more than you're now at least an adolescent.
Finally, even war isn't much like war these days. If one factors in the last decade, there is a clear pattern of unilateral decisions, casus belli at a price, futile targets, and effervescent gains. Indeed, infosec looks more like the low intensity, mission-shy wars in the Eastern theaters than either of them look like Sun Tzu's campaigns.
memes in infosec I - Eve and Mallory are missing, presumed dead
Some things I've seen that match predictions from a long time back, just weren't exciting enough to merit an entire blog post, but were sufficient to blow the trumpet in orchestra:
Chris Skinner of The Finanser, in his old post written in 1997, says that retailers (Tesco and Sainsbury's) would make fine banks, and were angling for it. Yet:
Thirteen years later, we talk about Tesco and Virgin breaking into UK banking again. A note of caution: after thirteen years, these names have not made a dent on these markets. Will they in the next thirteen years?
Answer: in 1997, none of these brands stood a cat in hell’s chance of getting a banking licence. Today, Virgin and Tesco have banking licences.
Exactly. As my 1996 paper on electronic money in Europe also made somewhat clear, the regulatory approach of the times was captured by the banks, for the banks, of the banks. The intention of the 1994 directive was to stop new entrants in payments, and it did that quite well. So much so that they got walloped by the inevitable (and predicted) takeover by foreign entrants such as Paypal.
However, regulators in the European Commission working group(s) seemed not to like the result. They tried again in 2000 to open up the market, but again didn't quite realise what a barrier was, and didn't spot the clauses slipped in that killed the market. However, in 2008 they got it more right with the latest eMoney directive, which actually has a snowball's chance in hell. Banking regulations and the PSD (Payment Services Directive) also opened things up a lot, which explains why Virgin and Tesco today have their licence.
One more iteration and this might make the sector competitive...
Then, over on the Economist, an article on task markets
Over the past few years a host of fast-growing firms such as Elance, oDesk and LiveOps have begun to take advantage of “the cloud”—tech-speak for the combination of ubiquitous fast internet connections and cheap, plentiful web-based computing power—to deliver sophisticated software that makes it easier to monitor and manage remote workers. Maynard Webb, the boss of LiveOps, which runs virtual call centres with an army of over 20,000 home workers in America, says the company’s revenue exceeded $125m in 2009. He is confidently expecting a sixth year of double-digit growth this year.
Although numerous online exchanges still act primarily as brokers between employers in rich countries and workers in poorer ones, the number of rich-world freelancers is growing. Gary Swart, the boss of oDesk, says the number of freelancers registered with the firm in America has risen from 28,000 at the end of 2008 to 247,000 at the end of April.
Back in 1997, I wrote about how to do task markets, and I built a system to do it as well. The system worked fine, but it lacked a couple of key external elements, so I didn't pursue it. Quite a few companies popped up over the next decade, in successive waves, and hit the same barriers.
Those elements are partly in place these days (but still partly not) so it is unsurprising that companies are getting better at it.
And, over on this blog by Eric Rescorla, he argues against rekeying in a cryptographically secure protocol:
It's IETF time again and recently I've reviewed a bunch of drafts concerned with cryptographic rekeying. In my opinion, rekeying is massively overrated, but apparently I've never bothered to comprehensively address the usual arguments.
Which I wholly concur with, as I've fought about all sorts of agility before (See H1 and H3). Rekeying is yet another sign of a designer gone mad, on par with mumbling to the moon and washing imaginary spots from hands.
The basic argument here is that rekeying is trying to maintain a clean record of security in a connection; yet this is impossible because there will always be other reasons why the thing fails. Therefore, the application must enjoy the privileges of restarting from scratch, regardless. And, rekeying can be done then, without a problem. QED. What is sad about this argument is that once you understand the architectural issues, it has far too many knock-on effects, ones that might even put you out of a job, so it isn't a *popular argument* amongst security designers.
Oh well. But it is good to see some challenging of the false gods....
An article "Why Hawks Win," examines national security, or what passes for military and geopolitical debate in Washington DC.
In fact, when we constructed a list of the biases uncovered in 40 years of psychological research, we were startled by what we found: All the biases in our list favor hawks. These psychological impulses -- only a few of which we discuss here -- incline national leaders to exaggerate the evil intentions of adversaries, to misjudge how adversaries perceive them, to be overly sanguine when hostilities start, and overly reluctant to make necessary concessions in negotiations. In short, these biases have the effect of making wars more likely to begin and more difficult to end.
It's not talking about information security, but the analysis seems to resonate. In short, it establishes a strong claim that in a market where there is insufficient information (c.f., the market for silver bullets), we will tend to fall to a FUD campaign. Our psychological biases will carry us in that direction.
This ArsTechnica article explores what happens when the CA-supplied certificate is used as an MITM over some SSL connection to protect online-banking or similar. In the secure-lite model that emerged after the real-estate wars of the mid-1990s, consumers were told to click on their tiny padlock to check the cert:
Now, a careful observer might be able to detect this. Amazon's certificate, for example, should be issued by VeriSign. If it suddenly changed to be signed by Etisalat (the UAE's national phone company), this could be noticed by someone clicking the padlock to view the detailed certificate information. But few people do this in practice, and even fewer people know who should be issuing the certificates for a given organization.
Right, so where does this go? Well, people don't notice because they can't. Put the CA on the chrome and people will notice. What then?
A switch in CA is a very significant event. Jane Public might not be able to do something, but if a customer of Verisign's was MITM'd by a cert from Etisalat, this is something that affects Verisign. We might reasonably expect Verisign to be interested in that. As it affects the chrome, and as customers might get annoyed, we might even expect Verisign to treat this as an attack on their good reputation.
And that's why putting the brand of the CA onto the chrome is so important: it's the only real way to bring pressure to bear on a CA to get it to lift its game. Security, reputation, sales. These things are all on the line when there is a handle to grasp by the public.
When the public has no handle on what is going on, the deal falls back into the shadows. No security there; in the shadows we find audit, contracts, outsourcing. Got a problem? Shrug. It doesn't affect our sales.
So, what happens when a CA MITM's its own customer?
Even this is limited; if VeriSign issued the original certificate as well as the compelled certificate, no one would be any the wiser. The researchers have devised a Firefox plug-in that should be released shortly that will attempt to detect particularly unusual situations (such as a US-based company with a China-issued certificate), but this is far from sufficient.
Arguably, this is not an MITM, because the CA is the authority (not the subscriber) ... but exotic legal arguments aside, we clearly don't want it. When it goes on, what we need is software to detect it: whitelisting, Conspiracy, and the other ideas floating around.
And, we need the CA-on-the-chrome idea so that the responsibility aspect is established. CAs shouldn't be able to MITM other CAs. If we can establish that, with teeth, then the CA-against-itself case is far easier to deal with.
In a paper, Certified Lies: Detecting and Defeating Government Interception Attacks Against SSL, by Christopher Soghoian and Sid Stamm, there is a reasonably good layout of the problem that browsers face in delivering their "one-model-suits-all" security model. It is more or less what we've understood all these years, in that by accepting an entire root list of 100s of CAs, there is no barrier to any one of them going a little rogue.
Of course, it is easy to raise the hypothetical of the rogue CA, and even to show compelling evidence of business models (they cover much the same claims with a CA that also works in the lawful intercept business that was covered here in FC many years ago). Beyond theoretical or probable evidence, it seems the authors have stumbled on some evidence that it is happening:
The company’s CEO, Victor Oppelman confirmed, in a conversation with the author at the company’s booth, the claims made in their marketing materials: That government customers have compelled CAs into issuing certificates for use in surveillance operations. While Mr Oppelman would not reveal which governments have purchased the 5-series device, he did confirm that it has been sold both domestically and to foreign customers.
(my emphasis.) This has been a lurking problem underlying all CAs since the beginning. The flip side of the trusted-third-party concept ("TTP") is the centralised-vulnerability-party or "CVP". That is, you may have been told you "trust" your TTP, but in reality, you are totally vulnerable to it. E.g., from the famous Blackberry "official spyware" case:
Nevertheless, hundreds of millions of people around the world, most of whom have never heard of Etisalat, unknowingly depend upon a company that has intentionally delivered spyware to its own paying customers, to protect their own communications security.
Which becomes worse when the browsers insist, not without good reason, that the root list is hidden from the consumer. The problem that occurs here is that the compelled CA problem multiplies to the square of the number of roots: if a CA in (say) Ecuador is compelled to deliver a rogue cert, then that cert can be used against the subscribers of a CA in Korea, and indeed of all the other CAs. A brief examination of the ways in which CAs work, and browsers interact with CAs, leads one to the unfortunate conclusion that nobody in the CAs, and nobody in the browsers, can do a darn thing about it.
So it then falls to a question of statistics: at what point do we believe that there are so many CAs in there that the chance of getting away with a little interception becomes too enticing? The square law says that with, say, 100 CAs the exposure is 100 squared, or 10,000 times the chance of any single intercept. Having reached that number, the suggestion is that the temptation is resisted in all but around 0.01% of circumstances. OK, pretty scratchy maths, but it does indicate that the temptation is a small but not infinitesimal number. A risk exists, in words, and in numbers.
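A slightly different way to put numbers on the same intuition (my formalisation, not the paper's square law): if each of N root CAs independently has some small chance p of being compelled or going rogue, then the chance that at least one of them is compromised is 1 - (1-p)^N. A toy calculation, with an entirely assumed p:

    // Illustrative only: p is an assumed per-CA chance, not a measured one.
    public class RootListRisk {
        public static void main(String[] args) {
            double p = 0.0001;                       // assumed chance any single CA is compelled
            for (int n : new int[] {1, 10, 100}) {
                double atLeastOne = 1.0 - Math.pow(1.0 - p, n);
                System.out.printf("CAs = %3d   P(at least one compelled) = %.4f%n", n, atLeastOne);
            }
        }
    }

The per-CA risk stays tiny; it is the size of the root list that does the damage.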
One CA can hide amongst the crowd, but there is a little bit of a fix to open up that crowd. This fix is to simply show the user the CA brand, to put faces on the crowd. Think of the above, and while it doesn't solve the underlying weakness of the CVP, it does mean that the mathematics of squared vulnerability collapses. Once a user sees their CA has changed, or has a chance of seeing it, hiding amongst the crowd of CAs is no longer as easy.
Why then do browsers resist this fix? There is one good reason, which is that consumers really don't care and don't want to care. In more particular terms, they do not want to be bothered by security models, and the security displays in the past have never worked out. Gerv puts it this way in comments:
Security UI comes at a cost - a cost in complexity of UI and of message, and in potential user confusion. We should only present users with UI which enables them to make meaningful decisions based on information they have.
They love Skype, which gives them everything they need without asking them anything. That should therefore be reasonable enough motive to follow those lessons, but the context is different. Skype is in the chat & voice market, and the security model it has chosen is well in excess of the needs there. Browsing, on the other hand, is in the credit-card shopping and Internet online banking market, and the security model imposed by the mid-1990s evolution of uncontrollable forces has now broken before the onslaught of phishing & friends.
In other words, for browsing, the writing is on the wall. Why then don't they move? In a perceptive footnote, the authors also ponder this conundrum:
3. The browser vendors wield considerable theoretical power over each CA. Any CA no longer trusted by the major browsers will have an impossible time attracting or retaining clients, as visitors to those clients’ websites will be greeted by a scary browser warning each time they attempt to establish a secure connection. Nevertheless, the browser vendors appear loathe to actually drop CAs that engage in inappropriate behavior — a rather lengthy list of bad CA practices that have not resulted in the CAs being dropped by one browser vendor can be seen in [6].
I have observed this for a long time now, predicting phishing until it became the flood of fraud. The answer is, to my mind, a complicated one which I can only paraphrase.
For Mozilla, the reason is simple lack of security capability at the *architectural* and *governance* levels. Indeed, it should be noticed that this lack of capability is their policy, as they deliberately and explicitly outsource big security questions to others (known as the "standards groups" such as IETF's RFC committees). As they have little of the capability, they aren't in a good position to use the power, no matter whether they would want to or not. So, it only needs a mildly argumentative approach on the part of the others, and Mozilla is restrained from its apparent power.
What then of Microsoft? Well, they certainly have the capability, but they have other fish to fry. They aren't fussed about the power because it doesn't bring them anything of use. As a corporation, they are strictly interested in shareholders' profits (by law and by custom), and as nobody can show them a bottom-line improvement from the CA & cert business, no interest is generated. And without that interest, it is practically impossible to get the many and various groups within Microsoft to move.
Unlike with Mozilla, my view of Microsoft is much more "external", based on many observations that have never been confirmed internally. However, it seems to fit: all of their security work has been directed at market interests. Hence, for example, their work in identity & authentication (.net, infocard, etc.) was all directed at creating the platform for capturing the future market.
What is odd is that all CAs agree that they want their logo on the browser real estate. Big and small. So one would think that there was a unified approach to this, and it would eventually win the day; the browser wins for advancing security, the CAs win because their brand investments now make sense. The consumer wins for both reasons. Indeed, early recommendations from the CABForum, a closed group of CAs and browsers, had these fixes in there.
But these ideas keep running up against resistance, and none of the resistance makes any sense. And that is probably the best way to think of it: the browsers don't have a logical model for where to go for security, so anything leaps the bar when the level is set to zero.
Which all leads to a new group of people trying to solve the problem. The authors present their model as this:
The Firefox browser already retains history data for all visited websites. We have simply modified the browser to cause it to retain slightly more information. Thus, for each new SSL protected website that the user visits, a Certlock enabled browser also caches the following additional certificate information:
A hash of the certificate.
The country of the issuing CA.
The name of the CA.
The country of the website.
The name of the website.
The entire chain of trust up to the root CA.
When a user re-visits a SSL protected website, Certlock first calculates the hash of the site’s certificate and compares it to the stored hash from previous visits. If it hasn’t changed, the page is loaded without warning. If the certificate has changed, the CAs that issued the old and new certificates are compared. If the CAs are the same, or from the same country, the page is loaded without any warning. If, on the other hand, the CAs’ countries differ, then the user will see a warning (See Figure 3).
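For the algorithmically minded, here is a compact sketch of that comparison rule as I read it -- mine, not the authors' plug-in code -- with all names hypothetical:

    import java.util.HashMap;
    import java.util.Map;

    public class CertlockSketch {
        // What we remember about the last certificate seen for a site.
        record Seen(String certHash, String caName, String caCountry) {}

        private final Map<String, Seen> cache = new HashMap<>();

        /** Returns true if the page may load without a warning. */
        boolean accept(String site, Seen now) {
            Seen prior = cache.put(site, now);       // record this visit, recall the previous one
            if (prior == null) return true;          // first visit: nothing to compare against
            if (prior.certHash().equals(now.certHash())) return true;   // certificate unchanged
            // Certificate changed: same CA or same country passes; otherwise warn.
            return prior.caName().equals(now.caName())
                || prior.caCountry().equals(now.caCountry());
        }

        public static void main(String[] args) {
            CertlockSketch browser = new CertlockSketch();
            browser.accept("example.com", new Seen("aaaa", "VeriSign", "US"));
            // A later visit with a certificate issued from another country trips the warning:
            System.out.println(browser.accept("example.com", new Seen("bbbb", "Etisalat", "AE")));
        }
    }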
This isn't new. The authors credit recent work, but no further back than a year or two. Which I find sad because the important work done by TrustBar and Petnames is pretty much forgotten.
But it is encouraging that the security models are battling it out, because it gets people thinking, and challenging their assumptions. Only actual produced code, and garnered market share is likely to change the security benefits of the users. So while we can criticise the country approach (it assumes a sort of magical touch of law within the countries concerned that is already assumed not to exist, by dint of us being here in the first place), the country "proxy" is much better than nothing, and it gets us closer to the real information: the CA.
From a market for security pov, it is an interesting period. The first attempts around 2004-2006 in this area failed. This time, the resurgence seems to have a little more steam, and possibly now is a better time. In 2004-2006 the threat was seen as more or less theoretical by the hoi polloi. Now however we've got governments interested, consumers sick of it, and the entire military-industrial complex obsessed with it (both in participating and fighting). So perhaps the newcomers can ride this wave of FUD in, where previous attempts drowned far from the shore.
Someone told me last night that payments would get better when done on the phone! Yessssss.... how does one comment on that? And today I spotted this:
Everyone's getting real excited about Jack Dorsey, the co-founder of Twitter, and his new payments application for the iPhone called Square.
OK, except we've seen it all before. Remember Paypal? No, not the one you now know, but the original one, on a PDA. So the process is being repeated. First, do the stuff that looks sexy on the platform that gets you the buzz-appeal. And then, move to where the market is: merchants who pay fees. And, here's where the founder is being more than normally forthright:
... the biggest friction point around accepting credit cards is actually getting a merchant account. So being able to become someone who actually can accept a payment off a credit card, off a prepaid card, off a debit card, is actually quite difficult, and it takes a long time – it's a very complicated process. So we wanted to get people in and accepting this new form of payments, and this very widely used form of payments in under 10 seconds.
Exactly the same. And the one before that -- First Virtual :) And I recall another after that which was popular in the media: peppercoin. And and and... So when Chris Skinner says
The thing is that Square is good for the American markets, but it is very last century because it focuses upon a card's magnetic stripe and signature for authentication. That's the way Americans pay for things but other markets have moved away from this as it is so insecure.
He's right in that it is very last century. But Skinner is concentrating on the technology, whereas Dorsey is looking at the market. Thus, maybe right conclusions, but the wrong reasons. What are the right reasons?
Last century was the century of Central Banking. One of the artifacts of this was that banks and payment systems were twinned, joined at the hip, latter granted as a favour to the former. However as we move forward, several things have loosened that tight grip. Chief amongst them, securitization, the financial crisis, financial cryptography, the cost of hardware and the rise of the net.
So, the observation of many is that the phone is now the real platform of choice, and not the Xiring, which is just an embarrassing hack in comparison. And, the institution that can couple the phone to the application in a secure and user-friendly way is the winner.
Question then is, how will this unfold? Well, it will unfold in the normal entrepreneurship fashion. Several waves of better and better payment systems will emerge from the private sector, to compete, live and die and be swallowed by banks. Hundreds of attempts every year, and one success, every year. Gradually, the experiments will pay off, literally and ideas-wise, and gradually people will read the real story about how to do it (you know where) and increase the success ratio from 1:100 to 1:10.
And, gradually, payments will stand separate from banks. It might take another 20 years, but that's short in comparison to the time it took for the dinosaurs to fade away, so just be patient.
Lynn in comments points to news that MasterCard has eased up on the PCI (Payment Card Industry) standard for merchant auditing:
But come Dec. 31, 2010, MasterCard planned to require that all Level 1 and, for the first time, Level 2 merchants, use a QSA for the annual on-site PCI assessment.
(Level 1 merchants are above 6 million transactions per year, with 352 merchants bringing in around 50% of all transactions in the USA. Level 2 merchants are from 1 to 6 million, 895 merchants and 13% of all merchants.)
Now, this rule would have cost your merchant hard money:
That policy generated many complaints from Level 2 merchants, who security experts say would have to pay anywhere from $100,000 to $1 million for a QSA’s services.
These Qualified Security Assessors (QSAs) are certified by the PCI Security Standards Council for an on-site assessment, or audit. Because of pushback, complaints, etc., MasterCard backed down:
This month, however, MasterCard pushed back the deadline by six months, to June 30, 2011. And instead of requiring use of a QSA, MasterCard will let Level 2 merchants do the assessments themselves provided they have staff attend merchant-training courses offered by the PCI Council, and each year pass a PCI Council accreditation program. Level 2 merchants are free to use QSAs if they wish. Come June 30, 2011, Level 1 merchants can use an internal auditor provided the audit staff has PCI Council training and annual accreditation.
That's you, that is. Or close enough that it hurts. Your company, being a retail merchant bringing in say 100 million dollars a year over 1 million transactions, can now save itself some $100,000 to $1 million. You can do it with your own staff as long as they go on some courses.
If a merchant with millions to billions of direct value on the line, and measurable losses of say 1% of that (handwave and duck) can choose to self-audit, why can't you?
Which reminds me to push out yet another outrageous chapter in secure protocol design. In my hypothesis #4 on Protocol Design, I claim this:
#4.3 Simplicity is Inversely Proportional to the Number of Designers
"Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has." -- Margaret Mead
Simplicity is proportional to the inverse of the number of designers. Or is it that complexity is proportional to the square of the number of designers?
Sad but true, if you look at the classic best of breed protocols like SSH and PGP, they delivered their best results when one person designed them. Even SSL was mostly secure to begin with, and it was only the introduction of PKI with its committees, world-scale identity models, digital signature laws, accountants and lawyers that sent it into orbit around Pluto. Committee-designed monsters such as IPSec and DNSSEC aren't even in the running.
Sometimes a protocol can survive a team of two, but we are taking huge risks (remember the biggest failure mode of all is failing to deliver anything). Either compromise with your co-designer quickly or kill him. Your users will thank you for either choice, they do not benefit if you are locked in a deadly embrace over the sublime but pernickety benefits of MAC-then-encrypt over encrypt-then-MAC, or CBC versus Counter-mode, or or or...
More at hypotheses on Secure Protocol Design.
From a couple of sources posted by Lynn:
The primary source was a survey run by an anti-phishing software vendor, so caveats apply. Still interesting!
For more meat on the bigger picture, see this article: Ending the PCI Blame Game. Which reads like a compressed version of this blog! Perhaps, finally, the thing that is staring the financial operators in the face has started to hit home, and they are really ready to sound the alarm.
One of the brief positive spots in the last decade was the California bill requiring that data breaches be disclosed to affected customers. It took a while, but in 2005 the flood gates opened. Now the FBI reports:
"Of the thousands of cases that we've investigated, the public knows about a handful," said Shawn Henry, assistant director for the Federal Bureau of Investigation's Cyber Division. "There are million-dollar cases that nobody knows about."
That seems to point at a super-iceberg. To some extent this is expected, because companies will search out new methods to bypass the intent of the disclosure laws. And also there is the underlying economics. As has been pointed out by many (or perhaps not many but at least me) the reputation damage probably dwarfs the actual or measurable direct losses to the company and its customers.
Companies that are victims of cybercrime are reluctant to come forward out of fear the publicity will hurt their reputations, scare away customers and hurt profits. Sometimes they don't report the crimes to the FBI at all. In other cases they wait so long that it is tough to track down evidence.
So, avoidance of disclosure is the strategy for all properly managed companies, because they are required to manage the assets of their shareholders to the best interests of the shareholders. If you want a more dedicated treatment leading to this conclusion, have a look at "the market for silver bullets" paper.
Meanwhile, the FBI reports that the big companies have improved their security somewhat, so the attackers turn their attention to smaller companies. And:
They also target corporate executives and other wealthy public figures who it is relatively easy to pursue using public records. The FBI pursues such cases, though they are rarely made public.
Huh. And this outstanding coordinated attack:
A similar approach was used in a scheme that defrauded the Royal Bank of Scotland's RBS WorldPay of more than $9 million. A group, which included people from Estonia, Russia and Moldova, has been indicted for compromising the data encryption used by RBS WorldPay, one of the leading payment processing businesses globally. The ring was accused of hacking data for payroll debit cards, which enable employees to withdraw their salaries from automated teller machines. More than $9 million was withdrawn in less than 12 hours from more than 2,100 ATMs around the world, the Justice Department has said.
2,100 ATMs! Worldwide! That leaves that USA gang, with its mere 50 ATMs, looking somewhat kindergarten. No doubt about it, we're now talking serious networked crime, and I'm not referring to the Internet but to the network of collaborating economic agents.
Compromising the data encryption, even. Anyone know the specs? These are important numbers. Did I miss this story, or does it prove the FBI's point?
Around three weeks ago, I had a data disaster. In a surprise attack, two weeks' worth of my SQL DATABASE was wiped out. Right after my FIRST weekend demo of some new code. The horror!
On that Monday morning I went into shell-shock for hours as I tried to trace where the results of the demo -- the very first review of my panzer code -- had disappeared to. By 11:00 there was no answer, and finger-of-blame pointed squarely at some SQL database snafu. The decision was reached to replace the weaponry with tried and trusty blades of all wars previous: flat files, the infantry of data. By the end of the day, the code was written to rebuild the vanguard from its decimated remains, and the next day, work-outs and field exercises proved the results. Two tables entirely replaced, reliably.
That left the main body, a complex object split across many tables, and the rearguard of various sundry administrative units. It took another week to write the object saving & restoring framework, including streaming, model-objects along the lines of MVC for each element, model-view-controller conversions and unit-testing. (I need a name for this portable object pattern. It came from SOX and I didn't think much of it at the time, but it seems nobody else does it.) Then, some days of unit tests, package tests, field tests, and so forth. Finally, 4-5 days of application re-working to use object database methods, not SQL.
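For the curious, a rough sketch of that portable object idea -- assuming nothing about the real SOX/Webfunds classes, and using a hypothetical Payment object -- is that each model object knows how to stream itself out and rebuild itself, so a flat file per object stands in for the SQL layer:

    import java.io.*;

    class Payment {                                  // hypothetical model object
        long id;
        String payee;
        long cents;

        void writeTo(DataOutputStream out) throws IOException {
            out.writeLong(id);
            out.writeUTF(payee);
            out.writeLong(cents);
        }

        static Payment readFrom(DataInputStream in) throws IOException {
            Payment p = new Payment();
            p.id = in.readLong();
            p.payee = in.readUTF();
            p.cents = in.readLong();
            return p;
        }

        void save(File f) throws IOException {       // one object, one flat file
            try (DataOutputStream out = new DataOutputStream(new FileOutputStream(f))) {
                writeTo(out);
            }
        }

        static Payment load(File f) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
                return readFrom(in);
            }
        }
    }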
16 days later, up and going. The army is on the march; SQL is targeted, acquired, destroyed. Defeated, wiped off the map, no longer a blot on the territory of my application. 2 days of mop-up and I'm back to the demo.
Why go on a holy crusade against SQL? There are several motives for this war:
And then there's the interface. Let us not shame the mere technical _casus belli_ above, let us put the evil that is SQL in a section of abomination, all of its own:
SQL is in the top 5 of the computing world's most awful anachronisms. It's right up there with ASN.1, X.509, LDAP, APL, and other embarrassments to the world of computer science. In this case, there is one reason why SQL stands out like the sideways slice of death of a shamed samurai: data! These things, SQL included, were all designed when data was king, when we all had to bow before the august power of our corporate bytes, while white-suited priests inserted the holy decks and the tapes of glory into bright shining temples of the mainframe of enlightenment.
But those imperial times are over. The false data-is-god was slain, discarded and buried, in the glorious & bloody revolution of the Object, that heathen neologism that rose up and slew and enslaved data during the early 1980s. Data-only is slain, data is dead. Data is captured, enslaved, owned. It is now objects, objects, objects. Data is an immoral ghost of its former self, when let out from its rightful context of semantic control.
These are the reasons why I leapt to the field to do battle to the death with the beast that is SQL. My report from the field is as follows:
I continue to mop up. What's the bottom line? Control. The application controls the objects and the objects control their data.
So what went wrong with SQL? Fundamentally it was designed in a day when data was seriously different. Those days are gone, now data is integrally related to code. It's called "object oriented" and even if you don't know how to do it, it is how it is doing it to you. The object owns the data, not the database. Data seen naked is an abomination, and SQL is just rags on a beast; it's still a beast.
Sadly, the world is still a decade or two away from this. And, to be fair, hattip to Jeroen van Gelderen for the OO database he designed and built for Webfunds. Using that was a lesson in how much of a delight it was to be OO all the way down to the disk-byte.
The decision to conduct a war on drugs was inevitably a decision to hollow-out Mexico. The notion of hollowing-out states is a time-honoured tradition in the Great Game, the way you control remote and wild places. The essential strategy is that you remove the institutions that keep places strong and stable, and bring them to a chaos which then keeps the countries fighting each other.
While they fight each other they are easier to control and extract value from. This is the favourite conspiracy theory behind the Middle East and the famous Kissinger Deal: the Sheiks are propped up and given control of weak states as long as they trade their oil in dollars, and use the money to buy American goods. Of course, we can only speculate about these details, and sometimes things look a little loose.
There are weaknesses in the strategy. Obviously, we are playing with fire when hollowing out a state ... so there is quite a lot of danger to the nearby states. (Which of course leads to the next part of the strategy, to play fire against fire and undermine an entire region.)
Which brings us to the War on Drugs and the decision to place Mexico into the role of hollowed-out state. John Robb points to this article:
Beheadings and amputations. Iraqi-style brutality, bribery, extortion, kidnapping, and murder. More than 7,200 dead - almost double last year's tally - in shoot-outs between federales and often better-armed drug cartels. This is modern Mexico, whose president, Felipe Calderón, has been struggling since 2006 to wrest his country from the grip of four powerful cartels and their estimated 100,000 foot soldiers.
So, quite obviously if one understands the strategy, don't do this nearby. Do it far away. Reagan's famous decision to do this must have been taken on one of his less memorable days ... no matter how the decision on Mexico was taken, Reagan's chickens have now crossed the border to roost in mainland USA:
But chillingly, there are signs that one of the worst features of Mexico's war on drugs - law enforcement officials on the take from drug lords - is becoming an American problem as well. Most press accounts focus on the drug-related violence that has migrated north into the United States. Far less widely reported is the infiltration and corruption of American law enforcement, according to Robert Killebrew, a retired U.S. Army colonel and senior fellow at the Washington-based Center for a New American Security. "This is a national security problem that does not yet have a name," he wrote last fall in The National Strategy Forum Review. The drug lords, he tells me, are seeking to "hollow out our institutions, just as they have in Mexico."
Quite what is going on in these people's minds is unclear to me. The notion that it "has no name" is weird: it's the standard strategy with the standard caveat. They overdid the prescription, now the disease bounces back stronger, more immune, with a vengeance! Further, I don't actually think it is possible to ascribe this as a deliberate plot by the Mexican drug lords, because it is already present in the USA:
Experts disagree about how deep this rot runs. Some try to downplay the phenomenon, dismissing the law enforcement officials who have succumbed to bribes or intimidation from the drug cartels as a few bad apples. Peter Nuñez, a former U.S. attorney who lectures at the University of San Diego, says he does not believe that there has been a noticeable surge of cartel-related corruption along the border, partly because the FBI, which has been historically less corrupt than its state and local counterparts, has significantly ratcheted up its presence there. "It's harder to be as corrupt today as locals were in the 1970s, when there wasn't a federal agent around for hundreds of miles," he says.
But Jason Ackleson, an associate professor of government at New Mexico State University, disagrees. "U.S. Customs and Border Protection is very alert to the problem," he tells me. "Their internal investigations caseload is going up, and there are other cases that are not being publicized." While corruption is not widespread, "if you increase the overall number of law enforcement officers as dramatically as we have" - from 9,000 border agents and inspectors prior to 9/11 to a planned 20,000 by the end of 2009 - "you increase the possibility of corruption due to the larger number of people exposed to it and tempted by it." Note, too, that Drug Enforcement Agency data suggest that Mexican cartels are operating in at least 230 American cities.
By that I mean, the drug situation has already corrupted large parts of the USA governance structure. I've personally heard of corruption stories in banks, politics, police and as far up the pecking order as FINCEN, intel agencies and other powerful agencies. As an outside observer it looks to me like they've made their peace with the drugs a long time ago, heaven knows what it looks like to a real insider.
So I see a certain sense of hubris in these writings. It feels to me as though the professional journalist did not want to talk about the corruption that has always been there (e.g., how else did the stuff get distributed before?). What seems to be happening is that now that Mexico is labelled in the serious press (*) as hollowed-out, it has become easier to talk about the problem in mainstreet USA because we can cognitively blame the Mexicans. Indeed, the title of the piece is The Mexicanization of American Law Enforcement:
And David Shirk, director of the San Diego-based Trans-Border Institute and a political scientist at the University of San Diego, says that recent years have seen an "alarming" increase in the number of Department of Homeland Security personnel being investigated for possible corruption. "The number of cases filed against DHS agents in recent years is in the hundreds," says Shirk. "And that, obviously, is a potentially huge problem." An August 2009 investigation by the Associated Press supports his assessment. Based on records obtained under the Freedom of Information Act, court records, and interviews with sentenced agents, the AP concluded that more than 80 federal, state, and local border-control officials had been convicted of corruption-related crimes since 2007, soon after President Calderón launched his war on the cartels. Over the previous ten months, the AP data showed, 20 Customs and Border Protection agents alone had been charged with a corruption-related crime. If that pace continued, the reporters concluded, "the organization will set a new record for in-house corruption."
Well, whatever it takes. If the US-Americans have to blame the Mexican-Americans in order to focus on the real problems, that might be the cost of getting to the real solution: the end of Prohibition. Last word to Hayden, no stranger to hubris:
Michael Hayden, director of the Central Intelligence Agency under President George W. Bush, called the prospect of a narco-state in Mexico one of the gravest threats to American national security, second only to al-Qaida and on par with a nuclear-armed Iran. But the threat to American law enforcement is still often underestimated, say Christesen and other law enforcement officials.
* Mind you, I do not see how they are going to blame the Mexicans for the hollowing-out of the mainstream press. Perhaps the Canadians?
We've established that Audit isn't doing it for us (I, II, III). And that it had its material part to play in the financial crisis (IV). Or its material non-part (II). I think I've also had a fair shot at explaining why this happened (V).
I left off last time asking why the audit industry didn't move to correct these things, and especially why it didn't fight Sarbanes-Oxley as being more work for little reward. In posts that came afterwards, thanks to Todd Boyle, it is now clear that the audit industry will not stand in front of anything that allows its own house to grow. The Audit Industry is an insatiable growth machine, and this is its priority. No bad thing if you are an Auditor.
Which leaves us with a curious question: What then stops the Audit from growing without bound? What force exists to counterbalance the natural tendency of the auditor to increase the complexity, and increase the bills? Can we do SOX-1, SOX-2, SOX-3 and each time increase the cost?
Those engineers and others familiar with "systems theory" among us will be thinking in terms of feedback: all sustainable systems have positive feedback loops which encourage their growth, and negative feedback loops which stop their growth exploding out of control. In earlier posts, we identified the positive feedback loop of the insider interest. The question would then be (for the engineers) what is the negative feedback control?
Wikipedia helpfully suggests that Audit is a feedback control over organisations, but where is the feedback control over Audit? Even accepting the controversial claim that Sarbanes-Oxley delivered better reliability and quality in audits, we do know there must be a point where that quality is too expensive to pay for. So there must be a limit, and we must know when to stop paying.
And now the audit penny drops: There is no counterbalancing force!
We already established that the outside world has no view into the audit. Our chartered outsider has taken the keys to the citadel and now owns the innermost sanctums. The insider operates to the standard incentive, which is to improve his own position at the expense of the outsider; the Auditor is now the insider. Which leads to a compelling desire to increase the size, complexity and fees of Audit.
Yet the machine of audit growth has no brake. So it has no way to stop it moving from a useful position of valuable service to society to an excessive position of unsustainable drain on the public welfare. There is nothing to stop audit consuming the host it parasites off, nor is there anything that keeps even the old part of the Audit on the straight and narrow.
And this is more or less what has happened. That which was useful 30 years ago -- the opinion of financial statements, useful to educated investors -- has migrated well beyond that position into consuming the very core of the body corporate. IT security audits, industry compliance audits, quality audits, consulting engagements, market projects, manufacturing advice, and the rest of it now consume far more than their proper share.
Many others will point at other effects. But I believe this is at the core of it: the auditor promises a result for outsiders, has taken the insiders' keys and crafted a role of great personal benefit, without any external control. So it follows that it must grow, and it must drift off any useful agenda. And so it has, as we see from the financial crisis.
Which leads to a rather depressing conclusion: Audit cannot regulate itself. And we can't look to the government to deal with it, because that was part & parcel of our famous financial crisis. Indeed, the agencies have their hands full right now making the financial crisis worse, we hardly want to ask them to fix this mess. Today's evidence of agency complicity is only just more added to a mountain of depression.
Nobody likes criminals. Even criminals don't like criminals; they are unfair competition.
So it is with some satisfaction that our civilisation has worked for a thousand years to suppress the criminal within, going back to the Magna Carta, where the institution of the monarch was first separated from both the money-making classes and the criminal classes. Over time, this genesis was developed to create the rights of the people to hold assets, and a government firmly oriented to defending those rights.
One of those hallowed principles was that of consolidated revenue. This boring, dusty old thing was a foundation for honest government because it stopped any particular agency from becoming a profitable affair. That is, no longer government for the people, but one of the money making or money stealing classes mentioned above.
Consolidated Revenue is really simple: all monies collected go to the Treasury and are from there distributed according to the budget process. Hence, all monies collected, for whatever purpose, are done so on a policy basis, and are checked by the entire organisation. If you have Budget Day in your country, that means the entire electorate. Which latter, if unhappy, throws the whole sorry group out on the streets every electoral cycle, and puts an entirely new group in to manage the people's money.
This simple rule separates the government from the profit-making classes and the criminal classes. Break it at your peril.
Which brings us to the FATF, the rot within modern civilisation. This Paris-based body with the soft and safe title of "Financial Action Task Force" deals with something called money laundering. Technically, money laundering exists and there is little dispute about this; criminals need a way to turn their ill-gotten gains into profit. When criminals get big, they need to turn a lot of bad money into good money. So part of the game for the big boys was to set up large businesses that could wash a lot of money. It is called laundering, or washing, because the first large-scale money-cleansing businesses were laundries or launderettes: shops with coin-operated washing machines, which took lots and lots of cash, in a more or less invisible fashion. Etc etc, this is all well known, undisputed, a history full of colour.
What is much more disputable is how to deal with it. And this is where the FATF took us on the rather short path to a long stay in hell. Their prescription was simple: seize the money, and keep it. It is indeed as simple as the law of Consolidated Revenue. Which they then proceeded to break, as well, in their innocence and goodliness.
The Economist reports on how far Britain, a leader in this race to disaster, has come in the 30 short years it has taken to unravel centuries of governance:
The public sale of criminals' property, usually through auction houses or salvage merchants, has been big business for a long time. The goods are those that crooks have acquired legitimately but with dirty money, as opposed to actual stolen property, which the police must try to reunite with its rightful owners. Half the proceeds go to the Home Office, and the rest to the police, prosecutors and courts. The bigger police forces cream off millions of pounds a year in this way (see chart).
So if a crook steals goods, the police work for the victim. But if a crook makes money by any other means, the police no longer work for the victim, but for themselves. We now have the Home Office, the prosecutors, the courts, and the humble British Bobby well incentivised to promote money laundering in all its guises. Note that the profit margin in this business is *well in excess of standard business rates of return*, so it should then be no surprise at all that the business of legal money laundering is booming:
Powers to confiscate criminals' ill-gotten gains have grown steadily. A drugs case in 1978, in which the courts were unable to strip the traffickers of £750,000 of profits, caused Parliament to pass asset-seizure laws that applied first to drug dealers, and then more widely. The 2002 Proceeds of Crime Act expanded these powers greatly, allowing courts to seize more or less anything owned by a convict deemed to have a "criminal lifestyle", and introducing a power of civil recovery, whereby assets may be confiscated through the civil courts even if their owner has not been convicted of a crime.
Everyone's happy with that of course! (Read the last two paragraphs for a good, honest middle-class belly laugh.) Of course, the normal argument is that the police are the good guys, and they do the right thing. And if you oppose them, you must be a criminal! Or, you like criminals or benefit from criminals or in some way, you are dirty like a criminal.
And such it is. This is the sort of thought level that characterizes the discussion, and is frequently brought up by supporters of the money laundering programmes. It's also remarkably similar to the rhetoric leading up to most bad wars (who said "you're either with us or against us?"), pogroms and other crimes against civilisation.
Serious students of economics and society can do better. Let's follow the money:
Since then, police cupboards have filled up fast. Confiscations of criminal proceeds in 2001-02 amounted to just £25m; in 2007-08 they were £136m, and the Home Office has set a goal of £250m for the current financial year. To meet this, new powers are planned: a bill before parliament would allow property to be seized from people who have been arrested but not yet charged, though it would still not be sold until conviction. This, police hope, will prevent criminals from disposing of their assets during the trial.
This is the standard evolution of a new product cycle in profitable business. First, mine the easy gold that is right there in front of you. Next, develop variations to increase revenues. Third, institute good management techniques to reduce wastage. The Home Office is setting planning targets for profit raising, and searching for more revenue. The government has burst its chains of public service and is now muckraking with the rest of the dirty money-grubbing corporates, and is now in a deadly embrace of profitability with the dirty criminal classes.
All because the legislature forgot the fundamental laws of governance!
Can the British electorate possibly rein in this insatiable tiger, now they've incentivised it to chase and seize profit? Probably not. But, "surely that doesn't matter," cry the middle-class masses, safe in their suburban homes? Surely the police would never cross the NEXT line and become the criminals, seizing money and assets that were not ill-gotten?
Don't be so sure. There is enough anecdotal evidence in the USA (1) that this is routine and regular. And unchallenged. It will happen in Britain, and if it goes unchallenged, the next step will become institutionalised: deliberate targeting of quasi-criminal behaviour for revenue-raising purposes. Perhaps you've already seen it: are speeding fines collected on wide open motorways, or in danger spots?
The FATF have broken the laws of civilisation, and now we are at the point where the evidence of the profit-making police-not-yet-gang is before us. The Economist's article is nearly sarcastic ... uncomfortable with this immoral behaviour, but not yet daring to name the wash within Whitehall. Reading between the lines of that article, it is both admiring of the management potential of the Home Office (should we advise them to get an MBA?), and deeply disgusted. As only an economist can be, when it sees the road to hell.
Britain stands at the cusp. What do we see when we look down?
We see Mexico, the country that Ronald Reagan hollowed out. That late great President of the USA had one massive black mark on his career, which is a cross for us all to bear, now that he's skipped off to heaven.
Ronald Reagan created the War on Drugs, which was America's part in the FATF alliance. It was called "War" for marketing reasons: nobody criticises the patriotic warriors, nobody dare challenge their excesses. This was another stupidity, another breach of the natural laws of civilisation (separation of powers, or in the USA, this might be better known as the destruction of the Posse Comitatus Act). This process took the "War" down south of the border, and turned the Mexican political parties, judiciary, police force and other powerful institutions into victims of Ronald Reagan's "War". From a police perspective, Mexico was already hollowed out last decade; what we are seeing in the current decade is the hollowing out of the Army: the carving up of battalions and divisions into the various gangs that control the flow of hot-demand items from the poor south to the rich north of the Americas.
When considering these issues, and our Future in Mexico, there are several choices.
The really sensible one would be to shut down the FATF and its entire disastrous experiment. Tar & feather anyone involved with them, run them out of town backwards on a donkey, preferably to a remote spot in the Pacific, with or without a speck of land. The FATF are irreparable, convinced that they are the good guys, and can do no wrong. But politically, this is unlikely, because it would damn the politicians of a generation for adopting childish logic while on duty before the public. And as the FATF's influence is deep within the regulatory and financial structure, everyone will be reminded that "you backed us then, you don't want people to think you're wrong..." Nobody will admit the failure, nobody will say «¡Discúlpanos!» ("forgive us!") to the Mexican pueblo for depriving them of honest policing and a civilised life.
The simple choice is to go back to our civilised roots and impose the principle of Consolidated Revenue back into law. In this model, the Home Office should have its business permit taken away from it, and budget control be restored. The Leicestershire Constabulary should be raided by Treasury and have its eBay and Paypal accounts seized, like any other financial misfits. This is the Al Capone solution, which nobody is comfortable with, because it admits we can't deal with the problem properly. But it does seem to be the only practical solution of a very bad lot.
Or we choose to go to Mexico. Step by step, slowly but in our lifetimes. It took 20 years to hollow out Mexico, we have a bit longer in other countries, because the institutions are staffed by stiffer, better educated people.
But not that long. That is the thing about the natural laws: breach them, and the policing power of the economy will come down on you eventually. The margins on the business of sharing out ill-gotten gains are way stronger than any principled approach to policing or governance can deal with. I'd give it another 20 years for Britain to get to where Mexico is now.
Stephen Mason reports that MITB is in court:
A gang of internet fraudsters used a sophisticated virus to con members of the public into parting with their banking details and stealing £600,000, a court heard today. Once the 'malicious software' had infected their computers, it waited until users logged on to their accounts, checked there was enough money in them and then insinuated itself into cash transfer procedures.
(also on El Reg.) This breaches the 2-factor authentication system commonly in use because (a) the trojan controls the user's PC, and (b) the authentication scheme that was commonly pushed out over the last decade or so only authenticates the user, not the transaction. So, as the trojan now controls the PC, it is the user. And the real user happily authenticates itself, and the trojan, and the trojan's transactions, and even lies about it!
Numbers, more than ordinarily reliable because they have been heard in court:
'In fact as a result of this Trojan virus fraud very many people - 138 customers - were affected in this way with some £600,000 being fraudulently transferred.' 'Some of that money, £140,000, was recouped by NatWest after they became aware of this scam.'
This is called Man-in-the-browser, which is a subtle reference to SSL's vaunted protection against Man-in-the-middle. Unfortunately, several things went wrong in this area of security: Adi's 3rd law of security says the attacker always bypasses; one of my unnumbered aphorisms has it that the node is always the threat, never the wire; and finally, the extraordinary success of SSL in the mindspace war blocked any attempts to fix the essential problems. SSL is so secure that nobody dares challenge browser security.
The MITB was first reported in March 2006 and sent a wave of fear through the leading European banks. If customers lost trust in online banking, this would turn their support / branch employment numbers on their heads. So they rapidly (for banks) developed a counter-attack by moving their confirmation process over to the SMS channel of users' phones. The Man-in-the-browser cannot leap across that air-gap, and the MITB is more or less defeated.
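The essence of the fix is that the second channel confirms the transaction, not just the user. A minimal sketch -- mine, not any particular bank's system, with all details hypothetical -- of what such an SMS might carry:

    public class SmsConfirmation {
        // The one-time code is bound to these exact details; replying with it
        // authorises this transfer only, not whatever the infected browser submitted.
        static String confirmationText(String payee, long cents, String oneTimeCode) {
            return String.format("Pay %d.%02d to %s? Reply with code %s to authorise this transfer.",
                    cents / 100, cents % 100, payee, oneTimeCode);
        }

        public static void main(String[] args) {
            System.out.println(confirmationText("ACME Supplies Ltd", 123_450, "728-115"));
        }
    }

The trojan can rewrite the screen, but it cannot rewrite the text message, so a customer who reads the SMS before replying catches the substitution.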
European banks tend to be proactive when it comes to security, and hence their losses are minuscule. Reported recently was something like €400k for a smaller country (7 million people?) for an entire year, for all banks. This one case in the UK is double that, reflecting that British and USA banks are reactive when it comes to security. Although they knew about it, they ignored it.
This could be called the "prove-it" school of security, and it has merit. As we saw with SSL, there never really was much of a threat on the wire; and when it came to the node, we were pretty much defenceless (although a lot of that comes down to one factor: Microsoft Windows). So when faced with FUD from the crypto / security industry, it is very very hard to separate real dangers from made up ones. I felt it was serious; others thought I was spreading FUD! Hence Philipp Güring's paper Concepts against Man-in-the-Browser Attacks, and the episode formed fascinating evidence for the market for silver bullets. The concept is now proven right in practice, but it didn't turn out how we predicted.
What is also interesting is that we now have a good cycle timeline: March 2006 is when the threat first crossed our radars. September 2009 it is in the British courts.
Postscript. More numbers from today's MITB:
A next-generation Trojan recently discovered pilfering online bank accounts around the world kicks it up a notch by avoiding any behavior that would trigger a fraud alert and forging the victim's bank statement to cover its tracks. The so-called URLZone Trojan doesn't just dupe users into giving up their online banking credentials like most banking Trojans do: Instead, it calls back to its command and control server for specific instructions on exactly how much to steal from the victim's bank account without raising any suspicion, and to which money mule account to send it the money. Then it forges the victim's on-screen bank statements so the person and bank don't see the unauthorized transaction.
Researchers from Finjan found the sophisticated attack, in which the cybercriminals stole around 200,000 euro per day during a period of 22 days in August from several online European bank customers, many of whom were based in Germany....
"The Trojan was smart enough to be able to look at the [victim's] bank balance," says Yuval Ben-Itzhak, CTO of Finjan... Finjan found the attackers had lured about 90,000 potential victims to their sites, and successfully infected about 6,400 of them. ...URLZone ensures the transactions are subtle: "The balance must be positive, and they set a minimum and maximum amount" based on the victim's balance, Ben-Itzhak says. That ensures the bank's anti-fraud system doesn't trigger an alert, he says.
And the malware is making the decisions -- and alterations to the bank statement -- in real time, he says. In one case, the attackers stole 8,576 euro, but the Trojan forged a screen that showed the transferred amount as 53.94 euro. The only way the victim would discover the discrepancy is if he logged into his account from an uninfected machine.
To summarise previous posts, what do we know? We know so far that the hallowed financial Audit doesn't seem to pick up impending financial disaster, on either a micro-level like Madoff (I) or a macro-level like the financial crisis (II). We also know we don't know anything about it (III), trying harder didn't work (II), and in all probability the problem with Audit is systemic (IV). That is, likely all of them, the system of Audits, not any particular one. The financial crisis tells us that.
Notwithstanding its great brand, Audit does not deliver. How could this happen? Why did our glowing vision of Audit turn out to be our worst nightmare? Global financial collapse, trillions lost, entire economies wallowing in the mud and slime of bankruptcy shame?
Let me establish the answer to this by means of several claims.
First, complexity. Consider what audit firm Ernst & Young told us a while back:
The economic crisis has exposed inherent weaknesses in the risk management practices of banks, but few have a well-defined vision of how to tackle the problems, according to a study by Ernst & Young. Of 48 senior executives from 36 major banks around the world questioned by Ernst & Young, just 14% say they have a consolidated view of risk across their organisation. Organisational silos, decentralisation of resources and decision-making, inadequate forecasting, and lack of transparent reporting were all cited as major barriers to effective enterprise-wide risk management.
The point highlighted above is this: this situation is complex! In essence, the process is too complex for anyone to appreciate from the outside. I don't think this point is so controversial, but the next ones are.
My second claim is that in any situation, stakeholders work to improve their own position. To see this, think about the stakeholders you work with. Examine every decision that they take. In general, every decision that reduces the benefit to them will be fiercely resisted, and any decision that increases the benefit to them will be fiercely supported. Consider what competing audit firm KPMG says:
A new study put out by KPMG, an audit, tax and advisory firm, said that pressure to do "whatever it takes" to achieve business goals continues as the primary driver behind corporate fraud and misconduct. Of more than 5,000 U.S. workers polled this summer, 74 percent said they had personally observed misconduct within their organizations during the prior 12 months, unchanged from the level reported by KPMG survey respondents in 2005. Roughly half (46 percent) of respondents reported that what they observed "could cause a significant loss of public trust if discovered," a figure that rises to 60 percent among employees working in the banking and finance industry.
This is human nature, right? It happens, and it happens more than we like to admit. I suggest it is the core and prime influence, and won't bother to argue it further, although if you are unsatisfied with this claim, I suggest you read Lewis on The End (warning: it's long).
As we are dealing with complexity, even insiders will not find it easy to identify the nominal, original benefit to end-users. And, if the insiders can't identify the benefit, they can't put it above their own benefit. Claims one and two, added together, give us claim three: over time, all the benefit will be transferred from the end-users to the insiders. Inevitably. And, it is done naturally, subconsciously and legally.
What does this mean to Audits? Well, Auditors cannot change this situation. If anything, they might make it worse. Consider these issues:
As against all that complexity and all that secrecy, there is a single Auditor, delivering a single report. To you. A single, rather small report, set against a frequent and, in sum, huge series of bills.
So in all this complexity, although the Auditor might suggest that he can reduce the complexity by compressing it all into one single "opinion", the complexity actually works to the ultimate benefit of the Auditor. Not to your benefit. It is to the Auditor's advantage to increase the complexity, and because it is all secret and you don't understand it anyway, any negative effect on you is not observable. Given our second claim, this is indeed what they do.
Say hello to SOX, a doubling of the complexity, and a doubling of your auditor's invoice. Say thank you, Senator Sarbanes and Congressman Oxley, and we hope your pensions survive!
Claim Number 4: The Auditor has become the insider. Although he is the one you perceive to be an outsider, protecting your interests, in reality, the Auditor is only a nominal, pretend outsider. He is in reality a stakeholder who was given the keys to become an insider a long time ago. Is there any surprise that, with the passage of time, the profession has moved to secure its role? As stakeholder? As insider? To secure the benefit to itself?
Over time, the noble profession of Auditing has moved against your interests. Once, it was a mighty independent observer, a white knight riding forth to save your honour, your interest, your very patrimony. Audits were penetrating and meticulous!
Now, the auditor is just another incumbent stakeholder, another mercenary for hire. Test this: did any audit firm fight the rise of Sarbanes-Oxley as being unnecessary, overly costly and not delivering value for money to clients? Does any audit firm promote a product that halves the price? Halves the complexity? Has any audit firm investigated the relationship between the failed banks and the failed audits over those banks? Did any audit firm suggest that reserves weren't up to a downturn? Has any audit firm complained that mark-to-market depends on a market? Did any auditor insist on stress testing? Has ... oh never mind.
I'm honestly interested in this question. If you know the answer, post it in comments! With luck, we can change the flow of this entire research, which awaits ... the NEXT post.
It's terrifically cliched to say it these days, but the net is one of the great engineering marvels of science. The Economist reports it as 40 years old:
Such contentious issues never dawned on the dozen or so engineers who gathered in the laboratory of Leonard Kleinrock (pictured below) at the University of California, Los Angeles (UCLA) on September 2nd, 1969, to watch two computers transmit data from one to the other through a 15-foot cable. The success heralded the start of ARPANET, a telecommunications network designed to link researchers around America who were working on projects for the Pentagon. ARPANET, conceived and paid for by the defence department’s Advanced Research Projects Agency (nowadays called DARPA), was unquestionably the most important of the pioneering “packet-switched” networks that were to give birth eventually to the internet.
Right, ARPA funded a network, and out of that emerged the net we know today. Bottom-up, not top-down like the European competitor, OSI/ISO. Still, it wasn't about doing everything from the bottom:
The missing link was supplied by Robert Kahn of DARPA and Vinton Cerf at Stanford University in Palo Alto, California. Their solution for getting networks that used different ways of transmitting data to work together was simply to remove the software built into the network for checking whether packets had actually been transmitted—and give that responsibility to software running on the sending and receiving computers instead. With this approach to "internetworking" (hence the term "internet"), networks of one sort or another all became simply pieces of wire for carrying data. To packets of data squirted into them, the various networks all looked and behaved the same.
I hadn't realised that this lesson is so old, but that makes sense. It is a lesson that will echo through time, doomed to be re-learnt over and over again, because it is so uncomfortable: The application is responsible for getting the message across, not the infrastructure. To the extent that you make any lower layer responsible for your packets, you reduce reliability.
This subtlety -- knowing what you can push down into the lower layers, and what you cannot -- is probably one of those things that separates the real engineers from the journeymen. The wolves from the sheep, the financial cryptographers from the Personal-Home-Pagers. If you thought TCP was reliable, you may count yourself amongst the latter, the sheepish millions who believed in that myth, which partly got us to the security mess we are in today. (Relatedly, it seems cloud computing has the same issue.)
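As a minimal sketch of that lesson (hypothetical transport API, Python): the application treats whatever sits below it as nothing more than a wire, and keeps resending until its own end-to-end acknowledgement comes back.

```python
import hashlib
import time

def send_until_acknowledged(transport, message: bytes, retries: int = 5) -> bool:
    # The application owns delivery. Whatever the lower layer promises, the
    # only acknowledgement that counts is the peer application echoing the
    # digest of what it actually processed.
    digest = hashlib.sha256(message).hexdigest()
    for attempt in range(retries):
        transport.send(message)              # may drop, duplicate or reorder
        ack = transport.receive(timeout=2)   # hypothetical call; may return None on timeout
        if ack == digest:
            return True                      # the far-end application saw it
        time.sleep(2 ** attempt)             # back off and resend
    return False                             # the application decides what failure means
```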
Curiously, though, from the rosy eyed view of today, it is still possible to make the same layer mistake. Gunnar reported on the very same Vint Cerf saying today (more or less):
Internet Design Opportunities for Improvement: There's a big Gov 2.0 summit going on, which I am not at, but at the event apparently John Markoff asked Vint Cerf the following question: "what would you have designed differently in building the Internet?" Cerf had one answer: "more authentication"
I don't think so. Authentication, or authorisation or any of those other shuns is again something that belongs in the application. We find it sits best at the very highest layer, because it is a claim of significant responsibility. At the intermediate layers you'll find lots of wannabe packages vying for your corporate bux:
* IP
* IP Password
* Kerberos
* Mobile One Factor Unregistered
* Mobile Two Factor Registered
* Mobile One Factor Contract
* Mobile Two Factor Contract
* Password
* Password Protected transport
* Previous Session
* Public Key X.509
* Public Key PGP
* Public Key SPKI
* Public Key XML Digital Signature
* Smartcard
* Smartcard PKI
* Software PKI
* Telephony
* Telephony Nomadic
* Telephony Personalized
* Telephony Authenticated
* Secure remote password
* SSL/TLS Client Authentication
* Time Sync Token
* Unspecified
and that's just in SAML! "Holy protocol hodge-podge, Batman!" says Gunnar, and he's not often wrong.
Indeed, as Adam pointed out, the net works in part because it deliberately shunned the auth:
The packet interconnect paper ("A Protocol for Packet Network Intercommunication," Vint Cerf and Robert Kahn) was published in 1974, and says "These associations need not involve the transmission of data prior to their formation and indeed two associates need not be able to determine that they are associates until they attempt to communicate."
So what was Vint Cerf getting at? He clarified in comments to Adam:
The point is that the current design does not have a standard way to authenticate the origin of email, the host you are talking to, the correctness of DNS responses, etc. Does this autonomous system have the authority to announce these addresses for routing purposes? Having standard tools and mechanisms for validating identity or authenticity in various contexts would have been helpful.
Right. The reason we don't have standard ways to do this is because it is too hard a problem. There is no answer to what it means:
people like me could and did give internet accounts to (1) anyone our boss said to and (2) anyone else who wanted them some of this internet stuff and wouldn't get us in too much trouble. (Hi S! Hi C!)
which therefore means, it is precisely and only whatever the application wants. Or, if your stack design goes up fully past layer 7 into the people layer, like CAcert.org, then it is what your boss wants. So, Skype has it, my digital cash has it, Lynn's X959 has it, and PGP has it. IPSec hasn't got it, SSL hasn't got it, and it looks like SAML won't be having it, in truck-loads :) Shame about that!
Digital signature technology can help here but just wasn't available at the time the TCP/IP protocol suite was being standardized in 1978.
(As Gunnar said: "Vint Cerf should let himself off the hook that he didn't solve this in 1978.") Yes, and digital signature technology is another reason why modern clients can be designed with it, built in and aligned to the application. But not "in the Internet" please! As soon as the auth stuff is standardised or turned into a building block, it has a terrible habit of turning into treacle. Messy brown sticky stuff that gets into everything, slows everyone down and gives young people an awful insecurity complex derived from pimples.
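For illustration only, here is a minimal Python sketch (using the third-party cryptography package) of what "aligned to the application" looks like: the business message itself is signed and verified by the applications at each end, regardless of what the transport below did or didn't do.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # belongs to the user/application, not the pipe
verify_key = signing_key.public_key()        # held by the counterparty's application

message = b"pay 100 units to alice, ref 42"  # the business-level instruction
signature = signing_key.sign(message)        # authorisation made at the application layer

try:
    verify_key.verify(signature, message)    # receiver checks before acting on it
    print("authorised: process it")
except InvalidSignature:
    print("reject: not authorised")
```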
Oops, late addition of counter-evidence: "US Government to let citizens log in with OpenID and InfoCard?" You be the judge!
Court cases often give us glimpses of security issues. A court in Britain has just convicted three from the liquid explosives gang, and now that it is over, there are press reports of the evidence. It looks now like the intelligence services achieved one of two possible victories by stopping the plot. Wired reports that NSA intercepts of emails have been entered in as evidence.
According to Channel 4, the NSA had previously shown the e-mails to their British counterparts, but refused to let prosecutors use the evidence in the first trial, because the agency didn’t want to tip off an alleged accomplice in Pakistan named Rashid Rauf that his e-mail was being monitored. U.S. intelligence agents said Rauf was al Qaeda’s director of European operations at the time and that the bomb plot was being directed by Rauf and others in Pakistan. The NSA later changed its mind and allowed the evidence to be introduced in the second trial, which was crucial to getting the jury conviction. Channel 4 suggests the NSA’s change of mind occurred after Rauf, a Briton born of Pakistani parents, was reportedly killed last year by a U.S. drone missile that struck a house where he was staying in northern Pakistan.
Although British prosecutors were eager to use the e-mails in their second trial against the three plotters, British courts prohibit the use of evidence obtained through interception. So last January, a U.S. court issued warrants directly to Yahoo to hand over the same correspondence.
So there are some barriers between intercept and use in trial. The reason they came from the NSA is probably that old trick of avoiding prohibitions on domestic surveillance: if the trial had been in the USA, GCHQ might have provided the intercepts.
What was more interesting, however, is the content of the alleged messages. This BBC article includes 7 of them; here's one:
4 July 2006, Abdulla Ahmed Ali to Pakistan: "Listen dude, when is your mate gonna bring the projectors and the taxis to me? I got all my bits and bobs. Tell your mate to make sure the projectors and taxis are fully ready and proper I don't want my presentation messing up."
What prosecutors said it meant: projectors and taxis were code for knowledge and equipment, because Ahmed Ali still needed some guidance. The word "presentation" could mean attack.
The others also have interesting use of code words, such as Calvin Klein aftershave for hydrogen peroxide (hair bleach). The use of such codes (as opposed to ciphers) is not new; historically they were well known. Code words tend not to be used now because ciphers cover more of the problem space, and once you know something of the activity, the listener can guess at the meanings.
In theory, at least; and code words clearly didn't work to protect the liquid bombers. Worse for them, it probably made their conviction easier, because Muslims discussing the purchase of 4 litres of aftershave with other Muslims in Pakistan seems very odd.
One remaining question was whether the plot would actually work. We all know that the airlines banned liquids because of this event. Many amateurs have opined that it is simply too hard to make liquid explosives. However, the BBC employed an expert to try it, and using what amounts to between half a litre and a litre of finished product, they got this result:
Certainly a dramatic explosion, enough to kill people within a few metres, and enough to blow a 2m hole in the fuselage. (The BBC video is only a minute long, well worth watching.)
Would this have brought down the aircraft? Not necessarily, as there are many examples of aircraft with such damage that have survived. Perhaps if the bomb was in a strategic spot (over the wing? or near the fuel lines?), or if the aircraft was stuck over the Atlantic with no easy diversion. Either way, a bad day to fly, and as the explosives guy said, pity the passengers who didn't have their seat belts on.
Score one for the intel agencies. But the terrorists still achieved their second victory out of two: passengers are still terrorised in their millions when they forget to dispose of their innocent drinking water. What is somewhat of a surprise is that the terrorists have not as yet seized on the disruptive path that is clearly available, a la John Robb. I read somewhere that it only takes a 7% "security tax" on a city to destroy it over time, and we already know that the airport security tax has to be in that ballpark.
This rhymes with: "what's your business model?" The bit lacking from most orientations is the enabler: why are we here in the first place? It's not to show the most elegant protocol for achieving C-I-A (confidentiality, integrity, authenticity), but to promote the business.
How do we do that? Well, most technologists don't understand the business, let alone speak its language. And the business folks can't speak the techno-crypto blah blah either, so the blame is fairly shared. Dr. Garigue points us to Charlemagne as a better model:
King of the Franks and Holy Roman Emperor; conqueror of the Lombards and Saxons (742-814) - reunited much of Europe after the Dark Ages. He set up other schools, opening them to peasant boys as well as nobles. Charlemagne never stopped studying. He brought an English monk, Alcuin, and other scholars to his court - encouraging the development of a standard script.
He set up money standards to encourage commerce, tried to build a Rhine-Danube canal, and urged better farming methods. He especially worked to spread education and Christianity in every class of people.
He relied on Counts, Margraves and Missi Domini to help him.
Margraves - Guard the frontier districts of the empire. Margraves retained, within their own jurisdictions, the authority of dukes in the feudal arm of the empire.
Missi Domini - Messengers of the King.
In other words, the role of the security person is to enable others to learn, not to do, nor to critique, nor to design. In more specific terms, the goal is to bring the team to a better standard, and a better mix of security and business. Garigue's mandate for IT security?
Knowledge of risky things is of strategic value. How to know today tomorrow’s unknown?
How to structure information security processes in an organization so as to identify and address the NEXT categories of risks?
Curious, isn't it! But if we think about how reactive most security thinking is these days, one has to wonder where we would ever get the chance to fight tomorrow's war, today?
In the previous posts on Audits (1, 2, 3) I established that you yourself cannot determine from the outside whether an audit is any good. So how do we deal with this problem?
We can take a statistical approach to the investigation. We can probably agree that some audits are not strong (the financial crisis thesis), and some are definitely part of the problem (Enron, Madoff, Satyam, Stanford) not the solution. This rules out all audits being good.
The easy question: are all audits in the bad category, and we just don't know it, or are some good and some bad? We can rule out all audits being bad, because Refco was caught by a good audit, eventually.
So we are somewhere in-between the extremes. Some good, some bad. The question then further develops into whether the ones that are good are sufficiently valuable to overcome the ones that are bad. That is, one totally fraudulent result can be absorbed in a million good results. Or, if something is audited, even badly or with a percentage chance of bad results, some things should be improved, right?
Statistically, we should still get a greater benefit.
The problem with this view is that we, the outside world, can't tell which is which, yet the point of the audit is to tell us exactly that: which is which. Because of the intent of the audit -- entering into the secrets of the corporation and delivering a judgment over those secrets -- there are no tools for us to distinguish. This is almost deliberate, almost by definition! The point of the audit is for us to distinguish the secretively good from the secretively bad; if we also have to distinguish amongst the audits, we have a problem.
Which is to say, auditing is highly susceptible to the rotten apples problem: a few rotten apples in a barrel quickly makes the whole barrel worthless.
How many is a few? One failed audit is not enough. But 10 might be, or 100, or 1% or 10%, it all depends. So we need to know some sort of threshold, past which, the barrel is worthless. Once we determine that some percentage of audits above the threshold are bad, all of them are dead, because confidence in the system fails and all audits become ignored by those that might have business in relying on them.
The empirical question of what that percentage would be is obviously a subject of some serious research, but I believe we can skip it by this simple check. Compare the threshold to our by now painfully famous financial crisis test. So far, in the financial crisis, all the audits failed to pick up the problem (and please, by all means post in comments any exceptions! Anonymously is fine!).
Whatever the watermark for general failure is, if the financial crisis is any guide, we've probably reached it. We are, I would claim, in the presence of material evidence that the Audit has passed the threshold for public reliance. The barrel is rotten.
But, how did we reach this terrible state of affairs? How could this happen? Let's leave that speculation for another post.
(Afterword: Since the last post on Audit, I resigned my role as Auditor over at CAcert. This moves me from slightly inside the profession to mostly outside. Does this change these views written here? So far, no, but you can be the judge.)
Best practices have always seemed to me a flaky idea, and it took me a long time to unravel why, at least in my view. It is that, if you adopt best practices, you are accepting, and proving, that you yourself are not competent in this area. In effect, you have no better strategy than to adopt whatever other people say.
The "competences" theory would have it that you adopt best practices in security if you are an online gardening shop, because your competences lie in the field of delivering gardening tools, plants and green thumbs advice. Not in security, and gosh, if someone steals a thousand plants then perhaps we should also throw in the shovel and some carbon credits to ease them into a productive life...
On the other hand, if you are dealing with, say, money, best practices in security is not good enough. You have entered a security field, through no fault of your own, but because crooks really do always want to steal it. So your ability in defending against that must be elevated, above and beyond the level of "best practices," above and beyond the ordinary.
In the language of core competences, you must develop a competence in security. Now, Adam comes along and offers an alternate perspective:
Best practices are ideas which make intuitive sense: don't write down your passwords. Make backups. Educate your users. Shoot the guy in the kneecap and he'll tell you what you need to know.
I guess it is true that best practices do make some form of intuitive sense, as otherwise they would be too hard to propagate. More importantly:
The trouble is that none of these are subjected to testing. No one bothers to design experiments to see if users who write down their passwords get broken into more than those who don't. No one tests to see if user education works. (I did, once, and stopped advocating user education. Unfortunately, the tests were done under NDA.) The other trouble is that once people get the idea that some idea is a best practice, they stop thinking about it critically. It might be because of the authority instinct that Milgram showed, or because they've invested effort and prestige in their solution, or because they believe the idea should work.
What Adam suggests is that best practices survive far longer than is useful, because they have no feedback loop. Best practices are not tested, so they are a belief, not a practice. Once a belief takes hold, we are into a downward spiral (as described in the Silver Bullets paper, which itself simply applies the literature on asymmetric goods to security). At its core, the spiral is due to the lack of a confirming test in the system, something to nudge the belief to keep pace with the times; if nothing nudges the idea towards relevancy, it meanders by itself away from relevancy and eventually into wrongness.
But it is still a belief, so we still do it and smile wisely when others repeat it. For example, best practices has it that you don't write your passwords down. But, in the security field, we all agree now that this is wrong. "Best" is now bad, you are strongly encouraged to write your passwords down. Why do we call the bad idea, "best practices" ? Because there is nothing in the system of best practices that changes it to cope with the way we work today.
The next time someone suggests something because it's a best practice, ask yourself: is this going to work? Will it be worth the cost?
I would say -- using my reading of asymmetric goods and with a nod to the systems theory of feedback loops, as espoused by Boyd -- that the next time someone suggests that you use it because it is a best practice, you should ask yourself:
Do I need to be competent in this field?
If you sell seeds and shovels, don't be competent in online security. Outsource that, and instead think about soil acidity, worms, viruses and other natural phenomena. If you are in online banking, be competent in security. Don't outsource that, and don't lower yourself to the level of best practices.
Understand the practices, and test them. Modify them and be ready to junk them. Don't rest on belief, and dismiss others' attempts to have you conform to a belief they themselves hold but cannot explain.
(Then, because you are competent in the field, your very next question is easy. What exactly was the genesis of the "don't write passwords down" belief? Back in the dim dark mainframe days, we had one account and the threat was someone reading the post-it note on the side of the monitor. Now, we each have hundreds of accounts and passwords, and the desire to avoid dictionary attacks forces each password to be unmemorable. For those with the competence, again to use the language of core competences, the rest follows. "Write your passwords down, dear user.")
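A minimal sketch of where that competence lands today, in Python; the file path and function names are illustrative only, standing in for a proper password manager:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_password(length: int = 20) -> str:
    # Random enough to resist dictionary attacks, and therefore unmemorable.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def write_it_down(account: str, password: str, path: str = "passwords.txt") -> None:
    # The post-it note, updated for hundreds of accounts: keep it somewhere
    # the browser trojan and the shoulder-surfer are not.
    with open(path, "a") as f:
        f.write(f"{account}\t{password}\n")

write_it_down("example-bank", new_password())
```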
[Lynn writes somewhere else, copied without shame:]
A repeated theme in the Madoff hearing (by the person trying for a decade to get SEC to do something about Madoff) was that while new legislation and regulation was required, it was much more important to have transparency and visibility; crooks are inventive and will always be ahead of regulation.
however ... from The Quiet Coup:
But there's a deeper and more disturbing similarity: elite business interests -- financiers, in the case of the U.S. -- played a central role in creating the crisis, making ever-larger gambles, with the implicit backing of the government, until the inevitable collapse. More alarming, they are now using their influence to prevent precisely the sorts of reforms that are needed, and fast, to pull the economy out of its nosedive. The government seems helpless, or unwilling, to act against them.
From The DNA of Corruption:
While the scale of venality of Wall Street dwarfs that of the Pentagon's, I submit that many of the central qualities shaping America's Defense Meltdown (an important new book with this title, also written by insiders, can be found here) can be found in Simon Johnson's exegesis of America's even more profound Financial Meltdown.
... and related to above, Mark-to-Market Lobby Buoys Bank Profits 20% as FASB May Say Yes:
Officials at Norwalk, Connecticut-based FASB were under "tremendous pressure" and "more or less eviscerated mark-to-market accounting," said Robert Willens, a former managing director at Lehman Brothers Holdings Inc. who runs his own tax and accounting advisory firm in New York. "I'd say there was a pretty close cause and effect."
From Now-needy FDIC collected little in premiums:
The federal agency that insures bank deposits, which is asking for emergency powers to borrow up to $500 billion to take over failed banks, is facing a potential major shortfall in part because it collected no insurance premiums from most banks from 1996 to 2006.
with respect to taxes, there was a roundtable of "leading expert" economists last summer about the current economic mess. their solution was a "flat rate" tax. the justification was:
their bottom line was that it probably would only be temporary before the special interests reestablish the current pervasive atmosphere of graft & corruption.
a semi-humorous comment was that a special interest that has lobbied against such a change has been Ireland ... supposedly because some number of US operations have been motivated to move to Ireland because of their much simpler business environment.
with respect to feedback processes ... I (Lynn) had done a lot with dynamic adaptive (feedback) control algorithms as an undergraduate in the 60s ... which were used in some products shipped in the 70s & 80s. In the early 80s, I had a chance to meet John Boyd and sponsor his briefings. I found quite a bit of affinity with John's OODA-loop concept (observe, orient, decide, act), which is now starting to be taught in some MBA programs.
No, not this stupidity: "The Breach of All Breaches?" but this one, spotted by JP (and also see Fraud, Phishing and Financial Misdeeds, scary, flashmob, and fbi wanted poster seen to right):
Reported by John Deutzman: Photos from security video obtained by Fox 5 show a small piece of a huge scam that took place all in one day, in a matter of hours. According to the FBI, ATMs from 49 cities were hit -- including Atlanta, Chicago, New York, Montreal, Moscow and Hong Kong. "We've seen similar attempts to defraud a bank through ATM machines but not, not anywhere near the scale we have here," FBI Agent Ross Rice told Fox 5.
...."Over 130 different ATM machines in 49 cities worldwide were accessed in a 30-minute period on November 8," Agents Rice said. "So you can get an idea of the number of people involved in this and the scope of the operation."
Here is the amazing part: With these cashers ready to do their dirty work around the world, the hacker somehow had the ability to lift those limits we all have on our ATM cards. For example, I'm only allowed to take out $500 a day, but the cashers were able to cash once, twice, three times over and over again. When it was all over, they only used 100 cards but they ripped off $9 million.
This lifts the level of capability of the attacker several notches up. This is a huge coordinated effort. Are we awake now to the problems that we created for ourselves a decade ago?
(Apologies, no time to do the real research and commentary today! Thanks, JP!)
Payments fraud seems up in Britain:
Matters found that around 26% fell victim to card fraudsters in 2008, up five per cent on the previous year. Kerry D'Souza, card fraud expert, CPP, says: "The dramatic increase in card fraud shows no sign of abating, which isn't surprising given the desperate measures some people will resort to during the recession."
The average sum fraudulently transacted is over £650, with one in 20 victims reporting losses of over £2000. Yet 42% of victims did not know about these transactions and only found out they had been defrauded when alerted by their bank.
Online fraud affected 39% of victims, while card cloning from a cash point or chip and pin device accounted for a fifth of cases. Out of all cards that are physically lost and stolen, one in ten are also being used fraudulently.
One in 4 sounds quite high. That's a lot higher than one would expect. So either fraud has been running high and only now are better figures available, or it is growing? They say it is growing.
While researching origins of failure I came across this interesting snippet the other day from Richard Veryard:
The economist J.K. Galbraith used the term "bezzle" to denote the amount of money siphoned (or "embezzled") from the system. In good times, he remarked, the bezzle rises sharply, because everyone feels good and nobody notices. "In [economic] depression, all this is reversed. Money is watched with a narrow, suspicious eye. The man who handles it is assumed to be dishonest until he proves himself otherwise. Audits are penetrating and meticulous. Commercial morality is enormously improved. The bezzle shrinks." [Galbraith, The Great Crash 1929]
If this is true, then likely people will be waking up and demanding more from the payments infrastructure. No more easy money for them. Signs of this were spotted by Lynn:
"Up to this point, there has been no information sharing, thus empowering cyber criminals to use the same or slightly modified techniques over and over again. I believe that had we known the details about previous intrusions, we might have found and prevented the problem we learned of last week."Heartland's goal is to turn this event into something positive for the public, the financial institutions which issue credit/debit cards and payments processors.
Carr concluded, "Just as the Tylenol(R) crisis engendered a whole new packaging standard, our aspiration is to use this recent breach incident to help the payments industry find ways to protect its data - and therefore businesses and consumers - much more effectively."
For the past year, Carr has been a strong advocate for industry adoption of end-to-end encryption - which protects data at rest as well as data in motion - as an improved and safer standard of payments security. While he believes this technology does not wholly exist on any payments platform today, Heartland has been working to develop this solution and is more committed than ever to deploying it as quickly as possible.
Now, if you've read Lynn's rants on naked transactions, you will know exactly what this person is asking for. And you might even have a fair stab at why the payment providers denied Heartland that protection.
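As a rough illustration of what Carr is asking for (a sketch under my own assumptions, using the third-party cryptography package; real schemes add HSMs, per-transaction key derivation and key rotation), the card number is sealed at the point of capture and only the processor can open it:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in this toy, known only to terminal and processor
aead = AESGCM(key)

def capture(pan: str, merchant_id: str):
    # Runs inside the payment terminal: encrypt immediately, bind to the merchant.
    nonce = os.urandom(12)
    return nonce, aead.encrypt(nonce, pan.encode(), merchant_id.encode())

def settle(nonce: bytes, blob: bytes, merchant_id: str) -> str:
    # Runs only at the processor: every hop in between carried ciphertext,
    # so a Heartland-style network breach yields nothing directly usable.
    return aead.decrypt(nonce, blob, merchant_id.encode()).decode()

nonce, blob = capture("4111111111111111", "merchant-42")
print(settle(nonce, blob, "merchant-42"))
```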
SecretSquirrel writes:
it's a "get rich quick" guide for sale ... but actually for the virtual money inside the WoW game
Around a year or two ago I penned a series of rants called "GP", which predicted that the primary success signal of a new money was ... crime! The short summary is that in the battle for mindspace between issuers, users, critics & regulators, the press (who?), the offended and the otherwise religious ... there is no way for the external observer to figure out whether this is worthwhile or not.
But wait, there is one way: if a criminal is willing to put his time, his investment, indeed his very freedom on the line for something, it's got to be worth something! GP is undeniably crossed, I theorise, when criminals steal the value, and therefore provide a most valuable signal to the world that this stuff is worth something.
(it's not a parody!) It follows exactly, to the line, the format of any of the famous get-rich-quick newsletters.
(eg, http://www.landingpagecashmachine.com or hundreds of others) ... even the famous "three-line centered upper-lower case headline"
Call me cynical, but I have seen hundreds of digital cash systems live and die without meriting a second thought. There have been thousands I haven't seen! In my decade++ of time in this field, I've only seen one external signal that is reliable. Even this:
You know they say WoW is over $150 million per month in player fees now!
Is ... well, ya know, could be a fake. Did we see that Satyam, a huge audited IT outsourcing firm in India, added some 13,000 jobs ... and nobody noticed?
If I am right, I'll also be blamed for the upsurge in fake crimes :)
Hasan points to this:
Remember just over one year ago? RBS (Royal Bank of Scotland) paid $100bn for ABN Amro. For this amount it could now buy:
- Citibank $22.5bn
- Morgan Stanley $10.5bn
- Goldman Sachs $21bn
- Merrill Lynch $12.3bn
- Deutsche Bank $13bn
- Barclays $12.7bn
And still have $8bn in change with which you would be able to pick up:
GM, Ford, Chrysler and the Honda Formula 1 Racing-Team.
Those of us who are impacted by the world of security suffer under a sort of love-hate relationship with the word; so much of it is how we build applications, but so much of what is labelled security out there in the rest of the world is utter garbage.
So we tend to spend a lot of our time reverse-engineering popular security thought and finding the security bugs in it. I think I've found another one. Consider this very concise and clear description from Frank Stajano, who has published a draft book section seeking comments:
The viewpoint we shall adopt here, which I believe is the only one leading to robust system security engineering, is that security is essentially risk management. In the context of an adversarial situation, and from the viewpoint of the defender, we identify assets (things you want to protect, e.g. the collection of magazines under your bed), threats (bad things that might happen, e.g. someone stealing those magazines), vulnerabilities (weaknesses that might facilitate the occurrence of a threat, e.g. the fact that you rarely close the bedroom window when you go out), attacks (ways a threat can be made to happen, e.g. coming in through the open window and stealing the magazines—as well as, for good measure, that nice new four-wheel suitcase of yours to carry them away with) and risks (the expected loss caused by each attack, corresponding to the value of the asset involved times the probability that the attack will occur). Then we identify suitable safeguards (a priori defences, e.g. welding steel bars across the window to prevent break-ins) and countermeasures (a posteriori defences, e.g. welding steel bars to the window after a break-in has actually occurred, or calling the police). Finally, we implement the defences that are still worth implementing after evaluating their effectiveness and comparing their (certain) cost with the (uncertain) risk they mitigate.
(My emphases.) That's a good description of how the classical security world sees it. We start by saying, "What's your threat model?" Then out of that we build a security model to deal with those threats. The security model then incorporates some knowledge of risks to manage the tradeoffs.
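To make the arithmetic in that description explicit, here is a back-of-envelope sketch in Python, with my own numbers echoing the magazine example:

```python
def expected_loss(asset_value: float, attack_probability: float) -> float:
    # risk = value of the asset times the probability that the attack occurs
    return asset_value * attack_probability

def worth_implementing(defence_cost: float, risk_before: float, risk_after: float) -> bool:
    # a defence earns its keep only if its (certain) cost is below the
    # (uncertain) risk it actually removes
    return defence_cost < (risk_before - risk_after)

# Magazines worth 500, 10% chance of theft -> expected loss of 50.
# Steel bars cost 200 and cut the chance to 1% -> they remove only 45 of risk.
risk_open_window = expected_loss(500, 0.10)
risk_with_bars = expected_loss(500, 0.01)
print(worth_implementing(200, risk_open_window, risk_with_bars))  # False: not worth it
```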
The bit that's missing is the business. Instead of asking "What's your threat model?" as the first question, it should be "What's your business model?" Security asks that last, and only partly, by asking questions like "what are the risks?"
Calling security "risk management" then is a sort of nod to the point that security has a purpose within business; and by focussing on some risks, this allows the security modellists to preserve their existing model while tying it to the business. But it is still backwards; it is still seeking to add risks at the end, and will still result in "security" being just the annoying monkey on the back.
Instead, the first question should be "What's your business model?"
This unfortunately opens Pandora's box, because that implies that we can understand a business model. Assuming it is the case that your CISO understands a business model, it does rather imply that the only security we should be pushing is that which is from within. From inside the business, that is. The job of the security people is not therefore to teach and build security models, but to improve the abilities of the business people to incorporate good security as they are doing their business.
Which perhaps brings us full circle to the popular claim that the best security is that which is built in from the beginning.
The next question on unwinding secrecy is how to actually do it. It isn't as trivial as it sounds. Perhaps this is because the concept of "need-to-know" is so well embedded in the systems and managerial DNA that it takes a long time to root it out.
At LISA I was asked how to do this; but I don't have much of an answer. Here's what I have observed:
"I can't see why document X is secret, it seems wrong. Therefore, in 1 month, I intend to publish it. If there is any real reason, let me know before then."This protocol avoids the endless discussions as to why and whether.
Well, that's what I have thought about so far. I am sure there is more.
I have a curious question: why hasn't eBay's share price plummeted? Also, to a lesser extent, those of amazon and google who also depend on a robust retail sales trade.
My theory is this: when cash dries up, or in USA terms, credit, then people stop buying, and start to save. Last week, apparently, the credit spigot was not only tightened, it was turned off and padlocked! Rumours circulated of consumer credit disappearing before ones very eyes.
So, surely this should affect the consumer markets, and especially those more likely to be in rarities rather than rice. Hence, eBay.
Anecdotally, I've been watching a particular class of out-of-date, dead-tech items over the last 2 months. 2 months ago, they were going for around $200-300. Last week I saw offers hovering around $100, but, surprisingly, one went for $300, another for $250.
This weekend, I'm watching two offers with zero bids, days into their auctions, and heading towards giveaway prices.
This is great news for me, if I can pick them up for shipping costs. But what's the news for the US economy and the retailers? Hence the question: is eBay likely to weather a collapse in auctions and retail sales? Are amazon and google to follow?
Graeme points to this entry that posits that security people need a legal background:
My own experience and talking to colleagues has prompted me to wonder whether the day has arrived that security professionals will need a legal background. The information security management professional is under increasing pressure to cope with the demands of the organization for access to information, to manage the expectations of the data owner on how and where the information is going to be processed and to adhere to regulatory and legal requirements for the data protection and archiving. In 2008, a number of rogue trader and tax evasion cases in the financial sector have heightened this pressure to manage data.
The short, sharp answer is no, but it is a little more nuanced than that. First, let's take the rogue trader issue, being someone who has breached the separation of roles within a trading company, and used it to bad effect. To spot and understand this requires two things: an understanding of how settlement works, and the principle of dual control. It does not require the law, at all. Indeed, the legal position of someone who has breached the separation, and has "followed instructions to make a lot of money" is a very difficult subject. Suffice to say, studying the law here will not help.
Secondly, asking security people to study law so as to deal with tax evasion is equally fruitless but for different reasons: it is simply too hard to understand, it is less law than an everlasting pitched battle between the opposing camps.
Another way of looking at this is to look at the FC7 thesis, which says that, in order to be an architect in financial cryptography, you need to be comfortable with cryptography, software engineering, rights, accounting, governance, value and finance. The point is not whether law is in there or not, but that there are an awful lot of important things that architects or security directors need before they need law.
Still, an understanding of the law is no bad thing. I've found several circumstances where it has been very useful to me and people I know:
In this case, the law knowledge helps a lot. Another area which is becoming more and more of an issue is that of electronic evidence. As most evidence is now entering the digital domain (80% was a recent unreferenced claim) there is much to understand here, and much that one can do to save one's company. The problem with this, as lamented at the recent conference, is that any formal course of law includes nothing on electronic evidence. For that, you have to turn to books like those by Stephen Mason on Electronic Evidence. But that you can do yourself.
cwe points to this new way to improve your passport profile:
Using his own software, a publicly available programming code, a £40 card reader and two £10 RFID chips, Mr van Beek took less than an hour to clone and manipulate two passport chips to a level at which they were ready to be planted inside fake or stolen paper passports.A baby boy’s passport chip was altered to contain an image of Osama bin Laden, and the passport of a 36-year-old woman was changed to feature a picture of Hiba Darghmeh, a Palestinian suicide bomber who killed three people in 2003. The unlikely identities were chosen so that there could be no suggestion that either Mr van Beek or The Times was faking viable travel documents.
OK, so costs are what we track here at FC-central: we need 60 quid of parts, and let's call it 40 quid for the work. Add to that a fake or stolen passport, which seems to run to around 100, depending. Call it 200, all-up, for the basic package. The fake may possibly be preferred because you can make it with the right photo inside the jacket, without having to do the professional dicey-slicey work. Now that the border people are convinced that the RFID chip is perfectly secure, they won't be looking for that definitively British feel.
Folks, if you are going to try this at home, use your own passport, because using fake passports is a bit naughty! There are all sorts of reasons to improve one's image, and cosmetics is a booming industry these days. Let's say we change the awful compulsory Taliban image to a studio photo by a professional photographer. Easy relaxed pose, nice smile, and with your favourite Italian holiday scenes in the background. Add some photoshop work to smooth out the excess lines, lighten up those hungover dark eyes, and shrink those tubby parts off. We'll be a hit with the senior citizens.
We can also improve your hard details: for the 40-somethings, we'll take 10 years off your age, and for the teenager, we'll boost you up to 18 or 21. For the junior industry leader, we can add a title or two, and some grey at the side. Would you prefer Sir or Lord?
Your premium vanity upgrade, with all the trimmings, is likely to set you back around 500, and less if you bring your own base. Think of the savings on gym fees, and all the burgers you can eat!
One small wrinkle: there is a hint in the article that the British Government is offering these special personality units only until next year. Rush now...
Electronic signatures are now present in legal cases to the extent that, while they remain novel, they are not without precedent. Just about every major legal code has formed a view in law on their use, and many industries have at least tried to incorporate them into vertical applications. It is then exceedingly necessary that there be an authoritative tome on the legal issues surrounding the topic.
Electronic Signatures in Law is such a book, and I'm now the proud owner of a copy of the recent 2007 second edition, autographed no less by the author, Stephen Mason. Consider this a review, although I'm unaccustomed to such. Like the book, this review is long: intro, stats, a description of the sections, my view of the old digsig dream, and finally 4 challenges I threw at the book to measure its paces. (Shorter reviews here.)
First the headlines: This is a book that is decidedly worth it if you are seriously in the narrow market indicated by the title. For those who are writing directives or legislation, architecting software of reliance, involved in the Certificate Authority business of some form, or likely to find themselves in a case or two, this could well be the essential book.
At £130 or so, I'd have to say that the Financial Cryptographer who is not working directly in the area will possibly find the book too much for mild Sunday afternoon reading, but if you have it, it does not dive so deeply and so legally that it is impenetrable to those of us without an LLB up our sleeves. For us on the technical side, there is welcome news: although the book does not cover all of the failings and bugs that exist in the use of PKI-style digital signatures, it covers the major issues. Perhaps more importantly, those bugs identified are more or less correctly handled, and the criticism is well-grounded in legal thinking that itself rests on centuries of tradition.
Raw stats: Published by Tottel Publishing. ISBN 978-1-84592-425-6. At over 700 pages, it includes comprehensive indexes of statutory instruments, legislation and cases that run to 55 pages, by my count, and a further 10 pages on United Kingdom orders. As well, there are 54 pages on standards, correspondents, resources and glossary, followed by a 22-page index.
Description. Mason starts out with serious treatments on issues such as "what is a signature?" and "what forms a good signature?" These two hefty chapters (119 pages) are beyond comprehensive but not beyond comprehension. Although I knew that the signature was a (mere) mark of intent, and it is the intent which is the key, I was not aware of how far this simple subject could go. Mason cites case law where "Mum" can prove a will, where one person signs for another, where a usage of any name is still a good signature, and, of course, where apparent signatures are rejected due to irregularities, and others accepted regardless of irregularities.
Next, there is a fairly comprehensive (156 pages) review of country and region legal foundations, covering the major anglo countries, the European Union, and Germany in depth, with a chapter on International comparisons covering approaches, presumptions, liabilities and other complexities and a handful of other countries. Then, Mason covers electronic signatures comprehensively and then seeks to compare them to Parties and risks, liability, non-contractual issues, and evidence (230 pages). Finally, he wraps up with a discussion of digital signatures (42 pages) and data protection (12 pages).
Let me briefly summarise the financial cryptography view of the history of Digital Signatures: The concept of the digital signature had been around since the mid-1970s, firstly in the form of the writings by the public key infrastructure crowd, and secondly, popularised to a small geeky audience in the form of PGP in the early 1990s. However, deployment suffered as nobody could quite figure out the application. When the web hit in 1994, it created a wave that digital signatures were able to ride. To pour cold water on a grand fire-side story, RSA Laboratories managed to convince Netscape that (a) credit cards needed to be saved from the evil Mallory, (b) the RSA algorithm was critical to that need, and (c) certificates were the way to manage the keys required for RSA. Verisign was a business created by (friends of) RSA for that express purpose, and Netscape was happily impressed with the need to let other friends in. For a while everything was mom's apple pie, and we'll all be rich: alongside VeriSign and friends, business plans claiming that all citizens would need certificates for signing purposes were floated around Wall Street, and this would set Americans back $100 a pop.
Neither the fabulous b-plans nor the digital signing dream happened, but to the eternal surprise of the technologists, some legislatures put money down on the cryptographers' dream to render evidence and signing matters "simpler, please." The State of Utah led the way, but the politicians' dream is now more clearly seen in the European Directive on Electronic Signatures, and especially in the Germanic attitude that digital signatures are as strong by policy as they are weak in implementation terms. Today, digital signatures are relegated to either tight vertical applications (e.g., Ricardian contracts), cryptographic protocol work (TLS-style key exchanges), or being unworkable misfits lumbered with the cross of law and the shackles of PKI. These latter embarrassments only survive in those areas where (a) governments have rolled out smart cards for identity on a national basis, and/or (b) governments have used industrial policy to get some of that certificate love to their dependencies.
In contrast to the above dream of digital signatures, attention really should be directed to the mere electronic signature, because they are much more in use than the cryptographic public key form, and arguably much more useful. Mason does that well, by showing how different forms are all acceptable (Chapter 10, or summarised here): Click-wrap, typing a name, PINs, email addresses, scanned manuscript signatures, and biometric forms are all contrasted against actual cases.
The digital signature, and especially the legal projects of many nations get criticised heavily. According to the cases cited, the European project of qualified certificates, with all its CAs, smart cards, infrastructure, liabilities, laws, and costs ad infinitum ... are just not needed. A PC, a word processor program and a scan of a hand signature should be fine for your ultimate document. Or, a typewritten name, or the words "signed!" Nowhere does this come out more clearly than the Chapter on Germany, where results deviate from the rest of the world.
Due to the German Government's continuing love affair with the digital signature, and the backfired-attempt by the EU to regularise the concept in the Electronic Signature Directives, digital and electronic signatures are guaranteed to provide for much confusion in the future. Germany especially mandated its courts to pursue the dream, with the result that most of the German case results deal with rejecting electronic submissions to courts if not attached with a qualified signature (6 of 8 cases listed in Chapter 7). The end result would be simple if Europeans could be trusted to use fax or paper, but consider this final case:
(h) Decision of the BGH (Federal Supreme Court, 'Bundesgerichtshof') dated 10 October 2006,...: A scanned manuscript signature is not sufficient to be qualified as 'in writing' under §130 VI ZPO if such a signature is printed on a document which is then sent by facsimile transmission. Referring to a prior decision, the court pointed out that it would have been sufficient if the scanned signature was implemented into a computer fax, or if a document was manually signed before being sent by facsimile transmission to court.
How deliciously Kafkaesque! and how much of a waste of time is being imposed on the poor, untrustworthy German lawyer. Mason's book takes on the task of documenting this confusion, and pointing some of the way forward. It is extraordinarily refreshing to find that the first two chapters, over 100 pages, are devoted simply to describing signatures in law. It has been a frequent complaint that without an understanding of what a signature is, it is rather unlikely that any mathematical invention such as digsigs would come even close to mimicking it. And it didn't, as is seen in the 118-page romp through the act of signing:
What has been lost in the rush to enact legislation is the fact that the function of the signature is generally determined by the nature and content of the document to which it is affixed.
Which security people should have recognised as a red flag: we would generally not expect to use the same mechanism to protect things of wildly different values.
Finally, I found myself pondering these teasers:
Authenticate. I found myself wondering what the word "authenticate" really means, and from Mason's book, I was able to divine an answer: to make an act authentic. What then does "authentic" mean, and what then is an "act"? Well, they are both defined as things in law: an "act" is something that has legal significance, and it is authentic if it is required by law and is done in the proper fashion. Which, I claim, is curiously different to whatever definition the technologists and security specialists use. OK, as a caveat, I am not the lawyer, so let's wait and see if I get the above right.
Burden of Liability. The second challenge was whether the burden of liability in signing has really shifted. As we may recall, one of the selling points of digital signatures was that once properly formed, they would enable a relying party to hold the signing party to account, something which was sometimes loosely but unreliably referred to as non-repudiation.
In legal terms, this would have shifted the burden of proof and liability from the recipient to the signer, and was thought by the technologists to be a useful thing for business. Hence, a selling point, especially to big companies and banks! Unfortunately the technologists didn't understand that burden and liability are topics of law, not technology, and for all sorts of reasons it was a bad idea. See that rant elsewhere. Still, undaunted, laws and contracts were written on the advice of technologists to shift the liability. As Mason puts it (M9.27 pp270):
For obvious reasons, the liability of the recipient is shaped by the warp and weft of political and commercial obstructionism. Often, a recipient has no precise rights or obligations, but attempts are made using obscure methods to impose quasi-contractual duties that are virtually impossible to comply with. Neither governments nor commercial certification authorities wish to make explicit what they seek to achieve implicitly: that is, to cause the recipient to become a verifying party, with all the responsibilities that such a role implies....
So how successful was the attempt to shift the liability / burden in law? Mason surveys this question in several ways: presumptions, duties, and liabilities directly. For a presumption that the sender was the named party in the signature, 6 countries said yes (Israel, Japan, Argentina, Dubai, Korea, Singapore) and one said no (Australia) (M9.18 pp265). Britain used statutory instruments to give a presumption to herself, the Crown only, that the citizen was the sender (M9.27 pp270). Others were silent, which I judge an effective absence of a presumption, and a majority for no presumption.
Another important selling point was whether the CA took on any especial presumption of correctness: the best efforts seen here were that CAs were generally protected from any liability unless shown to have acted improperly, which somewhat undermines the entire concept of a trusted third party.
How then are a signer and recipient to share the liability? Australia states quite clearly that the signing party is only considered to have signed if she signed. That is, she can simply state that she did not sign, and the burden falls on the relying party to show she did. This is simply the restatement of the principle in the English common law; and in effect states that digital signatures may be used, but they are not any more effective than others. Then, the liability is exactly as before: it is up to the relying party to check beforehand, to the extent reasonable. Other countries say that reliance is reasonable if the relying party checks. But this is practically a null statement, as not only is it already the case, it is the common-sense situation of caveat emptor deriving from Roman times.
Although murky, I would conclude that the liability and burden for reliance on a signature is not shifted in the electronic domain, or at least governments seem to have held back from legislating any shift. In general, it remains firmly with the recipient of the signature. The best it gets in shiftyville is the British Government's bounty, which awards its citizens the special privilege of paying for their Government's blind blundering; same as it ever was. What most governments have done is a lot of hand-waving, while permitting CAs to utilise contract arrangements to put the parties in the position of doing the necessary due diligence. Again, same as it ever was, and decidedly no benefit or joy for the relying party is seen anywhere. This is no more than the normal private right to a contract or arrangement, and no new law nor regulation was needed for that.
Digital Signing, finally, for real! The final challenge remains a work-in-progress: to construct some way to use digital signatures in a signing protocol. That is, use them to sign documents, or, in other words, what they were sold for in the first place. You might be forgiven for wondering if the hot summer sun has reached my head, but we have to recall that most of the useful software out there does not take OpenPGP, rather it takes PKI and x.509-style cryptographic keys and certificates. Some of these things offer to do things called signing, but there remains a challenge to make these features safe enough to be recommended to users. For example, my Thunderbird now puts a digital signature on my emails, but nobody, not it, not Mozilla, not CAcert, not anyone can tell me what my liability is.
To address this need, I consulted the first two chapters, which lay out what a signature is, and by implication what signing is. Signing is the act of showing intent to give legal effect to a document; signatures are a token of that intention, recorded in the act of signing. In order, then, to use digital certificates in signing, we need to show a user's intent. Unfortunately, certificates cannot do that, as is repeatedly described in the book: mostly because they are applied by the software agent in a way mysterious and impenetrable to the user.
Of course, the answer to my question is not clearly laid out, but the foundations are there: create a private contract and/or arrangement between the parties, indicate clearly the difference between a signed and unsigned document, and add the digital signature around the document for its cryptographic properties (primarily integrity protection and confirmation of source).
The two chapters lay out the story for how to indicate intention in the English common law: it is simple enough to add the name, and the intention to sign, manually. No pen and ink is needed, nor more mathematics than that of ASCII, as long as the intention is clear. Hence, it suffices for me to write something like signed, iang at the bottom of my document. As the English common law will accept the addition of merely one's name as a signature, and the PKI school has hope that digital signatures can be used as legal signatures, it follows that using both is required to be safe and clear in all circumstances. For the champions of either school, the other method seems like a reduction to futility, as neither seems adequate nor meaningful alone, but the combination may ease the transition for those who can't appreciate the other language.
Finally, I should close with a final thought: how does the book affect my notions as described in the Ricardian Contract, still one of the very few strong and clear designs in digital signing? I am happy to say that not much has changed, and if anything Mason's book confirms that the Ricardo designs were solid. Although, if I were upgrading the design, I would add the above logic. That is, as the digital signature remains impenetrable to the court, it behoves us to add the words seen below somewhere in the contract. Hence, no more than a field name-change, the tiniest tweak only, is indicated:
Signed By: Ivan
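As a concrete illustration of that combination (a minimal sketch only, my own and not from the book; the Ed25519 key, the Python cryptography library and the contract text are illustrative choices), the human-readable intent line does the legal work while the digital signature wraps the document for integrity and confirmation of source:

# A minimal sketch: an explicit, human-readable intent line inside the
# document does the legal work of a signature, while a digital signature
# over the whole text supplies integrity and confirmation of source.
# Key, names and text are illustrative, not a recommended implementation.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

contract_text = (
    "I agree to deliver one troy ounce of gold to the bearer.\n"
    "Signed By: Ivan\n"                      # the legal signature: name plus intent
)

signing_key = Ed25519PrivateKey.generate()   # in practice, Ivan's long-held key
signature = signing_key.sign(contract_text.encode("utf-8"))

# The relying party checks the cryptographic wrapper ...
try:
    signing_key.public_key().verify(signature, contract_text.encode("utf-8"))
    print("integrity and source check out")
except InvalidSignature:
    print("document altered or signed with a different key")
# ... but the intention to be bound is carried by the "Signed By:" line itself.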
The Fed roared into action mid July to rescue IndyMac, one of the USA's biggest banks. It's the normal story: toxic loans, payouts by the government, all accompanied by the USG moving to make matters worse. Chart of the week award goes to James Turk of Goldmoney:
One of the basic functions of a central bank is to act as the 'lender of last resort'. This facility is used to keep banks liquid during a period of distress. For example, if a bank is experiencing a run on deposits, it will borrow from the central bank instead of trying to liquidate some of its assets to raise the cash it needs to meet its obligations. In other words, the central bank offers a 'helping hand' by providing liquidity to the bank in need.
The following chart is from the Economic Research Department of the St. Louis Federal Reserve Bank. Here is the link: http://research.stlouisfed.org/fred2/series/BORROW. This long-term chart illustrates the amount of money banks have borrowed from the Federal Reserve from 1910 to the present.
This chart proves there is truth to the adage that a picture is worth a thousand words. It's one thing to say that the present financial crisis is unprecedented, but it is something altogether different to provide a picture putting real meaning to the word 'unprecedented'.
It is an understatement to say that the U.S. banking system is in uncharted territory. The Federal Reserve is providing more than just a 'helping hand'.
Also check the original so you can see the source!
The problem with the "basic function" of the poetically-named 'lender of last resort' is that it is more a theory than a working practice. Such a thing has to be proven in action before we can rely on it. Unlike insurance, the lending of last resort function rarely gets proven, so it languishes until found to be broken in our very hour of need. Sadly, that is happening now in Switzerland. Over at the Economist they also surveyed the Fed's recent attempts to prove their credibility in the same game. FM & FM were bailed out, and gave the dollar holder a salutary lesson. The mortgage backers were supposed to be private:
The belief in the implicit government guarantee allowed the pair to borrow cheaply. This made their model work. They could earn more on the mortgages they bought than they paid to raise money in the markets. Had Fannie and Freddie been hedge funds, this strategy would have been known as a “carry trade”. It also allowed Fannie and Freddie to operate with tiny amounts of capital. The two groups had core capital (as defined by their regulator) of $83.2 billion at the end of 2007 (see chart 2); this supported around $5.2 trillion of debt and guarantees, a gearing ratio of 65 to one. According to CreditSights, a research group, Fannie and Freddie were counterparties in $2.3 trillion-worth of derivative transactions, related to their hedging activities.
There is no way a private bank would be allowed to have such a highly geared balance sheet, nor would it qualify for the highest AAA credit rating. In a speech to Congress in 2004, Alan Greenspan, then the chairman of the Fed, said: “Without the expectation of government support in a crisis, such leverage would not be possible without a significantly higher cost of debt.” The likelihood of “extraordinary support” from the government is cited by Standard & Poor’s (S&P), a rating agency, in explaining its rating of the firms’ debt.
Now, we learn that FM & FM are government-sponsored enterprises, and the US is just another tottering socialist empire. OK, so the Central Bank, Treasury and Congress of the United States of America lied about the status of their subsidised housing economy. Now what? We probably would be wise to treat all other pronouncements with the skepticism due to a fundamentally flawed and now failing central monetary policy.
The illusion investors fell for was the idea that American house prices would not fall across the country. This bolstered the twins’ creditworthiness. Although the two organisations have suffered from regional busts in the past, house prices have not fallen nationally on an annual basis since Fannie was founded in 1938.... Of course, this strategy only raises another question. Why does America need government-sponsored bodies to back the type of mortgages that were most likely to be repaid? It looks as if their core business is a solution to a non-existent problem.
Although there is an obvious benefit in paying for good times, there is an obvious downside: you have to pay it back one day, and you pay it back double big in the down times, likely with liberal doses of salt in your gaping wounds. Welcome, Angst!
We keep coming back to the same old problem in the financial field as with, say, security, which is frequently written about in this blog. So many policies eventually founder on one flawed assumption: that we believe we know how to do it right.
However, Fannie and Freddie did not stick to their knitting. In the late 1990s they moved heavily into another area: buying mortgage-backed securities issued by others (see chart 3). Again, this was a version of the carry trade: they used their cheap financing to buy higher-yielding assets.
Why did they drift from the original mission?
Because they could. Because they were paid on results. Because it was fun. Because they could be players, they could get some of that esteemed Wall Street respect.
A thousand likely reasons, none of which are important, because the general truth here is that a subsidy will always turn around and hurt the very people whom it intends to help. Washington DC's original intention of providing some nice polite subsidy would and must be warped to come around and bite them. Some day, some way.
Sometimes the mortgage companies were buying each other’s debt: turtles propping each other up. Although this boosted short-term profits, it did not seem to be part of the duo’s original mission. As Mr Greenspan remarked, these purchases “do not appear needed to supply mortgage market liquidity or to enhance capital markets in the United States”.
References to the comments of Mr Greenspan are generally to be taken as insider financial code for the real story. The same apparently goes for Mervyn King; yet, evidently, neither is a wizard who can repair the dam before it breaches, merely a farseer who can talk about the spreading cracks.
Now, the USA housing market gets what it deserves for its hubris. The problems for the rest of us are twofold: it drags everything else in the world down as well, and it is not as if those in the Central Banks, the Congresses, the Administrations or the Peoples of the world will learn the slightest bit of wisdom over this affair. Plan on this happening again in another few decades.
If you think I jest, you might like to invest in a new book by George Selgin entitled Good Money: Birmingham Button Makers, the Royal Mint, and the Beginnings of Modern Coinage, 1775-1821.
Although it has long been maintained that governments alone are fit to coin money, the story of coining during Great Britain’s Industrial Revolution disproves this conventional belief. In fact, far from proving itself capable of meeting the monetary needs of an industrializing economy, the Royal Mint presided over a cash shortage so severe that it threatened to stunt British economic growth. For several decades beginning in 1775, the Royal Mint did not strike a single copper coin. Nor did it coin much silver, thanks to official policies that undervalued that metal.
To our great and enduring depression, the lesson of currency shortage was not learnt until well after the events of the 1930s. The story of Matthew Boulton is salutary:
Late in 1797 Matthew Boulton finally managed to land his long-hoped-for regal coining contract, a story told in chapter five, “The Boulton Copper.” Once Boulton gained his contract, other private coiners withdrew from the business, fearing that the government was now likely to suppress their coins. Although the official copper coins Boulton produced were better than the old regal copper coinage had been, and were produced in large numbers, in many respects they proved less effective at addressing the coin shortage than commercial coins had been. Eventually Boulton took part in the reform of the Royal Mint, equipping a brand new mint building with his steam-powered coining equipment. By doing so, Boulton unwittingly contributed to his own mint’s demise, because contrary to his expectations the government reneged on its promise to let him go on supplying British copper coin.
Then, policy was a charade and promises were not to be believed. Are we any better off now?
Whoops:
SEC Spares Market Makers From "Naked-Short" Sales Ban
July 18 (Bloomberg) -- The U.S. Securities and Exchange Commission exempted market makers in stocks from the emergency rule aimed at preventing manipulation in shares of Fannie Mae, Freddie Mac and 17 Wall Street firms.
The SEC granted relief for equity and option traders responsible for pairing off orders from a rule that seeks to bar the use of abusive tactics when betting on a drop in share prices. Exchange officials said limits on "naked-short" sales would inhibit the flow of transactions and raise costs for investors.
"The purpose of this accommodation is to permit market makers to facilitate customer orders in a fast-moving market," the SEC said in the amendment.
A reader writes: "that lasted what, 12 hours ?" I don't know, but it certainly clashes with the dramatic news of earlier in the week from the SEC, as the Economist reports:
Desperate to prevent more collapses, the main stockmarket regulator has slapped a ban for up to one month on “naked shorting” of the shares of 17 investment banks, and of Fannie Mae and Freddie Mac, the two mortgage giants. Some argue that such trades, in which investors sell shares they do not yet possess, make it easier to manipulate prices. The SEC has also reportedly issued over 50 subpoenas to banks and hedge funds as part of its investigation into possibly abusive trading of shares of Bear Stearns and Lehman Brothers.
Naked selling is technically illegal but unenforceable. The fact that it is illegal is a natural extension of contract law: you can't sell something you haven't got. The reason it is technically easy is that the markets work on delayed settlement. That is, all orders to sell are technically short sales, as all sales are agreed before you turn up with the shares. Hence, all orders are based on trust, and if your broker trusts you then you can do it, and do it for as long as your broker trusts you.
"Short selling" as manipulation, as opposed to all selling, works like this: imagine I'm a trusted big player. I get together with a bunch of mates, and agree, next Wednesday, we'll drive the market in Microsoft down. We conspire to each put in a random order for selling large lumps of shares in the morning, followed by lots of buy orders in the afternoon. As long as we buy in the afternoon what we sold in the morning, we're fine.
On the morning of the nefarious deed, buyers at the top price are absorbed, then the next lower price, then the next ... and so the price trickles lower. Because we are big, our combined sell orders send signals through the market to say "sell, sell, sell" and others follow suit. Then, at the pre-arranged time, we start buying. By now however the price has moved down. So we sold at a high price and bought back at a lower price. We buy until we've collected the same number we sold in the morning, and hence our end-of-day settlement is zero. Profit is ours, crack open the gin!
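To put toy numbers on that morning/afternoon cycle, here is a back-of-envelope sketch (prices and volumes invented purely for illustration, and ignoring fees and anything beyond the assumed price drift):

# Toy illustration of the manipulation described above: sell big lumps in
# the morning, let the price drift down, buy the same number back cheaper
# in the afternoon. All figures are invented for illustration only.
morning_sales  = [(200_000, 30.00), (300_000, 29.80), (500_000, 29.50)]  # (shares, price)
afternoon_buys = [(500_000, 29.10), (300_000, 29.25), (200_000, 29.40)]

sold   = sum(q for q, _ in morning_sales)
bought = sum(q for q, _ in afternoon_buys)
assert sold == bought            # end-of-day settlement nets to zero shares owed

proceeds = sum(q * p for q, p in morning_sales)
cost     = sum(q * p for q, p in afternoon_buys)
print(f"shares cycled: {sold:,}, profit: {proceeds - cost:,.2f}")   # ~485,000.00
# The trick needs no shares at all, only the broker's trust that we settle later.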
This trick works because (a) we are big enough to buy/sell large lumps of shares, and (b) settlement is delayed as long as we can convince the brokers, so (c) we don't actually need the shares, just the broker's trust. Generally on a good day, no more than 1% of a company's shares move, so we need something of that size. I'd need to be very big to do that with the biggest fish, but obviously there are some sharks around:
The S&P500 companies with the biggest rises in short positions relative to their free floats in recent weeks include Sears, a retailer, and General Motors, a carmaker.
Those driven by morality and riven with angst will be quick to spot that (a) this is only available to *some* customers, (b) is therefore discriminatory, (c) that it is pure and simple manipulation, and (d) something must be done!
Noting that the service of short-selling only works when the insiders let outsiders play that game, the simple-minded will propose that banning the insiders from letting it happen will do the trick nicely. But this is easier said than done: selling without shares is how the system works, at its core, so letting the insiders do it is essential. From there, it is no distance at all to see that insiders providing short sales as a service to clients is ... not controllable, because fundamentally all activities are provided to a client some time, some way. Any rule will be bypassed *and* it will be bypassed for those clients who can pay more. In the end, any rule probably makes the situation worse than better, because it embeds the discrimination in favour of the big sharks, in contrast to one's regulatory aim of slapping them down.
Rules making things worse could well be the stable situation in the USA, and possibly other countries. The root of the problem with the USA is historical: Congress makes the laws, and made most of the foundational laws for stock trading in the aftermath of the crash of 1929. Then, during the Great Depression, Congress didn't have much of a clue as to why the panic happened, and indeed nobody else knew much of what was going on either, but they thought that the SEC should be created to make sure it didn't happen again.
Later on, many economists established their fame in studying the Great Depression (for example, Keynes and Friedman). However, whether any parliament in the world can absorb that wisdom remains questionable: Why should they? Lawmakers are generally lawyers, and are neither traders nor economists, so they rely on expert testimony. And, there is no shortage of experts to tell the select committees how to preserve the benefits of the markets for their people.
Which puts the lie to a claim I made repeatedly over the last week: haven't we figured out how to do safe and secure financial markets by now? Some of us have, but the problem with making laws relying on that wisdom is that the lawmakers have to sort out those who profit by it from those who know how to make it safe. That's practically impossible when the self-interested trader can outspend the economist or the financial cryptographer 1000 to 1.
And, exactly the same logic leads to the wide-spread observation that the regulators are eventually subverted to act on behalf of the largest and richest players:
The SEC’s moves deserve scrutiny. Investment banks must have a dizzying influence over the regulator to win special protection from short-selling, particularly as they act as prime brokers for almost all short-sellers...The SEC’s initiatives are asymmetric. It has not investigated whether bullish investors and executives talked bank share prices up in the good times. Application is also inconsistent. ... Like the Treasury and the Federal Reserve, the SEC is improvising in order to try to protect banks. But when the dust settles, the incoherence of taking a wild swing may become clear for all to see.
When the sheepdog is owned by the wolves, the shepherd will soon be out of business. Unlike the market for sheep, the shareholder cannot pick up his trusty rifle to equalise the odds. Instead, he is offered a bewildering array of new sheepdogs, each of which appear to surprise the wolves for a day or so with new fashionable colours, sizes and gaits. As long as the shareholder does not seek a seat at the table, does not assert primacy over the canines, and does not defend property rights over the rustlers from the next valley, he is no more than tomorrow's mutton, reared today.
I've notched up two events in London: the International Conference on Digital Evidence 10 days ago, and yesterday I attended BarCampBankLondon. I have to say, they were great events!
Another great conference in our space was the original FC in 1997 in Anguilla. This was a landmark in our field because it successfully brought together many disciplines, each of which could contribute its specialty: law, software, cryptography, managerial, venture, economics, banking, etc. I had the distinct pleasure of a professor in law gently chiding me that I was unaware of an entire school of economics known as transaction economics that deeply affected my presentation. You just can't get that at the regular homogeneous conference, and while I notice that a couple of other conferences are laying claim to dual-discipline audiences, that's not the same thing as Caribbean polyglotism.
Digital Evidence was as excellent as that first FC97, and could defend a top rating among conferences in the financial cryptography space. It had some of that interactivity, perhaps for two reasons: it successfully escaped the trap of fixation on the local jurisdiction, and it had a fair smattering of technical people who could bring the practical perspective to the table.
Although I'd like to blog more about the presentations, it is unlikely that I can travel that long journey; I've probably enough material for a month, and no month to do it in. Which highlights a continuing theme here on this blog: there is clearly a hole in the knowledge-to-wisdom market. It is by now a well-worn cliche that we have too much data, too much information to deal with, so how do we make the step up through knowledge and on to wisdom?
Conferences can help, but I feel it is far too easy to fall into the standard conference models. Top-quality names aimed at top-paying attendees, blindness born of presumptions about audience and presenters (e.g., academic or corporate): these are familiar complaints.
Another complaint is that so much of the value of conferences happens when the "present" button is set to "off". And that leads to a sort of obvious conclusion, in that the attendees don't so much want to hear about your discoveries; rather, what they really want is to develop solutions to their own problems. FC solved this in a novel way by having the conference in the Caribbean and other tourist/financial settings. This lucky choice of a pleasant holiday environment, and the custom of morning papers leaving afternoons free, made for a lot of lively discussion.
There are other models. I experimented at EFCE, which Rachel, Fearghas and I ran a few years back in Edinburgh. My call (and I had to defend my corner on this one) was that the real attendees were the presenters. If you could present to peers who would later on present to you, then we could also more easily turn off the button and start swapping notes. If we could make an entire workshop of peers, then structure would not be imposed, and relationships could potentially form naturally and evolve without so many prejudices.
Which brings us to yesterday's event: BarCampBankLondon. What makes this bash unusual is that it is a meeting of peers (like EFCE), there is a cross-discipline focus (finance and computing, balanced with some legal and consulting people) and there isn't much of an agenda or a selection process (unlike EFCE). Addendum: James Gardner suggests that other conferences are dead, in the face of BarCamp's model.
I'm all for experimentation, and BCBL seemed to manage the leading and focussing issue with only the lightest of touches. What is perhaps even more indicative of the (this?) process was that it was only 10 quid to get in, but you consume your Saturday on un-paid time. Which is a great discriminator: those who will sacrifice to work this issue turned up, and those looking for an easy, paid way to skive off work did not.
So, perhaps an ideal format would be a BarCamp coupled with the routine presentations? Instead of a panel session (which I find a bit fruitless) replace one afternoon with a free-for-all? This is also quite similar to the "rump sessions" that are favoured in the cryptography world. Something to think about when you are running your next conference.
My notes of a presentation by Dr Ugo Bechini at the Int. Conf. on Digital Evidence, London. As it touches on many chords, I've typed it up for the blog:
The European or Civil Law Notary is a powerful agent in commerce in the civil law countries, providing a trusted control of a high value transaction. Often, this check is in the form of an Apostille which is (loosely) a stamp by the Notary on an official document that asserts that the document is indeed official. Although it sounds simple, and similar to common law Notaries Public, behind the simple signature is a weighty process that may be used for real estate, wills, etc.
It works, and as Eliana Morandi puts it, writing in the 2007 edition of the Digital Evidence and Electronic Signature Law Review:
Clear evidence of these risks can be seen in the very rapid escalation, in common law countries, of criminal phenomena that are almost unheard of in civil law countries, at least in the sectors where notaries are involved. The phenomena related to mortgage fraud is particularly important, which the Mortgage Bankers Association estimates to have caused the American system losses of 2.5 trillion dollars in 2005.
OK, so that latter number came from Choicepoint's "research" (referenced somewhere here) but we can probably agree that the grains of truth sum to many billions.
Back to the Notaries. The task that they see ahead of them is to digitise the Apostille, which with some simplification is seen as a small text with a (dig)sig, which they have tried and tested. One lament common to all European tech adventures is that the Notaries, split along national lines, use many different systems: 7 formats indicating at least 7 software packages, frequent upgrades, and of course, ultimately, incompatibility across the Eurozone.
To make notary documents interchangeable, there are (posits Dr Bechini) two solutions:
A commercial alternative was notably absent. Either way, IVTF (or CNUE) has adopted and built the second solution: a website where documents can be uploaded and checked for digsigs; the system checks the signature, the certificate and the authority and translates the results into 4 metrics:
In the IVTF circle, a notary can take full responsibility for a document from another notary when there are 4 green boxes above, meaning that all 4 things check out.
This seems to be working: Notaries are now big users of digsigs, 3 million this year. This is balanced by some downsides: although they cover 4 countries (Deutschland, España, France, Italy), every additional country creates additional complexity.
Question is (and I asked), what happens when the expired or revoked certificate causes a yellow or red warning?
The answer was surprising: the certificates are replaced 6 months before expiry, and the messages themselves are sent on the basis of a few hours. So, instead of the document being archived with digsig and then shared, a relying Notary goes back to the originating Notary to request a new copy. The originating Notary goes to his national repository, picks up his *original* which was registered when the document was created, adds a fresh new digsig, and forwards it. The relying notary checks the fresh signature and moves on to her other tasks.
You can probably see where we are going here. This isn't digital signing of documents, as it was envisaged by the champions of same, it is more like real-time authentication. On the other hand, it does speak to that hypothesis of secure protocol design that suggests you have to get into the soul of your application: Notaries already have a secure way to archive the documents, what they need is a secure way to transmit that confidence on request, to another Notary. There is no problem with short term throw-away signatures, and once we get used to the idea, we can see that it works.
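As a sketch of that short-term, throw-away signature pattern (my reconstruction of the idea, not the IVTF system; the key handling, message format and the few-hours window are all assumptions for illustration):

# Sketch of the real-time pattern described above (a reconstruction, not the
# IVTF implementation): the originating notary re-signs the registered
# original freshly on each request, and the relying notary checks both the
# signature and its freshness rather than a long-lived archival signature.
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

MAX_AGE_SECONDS = 4 * 3600                    # "a few hours", illustrative window

def issue_copy(original: bytes, notary_key: Ed25519PrivateKey):
    """Originating notary: fetch the registered original, stamp and sign it freshly."""
    stamped = original + b"\nissued-at: " + str(int(time.time())).encode()
    return stamped, notary_key.sign(stamped)

def accept_copy(stamped: bytes, signature: bytes, notary_public_key) -> bool:
    """Relying notary: verify the fresh signature, then check it is recent."""
    try:
        notary_public_key.verify(signature, stamped)
    except InvalidSignature:
        return False
    issued_at = int(stamped.rsplit(b"issued-at: ", 1)[1])
    return time.time() - issued_at < MAX_AGE_SECONDS

key = Ed25519PrivateKey.generate()                        # originating notary's key
doc, sig = issue_copy(b"apostille: deed no. 42", key)     # illustrative content
print(accept_copy(doc, sig, key.public_key()))            # True while still fresh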
One closing thought I had was the sensitivity of the national registry. I started this post by commenting on the powerful position that notaries hold in European commerce, the presenter closed by saying "and we want to maintain that position." It doesn't require a PhD to spot the disintermediation problem here, so it will be interesting to see how far this goes.
A second closing thought is that Morandi cites
... the work of economist Hernando de Soto, who has pointed out that a major obstacle to growth in many developing countries is the absence of efficient financial markets that allow people to transform property, first and foremost real estate, into financial capital. The problem, according to de Soto, lies not in the inadequacy of resources (which de Soto estimates at approximately 9.34 trillion dollars) but rather in the absence of a formal, public system for registering property rights that are guaranteed by the state in some way, and which allows owners to use property as collateral to obtain access to the financial capital associated with ownership.
But, Latin America, where de Soto did much of his work, follows the Civil Notary system! There is an unanswered question here. It didn't work for them, so either the European Notaries are wrong in their assertion that this is the reason for the absence of fraud in this area, or de Soto is wrong in his assertion as above. Or?
Cryptographers, software and hardware architects and others in the tech world have developed a strong belief that everything can be solved with more bits and bytes. Often to our benefit, but sometimes to our cost. Just so with matters of law and disputes, where inventions like digital signatures have laid a trail of havoc and confusion through security practices and tools. As we know in financial cryptography, public-key reverse encryptions -- confusingly labelled as digital signatures -- are more usefully examined within the context of the law of evidence than within that of signatures.
Now here cometh those who have to take these legal theories from the back of the technologists' napkins and make them really work: the lawyers. Stephen Mason leads an impressive line-up from many countries in a conference on Digital Evidence:
Digital evidence is ubiquitous, and to such an extent, that it is used in courts every day in criminal, family, maritime, banking, contract, planning and a range of other legal matters. It will not be long before the only evidence before most courts across the globe will all be in the form of digital evidence: photographs taken from mobile telephones, e-mails from Blackberries and laptops, and videos showing criminal behaviour on You Tube are just some of the examples. Now is the time for judges, lawyers and in-house counsel to understand (i) that they need to know some of the issues and (ii) they cannot ignore digital evidence, because the courts deal with it every day, and the amount will increase as time goes by. The aim of the conference will be to alert judges, lawyers (in-house lawyers as well as lawyers in practice), digital forensic specialists, police officers and IT directors responsible for conducting investigations to the issues that surround digital evidence.
Not digital signatures, but evidence! This is a genuinely welcome development, and well worth the visit. Here's more of the blurb:
Conference Programme: International Conference on Digital Evidence, 26th-27th June 2008, Vintners' Hall, London, United Kingdom
Conference: 26th & 27th June 2008, Vintners' Hall, London
Cocktail & Dinner: 26th June 2008, The Honourable Society of Gray's Inn
THE FIRST CONFERENCE TO TREAT DIGITAL EVIDENCE FULLY ON AN INTERNATIONAL PLATFORM...
12 CPD HOURS - ACCREDITED BY THE LAW SOCIETY & THE BAR STANDARDS BOARD
This event has also been accredited on an ad hoc basis under the Faculty's CPD Scheme and will qualify for 12 hours.
Understanding the Technology: Best Practice & Principles for Judges, Lawyers, Litigants, the Accused & Information Security & Digital Evidence Specialists
MIS is hosting & developing this event in partnership with & under the guidance of Stephen Mason, Barrister & Visiting Research Fellow, Digital Evidence Research, British Institute of International and Comparative Law.
Mr. Mason is in charge of the programme's content and is the author of Electronic Signatures in Law (Tottel, 2nd edn, 2007) [This text covers 98 jurisdictions including case law from Argentina, Australia, Brazil, Canada, China, Colombia, Czech Republic, Denmark, Dominican Republic, England & Wales, Estonia, Finland, France, Germany, Greece, Hungary, Israel, Italy, Lithuania, Netherlands, Papua New Guinea, Poland, Portugal, Singapore, South Africa, Spain, Switzerland and the United States of America]. He is also an author and general editor of Electronic Evidence: Disclosure, Discovery & Admissibility (LexisNexis Butterworths, 2007) [This text covers the following jurisdictions: Australia, Canada, England & Wales, Hong Kong, India, Ireland, New Zealand, Scotland, Singapore, South Africa and the United States of America]. Stephen is also general editor of International Electronic Evidence (British Institute of International and Comparative Law, 2008), ISBN 978-1-905221-29-5, covering the following jurisdictions: Argentina, Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Egypt, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Italy, Japan, Latvia, Lithuania, Luxembourg, Malta, Mexico, Netherlands, Norway, Poland, Romania, Russia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Thailand and Turkey. Register Now!
Back in the 1990s, a group called the cypherpunks waged the crypto wars with the US government. They wanted easy access to crypto, the US government didn't want them to have it. RAH points to Sameer's blog:
The solution for my company, C2Net Software, Inc., was to develop an offshore development team and have them develop the software there. Other companies developed different strategies. Most opted to sell broken products to their overseas customers. One other company cared about the security of their customers. That company was PGP. PGP chose a different strategy however. They published their source code as a book. The book was then exported, the contents of that book were then scanned in, and then a completely legal international version of PGP was born.
Sameer is selling his copy of PGP 5.0i, the book that was printed in the USA and exported in boxes to the international scanning project.
PGP 5.0i, on the other hand, was compiled from source code that was printed in a book (well, actually 12 books - over 6000 pages!). The books were exported from the USA in accordance with the US Export Regulations, and the pages were then scanned and OCRed to make the source available in electronic form. This was not an easy task. More than 70 people from all over Europe worked for over 1000 hours to make the PGP 5.0i release possible. But it was worth it. PGP 5.0i was the first PGP version that is 100% legal to use outside the USA, because no source code was exported in electronic form.
The last 1% was done at HIP 1997, or hacking-in-progress, a Dutch open-air festival conducted once every 4 years. (You can see an early attempt at blogging here and here and a 2004 post.)
Lucky Green turned up with a box and ... left it lying around, at which point the blogging stopped and the work started. A team of non-Americans then spent around 2 days working through the last, unscanned and broken pages. There were about 20 at the peak, working in teams of 2 and 3 across all of HIP, swapping their completed files back at the cypherpunks tent. Somewhere around is a photo of the last file being worked through, with three well-known hackers on one keyboard.
It was uploaded around 3 in the morning, Sunday if I recall, as the party was winding down; some brave souls waited around for the confirmed download, but by 5am, only Sameer was still up and willing to download and compile a first international PGP 5.0.
The story has a sad ending. In the last months of 1999, the US government released the controls on exporting free and open cryptography. Hailed by all as a defeat for the government, it was really a tactical withdrawal from ground that wasn't sustainable. The cypherpunks lost more: with the departure of their clear enemy, they dispersed over time, and we emerging security and financial cryptography entrepreneurs lost our coolness factor and ready supply of cryptoplumbers. Lots of crypto projects migrated back to the US, where control was found by other means. The industry drifted back to insecure-practice-by-fiat. Buyers stopped being aware of security, and they were set up for the next failure and the next and the next...
Strategic victory went to the US government, which still maintains a policy of keeping the Internet insecure by suppressing crypto where and when it can. Something to remember if you ever get offered a nice public relations job in the DHS, or if you ever get phished.
If you've been following the story of the Internet and Information Security, by now you will have worked out that there are two common classes of damage that are done when data is breached: The direct damage to the individual victims and the scandal damage to the organisation victim when the media get hold of it. From the Economist:
... Italians had learnt, to their varying dismay, amusement and fascination, that—without warning or consultation with the data-protection authority—the tax authorities had put all 38.5m tax returns for 2005 up on the internet. The site was promptly jammed by the volume of hits. Before being blacked out at the insistence of data protectors, vast amounts of data were downloaded, posted to other sites or, as eBay found, burned on to disks.
The uproar in families and workplaces caused by the revelation of people's incomes (or, rather, declared incomes) can only be guessed at. A society aristocrat, returning from St Tropez, found himself explaining to the media how he financed a gilded lifestyle on earnings of just €32,043 ($47,423). He said he had generous friends.
...Vincenzo Visco, who was responsible for stamping out tax dodging, said it promoted “transparency and democracy”. Since the 1970s, tax returns have been sent to town halls where they can be seen by the public (which is how incomes of public figures reach the media). Officials say blandly that they were merely following government guidelines to encourage the use of the internet as a means of communication.
The data-protection authority disagreed. On May 6th it ruled that releasing tax returns into cyberspace was “illicit”, and qualitatively different from making them available in paper form. It could lead to the preparation of lists containing falsified data and meant the information would remain available for longer than the 12 months fixed by law.
The affair may not end there. A prosecutor is investigating if the law has been broken. And a consumer association is seeking damages. It suggests €520 per taxpayer would be appropriate compensation for the unsolicited exposure.
An insight of the 'silver bullets' approach to the market is that these damages should be considered separately, not lumped together. The one that is the biggest cost will dominate the solution, and if the two damages suggest opposing solutions, the result may be at the expense of the weaker side.
What makes Information Security so difficult is that the public scandal part of the damage (the indirect component) is generally the greater damage. Hence, breaches have been classically hushed up, and the direct damages to the consumers are untreated. In this market, then, the driving force is avoiding the scandal, which not only means that direct damage to the consumer is ignored, it is likely made worse.
We then see more evidence of the (rare) wisdom of breach disclosure laws, even if, in this case, the breach was a disclosure by intention. The legal action mentioned above puts a number on the direct damage to the consumer victim. We may not agree with €520, but it's a number and a starting position that is only possible because the breach is fully out in the open.
Those then that oppose stronger breach laws, or wish to insert various weasel words such as "you're cool to keep it hush-hush if you encrypted the data with ROT13" should ask themselves this: is it reasonable to reduce the indirect damage of adverse publicity at the expense of making direct damages to the consumer even worse?
Lots of discussion, etc etc blah blah. My thought is this: we need to get ourselves to a point, as a society, where we do not turn the organisation into more of a secondary victim than it already is through its breach. We need to not make matters worse; we should work to remove the incentives to secrecy, rather than counterbalancing them with opposing and negative incentives such as heavy-handed data protection regulators. If there is any vestige of professionalism in the industry, then this is one way to show it: let's close down the paparazzi school of infosec and encourage and reward companies for sharing their breaches in the open.
Who said that? Was it Andy Warhol or Marshall McLuhan or Maurice Saatchi?
A few days ago, we reflected on the medium of the RSA conference, and how the message has lost its shine. One question is how to put the shine back on it, but another question is, why do we want shine on the conference? As Ping mused on, what is the message in the first place?
The medium is the message. Here's an example that I stumbled across today: Neighbours. If you don't know what that is, have a look at wikipedia. Take a few moments to skim through the long entry there ...
If you didn't know what it was before, do you know now? Wikipedia tells us about the popularity, the participants, the ratings, the revamps, the locations, the history of the success, the theme tune, and the awards. Other than these brief lines at the beginning:
Neighbours is a long-running Australian soap opera. The series follows the daily lives of several families who live in the six houses at the end of Ramsay Street, a quiet cul-de-sac in the fictional middle-class suburb of Erinsborough. Storylines explore the romances, family problems, domestic squabbles, and other key life events affecting the various residents.
Wikipedia does not tell the reader what Neighbours is. There are 5998 words in the article, and 55 words in that message above. If we were being academic, we could call them message type I and type II and note that there is a ratio of 100 to 1 between them!
At a superficial, user-based level, the 55 words above is the important message. To me and you, that is. But, to whoever wrote that article, the other 99% is clearly the most important. Their words are about the medium, not what we outsiders would have called the message, and it is here that the medium has become the message.
Some of that stuff *is* important. If we drag through the entire article we find that the TV show does one million daily audience in Australia, peaked at 18 million in the UK, and other countries had their times too. That you can take to the bank, advertisers will line up out on the street to buy that.
We can also accurately measure the cost and therefore benefit to consumers: 30 minutes each working day. So we know, objectively, that this entertainment is worth 30 minutes of prime time for the viewers. (The concept of a soap opera guarantees repeat business, so you know you are also targeting a consistent set of people, consistently.)
We can then conclude that, on the buy side and the sell side of this product, we have some sort of objective meeting of the minds. And, we can compress this mind meeting into a single number called ratings. Based on that one number alone, we can trade.
That number, patient reader, is a metric. A metric is something that is objectively important to both buyer and seller. It's Okay that we don't know what "it" is, as long as we have the metric of it. In television, the medium is the message, and that's cool.
Now, if we turn back to the RSA channel .. er .. conference, we can find similar numbers: In 2007, 17,000 attendees and 340 exhibitors. Which is bankable, you can definitely get funding for that, so that conference is in good shape. On the sell side, all is grand.
However, as the recent blog thread pointed out, on the buy side, there is a worrying shortage of greatness: the message was, variously, buyers can't understand the products, buyers think the products are crap, buyers don't know why they're there, and buyers aren't buying.
In short, buyers aren't, anymore. And this separates Neighbours from RSA in a way that is extremely subtle. When I watch an episode of Neighbours, my presence is significant in and of itself because the advertising works on a presence & repeat basis. I'm either entertained and come back tomorrow, or I stop watching, so entertainment is sufficient to make the trade work.
However, if I go to the RSA conference, the issue of my *presence* isn't the key. Straight advertising isn't the point here, so something other than my presence is needed.
What is important is that the exhibitors sell something. Marketing cannot count on presence alone because the buyer is not given that opportunity statistically (1 buyer, 340 exhibitors, zero chance of seeing all the adverts) so something else has to serve as the critical measurement of success.
Recent blog postings suggest it is sales. Whatever it is, we haven't got that measurement. What we do have is exhibitors and participants, but because these numbers fail to have relevance to both sides of the buy-sell divide then these numbers fail to be metrics.
Which places RSA in a different space to Neighbours. Readers will recognise the frequent theme of security being in the market for silver bullets, and that the numbers of exhibitors and participants are therefore signals, not metrics.
And, in this space, when the medium becomes the message, that's very uncool, because we are now looking at a number that doesn't speak to sales. When Marshall McLuhan coined his phrase, he was speaking generally positively about electronic media such as TV, but we can interpret this in security more as a warning: In a market based on signals not metrics, when the signals become the system, when the medium becomes the message, it is inevitable that the system will collapse, because it is no longer founded on objective needs.
Signals do not by definition capture enough of the perfect quality that is needed, they only proxy it in some uncertain and unreliable sense. Which is fine, if we all understand this. To extend Spence's example, if we know that a degree in Computer Science is not a guarantee that the guy can program a computer, that's cool.
Or, to put it another way: there are no good signals, only less bad ones. The signal is less bad than the alternative, which is nothing. Which leads us to the hypothesis that the market will derail when we act as if the signal is a metric, as if the Bachelor's in CompSci is a certification of programming skill, as if booth size is the quality of security.
Have another look at Neighbours. It's still going on after 22 years or so. It is around one million viewers, because of some revamp. That metric is still being taken to the bank. The viewer is entertained, the advertiser markets. Buyer and seller are comfortable; the message and the medium therefore are in happy coincidence, and they can happily live together because the medium lives on solid metrics. All of this, and we still don't know what it is. That's TV.
Whereas with the world of security, we know that the signal of the RSA conference is as strong as ever, but we also know that, in this very sector that the conference has become the iconic symbol for, the wheels are coming off. And, what's even more disturbing, we know that the RSA conference will go from strength to strength, even as the wheels are spinning out of view, and we the users are sliding closer to the proverbial cliff.
I know the patient reader is desperate to find out what Neighbours really is, so here goes. Read the following with an Aussie sense of humour:
About 10 years back, I and a partner flew to Prague and then caught a train to a Czech town near the Polish border, in a then-devastated coal belt. We were to consult to a privatised company that was once the Ministry of Mines. Since communist times, the Ministry had shrunk from many hundreds of thousands of miners down to around 20,000 at that time. Of those, only 2 people spoke English. These two English speakers, both of them, picked us up at the train station. As we drove off, the girl of the pair started talking to us, and her accent immediately jolted us out of our 24 hours' travel stupor: Australian! Which was kind of unexpected in such a remote place, off the beaten track, as they say down under.
I looked slowly at my friend, who was Scandinavian. He looked at me, slowly. Okay, so there's a story here, we thought... Then, searching for the cautious approach, we tried to figure it out:
"How long have you lived here?" I asked.
She looked back at me, with worry in her face. "All ma life. Ah'm Czech." In pure, honest dinkum Strine, if you know what that means.
"No, you're not, you're Aussie!"
"I'm Czech! I kid you not!"
"Okay...." I asked slowly, "then why do you have an Australian accent."
Nothing, except more worry on her face. "Where did you learn English?"
This she answered: "London. I did a couple of years' Uni there."
"But you don't have an English accent. Where did you pick up an Australian accent?"
"Promise you won't laugh?" We both duly promised her we would not laugh, which was easy, as we were both too tired to find anything funny any more.
"Well," she went on, "I was s'posed to do English at Uni but I didn't." That is, she did not attend the University's language classes.
"Instead, I stayed at home and watched Neighbours every lunchtime!"
Of course, we both cracked up and laughed until she was almost in tears.
That's what Neighbours is -- a cultural phenomenon that swept through Britain by presenting an idyllic image of a sunny, happy place in a country far far away. Lots of fun people, lots of sunshine, lots of colour, lots of simple dramas, albeit all in that funny Aussie drawl. A phenomenon strong enough that, in an unfair competition of 22 minutes, squeezed between daily life on the streets of the most cosmopolitan city in the world, it was able to imprint itself on the student visitor, and totally dominate the maturing of her language. The result was perfect English, yet with no trace of the society in which she lived.
But you won't read that in Wikipedia, because, for the world of TV, the medium is the message, and they have a metric. They only care that she watched, not what it did to her. And, in the converse, the language student got what she wanted, and didn't care what they thought about that.
Chandler spots a post by Michael on those pervasively two-wheeled Dutch, who all share one standard beaten-up old bike model, apparently mass-produced in a beaten-up old bike factory.
The Dutch are also prosperous, and they have a strong engineering and technology culture, so I was surprised on two visits in the last few years to see that their bikes are all junkers: poorly maintained, old, heavy, three-speeds. The word I used was all. ...I asked about this and everyone immediately said "if you had a good bike it would be immediately stolen." On reflection, I'm not satisfied with the answer, for a couple of reasons. First, the Dutch are about as law-abiding as Americans, perhaps more. Second, the serious lock that has kept my pretty good bikes secure on sketchy streets in two US cities for decades is available for purchase all over the world.
Third, and most important, I don't see how this belief could be justified by real data, because there were absolutely no bikes worth stealing anywhere I looked. ...
Right. So here's an interesting case of an apparently irreconcilable conundrum. Why does all the evidence suggest that bike insecurity is an improbability, yet we all believe it to be pervasive? Let's tear this down, because there are striking parallels between Michael's topic and the current debate on security. (Disclosure: like half of all good FCers, I've spent some time on Amsterdam wheels, but it is a decade or so back.)
At least, back then, I can confirm that bicycle theft was an endemic problem. I can't swear to any figures, but I recall this: average lifespan of a new bike was around 3 months (then it becomes someone else's old bike). I do recall frequent discussions about a German friend who lost her bike, stolen, several times, and had to go down to the known areas where she could buy another standard beat-up bike from some shady character. Two or three times per year, and I was even press-ganged into riding shotgun once, so I have some first-hand evidence that she wasn't secretly building a bike out of spare parts she had in her handbag. Back then, the going price was around 25-50 guilders (hazy memory) which would be 10-30 euros. Anyone know the price at the moment?
For the most part, I used inline skates. However when I did some small job somewhere (for an FC connection), I was faced with the issue. Get a bike, lose it! As a non-native, I lacked the bicycle-loss-anti-angst-gene, so I was emotionally constrained from buying the black rattler. I faced and defeated the demon with a secret weapon, the Brompton!
The Dutch being law-abiding: well, this is just plain wrong. The Dutch are very upright, but that doesn't mean they aren't human. Law-abiding is an economic issue, not an absolute. IMO, there is no such thing as a region where everyone abides by the law, there are just regions where they share peculiarities in their attitudes about the law. For tourists, there are stereotypes, but the wise FCer gnaws at the illusion until the darker side of economic reality and humanity is revealed. It's fun, because without getting into the character of the people, you can't design FC systems for them!
As it turns out, there is even a casual political term for this duality: the Dutch Compromise describes their famous ability to pass a law to appease one group of people, and then ignore it totally to appease another. A rather well-known example: it is technically illegal to trade in drugs and prostitution, yet, for the latter, you are allowed to display your own wares in your own window. For evidence, look around for a concentration of red lights in the windows.
Final trick: when they buy a new bike (as new stock has to be inserted into the population of rotating wheels), the wise Dutch commuter will spend a few hours making it look old and tatty. Disguise is a skill, which may explain the superficial observation that no bicycle is worth stealing.
What I don't know: why the trade persists. One factor that may explain this is that enough of the Dutch will buy a stolen bike to make it work. I also asked about this, and recall discussions where very upright, very "law-abiding" citizens did indeed admit to buying stolen wheels. So the mental picture here is of a rental or loaning system, and as a society, they haven't got it together to escape their cyclical prisoner's dilemma.
Also: are bike locks totally secure? About as secure as crypto, I'd say. Secure when it works, a broken bucket of worthless bits when it doesn't. But let's hear from others?
Addendum: citybikes are another curiosity. Adam reports that they are now being tried in the US.
Phishing still works, says Verisign:
...these latest messages masquerade as an official subpoena requiring the recipient to appear before a federal grand jury. The emails correctly address CEOs and other high-ranking executives by their full name and include their phone number and company name, according to Matt Richard, director of rapid response at iDefense, a division of VeriSign that helps protect financial institutions from fraud. ...About 2,000 executives took the bait on Monday, and an additional 70 have fallen for the latest scam, Richard said. Operating under the assumption that as many as 10 percent of recipients fell for the ruse, he estimated that 21,000 executives may have received the email. Only eight of the top 35 anti-virus products detected the malware on Monday, and on Wednesday, only 11 programs were flagging the new payload, which has been modified to further evade being caught.
I find 10% to be exceptionally large, but, OK, it's a number, and we collect numbers!
Disclosure for them: Verisign sells an anti-phishing technology called secure browsing, or at least the certificates part of that. (Hence they and you are interested in phishing statistics.) Due to problems in the browser interface, they and other CAs now also sell a "green" version called Extended Validation. This -- encouragingly -- fixes some problems with the older status quo, because more info is visible for users to assess risks (a statement by the CA, more or less). Less encouragingly, EV may trade future security for current benefit, because it further cements the institutional structure of secure browsing, meaning that as attackers spin faster in their OODA loops, browsers will spin slower around the attackers.
Luckily, Johnath reports that further experiments are due in Firefox 3.1, so there is still some spinning going on:
Here’s my initial list of the 3 things I care most about, what have I missed?
1. Key Continuity Management
Key continuity management is the name for an approach to SSL certificates that focuses more on “is this the same site I saw last time?” instead of “is this site presenting a cert from a trusted third party?” Those approaches don’t have to be mutually exclusive, and shouldn’t in our case, but supporting some version of this would let us deal more intelligently with crypto environments that don’t use CA-issued certificates.
Johnath's description sells it short, perhaps for political reasons. KCM is useful when the user knows more than the CA, which unfortunately is most of the time. This might mean that the old solution should be thrown out in favour of KCM, but the challenge lies in extracting the user's knowledge in an efficacious way. As the goal with modern software is to never bother the user, this is much more of a challenge than it might first appear. Hence, as he suggests, KCM and CA-certified browsing will probably live side by side for some time.
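To make the key-continuity idea concrete, here is a minimal sketch of the cache-and-compare check in Python. The cache file, the hostnames and the exact policy are my own illustrative choices, not what any browser actually does:

    # Minimal sketch of key continuity: "is this the same cert I saw last time?"
    # The cache location and policy are illustrative only.
    import hashlib
    import json
    import ssl
    from pathlib import Path

    CACHE = Path.home() / ".cert_continuity.json"

    def fingerprint(host, port=443):
        # Fetch the server certificate (PEM) and hash it.
        pem = ssl.get_server_certificate((host, port))
        return hashlib.sha256(pem.encode()).hexdigest()

    def check_continuity(host, port=443):
        cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
        seen = cache.get(host)
        now = fingerprint(host, port)
        if seen is None:
            cache[host] = now                  # first sight: remember it
            CACHE.write_text(json.dumps(cache))
            return "first sight: key cached (the leap of faith)"
        if seen == now:
            return "same key as last time: continuity holds"
        return "KEY CHANGED: warn the user, do not silently proceed"

    if __name__ == "__main__":
        print(check_continuity("www.example.com"))

Note how none of this needs a CA at all; the CA check, if wanted, simply layers on top.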
If there was a list of important security fixes for phishing, I'd say it should be this: UI fixes, KCM and TLS/SNI. Firefox is now covering all three of those bases. Curiously, Johnath goes on to say:
The first is for me to get a better understanding of user certificates. In North America (outside of the military, at least) client certificates are not a regular matter of course for most users, but in other parts of the world, they are becoming downright commonplace. As I understand it, Belgium and Denmark already issue certs to their citizenry for government interaction, and I think Britain is considering its options as well. We’ve fixed some bugs in that UI in Firefox 3, but I think it’s still a second-class UI in terms of the attention it has gotten, and making it awesome would probably help a lot of users in the countries that use them. If you have experience and feedback here, I would welcome it.
Certainly it is worthy of attention (although I'm surprised about the European situation) because they strictly dominate over username-passwords in such utterly scientific, fair and unbiased tests like the menace of the chocolate bar. More clearly, if you are worried about eavesdropping defeating your otherwise naked and vulnerable transactions, client-side private keys are the start of the way forward to proper financial cryptography.
I've found x.509 client certificates easier to use than expected, but they are terribly hard to install into the browser. There are two real easy fixes for this: 1. allow the browser to generate a self-signed cert as a default, so we get more widespread use, and 2. create some sort of CA <--> browser protocol so that this interchange can happen with a button push. (Possible 3., I suspect there may be some issues with SSL and client certs, but I keep getting that part wrong so I'll be vague this time!)
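By way of a sketch of fix 1, this is roughly what "generate a self-signed cert as a default" amounts to, done here with the Python cryptography package as a stand-in for whatever a browser would do internally; the subject name and validity period are illustrative only:

    # Sketch: generate a keypair and a self-signed certificate, no CA involved.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"alice@example.com")])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                     # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )

    # A PEM blob ready to be stashed wherever the mail client keeps its certs.
    print(cert.public_bytes(serialization.Encoding.PEM).decode())

The point of doing it silently and by default is that the key exists before anyone has to think about it; the CA <--> browser button-push protocol of fix 2 can then upgrade it later.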
Which leaves us inevitably and scarily to our other big concern: Browser hardening against MITB. (How that is done is ... er ... beyond scope of a blog post.) What news there?
In our ELTEcrypt research group [writes Dani Nagy], we discussed opportunistic public key exchange from a cost-benefit point of view and came up with an important improvement over the existing schemes (e.g. ssh), which, I think, must be advertised as broadly as possible. It may even merit a short paper to some conference, but for now, I would like to ask you to publish it in your blog.
Opportunistic public key exchange is when two communicating parties perform an unauthenticated key exchange before the first communication session, assume that this key is trustworthy and then only verify that the same party uses the same key every time. This lowers the costs of defense significantly by not imposing authentication on the participants, while at the same time it does not significantly lower the cost of the dominant attack (doing MITM during the first communication session is typically not the dominant attack). Therefore, it is a Pareto-improvement over an authenticated PKI.
One successful implementation of this principle is ssh. However, it has one major flaw, stemming from misplaced costs: when an ssh host is re-installed or replaced by a new one, the cost of migrating the private key of the host is imposed on the host admin, while most of the costs resulting from not doing so are imposed on the clients.
In the current arrangement, when a new system is installed, the ssh host generates itself a new key pair. Migrating the old key requires extra work on the system administrator's part. So, he probably won't do it.
If the host admin fails to migrate the key pair, clients will get a frightening error message that won't let them do their job, until they exert significant effort for removing the "offending" old public key from their key cache. This is their most straightforward solution, which both weakens their security (they lose all protection against MITM) and punishes them for the host admin's mistake.
This could be improved in the following way: if the client detects that the host's public key has changed, instead of quitting after warning the user, it allows the user to accept the new key temporarily for this one session by hitting "yes" and SENDS AN EMAIL TO THE SYSTEM ADMINISTRATOR.
Such a scheme metes out punishment where it is due. It does not penalize the client too much for the host admin's mistake, and provides the latter with all the right incentives to do his duty (until he fixes the migration problem, he will be bombarded by emails by all the clients and the most straightforward solution to his problem is to migrate the key, which also happens to be the right thing to do).
As an added benefit, in some attack scenarios, the host admin will learn about an ongoing attack.
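A rough sketch of what that client-side behaviour might look like, in Python; the cache file, the admin address and the local mail relay are my assumptions, not part of Dani's proposal:

    # Sketch: on a changed host key, warn, allow a one-session acceptance,
    # and notify the host admin by email. Addresses and paths are illustrative.
    import json
    import smtplib
    from email.message import EmailMessage
    from pathlib import Path

    KNOWN = Path.home() / ".known_host_keys.json"

    def notify_admin(host, old_fp, new_fp):
        msg = EmailMessage()
        msg["Subject"] = f"host key changed on {host}"
        msg["From"] = "keywatch@client.example"
        msg["To"] = f"root@{host}"                 # assumed admin address
        msg.set_content(f"old: {old_fp}\nnew: {new_fp}\n"
                        "If you reinstalled without migrating the key, please fix it.")
        with smtplib.SMTP("localhost") as smtp:    # assumes a local mail relay
            smtp.send_message(msg)

    def check_host_key(host, presented_fp, ask_user=input):
        known = json.loads(KNOWN.read_text()) if KNOWN.exists() else {}
        cached = known.get(host)
        if cached is None:
            known[host] = presented_fp             # first sight: cache it
            KNOWN.write_text(json.dumps(known))
        elif cached != presented_fp:
            answer = ask_user(f"Key for {host} changed. Accept for this session? (yes/no) ")
            if answer.strip().lower() != "yes":
                raise SystemExit("connection refused")
            notify_admin(host, cached, presented_fp)
            # accepted temporarily; deliberately do NOT overwrite the cache

The punishment lands where it belongs: the client clicks once, the admin gets mail until the key is migrated.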
We've all heard it a hundred times: what's your threat model? But how many of us have been able to answer that question? Sadly, fewer than we would want, and I myself would not have a confident answer to the question. As writings on threat modelling are few and far between, it is difficult to draw a hard line under the concept. Yet more evidence of gaping holes in the security thinker's credibility.
Adam Shostack has written a series of blog posts on threat modelling in action at Microsoft (read in reverse order). It's good: readable, and a better starting point, if you need to do it, than anything else I've seen. Here are a couple of plus points (there are more) and a couple of criticisms:
Surprisingly, the approach is written so that it is the developers themselves who do the security work:
We ask feature teams to participate in threat modeling, rather than having a central team of security experts develop threat models. There’s a large trade-off associated with this choice. The benefit is that everyone thinks about security early. The cost is that we have to be very prescriptive in how we advise people to approach the problem. Some people are great at “think like an attacker,” but others have trouble. Even for the people who are good at it, putting a process in place is great for coverage, assurance and reproducibility. But the experts don’t expose the cracks in a process in the same way as asking everyone to participate.
What is written between the lines is that the central security team at Microsoft provides a moderator or leader for the process. This is good thinking, as it brings in the experience, but it still makes the team do the work. I wonder how viable this is for general practice? Outside the megacorps where they have made this institutional mindshift happen, would it be possible to ask a security expert to come in, swallow 2 decades of learning, and be a leader of a process, not a doer of a process?
There are many ramifications of the above discovery, and it is fascinating to watch them bounce around the process. I'll just repeat one here: simplification! Adam hit the obvious problem that if you take the mountain to Mohammad, it should be a small mountain. Developers are employed to write good code, and complex processes just slow that down, and so an aggressive simplification was needed to come up with a very concise model. A more subtle point is that the moderator wants to impart something as well as get through the process, and complexity will kill any retention. Result: one loop on one chart, and one table.
The posts are not a prescription on how to do the whole process, and indeed in some places, they are tantalisingly light (we can guess that it is internal PR done through a public channel). With that understanding, they represent a great starting point.
There are two things that I would criticise. One major error, IMHO: Repudiation. This was an invention by PKI-oriented cryptographers in the 1990s or before, seeking yet another marketing point for the so-called digital signature. It happens to be wrong. Not only is the crypto inadequate to the task, the legal and human processes implied by the Repudiation concept are wrong. Not just misinformed or misaligned, they are reversed from reality, and in direct contradiction, so it is no surprise that after a decade of trying, Non-Repudiation has never ever worked in real life.
It is easy to fix part of the error. Where you see Non-Repudiation, put Auditing (in the sense of logging) or Evidence (if looking for a more juridical flavour). What is a little bit more of a challenge is how to replace "Repudiation" as the name of the attack ... which on reflection is part of the error. The attack alleged as repudiation is problematic, because, before it is proven one way or the other, it is not possible to separate a real attack from a mistake. Then, labelling it as an attack creates a climate of guilty until proven innocent, but without the benefit of evidence tuned to proving innocence. This inevitably leads to injustice which leads to mistrust and finally, (if a fair and open market is in operation) rejection of the technology.
Instead, think of it as an attack born of confusion or uncertainty. This is a minor issue when inside one administrative or trust boundary, because one person elects to carry the entire risk. But it becomes a bigger risk when crossing into different trust areas. Then, different agents are likely to call a confusing situation by different viewpoints (incentives differ!).
At this point the confusion develops into a dispute, and that is the real name for the attack. To resolve the dispute, add auditing / logging and evidence. Indeed, signatures such as hashes and digsigs make mighty fine evidence so it might be that a lot of the work can be retained with only a little tweaking.
I would then prefer to see the threat-property matrix this way:
Threat                 |     | Security Property
Spoofing               | --> | Authentication
Tampering              | --> | Integrity
Dispute                | --> | Evidence
Information Disclosure | --> | Encryption
Denial of Service      | --> | Availability
Elevation of Privilege | --> | Authorisation
A minor criticism I see is in labelling. I think the whole process is not threat modelling but security modelling. It's a minor thing, which Adam neatly disposes of by saying that arguing about terms is not only pointless but distracts from getting the developers to do the job. I agree. If we end up disposing of the term 'security modelling' then I think that is a small price to pay to get the developers a few steps further forward in secure development.
For a decade now, SSH has successfully employed a simple opportunistic protection model that solved the shared-key problem. The premise is quite simple: use the information that the user probably knows. It does this by caching keys on first sight, and watching for unexpected changes. This was originally intended to address the theoretical weakness of public key cryptography called MITM or man-in-the-middle.
Critics of the SSH model, a.k.a. apologists for the PKI model of the Trusted Third Party (certificate authority) have always pointed out that this simply leaves SSH open to a first-time MITM. That is, when some key changes or you first go to a server, it is "unknown" and therefore has to be established with a "leap of faith."
The SSH defenders claim that we know much more about the other machine, so we know when the key is supposed to change. Therefore, it isn't so much a leap of faith as educated risk-taking. To which the critics respond that we all suffer from click-thru syndrome and we never read those messages, anyway.
Etc etc, you can see that this argument goes round and round, and will never be solved until we get some data. So far, the data is almost universally against the TTP model (recall phishing, which the high priests of the PKI have not addressed to any serious extent that I've ever seen). About a year or two back, attack attention started on SSH, and so far it has withstood difficulties with no major or widespread results. So much so that we hear very little about it, in contrast to phishing, which is now a 4 year flood of grief.
After which preamble, I can now report that I have a data point on an attack on SSH! As this is fairly rare, I'm going to report it in fullness, in case it helps. Here goes:
Yesterday, I ssh'd to a machine, and it said:
zhukov$ ssh some.where.example.net
WARNING: RSA key found for host in .ssh/known_hosts:18
RSA key fingerprint 05:a4:c2:cf:32:cc:e8:4d:86:27:b7:01:9a:9c:02:0f.
The authenticity of host can't be established but keys of different type are already known for this host.
DSA key fingerprint is 61:43:9e:1f:ae:24:41:99:b5:0c:3f:e2:43:cd:bc:83.
Are you sure you want to continue connecting (yes/no)?
OK, so I am supposed to know what was going on with that machine, and it was being rebuilt, but I really did not expect SSH to be affected. The ganglia twitch! I asked the sysadm, and he said no, it wasn't him. Hmmm... mighty suspicious.
I accepted the key and carried on. Does this prove that click-through syndrome is really an irresistible temptation and the Achilles heel of SSH, and even the experienced user will fall for it? Not quite. Firstly, we don't really have a choice as sysadms, we have to get in there, compromise or no compromise, and see. Secondly, it is ok to compromise as long as we know it, we assess the risks and take them. I deliberately chose to go ahead in this case, so it is fair to say that I was warned, and the SSH security model did all that was asked of it.
Key accepted (yes), and onwards! It immediately came back and said:
iang@somewhere's password:
Now the ganglia are doing a ninja turtle act and I'm feeling very strange indeed: The apparent thought of being the victim of an actual real live MITM is doubly delicious, as it is supposed to be as unlikely as dying from shark bite. SSH is not supposed to fall back to passwords, it is supposed to use the keys that were set up earlier. At this point, for some emotional reason I can't further divine, I decided to treat this as a compromise and asked my mate to change my password. He did that, and then I logged in.
Then we checked. Lo and behold, SSH had been reinstalled completely, and a little bit of investigation revealed what the warped daemon was up to: password harvesting. And, I had a compromised fresh password, whereas my sysadm mates had their real passwords compromised:
$ cat /dev/saux
foo@...208 (aendermich) [Fri Feb 15 2008 14:56:05 +0100]
iang@...152 (changeme!) [Fri Feb 15 2008 15:01:11 +0100]
nuss@...208 (43Er5z7) [Fri Feb 15 2008 16:10:34 +0100]
iang@...113 (kash@zza75) [Fri Feb 15 2008 16:23:15 +0100]
iang@...113 (kash@zza75) [Fri Feb 15 2008 16:35:59 +0100]
$
The attacker had replaced the SSH daemon with one that insisted that the users type in their passwords. Luckily, we caught it with only one or two compromises.
In sum, the SSH security model did its job. This time! The fallback to server-key re-acceptance triggered sufficient suspicion, and the fallback to passwords gave confirmation.
As a single data point, it's not easy to extrapolate but we can point at which direction it is heading:
In terms of our principles, we can then underscore the following:
All in all, SSH did a good job. Which still leaves us with the rather traumatic job of cleaning up a machine with 3-4 years of crappy PHP applications ... but that's another story.
Everyone is talking about Société Générale and how they managed to mislay EUR 4.7bn. The current public line is that a rogue trader threw it all away on the market, but some of the more canny people in the business don't buy it.
One superficial question is how to avoid this dilemma.
That's a question for financial cryptographers, I say. If we imagine a hard payment system is used for the various derivative trades, we would have to model the trades as two or more back-to-back payments. As they are positions that have to be made then unwound, or cancelled off against each other, this means that each trader is an issuer of subsidiary instruments that are combined into a package that simulates the intent of the trade (theoretical market specialists will recall the zero-coupon bond concept as the basic building block).
So, Monsieur Kerviel would have to issue his part in the trades, and match them to the issued instruments of his counterparty (whose name we would dearly love to know!). The two issued instruments can be made dependent on each other, an implementation detail we can gloss over today.
Which brings us to the first part: fraudulent trades to cover other trades would not be possible with proper FC because it is not possible to forge the counterparty's position under triple-entry systems (that being the special magic of triple-entry).
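To see why the counterparty's position cannot be forged, here is a toy sketch of a jointly-signed trade record; the structure and names are invented for illustration and are not Ricardo's, or anyone else's, actual receipt format:

    # Toy sketch: a trade record is only valid if payer, payee and issuer have
    # all signed the same bytes, so neither trader can fabricate the other's
    # position. Uses ed25519 from the Python 'cryptography' package.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def valid_receipt(payload, sigs, pubkeys):
        try:
            for pub, sig in zip(pubkeys, sigs):
                pub.verify(sig, payload)
            return True
        except InvalidSignature:
            return False

    payer, payee, issuer = (Ed25519PrivateKey.generate() for _ in range(3))
    pubs = [k.public_key() for k in (payer, payee, issuer)]
    trade = b"trader -> counterparty: short 1000 index futures"

    good = [k.sign(trade) for k in (payer, payee, issuer)]
    print(valid_receipt(trade, good, pubs))        # True: all three agreed

    # A rogue receipt missing the counterparty's real signature cannot verify:
    forged = [payer.sign(trade), payer.sign(trade), issuer.sign(trade)]
    print(valid_receipt(trade, forged, pubs))      # False

The same signed record sits on all three sets of books, which is the whole of the triple-entry trick.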
Higher layer issues are harder, because they are less core rights issues and more human constructs, so they aren't as yet as amenable to cryptographic techniques, but we can use higher layer governance tricks. For example, the size of the position, the alarms and limits, and the creation of accounts (secret or bogus customers). The backoffice people can see into the systems because it is they who manage the issuance servers (ok, that's a presumption). Given the ability to tie down every transaction, we are simply left with the difficult job of correctly analysing every deviation. But, it is at least easier because a whole class of errors is removed.
Which brings us to the underlying FC question: why not? It was apparent through history, and there are now enough cases to form a pattern, that the reason for the failure of FC was fundamentally that the banks did not want it. If anything, they'd rather you dropped dead on the spot than suggest something that might improve their lives.
Which leads us to the very troubling question of why banks hate to do it properly. There are many answers, all speculation, and as far as I know, nobody has done research into why banks do not employ the stuff they should if they responded to events as other markets do. Here are some speculative suggestions:
Every one of those reasons is a completely standard malaise that strikes every company in every industry. The difference is competition: in every other industry, the competition would eat up the poorer players, but in banking, it keeps the poorer players alive. So the #1 fundamental reason why rogue traders will continue to eat up banks, one by one, is the lack of competitive pressure to do any better.
And of course, all these issues feed into each other. Given all that, it is hard to see how FC will ever make a difference from inside; the only way is from outside, to the extent that challengers find an end-run around the rules for non-competition in banking.
What then would we propose to the bank to solve the SocGen dilemma as a short term hack? There are two possibilities that might be explored.
This works because it is an independent and financially motivated check. It also helps to start the inevitable shift of moving parts of regulation from the current broken 20th century structure over to a free market governance mechanism. That is, it is aligned with the eventual future economic structure.
So outsource the whole lot of risk governance to specialists in a separate board-level structure. This structure should have visibility of all accounts, all SPEs, all positions, and should also be the main conduit to the regulator. It has to be equal to the business board, because it has to have the power to make it happen.
The existing board maintains the business side: HR, markets, products, etc. This would nicely divide into two the "special" area of banking from the "general" area of business. Then, when things go wrong, it is much easier to identify who to sack, which improves the feedback to the point where it can be useful. It also puts into more clear focus the specialness of banks, and their packaged franchises, regulatory costs and other things.
Why or how these work is beyond scope of a blog. Indeed, whether they work is a difficult experiment to run, and given the Competition finding above, it might be that we do all this, and still fail. But, I'd still suggest them, as both those ideas can be rolled out in a year, and the current central banking structure has at least another decade to run, and probably two, before the penny drops, and people realise that the regulation is the problem, not the solution.
(PS: Jim invented the second one!)
The UK data breach a month or two back counted another victim: one Jeremy Clarkson. The celebrated British "motormouth" thought that nobody should really worry about the loss of the disks, because all the data is widely available anyway. To stress this to the island of nervous nellies, he posted his bank details in the newspaper.
Back in November, the Government lost two computer discs containing half the population's bank details. Everyone worked themselves into a right old lather about the mistake but I argued we should all calm down because the details in question are to be found on every cheque we hand out every day to every Tom, Dick and cash and carry.
Unfortunately, some erstwhile scammer decided to take him to task at it and signed him up for a contribution to a good charity. (Well, I suppose it's good, all charities and non-profits are good, right?) Now he writes:
I opened my bank statement this morning to find out that someone has set up a direct debit which automatically takes £500 from my account. I was wrong and I have been punished for my mistake.
Contrary to what I said at the time, we must go after the idiots who lost the discs and stick cocktail sticks in their eyes until they beg for mercy.
What can we conclude from this data point of one victim? Lots, as it happens.
And, yes, he was wrong to stick his neck out and say the truth.
b. because he asked them not to reverse the transaction, as now he gets an opportunity to write another column. Cheap press.
Hat-tip to JP! And, I've just noticed DigitalMoney's contribution for another take!
Capabilities is one of the few bright spots in theoretical computing. In Internet software terms, caps can be simply implemented as nymous public/private keys (that is, ones without all that PKI baggage). The long and deep tradition of capabilities goes largely unchallenged in the theory literature.
It is heavily challenged in the practical world in two respects: the (human) language is opaque and the ideas are simply not widely deployed. Consider this personal example: I spent many years trying to figure out what caps really was, only to eventually discover that it was what I had been doing all along with nymous keys. The same thing happens to most senior FC architects and system designs, as they end up re-inventing caps without knowing it: SSH, Skype, Lynn's x9.59, and Hushmail have all travelled the same path as Gary Howland's nymous design. There's no patent on this stuff, but maybe there should have been, to knock over the ivory tower.
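For a flavour of what caps-as-nymous-keys means in practice, here is a minimal sketch; the class names and plumbing are invented for illustration and are not taken from any of the systems just listed:

    # Minimal sketch of "a capability is a nymous key": whoever can sign with
    # the key holds the authority. No identity, no CA, no PKI baggage.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)

    class Issuer:
        def __init__(self):
            self.grants = {}                       # raw public key bytes -> resource

        def grant(self, pub_bytes, resource):
            self.grants[pub_bytes] = resource      # handing out the capability

        def invoke(self, pub_bytes, request, signature):
            pub = Ed25519PublicKey.from_public_bytes(pub_bytes)
            try:
                pub.verify(signature, request)     # the signature IS the authorisation
            except InvalidSignature:
                return "denied"
            return f"ok: {request.decode()} on {self.grants.get(pub_bytes, 'nothing')}"

    # Alice holds the capability simply by holding the private key.
    alice = Ed25519PrivateKey.generate()
    alice_pub = alice.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    issuer = Issuer()
    issuer.grant(alice_pub, "account-42")
    request = b"debit 10"
    print(issuer.invoke(alice_pub, request, alice.sign(request)))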
These real world examples only head in the direction of caps, as they work with existing tools, whereas capabilities is a top-down discipline. Now Ben Laurie has announced that Google has a project to create a Caps approach for Javascript (hat tip to JPM, JQ, EC and RAH :).
Rather... than modify Javascript, we restrict it to a large subset. This means that a Caja program will run without modification on a standard Javascript interpreter - though it won’t be secure, of course! When it is compiled then, like CaPerl, the result is standard Javascript that enforces capability security. What does this mean? It means that Web apps can embed untrusted third party code without concern that it might compromise either the application’s or the user’s security.
Caja also means box in Spanish, which is a nice cross-over, as capabilities is like the old sandbox idea of the Java applet days. What does this mean, other than the above?
We could also point to the Microsoft project Cardspace (formerly InfoCard) and claim parallels, as that, at a simplistic level, implements a box for caps as well. Also, the HP research labs have a constellation of caps fans, but it is not clear to me what application channel exists for their work.
There are then at least two of the major financed developers pursuing a path guided by theoretical work in secure programming.
What's up with Sun, Mozilla, Apple, and the application houses, you may well ask! Well, I would say that there is a big difference in views of security. The first-mentioned group, a few tiny isolated teams within behemoths, are pursuing designs, architectures and engineering that is guided by the best of our current knowledge. Practically everyone else believes that security is about fixing bugs after pushing out the latest competitive feature (something I myself promote from time to time).
Take Sun (for example, as today's whipping boy, rather than Apple or Mozo or the rest of Microsoft, etc).
They fix all their security bugs, and we are all very grateful for that. However, their overall Java suite becomes more complex as time goes on, and has had no or few changes in the direction of security. Specifically, they've ignored the rather pointed help provided by the caps school (c.f., E, etc). You can see this in J2EE, which is a melting pot of packages and add-ons, so any security it achieves is limited to what we might call bank-level or medium-grade security: only secure if everything else is secure. (Still no IPC on the platform? Crypto still out of control of the application?)
Which all creates 3 views:
The Internet as a whole is stalled at the 2nd level, and everyone is madly busy fixing security bugs and deploying tools with the word "security" in them. Breaking through the glass ceiling and getting up to high security requires deep changes, and any sign of life in that direction is welcome. Well done Google.
Some good news: after a long hard decade, OpenPGP is now on standards track. That means that it is a standard, more or less, for the rest of us, and the IETF process will make it a "full standard" according to their own process in due course.
RFC4880 is now OpenPGP and OpenPGP is now RFC4880. Hooray!
Which finally frees up the OpenPGP community to think what to do next?
Where do we go from here? That's an important question because OpenPGP provides an important base for a lot of security work, and a lot of security thinking, most of which is good and solid. The OpenPGP web of trust is one of the seminal security ideas, and is used by many products (referenced or not).
However, it is fair to say that OpenPGP is now out of date. The knowledge was good around the early 1990s, and is ready for an update. (I should point out that this is not as embarrassing as it sounds, as one competitor, PKI/x.509, is about 30 years out of date, deriving its model from pre-Internet telco times, and there is no recognition in that community of even the concept of being out of date.)
Rising to the challenge, the OpenPGP working group are thinking in terms of remodelling the layers such that there is a core/base component, and on top of that, a major profile, or suite, of algorithms. This will be a long debate about serious software engineering, security and architecture, and it will be the most fun that a software architect can have for the next ten years. In fact, it will be so much fun that it's time to roll out my view, being hypothesis number one:
H1: The One True Cipher Suite
In cryptoplumbing, the gravest choices are apparently on the nature of the cipher suite. To include the latest fad algo or not? Instead, I offer you a simple solution. Don't.
There is one cipher suite, and it is numbered Number 1. Ciphersuite #1 is always negotiated as Number 1 in the very first message. It is your choice, your ultimate choice, and your destiny. Pick well.
If your users are nice to you, promise them Number 2 in two years. If they are not, don't. Either way, do not deliver any more cipher suites for at least 7 years, one for each hypothesis.
And then it all went to pot... We see this with PGP. Version 2 was quite simple and therefore stable -- there was RSA, IDEA, MD5, and some weird padding scheme. That was it. Compatibility arguments were few and far between. Grumbles were limited to the padding scheme and a few other quirks.
Then came Versions 3-8, and it could be said that the explosion of options and features and variants caused more incompatibility than any standards committee could have done on its own.
Avoid the Champagne Hangover
Do your homework up front.
Pick a good suite of ciphers, ones that are Pareto-Secure, and do your best to make the combination strong. Document the shortfalls and do not worry about them after that. Cut off any idle fingers that can't keep from tweaking. Do not permit people to sell you on the marginal merits of some crazy public key variant or some experimental MAC thing that a cryptographer knocked up over a weekend or some minor foible that allows an attacker to learn your aunty's birth date after asking a million times.
Resist the temptation. Stick with The One.
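As a toy sketch of how little is left to negotiate once there is only one suite, the whole "handshake" reduces to something like this; the suite contents and wire format are invented for illustration:

    # Toy sketch of H1: the client's first message names suite #1, and the
    # server refuses anything else. There is no negotiation to be had.
    SUITE_1 = {"kex": "DH", "cipher": "AES-128-CBC", "mac": "HMAC-SHA-1"}

    def client_hello():
        return b"SUITE 1"                          # always, in the very first message

    def server_accept(hello):
        if hello != b"SUITE 1":
            raise ValueError("unknown suite: come back in 7 years")
        return SUITE_1

    print(server_accept(client_hello()))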
Rasika pointed to a serious attempt to research false passports for all of the EU's countries by Panorama, a British soft-investigation TV series:
I am attending an informal seminar led by a passport dealer, along with six hopefuls who are living illegally in the UK. We are told that all our problems can be solved by a "high quality" Czech passport. It will take just two weeks to obtain and cost a mere £1,500.
This may already sound surreal enough, but it was just the beginning of my journey across Europe in search of fake passports from all 25 EU member states.
What's lacking here is hard costs of the passports she actually did obtain. That's why it is soft investigation.
I am directed to somebody who introduces me to somebody else, and finally I end up face to face with two innocent-looking pensioners. They say that for just 300 euros they can get me a Polish passport in less than 24 hours.
This deal falls through, but another dealer has delivered Polish and Lithuanian passports, complete with my own photos and two different identities.
But the breadth of the success makes it worthy of reporting:
It took me just five months to get 20 fake EU passports. Some of them were of the very best quality and were unlikely to be spotted as fakes by even the most stringent of border controls.
This is probably a good time to remind FC readers that you can find a long running series on the cost of false identity, taken from news articles that specify actual costs, here in the blog. Also note that on the Panorama show there is a video segment, but it is in a format that I cannot read for some reason.
Update: in one of the accompanying articles:
They ranged in price from just #250 to more than #1,500. Some were provided within several days, while others took weeks.
(Currency is unclear, it was shown as #.) Also, from one of the accompanying articles:
Police believe they were on the brink of producing 12,000 fake EU passports - potentially earning them £12m, when they were arrested in November 2005. .... Det Insp Nick Downing, who led the investigation, said the passports could have sold for up to £1,000 each.
Same as FC.
I didn't spot it when Peter Gutmann called it the world's biggest supercomputer (I thought he was talking about a game or something ...). Now John Robb pointed to Bruce Schneier who has just published a summary. Here's my paraphrasing:
Bruce Schneier reports that the anti-virus companies are pretty much powerless, and runs through a series of possible defences. I can think of a few too, and I'm sure you can as well. No doubt the world's security experts (cough) will spend a lot of time on this question.
But, step back. Look at the big picture. We've seen all these things before. Those serious architects in our world (you know who you are) have even built these systems before.
But: we've never seen the combination of these tactics in an attack.
This speaks to a new level of sophistication in the enemy. In the past, all the elements were basic. Better than script kiddie, but in that area. What we had was industrialisation of the phishing industry, a few years back, which spoke to an increasing level of capital and management involved.
Now we have some serious architects involved. This is in line with the great successes of computer science: Unix, the Internet, Skype all achieved this level of sophistication in engineering, with real results. I tried with Ricardo, Lynn&Anne tried with x9.59. Others as well, like James and the Digicash crew. Mojo, Bittorrent and the p2p crowd tried it too.
So we have a new result: the enemy now has architects as good as our best.
As a side-issue, well predicted, we can also see the efforts of the less-well architected groups shown for what they are. Takedown is the best strategy that the security-shy banks have against phishing, and that's pretty much a dead duck against the above enemy. (Banks with security goals have moved to SMS authentication of transactions, sometimes known as two channel, and that will still work.)
But that's a mere throwaway for the users. Back to the atomic discussion of architecture. This is an awesome result. In warfare, one of the dictums is, "know yourself and win half your battles. Know your enemy and win 99 of 100 battles."
For the first time in Internet history, we now have a situation where the enemy knows us, and is equal to our best. Plus, he's got the capital and infrastructure to build the best tools against us.
Where are we? If the takedown philosophy is any good data point, we might know ourselves but we know little about the enemy. But, even if we know ourselves, we don't know our weaknesses, and our strengths are useless.
What's to be done? Bruce Schneier said:
Redesigning the Microsoft Windows operating system would work, but that's ridiculous to even suggest.
As I suggested in last year's roundup, we were approaching this decision. Start re-writing, Microsoft. For sake of fairness, I'd expect that Linux and Apple will have a smaller version of the same problem, as the 1970s design of Unix is also a bit out-dated for this job.
Over on Second Life, they (LL) are trying to solve a problem by providing an outsourced service on identity verification with a company called Integrity. This post puts it in context (warning, it's quite long, longer even than an FC post!):
So now we understand better what this is all about. In effect, Integrity does not really provide “just a verification service”. Their core business is actually far more interesting: they buy LL’s liability in case LL gets a lawsuit for letting minors see “inappropriate content”. Even more interesting is that LL does not need to worry about what “inappropriate content” means: this is a cultural question, not a philosophic one, but LL does not need to care. Whatever lawsuits will come LL’s way, they will simply get Integrity to pay for them.
Put into other words: Integrity is an insurance company. In this day and age where parents basically don’t care what their children are doing, and blame the State for not taking care of a “children-friendly environment” by filing lawsuits against “the big bad companies who display terrible content”, a new business opportunity has arisen: selling insurance against the (albeit remote) possibility that you get a lawsuit for displaying “inappropriate content”.
(Shorter version maybe here.)
Over on Perilocity, which is a blog about the insurance world, John S. Quarterman points at the rise of insurance to cover identity theft from a company called LifeLock.
I have to give them credit for honesty, though: LifeLock admits right out that the main four preventive things they do you could do for yourself. Beyond that, the main substance they seem to offer is essentially an insurance package:
"If your Identity is stolen while you are our client, we’re going to do whatever it takes to recover your good name. If you need lawyers, we’re going to hire the best we can find. If you need investigators, accountants, case managers, whatever, they’re yours. If you lose money as a result of the theft, we’re going to give it back to you."For $110/year or $10/month, is such an insurance policy overpriced, underpriced, or what?
It's possibly easier for the second provider to be transparent and open. After all, they are selling insurance for stuff that is a validated disaster. The first provider is trying to cover a problem which is not yet a disaster, so there is a sort of nervousness about baring all.
How viable is this model? The first thing would be to ask: can't we fix the underlying problem? For identity theft, apparently not, Americans want their identity system because it gives them their credit system, and there aren't too many Americans out there that would give up the right to drive their latest SUV out of the forecourt.
On the other hand, a potential liability issue within a game would seem to be something that could be solved. After all, the game operator has all the control, and all the players are within their reach. Tonight's pop-quiz: Any suggestions on how to solve the potential for large/class-action suits circling around dodgy characters and identity?
(Manual trackbacks: Perilocity suggests we need identity insurance in the form of governments taking the problem more seriously and dealing with identity thefts more proactively when they occur.)
I've been using S/MIME for around a year now for encrypted comms, and I can report that the overall process is easier than OpenPGP. The reasons are twofold:
Sadly, S/MIME sucks. I reported previously on Thunderbird's most-welcome improvements to its UI (from unworkable to woeful) and also its ability to encrypt-not-sign, which catapulted the tool into legal sensibility. Recall, we don't know what a signature means, and the lawyers say "don't sign anything you don't read" ... I'd defy you to read an S/MIME signed email.
The problem that then occurs is that the original S/MIME designers (early 1990s?) used an unfortunate trick which is now revealed as truly broken: the keys are distributable with signing.
Ooops. Worse, the keys are only distributable with signing as far as I can see, which uncovers the drastic failings of tools designed by cryptographers and not software engineers. This sort of failure derives from such claims as, you must sign everything "to be trusted" ... which we disposed of above.
So, as signing is turned off, we now need to distribute the keys. This occurs by a 2-part protocol that works like this:
With various error variations built in. OK, our first communal thought was that this would be workable but it turns out not to scale.
Consider that we change email clients every 6 months or so, and there appears no way to export your key collection. Consider that we use other clients, and we go on holidays every 3 months (or vacations every 12 months), and we lose our laptops or our clients trash our repositories. Some of us even care about cryptographic sanitation, and insist on locking our private keys in our secured laptop in the home vault with guards outside. Which means we can't read a thing from our work account.
Real work is done with a conspiracy of more than 2. It turns out that with around 6 people in the ring, someone is AFK ("away from keys"), all the time. So, someone cannot read and/or write. This either means that some are tempted to write in clear text (shame!), or we are all running around Alice-Bobbing each other. All the time.
Now, of course, we could simply turn on signing. This requires (a) a definition of signing, (b) written somewhere like a CPS, (c) which is approved and sustainable in agreements, (d) advised to the users who receive different signature meanings, and (e) acceptance of all the preceding points as meaningful. These are very tough barriers, so don't hold your breath, if we are talking about emails that actually mean something (kid sister, knock yourself out...).
Turning on the signing also doesn't solve the core problem of key management, it just smothers it somewhat by distributing the keys every chance we get. It still doesn't solve the problem of how to get the keys when you lose your repository, as you are then locked out of posting out until you have everyone's keys. In every conspiracy, there's always one important person who's notoriously shy of being called Alice.
This exposes the core weakness of key management. Public Key cryptography is an engineering concept of 2 people, and beyond that it scales badly. S/MIME's digsig-distro is just a hack, and something like OpenPGP's key server mechanism would be far more sensible, far more scaleable. However, I wonder if we can improve on even OpenPGP, as the mere appearance of a centralised server reduces robustness by definition (TTPs, CVP, central points of attack, etc).
If an email can be used to send the key (signed), then why can't an email be used to request a key? Imagine that we added an email convention, a little like those old maillist conventions, that did this:
Subject: GETSMIME fc@example.com
and send it off. A mailclient like Thunderbird could simply reply by forwarding the key. (How this is done is an exercise for the reader. If you can't think of 3 ways in the next 3 minutes, you need more exercise.)
Now, the interesting thing about that is that if Tbird could respond to the GETSMIME, we wouldn't need key servers. That is, Alice would simply mail Bob with "GETSMIME Carol@example.com" and Bob's client could respond, perhaps even without asking because Bob already knows Alice. Swarm key distro, in other words. Or, Dave could be a key server that just sits there waiting for the requests, so we've got a key server with no change to the code base.
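Here is a quick sketch of what such a responder might look like; the certificate store layout, the addresses and the local mail relay are my assumptions, and mailing the cert back as plain text is only one of the 3 ways left as the reader's exercise:

    # Sketch of a GETSMIME responder: parse the subject line, look the address
    # up in a local certificate store, and mail the certificate back.
    import re
    import smtplib
    from email.message import EmailMessage
    from pathlib import Path

    CERT_STORE = Path.home() / "smime-certs"       # e.g. smime-certs/fc@example.com.pem

    def handle_inbound(subject, sender):
        m = re.match(r"GETSMIME\s+(\S+@\S+)", subject.strip())
        if not m:
            return                                  # not a key request, ignore
        wanted = m.group(1)
        pem = CERT_STORE / f"{wanted}.pem"
        if not pem.exists():
            return                                  # we don't hold that key either
        reply = EmailMessage()
        reply["Subject"] = f"SMIME cert for {wanted}"
        reply["To"] = sender
        reply["From"] = "keyswarm@example.com"
        reply.set_content(pem.read_text())
        with smtplib.SMTP("localhost") as smtp:     # assumes a local mail relay
            smtp.send_message(reply)

    handle_inbound("GETSMIME fc@example.com", "alice@example.org")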
In closing, I'll just remind that the opinion of this blog is that the real solution to the almost infinite suckiness of S/MIME is that the clients should generate the keys opportunistically, and enable use of crypto as and when possible.
This solution will never be ideal, and that's because we have to deal with email's legacy. But the goal with email is to get to some crypto, some of the time, for some of the users. Our current showing is almost no crypto, almost none of the time, for almost none of the users. Pretty dire results, and nothing a software engineer would ever admit to.
Jonath over at Mozilla takes up the flame and publishes lots of stats on the current state of SSL, phishing and other defences. Headline issues:
I hope he keeps it up, as it will save this blog from having done it for many years :) The connection between SSL and phishing can't be overstressed, and it's welcome to see Mozilla take up that case. (Did I forget to mention TLS/SNI in Apache and Microsoft? Shame on me....)
Jonath concludes with this odd remark:
If I may be permitted one iota of conclusion-drawing from this otherwise narrative-free post, I would submit this: our users, though they may be confused, have an almost shocking confidence in their browsers. We owe it to them to maintain and improve upon that, but we should take some solace from the fact that the sites which play fast and loose with security, not the browsers that act as messengers of that fact, really are the ones that catch the blame.
You, like me, may have read that too quickly, and thought that he suggests that the web sites are to blame, with their expired certs, fast and loose security, etc.
But, he didn't say that, he simply said those are the ones that *are* blamed. And that's true, there are lots and lots of warnings out there like campaigns to drop SSL v2 and stop sites doing phishing training and other things ... The sites certainly catch the blame, that's definitely true.
But, who really *deserves* the blame? According to the last table in Jonath's post, the users don't really blame the site as much as might be expected: 24%. More are unsure and thus wise, I say: 32%. And yet more imagine an actual attack taking place: 40%.
That leaves 4% who suspect a "glitch" in the browser itself. Surely one lonely little group there, I wonder if they misunderstood what a "glitch" is... What is a "glitch," anyway, and how did it get into their browsers?
From the where did you read it first? department here comes an interesting claim:
Beyond obvious tips like activating firewalls, shutting computers down when not in use, and exercising caution when downloading software or using public computers, Consumer Reports offered one safety tip that's sure to inflame online passions: Consider a Mac.
"Although Mac owners face the same problems with spam and phishing as Windows users, they have far less to fear from viruses and spyware," said Consumer Reports.
Spot the difference between us and them? Consumer Reports is not in the computing industry. What that says about who actually gives helpful security advice will haunt computing psychologists for years to come.
For amusement, count how many security experts will pounce on the ready excuse:
"Because Macs are less prevalent than Windows-based machines, online criminals get less of a return on their investment when targeting them."Of course if that's true, it becomes less so with every Mac bought.
Can you say "monoculture!?"
U.S. consumers lost $7 billion over the last two years to viruses, spyware, and phishing schemes, according to Consumer Report's latest State of the Net survey. The survey, based on a national sample of 2,000 U.S. households with Internet access, suggests that consumers face a 25% chance of being victimized online, which represents a slight decline from last year.
Computer virus infections, reported by 38% of respondents, held steady since last year, which Consumer Reports considers to be a positive sign given the increasing sophistication of virus attacks. Thirty-four percent of respondents' computers succumbed to spyware in the past six months. While this represents a slight decline, according to Consumer Reports, the odds of a spyware infection remain 1 in 3 and the odds of suffering serious damage from spyware are 1 in 11.
Phishing attacks remained flat, duping some 8% of survey respondents at a median cost of $200 per incident. And 650,000 consumers paid for a product or service advertised through spam in the month before the survey, thereby seeding next year's spam crop.
Perversely, insecurity means money for computer makers: Computer viruses and spyware turn out to be significant drivers of computer sales. According to the study, virus infections drove about 1.8 million households to replace their computers over the past two years. And over the past six months, spyware infestations prompted about 850,000 households to replace their computers.
Dani reports that WebMoney is now doing a DGC, or gold-based currency.
This is big news for the gold community, as there is currently (I am told) a resurgence of interest in new gold issuers, perhaps on the expectation that e-gold does not survive the meatgrinders, also known as the Federal prosecutors in Washington D.C. (Perhaps as part of their defence strategy, e-gold now run a blog!)
What's different about WebMoney? They had financial cryptography thinkers in at the beginning, it seems, and they are successful. They know how to do this stuff. They did it, and they maintained their innovation base. They are big. They do multiple countries. They quite possibly dwarf any other gold operator in overall size, already. I could run through the checklist for a while, and it looks pretty good. (oh, and they do a downloadable client which does some sort of facsimile of blinded transactions, as presented at EFCE!)
Expect them to take off where e-gold left off, with the exception of the Ponzi based traffic. Big strategic question: will they go green or red on Ponzis?
In the PKI ("public key infrastructure") world, there is a written practice that the user, sometimes known as the relying party, should read the CPS ("certificate practice statement") and other documents before being qualified to rely on a certificate. This would qualify as industry practice and is sensible, at least on the face of it, in that the CA ("certificate authority") can not divine what you are going to use the cert for. Ergo, the logic goes, as relying party, you have to do some of the work yourself.
However, in the "PKI-lite" that is in place in x.509 browser and email world, this model has been simplified. Obviously, all of us who've come into contact with user software and the average user know that the notion of a user reading a CPS is so ludicrous that it's hardly worth discussing. Of course, we need another answer.
There are many suggestions, but the one that is in effect is that the browser, or more precisely, the software vendor, is the one who reads the CPS, on behalf of the users. One way to look at this is that this makes the browser the relying party by proxy, as, in making its assessment, it reads the CPS, measures it against own needs, and relies on audits and other issues. (By way of disclosure, I audit a CA.)
Unfortunately, it cannot be the relying party because it simply isn't a party to any transaction. The user remains the relying party, but isn't encouraged to do any of the relying and reading stuff that was mentioned above. That is, the user is encouraged to rely on the certificate, the vendor, the CA and the counter-party, all on an article of blind faith in these people and processes she has never heard of.
This dilemma is better structured as multi-tiered authorities: The CA is the authority on the certificates and their "owners." The software vendor is the authority on the CAs, by means of their CPSs, audits, etc.
Such a re-drawing of the map has fairly dramatic consequences for the PKI. The widespread perception is that the CA is highly liable -- because that's the "trust" product that they sell -- and the browser is not. In principle, and in contract law, it might be the other way around, as the browser has an agreement with the user, and the CA has not. Where the perception might find comfort is in the doctrine of duty of care but that will generally limit the CA's liability to gross negligence. Either way, the last word on this complicated arrangement might need real lawyers and eventually real courts.
It has always been somewhat controversial to suggest that the browser is in control, and therefore may need to consider risks, liabilities and obligations. But now, Paul Hoffman has published a rather serious piece of evidence that Microsoft, for its part, has taken on the R/L/O more seriously than thought:
If a user running Windows XP SP2 in its default configuration removes a root certificate that is one that Microsoft trusts, Windows will re-install that root certificate and again start to trust certificates that come from that root without alerting the user. This re-installation and renewed trust happens as soon as the user visits a SSL-based web site using Internet Explorer or any other web browser that uses the Cryptographic Application Programming Interface (CAPI) built-in to Windows; it will also happen when the user receives secure email using Outlook, Microsoft Mail, or another mail program that uses CAPI, as long as that mail is signed by a certificate that is based on that root certificate.
In effect, the user is not permitted by the software to make choices of reliance. To complete the picture, Paul was careful to mention the variations (Thunderbird and Firefox are not affected, there is an SP2 feature to disable all updates of roots, Vista has the same problem but no override...).
This supports the claim, as I suggested above, that the effective state of play -- best practices if you'll pardon the unfortunate term -- is that the software vendor is the uber-CA.
If we accept this conclusion, then we could conceivably get on and improve security within these limitations, that the user does little or nothing, and the software manufacturer decides everything they possibly can. OK, what's wrong with that? From an architectural position, nothing, especially if it is built-in up-front. Indeed this is one of the core design decisions of the best-of-security-breed applications (x9.59, Skype, SSH, Ricardo, etc. Feel free to suggest any others to the list, it's short and it's lonely.)
The problem lies in that software control is not stated up-front, and it is indeed denied by large swathes of securityland. I'd not be surprised if Microsoft themselves denied it (and maybe their lawyers would be right, given the rather traumatic link between phishing and mitm-proof-certificates...). The PKI state-of-denial leaves us in a little bit of a mess:
To extract ourselves from this mess will probably take some brave steps. I think I applaud Microsoft's practice, in that at least this makes that little part clearer.
They are in control, they are (I suggest) a party with risks, liabilities and obligations, so they should get on and make the product as secure as possible, as their primary goal. This includes throwing out bits of PKI best practices that we know to be worst practices.
They are not the only ones. Mozilla Foundation in recent years has completed a ground-breaking project to define their own CA processes, and this evidences great care and attention, especially in the ascension of the CA to their root list. What does this show other than they are a party of much power, exercising their duty of care?
Like Microsoft, they (only) care about their users, so they should (only) consider their users, in their security choices.
Will the CAs follow suit and create a simpler, more aligned product? Possibly not, unless pushed. As a personal remark, the criteria I use in auditing are indeed pushing dramatically in the direction of better care of risks, liabilities and obligations. The work to go there is not easy nor trivial, so it is no wonder that no CA wants to go there (and that may be an answer to those asking why it is taking so long).
Even if every CA stood forth and laid out a clear risks, liabilities and obligations statement before their relying parties, more would still need to be done. Until the uber-CAs also get on board publicly with the liability shift and clearly work with the ramifications of it, we're still likely locked in the old PKI-lite paper regime or shell game that nobody ever used nor believed in.
For this reason, Microsoft is to be encouraged to make decisions that help the user. We may not like this decision, or every decision, but they are the party that should make them. Old models be damned, as the users surely are in today's insecurity, thanks in part to those very same models.
Reading this post from Robert Watson:
I presented, “Exploiting Concurrency Vulnerabilities in System Call Wrappers,” a paper on the topic of compromising system call interposition-based protection systems, such as COTS virus scanners, OpenBSD and NetBSD’s Systrace, the TIS Generic Software Wrappers Toolkit (GSWTK), and CerbNG. The key insight here is that the historic assumption of “atomicity” of system calls is fallacious, and that on both uniprocessor and multiprocessing systems, it is trivial to construct a race between system call wrappers and malicious user processes to bypass protections. ... The moral, for those unwilling to read the paper, is that system call wrappers are a bad idea, unless of course, you’re willing to rewrite the OS to be message-passing.
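For those who won't read the paper either, the shape of the race is easy to sketch. Here's a toy illustration in Python (threads standing in for the user/kernel boundary; this is not Watson's exploit code, just the check-then-use gap he describes):

```python
# Toy sketch of a check-versus-use race in a "system call wrapper".
# Illustrative only: threads stand in for the real user/kernel boundary.
import threading, time

syscall_arg = {"path": "/tmp/harmless"}              # argument shared with "user space"

def wrapper_then_kernel():
    if syscall_arg["path"].startswith("/tmp/"):      # the wrapper's security check
        time.sleep(0.01)                             # the non-atomic gap
        print("kernel opens:", syscall_arg["path"])  # the kernel's actual use

def malicious_user_thread():
    time.sleep(0.005)
    syscall_arg["path"] = "/etc/shadow"              # swapped after the check, before the use

t1 = threading.Thread(target=wrapper_then_kernel)
t2 = threading.Thread(target=malicious_user_thread)
t1.start(); t2.start(); t1.join(); t2.join()         # usually prints: kernel opens: /etc/shadow
```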
And it sparked a thought that systems can only be secure if message-based. Maybe it's a principle, maybe it's a hypothesis, or maybe it's a law.
(My underlining.) To put this in context, if a system was built using TCP/IP, it's not built with security as its number one, overriding goal. That might be surprising to some, as pretty much all systems are built that way; which is the point, such an aggressive statement partly explains where we are, and partly explains how tricky this stuff is.
(To name names: SSH is built that way. Sorry, guys. Skype is a hybrid, with both message-passing and connections. Whether it is internally message-passing or connection-oriented I don't know, but I can speculate from my experiences with Ricardo. That latter started out message-passing over connection-oriented, and required complete client-side rewrites to remove the poison. AADS talks as if it is message-passing, and that might be because it is from the payments world, where there is much better understanding of these things.)
Back to theory. We know from the coordination problem, or the Two Generals problem, that protocols cannot be reliable about what they have sent to the other side. We also know from cryptography that we can create a reliable message on the receive-side (by means of an integrated digsig). We also know that so-called reliable connections are not.
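Here's a minimal sketch of that receive-side reliability (it assumes the Python 'cryptography' package; the payload and framing are invented for illustration). The verdict depends only on the message that arrived, not on anything the connection promised:

```python
# Minimal sketch: reliability lives in the signed message, not in the connection.
# Assumes the 'cryptography' package; payload and framing are invented.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

payload = b"PAY 100.00 grams to account 42, ref 2007-001"
wire = payload + signing_key.sign(payload)   # Ed25519 signatures are always 64 bytes

# Receive side: accept or reject on the message alone, whatever the channel did.
received, signature = wire[:-64], wire[-64:]
try:
    verify_key.verify(signature, received)
    print("reliable, authorised message:", received.decode())
except InvalidSignature:
    print("rejected: not a reliable message")
```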
Also, above, read that last underlined sentence again. Operating systems guys have known for the longest time that the cleanest OS design was message passing (which they didn't push because of the speed issues):
Concurrency issues have been discussed before in computer security, especially relating to races between applications when accessing /tmp, unexpected signal interruption of socket operations, and distributed systems races, but this paper starts to explore the far more sordid area of OS kernel concurrency and security. Given that even notebook computers are multiprocessor these days, emphasizing the importance of correct synchronization and reasoning about high concurrency is critical to thinking about security correctly. As someone with strong interests in both OS parallelism and security, the parallels (no pun intended) seem obvious: in both cases, the details really matter, and it requires thinking about a proverbial Cartesian Evil Genius. Anyone who’s done serious work with concurrent systems knows that they are actively malicious, so a good alignment for the infamous malicious attacker in security research!
But none of that says what I asserted above: that if security is your goal, you must choose message-passing.
Is this intuition? Is there a theory out there? Where are we on the doh!-to-gosh scale?
A curious debate erupted over whether there is ROI on security investments. Normally sane Chris Walsh points to normally sensible Richard Bejtlich, who seems to think that because a security product saves money and cannot make money on its own, it is not an investment, and therefore there can be no ROI.
The problem the "return on security investment" (ROSI) crowd has is they equate savings with return. The key principle to understand is that wealth preservation (saving) is not the same as wealth creation (return).
If you use your fingers to count, you will have problems. The issue here is a simple one of negative numbers and the distinctions between absolute and relative calculations.
Here's how it works. Invent Widget. Widget generates X in revenue, per unit, which includes some small delta x in shrinkage or loss. Call it 10% of $100 so we are at an X of $90 of revenues.
Now, imagine a security tool that reduces the shrinkage by half. X' improves by $5. As X' of $95 is an improvement in your basic position of X at $90, this can then be calculated as an ROI (however that is done).
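Or, in toy code (the tool's cost is my own invented figure, just to make the comparison complete):

```python
# The worked example above, in numbers; the tool cost is an invented figure.
ideal_maximum = 100.0                         # revenue per unit with zero shrinkage
shrinkage = 0.10                              # 10% loss

x = ideal_maximum * (1 - shrinkage)           # 90.0 -- the basic position
x_prime = ideal_maximum * (1 - shrinkage / 2) # 95.0 -- shrinkage halved by the tool
tool_cost = 2.0                               # hypothetical cost of the tool, per unit

gain = x_prime - x                            # 5.0 improvement over the basic position
roi = (gain - tool_cost) / tool_cost          # one way of calculating it, "however that is done"
print(f"X = {x}, X' = {x_prime}, gain = {gain}, ROI = {roi:.0%}")   # ROI = 150%
```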
What then is the fallacy? One way to put it is like this (edited from original):
The "savings" you get back are what you already own, and you only need to claw them back.
Not so: you don't have it, so it isn't yours to count, and resting on some moral or legal claim to it is nonsense. The thief laughs at you, even if the blog evidence is that nobody else notices the joke, including economists who should know better! The thing that Richard is talking about is not "savings" in economic terms but "sunk costs."
In business terms, too, all numbers and formulas are just models. As the fundamental requirement here is to compare different investments, then as long as we treat "savings" or "shrinkage" or "sunk costs" or whatever the same way in each instance of the model, the result is comparable. Mathematics simply treats negative numbers as the reverse of positive ones; it doesn't refuse to handle them. A "saving" is just a negative number subtracted from some positive number that might be called the "ideal maximum".
Having said all that, Richard's other points are spot on:
In closing, it still remains the case that security people say their managers don't understand security. And, as above, managers are safe in assuming that security people don't understand business. Another point that is starting to become more widely accepted, thank heavens, again spotted recently from sensibly sane Arthur ( Chris Walsh :).
Gunnar Peterson writes: The agenda for Metricon 2.0 in Boston August 7th has been set. Metricon is co-located with the Usenix security conference. The details, travel info, registration, and agenda are here.
There are a limited number of openings so please REGISTER SOON if interested in attending. A summary of the presentations:
I have a lot of skepticism about the notion of provable security.
To some extent this is just efficient hubris -- I can't do it so it can't be any good. Call it chutzpah, if you like, but there's slightly more relevance to that than egotism, as, if I can't do it, it generally signals that businesses will have a lot of trouble dealing with it. Not because there aren't enough people better than me, but because, if those that can do it cannot explain it to me, then they haven't got much of a chance in explaining it to the average business.
Added to that, there have been a steady stream of "proofs" that have been broken, and "proven systems" that have been bypassed. If you look at it from a scientific investigative point of view, generally, the proof only works because the assumptions are so constrained that they eventually leave the realm of reality, and that's particularly dangerous to do in security work.
Added to all that: The ACM is awarding its Gödel prize for a proof that there is no proof:
In a paper titled "Natural Proofs" originally presented at the 1994 ACM STOC, the authors found that a wide class of proof techniques cannot be used to resolve this challenge unless widely held conventions are violated. These conventions involve well-defined instructions for accomplishing a task that rely on generating a sequence of numbers (known as pseudo-random number generators). The authors' findings apply to computational problems used in cryptography, authentication, and access control. They show that other proof techniques need to be applied to address this basic, unresolved challenge.The findings of Razborov and Rudich, published in a journal paper entitled "Natural Proofs" in the Journal of Computer and System Sciences in 1997, address a problem that is widely considered the most important question in computing theory. It has been designated as one of seven Prize Problems by the Clay Mathematics Institute of Cambridge, Mass., which has allocated $1 million for solving each problem. It asks - if it is easy to check that a solution to a problem is correct, is it also easy to solve the problem? This problem is posed to determine whether questions exist whose answer can be quickly checked, but which require an impossibly long time to solve.
The paper proves that there is no so-called "Natural Proof" that certain computational problems often used in cryptography are hard to solve. Such cryptographic methods are critical to electronic commerce, and though these methods are widely thought to be unbreakable, the findings imply that there are no Natural Proofs for their security.
If so, this can count as a plus point for risk management, and a minus point for the school of no-risk security. However hard you try, any system you put in place will have some chances of falling flat on its face. Deal with it; the savvy financial cryptographer puts in place a strong system, then moves on to addressing what happens when it breaks.
The "Natural Proofs" result certainly matches my skepticism, but I guess we'll have to wait for the serious mathematicians to prove that it isn't so ... perhaps by proving that it is not possible to prove that there is no proof?
For some obscure reason, this morning I ploughed through the rather excellent but rather deep tome of Peter Gutmann's Cryptographic Security Architecture - Design and Verification (or at least an older version of chapter 2, taken from his thesis).
He starts out by saying:
Security-related functions which handle sensitive data pervade the architecture, which implies that security needs to be considered in every aspect of the design, and must be designed in from the start (it’s very difficult to bolt on security afterwards).
And then spends much of the chapter showing why it is very difficult to design it in from the start.
When, then, to design security in at the beginning and when to bolt it on afterwards? In my Hypotheses and in the GP essays I suggest it is impractical to design the security in up-front.
But there still seems to be a space where you do exactly that: design the security in up-front. If Peter G can write a book about it, if security consultants take it as unquestionable mantra, and if I have done it myself, then we need to bring these warring viewpoints closer to defined borders, if not actual peace.
Musing on this, it occurs to me that we design security up front when the mission is security, and not when it isn't. What this means is open to question, but we can tease out some clues.
A mission is that which when you have achieved it, you have succeeded, and if you have not, you have failed. It sounds fairly simple when put in those terms, and perhaps an example from today's world of complicated product will help.
For example a car. Marketing demands back-seat DVD players, online Internet, hands-free phones, integrated interior decorative speakers, two-tone metallised paint and go-faster tail trim. This is really easy to do, unless you are trying to build a compact metal box that also has to get 4 passengers from A to B. That is, the mission is transporting the passengers, not their entertainment or social values.
This hypothesis would have it that we simply have to divide the world's applications into those where security is the mission, and those where some other mission pertains.
E.g., with payment systems, we can safely assume that security is the mission. A payment system without security is an accounting system, not a payment system. Similar logic with an Internet server control tool.
With a wireless TCP/IP device, we cannot be so dismissive; an 802.11 wireless internet interface is still good for something if there is no security in it at all. A wireless net without security is still a wireless net. Similar logic with a VoIP product.
(For example, our favourite crypto tools, SSH and Skype, fall on opposing sides of the border. Or see the BSD's choice.)
So this speaks to requirements; a hypothesis might be that in the requirements phase, you first establish your mission. If your mission speaks to security first, then design security up front. If your mission speaks to other things, then bolt on the security afterwards.
Is it that simple?
One of the things we know is that MITMs (man-in-the-middle attacks) are possible, but almost never seen in the wild. Phishing is a huge exception, of course. Another fertile area is wireless lans, especially around coffee shops. Correctly, people have pointed to this point as a likely area where MITMs would break out.
Incorrectly, people have typically confused possibility with action. Here's the latest "almost evidence" of MITMs, breathtakingly revealed by the BBC:
In a chatroom used to discuss the technique, also known as a 'man in the middle' attack, Times Online saw information changing hands about how security at wi-fi hotspots – of which there are now more than 10,000 in the UK – can be bypassed. During one exchange in a forum entitled 'T-Mobile or Starbucks hotspot', a user named aarona567 asks: "will a man in the middle type attack prove effective? Any input/suggestions greatly appreciated?"
"It's easy," a poster called 'itseme' replies, before giving details about how the fake network should be set up. "Works very well," he continues. "The only problem is,that its very slow ~3-4 Kb/s...."
Another participant, called 'baalpeteor', says: "I am now able to tunnel my way around public hotspot logins...It works GREAT. The dns method now seems to work pass starbucks login."
Now, the last paragraph is something else, it is referring to the ability to tunnel through DNS to get uncontrolled access to the net. This is typically possible if you run your own DNS server and install some patches and stuff. This is useful, and economically sensible for anyone to do, although technically it may be infringing behaviour to gain access to the net from someone else's infrastructure (laws and attitudes varying...).
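For the curious, the shape of the trick is simple enough to sketch (conceptual only: the domain is invented, and a real tunnel needs a cooperating nameserver, chunking for DNS label limits, and a reply channel -- the "patches and stuff"):

```python
# Conceptual sketch of DNS tunnelling: data rides out in the query name.
# The domain is invented; a working tunnel needs a nameserver you control,
# chunking for the 63-byte label limit, and a way to carry replies back.
import base64, socket

def send_via_dns(data: bytes, domain: str = "tunnel.example.com") -> None:
    label = base64.b32encode(data).decode().rstrip("=").lower()
    try:
        socket.gethostbyname(f"{label}.{domain}")   # the hotspot's resolver relays the query
    except socket.gaierror:
        pass        # NXDOMAIN is fine; the query itself carried the payload

send_via_dns(b"hello")
```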
So where's the evidence of the MITM? Guys talking about something isn't the same as doing it (and the penultimate paragraph seems to be talking about DNS tunnelling as well). People have been demoing this sort of stuff at conferences for decades ... we know it is possible. What we also know is that it is not a good use of your valuable time as a crim. People who do this sort of thing for a living search for techniques that give low visibility, and high gathering capability. Broadcasting in order to steal a single username and password fails on both counts.
If we were scientists, or risk-based security scholars, what we need is evidence that they did the MITM *and* they committed a theft in so doing it. Only then can we know enough to allocate the resources to solving the problem.
To wrap up, here is some *credible* news that indicates how to economically attack users:
Pump and dump fraudsters targeting hotels and Internet cafes, says FBI
Cyber crooks are installing key-logging malware on public computers located in hotels and Internet cafes in order to steal log-in details that are used to hack into and hijack online brokerage accounts to conduct pump and dump scams. The US Federal Bureau of Investigation (FBI) has found that online fraudsters are targeting unsuspecting hotel guests and users of Internet cafes.
When investors use the public computers to check portfolios or make a trade, fraudsters are able to capture usernames and passwords. Funds are then looted from the brokerage accounts and used to drive up the prices of stocks the fraudsters had bought earlier. The stock is then sold at a profit.
In an interview with Bloomberg reporters, Shawn Henry, deputy assistant director of the FBI's cyber division, said people wouldn't think twice about using a computer in an Internet cafe or business centre in a hotel, but he warns investors not to use computers they don't know are secure.
Why is this credible, and the other one not? Because the crim is not sitting there with his equipment -- he's using the public computer to do all the dangerous work.
Unconfirmed claims are being made on WSJ that the hackers in the TJX case did the following:
The TJX hackers did leave some electronic footprints that show most of their break-ins were done during peak sales periods to capture lots of data, according to investigators. They first tapped into data transmitted by hand-held equipment that stores use to communicate price markdowns and to manage inventory. "It was as easy as breaking into a house through a side window that was wide open," according to one person familiar with TJX's internal probe. The devices communicate with computers in store cash registers as well as routers that transmit certain housekeeping data. After they used that data to crack the encryption code the hackers digitally eavesdropped on employees logging into TJX's central database in Framingham and stole one or more user names and passwords, investigators believe. With that information, they set up their own accounts in the TJX system and collected transaction data including credit-card numbers into about 100 large files for their own access. They were able to go into the TJX system remotely from any computer on the Internet, probers say.
OK. So assuming this is all true (and no evidence has been revealed other than the identity of the store where it happened), what can we say? Lots, and it is all unpopular. Here's a scattered list of things, with some semblance of connectivity:
a. Notice how the crackers still went for the centralised database. Why? It is validated information, and is therefore much more valuable and economic. The gang was serious and methodical. They went for the databases.
Conclusion: Eavesdropping isn't much of a threat to credit cards.
b. Eavesdropping is a threat to passwords, assuming that is what they picked up. But, we knew that way back, and that exact threat is what inspired SSH: eavesdroppers sniffing for root passwords. It's also where SSL is most sensibly used.
c. Eavesdropping is a threat, but MITM is not: by the looks of it, they simply sat there and sucked up lots of data, looking for the passwords. MITMs are just too hard to make them economic, *and* they leave tracks. "Who exactly is it that is broadcasting from that car over there....?"
(For today's almost evidence of the threat of MITMs see the BBC!)
d. Therefore, SSL v1 would have been sufficient to protect against this threat level. SSL v2 was overkill, and over-expensive: note how it wasn't deployed to protect the passwords from being eavesdropped. Neither was any other strong protocol. (Standard problem: most standardised security protocols are too heavy.)
TJX and 45 million americans say "thanks, guys!" I reckon it is going to take the other 255 million americans to lose big time before this lesson is attended to.
e. Why did they use a weak crypto protocol? Because it is the one delivered in the hardware.
Question: Why is hardware often delivered with weak crypto?
f. And, why was a weak crypto protocol chosen by the WEP people? And why are security observers skeptical that the new replacement for WEP will last any longer? The solution isn't in the "guild" approach I mentioned earlier, so forget ranting about how people should use a good security expert. It's in the institutional factors: security is inversely proportional to the number of designers. And anything designed by an industry cartel has a lot of designers.
g. Even if they had used strong crypto, could the breach have happened? Yes, because the network was big and complex, and the hackers could have simply plugged into some place elsewhere. Check out the clue here:
The hackers in Minnesota took advantage starting in July 2005. Though their identities aren't known, their operation has the hallmarks of gangs made up of Romanian hackers and members of Russian organized crime groups that also are suspected in at least two other U.S. cases over the past two years, security experts say. Investigators say these gangs are known for scoping out the least secure targets and being methodical in their intrusions, in contrast with hacker groups known in the trade as "Bonnie and Clydes" who often enter and exit quickly and clumsily, sometimes strewing clues behind them.
Recall that transactions are naked and vulnerable. Because the data is seen in so many places, savvy FCers assume the transactions are visible by default, and thus vulnerable unless intrinsically protected.
h. And, even if the entire network had been protected by some sort of overarching crypto protocol like WEP, the answer is to take over a device. Big stores means big networks means plenty of devices to take over.
i. Which leaves end-to-end encryption. The only protection you can really count on is end-to-end. WEP, WPA, IPSec and other such infrastructure-level systems are only a hopeful answer to an easy question; end-to-end security protocols are the real answer to application-level questions.
(e.g., they could have used SSL for protecting the password access to the databases, end to end. But they didn't.)
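A minimal sketch of what that end-to-end protection looks like at the application layer (assuming the Python 'cryptography' package; the key provisioning and terminal name are invented): whatever WEP or the store LAN does, the captured bytes are opaque to the sniffer.

```python
# Minimal end-to-end sketch: the two endpoints share a key, so link-layer
# weaknesses don't expose the login. Assumes the 'cryptography' package;
# key provisioning and the terminal identifier are invented for illustration.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

end_to_end_key = AESGCM.generate_key(bit_length=256)   # provisioned out of band, per endpoint pair
aead = AESGCM(end_to_end_key)

login = b"user=storeops&password=hunter2"
nonce = os.urandom(12)
on_the_wire = nonce + aead.encrypt(nonce, login, b"pos-terminal-17")

# The eavesdropper on the store network sees only on_the_wire; only the
# database end, holding the key, can recover (and authenticate) the login.
assert aead.decrypt(on_the_wire[:12], on_the_wire[12:], b"pos-terminal-17") == login
```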
j. The other requirement is to make the data insensitive to breaches. That is, even if a crook gets all the packets, he can't do anything with them. Not naked, as it were. End-to-end encryption then becomes a privacy benefit, not a security necessity.
However, to my knowledge, only Ricardo and AADS deliver this, and most other designers are still wallowing around in the mud of encrypted databases. A possible exception to this is the selective disclosure approach ... but for various business reasons that is even less likely to field than Ricardo and AADS were.
k. Why don't we use more end-to-end encryption with naked transaction protocols? One reason is that they don't scale: we have to write one for each application. Another reason is that we've been taught not to by generations: "you should use a standard security product." ... as seen by TJX, who *did* use a standard security product.
Conclusion: Security advice is "lowest common denominator" grade. The best advice is to use a standard product that is inapplicable to the problem area, and if that's the best advice, that also means there aren't enough FC-grade people to do better.
l. "Oh, we didn't mean that one!" Yeah, right. Tell us how to tell? Better yet, tell the public how to tell. They are already being told to upgrade to WPA, as if improving 1% of their network from 20% security to 80% security is going to help.
m. In short, people will seize on the encryption angle as the critical element. It isn't. If you are starting to get to the point of confusion due to the multiplying conflicts, silly advice, and sense of powerlessness the average manager has, you're starting to understand.
This is messy stuff, and you can pretty much expect most people to not get it right. Unfortunately, most security people will get it wrong too in a mad search for the single most important mistake TJX made.
The real errors are systemic: why are they storing SSNs anyway? Why are they using a single number for the credit card? Why are retail transactions so pervasively bound with identity, anyway? Why is it all delayed credit-based anyway?
The gold community is a-buzz with the news ... first an announcement from BullionVault (no URL):
There have been growing stresses on our relationship with Brinks Inc, the US-owned vault operator, and it has become clear that they feel uncomfortable about continuing to vault BullionVault gold. Why this might be so I am genuinely unable to say. Their exact reasoning has not been disclosed to us.
Fortunately, there is an excellent alternative available to us in ViaMat International, the largest Swiss-owned vault operator, and one which has a full quota of internationally located professional bullion vaults.
Swiss ownership suggests an independence from some of the pressures which Brinks may have found themselves operating under recently. Also you - our users - have chosen to vault 26 times as much gold in Switzerland as in the United States, so we believe this change will be both natural and welcome.
This is an echo of the old e-gold story, where different reputable vault companies handed the e-gold reserves around like it was musical chairs. BullionVault is a new generation gold system, not encouraging payments but instead encouraging holding, buying and selling. It's not yet clear why they would be a threat.
But then, from 1mdc, a payment system:
ATTENTION Friday Apr 27 2007 - 4AM UTC
It appears that a U.S. Government court order has forced e-gold(R) to close down or confiscate all of 1mdc's accounts. All of 1mdc's accounts have been closed at e-gold by order of the US Government.
Please note that it appears the accounts of a number of the largest exchangers and largest users of e-gold have also been closed or confiscated overnight: millions of Euros of gold have been held in this event. A couple of large exchanger's accounts have been shutdown.
If the confiscation or court order in the USA is reversed, your e-gold grams remaining in 1mdc will "unbail" normally to your e-gold account.
We suggest not panicking: more will be known on Monday when there will be more activity in the courts.
You CAN spend your 1mdc back and fore to other 1mdc accounts. 1mdc is operating normally within 1mdc. However you should be aware there is the possibility your e-gold will never be released from e-gold due to the court order.
Ultimately e-gold(R) is an entirely USA-based company, owned and operated by US citizens, so, as e-gold users we must respect the decisions of US courts and the US authorities regarding the disposition of e-gold. Even though 1mdc has no connection whatsoever to the USA, and most 1mdc users are non-USA, e-gold(R) is USA based.
You are welcome to email "team@1mdc.com", thank you.
Yowsa! That's heavy. And now, BullionVault's actions make perfect sense. Brinks probably heard rumour of happenings, and BullionVault are probably sweating off those pounds right now crossing the border with rucksacks of kg of yellow ballast.
It's worthwhile looking at how 1mdc worked, so as to understand what all the above means. 1mdc is simply a pure-play e-gold derivative system, in that 1mdc maintained one (or some) e-gold accounts for the reserve value, and then offered an accounting system in grams for spending back and forth.
1mdc then stands out as not actually managing and reserving in gold. Instead it manages e-gold ... which manages and reserves in gold. Now, contractually, this would be quite messy, excepting that the website has fairly generally made no bones of this: 1mdc is just e-gold, handled better.
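To make the structure plain, here's a toy sketch of such a derivative issuer (illustrative only, certainly not 1mdc's actual code): internal spends are pure book entries, and the underlying e-gold only moves on bail-in and unbail -- which is exactly where a court order bites.

```python
# Toy sketch of a derivative issuer over a pooled reserve account; not 1mdc's code.
class DerivativeIssuer:
    def __init__(self):
        self.reserve_grams = 0.0    # grams sitting in the issuer's own e-gold account(s)
        self.ledger = {}            # internal accounting in grams: user -> balance

    def bail_in(self, user, grams):
        """User pays e-gold into the pooled reserve; the issuer credits its ledger."""
        self.reserve_grams += grams
        self.ledger[user] = self.ledger.get(user, 0.0) + grams

    def spend(self, payer, payee, grams):
        """Internal spend: a book entry only, the reserve never moves."""
        assert self.ledger.get(payer, 0.0) >= grams
        self.ledger[payer] -= grams
        self.ledger[payee] = self.ledger.get(payee, 0.0) + grams

    def unbail(self, user, grams):
        """Only here does the underlying e-gold move -- and only if e-gold permits it."""
        assert self.ledger.get(user, 0.0) >= grams and self.reserve_grams >= grams
        self.ledger[user] -= grams
        self.reserve_grams -= grams   # an e-gold spend back out to the user's own account
```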
So, above, 1mdc is totally unaffected! Except all the users who held e-gold there (in 1mdc) are now totally stuffed! Well, we'll see in time what the real story is, but when this sort of thing happens, there are generally some losers.
What then was the point of 1mdc? e-gold were too stuffy in many ways. One was that they charged a lot; another was that the owners of e-gold were pretty aggressive characters, and they scared a lot of their customers away. Other problems existed, which resulted in a steady supply of customers over to 1mdc, who famously never charged fees.
We could speculate that 1mdc was destined for sale at some stage. And I stress, I don't actually know what the point was. In contrast to the e-gold story, 1mdc ran a fairly tight ship, printed all of their news in the open, and didn't share the strategy beyond that.
It may appear then that the US has moved to close down competition. Other than pique at not being able to see the account holders, this appeared yesterday to be a little mysterious and self-damaging. Today's news however clarifies, which I'll try and write a bit more about in another entry:
...the Department of Justice also obtained a restraining order on the defendants to prevent the dissipation of assets by the defendants, and 24 seizure warrants on over 58 accounts believed to be property involved in money laundering and operation of an unlicensed money transmitting business.
Adam over at EC joined the fight against the disaster known as Internet Security and decided Choicepoint was his wagon. Mine was phishing, before it got boring.
What is interesting is that Adam has now taken on the meta-question of why we didn't do a better job. Readers here will sympathise. Read his essay about how the need for change is painful, both directly and broadly:
At a human level, change involves loss and the new. When we lose something, we go through a process, which often includes shock, anger, denial, bargaining and acceptance. The new often involves questions of trying to understand the new, understanding how we fit into it, if our skills and habits will adapt well or poorly, and if we will profit or lose from it.
Adam closes with a plea for help on disclosure :
I'm trying to draw out all of the reasons why people are opposed to change in disclosure habits, so we can overcome them.
I am not exactly opposed but curious, as I see the issues differently. So in a sense, deferring for a moment a brief comment on the essay, here are a few comments on disclosure.
Schechter & Smith use an approach of modelling risks and rewards from the attacker's point of view which further supports the utility of sharing information by victims:
Sharing of information is also key to keeping marginal risk high. If the body of knowledge of each member of the defense grows with the number of targets attacked, so will the marginal risk of attack. If organizations do not share information, the body of knowledge of each one will be constant and will not affect marginal risk. -- Stuart E. Schechter and Michael D. Smith, "How Much Security is Enough to Stop a Thief?", Financial Cryptography 2003, LNCS, Springer-Verlag.
Yet, to share raises costs for the sharer, and the benefits are not accrued to the sharer. This is a prisoner's dilemma for security, in that there may well be a higher payoff if all victims share their experiences, yet those that keep mum will benefit and not lose more from sharing. As all potential sharers are joined in an equilibrium of secrecy, little sharing of security information is seen, and this is rational. We return to this equilibrium later.
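The equilibrium is easy to see with invented numbers (a toy sketch, not figures from the paper): whatever the other victim does, keeping mum pays better, even though both sharing beats both staying silent.

```python
# Toy payoff sketch of the equilibrium of secrecy; all numbers are invented.
SHARE_COST = 3           # cost borne by the victim who discloses
BENEFIT_PER_SHARER = 2   # defensive benefit each victim gets per victim who shares

def my_payoff(i_share: bool, other_shares: bool) -> int:
    return BENEFIT_PER_SHARER * (int(i_share) + int(other_shares)) - (SHARE_COST if i_share else 0)

for me in (True, False):
    for them in (True, False):
        print(f"I share={me}, other shares={them}: my payoff = {my_payoff(me, them)}")
# Keeping mum dominates (2 beats 1, and 0 beats -1), yet both sharing (1 each)
# beats both keeping mum (0 each) -- the prisoner's dilemma in miniature.
```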
(OK, so some explanation. At what point do we forget the nonsense touted in the press and move on to a real solution where the lost data doesn't mean a compromise? IOW, "we were given the task of securing all retail transactions......")
So, while I think that there is a lot to be said about disclosure, I think it is also a limited story. I personally prefer some critical thought -- if I can propose half a dozen solutions to some poor schmuck company's problems, why can't they?
And it is to that issue that Adam's essay really speaks. Why can't we change? What's wrong with us?
I've previously blogged on how the non-profit sector is happy hunting grounds for scams, fraud, incentives deals and all sorts of horrible deals. This caused outrage by those who believe that non-profits are somehow protected by their basic and honest goodness. Actually, they are fat, happy and ripe for fraud.
The basic conditions are these:
Examples abound, if you know how to look. Coming from a background of reading the scams and frauds that rip apart the commercial sector, seeing it in the non-profit sector is easy, because of the super-fertile conditions mentioned above.
Here's one. In New York state in the US of A, the schools have been found taking incentives to prefer one lender over another for student loans.
[State Attorney-General] Cuomo has sent letters to and requested documents from more than a hundred schools for information about any financial incentives the schools or their administrators may have derived from doing business with certain lenders, such as gifts, junkets, and awards of stock. A common practice exposed by Cuomo is a revenue-sharing agreement, whereby a lender pays back a school a fixed percentage of the net value of loans steered its way. Lenders particularly benefit when schools place them on a short list of “preferred” lenders, since 3,200 firms nationwide are competing for market share in the $85 billion a year business.
Here's an inside tip that I picked up on my close observance of the mutual funds scandal, also brought by then NY-AG Eliot Spitzer (now Governor, an easy pick as he brought in $1.5bn in fines). If the Attorney-General writes those letters, he already has the evidence. So we can assume, for the working purposes of a learning exercise, that this is in fact what is happening.
There's lots of money ($85Bn). The money comes from somewhere. It can be used in small incentives to steer the major part in certain directions.
To narrow the options, most schools publish lists of preferred lenders for both government and private loans. They typically feature half a dozen lenders, but they might have only one. Students should always ask if the school is getting any type of payment or service from lenders on the list. To get a loan, schools must certify that you are qualified. By law, schools can't refuse to certify a loan, nor can they cause "unreasonable delays," because you choose a nonpreferred lender. That said, many schools strongly discourage students from choosing a nonpreferred lender.
...
The University of North Carolina at Chapel Hill tells students on a form that if they choose a lender other than the school's sole preferred lender for Stafford loans, "there will be a six-week delay in the processing of your loan application" because it must be processed manually.
How do we address this? If we are a non-profit, then we can do these things:
It's not so exceptional. Some schools got it:
Thirty-one other schools joined Penn and NYU in adopting a code of conduct that prohibits revenue-sharing with lenders, requires schools to disclose why they chose preferred lenders, and bans financial aid officers and other school officials from receiving more than nominal gifts from lenders.
Clues in bold. How many has your big fat, rich open source non-profit got? I know one that has all of the first list, and has none of the second.
Sometimes when we can't seem to get anywhere on analysing our own sector of criminal activity, it helps to look at some ordinary stuff. Here's one:
According to the Commission's complaint, between July and November 2006, the Defendants repeatedly hijacked the online brokerage accounts of unwitting investors using stolen usernames and passwords. Prior to intruding into these accounts, the Defendants acquired positions in the securities of at least fourteen securities, including Sun Microsystems, Inc., and "out of the money" put options on shares of Google, Inc. Then, without the accountholders' knowledge, and using the victims' own accounts and funds, the Defendants placed scores of unauthorized buy orders at above-market prices. After these unauthorized buy orders were placed, the Defendants sold the positions held in their own accounts at the artificially inflated prices, realizing profits of over $121,500.
To achieve this benefit, the prosecution alleges that $875,000 of damage was done.
It's a point worth underscoring: a criminal attack in our world often involves doing much more damage than the gain to the criminal. For that reason, we must focus on the overall result and not on the headline number. Here's a more aggressive damages number:
The pump and dump scheme, which occurred between July and November 2006, has cost one brokerage firm at least $2m in losses. An estimated 60 customers and nine US brokerage firms were identified as victims.
Also, funds seized.
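The asymmetry is worth putting in numbers (using the figures quoted above):

```python
# The damage-to-gain asymmetry, using the figures quoted in the reports above.
criminal_gain = 121_500            # profits realised by the defendants
damage_alleged = 875_000           # damage alleged by the prosecution
one_brokerage_losses = 2_000_000   # losses reported by a single brokerage firm

print(f"damage per dollar gained: {damage_alleged / criminal_gain:.1f}x")           # ~7.2x
print(f"one brokerage's losses alone: {one_brokerage_losses / criminal_gain:.1f}x") # ~16.5x
```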
In the ongoing saga of "what is security?" and more importantly, "why is it such a crock?" Bruce Schneier weighs in with some ruminations on "feelings" or perceptions, leading to an investigation of psychology.
I think the perceptional face of security is a useful area to investigate, and the essay shines as a compendium of archetypal heuristics, backed up by experiments, and written for a security audience. These ideas can all be fed into our security thinking, not to be taken for granted, but rather as potential explanations to be further tested. Recommended reading, albeit very long...
I would however urge some caution. I claim that buyers and sellers do not know enough about security to make rational decisions; the essay suggests a perceptional deviation as a second uncertainty. Can we extrapolate strongly from these two biases?
As it is a draft requesting comment, here are three criticisms, which suggest that the introduction of the essay seems unsustainable:
THE PSYCHOLOGY OF SECURITY -- DRAFT
Security is both a feeling and a reality. And they're not the same.
The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures.
Firstly, I'd suggest that "what security is" is not yet well defined, and has defied our efforts to come to terms with it. I say a bit about that in Pareto-secure but I'm only really looking at one singular aspect of why cryptographers are so focussed on no-risk security.
Secondly, both maths and feelings are approximations, not the reality. Maths is just another model, based on some numeric logic as opposed to intuition.
What one could better say is that security can be viewed through a perceptional lens, and it can be viewed through a mathematical lens, and we can probably show that the two views look entirely different. Why is this?
Neither is reality though, as both take limited facts and interpolate a rough approximation, and until we can define security, we can't even begin to understand how far from the true picture we are.
We can calculate how secure your home is from burglary, based on such factors as the crime rate in the neighborhood you live in and your door-locking habits. We can calculate how likely it is for you to be murdered, either on the streets by a stranger or in your home by a family member. Or how likely you are to be the victim of identity theft. Given a large enough set of statistics on criminal acts, it's not even hard; insurance companies do it all the time.
Thirdly, insurance is sold, not bought. Actuarial calculations do not measure security to the user but instead estimate risk and cost to the insurer, or more pertinently, insurer's profit. Yes, the approximation gets better for large numbers, but it is still an approximation of the very limited metric of profitability -- a single number -- not the reality of security.
What's more, these calculations cannot be used to measure security. The insurance company is very confident in its actuarial calculations because it is focussed on profit; for the purpose of this one result, large sets of statistics work fine, as well as large margins (life insurance can pay out 50% to the sales agent...).
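A toy premium calculation makes the distinction visible (every number below is invented): the arithmetic prices the insurer's book, and says nothing about how secure any particular home is.

```python
# Toy actuarial sketch; every figure here is invented for illustration.
p_burglary_per_year = 0.02    # estimated from neighbourhood crime statistics
average_claim = 8_000         # average payout when a burglary does happen

expected_payout = p_burglary_per_year * average_claim   # 160 per policy per year
premium = expected_payout * 1.5 + 40                    # loaded for commission, overhead, profit

print(f"expected payout {expected_payout:.0f}, premium charged {premium:.0f}")
# Reliable over a large book -- a statement about the insurer's profit,
# not about the security of any one home.
```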
In contrast, security -- as the victim sees it -- is far harder to work out. Even if we stick to the mathematical treatment, risks and losses include factors that aren't amenable to measurement, nor accurate dollar figures. E.g., if an individual is a member of the local hard drugs distribution chain, not only might his risks go up, and his losses down (life expectancy is generally lowered in that profession) but also, how would we find out when and how to introduce this skewed factor into his security measurement?
While we can show that people can be sold insurance and security products, we can also show that the security they gain from those products has no particular closeness to the losses they incur (if it was close, then there would be more "insurance jobs").
We can also calculate how much more secure a burglar alarm will make your home, or how well a credit freeze will protect you from identity theft. Again, given enough data, it's easy.
It's easy to calculate some upper and lower bounds for a product, but again these calculations are strictly limited to the purpose of actuarial cover, or insurer's profit.
They say little about the security of the user, and they probably play as much to the feelings of the buyer as to any mathematical model of the seller's risks and losses.
It's my contention that these irrational trade-offs can be explained by psychology.
I think that's a tough call, on several levels. Here are some contrary plays:
If we put all those things together, a complex pattern emerges. I look at a lot of these elements in the market for silver bullets, and, leaning heavily on the Spencarian theory of symmetrically insufficient information, I show that best practices may emerge naturally as a response to costs of public exposure, and not the needs for security. Some of the experiments listed (24,38) may actually augment that pattern, but I wouldn't go so far as to say that the heuristics described are the dominating factor.
Still, in conclusion, irrationality is a sort of code word in economics for "our models don't explain it, yet." I've read the (original) literature of insufficient information (Spence, Akerlof, etc) and found a lot of good explanations. Psychology is probably an equally rewarding place to look, and I found the rest of the article very interesting.
Insider fraud is like an evil twin of security. From the "it could be you" department...
There has been an internal feud at the company for some time between joint owners Kevin Medina, CEO, and John Naruszewicz, vice president, which culminated in a February 12 lawsuit. Naruszewicz sought, and received, a preliminary court injunction preventing Medina from accessing the company's funds. Naruszewicz claimed that Medina had been using corporate money to pay for a life of luxury, at the expense of the company and its customers.
Among the allegations were claims that Medina has used Registerfly's money to pay for a $10,000-a-month Miami Beach penthouse, a $9,000 escort, and $6,000 of liposuction surgery.
Many "security people" from the new, net-based culture only discover what older value institutions have known for centuries -- and then only when it happens to them.
The overall lesson that we need to bear in mind is that the twins should be kept in balance: cover the external security to the same extent as the internal security. Security proportional to risk, in other words, as having perfect security in one area is meaningless if there is a weak area elsewhere.
That's a case from the computer industry: It could be you... We can imagine that it all started out as an innocent need to network with some important clients.
(Note that the unsung hero here, the VP who challenged the fraud, will probably never be rewarded, thanked, or protected from counter-attacks.)
The Mozilla governance debate is running hot, rejoinders flowing thick and fast. Here is a seriously good riposte by James Donald:
A successful open source project has a large effect on what large numbers of people do. The effect has a large indirect effect on various for-profit ventures, who then proceed to give handouts to the non profit open source project. Thus, for example, linux was the beneficiary of vast amounts of work by engineers employed by corporations who feared that they would be screwed by Microsoft or wintel, and urgently wanted to have an alternative, or, in the case of Sun, had to ensure that their customers had an alternative. In that case, the big corporations were the good guys, reacting against the dangerous power of a particular big corporation, protecting everyone in the course of protecting themselves.
More nefarious activities are common: For example OpenID is backed by XRI, and tends to do things that are more in the interests of XRI rather than support the objectives of OpenID - but then there is nothing terribly wicked or nefarious about the objectives of XRI.
Getting back to the case in dispute, the various browser responses to phishing, to the internet crisis of identity and security, make more sense as a Verisign business plan than as a response to phishing, and in so doing harm security, in the sense that they are disinclined to take any effective action, for any effective action would compete with the services provided by Verisign.
We don't need to worry about governance with linux, for the interests of the contributors are well aligned - they all want free software ("free" as in "free speech", not just "free" as in "free beer") that does all the things that Microsoft's unfree software does. So we just proclaim Torvalds dictator and let him get on with it. No one cares about linux governance.
Trouble is that some of the contributors to Mozilla want to be paid for security, which means that they do not want Mozilla to provide free security - neither in the sense of free speech, nor in the sense of free beer.
And Mozilla really should provide free security.
Now, we might not agree with everything written above ... but James does raise the rather good point that there is a big difference between the Linux community and the Mozilla community.
Superficially, there is tight control over both projects. In the first case by Linus, Grand Vizier and Despot Over all his Kernels and Dominions. In the second, MoFo developers are Most Benevolent and Principled Dictators, Defenders of the Freedom of all our Code in all our Repositories. To paraphrase.
Both despots, both dictators. Here is the difference. Linus only rules over the kernel; which is then fed to 100 or more secondary tier distributors, within the freedom granted by GPL. They then feed it to users.
In contrast, Mozo rules over the whole show. The user interface ("UI") is controlled by the Mozo developers, but not by Linus in his project. For Mozo the money comes flooding in like the spring melt because they have a vast user base wanting to access the lodestone of net commerce: search engines.
For the linux kernel there is no such centralised opportunity, as the UI is controlled at the remote distro level. In practical terms, the Linux commercial opportunity has been outsourced into the free market of Redhat, Ubuntu, Suse, Debian and a hundred others.
The reason that no-one cares about Linux governance is that the very structure of the Linux industry is the governance. The governance issue of regulating benefits and opportunities is solved by placing it where it is best dealt with: in the market place.
Expressed as a principle, Linus says it's ok to be a systems despot, but, please, let the UI go free.
Visa and Nokia have taken the wraps off their handset-based payment system. Details of workings are unclear:
The wireless standard that will link mobile phones with payment systems in stores and elsewhere will be the near field communication (NFC) chip, which will be hidden under the phone cover and makes contact when swiped over a reader.
Visa being involved means it is likely to be tied to a classical Visa card, with billing backed into the existing system.
The initial version of the mobile payment platform, which launched on Monday, offers contactless mobile payment, personalization over mobile telephony networks, coupons and direct marketing. Subsequent versions of the platform, to be made available later in the year, will include remote payment--also using mobile telephony networks--and person-to-person payment.
What is perhaps more interesting is that Visa are floating themselves as a public company. This cuts the direct tie with the banks, which in the past owned Visa (and Mastercard). So now, we can expect Visa to be (a) not a bank, and (b) not regulated by the ownership method.
Which will leave Nokia in a more confident position, as it will be Nokia that has the final say on what goes on its phones.
It's yet more evidence that the payment function is gradually moving out of the banks' sphere of influence, alongside the exploding retail gift card issuance and the slow recovery of interest in net-based payment systems.
What is to happen in the coming year?
(Apologies for being behind on the routine end-of-year predictions, but I was AFI -- away from Internet -- and too depressed with predictions to make the journey. Still, duty calls!)
In echoes of the Sony versus Cuthbert mess of 2005, it all adds up to: "it's OK if you can get away with it," a message much reinforced by politics. There are no rules you can rely on, and everyone struggles to keep up with the results.
For this reason, I dub 2007 the year of the platypus! What more confabulated animal is there than our world?
The crying direct need is for such a product or process in employment processes. That's old news given Michael Spence's seminal work on signals about 30 years back, but what is curious is why nobody has really stepped in to look at it? A serious idea for b-school types or economists? How do we get away from Spencarian Signalling and put integrity back into employment interviews?
Some evidence: an open phone, this phone called me on Skype, a Cordless phoneset delivered with Skype, and today's news: Apple's iPhone does wifi and runs OSX.
Expect cellphone cross-over to wifi as routine by the end of 2007. The ability to redirect calls to the net dramatically changes the competitive position of the telcos, and the open platforms make software development a low cost reality.
Why do I say that? "Been there, done that!" Chat goes with payments like Molotov with cocktails, Eddy with Patsy, Blue with Danube, but to see that you have to see the full design. The blue touchpaper has been lit, stand well back.
This is very significant, historically. Very Very Significant: it is the end of the central bank monopoly on the control of issuance of money. As CBs are no longer the only issuers of money, we can historically mark the 20th century as the century of central banking, and the future is now refreshingly open.
Of course, we will see much hand-wringing and bemoaning of the lack of control. Also a stream of pointless and annoying regulations, audit requirements, quasi-bank statii and what-have-you. But the genie is out of the bottle.
And, it is also important to remember one of last year's very significant events, something so awesome that I never wrote it up on the blog: the Nobel Prize for Peace was awarded to Mohammad Yunus and the Grameen Bank. The significance of that event to financial cryptography is simple: their work is FC work, they just did it without our help.
The reason I know this is because around 2001-2003 I was involved in a company that tried to do it. The application epitomised by Grameen Bank, financial lending from large western sources to small 3rd world borrowers, is pure FC at its finest. (As RAH would say, of course, you can only do it with a system that shows 2 orders of magnitude savings in costs.)
This doesn't mean the end of RIAA raids and other dirty tricks. The war goes on, and battles will still be fought to keep the lid on territorial submissions. It also doesn't mean the end of cash cow economics. But it does mean lots of experiments ... on both sides ... as IP owners loosen their control on their property and p2p entrepreneurs get to grips with business models.
a. It's because Microsoft didn't understand the core weakness of security: marketing comes first. There is now sufficient evidence that they've allowed marketing to take over and drive fundamental architectural decisions which clash with security requirements we were promised. Specifically, they prayed to the false god of DRM, and the god took them for a ride. It is also the god of perpetual mirth, notching out Bachanus for hilarity. Contrast with Apple's approach, if you still aren't seeing it.
b. It's because the industrial criminal sector migrated through the easy ones and are now adept at the sophisticated ones. They can now take on new opportunities faster than responses. MITB is "game over" unless Vista is more secure than the market place will accept. Microsoft is stuck between a rock and a hard place; BCG says "cash cow."
c. It's because the economics of the OS has shifted. The third world cannot afford those prices, so they will go Vista if they can steal it, or Linux if they can't (which means they can switch easily to Mac when they can afford it). Given that most all growth is in non-1st world markets, that's kind of important to the overall game plan. Again, another rock and hard place for Microsoft.
z.b. Mac OSX and Macs will continue to acquire "all" real growth in marketshare in the 1st world, where people can afford it. Microsoft may see a buzz of pent-up activity burst through on the release of Vista, but with discouraging real take-up, where it counts.
z.c. Linux up. *BSD stable or down, but up if we include OSX. Better if they can keep up their reputation as being the serious man's free Unix, the professional's alternative to Linux. Worse if they don't keep up with the application install blues; perhaps they should look at stealing Apple's pkg system.
z.d. This will add costs to software developers as they are forced to support more platforms, but will also improve the overall security through diversity and also the recovery of competition. This might become the way consumers pay for security, who knows?
That's it, folks! Have a happy if confused year, evolutionarily speaking.
Preliminary Programme for "USABLE SECURITY 2007" which is colocated with FC2007 below, again in "title-only-peer-review" mode.
An Evaluation of Extended Validation and Picture-in-Picture Phishing Attacks
WSKE: Web Server Key Enabled Cookies
(Panel) - The Future of Phishing
Usability Analysis of Secure Pairing Methods
Low-cost Manufacturing, Usability, and Security: An Analysis of Bluetooth Simple Pairing and Wi-Fi Protected Setup
Empirical Studies on Software Notices to Inform Policy Makers and Usability Designers
Prime III: Where Usable Security and Electronic Voting Meet
(Panel) Building Trusted Systems: Does Trusting Computing Enable Trusted Systems?
Click to vote your interest: https://www.usablesecurity.org/accepted.html
(Ha! Finally someone else who supports encrypted web browsing. Hey, guys, can you fix the links so that they are relative and keep people in HTTPS?)
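Since the complaint is concrete, here is what the fix looks like in practice. A minimal sketch in Python, not the conference site's actual code; the host names are assumptions for illustration. Any link that points back at the same site gets its scheme and host stripped, so a visitor who arrived over HTTPS stays on HTTPS:

```python
from urllib.parse import urlsplit

# Hypothetical list of hosts that count as "this site" -- an assumption for illustration.
SITE_HOSTS = {"www.usablesecurity.org", "usablesecurity.org"}

def make_relative(href: str) -> str:
    """Rewrite absolute same-site links as relative ones so the browser
    keeps whatever scheme (ideally https) the visitor arrived with."""
    parts = urlsplit(href)
    if parts.hostname in SITE_HOSTS:
        path = parts.path or "/"
        if parts.query:
            path += "?" + parts.query
        return path          # no scheme, no host: the link can't drop you back to http://
    return href              # off-site links are left untouched

print(make_relative("http://www.usablesecurity.org/accepted.html"))  # -> /accepted.html
print(make_relative("https://example.com/elsewhere"))                # unchanged
```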
If I was a terrorist, this would be great news. The US has revealed yet another apparently illegal and plainly dumb civilian spying programme, called ATS or Automated Targeting System. Basically, it datamines all passenger info, exactly that which they were told not to do by Congress, and flags passengers according to risk profile.
Those who are responsible for US Homeland Insecurity had this to say:
Jayson Ahern, an assistant commissioner of the Department of Homeland Security's Customs and Border Protection Agency, told AP: "If (the ATS programme) catches one potential terrorist, this is a success."
From an FC perspective it is useful to see how other people's security thinking works. Let's see how their thinking stacks up. I think it goes like this: run the programme; if it catches one potential terrorist, it is a success; a success means the programme works, so keep it running and expand it.
See the problem? There's no feedback loop, no profit signal, no way to cull out the loss-making programmes. The only success criterion is in the minds of the people who are running the show.
Meanwhile, let's see how a real security process works.
Approximately, the objectives of the terrorists are to deliver the death of a thousand tiny cuts to the foreign occupiers, to draw from the Chinese parable. In other words, to cause costs to the US and their hapless friends, until they give up (which they must eventually do unless they have some positive reward built in to their actions).
So let's examine the secret programme from the point of view of the terrorist. Obviously, it was no real secret to the terrorists ("yes, they will be monitoring the passenger lists for Arab names and meals....") and it is relatively easy to avoid ("we have to steal fresh identities and not order meals"). What remains is a tiny cut magnified across the entire flying public, and a minor inconvenience to the terrorist. Because it is known, it can be avoided when needed.
We can conclude then that the ATS is directly aligned with the objectives of the terrorists. But there's worse, because it can even be used against the USA as a weapon. Consider this one again:
"If (the ATS programme) catches one potential terrorist, this is a success."
Obviously, one basic strategy is for the terrorists to organise their activities to avoid being caught. Or, a more advanced strategy is to amplify the effect of the US's own weapon against them. As the results of the ATS are aligned with objectives of the terrorists -- death by a thousand tiny cuts -- they can simply reinforce it. More and deeper cuts...
Get it? As a terrorist campaign, what would be better than inserting terrorists into the passenger lists? The programme works, so catching some terrorists causes chaos and costs to the enemy (e.g., the recent shampoo debacle). The programme works, so it is a success, therefore it must be reinforced. And it works predictably, so it is easy to spoof.
Even if the terrorist fails to get caught, he wins with some minor panic at the level of exploding shoes or shampoo bottles. And, if the suicide bomber is anything to go by, there is only a small cost to the terrorists as an organisation, and a massive cost to the enemy: consider the cost of training one suicide bomber (cheap if you want him to be caught, call it $10k) versus the cost of dealing with a terrorist that has been caught (publicly funded defence, decades of appeals, free accommodation for life, private jets and bodyguards..., call it $10m per "success").
Not to mention the megaminutes lost in the entire flying public removing their shoes. For the terrorist, ATS is the perfect storm of win-wins -- there is no lose square for him.
On the other hand, if our security thinking was based on risk management, we'd simply use any such system for tracking known targets and later investigation of incidents. Or turn it off, to save the electricity. Far too many costs for minimal benefit.
Luckily, in FC, we have that feedback loop. When our risk management programmes show losses, or our capital runs out, the plug is pulled. Maybe it is time to float the DHS on the LSE?
SWIFT won two Big Brother awards at last week's Austrian presentation. The first was in the "finance" category, underscoring the relationship between Orwell's despotic prediction of the future and the control of money and payments.
Nothing in English, it seems. The second Big Brother award for SWIFT was the "public voting" category. Surprisingly, the public voted that SWIFT was the worst thing that had happened to them over the last year, against other things that the Austrian people may have had much more exposure to.
This is somewhat significant as it signals how the SWIFT scandal is of wide-reaching impact. SWIFT has not handled it well, reacting in the worst possible way -- to suppress and deny the scandal where they could.
As an example of that poor handling, it is slowly becoming more clear that their responses are not honest. Last blog entry I pointed at this strange comment in responses to questioning by Quintessenz's Erich Moechel:
No, SWIFT does can not provide this type of data. It is important to understand that SWIFT does not have the means to read the information inside a message. SWIFT can only read the information necessary to route the messages across its network from bank A to bank B. In this respect SWIFT is similar to a postal service.
I don't believe that. To check, I've now talked to others that are in the finance sector and have some view on this, and universally, nobody else believes it either. Some research however has not been able to turn up a comprehensive answer, so it is not conclusive as yet.
If SWIFT are being honest, now's their chance to confirm. If they are spreading lies, now's the time to sack their Public Relations department, along with their security department, their privacy people, and their bank relations people.
Heck, sack the whole board. But that would be asking Europe's banking community to show some spine. Corporate governance is not going to come from the ECB and its flock.
Someone's paying attention to the tracking ability of mobile phones. Darrent points to Spyblog who suggests some tips to whistleblowers (those who sacrifice their careers and sometimes their liberty to reveal crimes in government and other places):
8. Do not use your normal mobile phone to contact a journalist or blogger from your Home Office location, or from home. The Cell ID of your mobile phone will pinpoint your location in Marsham Street and the time and date of your call. This works identically for Short Message Service text messages as well as for Voice calls. Such Communications Traffic Data does not require that a warrant be signed by the Home Secretary; a much more junior official has the power to do this, e.g. the Home Office Departmental Security Unit headed by Jacqueline Sharland.
9. Buy a cheap pre-paid mobile phone from a supermarket etc..
- Do not buy the phone or top up phone credit using a Credit Card, or make use of a Supermarket Loyalty Card.
- Do not switch on or activate the new mobile at home or at work, or when your normal mobile phone is switched on (the first activation of a mobile phone has its physical location logged, and it is easy to see what other phones are active in the surrounding Cells at the same time).
- Do not Register your pre-paid mobile phone, despite the tempting offers of "free" phone credit.
- Do not store any friends or family or other business phone numbers on this disposable phone - only press or broadcast media or blogger contacts.
- Set a power on PIN and a Security PIN code on the phone.
- Physically destroy the phone and the SIM card once you have done your whistleblowing. Remember that your DNA and fingerprints will be on this mobile phone handset.
- Do not be tempted to re-use the SIM in another phone or to put a fresh SIM in the old phone, unless you are confident about your ability to illegally re-program the International Mobile Equipment Identity (IMEI).
Just in case you think this is excessive paranoia, it recently emerged that journalists in the USA and in Germany were having their phones monitored, by their national intelligence agencies, precisely to try to track down their "anonymous sources".
Why would this not happen here in the UK?
See Computer Encryption and Mobile Phone evidence and the alleged justification for 90 days Detention Without Charge - Home Affairs Select Committee Oral Evidence 14th February 2006
...
25. If you decide to meet with an alleged "journalist" or blogger (who may not always be who they claim to be), or if a journalist or blogger decides to meet with an "anonymous source" (who also might not be who they claim to be), then you should switch off your mobile phones, since the proximity of two mobile phones in the same approximate area, at the same time, is something which can be data mined from the Call Data Records, even if no phone conversations have taken place. Typically a mobile phone will handshake with the strongest Cell Base Station transmitter every 6 to 10 minutes, and this all gets logged, all of the time.
Read the whole thing if it is important to you. Personally, I'd say that's a difficult list. If you are suspect, don't use a cellphone. Not that I have a better idea (although I think Spyblog's comments on Skype are perhaps a little weak.)
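To make item 25 concrete: the data mining it describes needs nothing fancier than a self-join on the call-data records. A minimal sketch in Python -- the record layout and the phone and cell names are invented for illustration, not any operator's actual schema:

```python
from datetime import datetime, timedelta

# Hypothetical call-data records: (phone, cell tower, time of idle handshake).
# No calls or texts are needed -- the periodic handshakes alone are logged.
cdr = [
    ("source_phone",     "CELL-4711", datetime(2006, 10, 2, 18, 4)),
    ("journalist_phone", "CELL-4711", datetime(2006, 10, 2, 18, 7)),
    ("source_phone",     "CELL-4711", datetime(2006, 10, 2, 18, 12)),
    ("unrelated_phone",  "CELL-0099", datetime(2006, 10, 2, 18, 5)),
]

def co_located(records, window=timedelta(minutes=10)):
    """Return pairs of phones seen on the same cell within `window` of each other."""
    pairs = set()
    for phone_a, cell_a, time_a in records:
        for phone_b, cell_b, time_b in records:
            if phone_a < phone_b and cell_a == cell_b and abs(time_a - time_b) <= window:
                pairs.add((phone_a, phone_b, cell_a))
    return pairs

print(co_located(cdr))
# {('journalist_phone', 'source_phone', 'CELL-4711')} -- the meeting is visible
# in the traffic data even though neither phone made a call.
```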
European data protection authorities empanelled an investigation, alongside their companion privacy commissioners. Their verdict? SWIFT broke the law.
The Belgian-based consortium known as Swift, which handles money transfers among banks, violated European privacy regulations when it turned over confidential transaction information to the Central Intelligence Agency and other American agencies, Belgium's privacy protection commission concluded today. ... Under European Union law, companies may not transfer confidential personal data to an entity in another country unless that country's privacy protections are deemed adequate. The union does not consider American protections adequate because the United States has never enacted comprehensive data protection laws. Under that rule, the commission found, Swift acted without a legal basis when it sent the data to the United States. Swift has defended the transfers on the ground that, because it has offices in the United States, it was bound by United States law, and had no choice but to turn over the data after the Treasury Department issued broad administrative subpoenas to it.
The commission rejected this argument, saying Swift was still subject to Belgian rules, regardless of Swift's American subsidiary and its legal obligations. "Swift should have realized that exceptional measures based on American rules do not legitimize hidden, systemic violations of fundamental European principles related to data protection over a long period of time," the commission wrote.
Others agreed. This is not helped by SWIFT's ducking and weaving. SWIFT may have provided some evidentiary information in paper form:
Both SWIFT and the U.S. authorities say records were subpoenaed as part of targeted investigations into suspected terrorist activity. In its defense, SWIFT reiterated on Monday that it received "significant protections and assurances" that the data transferred to the U.S. was used confidentially.
But -- rumour has it -- the subpoenas have not been supplied to EU investigators, and we already know that UST passes the information on for other purposes / investigations, as mentioned here before.
Quintessenz points out that their answers were evasive, and I'd agree. They must think the public are muggins, but they did let some things slip out:
No, SWIFT does can not provide this type of data. It is important to understand that SWIFT does not have the means to read the information inside a message. SWIFT can only read the information necessary to route the messages across its network from bank A to bank B. In this respect SWIFT is similar to a postal service.
Yet, the investigation concluded that SWIFT was not simply a messaging service, and was in practice a financial institution. Is SWIFT suggesting that SWIFT can't see the content? In that case, what is the motivation of the US-Treasury interest? We already know that banks send lots of messages between each other, so this would seem to be a comment of quite extraordinary strangeness, and SWIFT's case may depend on the answer to this question.
The investigation also concluded that SWIFT had not advised its overseers to sufficient extent, had acted independently, and had acted more as principal than as agent. In other words, SWIFT is responsible. Consider these three statements, by SWIFT, from diverse sources:
The US Treasury cannot search the data for evidence of nonterrorist related crime. SWIFT has explicitly excluded searches for tax evasion, economic espionage, money laundering or other criminal activity.
...
In a statement signed by SWIFT's CEO Leonard H. Schrank, the company said the U.S. Treasury does not have unlimited access to data stored by SWIFT, and the information it got was used only "for the exclusive purpose of terrorism investigations."
...
The laws haven't changed. The environment of terrorism hasn't changed. SWIFT is well aware of the laws and regulations people are concerned about. The Board is monitoring the situation on a regular basis. Beyond that, SWIFT cannot comment.
With the extraordinary powers Congress passed a month ago, SWIFT's agreement is no longer a barrier of any import. The laws have changed. When President Bush signed into law the Military Commissions Act, the act that destroys habeas corpus, he also signed in the ability to designate anyone as a terrorist. Without recourse.
SWIFT data may now be requested on discretion without any high degree of proof, simply on the say-so of the requestor.
Who's looking like a muggins now? The European privacy investigation wasn't fooled, and correctly rejected the argument that SWIFT was under US law; if the compliance broke European law, then the New York office of SWIFT should not have had the data in the first place. Again, Quintessenz asks:
As this process has been going on for nearly five years, why did SWIFT not cease to store all datasets in the New York headquarters?
To ensure the reliability and resilience of its network, SWIFT has redundant systems spanning multiple continents, including operating centres located on different continents. Each operating centre is an active backup to the other and is designed to independently manage SWIFT's entire operations, if required. In other words, messages are mirrored and stored for retrieval purposes during 124 days in both operating centres. This architecture has been in place for decades.
No muggins, that Erich Moechel ... From SWIFT again:
"We informed the overseers. What their position was in most cases was this didn't pose a risk for the financial stability of the financial system and that's what their remit is. So they didn't need to inform others and SWIFT wasn't legally bound to inform anyone else," said the spokesman."These discussions were held at the highest level between SWIFT's board and its overseers. None of them raised any objection."
So where are the regulators in this mess? The ECB remains under pressure. As mentioned, the central banks have to be involved in this one because they are the only regulators with credibility in the area. Begging out on some pretext must therefore be examined with the highest degree of skepticism.
The ECB is part of a group of central banks that oversee SWIFT informally but have no legal power to sanction it. "The group considered that this matter would not have financial-stability implications and therefore concluded it fell totally outside the remit of the oversight role," [ECB President] Trichet said.
"We did not give SWIFT any blessing in their compliance with these subpoenas. SWIFT remains fully responsible for its decision," he said, adding that it was not up to the ECB, but SWIFT, to decide whether to inform European institutions.
Yet, their mandate is only loosely financial stability; when called upon, they quite happily dive into other areas. Here's how Ben Bernanke, the new Chairman of the Federal Reserve, puts it:
Historically, the goals of banking regulation have included the safety and soundness of bank operations, the stability of the broader financial system, the promotion of competition and efficiency in banking, assistance to law enforcement, consumer protection, and broader social objectives.
In many of these cases, it takes a very open imagination to create the nexus between central bank activities and these areas of interest, so the hand-waving of the European Central Bank at this point is like watching a man drown, and assuming it is the lifeguard's responsibility to save him.
This is justly called a scandal for the underhanded way in which the US Treasury breached the SWIFT databases. It wasn't that the EU wouldn't have done a deal, it was that the UST felt it necessary to keep it secret so as to not ask the Europeans.
Secrecy is a bad policy. Kerckhoffs said it a century ago, and it remains as true today. The need to show that some enemy could benefit from seeing your operations is a flimsy excuse alongside the massive danger you do to your own side when you hide things from your own people, and let them fester.
Which leads into that other "government secrets" case. Judge Taylor, who a month or two back ruled one of the Bush domestic spying programmes illegal, has rejected arguments to stop the programme. Instead, she gave the government a week to get a reversal from the Appeals court, and effectively underlined the suggestion that the "government secrets" argument is being used to hide bad stuff, not good stuff.
Normally, we try and keep clear of politics, and stick to FC. Hence, the SWIFT breach represents a fascinating case of governance gone wrong in a major system of FC import. But it wouldn't be right to exclude mention of the broader canvas, such as the suspension of habeas corpus in the US of A. In the latest of a long series of bills, American lawmakers have shown themselves willing to hand over all power to the executive branch, and SWIFT is happy to comply with that.
As we come into the November USA election madness, some might be asking, "Where are the American people?" Most don't know or care. I think a comment on Bruce Schneier's blog had it best:
"We're better than that."No we're not.
Strike the American people. Strike their Congress. The only thing left is the judiciary and other nations, and I'm not holding my breath over the Europeans' response. Judge Taylor's case and the SWIFT breach in Europe bear watching closely.
One of the points behind Pareto-secure, if not *the* point (disagree here), is that only a few components ever achieve the strength to be rated Pareto-secure or even Pareto-complete. In short, that means they are so good that you don't need to worry about them in your design within your context (Pareto-secure) or even forever, in any reasonable scenario (Pareto-complete).
The headline component for this treatment is today's encryption algorithms. AES and the like are so strong we don't need to worry about them. But the corollary is that the protocols we use them in are nowhere near so secure, and our faith in Pareto-secure components has to be very carefully contained.
That extends to "modes," being those short protocols to create streams out of blocks. Which brings us to this very nice description from Mark Pustilnik of how short the distance between "strong" and "ridiculous" is with cipher modes.
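For readers who want to see the cliff-edge for themselves, here is a minimal sketch in Python, using the third-party "cryptography" package (the library choice is my assumption; Pustilnik's article is about the concepts, not this code). The cipher is the same strong AES in both runs; only the mode changes, and ECB cheerfully leaks the structure of the plaintext:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
iv = os.urandom(16)
plaintext = b"ATTACK AT DAWN!!" * 4          # four identical 16-byte blocks

def encrypt(mode):
    enc = Cipher(algorithms.AES(key), mode).encryptor()
    return enc.update(plaintext) + enc.finalize()

def blocks(ct):
    return [ct[i:i + 16].hex() for i in range(0, len(ct), 16)]

print("ECB:", blocks(encrypt(modes.ECB())))   # all four ciphertext blocks identical
print("CBC:", blocks(encrypt(modes.CBC(iv)))) # all four different

# Same Pareto-secure block cipher underneath; the "ridiculous" outcome is entirely
# the mode's doing -- equal plaintext blocks are visible to anyone watching.
```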
Just spotted, another excellent exposition of mathematics in pictures on Nick Szabo's site.
Arthur spots a humorous post:
But it also illustrated a fundamental difference in the way audits are conducted on both continents. In the United States, audits are about ensuring that sufficient controls are in place to mitigate risks. Thus, the audit findings tend to emphasize lapses in application and network security. In Europe, audits tend to focus on following a predefined process, being transparent in the actions taken, precisely defining policies and procedures, and adhering to international standards.
The difference, I suggest, may depend on whether you thought audits were useful, and whether auditors could be trusted to provide checks that were useful instead of being perverted by any of a hundred tricks. For an example of "agenda", see the recent HP case where the ethics officer was apparently quite happy to approve spying on board members.
Would an audit have picked that up? And more importantly, what do we want in a world where an audit won't pick up those things? Value for money, right?
A further problem is what is known as Audit Independence. Audits tend to suffer from exploitation by auditors. One sign of this is when they turn the process of governance into a branding exercise, such as eTrust or WebTrust. Is this a process to provide customer value or is it a process to earn fees?
Abstract: Widely-used online "trust" authorities issue certifications without substantial verification of the actual trustworthiness of recipients. Their lax approach gives rise to adverse selection: The sites that seek and obtain trust certifications are actually significantly less trustworthy than those that forego certification. I demonstrate this adverse selection empirically via a new dataset on web site characteristics and safety. I find that TRUSTe-certified sites are more than twice as likely to be untrustworthy as uncertified sites, a difference which remains statistically and economically significant when restricted to "complex" commercial sites. I also present analogous results of adverse selection in search engine advertising - finding ads at leading search engines to be more than twice as likely to be untrustworthy as corresponding organic search results for the same search terms.
Either way, the *result* of the branding exercise is clear -- you tend to acquire better than your fair share of scams, which are willing to pay the price to hide behind the brand. The extent to which the process behind the brand adds value then becomes all-important. The brand makes it effectively more difficult, perhaps harkening back to the old days when the professions were not supposed to advertise.
One conclusion that is emerging out of the current spate of governance failures -- mostly from the US but sometimes in Europe -- is to ask why auditors aren't picking up the frauds?
In Enron's case (Enron's $30bn), we know the Auditor was Arthur Andersen, and the reason they did not pick it up is that, at the least, they were conflicted. More likely, they were "running cover" for the company. Bawag / Refco wasn't picked up until Refco went public, and even then it was lucky (the guy lost his job, a fitting reward for public service).
Switching across to the *long term response* to uncontrolled auditors and rampant audit practices, we have a critical mass against Sarbanes-Oxley building up. Sir Alan Greenspan reaches out and the thumb goes down!
The Sarbanes-Oxley Act is doing more harm than good and must be overhauled, Alan Greenspan told a technology audience here. "One good thing: Sarbox requires the CEO to certify the financial statement. That's new and that's helpful. Having said that, the rest we could do without. Section 404 is a nightmare."
Sarbanes-Oxley was the legislation that approximately doubled the cost of audits in the US. Now the debate is on as to whether it is bad or good; does it deliver twice the value, or just twice the headaches? Does it deliver anything at all? Again, Sir Alan nails the key difference that likely counts above mere governance considerations:
He said the evidence is clear that *Sarbanes-Oxley strictures are driving initial public stock offerings away* from the New York Stock Exchange and to the London Stock Exchange. Increasingly, he said, people recognize that Sarbanes-Oxley must be changed. "The pressure on getting 404 significantly altered is rising and is taking on a critical mass." But he added, "You do not get a bill altered when the two names [Sarbanes and Oxley] are in the process of retiring. People are waiting until they are gone. Then, hopefully, changes will be made. Any bill that passes both houses almost unanimously, cannot be a good piece of legislation."
My emphasis. And don't miss that great quote at the end!
Not all agree. In a dramatic echo of the two posts on security training of last week, Sam E. Antar suggests that Sarbanes-Oxley is worth a partial raised thumb:
I say to you that the Sam E. Antar of twenty years ago (I am not a criminal today) would be just as successful in today's environment. Other than Sarbanes-Oxley and its limited reforms (which many misguided detractors are trying to weaken today), little progress has been made in the culture and attitude of the accounting profession (in private industry or government) regarding white collar crime.
Sam was the CFO for the Crazy Eddie fraud, so he is an expert in fraud. He now helps the other side (honest! His site was there last week, I swear!). He also bemoans the fact that accountants aren't trained enough in basic fraud. His more basic point:
Criminals always have the initiative, and the profession's approach to preventing fraud (whether as CPAs at accounting firms, accountants in government, the private sector, or the nonprofit sector) is "process oriented", rather than that of the criminal, who approaches his work in a judgmental way. Therefore, the criminal has the fundamental advantage against the under-informed, not very well trained accounting profession in regards to combating white collar crime.
That bears interesting comparison to the first quote above. It's worth stressing that criminals think differently, and thus always have the advantage over process; you won't hear that in white hat security classes, because it is very hard to say just how criminals think without exposing one's hat to a certain greyness.
Finally, I have long predicted a private class action response to the phishing and security morass (but not seen it yet). Here's a paper spotted by Adam that suggests it has merit:
Public choice analysis suggests that a meaningful public law response to insecure databases is as unlikely now as it was in the early Industrial Age. The Industrial Age's experience can, however, help guide us to an appropriate private law remedy for the new risks and new types of harm of the early Information Age. Just as the Industrial Revolution's maturation tipped the balance in favor of early tort theorists arguing that America needed, and could afford, a Rylands solution, so too the Information Revolution's deep roots in American society and many strains of contemporary tort theory support strict liability for bursting cyber-reservoirs of personal data instead of a negligence regime overmatched by fast-changing technology.
So which audit approach is better? Process oriented? Risk mitigation? It is clear that the value for money is sorely missing. Here's a summary of *my* thoughts, including the wider question of alternatives to audits:
We can do better by:
But don't expect anything soon.
Lynn mentioned in comments yesterday:
I guess I have to admit to being on a roll.
:-) Lynn grasped the nexus between the tea-room and the systems room yesterday:
One of the big issues is inadequate design and/or assumptions ... in part, failing to assume that the operating environment is an extremely hostile environment with an enormous number of bad things that can happen.
What I didn't stress was the reasons behind why security training was so important -- more important than your average CSO knows about. Lynn spots it above: reliability.
The reason we benefit from teaching security (think Fight Club here, not American Football) is that it clearly teaches how to build reliable systems. The problem addressed here is that unreliable systems fall foul of statistical enemies; in small systems those failures are weak, few and far between, but when you get to big systems and lots of transactions, they become significant, and systems without reliability die the death of a thousand cuts.
Security training solves this because it takes the statistical enemy up several notches and makes it apparent and dangerous even in small environments. And, once a mind is tuned to thinking of the attack of the aggressor, dealing with the statistical failure is easy, it's just an accidental case of what an aggressor could do.
I would even assert that the enormous amounts of money spent attempting to patch an inadequate implementation can be orders of magnitude larger than the cost of doing it right in the first place.
This is the conventional wisdom of the security industry -- and I disagree. Not because it doesn't make sense, nor because it isn't true (it makes sense! and it's true!), but because time and time again, we've tried it and it has failed.
The security industry is full of examples where we've spent huge amounts of money on up-front "adequate security," and it's been wasted. It is not full of examples where we've spent huge amounts of money up front, and it's paid off...
Partly, the conventional security industry wisdom fails because it is far too easy for us to hang it all out in the tea-room and make like we actually know what we are talking about in security. It's simply too easy to blather such received wisdom. In the market for silver bullets, we simply don't know, and we share that absence of knowledge with phrases and images that lose meaning through repetition. In such a market, we end up selling the wrong product for a big price -- payment up front, please!
We are better off -- I assert -- saving our money until the wrong product shows itself to be wrong. Sell the wrong product by all means, but sell it cheaply. Live life a little dangerously, and let a few frauds happen. Ride the GP curve up and learn from your attackers.
But of course, we don't really disagree, as Lynn immediately goes on to say:
Some of this is security proportional to risk ... where it is also fundamental that what may be at risk is correctly identified.
Right.
To close with reference to yesterday's post: Security talk also easily impresses the managerial class, and this is another reason why we need "hackers" to "hack", to use today's unfortunate lingo. A breach of security, rendered before our very servers, speaks for itself, in terms that cut through the sales talk of the silver bullet sellers. A breach of security is a hard fact that can be fed into the above risk analysis, in a world where Spencarian signals abound.
Once upon a time we all went to CompSci school and had lots of fun.
Then it all stopped. It stopped at different times at different places, of course, but it was always for the same reason. "You can't have fun," wiggled the finger of some professor talking a lot of Greek.
Well, it wasn't quite like that, but there is a germ of truth in the story. Have a look over at this post (spotted on EC) where the poster declares -- after a long-winded description of the benefits of a classical education -- that the computer science schools should add security to their canons, or core curricula. (Did I get the plurals right? If you know the answer you will have trouble following the rest of this post...)
Teaching someone to validate input is easy. Teaching someone why they need to be rabid about doing it every single time - so that they internalize the importance of security - is hard. It's the ethics and values part of secure coding I really hate having to retrofit, not the technical bit. As it says in Proverbs 22:6, "Train a child in the way he should go, and when he is old he will not turn from it."
This poster has it wrong, and I sense years in the classroom, under the ruler, doing verbs, adverbs and whatnots. No fun at all.
Of course, the reason security is hard is because they -- the un-fun classical scholars -- don't provide any particular view as to why it is necessary, and modern programmers might not be able to eloquently fill in the gap, but they do make very economic decisions. Economics trumps ethics in all known competitions, so talking about "ethics and values of secure coding" is just more Greek.
So what happened to the fun of it all? I speak of those age-old core skills now gone, the skills now bemoaned in the board rooms of just-hacked corporations the world around, as they frown over their SB1386s. To find out what happened to them, we have to go back a long time, to a time when titles mattered.
Time was, a junior programmer didn't have a title, and his first skills were listening, coding and frothing.
He listened, he coded and frothed, all under close supervision, and perchance entered the world's notice as a journeyman, being an unsupervised listener, coder and frother.
In those days, the journeyman was called a hacker, because he was capable of hacking some bits and pieces together, of literally making a hack of something. It was a bit of a joke, really, but our hacker could get the job done, and it was better to be known as one than not being known at all. (One day he would aspire to become a guru or a wizard, but that's another story.)
There was another lesser meaning to hacker, which derived from one of the journeyman's core classes in the canon -- breaching security. He was required to attack others' work. Not so as to break it but so as to understand it. To learn, and to appreciate why certain odd things were done in very certain but very odd ways.
Breaching security was not only fun, it was a normal and necessary part of computer science. If you haven't done it, you aren't well rounded, and this is partly why I muck around in such non-PC areas such as poking holes in SSL. If I can poke holes in SSL, so the theory of secure protocols goes, then I'm ready -- perhaps -- to design my own.
Indeed breaching security or its analogue is normal and essential in many disciplines; cryptology for example teaches that you should spend a decade or so attacking others' designs -- cryptanalysis -- before you ever should dare to make your own -- cryptography. Can you imagine doing a civil engineering course without bending some beams?
(Back to the story.) And in due course, when a security system was breached, it became known as being hacked. Not because the verb was so intended, but by process of elimination some hacker had done it, pursuant to his studies. (Gurus did not need to practice, only discuss over cappuccinos. Juniors listened and didn't do anything unless told, mostly cleaning out the Atomic for another brew.)
You can see where we are going now, so I'll hit fast-forward. More time passed ... more learning a.k.a. hacking ... The press grasped the sexy term and reversed the senses of the meaning.
Some company had its security breached. Hacking became annoying. The fun diminishes... Then the viruses, then the trojans, the DDOS, the phishing, the ....
And it all got lumped together under one bad name and made bad. Rustication for some, "Resigned" for others, Jail for some unfortunates.
That's how computer science lost the security skills from the canon. It was dropped from the University curricula by people who didn't understand that it was there for a reason. Bureaucrats, lawmakers, police, and especially corporates who didn't want to do security and preferred to blame others, etc etc, the list of naysayers is very long.
Having stripped it from computer science, we are now in a world where security is not taught, and we need to ask what they suggest in its place:
There are some bright security spots in the academic environs. For example, one professor I talked to at Stanford in the CS department - in a non-security class, no less - had his students "red team" and "blue team" their homework, to stress that any and all homework had to be unhackable. Go, Stanford! Part of your grade was your homework, but your grade was reduced if a classmate could hack it. As it should be.
Right, pretend hacking exercises. Also, security conferences. Nice in spirit, but the implementation is a rather poor copy of the original. Indeed, if you think about the dynamics of hacking games and security conferences ("Nobody ever got hacked going to Blue Hat??") we aren't likely to sigh with relief.
And then:
One of my colleagues in industry has an even more draconian (but necessary) suggestion for enforcing change upon universities. ... He decided that one way to get people's attention was to ask Congress to tie research funds to universities to changing the computer science curriculum. I dare say if universities' research grants were held up, they might find the extra time or muster the will to change their curricula!
Heaven help us! Are we to believe that the solution to the security training quandary is to ask the government to tell us how to do security training? Tell me this is a joke, and the Hello Kitty People haven't taken over our future:
The Hello Kitty people are those teenagers who put their personal lives on MySpace and then complain that their privacy is being violated. They are the TV viewers who think that the Hurricane Katrina rescue or the Iraq war were screwed up only because we don't, they belatedly discover, have actual Hello Kitties in political power. When inevitably some of world's Kitties, unknown beyond their cute image, turn out to be less than fully trustworthy, the chorus of yowling Kitty People becomes unbearable cacophony.
(We've written before about how perhaps the greatest direct enemy of Internet security is the government, so we won't repeat today.)
Here is a test. A corporation such as Oracle could do this, instead of blaming the government or the hackers or other corporations for its security woes. Or Microsoft could do it, or anyone, really.
Simply instruct all your people to breach security. Fill in the missing element in the canon. Breach something, today, and learn.
Obviously, the Greeks will complain about the errant irresponsibility of such support for crime ... but just as obviously, if they were to do that, their security knowledge would go up by leaps and bounds.
Sadly, if this were taken seriously, modern corporations such as Oracle would collapse in a heap. It's far cheaper to drop the training, blame "the hackers" and ask the government for a subsidy.
Even more sadly, we just don't have a better version of training than "weapons free." But let's at least realise that this is the issue: you classicals, you bureaucrats, you Sonys and Oracles and Suns, you scared insecure corporations have brought us to where we are now, so don't blame the Universities, blame yourselves.
And, in the name of all that is sacred, *don't ask the government for help!*
Vlad Miller writes from Russia (translated by Daniel Nagy):
We can invent any algorithm, develop any protocol, build any system, but, no matter how secure and reliable they are, it is the human taking the final decision that remains the last link of security. And, taking into account the peculiarities of human nature, the least reliable link at that, limiting the security of the entire system. All of this has long been an axiom, but I would like to share a curious case, which serves as yet another confirmation of this fact.
We all visit banks. Banks, in addition to being financial organizations attracting and investing their clients' funds, are complex systems of informational, physical and economic defenses for the deposited cash and account money. Economic defenses are based on procedures of confirming and controlling transactions, informational defenses -- on measures and procedures guarding the information about transactions, personal, financial and other data, while physical defenses comprise the building and maintenance of a secure physical perimeter around the guarded objects: buildings, rooms and valuable items.
Yet, regardless of the well-coordinated nature of the whole process, final decisions are always taken by humans: the guard decides whether or not to let the employee that forgot his ID through the checkpoint; the teller decides whether a person is indeed the owner of the passport and the account he claims to own; the cashier decides whether or not there is anything suspicious in the presented order. A failure can occur at any point, and not only as a consequence of fraudulent activities, but also due to carelessness or lack of attention on the part of the bank's employee, a link of the security system.
Not too long ago, I was in my bank to deposit some cash on my account. The teller checked my passport, compared my looks to the photo within, took my book and signed a deposit order for the given amount. The same data were duplicated in the bank's information system and the order with my book were passed on to the cashier. Meanwhile, I was given a token with the transaction number, which I should have presented to the cashier so that she could process the corresponding order. Everybody is familiar with this procedure; it may differ a bit from bank to bank, but the general principles are the same.
Walking over to the cashier, I executed my part of the protocol by handing over the token to the cashier (but I did not put the cash into the drawer before having been asked to do so). She looked at my order, affixed her signature to it and to my book and ... took a few decks of banknotes out of the safe and started feeding them to the counting machine. I got curious how long it would take for the young lady to realize the error in her actions, and did not interrupt her noble thrust. And only when she turned around to put the cash into the drawer did I delicately remark that I did not expect such a present for March 8 and that I came to deposit some cash, not to withdraw. For a few seconds, the young lady gave me a confused look, then, after looking at the order and crossing herself, thanked me for saving her from being fired.
The banking system relies a great deal on governmental mechanisms of prevention, control and reaction. Had I not, in computer-speak, interrupted the execution of the miscarried protocol, but instead left the bank with the doubled amount of money, it would not have led to anything except the confiscation of the amount of my "unfounded enrichment". The last link of security is unreliable: it fails at random and is strongly vulnerable to various interferences and influences. This is why control and reaction are no less important than prevention of attacks and failures.
There are already a couple of improvements signalled at Mozilla in security terms since the appointment of Window Snyder as single security chair, for those interested (and as Firefox has 10-20% of the browser market, it is somewhat important). Check out this interview.
What is the key rule that you live by in terms of security?
Snyder: That nothing is secure. ...
( Adi Shamir says that absolutely secure systems don't exist. Lynn's version: "can you say security proportional to risk?" I say Pareto-secure ... Of course, to risk practitioners, this is just journeyman stuff. )
So the answer, in one word: Is Firefox more secure than Internet Explorer?
Snyder: I don't think there is a one-word answer for that question.
If ever there was a battle that was unwinnable, that was it. It quite possibly needed someone who had extensive and internal experience of MS to nail that one for Mozo.
You dealt with security researchers at Microsoft and will deal with them at Mozilla. How do you see the community? There have been several cases where researchers have gone public with Firefox flaws.
Snyder: The security research community I see as another part of the Mozilla community. There's an opportunity for these people, if they get excited about the Mozilla project, to really contribute. They can contribute to secure design, they can suggest features, they can help us identify vulnerabilities, and they can help us test it. They can help us build tools to find more vulnerabilities. The spectrum is much broader (than with commercial products) in ways the research community can contribute to this project.
Earlier, Snyder said:
Snyder: There has been a lot of great work done. I think there is a great opportunity to continue that work and make the entire process available externally.
Is this a move towards opening Mozilla's closed security process? If so, that would be most welcome.
And in other news, Firefox 2.0 is almost here:
Version 2.0 of the software will still feature a raft of new features including an integrated in-line spell checker, as well as an anti-phishing tool (a must-have accessory that's in Opera 9 and will be included in IE 7),...
Hopefully someone will get a chance to review the anti-phishing tool (!) and compare it to the efforts of the research community over the last few years.
Universal Music has announced it is moving its catalogue to a "free with adverts" model:
Backed by Universal, Spiralfrog will become one of the first sites to offer free music legally. Fans will be able to download songs by the record company's roster of artists, including U2, Gwen Stefani and The Roots. The service - which will be supported by advertising, unlike other legal download sites that charge for music - will launch in the US and Canada from December. It will become available in Europe in early 2007.
If the business succeeds, that will be the new standard price. If it fails, then it will take another year or two, I would predict, before the price goes back down to $0 (in delicious irony, the above article is now only available for a pound!).
There are a few reasons to believe that the business may not succeed -- massive lobbying by the others, duff selection, lousy adverts and plenty of time before now and then -- so this is a non-trivial question. Here's another reason:
Josh Lawler, a US-based music industry legal specialist, said news of the new service was "inevitable". He said questions over how artists would be paid may make some reluctant to agree to the free service. "SpiralFrog will have to find a way to pay artists from the advertising dollars they are generating," he added. "But they're not necessarily going to know how many advertising dollars there are and so some artists are going to be hesitant about it."
Here's my favourite quote, from an HMV rep who otherwise was quite positive (pay a pound for the reference):
"What is a little concerning is that for a long time now, the trade body, BPI, has been anxious to put across an anti-illegal or piracy message, which suggests that music is of intrinsic value and people should be prepared to pay for it, so this may give a conflicting, mixed signal."
There's nothing "conflicting, mixed" about free. To see why this was inevitable:
"A report published last month by the International Federation of Phonographic Industries (IFPI) claimed 40 illegal downloads were made for every legal one in the US. The ratio, believed to be much the same in the UK,"
Now, I don't believe those numbers, necessarily, as I doubt the IFPI even bothered to pretend they weren't exaggerating. But even if in the ballpark, the amount of sharing dominates any other use, including practically everything else that isn't to do with music. If you believe the ISP grumbles, that is.
Time for a new model - the physics is the reality, the economics is the deal, and the legal stuff just has to keep up. BigMac suggests Pandora's Music Genome Project.
Another great quote:
"The US radio industry generates $20 billion a year in revenue and they give the product away for free," he said. "Record labels generate $12 billion a year and they sell their product."
Here's some clues on the new model:
Users can download an unlimited number of songs or music videos if they register at the site and watch online advertisements. The tracks cannot be burned to a CD, but users will be able to transfer music to portable media players equipped with Microsoft Windows digital rights management software, Ford said. However, the service will not work with Apple Computer's computers or its iPod music players.
Funny source for the nitty gritty!
Oh, I forgot to mention -- what's the nexus with FC? That's easy -- all those payment systems that were banking on micropayments from music downloads can close up shop. They should have studied more economics and less marketing.
2nd addition, to stress the move to $0 content:
Sony to buy Sausalito's Grouper
Sony Pictures is expected to announce today that it has acquired Sausalito Internet video-sharing company Grouper for $65 million.
Teaming up with Sony further highlights the role amateur videos -- and the companies that host them -- are having in changing the Hollywood landscape.
Traditional entertainment companies are working with Silicon Valley start-ups to navigate a new, on-demand entertainment world. Tuesday, the popular video-sharing site YouTube announced a new video advertising platform, and its first client is Warner Bros., which is promoting Paris Hilton's debut album.
Grouper's technology allows a user to easily take a video from its site and post it on third-party sites such as a MySpace or Blogger page. Its videos can also be watched on devices other than your personal computer, such as a video iPod.
For more naysaying, see BigPicture as suggested by Frank in comments below.
Chris points to the Court of the Honourable Anna Diggs Taylor, representing the third branch of power in the USA. Justice A. D. Taylor rules the telephone wire tapping programs out of order. That is, illegal.
In summary, she knocked out the "state secrets" defence because all the information needed was already public, and she granted a permanent injunction based on breaches of the law -- FISA -- and the US Constitution and Bill of Rights.
As the case has some bearing on the recent SWIFT breach by US Treasury (probably conducted under the same novel theories inside or outside the law), we present some snippets from Case No. 06-CV-10204:
The President of the United States, a creature of the same Constitution which gave us these Amendments, has undisputedly violated the Fourth in failing to procure judicial orders as required by FISA, and accordingly has violated the First Amendment Rights of these Plaintiffs as well.....In this case, if the teachings of Youngstown are law, the separation of powers doctrine has been violated. The President, undisputedly, has violated the provisions of FISA for a five-year
period. ... VII. The Separation of Powers: The Constitution of the United States provides that “[a]ll legislative Powers herein granted shall be vested in a Congress of the United States. . . .”43 It further provides that “[t]he executive Power shall be vested in a President of the United States of America.”44 And that “. . . he shall take care that the laws be faithfully executed . . . .”45
.... Justice O’Connor concluded that such a citizen must be given Fifth Amendment rights to contest his classification, including notice and the opportunity to be heard by a neutral
decisionmaker. (citation) Accordingly, [Justice O’Connor's] holding was that the Bill of Rights of the United States Constitution must be applied despite authority granted by the AUMF.
She stated that: It is during our most challenging and uncertain moments that our Nation’s commitment to due process is most severely tested; and it is in those times that we must preserve our commitment at home to the principles for which we fight abroad. **** Any process in which the Executive’s factual assertions go wholly unchallenged or are simply presumed correct without any opportunity for the alleged combatant to demonstrate otherwise falls constitutionally short. Hamdi, 542 U.S. at 532, 537. Under Hamdi, accordingly, the Constitution of the United States must be followed.
...The duties and powers of the Chief Executive are carefully listed, including the duty to be Commander in Chief of the Army and Navy of the United States,49 and the Presidential Oath of Office is set forth in the Constitution and requires him to swear or affirm that he “will, to the best of my ability, preserve, protect and defend the Constitution of the United States.”50
...Not only FISA, but the Constitution itself has been violated by the Executive’s TSP. As the court states in Falvey, even where statutes are not explicit, the requirements of the Fourth Amendment must still be met.54 And of course, the Zweibon opinion of Judge Skelly Wright plainly states that although many cases hold that the President’s power to obtain foreign intelligence information is vast, none suggest that he is immune from Constitutional requirements.55
The argument that inherent powers justify the program here in litigation must fail.... Plaintiffs have prevailed, and the public interest is clear, in this matter. It is the upholding of our Constitution.
As Justice Warren wrote in U.S. v. Robel, 389 U.S. 258 (1967):
Implicit in the term ‘national defense’ is the notion of defending those values and ideas which set this Nation apart. . . . It would indeed be ironic if, in the name of national defense, we would sanction the subversion of . . . those liberties . . . which makes the defense of the Nation worthwhile. Id. at 264. IT IS SO ORDERED.
Date: August 17, 2006
Detroit, Michigan

s/Anna Diggs Taylor
ANNA DIGGS TAYLOR
UNITED STATES DISTRICT JUDGE
Some caveats: we probably have to wait for the inevitable appeal, and we don't do law here, we just do FC. And here seems an appropriate moment to finish with Nick's observations on the wider scope:
There is a long-standing controversy about the idea of parliamentary supremacy -- the idea that legislative law trumps all other law. That is currently the dominant theory in England, but the United States holds a contrary view -- here judges review legislative laws against a prior and higher law: a written constitution (and perhaps also against natural law, but that is a subject we won't pursue here).
There will be multiple additional links to the precise case...
Adam over at EC, responding to an entry by Phill, is banging the drum on breach data collection and distribution, which is well needed. I first saw this point in a paper from around 2004, and it has been a well trodden theme in the now popular field of Sec&Econ. All well and good. We need more breach data.
However, collecting the data is not the be-all and end-all. It's not for example what would have saved Enron, which is what Adam alludes to:
SarBox is what we get when we have no data with which to push back.
Sarbanes-Oxley collects lots of data, but doesn't change the problem space, and it arguably makes the problem space worse.
Sarbanes-Oxley is what you get when (a) systems get too complex and (b) businesses don't implement Financial Cryptography techniques to reduce and eliminate those complexities. Under these two conditions, what you get first is fraud -- keep in mind that fraud comes out of complexity and lack of reliable systems. The lack of reliability means the systems can be perverted, and the complexity means the perversions can be hidden.
What you get second is Sarbanes-Oxley, as well-meaning accountants discover that they can set themselves up for a *lot* of work and provide a veneer of cover for frauds like Enron.
In contrast, in FC, we have patterns and methods to tighten up all that transaction stuff. RAH's old saw about it being 2 orders of magnitude cheaper is in the right ballpark, except that it might be conservative. FC creates both reliability -- cryptographically based, real-time, non-pervertable reliability -- and removes complexity, as whole layers at a time can be dispensed with because we can simply write automatic audit processes that show 100% conformance.
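As a flavour of what "automatic audit" means here, consider a hash-chained transaction log. This is only a toy sketch in Python, not Ricardo or any production design, but it shows the shape of the claim: every entry commits to the one before it, so verification is a mechanical pass over 100% of the records rather than a sampled, judgement-laden audit:

```python
import hashlib, json

GENESIS = "0" * 64

def entry_hash(prev_hash: str, record: dict) -> str:
    # Each entry commits to the previous entry's hash, so the whole log is one chain:
    # alter any record and every later hash stops matching.
    payload = (prev_hash + json.dumps(record, sort_keys=True)).encode()
    return hashlib.sha256(payload).hexdigest()

def append(log: list, record: dict) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"record": record, "hash": entry_hash(prev, record)})

def audit(log: list) -> bool:
    # The "automatic audit": recompute the chain and compare. Every entry is
    # checked -- no sampling, no discretion.
    prev = GENESIS
    for entry in log:
        if entry["hash"] != entry_hash(prev, entry["record"]):
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"from": "alice", "to": "bob", "amount": 100})
append(log, {"from": "bob", "to": "carol", "amount": 40})
print(audit(log))                          # True -- the books check out

log[0]["record"]["amount"] = 1_000_000     # a quiet after-the-fact "adjustment"
print(audit(log))                          # False -- caught on the next pass
```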
But these aspects are also why FC didn't get implemented. In a regulated industry, there is incentive to corporate business model cloning (economists call this 'herding'), which leads to cartel behaviour (see paper for another angle) and this then leads to an incentive to raise costs, not lower costs. The banks are a canonical case, as they are so highly regulated that all banks are almost always clones of each other.
If that doesn't make sense, consider it this way. Ask an accountant whether he would recommend a system (say, FC) that would halve his account, or whether he'd recommend a system (like Sarbanes-Oxley) that would double his account.
Add to these the effects of fraud and we find another barrier to FC: the experience of those who work at the systemic level to try to put in simpler, more cost-effective systems that eliminate fraud is that they also run slap-bang into those people who perpetuate fraud. These people are very hard to dislodge, for very good reason: they are making a lot of money out of the complexity and poor reliability of the systems. They are prepared to spend a lot of money keeping systems complex and unreliable.
The answer then is not to increase regulation a la Sarbanes-Oxley, but to decrease regulation, and let things like SOX (the transaction protocol of that name, not Sarbanes-Oxley) or AADS or other similar systems innovate and drive costs and complexity down. Every time the regulation increases, expect more complexity and therefore more fraud and more costs. Decrease regulation and expect the reverse. Simple.
Economists have known this for decades, we don't need to re-invent the wheel in the security industry with calls for this or that regulation.
More rumours on how the US Treasury breached SWIFT: It appears that UST knew about certain SWIFT breaches by insiders in the past and used those infractions as leverage to get access.
This may be in contrast to claims by SWIFT itself that UST prepared warrants for seizure as extortion ploys. Indeed, it has been suggested that not only were no warrants prepared, but that the UST provided no written evidence of any form of due process at all. An interesting question to put to SWIFT, #1: show us the evidence!
It gets better: SWIFT was breached not once, not twice, but three times!
Rumour has it that two other agencies of unknown character had also breached the SWIFT record set independently of UST, and that they were better at it than UST in that they really knew how to use the information. The timeline of these breaches is unclear.
At least one of these agencies has found all sorts of interesting information and has used it -- which is how the secret was outed. They apparently have done the datamining thing and fed the results into various cases. It's what you do with data, right? Then, conversations with those implicated groups (read: wall street firms) have led to a suspicion that more than just domestic data was involved. At least one company with rock-solid profitability has already proceeded on an "orderly exit from the market," after having been given "the talk." The people involved read like a who's who of the mothers of the Texas / Washington DC oil industry, which raises the idle speculation of political connections and insider trading -- were there suspiciously good trading records in oil? And was this found in the SWIFT analysis? And what sort of agency takes on that power group and lives to tell the tale?
All which rumours might point to TLA2 being a US agency with interests domestic rather than foreign. Likely candidates we could speculate on given the financial regulatory interest would be the SEC or the Federal Reserve.
TLA3 remains obscure. But, once we get to 3 agencies, we can stop counting and also stop pretending that there is any governance in place. SWIFT is an open book for regulators in the US at least, and that makes it just another smoking gun in the never-ending Spy v. Spy game. At the least, this suggests question #2 for SWIFT: how many agencies have your data?
In related gossip, SWIFT itself has conducted an internal audit, perhaps in response to the above rumour of leverage, or perhaps out of caution. It has apparently found additional multiple breaches across the lines -- uncovering misuses of data by employees.
Insiders suggest a strategy of cleaning house before outside regulators come in. Do we audit first and then Ajax, or is it the other way around? Sustained pressure on privacy and banking regulators in Europe has made intervention a non-trivial risk; the latest rumour there is that the Belgian privacy regulator is taking the lead on the case for all EU privacy regulators, and they are all now working through SWIFT's response to the first round of questioning. The question of whether European companies are alive to the risks of "Restaurant economics," a.k.a. industrial espionage, remains an open one.
Question #3 for SWIFT: why didn't your prior and no doubt expensive audits uncover signs of data abuses? (Readers of FC already know the answer to that, but SWIFT might not, so it is worth making them think about it.)
Also, there are scurrilous suggestions that the SWIFT breach has triggered a wave of copycat audits across FIs with a wide network of users. Major banks take note -- you may now want to go through and audit how your data has been used and misused, and we ain't talking about Sarbanes-Oxley. "One more time, with feeling." Many institutions are apparently already doing this, which has led to a surge of firings and hirings where misuse of data has been found. Some of the breaches relate to USG as beneficiary, others do not, but details are of course scant. (Companies that are mentioned as having surges in firings/hirings other than SWIFT include three household names, leaders in their respective sectors.)
[ Search for more on SWIFT breach. ]
FC'07: Financial Cryptography and Data Security
http://fc07.ifca.ai/
Eleventh International Conference
February 12-15, 2007
Lowlands, Scarborough, Trinidad and Tobago
Submissions Due Date: October 9, 2006, 11:59pm, EDT (UTC-4)
Program Chair: Sven Dietrich (Carnegie Mellon University)
General Chair: Rafael Hirschfeld (Unipay)
At its 11th year edition, Financial Cryptography and Data Security (FC'07) is a well established and major international forum for research, advanced development, education, exploration, and debate regarding security in the context of finance and commerce. We will continue last year's augmentation of the conference title and expansion of our scope to cover all aspects of securing transactions and systems. These aspects include a range of technical areas such as: cryptography, payment systems, secure transaction architectures, software systems and tools, fraud prevention, secure IT infrastructure, and analysis methodologies. Our focus will also encompass financial, legal, business, and policy aspects. Material both on theoretical (fundamental) aspects of securing systems, and on secure applications and real-world deployments will be considered.
...
The conference goal is to bring together top cryptographers, data-security
specialists, and computer scientists with economists, bankers, implementers,
and policy makers. Intimate and colorful by tradition, the FC'07 program
will feature invited talks, academic presentations, technical
demonstrations, and panel discussions.
This conference is organized annually by the International Financial
Cryptography Association (IFCA).
Original papers, surveys, and presentations on all aspects of financial and
commerce security are invited. Submissions must have a strong and visible
bearing on financial and commerce security issues, but can be
interdisciplinary in nature and need not be exclusively concerned with
cryptography or security. Possible topics for submission to the various
sessions include, but are not limited to:
Anonymity and Privacy
Auctions
Audit and Auditability
Authentication and Identification, including Biometrics
Certification and Authorization
Commercial Cryptographic Applications
Commercial Transactions and Contracts
Digital Cash and Payment Systems
Digital Incentive and Loyalty Systems
Digital Rights Management
Financial Regulation and Reporting
Fraud Detection
Game Theoretic Approaches to Security
Identity Theft, Phishing and Social Engineering
Infrastructure Design
Legal and Regulatory Issues
Microfinance and Micropayments
Monitoring, Management and Operations
Reputation Systems
RFID-Based and Contactless Payment Systems
Risk Assessment and Management
Secure Banking and Financial Web Services
Securing Emerging Computational Paradigms
Security and Risk Perceptions and Judgments
Security Economics
Smart Cards and Secure Tokens
Trust Management
Trustability and Trustworthiness
Underground-Market Economics
Virtual Economies
Voting system security
For those interested, last year's proceedings are available from Springer.
Submission Instructions
Submission Categories
FC'07 is inviting submissions in four categories: (1) research papers, (2)
systems and applications presentations, (3) panel sessions, (4) surveys. For
all accepted submissions, at least one author must attend the conference and
present the work.
Research Papers
Research papers should describe novel scientific contributions to the field,
and they will be subject to rigorous peer review. Accepted submissions will
be included in the conference proceedings to be published in the
Springer-Verlag Lecture Notes in Computer Science (LNCS) series after the
conference, so the submissions must be formatted in the standard LNCS format
(15 page limit).
Systems and Application Presentations
Submissions in this category should describe novel or successful systems
with an emphasis on secure digital commerce applications. Presentations may
concern commercial systems, academic prototypes, or open-source projects for
any of the topics listed above. Where appropriate, software or hardware
demonstrations are encouraged as part of the presentations in these
sessions. Submissions in this category should consist of a short summary of
the work (1-6 pages in length) to be reviewed by the Program Committee,
along with a short biography of the presenters. Accepted submissions will be
presented at the conference (25 minutes per presentation), and a one-page
abstract will be published in the conference proceedings.
Panel Sessions
Proposals for panel sessions are also solicited, and should include a brief
description of the panel as well as prospective participants. Accepted panel
sessions will be presented at the conference, and each participant will
contribute a one-page abstract to be published in the conference
proceedings.
Surveys
A limited number of survey presentations may also be included in the
program. We encourage submissions that summarize the current state of the
art on any well-defined subset of the above listed submission topics. A
limited description of visions on future directions of research in these
topics would also be appreciated. Survey submissions can be significantly
shorter than research paper submissions.
Preparation Instructions
Submissions to the research papers, systems/application presentation
categories, and surveys must be received by the due date. Papers must be
formatted in standard PostScript or PDF format. Submissions in other formats
will be rejected. All papers must be submitted electronically according to
the instructions and forms found on this web site and at the submission
site.
Authors should provide names and affiliations at submission time, and have
the option of including or omitting names and affiliations in their submitted
papers, which must include on their first page the title of the paper, a
brief abstract, and a list of topical keywords. Accepted submissions will be
included in the conference proceedings to be published in the
Springer-Verlag Lecture Notes in Computer Science (LNCS) series after the
conference, so the submissions must be formatted in the standard LNCS format
(15 page limit). Authors of accepted submissions will be required to
complete and sign an IFCA copyright form. A pre-proceedings volume
containing preliminary versions of the papers will be distributed at the
conference.
Questions about all conference submissions should be directed to the Program
Chair at fc07chair@cert.org
Paper Submission
Authors should only submit work that does not substantially overlap with
work that is currently submitted or has been accepted for publication to a
conference with proceedings or a journal.
Paper submission will occur via website to be announced at a later time.
The Rump Session
FC'07 will also include the popular "rump session" held on one of the
evenings in an informal, social atmosphere. The rump session is a program of
short (5-7 minute), informal presentations on works in progress,
off-the-cuff ideas, and any other matters pertinent to the conference. Any
conference attendee is welcome to submit a presentation to the Rump Session
Chair (to be announced). This submission should consist of a talk title, the
name of the presenter, and, if desired, a very brief abstract. Submissions
may be sent via e-mail, or submitted in person through the Monday of the
conference.
Associated Workshop
There will be a Usability Workshop held in conjunction with FC 2007. Details
will be published at a later time.
Program Committee
Alessandro Acquisti, Carnegie Mellon University
Jon Callas, PGP Corporation
Yvo Desmedt, University College London
Giovanni di Crescenzo, Telcordia Technologies
Roger Dingledine, The Freehaven Project
Bernhard Esslinger, Deutsche Bank
Philippe Golle, PARC
Klaus Kursawe, Philips Research Eindhoven
Arjen Lenstra, EPFL
Patrick McDaniel, Penn State University
Tatsuaki Okamoto, NTT
Kazue Sako, NEC
Radu Sion, SUNY Stony Brook
Stuart Stubblebine, Stubblebine Consulting
Paul Syverson, NRL
Mike Szydlo, RSA
Jonathan Trostle, ASK Consulting and Research
Moti Yung, RSA & Columbia University
Yuliang Zheng, University of North Carolina at Charlotte
Important Dates:
Paper Submission: October 9, 2006
Notification: December 11, 2006
Pre-Proceedings: January 11, 2007
Conference dates: February 12-15, 2007
Post Proceedings: April 10, 2007
SWIFT was extorted to hand over the data. According to two Austrian reports:
"Einverständnis wurde abgepresst" Per Gerichtsbeschluss sollte der gesamte Datenverkehr in der US-Zentrale von SWIFT beschlagnahmt werden, falls SWIFT nicht freiwillig eine bestimmte Zahl von Datensätzen liefere - "das Einverständnis wurde abgepresst", sagt Gall.
“Consent was extorted”: By court order, the entire data traffic at SWIFT's US centre was to be seized if SWIFT did not voluntarily supply a certain number of data records -- “the consent was extorted,” says Gall.
Also, here (left in German, right in Googlish, sorry about that) and also see earlier reports (1, 2, 3, 4, 5) ... :
Mit Beschlagnahme gedroht Als sich SWIFT zunächst weigerte, drohten die Amerikaner mit der Beschlagnahme der in den USA gespeicherten SWIFT-Daten. Im US-Bundesstaat Virginia befindet sich nämlich eines der drei Hauptrechenzentren des weltweiten Finanztransaktionssystems SWIFT. Mit der Verhinderung von noch Schlimmerem begründet SWIFT nun, dass man alle gewünschten Daten freiwillig übermittelt habe.
Threatened with seizure: When SWIFT initially refused, the Americans threatened to seize the SWIFT data stored in the USA. One of the three main data centres of the worldwide financial transaction system SWIFT is located in the US state of Virginia. SWIFT now justifies handing over all the requested data voluntarily as having prevented something worse.
SWIFT director Günther Gall made those comments at a recent "crisis meeting" held by Austrian banks. He reported that the US Treasury prepared warrants for the seizure of the entire data center, and then offered SWIFT the chance to cooperate by means of a slightly less draconian handover of data.
There's no doubt that the US Government can have this data if it wants. It has the power, and SWIFT is an easy touch, so say at least the Swiss
Stellt sich die Frage, wieso die Swift dem amerikanischen Finanzministerium gefügig Folge leistete. «Die Swift ist leicht erpressbar», mutmasst Mark Pieth, der bereits die Untersuchungskommission der Uno im «Oil for Food» -Skandal leitete, «denn wenn die US-Behörden der Swift die Lizenz für ihre amerikanische Niederlassung entziehen, ist sie nicht mehr funktionsfähig.» Pieth wäre auch nicht erstaunt, wenn der amerikanische Geheimdienst sich im selben Atemzug Einblick in andere Finanztransaktions-Plattformen beschafft hätte. Laut dem Bericht der «New York Times» bestätigten Offizielle der US-Behörden zudem, limitierte Abkommen mit Kreditkarten- oder Transaktionsunternehmen wie Western Union eingegangen zu sein. Die Angelegenheit stellt für Pieth jedenfalls einen eindeutigen Eingriff in die Freiheitsrechte der Bürger dar, und er ist gespannt, wie sich die Schweizer Behörden rechtfertigen werden. The question arises why SWIFT complied so readily with the American Treasury. "*SWIFT is easily extortable*," surmises Mark Pieth, who previously led the UN commission of inquiry into the "Oil for Food" scandal, "because if the US authorities withdraw the licence for SWIFT's American operation, it can no longer function." Pieth would also not be surprised if the American intelligence services had, in the same breath, obtained access to other financial transaction platforms. According to the New York Times report, US officials moreover confirmed having entered into limited agreements with credit card or transaction companies such as Western Union. For Pieth, the affair in any case represents a clear interference with the civil liberties of citizens, and he is curious to see how the Swiss authorities will justify themselves.
In this case, it used the power, and hence we now have another reason for the cover-up -- if true, the US government again exceeded the bounds of reasonable civilised behaviour. It brings to mind how Judge Lewis A. Kaplan recently ruled in the New York Southern District court:
There, Judge Kaplan was referring to the US Justice Department's documented technique of pressuring companies to "cooperate" by hanging their employees out to dry. Unfortunately, there was nobody at the American data center to represent the world's wire senders against the seizure "incentive."
This is a scenario which is all too routine in the money world which is why we have rather strict banking secrecy laws. More:
Offiziell Bescheid wusste in Österreich offenbar nur ein Manager der Raiffeisen Zentral Bank und SWIFT Aufsichtsrat Günther Gall. Er hatte bereits 2001 der Weitergabe von österreichischen Transaktionsdaten an die CIA zugestimmt. Angeblich wurden die österreichischen Banken über diese, nach österreichischem Recht illegalen Datenweitergaben, nicht informiert. Nach fast fünf Jahren wird es nun Zeit, dass sich die Banken darum kümmern, wie mit unseren österreichischen Finanzdaten umgegangen wird, die sie an SWIFT weitergeben. Denn es sind die Banken, die garantieren müssen, dass Dienstleister den gleichen Standard beim Datenschutz garantieren, wie sie selbst mit dem Bankgeheimnis versichern.
Officially, apparently only one manager in Austria knew: Günther Gall of Raiffeisen Zentralbank, who sits on the SWIFT supervisory board. He had already agreed in 2001 to the passing of Austrian transaction data to the CIA. Allegedly the Austrian banks were not informed about these transfers of data, which are illegal under Austrian law. After nearly five years it is high time the banks paid attention to how our Austrian financial data, which they pass on to SWIFT, is handled. For it is the banks that must guarantee that service providers uphold the same standard of data protection that the banks themselves promise under banking secrecy.
Which is a grumble that the Austrian responsibilities towards secrecy of data have been breached.
There is yet another problem. Most of the world is looking at this as a privacy issue. (Indeed Privacy International is right on the case, and it needs no crystal ball to predict who's the top choice for this year's Big Brother award.) Yet, this is missing the point. As I've pointed out, the lack of governance means that the information will be leaked in due course; not to you or me, as we can't afford the price of illegal data, and we may not want to anyway. But to someone; and due to the strong secrecy of their operations, we won't know who it is being leaked to.
René Pfeiffer reports in EDRIgram:
This isn't so much about privacy of the individual, as privacy of the business. A.k.a. industrial espionage, on a state-run scale.
If a large enough deal is being done, where some US champion (say, Boeing) is up against some European champion (say Airbus) for some large bid (say 100 A380s for China), then we can expect _governance be damned_. Maybe the US Treasury would stand up and defend Airbus's right to privacy, against Boeing's corporate survival, ... but I wouldn't be betting *my* factory on it. More:
Wirtschaftsspionage mit EU-Finanzdaten Bisher wurde nur vermutet, die USA könnten ihr Überwachungssystem im Namen der Terrorabwehr auch dazu nutzen, Europas Wirtschaft auszuspionieren. Mit dem Abhören der Finanzdaten habe sich das nun betätigt, so ein Experte der Internationalen Handelskammer.
"Für mich kommt diese Wendung nicht überraschend. Vom Überwachen des internationalen Telefonieverkehrs bis zur Kontrolle des Finanzverkehrs ist es ja nur ein kleiner Schritt. Was im ECHELON-Untersuchungsausschuss des EU-Parlaments noch Vermutung war, hat sich damit bestätigt", sagt Maximilian Burger-Scheidlin von der Internationalen Handelskammer [ICC] in Wien.
ECHELON lässt grüßen
Die Vermutungen im Untersuchungsausschuss, die USA würden ihr elektronisches Überwachungssystem ECHELON auch gezielt dazu benutzen, Europas Wirtschaft auszuspionieren, sind für Burger-Scheidlin mit der Affäre SWIFT nun real geworden.
Man könne eigentlich dankbar sein, denn nun lägen handfeste Indizien vor, dass US-Geheimdienste die europäischen Finanztransfers systematisch durchsuchten. "Wir hoffen nun, dass die Regierungen Europas endlich aktiv werden, nachdem sie nun seit vielen Jahren Bescheid wissen", sagt Müller-Scheidlin, dessen Spezialgebiet bei der ICC die Abwehr von Wirtschaftsspionage ist.
Restaurant economics with EU financial data: Until now it was only suspected that the USA might also use its surveillance apparatus, in the name of counter-terrorism, to spy on Europe's economy. With the tapping of the financial data, says an expert of the International Chamber of Commerce, that has now been confirmed.
"For me this turn of events comes as no surprise. From monitoring international telephone traffic to monitoring financial traffic is only a small step. What was still conjecture in the EU Parliament's ECHELON committee of inquiry has now been confirmed," says Maximilian Burger-Scheidlin of the International Chamber of Commerce [ICC] in Vienna.
ECHELON says "I'm not dead yet!"
The suspicions aired in the committee of inquiry -- that the USA would also deliberately use its electronic surveillance system ECHELON to spy on Europe's economy -- have, for Burger-Scheidlin, now become real with the SWIFT affair.
One could actually be grateful, because there is now hard evidence that US intelligence services have been systematically searching European financial transfers. "We now hope that the governments of Europe will finally act, after having known about this for many years," says Burger-Scheidlin, whose speciality at the ICC is defence against industrial espionage.
And More. "Restaurant economics" is googlish for "industrial espionage" it seems. Luckily, it's all in German which means the Bush Administration can wave off these funny krauts and their silly terms, and pretend this is just another case of New York Times treachery and treason.
It remains to be seen if the International Chamber of Commerce will carry this fight to the enemy. What was last month's curiosity -- the European Commission being quite happy to sign over so much data in one-way exchanges on individuals -- is now replaced with the new curiosity of whether they will do the same to their companies.
Darren points to a development reminiscent of satellite TV: the codes to protect the European Galileo satellite's positioning signals have been cracked by a team from Cornell University in USA. Full story below:
Cracking the secret codes of Europe's Galileo satellite
Members of Cornell's Global Positioning System (GPS) Laboratory have cracked the so-called pseudo random number (PRN) codes of Europe's first global navigation satellite, despite efforts to keep the codes secret. That means free access for consumers who use navigation devices -- including handheld receivers and systems installed in vehicles -- that need PRNs to listen to satellites.
The codes and the methods used to extract them were published in the June issue of GPS World.
The navigational satellite, GIOVE-A (Galileo In-Orbit Validation Element-A), is a prototype for 30 satellites that by 2010 will compose Galileo, a $4 billion joint venture of the European Union, European Space Agency and private investors. Galileo is Europe's answer to the United States' GPS.
Because GPS satellites, which were put into orbit by the Department of Defense, are funded by U.S. taxpayers, the signal is free -- consumers need only purchase a receiver. Galileo, on the other hand, must make money to reimburse its investors -- presumably by charging a fee for PRN codes. Because Galileo and GPS will share frequency bandwidths, Europe and the United States signed an agreement whereby some of Galileo's PRN codes must be "open source." Nevertheless, after broadcasting its first signals on Jan. 12, 2006, none of GIOVE-A's codes had been made public.
In late January, Mark Psiaki, associate professor of mechanical and aerospace engineering at Cornell and co-leader of Cornell's GPS Laboratory, requested the codes from Martin Unwin at Surrey Space Technologies Ltd., one of three privileged groups in the world with the PRN codes.
"In a very polite way, he said, 'Sorry, goodbye,'" recalled Psiaki. Next Psiaki contacted Oliver Montenbruck, a friend and colleague in Germany, and discovered that he also wanted the codes. "Even Europeans were being frustrated," said Psiaki. "Then it dawned on me: Maybe we can pull these things off the air, just with an antenna and lots of signal processing."
Within one week Psiaki's team developed a basic algorithm to extract the codes. Two weeks later they had their first signal from the satellite, but were thrown off track because the signal's repeat rate was twice that expected. By mid-March they derived their first estimates of the code, and -- with clever detective work and an important tip from Montenbruck -- published final versions on their Web site (http://gps.ece.cornell.edu/galileo) on April 1. The next day, NovAtel Inc., a Canadian-based major manufacturer of GPS receivers, downloaded the codes from the Web site and within 20 minutes began tracking GIOVE-A for the first time.
Galileo eventually published PRN codes in mid-April, but they weren't the codes currently used by the GIOVE-A satellite. Furthermore, the same publication labeled the open source codes as intellectual property, claiming a license is required for any commercial receiver. "That caught my eye right away," said Psiaki. "Apparently they were trying to make money on the open source code."
Afraid that cracking the code might have been copyright infringement, Psiaki's group consulted with Cornell's university counsel. "We were told that cracking the encryption of creative content, like music or a movie, is illegal, but the encryption used by a navigation signal is fair game," said Psiaki. The upshot: The Europeans cannot copyright basic data about the physical world, even if the data are coming from a satellite that they built.
"Imagine someone builds a lighthouse," argued Psiaki. "And I've gone by and see how often the light flashes and measured where the coordinates are. Can the owner charge me a licensing fee for looking at the light? … No. How is looking at the Galileo satellite any different?"
Adam pointed to more here and slashdot.
In the breach that keeps on breaching, I suggested that the reason the Bush administration was nervous of the program was that the Europeans might be embarrassed via public opinion to put in place real governance. I was close (dead link to "Piling On the New York Times With a Scoop," Howard Kurtz, WaPo):
Keller said he spent more than an hour in late May listening to Treasury Secretary John Snow argue against publication of the story. He said that he also got a call from Negroponte, the national intelligence czar, and that three former officials also made the case to Times editors: Tom Kean and Lee Hamilton, chairmen of the 9/11 commission, and Democratic Rep. John Murtha of Pennsylvania -- an outspoken critic of the war in Iraq. "The main argument they made to me, extensively and at length, besides that the program is valuable and legitimate, was that there are a lot of banks that are very sensitive to public opinion, and if this sees the light of day, they may stop cooperating," Keller said.
What useful reason could they have for keeping it secret? If it was legal, the banks will cooperate. I think we can clearly state that banks will generally operate within the law, and will always side with the government over the interests of their customers.
As far as the banks are concerned, their interests are covered as long as a) it is legal, and b) all banks equally have to comply. So, keeping it secret was either directed at covering up potential illegality or a lack of legality, or some particular discrimination that was going on. As there has been no real hint of any discrimination here (SWIFT by definition serving all banks), it would be the former. (And, what does he mean by "a lot of banks?")
He acknowledged, as did the Times article, that there was no clear evidence that the banking program was illegal. But, he said, "there were officials who talked to us who were uncomfortable with the legality of this program, and others who were uncomfortable with the sense that what started as a temporary program had acquired a kind of permanence."
So what we have here is a programme of dubious legality, where insiders know they have transgressed, and would like the law to be clarified and updated. So they themselves are not at risk, and evidently the banks feel the same way.
"It's a tough call; it was not a decision made lightly," said Doyle McManus, the Los Angeles Times' Washington bureau chief. "The key issue here is whether the government has shown that there are adequate safeguards in these programs to give American citizens confidence that information that should remain private is being protected." ... McManus said the other factor that tipped the paper's decision to publish was the novel approach government was using to gather data in another realm without warrant or subpoena."Police agencies and prosecutors get warrants all the time to search suspects' houses, and we don't write stories about that," he said. "This is different. This is new. And this is a process that has been developed that does not involve getting a specific warrant. It's a new and unfamiliar process."
That's raising an interesting question. The question of the maybe-subpoena is addressed here:
The Administration says the program is legal because every month the Treasury Department issues an administrative subpoena, basically a subpoena you write yourself without seeing a judge.
Ryan Single also offered a theory on how the newspapers wrote their own administrative subpoenas. More odd remarks from McManus of the LATimes:
"I always start with the premise that the question is, why should we not publish? Publishing information is our job. What you really need is a reason to withhold information."
It's a point. As we dig further we discover the old black helicopter theories surging up:
The scandal here is not government over-reach, [Blum] tells me. The scandal is the pitiful reluctance of this administration (and others before it) to get serious about the problem. Bankers, Blum explained, "have fended off every conceivable rule that would really be effective. Why are we pandering to them if we say we are in such a desperate situation?" ... The monitoring system described by the Times seems unexceptional to Blum. Indeed, his complaint is that it's so narrowly focused that it mostly harvests empty information. "Meanwhile, the biggest purveyor of terrorist money, as everyone knows, are accounts in Saudi Arabia," Blum observes. "Nobody will deal with it because the Saudis own half of America." An exaggeration, but you get his point. Blum knows the offshore outposts where US corporations and wealthy Americans dodge taxes or US regulatory laws. Congress could shut them tomorrow if it chose. Instead, it keeps elaborating new loopholes that enable the invention of exotic new tax shelters for tainted fortunes. The latest to flourish, he says, are shell corporations -- freely chartered by states.
"The GAO says this device is being used for money laundering by everyone else in the world," Blum says. "Congress ought to start there." He is not holding his breath.
Which would be the house of cards defence. Here's another card that is showing signs of bending:
The U.S. National Security Agency asked AT&T Inc. to help it set up a domestic call monitoring site seven months before the Sept. 11, 2001 attacks, lawyers claimed June 23 in court papers filed in New York federal court. The allegation is part of a court filing adding AT&T, the nation's largest telephone company, as a defendant in a breach of privacy case filed earlier this month on behalf of Verizon Communications Inc. and BellSouth Corp. customers. The suit alleges that the three carriers, the NSA and President George W. Bush violated the Telecommunications Act of 1934 and the U.S. Constitution, and seeks money damages.
``The Bush Administration asserted this became necessary after 9/11,'' plaintiff's lawyer Carl Mayer said in a telephone interview. ``This undermines that assertion.''
People all around the world bent over backwards to help the USA deal with 9/11. And let's not forget a substantial number of people killed were foreigners -- expatriate workers in the towers at the time.
If it is true that the spying programmes were begun before 9/11, that might shake the faith a bit. For the unshakeably faithful, see the relevant complaint and some skepticism here (tip to Adam and 27BStroke6):
Within eleven (11) days of the onset of the Bush administration, and at least seven (7) months prior to the attacks of September 11, 2001, defendant ATT began development of a center for monitoring long distance calls and internet transmissions and other digital information for the exclusive use of the NSA.
Why are we doing all this? Eavesdropping. We know it is a present danger, but clarity is lacking. We need to figure out what capabilities the agencies have and how far they spread, in order to inform future designs.
Fearghas points to more graphical evidence of risk practices from the Lifeboat station in Harwich.
For those bemused by this attention -- a brief rundown of "Roughs Tower," a.k.a. Sealand. Some 5 or so years ago, some colourful libertarian / cypherpunk elements started a "data haven" called Havenco running in this "claimed country". Rumour has it that after the ISP crashed and burnt, all the servers were moved up the Thames estuary to London.
Not wishing to enter into the discussion of whether this place was *MORE* risky or less risky ... Chandler asks:
What kind of country doesn't have a fire department? One that doesn't plan on having a fire, as evidenced by the fact that Sealand/HavenCo didn't have fire insurance.
Well, as they were a separate jurisdiction, they probably hadn't got around to (ahem) legislating an insurance act? Or were you thinking of popping into the local office in Harwich and picking up a home owner's policy?
Peter Gutmann asks:
Do you have any figures on how many security people your self-signed certificate is turning away? I'd be interested in knowing whether the indistinguishable-from-placebo effect of SSL certs also extends to a site used mainly by security people.
I have no idea! Anybody?
This emerging threat has sent a wave of fear through the banks. Different strategies have been formulated and discussed in depth, and just this month the first roll-outs have been seen in Germany and Austria. This information cries out for release as there are probably 10,000 other banks out there that would have to go through and do the work again.
Philipp Gühring has collected the current best understanding together in a position paper entitled "Concepts against Man-in-the-Browser Attacks."
Abstract. A new threat is emerging that attacks browsers by means of trojan horses. The new breed of trojan horses can modify transactions on-the-fly, as they are formed in browsers, and still display the user's intended transaction to her. Structurally they are a man-in-the-middle attack between the user and the security mechanisms of the browser. Distinct from phishing attacks, which rely upon similar but fraudulent websites, these new attacks cannot be detected by the user at all, as they use real services, the user is correctly logged in as normal, and there is no difference to be seen. The WYSIWYG concept of the browser is successfully broken. No advanced authentication method (PIN, TAN, iTAN, Client certificates, Secure-ID, SmartCards, Class3 Readers, OTP, ...) can defend against these attacks, because the attacks work on the transaction level, not on the authentication level. PKI and other security measures are simply bypassed, and are therefore rendered obsolete.
If you are not aware of these emerging threats, you need to be. You can either get it from sellers of private information or you can get it from open source information sharing circles like FC++ !
Dave Birch says let's all get it off:
I've got a very simple, and absolutely foolproof, plan to reduce payment card fraud (much in the news recently) to zero. It's based on ... So here goes: Change the law. Have the government pass a bill that says that, as from 1st January 2011, it won't be against the law to use someone else's payment card. Result: on 1st January 2011, card fraud falls to zero because there won't be any such thing as card fraud.
This has two benefits, both of which greatly increase the net welfare.
Firstly, it would stimulate competition between payment card companies to provide cards that could not be used by anyone other than the rightful owner.
OK, logical, coherent and a definite brain teaser. Much of the underlying reason that naked payments waft comfortably around inside the network is that the inside network is built of corporations that rely on the misuse of a payment, naked or otherwise, being a crime. With such strong criminal punishments in place, they can push the naked and vulnerable payments around.
Before you discount the idea totally, consider this: it is already in operation to some extent. In the open governance payments world, there is no effective "law" operating that makes it "illegal" to use some account or other. Rather, the providers live in what we might term the "open governance" regime, and there, they use a balance of techniques to defend themselves and their customers. Those techniques refer often to contract laws, but try not to rely on criminal laws.
Does it work? I think so. Costs are lower; most such systems operate at under 1% transaction fees whereas the regulated competitors operate around 2-5%. P2P fraud seems lower, but unfortunately nobody talks about the fraud rates that much (and in this way, the open governance world mirrors the regulated world), so it is difficult to know for sure. Successful attacks appear lower than with regulated US/UK systems, although not lower than mainland Europe. Possibly this is a reflection of the lack of anyone backstopping them, and the frequency of unsuccessful attacks giving lots of practice.
One thing's for sure - the open governance providers would be quite happy to get rid of that law, as they don't expect to benefit from it anyway.
Probably a useful area to research - although I get the feeling that nobody in the regulated world wants to honour the alternative with an admission, and the same scorn exists in the governed world, so a researcher would have to be careful not to give the game away.
[Anne & Lynn Wheeler do a guest post to introduce a new metaphor! Editorial note: I've done some minor editorial interpretations for the not so eclectic audience.]
From the ISO, a new standard aims to ensure the security of financial transactions on the Internet:
ISO 21188:2006, 'Public Key Infrastructure for financial services - practices and policy framework', offers a set of guidelines to assist risk managers, business managers and analysts, technical designers and implementers and operational management and auditors in the financial services industry.
My two bits [writes Lynn], in light of the recent British Chip&PIN vulnerability thread, is to consider another metaphor for viewing the session authentication paradigm: such protocols tend to leave the transaction naked and vulnerable.
In the early 1990s, we had worked on the original payment gateway for what came to be called e-commerce 1, 2 (as a slight aside, we also assert it could be considered the first SOA implementation 3 - Token-ring vs Ethernet - 10 years later ).
To some extent part of the transaction vulnerability analysis for x9.59 transaction work done in the mid-90s was based on analysis and experience with that original payment gateway as it was implemented on the basis of the session-oriented paradigm 4.
This newer work resulted in something that very few other protocols did -- defined end-to-end transactions with strong authentication. Many of the other protocols would leave the transaction naked and vulnerable at various steps in the processing. For example, session-oriented protocols would leave the entire transaction naked and vulnerable. In other words, the bytes that represent the transaction would not have a complete end-to-end strong authentication related to exactly that transaction, and therefore leave it naked and vulnerable for at least some part of the processing.
This then implies that the complete end-to-end business process has to be heavily armored and secured, and even minor chinks in the business armoring would then result in exposing the naked transaction to the potential for attacks and fraud.
If outsider attacks aren't enough, naked transactions are also extremely vulnerable to insider attacks. Nominally, transactions will be involved in a large number of different business processes, exposing them to insider attacks at every step. End-to-end transactions including strong authentication armors the actual transaction, and thus avoids leaving the transaction naked and vulnerable as it travels along a vast array of processing steps.
The naked transaction paradigm also contributes to the observation that something like seventy percent of fraud in such environments involves insiders. End-to-end transactions with strong authentication (armoring the actual transaction) then also alleviate the need for enormous amounts of total business process armoring. As long as we find it necessary to protect naked and vulnerable transactions, we inevitably find that absolutely no chinks in the armor can be allowed, resulting in expensive implications for the business processing - the people and procedures employed.
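To make the armoring concrete, here is a minimal sketch of a transaction that carries its own end-to-end authentication. This is an illustration only, not x9.59 itself: the field layout and amounts are invented, and Ed25519 (via the pyca/cryptography package) merely stands in for whatever signature scheme a real deployment would specify.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # the signing key registered with the account authority when the account was opened
    signing_key = ed25519.Ed25519PrivateKey.generate()
    public_key = signing_key.public_key()

    # the transaction itself -- roughly the 60-80 bytes Lynn mentions
    tx = b"PAY|from=123456789|to=987654321|amount=42.00|ccy=USD|seq=17"

    signature = signing_key.sign(tx)     # the authentication travels with the bytes, end to end

    public_key.verify(signature, tx)     # any node in the chain can check the armour

    # tampering anywhere along the way -- by outsider or insider -- is caught
    tampered = tx.replace(b"amount=42.00", b"amount=4200.00")
    try:
        public_key.verify(signature, tampered)
    except InvalidSignature:
        print("altered transaction rejected")

The point is simply that every processing step verifies the signature over exactly the bytes it acts on, so there is no step at which the bare transaction is trusted without its armour.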
The x9a10 working group (for what became the x9.59 financial standard) was given the requirement to preserve the integrity of the financial infrastructure for all retail payments. This meant not only having countermeasures to things like replay attacks (static data that could be easily skimmed and resent), but also having end-to-end transaction strong authentication (eliminating the vulnerabilities associated with having naked and vulnerable transactions at various points in the infrastructure).
The x9.59 financial standard for all retail payments then called for armoring and protecting all retail transactions. This then implied the business rule that account numbers used in x9.59 transactions could not be used in transactions that didn't have end-to-end transaction strong authentication. This eliminated the problem with knowledge leakage; if the account number leaked, it no longer represents a vulnerability. I.e. an account number revealed naked was no longer vulnerable to fraudulent transactions 5.
Part of the wider theme on security proportional to risk is that if the individual transactions are not armored then it can be extraordinarily expensive to provide absolutely perfect total infrastructure armoring to protect naked and vulnerable transactions. Session-hiding cryptography especially is not able to absolutely guarantee that naked, vulnerable transactions have 100% coverage and/or are protected from all possible attacks and exploits (including insider attacks) 6.
There was yet another issue with some of the payment-oriented protocols in the mid-90s looking at providing end-to-end strong authentication based on digital signature paradigm. This was the mistaken belief in appending digital certificates as part of the implementation. Typical payment transactions are on the order of 60-80 bytes, and the various payment protocols from the period then appended digital certificates and achieved a payload bloat of 4k to 12k bytes (or a payload bloat of one hundred times). It was difficult to justify an enormous end-to-end payload bloat of one hundred times for a redundant and superfluous digital certificate, so the protocols tended to strip the digital certificate off altogether, leaving the transaction naked and vulnerable during subsequent processing.
My response to this was to demonstrate that it is possible to compress the appended digital certificates to zero bytes, opening the way for x9.59 transactions with end-to-end strong authentication based on digital signatures. Rather than viewing x9.59 as using certificate-less digital signatures for end-to-end transaction strong authentication, it can be considered that an x9.59 transaction appends a compressed zero-byte digital certificate to address the severe payload bloat problem 7.
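Put differently: since the account authority registered the public key when the account was opened, the relying party looks the key up by account number and nothing certificate-shaped needs to travel with the payment. A back-of-envelope version of the bloat arithmetic, with illustrative figures only:

    TX_BYTES   = 70          # a typical retail payment, per the 60-80 byte figure above
    SIG_BYTES  = 64          # e.g. an Ed25519-sized signature
    CERT_BYTES = 8 * 1024    # mid-range of the 4k-12k certificates of the period

    with_cert    = TX_BYTES + SIG_BYTES + CERT_BYTES
    without_cert = TX_BYTES + SIG_BYTES

    print(with_cert // TX_BYTES)      # ~118x the bare transaction: the "payload bloat of one hundred times"
    print(without_cert / TX_BYTES)    # ~1.9x: signature only, the certificate "compressed" to zero bytes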
To return briefly to Britain's Chip&Pin woes, consider that the issue of SDA (static data authentication vulnerable to replay attacks) or DDA (countermeasure to replay attacks) is somewhat independent of using a session-oriented implementation. That is, the design still leaves transactions naked and vulnerable at various points in the infrastructure.
Over on Enumerated, Nick Szabo posts twice on the framework of the courts in Anglo-Norman history. He makes the surprising claim that up until the 19th century, the tradition was one of private courts. Franchises were established from the royal prerogative, and once granted as charters, were generally inviolate. I.e., courts were property.
There were dozens of standard jurisdictional franchises. For example, "infangthief" enabled the franchise owner to hang any thief caught red-handed in the franchise territory, whereas "outfangthief" enabled the owner to chase the thief down outside the franchise territory, catch him red-handed, and then hang him. "Gallows" enabled the owner to try and punish any capital crime, and there were a variety of jurisdictions corresponding to several classes of lesser offenses. "View of frankpledge" allowed the owner to control a local militia to enforce the law. "The sheriff's pleas" allowed the owner to hear any case that would normally be heard in a county court. There were also franchises that allowed the collection of various tolls and taxes.
A corporation was also a franchise, and corporations often held, as appurtenances, jurisdictional franchises. The City of London was and is a corporate franchise. In the Counties Palatine the entire government was privately held, and most of the American Colonies were corporate franchises that held practically all jurisdiction in their territory, sometimes subject to reservations (such as the common law rights of English subjects and the right of the king to collect customs reserved in the American charters). The colonies could in turn grant franchises to local lords (as with the Courts Baron and Courts Leet in early Maryland) and municipalities. American constitutions are largely descended from such charters.
Consider the contrast with the European hierarchical view. Not property but master-servant dominated, as it were. And, some time in the 19th century the European hierarchical view won:
The Anglo-Norman legal idea of jurisdiction as property and peer-to-peer government clashed with ideas derived from the Roman Empire, via the text of Justinian's legal code and its elaboration in European universities, of sovereignty and totalitarian rule via a master-servant or delegation hierarchy. By the 20th century the Roman idea of hierarchical jurisdiction had largely won, especially in political science where government is often defined on neo-Roman terms as "sovereign" and "a monopoly of force." Our experience with totalitarianism of the 19th and 20th centuries, inspired and enabled by the Roman-derived procedural law and accompanying political structure (and including Napoleon, the Czars, the Kaisers, Communist despots, the Fascists, and the National Socialists), as well as the rise of vast and often oppressive bureaucracies in the "democratic" countries, should cause us to reconsider our commitment to government via master-servant (in modern terms, employer-employee) hierarchy, which is much better suited to military organization than to legal organization.
Why is that? Nick doesn't answer, but the correlation with the various wars is curious. In my own research into Free Banking I came to the conclusion that it was stronger than any other form, yet it was not strong enough to survive all-out war - and specifically the desires of the government and populace to enact extraordinary war powers. Which led to the annoying game theory result that central banking was stronger, as it could always pay the nation into total war. If we follow the same line in causality, Nick suggests that the hierarchical government is stronger because it can control the nation into total war. And, if we assume that any nation with these two will dominate, this explains why Free Banking and Franchise Law both fell in the end; and fell later in Britain.
Several articles on scary, ooo, so scary cyberwar scenarios. We will see a steady stream of this nonsense as the terrorist-watchers, china-watchers and every-other-bogeyman-watchers all combine in a war on our own fears.
A hyperventilating special from the US DHS:
According to cyber-security experts, the terror attacks of 11 September and 7 July could be seen as mere staging posts compared to the havoc and devastation that might be unleashed if terrorists turn their focus from the physical to the digital world. Scott Borg, the director and chief economist of the US Cyber Consequences Unit (CCU), a Department of Homeland Security advisory group, believes that attacks on computer networks are poised to escalate to full-scale disasters that could bring down companies and kill people. He warns that intelligence "chatter" increasingly points to possible criminal or terrorist plans to destroy physical infrastructure, such as power grids. Al-Qa'ida, he stresses, is becoming capable of carrying out such attacks.
Summarised over on risks, the US DOD also partakes liberally of the oxygen tank:
From the nation that enjoys U.S. Most Favored Nation trade status, and a permanent member of the WTO...China is stepping up its information warfare and computer network attack capabilities, according to a Department of Defense (DoD) report released last week. The Chinese People's Liberation Army (PLA) is developing information warfare reserve and militia units and has begun incorporating them into broader exercises and training. Also, China is developing the ability to launch preemptive attacks against enemy computer networks in a crisis, according to the document, ...
The tendency for public officials to try and scare the public into more funding is never-ending. The positive feedback loop is stunningly safe for them - if there is a cyber attack, they are proved right. If there isn't a cyber attack, it's just about to happen and they'll be proven right. And every one of our enemies is ... Huff, puff, huff!
Is D.C. ready for terrorist attack? Two unrelated traffic accidents within an hour of each other yesterday in Northeast shut down two major highways during the busy morning commute, causing massive gridlock and seemingly endless delays -- but also providing an ominous warning: What if it had been a terrorist attack?
About the best we can do is patiently point out that what they are talking about doesn't happen because in most cases it is evidently uneconomic. When the economic attack develops, we'll deal with it. Londoners walked home, that one day, and those that were a bit late that morning like me stayed home. The next day everyone went back to work, the same way as before. The next week, nobody noticed. It's not a particularly economic attack.
Some things you deal with by preparation. But other things you just have to let happen, because the attack goes around your preparations. By definition. Can anybody guess what would have happened if instead of a couple of traffic accidents, it was a couple of bombs? All of the greater Washington area would probably enter gridlock, because the authorities would hand it to the terrorists.
A little Sunday morning governance nightmare. Over in Canada, a media entrepreneur sued his ex-law firm:
Mr. Cheikes says that in 1997 [his lawyer] Mr. Strother advised him he could not legally operate his tax-shelter business because of tax rule changes. Mr. Cheikes shut operations down that fall and asked Mr. Strother to find ways to make Monarch compliant so that the company could re-open. A year later, he found out that Mr. Strother had become a 50% partner in a movie tax shelter business called Sentinel Hill, and that [law firm] Davis & Co. were its attorneys. Mr. Cheikes says that from 1998 to 2002, Sentinel Hill made $140-million in total profits, that Mr. Strother made $32-million for himself and that Davis & Co. had been paid $9-million in fees.
Now, as written, this is a slam dunk, and in absence of a defence, we should be looking at what amounts to fraud here - breach of fiduciary trust, theft of trade secrets and business processes, etc etc.
A few words on where this all comes from. Why do we regularly question audits in the governance department of financial cryptography? Partly it is because the auditor's actual product is so disconnected from what people need. But the big underlying concern - the thing that makes people weak at the news - is the potential for abuse.
And this is much the same for your lawyer. They both get deep into your business. They are your totally trusted parties - TTPs in security lingo. This doesn't mean you trust them because you think they are trustworthy sorts, they speak with a silver tongue and you'd be happy to wave your daughter off with one of them.
No, quite the reverse: it means you have no choice but to trust them. The correct word is vulnerability, not trust. You are totally vulnerable to your lawyer and your auditor, more so than your spouse, and if they wish to rape your business, the only thing that will stop them is if you manage to get out of there with your assets firmly buttoned up. The odds are firmly against that - you are quite seriously relying almost completely on convention, reputation and the strength of the courts here, not on your ability to detect and protect against the fraud, as you are with almost all other attackers.
Which is why over time institutions have arisen to help you with your vulnerability. One of those institutions is the courts themselves, which is why this is such a shocker:
For example, the B.C. Supreme court agreed with Mr. Strother's argument that he had no obligation to correct his mistaken legal advice even though that advice had led to the closure of Monarch. The lower court also agreed with Davis & Co.'s contention that it should be able to represent two competitors in a business without being obliged to tell one what the other is doing.
What the hell are they smoking in B.C.? The Supreme Court is effectively instructing the vulnerable client to lie back and enjoy it. There is no obligation in words, and it's not because it would be silly to write down "thou shalt not rape your client," but simply because if we write it down, enterprising legal arbitrageurs will realise it's ok to rape as long as they do it using different words. It's for reasons like this that you have "reasonable man" tests - as when your Supreme Court judges are stoked up on the finest that B.C. can offer, we need something to bring them back to reality. It's for reasons like this we also have appeals courts.
In 2003, the B.C. Supreme Court heard the case but found no wrongdoing. Two years later, however, the B.C. Court of Appeal overturned that ruling, finding the lawyer and firm guilty. Mr. Strother was ordered to pay back the $32-million he had made and Davis & Co. was ordered to return $7-million of the $9-million in fees to Mr. Cheikes and his partners.
The high-stakes case was appealed again and the Supreme Court of Canada has granted leave to appeal, probably in October.
To be fair, this article was written from one side. I'd love to hear the case for the law firm! I'd also love to hear what people in BC think about that - is that a law firm you'd go to? What does a client of that law firm think when she reads about the case over her Sunday morning coffee?
(If you are not a cryptoplumber, the following words will be indistinguishable from random... that might be a good thing!)
When Zooko and I created the SDP1 layout (for "Secure Datagram Protocol #1"), avoiding traffic analysis was not one of the requirements - so much so that we explicitly left all of that out.
But, the times, they are a-changing. SDP1 is now in use in one app full-time, and 3 other apps are being coded up. So I have more experience in how to drive this process, and not a little idea about how to inform a re-design.
And the war news is bleak, we are getting beaten up in Eurasia. Our boys over there sure could use some help.
So how to avoid traffic analysis? A fundamental way is to be indistinguishable from random, as that reveals no information. Let's revisit this and see how far we can get.
One way is to make all packets around the same size, and same frequency. That's somewhat easy. Firstly, we expand out the (internal) Pad to include more mildly random data so that short packets are boosted to the average length. There's little we can do about long packets, except break them up, which really challenges the assumptions of datagrams and packet-independence.
Also, we can statistically generate some chit chat to keep the packets ticking away ... although I'd personally hold short of the dramatically costly demands of the Freedom design, which asked you to devote 64 or 256k permanent on-traffic to its cause. (A laughable demand, until you investigate what Skype is doing, now, successfully, on your computer this very minute.)
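To make the padding idea concrete, here is a minimal sketch in Python, purely for illustration - SDP1's real Pad lives inside the enciphered envelope and has its own format, and the target size and names here are my assumptions:

```python
import os
import struct

TARGET = 512  # hypothetical uniform size for the plaintext envelope

def pad(payload: bytes, target: int = TARGET) -> bytes:
    """Prefix the payload with its length, then fill with random bytes up to
    'target'. Short packets get boosted to the common size; long ones pass
    through unchanged (splitting them would break datagram independence)."""
    body = struct.pack(">H", len(payload)) + payload
    if len(body) >= target:
        return body
    return body + os.urandom(target - len(body))

def unpad(body: bytes) -> bytes:
    """Recover the original payload by reading the length prefix."""
    (n,) = struct.unpack(">H", body[:2])
    return body[2:2 + n]
```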
But a harder issue is the outer packet layer as it goes over the wire. It has structure, so someone can track it and learn information from it. Can we get rid of the structure?
The open network layout consists of three parts - a token, a ciphertext and a MAC.
Token | . . . enciphered . . . text . . . | SHA1-HMAC |
Each of these is an array, which itself consists of a leading length field followed by that many bytes.
Length Field | . . . many . . . bytes . . . length . . . long . . . |
(Also, in all extant systems, there is a leading byte 0x5D that says "this is an SDP1, and not something else." That is, the application provides a little wrapping because there are cases where non-crypto traffic has to pass.)
5D | Token | . . . enciphered . . . text . . . | SHA1-HMAC |
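For concreteness, a rough sketch of packing and unpacking that outer layout - the 2-byte length fields here are my assumption for illustration, not SDP1's actual encoding:

```python
import struct

MAGIC = 0x5D  # "this is an SDP1 packet"

def put_array(data: bytes) -> bytes:
    """A length field followed by that many bytes (2-byte length assumed)."""
    return struct.pack(">H", len(data)) + data

def get_array(buf: bytes, offset: int):
    (n,) = struct.unpack_from(">H", buf, offset)
    start = offset + 2
    return buf[start:start + n], start + n

def pack_sdp1(token: bytes, ciphertext: bytes, mac: bytes) -> bytes:
    """Outer wire layout: magic byte, then token | ciphertext | HMAC arrays."""
    return bytes([MAGIC]) + put_array(token) + put_array(ciphertext) + put_array(mac)

def unpack_sdp1(packet: bytes):
    assert packet[0] == MAGIC, "not an SDP1 packet"
    token, off = get_array(packet, 1)
    ciphertext, off = get_array(packet, off)
    mac, _ = get_array(packet, off)
    return token, ciphertext, mac
```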
Those arrays were considered necessary back then - but today I'm not so sure. Here's the logic.
Firstly the token creates an identifier for a cryptographic context. The token indexes into the keys, so it can be decrypted. The reason that this may not be so necessary is that there is generally another token already available in the network layer - in particular the UDP port number. At least one application I have coded up found itself having to use a different port number (create a new socket) for every logical channel, not because of the crypto needs, but because of how NAT works to track the sender and receiver.
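A sketch of that alternative: drop the in-packet token and key the cryptographic context off the socket the datagram arrives on, so the UDP port pair does the demultiplexing. All names here are illustrative:

```python
import socket

# one crypto context per logical channel, keyed by the local socket's port
contexts = {}   # local_port -> session keys / IV state (hypothetical object)

def open_channel(remote, keys):
    """Each logical channel gets its own socket, so NAT and the port number
    do the demultiplexing that the in-packet token used to do."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", 0))                      # kernel picks a fresh local port
    s.connect(remote)
    contexts[s.getsockname()[1]] = keys  # the port is the implicit token
    return s

def receive(s):
    data, _ = s.recvfrom(65535)
    keys = contexts[s.getsockname()[1]]  # look up context without any header
    return keys, data
```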
(This same logic applies to the 0x5D in use.)
Secondly, skip to the MAC. This is in the outer layer - primarily because there is a paper (M. Bellare and C. Namprempre, Authenticated Encryption: Relations among notions and analysis of the generic composition paradigm Asiacrypt 2000 LNCS 1976) that advises this. That is, SDP1 uses Encrypt-then-MAC mode.
But it turns out that this might have been an overly conservative choice. Earlier fears that MAC-then-Encrypt mode was insecure may have been overdone. That is, if due care is taken, then putting the MAC inside the cryptographic envelope could be strong. And thus eliminate the MAC as a hook to hang some traffic analysis on.
So let's assume that for now. We ditch the token, and we do MAC-then-encrypt. Which leaves us with the ciphertext. Now, because the datagram transport layer - again, UDP typically - will preserve the length of the content data, we do not need the array constructions that tell us how long the data is.
Now we have a clean encrypted text with no outer layer information. One furfie might have been that we would have to pass across the IV as is normally done in crypto protocols. But not in SDP1, as this is covered in the overall "context" - that session management that manages the keys structure also manages the IVs for each packet. By design, there is no IV passing needed.
Hey presto, we now have a clean encrypted datagram which is indistinguishable from random data.
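Putting the pieces together, the proposed packet might be built something like this - MAC-then-encrypt, with the per-packet IV derived from shared session state so nothing but ciphertext hits the wire. This is a sketch of the idea only; AES-CTR and HMAC-SHA1 are chosen for illustration, not offered as a vetted construction:

```python
import hmac
import hashlib
import struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

MAC_LEN = 20  # SHA1-HMAC length

def packet_iv(iv_seed: bytes, seq: int) -> bytes:
    # both ends derive the IV from shared session state plus a sequence
    # number, so no IV (and no structure) ever travels on the wire
    return hashlib.sha256(iv_seed + struct.pack(">Q", seq)).digest()[:16]

def seal(enc_key: bytes, mac_key: bytes, iv_seed: bytes, seq: int, payload: bytes) -> bytes:
    tag = hmac.new(mac_key, payload, hashlib.sha1).digest()
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(packet_iv(iv_seed, seq))).encryptor()
    return enc.update(payload + tag) + enc.finalize()   # pure ciphertext, no headers

def open_packet(enc_key: bytes, mac_key: bytes, iv_seed: bytes, seq: int, packet: bytes) -> bytes:
    dec = Cipher(algorithms.AES(enc_key), modes.CTR(packet_iv(iv_seed, seq))).decryptor()
    plain = dec.update(packet) + dec.finalize()
    payload, tag = plain[:-MAC_LEN], plain[-MAC_LEN:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, payload, hashlib.sha1).digest()):
        raise ValueError("MAC check failed")
    return payload
```

The receiver learns the ciphertext length from the UDP datagram itself, so no length fields are needed, and the trailing 20 bytes of the recovered plaintext are the MAC.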
Am I right? At the time, Zooko and I agreed it couldn't be done - but now I'm thinking we were overly cautious about the needs of encrypt-then-MAC, and the need to identify each packet coming in.
(Hat tip to Todd for pushing me to put these thoughts down.)
One way to tell when they cross the line is to watch the comics and jokes circuit. Here's another one:
Original source is RollCall.com, so tap their phones, not mine.
Seriously though, the reason this scandal has "legs" is because the Democrats' hands are clean. In previous scandals, it seems that the dirt was evenly spread across the political dipole. This time, the Dems have one they can get their teeth into and not look silly.
Twan points to a nice slate/FT article on the market for lemons:
In 1966 an assistant economics professor, George Akerlof, tried to explain why this is so in a working paper called "The Market for 'Lemons.'" His basic insight was simple: If somebody who has plenty of experience driving a particular car is keen to sell it to you, why should you be so keen to buy it? Akerlof showed that insight could have dramatic consequences. Buyers' perfectly sensible fears of being ripped off could, in principle, wipe out the entire used-car market: There would be no price that a rational seller would offer that was low enough to make the sale. The deeper the discount, the more the buyer would be sure that the car was a terrible lemon.
If you are unfamiliar with Akerlof's market for lemons, you should read that article in full, and then come back.
This whole area of lemons is sometimes called markets in asymmetric information - as the seller of the car has the information that you the buyer doesn't. Of course, asymmetries can go both ways, and sometimes you have the information whereas the other guy, the seller, does not. What's up with that?
Well, it means that you won't be able to get a good deal, either. This is the market in insurance, as described in the article, and also the market in taxation. These areas were covered by Mirrlees in 1970, and Rothschild & Stiglitz in 1976. For sake of differentiation, as sometimes these details matter, I call this the market for limes.
But there is one final space. What happens when neither party knows the good they are buying?
Our gut reaction might be that these markets can't exist, but Michael Spence says they do. His example was the market for education, specifically degrees. In his 1973 paper entitled "Job Market Signalling" he described how the market for education and jobs was stable in the presence of signals that had no bearing on what the nominal goal was. That is, if the market believed a degree in arts was necessary for a job, then that's what they looked for. Likewise, and he covers this, if the market believed that being male was needed for a job, then that belief was also stable - something that cuts right to the core of our beliefs, because such a belief is indeed generally irrelevant but stable, whether we like it or not.
This one I term the Market for Silver Bullets, a term of art in the computing field for a product that is believed to solve everything. I came to this conclusion after researching the market for security, and discovering that security is a good in Spence's space, not in Akerlof's nor Rothschild and Stiglitz's spaces. That is, security is not in the market for lemons nor limes - it's in the tasteless spot in the bottom right hand corner.
Yup, because it is economics, we must have a two by two diagram:
The Market for Goods, as described by Information and by Party | Buyer Knows | Buyer Lacks |
---|---|---|
Seller Knows | Efficient Goods | Lemons (used cars) |
Seller Lacks | Limes (Tax, Insurance) | Silver Bullets (Security) |
Michael Spence coined and explored the sense of signals as being proxies for the information that parties were seeking. In his model, a degree was a signal, that may or may not reveal something of use. But it's a signal because we all agree that we want it.
Unfortunately, many people - both economists and people outside the field - have conflated all these markets and thus been led down the garden path in their search for fruit. Spence's market in silver bullets is not the same thing as Akerlof's market in lemons. The former has signals, the latter does not. The latter has institutions, the former does not. To get the full picture here we need to actually do some hard work like read the original source papers mentioned above (Akerlof and Spence aren't so bad, but Rothschild & Stiglitz were tougher. I've not yet tried Mirrlees, and I got bogged down in Vickrey. All of these require a trip to the library, as they are well-pre-net papers.)
In particular, and I expand on this in a working draft paper, the bitter-sweet truth is that the market for security is a market for silver bullets. This has profound implications for security research. But for those, you'll have to read the paper :)
Someone blogged a draft essay of mine on the limits of reliability in connections. As it is now wafting through the blog world, I need to take remedial action and publish it. It's more or less there, but I reserve the right to tune it some :)
When is a reliable connection not a reliable connection? The answer is - when you really truly need it to be reliable. Connections provided by standard software are reliable up to a point. If you don't care, they are reliable enough. If you do care, they aren't as reliable as you want.
In testing the reliability of the connection, we all refer to TCP/IP's contract and how it uses the theory of networking to guarantee the delivery. As we shall see, this is a fallacy of extension and for reliable applications - let's call that reliability engineering - the guarantee is not adequate.
And, it specifically is not adequate for financial cryptography. For this reason, most reliable systems are written to use datagrams, in one sense or another. Capabilities systems are generally done this way, and all cash systems are done this way, eventually. Oddly enough, HTTP was designed with datagrams, and one of the great design flaws in later implementations was the persistent mashing of the clean request-response paradigm of datagrams over connections.
To summarise the case for connections being reliable and guaranteed, recall that TCP has resends and checksums. It has a window that monitors the packet reception, orders them all and delivers a stream guaranteed to be ordered correctly, with no repeats, and what you got is what was sent.
Sounds pretty good. Unfortunately, we cannot rely on it. Let's document the reasons why the promise falls short. Whether this affects you in practice depends, as we started out, on whether you really truly need reliability.
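To make the contrast concrete, here is roughly what a reliability-minded application ends up doing for itself over datagrams - its own idempotent requests, its own retries, its own decision about when to give up - rather than leaning on TCP's guarantee. A sketch only, with illustrative names:

```python
import socket
import uuid

def reliable_request(addr, body: bytes, tries: int = 5, timeout: float = 2.0) -> bytes:
    """Send an idempotent request over UDP and retry until a matching reply
    arrives. The application, not the transport, decides what 'reliable' means:
    how long to wait, how often to retry, and what happens when it gives up."""
    request_id = uuid.uuid4().bytes           # lets the server deduplicate resends
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    for _ in range(tries):
        s.sendto(request_id + body, addr)
        try:
            reply, _ = s.recvfrom(65535)
            if reply[:16] == request_id:      # ignore stray or stale replies
                return reply[16:]
        except socket.timeout:
            continue                          # resend; the request is idempotent
    raise TimeoutError("no confirmed reply - the application must decide what that means")
```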
Lynn points to BBC on: Petrol giant Shell has suspended chip-and-pin payments in 600 UK petrol stations after more than £1m was siphoned out of customers' accounts. Eight people, including one from Guildford, Surrey and another from Portsmouth, Hants, have been arrested in connection with the fraud inquiry.
The Association of Payment Clearing Services (Apacs) said the fraud related to just one petrol chain. Shell said it hoped to restore the chip-and-pin service before Monday.
"These Pin pads are supposed to be tamper resistant, they are supposed to shut down, so that has obviously failed," said Apacs spokeswoman Sandra Quinn. Shell has nearly 1,000 outlets in the UK, 400 of which are run by franchisees who will continue to use chip-and-pin.
A Shell spokeswoman said: "We have temporarily suspended chip and pin availability in our UK company-owned service stations. This is a precautionary measure to protect the security of our customers' transactions. You can still pay for your fuel, goods or services with your card by swipe and signature. We will reintroduce chip and pin as soon as it is possible, following consultation with the terminal manufacturer, card companies and the relevant authorities."
BP is also looking into card fraud at petrol stations in Worcestershire but it is not known if this is connected to chip-and-pin.
And immediately followed by more details in this article: Customers across the country have had their credit and debit card details copied by fraudsters, and then money withdrawn from their accounts. More than £1 million has been siphoned off by the fraudsters, and an investigation by the Metropolitan Police's Cheque and Plastic Crime Unit is under way.
The association's spokeswoman Sandra Quinn said: "They have used an old style skimming device. They are skimming the card, copying the magnetic details - there is no new fraud here. They have managed to tamper with the pin pads. These pads are supposed to be tamper resistant, they are supposed to shut down, so that has obviously failed."
Ms Quinn said the fraud related to one petrol chain: "This is a specific issue for Shell and their supplier to sort out. We are confident that this is not a systemic issue."
Such issues have been discussed before.
Fido is a maths puzzle (needs Flash, hopefully doesn't infect your machine) that seems counter-intuitive... Any mathematicians in the house? My sister wants to know...
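For what it's worth, and assuming Fido is the usual "jumble the digits and subtract" trick (my assumption about the puzzle): a number and any rearrangement of its digits have the same digit sum, so their difference is always a multiple of 9, and the circled digit can be recovered from the digits you reveal:

```python
def guess_hidden_digit(revealed_digits):
    """The difference between a number and a digit-shuffle of itself is divisible
    by 9, so its digits sum to a multiple of 9. Given all but one (non-zero)
    digit of that difference, the hidden digit is whatever tops the sum up to
    the next multiple of 9."""
    remainder = sum(revealed_digits) % 9
    return 9 - remainder if remainder else 9   # the puzzle forbids circling a zero

# e.g. 7443 - 3474 = 3969; hide the 6 and reveal 3, 9, 9:
print(guess_hidden_digit([3, 9, 9]))   # -> 6
```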
In terms of presence, the site itself seems to be a web presence company that works with music media. Giving away fun things like that seems to work well - I wouldn't have looked further if they had pumped their brand excessively.
A list of numbers on fraud, allegedly from The Times (also) repeated here FTR (for the record).
Regular credit card number: | $1 |
Credit card with 3-digit security code: | $3-$5 |
Credit card with code and PIN: | $10-$100 |
Social security number (US): | $5-$10 |
Mother's maiden name: | $5-$10 |
THE BIG NUMBERS
£56.4 ($100) billion: Total amount owed on British credit cards
141.1 million: Number of credit, debit and charge cards in Britain
1.9 billion: Number of purchases on credit and charge cards in Britain a year
£123 billion: Total value of credit and charge card purchases a year
5: Number of credit, debit and charge cards held by 1 in 10 consumers
£58: Average value of a purchase on a credit card
£41: Average value of a debit card purchase
88 percent: Proportion of applicants who have been issued with a credit card without providing proof of income
£504.8 ($895) million: Total plastic-card fraud losses on British cards a year
£1.3 ($2.3) million: Amount of fraud committed against cards each day
7: Number of seconds between instances of fraud
£696 ($1,235): Average size of fraud, 2004
(Printing them in USD is odd, but there you go... I've preferred the Times' UKP amounts above, as there were a number of mismatches.)
Dan Kaminsky writes on the Sony experience:
Learning from Sony: An External Perspective
‘What happens when the creators of malware collude with the very companies we hire to protect us from that malware?’ Bruce Schneier, one of the godfathers of computer security, was pretty blunt when he aired his views on the AV industry’s disappointing response to the Sony rootkit (for an overview of the rootkit and its discovery see VB, December 2005, p.11). His question was never answered, which is fine, but his concerns were not addressed either, and that’s a problem.
The incident represents much more than a black eye on the AV industry, which not only failed to manage Sony’s rootkit, but failed intentionally. The AV industry is faced with a choice. It has long been accused of being an unproductive use of system resources with an insufficient security return on investment. It can finally shed this reputation, or it can wait for the rest of the security industry to finish what Sony started. Is AV useful? The Sony incident is a distressingly strong sign that it is not.
I'm not sure what to make of the threats situation here. On the one hand, it is shocking, simply shocking to think of corporates deliberately increasing the risks of consumers so as to make more money. But, in reality, this has been going on for decades. So what we need is not less but more of the Sony threats. We need more information out in the public view so that we can all more clearly analyse the threats here. I call for more Sony rootkits :)
Has Microsoft declared Game Over?
In a rare discussion about the severity of the Windows malware scourge, a Microsoft security official said businesses should consider investing in an automated process to wipe hard drives and reinstall operating systems as a practical way to recover from malware infestation. "When you are dealing with rootkits and some advanced spyware programs, the only solution is to rebuild from scratch. In some cases, there really is no way to recover without nuking the systems from orbit," Mike Danseglio, program manager in the Security Solutions group at Microsoft, said in a presentation at the InfoSec World conference here.
Basically, the OS cannot be protected, and in the event of infection, you have to re-install. That's one brave disclosure, but better they start seeding the public with this info less later than later still.
Fear of security is starting to bite in the US - in contrast to anecdotal evidence. Entrust did a survey that said:
Fear of alienating customers
Banks recognize they must increase online security, but are equally concerned that making Web sites harder to use will drive customers back to telephone and branch banking.
"Telephone transactions cost banks 10 times as much to process as Internet transactions. And an in-branch transactions cost 100 times Internet transactions," Voice said.
About 18 percent of online bank customers have already cut back or stopped banking online completely because of security worries, according to an Entrust survey.
It is the cutting back or stopping that is causing the fear from the Meccano trojans I reported on a bit back (also known as MITB or Man-in-the-Browser). Forcing people back to phone or branch has massive cost and deployment ramifications. In the face of these costs, expect many banks to simply suffer the losses. Unfortunately, this won't be socially acceptable, as the majority of the costs are borne by the consumer, not the institution.
Lynn spots the latest crazy threat to invade media mindspace - Beware the 'pod slurping' employee
A U.S. security expert who devised an application that can fill an iPod with business-critical data in a matter of minutes is urging companies to address the very real threat of data theft. ...."(Microsoft Windows) Vista looks like it's going to include some capability for better managing USB devices, but with the time it's going to take to test it and roll it out, we're probably two years away from seeing a Microsoft operating system with the functionality built in," Usher said. "So companies have to ask themselves, 'Can we really wait two years?'"
This is not a new threat, just an old threat with a sexy new toy. Don't believe we had to wait for Apple for the innovative solution to employees' desperate needs to walk out with lots of data...
On the other hand, read that second paragraph above carefully. If you don't like today's scenario, you'll have to wait about 2 years, assuming that Vista has some sort of answer to whatever it is you don't like. I don't normally do stock picks but here's one that screams: buy Apple, sell Microsoft. Users will, even if you don't.
A bit of BitTorrent bother. In brief, ISPs have been using "traffic shaping" to identify Bittorrent traffic and drop it. In response, the top three clients have added an RC4 encryption capability. Threats everywhere...
In closing, 1 in 10 Laptops Stolen:
"Up to 1 in 10 laptops will be stolen during their lifetime according to one of the Law Enforcement Officers behind the new Web site Juststolen.net..."
Winston wrote in his diary, some 22 years ago:
as he stumbled into the immoral task of pouring his overburdened thoughts onto paper. I say approximately, because there are a number of uncertainties in the source, not least the date. A bit later on, Winston meets an editor of the new dictionary in the canteen, who has this to say:
"It's a beautiful thing, the destruction of words. Of course the great wastage is in the verbs and adjectives, but there are hundreds of nouns that can be got rid of as well. It isn't only the synonyms; there are also the antonyms. After all, what justification is there for a word which is simply the opposite of some other word? A word contains its opposite in itself. Take 'good,' for instance. If you have a word like 'good,' then what need is there for a word like 'bad'? 'Ungood' will do just as well -- better, because it's an exact opposite, which the other is not. Or again, if you want a stronger version of 'good,' what sense is there in having a whole string of vague useless words like 'excellent' and 'splendid' and all the rest of them? 'Plusgood' covers the meaning, or 'doubleplusgood' if you want something stronger still. Of course we use those forms already, but in the final version of Newspeak there'll be nothing else. In the end the whole notion of goodness and badness will be covered by only six words -- in reality, only one word. Don't you see the beauty of that, Winston? It was B.B.'s idea originally, of course," he added as an afterthought.
George Orwell's 1984 remains the definitive word on how a population is suppressed for the benefit of a ruling class, for one agenda or other. The techniques that he describes are so powerful that they literally cut across ideologies, and it seems, across time and experience.
The message of 1984 rose in public consciousness as the year itself approached - recall the film, the songs? But then it started to fade, almost immediately afterwards. Choosing a year in the future as the title might have seemed like a brilliant literary device 40 years earlier, but are we paying the cost now?
You are invited to submit nominations to the 2006 PET Award.
The PET Award is presented annually to researchers who have made an outstanding contribution to the theory, design, implementation, or deployment of privacy enhancing technology. It is awarded at the annual Privacy Enhancing Technologies Workshop (PET). The PET Award carries a prize of 3000 Euros thanks to the generous support of Microsoft.
Any paper by any author written in the area of privacy enhancing technologies is eligible for nomination. However, the paper must have appeared in a refereed journal, conference, or workshop with published proceedings in the period that goes from the end of the penultimate PET Workshop (the PET workshop prior to the last PET workshop that has already occurred: i.e. June 2004) until April 15th, 2006. The complete Award rules including eligibility requirements can be found at http://petworkshop.org/award/.
Anyone can nominate a paper by sending an email message containing the following to award-chairs06@petworkshop.org:
- Paper title
- Author(s)
- Author(s) contact information
- Publication venue
- A nomination statement of no more than 250 words.
All nominations must be submitted by April 15th, 2006. A seven-member Award committee will select one or two winners among the nominations received. Winners must be present at the PET workshop in order to receive the Award. This requirement can be waived only at the discretion of the PET Advisory board.
2006 Award Committee:
- Alessandro Acquisti (chair), Carnegie Mellon University, USA
- Roger Dingledine (co-chair), The Free Haven Project, USA
- Ram Chellappa, Emory University, USA
- Lorrie Cranor, Carnegie Mellon University, USA
- Rosario Gennaro, IBM Research, USA
- Ian Goldberg, Zero Knowledge Systems, Canada
- Markus Jakobsson, Indiana University at Bloomington, USA
More information about the PET award (including past winners) is available at http://petworkshop.org/award/.
More information about the 2006 PET workshop is available at http://petworkshop.org/2006/.
-----------------------
Alessandro Acquisti
Heinz School, Carnegie Mellon University
(P) 412 268 9853
(F) 412 268 5339
http://www.heinz.cmu.edu/~acquisti
-----------------------
Digital Money is coming up, 29th and 30th March. Always good for a visit.
The goal of the Forum is to encourage discussion and debate around the real issues at the heart of electronic identity [sic - must be digital money] in all its forms. In addition to this Forum, every autumn we organise the annual Digital Identity Forum (see the web site at www.digitalidforum.com for more details), the sister event to the Digital Money Forum.
There are several great things about the Hyperion conferences. Firstly, Dave and the team work hard to keep the commercial presentations down to a minimum. Next, he casts out looking for up and coming trends including the wild and woolly social experiments. Lastly, there's usually a great book giveaway!
Talks I'd travel some distance for, if I could:
Replacing Cash with Mobile Phones
Susie Lonie, Vodafone
A case study on the African M-PESA scheme

Currency for Kids
Jonathan Attwood, Swap-it-Shop UK
The UK's "eBay for kids"

Cross-Border Funds Transfer before the Internet - The ransom of King Richard
David Boyle, Author of "Blondel's Song"
Over at CeBIT I spent some time at the CAcert booth checking out what they were up to. Lots of identity checking, it seems. This process involves staring hard at government Id cards and faces, which gets a bit tricky when the photo is a decade or two out of date. What do you say to the heavy-set matron about the cute skinny teenager on her identity card?
One artist chap turned up and wanted to sign up as an artist. This turns out to be a 'right' in Germany. Lo and behold, on his dox there is a spot for his artist's identity name. Much debate ensued - is CAcert supposed to be upstream or downstream of the pseudonymous process?
In this case, the process was apparently resolved by accepting the artist's name, as long as the supporting documentation provided private clarification. Supporting nymous identities I think is a good idea - and age old scholars of democracy will point out the importance of the right to speak without fear and distribute pamphlets anonymously.
CAcert is probably downstream, and over in Taiwan (which might be representative of China or not) we discover more government supported nymous identities: passport holders can pick their own first names for their passport. Why? The formal process of translating Han (hanzi) characters into Latin for passports - (bopomofo?) pinyin! - so mangles the real sense that letting a person pick a new name in Latin letters restores some humanity to the process.
This development is not without perverse results. It places CAcert in the position of supporting nyms only if they are government-approved yet frowning on nyms that are not. Hardly the result that was intended - should CAcert apply to sponsor the Big Brother awards - for protecting privacy - or to receive one - for supporting government shills?
Most people in most countries think Identity is simple, and this was evident in spades at the booth. For companies, one suggestion is to take the very strong German company scheme and make it world standard. This won't work, simply because it is built on certain artifacts of the German corporate system. So there is a bit of a challenge to build a system that makes companies look the same the world over.
Not that individuals are any easier. Some of the Assurers - those are the ones that do the identity verification - are heading for the Philippines to start up a network. There, the people don't do government issued identity, not like we do in the West. In fact, they don't have much Id at all it seems, and to get a passport takes 4 visits to 4 different offices and 4 different pieces of paper (the last visit is to the President's Office in Manila...).
The easy answer then is to just do that - but that's prohibitively expensive. One of the early steps is visiting a notary, which is possibly a reliable document and a proxy for a government ID, but even that costs some substantial fraction of the average monthly wage (only about 40 Euros to start with).
A challenge! If you have any ideas about how to identify the population of a country that isn't currently on the identity track, let us know. Pseudonymously, of course.
The much discussed CA branding model has apparently been adopted by Microsoft, implemented in the InfoCard system, if the presentation by Cameron and Jones is anything to go by. I reviewed and critiqued that presentation last week, including relevant screenshots.
Now, the statement as the core reason for the CA's existence is becoming more clearly accepted. It's something that got thrown out with the bath water back in 1995 or so, but it seems to be heading for something of a revival. Cameron and Jones say:
In many cases, all that having a standard site certificate guarantees is that someone was once able to respond to e-mail sent to that site. In contrast, a higher-value certificate is the certificate authority saying, in effect, “We stake our reputation on the fact that this is a reputable merchant and they are who they claim to be”.
I also found an RFC by Chokhani, et al, called Internet X.509 Public Key Infrastructure Certificate Policy and Certification Practices Framework (RFC 3647) which throws more light on the statement:
3.1. Certificate Policy
When a certification authority issues a certificate, it is providing a statement to a certificate user (i.e., a relying party) that a particular public key is bound to the identity and/or other attributes of a particular entity (the certificate subject, which is usually also the subscriber). The extent to which the relying party should rely on that statement by the CA, however, needs to be assessed by the relying party or entity controlling or coordinating the way relying parties or relying party applications use certificates. Different certificates are issued following different practices and procedures, and may be suitable for different applications and/or purposes.
The CA's statement is that the key is bound to attributes of an entity (including Identity). So we are all agreed that the cert has or is a statement of the CA saying something. But consider the caveat there that I emphasised: the authors have recognised that relying parties typically do not control or coordinate their use of certificates. There is typically an intermediate entity that takes responsibility for this. To the extent that this entity controls or coordinates the root list, they are in the driver's seat.
For browsing, this is the browser manufacturer, and thus the browser manufacturer is the relying party of ultimate responsibility. What this does is put browser manufacturers in a sticky position, whether they admit to it or not (and notice how you won't find any browser manufacturer clarifying who makes the statement from a manufactured cert).
Microsoft's position may be weak in understanding and implementation, or maybe they know full well where this is going, and are implementing an intermediate step. Leaving that aside, it does leave the interesting question as to why they have only partially implemented the model. Not only does the high-assurance program prove the point that the CA has to be branded (thanks for that!) but it also confirms that the browser is on the hook for all the other certs in the other, default, poor man's certificate regime.
Either way, we are on the move again...
Viking reads the terms of service for Google Payments (discussed here) and discovers:
Here is the really interesting part though. Google Payments is set up like the DGC industry in regards to user responsibility & payment repudiability.
"Buyer is responsible for any and all transactions by persons that Buyer gives access to or that otherwise use such username or password and any and all consequences of use or misuse of such username and password."
"all Payment Transactions processed through the Service are non-refundable to Buyer and are non-reversible [...] fraud and other disputes regarding transactions shall not entitle Buyer to a refund of the Payment Amount or a reversal of a Payment Transaction"
This ought to be very interesting to watch as they are completely violating the May Scale. They facilitate cc payments from the buyer, but the seller "gets paid & stays paid".
Indeed. Although, if they can hold the line on that issue, and keep their user base clean, this would mean that they would be well placed for the future. Margins in transactions in non-reversible payment systems range from 0.1% to 0.5% whereas reversible payments charge around 4-5%. Easy meat.
America moves a bit closer to using cells (mobiles outside the US) for payment. What I find curious is why banks don't simply use their customer's phones as two-factor tokens. It can't be any more sophisticated than selling a ring tone, surely?
Skype signs up with Click&Buy and seemingly others. Again curious, given they are owned by eBay these days.
Google reveals more, as spotted over on PaymentNews. Basically, move the billing systems over to an internal money.
If you take a look at the history of Google's advertising programs and online services, one thing you notice is that online billing and payments have been a core part of our offerings for some time. To run our ad programs, Google receives payments every day from advertisers, and then pays out a portion of those funds to advertising partners. Over the past four years, Google has billed advertisers in 65 countries more than $11.2 billion in 48 currencies, and made payments to advertising partners of more than $3.9 billion. When one of our consumer services requires payment to us, we've also provided users a purchase option. As the number of Google services has increased, we've continued to build on our core payment features and migrate to a standard process for people to buy our services with a Google Account. Examples of this migration include enabling users to buy Google Video content, Google Earth licenses, and Google Store items with their Google Accounts. We also just began offering similar functionality on Google Base.
If only more companies issued their own internal money and used it for the billing systems. Expect Microsoft to scratch its collective heads and wonder "why didn't we think of that?" It's actually not that much a leap, more a mental twist than a business change. From there, integrating credit card collection is easy. (Credits: I first did this a few years back, but the idea has been around for yonks, I recall Lucky Green explaining it as a potential direction for DigiCash sales, around 1997.)
For buyers, this feature will provide a convenient and secure way to purchase Google Base items by credit card. For sellers, this feature integrates transaction processing with Google Base item management.
And if they do it carefully enough, they won't walk into the minefield of regulatory interference.
The Mountain View, California-based search giant made sure the test got off to a quiet start. Google launched a video store last month, and shoppers found that they could buy videos by signing into their Google accounts. People have also been using their accounts to buy mapping-related products from Google Earth, information on Google Answers, and keywords on AdWords, Google's advertising program, in some cases for more than a year now.
Huh. Scary but true - hype is not the same thing as strategy. We'll see. Elsewhere, Technews says that Google is a direct shot at eBay and Paypal. I agree, it's on the agenda, but it's a shot across the bows, the actual broadsides will come later. Paypal is very vulnerable, Google won't need to rush. Also see lots of screen shots there, and also see buyer's terms.
A commercial presentation on Microsoft's Infocard system is doing the rounds. (Kim Cameron's blog.) Here's some highlights and critiques. It is dressed up somewhat as an academic paper, and includes more of a roadmap and analysis view, so it is worth a look.
The presentation identifies The Mission as "a Ubiquitous Digital Identity Solution for the Internet."
By definition, for a digital identity solution to be successful, it needs to be understood in all the contexts where you might want to use it to identify yourself. Identity systems are about identifying yourself (and your things) in environments that are not yours. For this to be possible, both your systems and the systems that are not yours – those where you need to digitally identify yourself – must be able to speak the same digital identity protocols, even if they are running different software on different platforms. In the case of an identity solution for the entire Internet, this is a tall order...
Well, at least we can see a very strong thrust here, and as a mission-oriented person, I appreciate getting that out there in front. Agreeing with the mission is however an issue to discuss.
Many of the problems facing the Internet today stem from the lack of a widely deployed, easily understood, secure identity solution.
No, I don't think so. Many of the problems facing the Internet today stem from the desire to see systems from an identity perspective. This fails in part because there is no identity solution (and won't be), in part because an identity solution is inefficient, and in part because the people deploying these systems aren't capable of thinking of the problem without leaning on the crutch of identity. See Stefan Brands' perspective for thinking outside the tiny cramped box of identity.
A comparison between the brick-and-mortar world and the online world is illustrative: In the brick-and-mortar world you can tell when you are at a branch of your bank. It would be very difficult to set up a fake bank branch and convince people to do transactions there. But in today’s online world it’s trivial to set up a fake banking site (or e-commerce site …) and convince a significant portion of the population that it’s the real thing. This is an identity problem. Web sites currently don’t have reliable ways of identifying themselves to people, enabling imposters to flourish. One goal of InfoCard is reliable site-to-user authentication, which aims to make it as difficult to produce counterfeit services on the online world as it is to produce them in the physical world.
(My emphasis.) Which illustrates their point nicely - as well as mine. There is nothing inherent in access to a banking site that necessitates using identity, but it will always be an identity-based paradigm simply because that's how that world thinks. In bricks-and-mortar contrast, we all often do stuff at branches that does not involve identity. In digital contrast, a digital cash system delivers strength without identity, and people have successfully mounted those over web sites as well.
That aside, what is this InfoCard? Well, that's not spelt out in so many words as yet:
In the client user interface, each of the user’s digital identities used within the metasystem is represented by a visual “Information Card” (a.k.a. “InfoCard”, the source of this technology’s codename). The user selects identities represented by InfoCards to authenticate to participating services. The cards themselves represent references to identity providers that are contacted to produce the needed claim data for an identity when requested, rather than claims data stored on the local machine. Only the claim values actually requested by the relying party are released, rather than all claims that the identity possesses (see Law 2).
References to providers is beginning to sound like keys managed in a wallet, and this is suggested later on. But before we get to that, the presentation looks at the reverse scenario: the server provides the certificate:
To prevent users from being fooled by counterfeit sites, there must be a reliable mechanism enabling them to distinguish between genuine sites and imposters. Our solution utilizes a new class of higher-value X.509 site certificates being developed jointly with VeriSign and other leading certificate authorities. These higher-value certificates differ from existing SSL certificates in several respects.
Aha. Pay attention, here comes the useful part...
First, these certificates contain a digitally-signed bitmap of the company logo. This bitmap is displayed when the user is asked whether or not they want to enter into a relationship with the site, the first time that the site requests an InfoCard from the user.
Second, these certificates represent higher legal and fiduciary guarantees than standard certificates. In many cases, all that having a standard site certificate guarantees is that someone was once able to respond to e-mail sent to that site. In contrast, a higher-value certificate is the certificate authority saying, in effect, “We stake our reputation on the fact that this is a reputable merchant and they are who they claim to be”.
Users can visit sites displaying these certificates with confidence and will be clearly warned when a site does not present a certificate of this caliber. Only after a site successfully authenticates itself to a user is the user asked to authenticate himself or herself to the site.
Bingo. This is just the High Authentication proposal written about elsewhere. What's salient here is that second paragraph, my emphasis added. So, do they close the loop? Elsewhere there has been much criticism of the proposals made by Amir and myself, but it is now totally clear that Microsoft have adopted this.
The important parts of the branding proposal are there:
The loop is closed. Now, finally, we have a statement with cojones.
There remain some snafus to sort out. This is not actually the browser that does this, it is the InfoCard system which may or may not be available and may or may not survive as this year's Microsoft Press Release. Further, it only extends to the so-called High Assurance certs:
To help the user make good decisions, what’s shown on the screen varies depending on what kind of certificate is provided by the identity provider or relying party. If a higher-assurance certificate is provided, the screen can indicate that the organization’s name, location, website, and logo have been verified, as shown in Figure 1. This indicates to a user that this organization deserves more trust. If only an SSL certificate is provided, the screen would indicate that a lower level of trust is warranted. And if an even weaker certificate or no certificate at all is provided, the screen would indicate that there’s no evidence whatsoever that this site actually is who it claims to be. The goal is to help users make good decisions about which identity providers they’ll let provide them with digital identities and which relying parties are allowed to receive those digital identities.
The authors don't say it but they intend to reward merchants who pay more money for the "high-assurance". That's in essence a commercial response to the high cost of the DD that Geotrust/RSA/Identrus are trying to float. This also means that they won't show the CA as the maker of a "lower assurance" statement, which means the vast bulk of the merchants and users out there will still be phishable, and Microsoft will be liable instead of the statement provider. But that's life in the risk shifting business.
(As an explanatory note, much of the discussion recently has focussed on the merchant's logo. That's less relevant to the question of risk. What is more relevant is VeriSign's name and logo. They are the one that made the statement, and took money for it. Verisign's brand is something that the user can recognise and then realise the solidity of that statement: Microsoft says that Verisign says that the merchant is who they are. That's solid, because Microsoft can derive the Verisign logo and name from the certificate path in a cryptographically strong fashion. And they could do the same with any CA that they add into their root list.)
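As a sketch of the mechanics (not how IE or InfoCard actually does it): once the handshake has validated the chain against the root list, the verifier already holds the CA's distinguished name, which is exactly the brand to put on the chrome. Roughly, in Python:

```python
import socket
import ssl

def who_vouches(hostname: str, port: int = 443) -> str:
    """Connect with certificate verification on, then read the issuer name off
    the validated leaf certificate - i.e. which CA is making the statement."""
    ctx = ssl.create_default_context()          # validates against the root list
    with socket.create_connection((hostname, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    issuer = dict(x[0] for x in cert["issuer"])  # issuing CA, a stand-in for the brand
    return f'{issuer.get("organizationName", "?")} says this site is {hostname}'

# -> "<CA organization> says this site is www.example.com"
print(who_vouches("www.example.com"))
```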
Finally, the authors have not credited prior work. Why they have omitted this is obscure to me - this would be normal with a commercial presentation, but in this case the paper looks, writes and smells like an academic paper. That's disappointing, and further convinces people to simply not trust Microsoft to implement this as written; if Microsoft does not follow centuries-old academic customs and conventions then why would we trust them in any other sense?
That was the server side. Now we come to the user-centric part of the InfoCard system:
2.7. Authenticating Users to Sites InfoCards have several key advantages over username/password credentials:
- Because no password is typed or sent, by definition, your password can not be stolen or forgotten.
- Because authentication is based on unique keys generated for every InfoCard/site pair (unless using a card explicitly designed to enable cross-site collaboration), the keys known by one site are useless for authentication at another, even for the same InfoCard.
- Because InfoCards will resupply claim values (for example, name, address, and e-mail address) to relying parties that the user had previously furnished them to, relying parties do not need to store this data between sessions. Retaining less data means that sites have fewer vulnerabilities. (See Law 2.)
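The second bullet is ordinary key separation: derive each relying party's authentication key from a master secret plus the site's identity, so a key known to one site proves nothing anywhere else. A minimal sketch - the derivation inputs are my guess, not Microsoft's actual scheme:

```python
import hmac
import hashlib

def site_key(master_secret: bytes, card_id: bytes, site_identity: bytes) -> bytes:
    """Derive a key unique to this (InfoCard, relying party) pair. Because the
    site's identity is baked into the derivation, the key presented to one site
    is useless for authenticating to any other."""
    return hmac.new(master_secret, card_id + b"|" + site_identity, hashlib.sha256).digest()

key_for_bank = site_key(b"master-secret", b"card-42", b"https://bank.example")
key_for_shop = site_key(b"master-secret", b"card-42", b"https://shop.example")
assert key_for_bank != key_for_shop
```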
What does that mean? Although it wasn't mentioned there, it turns out that there are two possibilities: Client side key generation and relationship tracking, as well as "provider generated InfoCards" written up elsewhere:
Under the company's plan, computer users would create some cards for themselves, entering information for logging into Web sites. Other cards would be distributed by identity providers -- such as banks or governmental agencies or online services -- for secure online authentication of a person's identity. To log in to a site, computer users would open the InfoCard program directly, or using Microsoft's Internet Explorer browser, and then click on the card that matches the level of information required by the site. The InfoCard program would then retrieve the necessary credentials from the identity provider, in the form of a secure digital token. The InfoCard program would then transmit the digital token to the site to authenticate the person's identity.
Obviously the remote provision of InfoCards will depend on buy-in, which is a difficult pill to swallow as that means trusting Microsoft in oh so many ways - something they haven't really got to grips with. But then there are also client-generated tokens. Are they useful?
If they have client-side key generation and relationship caching, then these are two of the missing links in building a sustainable secure system. See my emphasis further above for a hint on relationship tracking and see Kim Cameron's blog for this comment: "Cameron: A self-issued one you create yourself." Nyms (as per SSH and SOX) and relationship tracking (again SSH, and these days Trustbar, Petname and other recent suggestions) are strong. These ideas have been around for a decade or more, we call it opportunistic cryptography as a school.
Alternatively, notice how the credentials term is slipped in there. That's not how Stefan Brands envisages it (from Identity on the move I - Stefan Brands on user-centric identity management), but they are using his term. What that means is unclear (and see Identity on the move III - some ramblings on "we'll get it right this time, honest injun!" for more).
Finally, one last snippet:
3.6. Claims != “Trust” A design decision was to factor out trust decisions and not bundle them into the identity metasystem protocols and payloads. Unlike the X.509 PKIX [IETF 05], for example, the metasystem design verifies the cryptography but leaves trust analysis for a higher layer that runs on top of the identity metasystem.
Hallelujah! Trust is something users do. Crypto systems do claims about relationships.
Adam writes that he walks into a hotel and gets hit with a security brand.
For quite some time, Ian Grigg has been calling for security branding for certificate authorities. When making a reservation for a Joie de Vivre hotel, I got the attached Javascript pop-up. (You reach it before providing a credit card number.) I am FORCED to ask, HOWEVER, what the average consumer is supposed to make of this? ("I can make a hat, and a boat...") Who is this VERISIGN, and why might I care?
Well, precisely! They are no-one, says the average consumer, so anything they have done to date, including the above, is irrelevant.
More prophetically, one has to think of how brand works when it works - every brand has to go through a tough period where it braves the skeptics. Some of the old-timers might recall rolling around the floor laughing at those silly logos that Intel were putting in other supplier's adverts! And stickers on laptops - hilarious !
These guys will have to do that, too, if things are to this way pass. It will involve lots of people saying "so what?" until one day, those very same skeptics will say "Verisign... now I know."
The word Verisign isn't a link. It's not strongly tied to what I'm seeing. (Except for the small matter of legality, I could make this site pop up that exact same dialog box.) It is eminently forgeable, there's no URL, there's nothing graphical.
Right, so literally the only thing going on here is a bit of branding. The brand itself is not being used as part of a security statement in any sense worthy of attention. To recap, the statement we are looking for is something like "Comodo says that the certificate belongs to XYZ.com." That's a specific, verifiable and reliable statement. What you're seeing on the ihotelier page is a bit of fluff.
Nevertheless, it probably pre-sages such dialog boxes popping up next to the colored URL bar, and confusing the message they're trying to send.
I guess it presages a lot of bad experimentation, sure. What should be happening in the coloured URL bar is simply that "CAcert claims that Secure.com is who you are connected to." It's very simple. a. the remote party, b. the CA, and c. the statement that the CA says the remote party is who it is. Oh, and I almost forgot: d. on the chrome so no forgeries, thanks, Mr Browser.
Why does all this matter? To close the loop. Right now, Firefox says you are connected to Paypal.com. And IE6 says you are connected to BoA. If you get phished, it's the browser that got it wrong, not the CA. As the CA is the one that collected the money for securing the connection we need to reinsert the CA into the statement.
So when the user sues, she does so on the proper design of the PKI, not some mushed up historical accident.
In branding news: IE7 is out in Beta 2 and I'm impatiently waiting for the first road tests. (Roight... as if I have a Microsoft platform around here...) Readers will recall that Microsoft took the first steps along the branded security path by putting the CA name up on the chrome. This places them in the lead in matters of risk.
Sadly, they also got a bit confused by the whole high-end super-certs furfie. IE7 only rewards the user with the CA brand if the site used these special high-priced certs.
Plonk! That kind of ruins it for security - the point of the branding is that the consumer wants to see the Bad Brand or Unknown Brand or the Missing Brand or the Bland Brand ... up there as well. Why? So as to close off the all-CAs-are-equal bug in secure browsing. (Preferably before the phishers start up on it, but just after the first sightings will do nicely, thanks, if you subscribe to post-GP theories.)
By choosing to promote a two-tiered risk statement, Microsoft then remains vulnerable to a takeover in security leadership. That's just life in the security world; leadership is a bit of a lottery when you allow your security to become captive to marketing departments' zest for yet another loyalty program. Also, annoyingly, IE7 promises to mark any slightly non-formal certificated site (such as FC) as a Red Danger Danger site. Early indications are that this will result in an attack on brand that hasn't hitherto been seen, and has interesting strategic implications for you-know-who.
The CA branding idea is not new nor original. It was even (claimed to be) in the original Netscape design for secure browsing, as was the coloured security bar. Using brand is no more than an observation deriving from several centuries of banking history - a sector that knows more about risk matters than the Internet, if only because they lose money every time they get it wrong.
Consider some more in the flood of evidence that brand matters - over in VoIPland look at how things have changed:
In Europe, branded VoIP represented 51.2 percent of all VoIP calls in the last quarter of 2005, while Skype accounted for 45 percent of VoIP minutes. Vonage took less than one percent of the market while other third-party VoIP providers represented 3.5 percent of all VoIP traffic, the report said."Twelve months ago, Skype represented 90 percent of all VoIP minutes. Now people are buying branded services," Chris Colman, Sandvine's managing director for Europe, said Tuesday.
Whaaa.... 90% to 45% of the market in 12 months! No wonder Skype sold out!
The same trend was found in the North American market. The study found that U.S. branded VoIP represented 53 percent of VoIP minutes on broadband networks. Vonage, with a 21.7 percent share, and Skype, with 14.4 percent, were the leading third-party providers.
I'll bet Vonage are kicking themselves... Stop Press!
TECHNOLOGY ALERT from The Wall Street Journal. Feb. 8, 2006
Internet-phone company Vonage Holdings has filed to raise up to $250 million in an initial public offering. The company also named Mike Snyder, formerly president of security company ADT, as its new CEO. Founder Jeffrey Citron, who had served as CEO, remains chairman.
FOR MORE INFORMATION, see:
http://wsj.com/technology?mod=djemlart
I didn't know you could file an IPO in just minutes like that!
Meanwhile, one group that have traditionally resisted the risk nexus of brands ... just got hit over the head with their own brand! Mozilla earnt a spot in the top ten most influential brands last year. More influential than Sony! Heady praise indeed. Well done, guys. You have now been switched on to the miracle of brand, which means you have to defend it! Even as this was happening, Firefox lost market share in the US. Predicted of course, as IE7 rolls out, Microsoft users start to switch back. Nice. Competition works (in security too).
So, what's the nexus between brand and risk? Newbies to the brand game will blather on with statements like "we protect our brand by caring about the security of our users." Can you imagine a journo typing that up and keeping a straight face?
No, brand is a shorthand, a simple visual symbol that points to the entire underlying security model. Conventional bricks&mortar establishments use a combination of physical and legal methods (holograms and police) to protect that symbol, but what Trustbar has shown is that it is possible to use cryptography to protect and display the symbol with strength, and thus for users to rely on a simple visual icon to know where they are.
Hopefully, in a couple of years from now, we'll see more advanced, more thoughtful, more subtle comments like "the secured CA brand display forms an integral part of the security chain. Walking along this secured path - from customer to brand to CA to site - users can be assured that no false certs have tricked the browser."
More on threats. A paper Paul sent to me mentions that:
Stuart Schechter’s thesis [11] on vulnerability markets actually discusses bug challenges in great detail and he coined the term market price of vulnerability (MPV) as a metric for security strength.
A good observation - if we can price the value of a vulnerability then we can use that as a proxy for the strength of security. What luck then that this week, we found out that the price of the Windows Metafile (WMF) bug was ... $4000!
The Windows Metafile (WMF) bug that caused users -- and Microsoft -- so much grief in December and January spread like it did because Russian hackers sold an exploit to anyone who had the cash, a security researcher said Friday. The bug in Windows' rendering of WMF images was serious enough that Microsoft issued an out-of-cycle patch for the problem in early January, in part because scores of different exploits lurked on thousands of Web sites, including many compromised legitimate sites. At one point, Microsoft was even accused of purposefully creating the vulnerability as a "back door" into Windows.
Alexander Gostev, a senior virus analyst for Moscow-based Kaspersky Labs, recently published research that claimed the WMF exploits could be traced back to an unnamed person who, around Dec. 1, 2005, found the vulnerability.
"It took a few days for exploit-enabling code to be developed," wrote Gostev in the paper published online, but by the middle of the month, that chore was completed. And then exploit went up for sale.
"It seems that two or three competing hacker groups from Russian were selling this exploit for $4,000," said Gostev.
(That's a good article, jam-packed with good info.) Back to the paper. Rainer Bohme surveys 5 different vulnerability markets. Here's one:
Vulnerability brokers are often referred to as “vulnerability sharing circles”. These clubs are built around independent organizations (mostly private companies) who offer money for new vulnerability reports, which they circulate within a closed group of subscribers to their security alert service. In the standard model, only good guys are allowed to join the club. The customer bases are said to consist of both vendors, who thus learn about bugs to fix, and corporate users, who want to protect their systems even before a patch becomes available. With annual subscription fees of more than ten times the reward for a vulnerability report, the business model seems so profitable that there are multiple players in the market: iDefense, TippingPoint, Digital Armaments, just to name a few.
OK! He also considers Bug Challenges, Bug Auctions, Exploit derivatives, and insurance. Conclusion?
It appears that exploit derivatives and cyber-insurance are both acceptable, with exploit derivatives having an advantage as a timely indicator whereas cyber-insurance gets a deduction in efficiency due to the presumably high transaction costs. What's more, both concepts complement one another. Please note the limitations of this qualitative assessment, which should be regarded as a starting point for discussion and exchange of views.
Hasan finds Gutenberg's copy of "A PRINCESS OF MARS," (1917):
"The brothers had supplied me with a reddish oil with which I anointed my entire body and one of them cut my hair, which had grown quite long, in the prevailing fashion of the time, square at the back and banged in front, so that I could have passed anywhere upon Barsoom as a full-fledged red Martian. My metal and ornaments were also renewed in the style of a Zodangan gentleman, attached to the house of Ptor, which was the family name of my benefactors."They filled a little sack at my side with Zodangan money. The medium of exchange upon Mars is not dissimilar from our own except that the coins are oval. Paper money is issued by individuals as they require it and redeemed twice yearly. If a man issues more than he can redeem, the government pays his creditors in full and the debtor works out the amount upon the farms or in mines, which are all owned by the government. This suits everybody except the debtor as it has been a difficult thing to obtain sufficient voluntary labor to work the great isolated farm lands of Mars, stretching as they do like narrow ribbons from pole to pole, through wild stretches peopled by wild animals and wilder men.
"When I mentioned my inability to repay them for their kindness to me they assured me that I would have ample opportunity if I lived long upon Barsoom, and bidding me farewell they watched me until I was out of sight upon the broad white turnpike."
How's that for dredging ;-)
It has often been to my regret that a fuller treatment of contracts as the keystone to financial cryptography has been beyond me. But not beyond Nick Szabo, who posts on a crucial case in the development of the negotiable instrument as a contract of money. He suggests two key differences:
Negotiable instruments – checks, bank notes, and so on – are promises to pay or orders with an implied promise to pay, and are thus contracts. But they differ from contracts in two important ways. First is the idea of “merger.” Normally, a contract right is an abstraction that is located nowhere in particular but belongs to a party to the contract or to a person to whom that party has assigned that right. Possessing a copy of a normal contract has nothing to do with who has rights under that contract. But in a negotiable instrument, the contract right is “merged” into the document. Assignment of that right takes place simply by transferring the document (in the case of a bearer instrument) or by indorsing (signing) and transferring it. The second big way negotiable instruments differ from contracts is the “good faith purchaser” or “holder in due course” rule which is illustrated by Miller v. Race. In a normal sale of goods under common law, the new owner’s title to the goods is at risk to the contractual defenses of a prior owner. ...
In short, with a normal contract, even once assigned - sold - to new parties, there are defences to get it back. Yet in the case of Miller v. Race in the halcyon days of London's note banking, Lord Mansfield declared in 1758:
A bank-note is constantly and universally, both at home and abroad, treated as money, as cash; and paid and received, as cash; and it is necessary, for the purposes of commerce, that their currency should be established and secured.
It is these instruments that we commonly issue on the net. Although issuers sometimes have the technical capability to return them, subject to some good case, they often declare in effect that they are indeed negotiable instruments and the current holder is "holder in due course." Especially, the digital golds and the digital cashes have so declared, to their benefit of much lower costs. Those that aren't so resolved - the Paypals of the world - inflict much higher fees on their users.
Often people expect your transaction systems to be scalable, fast and capable of performing prodigious feats. Usually these claims reduce down to "if you're not using my favourite widget", then you are somehow of minimal stature. But even when exposed as a microdroid sales tactic, it is very hard to argue against someone who is convinced that a system just must use the ACME backend, or it's not worth spit.
One issue is transaction flow. How many transactions must you handle? Here's some numbers, suitable for business plans or just slapping on the table as if your plan depended on it. Please note that this is a fairly boring post - only read it when you need some numbers for your b-plan. I'll update it as new info comes in.
Earlier Wednesday, the Tokyo exchange had issued a warning it would stop trading if the system capacity limit of 4 million transactions was reached. As it reached 3.5 million about an hour before the session's close, it announced it would stop trading 20 minutes early.
According to the 2004 fourth-quarter report issued by Western Union's parent company, First Data Corp., an estimated 420 Western Union transactions occur every minute each day -- amounting to an average of seven transfers every second of the year.
Craig's stellar plotting system for e-gold's transaction flow gives us loads of 34k per day last I looked. Also Fee Income.
eBay's annual report for 2005 reported that "PayPal processed an average of approximately 929,000 transactions per day during 2004." There are contradictory numbers here and here: Total number of payments grew to 117.4 million, up 41 percent year over year and 4 percent vs. the prior quarter. This would imply 320k transactions per day, but we don't expect Paypal to be accurate in filings to the SEC.
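To make these disparate figures comparable, it helps to reduce everything to a common per-day and per-second rate. Here's a minimal sketch of that conversion, assuming a flat average over the reporting period (real traffic is peaky, so treat the per-second numbers as a floor, not a capacity target); the counts are the ones quoted in this post:

# Rough converter from reported transaction counts to per-day / per-second
# averages, for b-plan purposes. The flat-average assumption understates peaks.

SECONDS_PER_DAY = 24 * 60 * 60

def rates(count, days):
    """Return (transactions per day, transactions per second) averages."""
    per_day = count / days
    return per_day, per_day / SECONDS_PER_DAY

reports = {
    "e-gold, ~34k/day":                   (34_000, 1),
    "PayPal, 929k/day (2004)":            (929_000, 1),
    "PayPal, 117.4M payments (if /year)": (117_400_000, 365),   # ~320k/day
    "Western Union, 420/minute":          (420 * 60 * 24, 1),   # ~7/sec
}

for name, (count, days) in reports.items():
    per_day, per_sec = rates(count, days)
    print(f"{name:36s} {per_day:>12,.0f}/day  {per_sec:8.1f}/sec")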
Payment News reports:
Robin Sidel reports for the Wall St. Journal on how credit card issuers are now pursuing the market for smaller payments of less than $5: ... The market for transactions valued at less than $5 accounted for $1.32 trillion in consumer spending in 2003, representing more than 400 billion transactions, according to research by TowerGroup, a unit of MasterCard International Inc. By comparison, Visa processes a total of about $2 trillion of global transactions each year.
During the busiest hour on December 23 [2005], Visa processed an average of 6,363 transaction messages each second. That's a 14 percent increase over the average of 5,546 transaction messages per second Visa processed during the peak hour on December 24, 2004. Consider that Visa's payment network, VisaNet, can process more transactions over the course of a coffee break than all the stock exchanges of the world in an entire day.
Nice quip! I'll check that out next time I get my exchange running. In closing, let's let Lynn Wheeler have the last word. He reports that the old white elephant of transaction processing, SET, performed like this:
...If even a small percentage of the 2000 transactions/sec that typically go on were to make the transition to SET, the backend processing institution would have to increase their legacy computational processing capability by three orders of magnitude. The only way that SET could ever be successful was if it totally failed, since the backend processing couldn't build out to meet the SET computational requirements. It was somewhat satisfying to see the number of people that the thot stopped in their tracks. The best case demo of SET at a show a year later was on an emulated processing environment with totally idle dedicated boxes. The fastest that they could get was 30 seconds elapsed time, with essentially all of that being RSA crypto computational processing. Now imagine a real-world asymmetric environment that is getting 1000 of those a second. My statement was that a realistic backend processing configuration would require on the order of 30,000 dedicated machines just doing the RSA crypto processing.
There were then remarks from the workstation vendors on the prospect of selling that many machines, and from somebody at RSA about selling that many licenses. From then on my description was that SET was purely an academic toy demo, since nobody could take it as a serious proposal with operational characteristics like that.
private email, 14th December 2005. Note how he's talking about the mid 90's, using 10 year old figures for credit card processing networks like Visa.
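Lynn's back-of-the-envelope is essentially Little's Law: the number of busy boxes equals the arrival rate times the per-transaction service time. A minimal sketch of the same calculation, using the figures from the quote (the Visa comparison reuses the peak-hour number quoted earlier and is my own addition):

# Little's Law estimate: dedicated machines ~= arrival rate * service time,
# assuming each transaction monopolises one box for its full service time.

def boxes_needed(tx_per_second, seconds_per_tx):
    return tx_per_second * seconds_per_tx

print(boxes_needed(1_000, 30))    # SET, per the quote: 30,000 machines
print(boxes_needed(6_363, 30))    # at Visa's 2005 peak rate: ~190,000 machines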
Adam points to Ethan's musings on the dire need to move many small payments across borders. It's a good analysis, he gets it right.
Remittances have been huge business for a long time. However the subject didn't burst onto the international agenda until 9/11, when it was suggested that some of the money was moved using Hawala. Whether that was found to be true or not I never heard - certainly most of it was sent through the classical banking channels. Not that it made any difference; even the Congressional committee remarked that the amounts needed for 9/11 were too small to trace easily.
No matter, suddenly everyone was talking about remittances. The immediate knee-jerk reaction was to shut down the Hawalas. Of course, this got a huge cheer from anti-immigrant interests, and Western Union, who provides the same service at about 5 times the cost.
Unfortunately, shutting them down was never going to work. Remittances are such a large part of the economy that they have to be recognised. The effect is so large, it is the economy in some senses and places. (I recall Ecuador numbers its exports as oil, remittances, and fruit, in some or other order. Other countries do something similar, without the oil.) Africa Unchained reports:
According to a recent report (Migrations and Development) by the International Development Select Committee (UK), over $300 Billion was sent from developed to developing countries in 2003 by diasporas living in the developed countries. Global remittance, the report maintains is growing faster than official development assistance from the developed countries, also global remittance is the second largest source of external funding for developing countries, behind Foreign Direct Investment (FDI), and also accounts for as much as 27% of the GDP for some African countries.
But these economies and their remittances will always now be cursed by the need to give lip service to the anti-money-laundering (AML) people. Of course money laundering (ML) will go on through those channels, but whether it is more or less than through other channels, and whether it is likely to be more obvious than not is open to question. From what I can tell, ML would be hard to hide in those systems because of the very cautious but "informal" security systems in place, and no operator wants the attention any more.
What is not open to question is that the attention of AML will dramatically increase the costs of remittances. Consider adding a 2% burden to the cost of remittances, which is easy given the cost disparity between the cheaper forms and Western Union. If remittances happen to generate half of the cash of a country, then the AML people have just added a whole percentage point of drag to the economy of an entire underdeveloped nation.
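The arithmetic behind that percentage point is worth making explicit. A quick sketch, where the 2% surcharge and the "half of the cash of a country" share are the working assumptions from the paragraph above, and 27% is the African figure from the report quoted earlier:

# Drag on a remittance-dependent economy from an AML compliance surcharge:
# drag (as a share of GDP) = remittance share of GDP * extra cost rate.

def gdp_drag(remittance_share, extra_cost_rate):
    return remittance_share * extra_cost_rate

print(f"{gdp_drag(0.50, 0.02):.2%} of GDP")   # -> 1.00% of GDP
print(f"{gdp_drag(0.27, 0.02):.2%} of GDP")   # -> 0.54% of GDP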
Gee, thanks guys! And there is another insidious development going on here, which is also mentioned above:
Hundreds of creative efforts are underway across the developing world to solve these problems with remittance. To address safety issues, MoneyGram is offering delivery services of money transfers in the Philippines, bringing money to your door instead of forcing you to come and collect your funds from an office in town. Alternatively, if your recipient has an ATM card, they will transfer the deposit to her account. A new remittance strategy - goods and service remittance - addresses the safety, cost and misuse issues simultaneously. Instead of sending money home, make a purchase from a store or website in the US or Europe, and powdered milk, cans of corned beef or a live goat is delivered to your relatives. Manuel Orozco, an economist with the IADB, estimates that as much as 10% of all remittance happens via goods and services. Mama Mike's - a pioneer in goods remittance - offers online shoppers the ability to buy supermarket vouchers and mobile phone airtime for relatives in Kenya and Uganda, as well as more conventional gifts like flowers and cards. SuperPlus, Jamaica's largest supermarket chain, goes even further, allowing online shoppers to fill a shopping cart for their relatives and arrange for them to pick up the order in one of the SuperPlus stores around the country. SuperPlus is a partner with both Western Union and MoneyGram and has been promoting its supermarket remittance service through Western Union and MoneyGram stores in New York City, home to a large Jamaican diaspora. Goods remittance services generally don't charge a fee, making their profit off goods sales instead.
Spot it? The ones who benefit most from the push for AML are the large transnational corporations that come in and provide a "creative effort." They get a free pass, and help from authorities, because they say all the right words. Today's pop quiz: is Western Union more likely to stop ML than informal methods of remittances? Would Western Union be able to close down any troublesome competitor with the right noises?
Depending on your answers, it's either the noble fight, or just another traumatic security agenda being captured and turned into a _barrier to entry_ to squeeze the small guys out of a very lucrative business.
Well, that was easy! I mentioned in my 2006 predictions that the USG controls enough of the Internet to have its way, and it won't give that up. Now the administration has come out and defined its policy in definite terms, an unusually forthright step.
U.S. Principles on the Internet's Domain Name and Addressing System
The United States Government intends to preserve the security and stability of the Internet's Domain Name and Addressing System (DNS). Given the Internet's importance to the world's economy, it is essential that the underlying DNS of the Internet remain stable and secure. As such, the United States is committed to taking no action that would have the potential to adversely impact the effective and efficient operation of the DNS and will therefore maintain its historic role in authorizing changes or modifications to the authoritative root zone file.
Etc, etc. Read the Register's commentary to see more background on who is suggesting otherwise. Curiously though, they missed one issue when they said that the US would let other countries run their own ccTLD domains. That's not what it said at all. Rather, the US has said that it recognises the other countries' interests while retaining the controlling role. (Icann falls into line.)
Why was this an easy call? The style of the current administration might be blamed by many, but the underlying issue is that this is the make-up of Washington policy and practice, going back decades or even centuries. The Internet will not be let go. The only thing that will shake this intent is complete and utter collapse of the USG, something pretty unlikely, really, regardless of what the conspiracy buffs over at IcannWatch think.
(For those looking for more meat, there was a Cook report on this about a decade ago. Also, see the snapshot of Internet Governance forces from a decade back in the GP4.3 case study on phishing. See also the Register on .al ccTLD.)
I took a lot of flak from the Diamond Governance story, so it behoves me to move forward and make the point more clearly. The essential point is that there are fewer interested stakeholders in non-profits, and therefore governance likely needs to be stronger.
Another way of putting it is that if fraud is your thing, non-profits are fertile territory. Or if you think non-profits mean trust, you are fertile territory.
Why this is so will take more than a blog entry to write up, and as Jean points out there is lots of study in governance for non-profits. However, I'm aware of a bunch of fraud patterns, and I'll post them as I see them. Here's one I've been aware of for a couple of years. It is based on certain daft legal provisions, and would disappear in an instant if the law were changed.
The government sued AmeriDebt and Andris Pukke two years ago, seeking $172 million in damages. Regulators accused the Germantown-based nonprofit of charging excessive and poorly disclosed fees to consumers seeking help managing their debt and then channeling millions to Pukke's for-profit company, DebtWorks.
AmeriDebt once was one of the nation's largest credit counselors but is now out of business.
AmeriDebt was a non-profit. That's because there is some stupid law that says that a non-profit can do debt consolidation and gain certain privileges over a for-profit firm in the same business. A subsidy, in other words. So, obviously at least in hindsight, a smart operator starts a non-profit, consolidates a lot of debt for a lot of stricken people, and then funnels the cash somewhere else. Here's some more hints:
As consumer debt skyrocketed over the past two decades, a new breed of credit counselor emerged, one that relied heavily on television advertising to promote its services and toll-free telephone lines to dispense advice, replacing the person-to-person consultations offered by older firms. As more aggressive firms proliferated, so did consumer complaints, prompting the Internal Revenue Service to begin auditing 60 credit-counseling organizations, including AmeriDebt, in late 2003 to see if they were misusing their tax-exempt status to benefit their owners. Those audits continue.
"Non-profit" equals no taxes, no audits, no owners. Now fill it with cash and see what happens. Likely, I will take yet more flak for this. All I would ask is, do you believe that a non-profit is safe from fraud because it is doing good works?
2005 was when the Snail lost its identity. What is to come in 2006? Prediction always being a fool's game compared to the wiser trick of waiting until it happens and then claiming credit, here's a list of strategic plays for the year to come.
1. Government will charge into cybersecurity. So far, the notion of government involvement has been muted, as there have been enough voices pointing out that while the private sector may not have a good idea, it certainly has a less bad idea than the government. Cybersecurity departments have been duly and thankfully restricted.
I suspect in 2006, the Bull will begin to Roar through our China Shop. Calls seem to be escalating in all areas, and this is a reflection of many factors.
For all that, calls to send in the cavalry will increase. Of course, we know that we the user will be more insecure and poorer as a result of the Bull market for cybersecurity. What we don't know is how much worse it will be, and I daren't predict that :)
2. Anti-virus manufacturers will have a bad year. Not so much in profits, which might perversely go up, but in terms of their ability to make a difference. Kaspersky of the eponymously named company points out that it is getting much harder. We know the crooks are getting much more focussed due to their revenue cycle. Also of note is that Microsoft has entered the game of selling you (not me) protection for their OS breaches - bringing with it a whole messy smelly pile of conflicts which will make it even harder for the "independent" anti-virus providers.
3. Firefox will continue to grow. My guess is that it will get past 15%, probably to 20%. Microsoft won't fight back seriously, they have other battles, even though this would leave them at say 70-75% (Safari, Opera, Konqueror take 5-10%).
3.b But sometime by the end of 2006, Firefox will be seriously bloodied as it runs into security attacks targeted directly at it. What does this mean? Firefox crosses GP sometime soon, in terms of financial fraud. The only ones this will surprise will be Mozilla. Most observers with a clue have realised that Firefox has enjoyed its reputation due to reasonably sound factors, but these factors are just basic engineering issues like a re-write and solid coding practices, not those high level practices that distinguish the security projects (BSD, PGP, etc). This probably won't hold back Firefox's growth, as their mission to give browser users a choice remains aligned with our needs, and it remains tremendously useful even if the security rep takes a knock. So be it. But by the end of the year, expect some hand-wringing and moaning and more than usually confusing comments from Mozo Central's revolving security spokesman of the day.
3.c Also expect Mozilla to start talking about the next step in commercialisation -- the IPO. The discipline of the public company will start to look very attractive to those tasked with sorting out intractable internal conflicts inherited from the touchy-feely open source world.
4. In payments, the number of non-bank payment systems looks like it will increase dramatically. Gift card systems are exploding as companies discover that they take the money up front so they get financing at a better rate than the bank offers -- Free! -- and the wastage rate is better than any retail margin seen outside monopoly products. It doesn't take much for these to be integrated into an online account system - after all, what is a gift card but a pseudonymous account identifier. Then, once the alignment takes place, adding user-to-user payments is just a switch to throw on the weekend when your bank isn't looking. Maybe 2006 will be the year of the indie payment system?
5. No predictions for the gold sector. Even though the gold unit is skyrocketing and will continue to do that, there are continuing woes facing the issuers which they haven't sorted out long term. The bounty of people rushing up the gold price curve is offset by the cowboy image of the gold issuers.
6. Mac OSX will get to 7 or 8% of the market, maybe 10% if the Intel change goes well. Given the aggressiveness of the PC world, that can be considered to be a stunning result. The BSDs and the Linuxes still won't penetrate the desktop as much as we'd like, but it will become reasonable to talk about a Microsoft-free environment. 6.b I might finally acquire a Mac. Not because I want to -- I hate the UI, the keyboards suck, and they haven't got the reliability of the old thinkpad workhorses -- but for reasons too irrelevant and arcane to go into today.
7. Google will grow and prosper and survive. This might be an odd thing to predict, but the thing with Google is that it is a bit like a Netscape with an endless supply of revenue. As one insider put it to me recently, it was a boon to the world when Netscape was cleaned out as that meant we could get on with business. Unfortunately, Google has lots of revenue, so the mad cats will be financed and the projects just go on and on and on... I expect by the end of the year, though, we'll see adverts for cat herders as long term insiders realise that some chaos is good but most is just chaos.
8. Microsoft will not succeed in sorting out its security mess and will continue to lose market share. Let's get this in perspective - it will probably drop down from around 90% to around 80% all round. Nothing worth crying over, and indeed, it will give the company some sorely lacking focus. Some time around the end of 2006, some soul searching will result in serious investigation of other operating systems. So, who wants to bet on what OS Microsoft will pick up? Here's my anti-picks: it won't be Unix, and it won't be Java (which in its J2EE form is more or less a server OS, and then there's Swing...).
9. Macs won't be seriously hit by security issues, but will suffer a few minor embarrassments which will be trumped up in the press. The Mac application space will come under attack. On the other hand, Apple doesn't look ready for a security war, and will muff it, regardless. Welcome to the Hotel California!
10. I've predicted these before and not seen them so I'll predict them again:
In the security world, it is important to avoid the disclosure of being completely and utterly wrong; these above predictions are my way of avoiding that terrible fate. Neat, huh? On the other hand, I probably needn't have bothered. The security discipline is missing, presumed dead. Experts have hummed and hahed and shuffled feet over the mess for so long that it is now at the point where when anyone hears a so-called 'security expert' talk, they mentally discount most of what is said.
Try it! By the end of the year, I predict open derision of anyone who pretends to be a security professional. Dilbert, anyone?
That's for the lightweight stuff. For the heavyweight stuff, look for alternate platforms, like....
13. Unluckily for authors, artists, and blog writers, nothing much will change in DRM. Music sellers will sue file swappers, concentrating on the demographic of 12-17. File swappers will continue to swap, and learn how much better their own music is. The techniques of both sides will advance and both will be widely copied and nobody will make any money. My bulls here are courtesy of images.google.com, and if I made any money from this blog, I'd be one sorry matador. Some time around 2007, all IP owners will realise they cannot win this war, and all swappers will get religious about DRM. A papal bull will be issued and a property love-in will ensue. Nobody will fight over the rights for the book, musical, philosophical or TV. By then we will have worked out the killer rights paradigm - convenience.
Hopefully we will be invited back for the love-in. Merry Xmas, Happy New Year, Season's Greetings, and may your packets move at Kelvin speed. I wish you a happy Year of the Bull.
Some other predictions:
In the military they say no plan survives the first shot, and call this aptly "the fog of war." The best laid plans of the security industry and various parliaments (US Congress, the European Union, etc) are being challenged in war as in music. Now comes news that one DRM supplier is threatening to reverse-engineer the DRM of another supplier.
A company that specializes in rights-management technology for online stores has declared its plans to reverse-engineer the FairPlay encoding system Apple uses on iTunes Music Store purchases. The move by Cupertino-based Navio Systems would essentially break Apple’s Digital Rights Management (DRM) system in order to allow other online music retailers to sell downloads that are both DRM-encoded and iPod-compatible by early 2006. “Typically, we embrace and want to work with the providers of the DRM,” said Ray Schaaf, Navio’s chief operating officer. “With respect to FairPlay, right now Apple doesn’t license that, so we take the view that as RealNetworks allows users to buy FairPlay songs on Rhapsody, we would take the same approach.”
In 2004, after unsuccessfully courting Apple to license FairPlay, RealNetworks introduced its Harmony technology, which allowed users to buy music from online sources other than the iTunes Music Store and transfer it to their iPod. RealNetworks’ move was then denounced by Apple as adopting “the tactics and ethics of a hacker to break into the iPod.” In December of 2004, Apple shot back by releasing an iPod software update that disabled support for RealNetworks-purchased songs.
I forgot to add: This trend is by no means isolated, as pointed to by Adam. Here's an account of AOL inserting capabilities into our computers. I noticed this myself, and had to clean out these bots while making a mental note to never trust AOL with any important data or contacts.
Big mistake. That was my list, not AOL's. They've violated my personal space. By doing this they've demonstrated that my data — my list of contacts — can be tampered with at their whim. I have to wonder what comes next? Can my lists be sold, or mined for more data? Will they find out if my buddies purchase something online and then market that thing to me, on the assumption that I share mutual tastes? Just what is AOL doing with my data?
In the fall-out from the Sony root-kit affair, here's an interesting view:
Sony Rootkits: A Sign Of Security Industry Failure?
Nov. 18, 2005, by Gregg Keizer, TechWeb News
One analyst wonders why it took so long to catch onto Sony's use of rootkits on CDs and whether customers may have a false sense of security. Sony's controversial copy-protection scheme had been in use for seven months before its cloaking rootkit was discovered, leading one analyst to question the effectiveness of the security industry.
"[For] at least for seven months, Sony BMG Music CD buyers have been installing rootkits on their PCs. Why then did no security software vendor detect a problem and alert customers?" asked Joe Wilcox, an analyst with JupiterResearch.
"Where the failure is, that's the question mark. Is it an indictment of how consumers view security software, that they have a sense of false protection, even when they don't update their anti-virus and anti-spyware software?
"Or is it in how data is collected by security companies and how they're analyzing to catch trends?"
Ouch! I wondered before who was attacking who, but this is a good point that goes further. Why didn't anti-virus programs detect the attack from Sony? We rely on the anti-virus sellers in the Microsoft field to protect from the weakness of the underlying OS.
It shouldn't be a surprise to discover that there is some form of selective detection going on in the Microsoft security world - the rest of the article identifies that their source of information is problem reports, honeynets, and a vague but interesting comment:
"Frankly, we were busy looking for where the [spyware] money was going," said Curry. "We weren't looking at legitimate industries."
This is probably as it should be. Microsoft creates the vulnerabilities and the rest of the industry follows along cleaning up. It isn't possible to be more than reactive in this business, as to be proactive will lead to making mistakes - at cost to the company selling the security software. So companies will routinely promise to clean up 100% of the viruses on their list of viruses that they clean up 100% of.
(Note that this still leaves the cost of missed attacks like the Sony rootkit, but that is borne by the user, a problem for another day.)
The next interesting question is whether Sony, or the inevitable imitators that come along, are going to negotiate a pass with the anti-virus sellers. That is, pay blood money to anti-virus scanners for their rootkit. In the spam world, these are called "pink sheets" for some obscure reason. Will an industry in acceptable, paid for attacks on Microsoft's OS spring up? Or has it already sprung up and we just don't know it?
If so, I'd have to change the title of this rant to "Security is getting more economic..."
Addendum:
Hagai Bar-El points to a paper on the market for anti-forensic tools - ones that wipe your tracks after you've done your naughty deed.
I have just enjoyed reading "Evaluating Commercial Counter-Forensic Tools" by Matthew Geiger from Carnegie Mellon University. The paper presents failures in commercially-available applications that offer covering the user's tracks. These applications perform removal of (presumably) all footprints left by browsing and file management activities, and so forth. To make a long story short: seven out of seven such applications failed, to this or that level, in fulfilling their claims. ...The next thing I was wondering about is how come these products sell so well, given that they do not provide what they state they do, in a way that is sometimes so evident.
I think a partial answer to why these things sell so well might be found in the debate about security as viewed as a market in insufficient information. It has been suggested that security is a market for lemons (one where the customer does not know the good from the bad) but I prefer to refer to security as a market for silver bullets (one where neither the customer nor the supplier know good from bad).
Either way, in such insufficient markets, the way sales arise is often quite counter-intuitive. In a draft paper (html and PS), I make the claim that sales in the market for security have nothing to do with security, but are driven by other factors.
So, once we appreciate that disconnect in the market, it's quite easy to predict that vapourware sells better than real product, because the real product has higher costs which means less marketing. All other things being equal, of course.
Another partial answer is that the bad guys who do need to evade the FBI (and competitors) will know the score. They also know something that shows them to be generally astute: they generally mistrust privacy-oriented technology as being fraudulent in its claims, because the claims can't easily be checked up on. So sales of these products will tend to go to people who believe claims - being those who actually have no strong reason to rely on them.
In another story similar in spirit to the Cuthbert case, Adam points to Mark who discovers that Sony has installed malware into his Microsoft Windows OS. It's a long technical description which will be fun for those who follow p2p, DRM, music or windows security. For the rest I will try and summarise:
Mark bought a music disk and played it on his PC. The music disk installed a secret _root kit_ which is a programme to execute with privileges and take control of Microsoft's OS in unknown and nefarious ways. In this case, its primary purpose was to stop Mark playing his purchased music disk in various ways. The derivative effects were a mess. Mark knows security so he spent a long time cleaning out his system. It wasn't easy, well beyond most Windows experts, even ones with security training, I'd guess. (But you can always reformat your drive!)
No hope for the planet there, then, but what struck me was this: Who was attacking who? Was Sony attacking Mark? Was Mark attacking Sony? Or maybe they were both attacking Microsoft?
In all these interpretations, the participants did actions that were undesirable, according to some theory. Yet they had pretty reasonable justifications, on the face of it. Read the comments for more on this; it seems that the readers for the most part picked up on the dilemmas.
So, following Cuthbert (1, 2, 3) both could take each other to court, and I suppose Microsoft could dig in there as well. Following the laws of power, Sony would win against Mark because Sony is the corporation, and Microsoft would win against Sony, because Microsoft always wins.
Then, there is the question of who was authorised to do what? Again, confusion reigns, as although there was a disclaimer on the merchant site that the disk had some DRM in it, what that disclaimer didn't say was that software that would be classified as malware would be installed. Later on, a bright commenter reported that the EULA from the supplier's web site had changed to add a clause saying that software would be added to your Windows OS.
I can't help being totally skeptical about this notion of "authorisation." It doesn't pass the laugh test - putting a clause in an EULA just doesn't seem to be adequate "authorisation" to infect a user's machine with a rootkit, yet again following the spirit of Cuthbert, Sony would be authorised because they said they were, even if after the fact. Neither does the law that "unauthorises" the PC owner to reverse-engineer the code in order to protect his property make any sense.
So where are we? In a mess, that's where. The traditional security assumptions are being challenged, and the framework to which people have been working has been rent asunder. A few weeks ago the attackers were BT and Cuthbert, on the field of Tsunami charity; now it's Sony and Mark, on the field of Microsoft and music. In the meantime, the only approach that I've heard make any sense is the Russian legal theory as espoused by Daniel: Caveat Lector. If you are on the net, you are on your own. Unfortunately most of us are not in Russia, so we can't benefit from the right to protect ourselves, and have to look to BT, Sony and Microsoft to protect us.
What a mess!
Addendums:
And in closing, I just noticed this image of Planet Sony Root Kit over at Adam's entry:
Nick pointed me to his Cuthbert post, and I asked where the RSS feed was, adding "I cannot see it on the page, and I'm not clued enough to invent it." To which he dryly commented "if you tried to invent it you'd arguably end up creating many 'unauthorized' URLs in the process...."
Welcome to the world of security post-Cuthbert. He raises many points:
Under these statutes, the Web equivalent of pushing on the door of a grocery store to see if it's still open has been made a crime. These vague and overly broad statutes put security professionals and other curious web users at risk. We depend on network security professionals to protect us from cyberterrorism, phishing, and many other on-line threats. These statutes, as currently worded and applied, threaten them with career ruin for carrying out their jobs. Cuthbert was convicted for attempting to determine whether a web site that looked like British Telecom's payment site was actually a phishing site, by adding just the characters "../.." to the BT site's URL. If we are to defeat phishing and prevent cyberterrorism, we need more curious and public-spirited people like Cuthbert. Meanwhile, these statutes generally require "knowledge" that the access was "unauthorized." It is thus crucial for your future liberty and career that, if you suspect that you are suspected of any sort of "unauthorized access," take advantage of your Miranda (hopefully you have some similar right if you are overseas) right to remain silent. This is a very sad thing to have to recommend to network security professionals, because the world loses a great deal of security when security professionals can no longer speak frankly to law-enforcement authorities. But until the law is fixed you are a complete idiot if you flap your lips [my emphasis].
Point! I had not thought so far, although I had pointed out that security professionals are now going to have to raise fingers from keyboards whenever in the course of their work they are asked to "investigate a fraud."
Consider the ramifications of an inadvertent hit on a site - what are you going to do when the Met Police comes around for a little chat? Counsel would probably suggest "say nothing." Or more neutrally, "I do not recollect doing anything on that day .. to that site .. on this planet!" as the concept of Miranda rights is not so widely encouraged outside the US. Unfortunately, the knock-on effect of Cuthbert is that until you are apprised of the case in detail -- something that never happens -- you will not know whether you are a suspect or not, and your only safe choice is to keep your lips buttoned.
Nick goes on to discuss ways to repair the law, and while I agree that this would be potentially welcome, I am not sure whether it is necessary. Discussion with the cap-talk people (the most scientific security group that I am aware of, but not the least argumentative!) has led to the following theory: the judgement was flawed because it was claimed that the access was unauthorised. This was false. Specifically, the RFC authorises the access, and there is no reasonable theory to the alternative, including any sign on the site that explains what is authorised and what not. The best that could be argued - in a spirited defence by devil's advocate Sandro Magi - is that any link not provided by the site is unauthorised by default.
In contrast, there is a view among some security consultants that you should never do anything you shouldn't ever do, which is more or less what the "unauthorised" claim espouses. On the surface this sounds reasonable, but it has intractable difficulties. What shouldn't you ever do? Where does it say this? What happens if it *is* a phishing site? This view is neither widely understood by the user public nor plausibly explained by the proponents of that view, and it would appear to be a cop-out in all senses.
Worse, it blows a hole in the waterline of any phishing defence known to actually work - how are we going to help users to defend themselves if we ourselves are skating within cracking distance of the thin ice of Cuthbert? Unfortunately, explaining why it is not a rational theory in legal, economic or user terms is about as difficult as having an honest discussion about phishing. Score Cuthbert up as an own-goal for the anti-phishing side.
A story doing the rounds (1, 2) shows how money laundering is now being used to open up security in banks that don't do DD. The power of the money laundering bureaucrats is now so unquestioned that mere mention of it and a plausible pretence at it allows anyone to do the craziest things to you.
AN INGENIOUS fraudster is believed to be sunning himself on a beach after persuading leading banks to pay him more than €5 million (£3.5 million) in the belief that he was a secret service agent engaged in the fight against terrorist money-laundering. The man, described by detectives as the greatest conman they had encountered, convinced one bank manager to leave him €358,000 in the lavatories of a Parisian bar. "This man is going to become a hero if he isn't caught quickly," an officer said. "The case is exceptional, perfectly unbelievable and surreal."
In another case he did it with wires to Estonia, but had to sacrifice his wife and mother-in-law in the process:
A third payment of €5.18 million was made to an account in Estonia. This time Gilbert was quicker. Police identified him by tracing his calls, but by the time they caught up with him he was in Israel. They arrested his wife and mother-in-law at the family home outside Paris. They deny acting as accomplices.
Can you hear the mother-in-law screaming "I told you he was a SCHMUCK!?!" Read the whole thing - it's a salutary lesson in how governance is done not by blindly doing what bureaucrats and experts think is a good idea, but by thinking on your feet and doing your own due diligence. In this case, it is somewhat unbelievable that the banks did not do the due diligence on this chap, but I suppose they were waiting for an invitation!
I wrote before about rising barriers in security. We now have the spectre of our worst nightmare in security turned haptic: the British have convicted a security person for doing due diligence on a potential scam site. If you work at a British financial institution, be very very aware of how this is going to affect security.
[writes El Reg] On December 31, 2004, Cuthbert, using an Apple laptop and Safari browser, became concerned that a website collecting credit card details for donations to the Tsunami appeal could be a phishing site. After making a donation, and not seeing a final confirmation or thank-you page, Cuthbert put ../../../ into the address line. If the site had been unprotected this would have allowed him to move up three directories. After running the two tests, at between 15.12 and 15.15 on New Year's Eve, Cuthbert took no further action. In fact his action set off an Intrusion Detection System at BT's offices in Edinburgh and the telco called the police.
It may well be that the facts of the case are not well preserved in the above reporting; no journalist avoids a good story, and El Reg is no exception. A reading of other reports didn't disagree, so I'll assume the facts as written:
[El Reg again] DC Robert Burls of the Met's Computer Crime Unit said afterwards: "We welcome today's verdict in a case which fully tested the computer crime legislation and hope it sends a reassuring message to the general public that in this particular case the appropriate security measures were in place thus enabling donations to be made securely to the Tsunami Appeal via the DEC website."
The message is loud and clear, but it is not the one that they wanted. The Metropolitan's Computer Crime Unit is blissfully unaware of the unintended consequences it has unleashed... Let's walk through it.
According to the testimony, the site gave some bad responses and raised red flags. A routine event, in other words. Cuthbert checked it out a little to make sure it wasn't a phishing site, and apparently found it wasn't. This would have been a very valuable act for him, as otherwise he might have had to change all his card and other details if they had been compromised by a real phishing attack (of which we have seen many).
The 'victim' in the case was named as Disasters Emergency Committee (DEC). From Cuthbert, DEC received a 30 pound donation via donate.bt.com and then said (after he was charged):
[more] There's no suggestion any money was stolen and the Disasters Emergency Committee has issued a statement reassuring the public that the internet remains a safe way to donate money to victims of the 26 December disaster.
The real victim may well be the British public who now only have BT's IDS and the Metropolitan Computer Crime Unit between them and phishing. Let's imagine we are tasked to teach users or banks or whoever how to defend themselves against phishing and other Internet frauds. What is it that we'd like to suggest? Is it:
a. be careful and look for signs of fraud by checking the domain name whois record, tracerouting to see where the server is located, probing the webserver for oddities?
Or
b. don't do anything that might be considered a security breach?
Either way, the user is in trouble because the user has no capability to determine what is and what is not an "unauthorised access". Banks won't be able to look at phishing sites because that would break the law and invite prosecution; after this treatment, security professionals are going to be throwing their hands up saying, "sorry, liability insurance doesn't cover criminal behaviour." The Metropolitan Computer Crime Unit will likewise be hamstrung by not being able to breach the very laws they prosecute.
There is a blatant choice here not only for security professionals and banks, but also for the hapless email user. Follow the law, and perform due diligence without the use of ping, traceroute, telnet, whois, or even browsers ... Or follow the protocols and send the recognised and documented requests to respond according to the letter and spirit of the RFCs, and in the meantime discover just how bona fide a site is. Tyler dug up the appropriate document in this case:
So RFC 3986 explicitly lists this construction as a valid URL. See section 5.4.2., "Abnormal Examples":
Although the following abnormal examples are unlikely to occur in normal practice, all URI parsers should be capable of resolving them consistently. Each example uses the same base as that above.
Parsers must be careful in handling cases where there are more ".." segments in a relative-path reference than there are hierarchical levels in the base URI's path. Note that the ".." syntax cannot be used to change the authority component of a URI.
"../../../g" = "http://a/g"
"../../../../g" = "http://a/g"
The way the security bureaucrats are treating this war, possession of an RFC will become evidence of criminal thoughts! The battle for British cybersecurity looks over before it has begun.
Addendum: Due diligence defined and reasonable duty of care discussed. Also see Alice in Credit-Card Land and also Phishing and Pharming and Phraud - Oh My for discussions in more depth. Also, I forgot to tip my hat to Gerv for the original link.
When I first learnt of technical trading I puzzled over it for a long time. By its own admission it ignored the rules of theory; yet the technical traders believe in it immensely, and profitably one supposes, and they consider the alternative to be useless. (In this at least, they are in agreement with the efficient markets hypothesis.)
I eventually came to the conclusion that in the absence of any good theory, then a theory of another sort must evolve. Some sort of shared understanding must evolve to permit a small interested community to communicate on a sort of insider basis. There is probably, I postulated, some economics or psychology law somewhere that says that a group of insiders is somewhat contiguous with a shared language, shared theory and eventually shared beliefs.
That sounds like a Schelling point. Is technical trading - flags, pennants, head&shoulders, etc - a Schelling point?
Footnote: wikipedia describes technical analysis which is close. In the above I'm more referring to what they describe as charting.
Just presented (slides) at ICETE2005 by Daniel Nagy: On Digital Cash-like Payment Systems:
Abstract. In the present paper a novel approach to on-line payment is presented that tackles some issues of digital cash that have, in the author's opinion, contributed to the fact that despite the availability of the technology for more than a decade, it has not achieved even a fraction of the anticipated popularity. The basic assumptions and requirements for such a system are revisited, clear (economic) objectives are formulated and cryptographic techniques to achieve them are proposed.
Introduction. Chaum et al. begin their seminal paper (D. Chaum, 1988) with the observation that the use of credit cards is an act of faith on the part of all concerned, exposing all parties to fraud. Indeed, almost two decades later, the credit card business is still plagued by all these problems and credit card fraud has become a major obstacle to the normal development of electronic commerce, but digital cash-like payment systems similar to those proposed (and implemented) by D. Chaum have never become viable competitors, let alone replacements for credit cards or paper-based cash. One of the reasons, in the author's opinion, is that payment systems based on similar schemes lack some key characteristics of paper-based cash, rendering them economically infeasible. Let us quickly enumerate the most important properties of cash:
1. "Money doesn't smell." Cash payments are -- potentially -- anonymous and untraceable by third parties (including the issuer).
2. Cash payments are final. After the fact, the paying party has no means to reverse the payment. We call this property of cash transactions irreversibility.
3. Cash payments are _peer-to-peer_. There is no distinction between merchants and customers; anyone can pay anyone. In particular, anybody can receive cash payments without contracts with third parties.
4. Cash allows for "acts of faith" or naive transactions. Those who are not familiar with all the antiforgery measures of a particular banknote or do not have the necessary equipment to verify them, can still transact with cash relying on the fact that what they do not verify is nonetheless verifiable in principle.
5. The amount of cash issued by the issuing authority is public information that can be verified through an auditing process.
The payment system proposed in (D. Chaum, 1988) focuses on the first characteristic while partially or totally lacking all the others. The same holds, to some extent, for all existing cash-like digital payment systems based on untraceable blind signatures (Brands, 1993a; Brands, 1993b; A. Lysyanskaya, 1998), rendering them unpractical.
...[bulk of paper proposes a new system...]
Conclusion. The proposed digital payment system is more similar to cash than the existing digital payment solutions. It offers reasonable measures to protect the privacy of the users and to guarantee the transparency of the issuer's operations. With an appropriate business model, where the provider of the technical part of the issuing service is independent of the financial providers and serves more than one of the latter, the issuer has sufficient incentives not to exploit the vulnerability described in 4.3, even if the implementation of the cryptographic challenge allowed for it. This parallels the case of the issuing bank and the printing service responsible for printing the banknotes.
The author believes that an implementation of such a system would stand a better chance on the market than the existing alternatives, none of which has lived up to the expectations, precisely because it matches paper-based cash more closely in its most important properties.
Open-source implementations of the necessary software are being actively developed as parts of the ePoint project. For details, please see http://sf.net/projects/epoint
Signs abound that it is becoming more difficult to do basic security and stay clean oneself. An indictment for selling software was issued in the US, and this opens up the Pandora's box of what is reasonable behaviour in writing and selling.
Can writing software be a crime? By Mark Rasch, SecurityFocus (MarkRasch at solutionary.com) Published Tuesday 4th October 2005 10:05 GMT
Can writing software be a crime? A recent indictment in San Diego, California indicates that the answer to that question may be yes. We all know that launching certain types of malicious code - viruses, worms, Trojans, even spyware or sending out spam - may violate the law. But on July 21, 2005 a federal grand jury in the Southern District of California indicted 25 year old Carlos Enrique Perez-Melara for writing, advertising and selling a computer program called "Loverspy," a key logging program designed to allow users to capture keystrokes of any computer onto which it is installed. The indictment raises a host of questions about the criminalization of code, and the rights of privacy for users of the Internet and computers in general.
We all might agree that what the defendant is doing is distasteful at some level, but the real danger here is that what is created as a precedent against key loggers will be used against other things. Check your local security software package list, and mark about half of them for badness at some level. I'd say this is so inevitable that any attention paid to the case itself ("we need to stop this!") is somewhere between ignorance and willful blindness.
On a similar front, recall the crypto regulations that US security authors struggle under. My view is that the US government's continuing cryptopogrom feeds eventually into the US weakness against cyber threats, so they've only themselves to blame. Which might be ok for them, but as software without crypto also affects the general strength of the Internet at large, it's yet another case of society at large v. USG. Poking around over on PRZ's xFone site I found yet another development that will hamper US security producers from securing themselves and us:
Downloading the Prototype
Since announcing this project at the Black Hat conference, a lot of people have been asking to download the prototype just to play with it, even though I warned them it was not a real product yet. In order to make it available for download, I must take care of a few details regarding export controls. After years of struggle in the 1990's, we finally prevailed in our efforts to get the US Government to drop the export controls in 2000. However, there are still some residual export controls in place, namely, to prevent the software from being exported to a few embargoed nations-- Cuba, Iran, Libya, North Korea, Sudan, and Syria. And there are now requirements to check customers against government watch lists as well, which is something that companies such as PGP have to comply with these days. I will have to have my server do these checks before allowing a download to proceed. It will take some time to work out the details on how to do this, and it's turning out to be more complicated than it first appeared.
(My emphasis.) Shipping security software now needs to check against a customer list as well? Especially one as bogus as the flying-while-arab list? Phil is well used to being at the bleeding edge of the crypto distribution business, so his commentary indicates that the situation exists, and he expects to be pursued on any bureaucratic fronts that might exist. Another sign of increasing cryptoparanoia:
The proposal by the Defense Department covers "deemed" exports. According to the Commerce Department, "An export of technology or source code (except encryption source code) is 'deemed' to take place when it is released to a foreign national within the United States."
The Pentagon wants to tighten restrictions on deemed exports to restrict the flow of technical knowledge to potential enemies.
A further issue that has given me much thought is the suggestion by some that security people should not break any laws in doing their work. I saw an article a few days back on Lynn's list (it was lost, now found) that described how the FBI cooperates with security workers who commit illegal or questionable acts in chasing bad guys, in this case extortionists in Russia (this para heavily rewritten now that I've found the article):
The N.H.T.C.U. has never explicitly credited Prolexic’s engineers with Maksakov’s arrest. “The identification of the offenders in this came about through a number of lines of inquiry,” Deats said. “Prolexic’s was one of them, but not the only one.” In retrospect, Lyon said, “The N.H.T.C.U. and the F.B.I. were kind of using us. The agents aren’t allowed to do an Nmap, a port scan”—techniques that he and Dayton Turner had used to find Ivan’s zombies. “It’s not illegal; it’s just a little intrusive. And then we had to yank the zombie software off a computer, and the F.B.I. turned a blind eye to that. They kind of said, ‘We can’t tell you to do that—we can’t even suggest it. But if that data were to come to us we wouldn’t complain.’ We could do things outside of their jurisdiction.” He added that although his company still maintained relationships with law-enforcement agencies, they had grown more cautious about accepting help.
What a contrast to the view that security workers should never commit a "federal offence" in doing their work.
I find the whole debate surrealistic as the laws that create these offences are so broad and sweeping that it is not clear to me that a security person can do any job without breaking some laws; or is this just another sign that most security people are more bureaucrats than thinkers? I recently observed a case where in order to find some security hardware, a techie ran a portscan on a local net and hard-crashed the very equipment he was looking for. In the ensuing hue and cry over the crashed equipment (I never heard if they ever recovered the poor stricken device...), the voice that was heard loudest was that "he shouldn't be doing that!" The voice that went almost unheard was that the equipment wasn't resilient enough to do the job of securing the facility if it fell over when a mere port scan hit it.
Barriers are rising in the security business. Which is precisely the wrong thing to do; security is mostly a bottom-up activity, and making it difficult for the small players and the techies at the coalface will reduce overall security for all. The puzzling thing is why other people do not see that; why we have to go through the pain of Sarbanes-Oxley, expensive CA models, suits against small software manufacturers, putting software tool-makers and crypto protocol designers in jail and the like in order to discover that inflexible rules and blind bureaucracy have only a limited role to play in security.
Addendum: Security consultant convicted for ... wait for it: doing due diligence on a charity site in case it was a scam!
(Perilocity reports that) Lloyd's and OSRM are to issue open source insurance, including cover against being attacked by commercial vendors over IP claims.
(Adam -> Bruce -> ) an article, "The Six Dumbest Ideas in Computer Security," by Marcus Ranum. I'm not sure who Marcus is but his name keeps cropping up - and his list is pretty good:
I even agree with them, although I have my qualms about these two "minor dumbs:"
The reason this doesn't work is basic economics. You can't generate revenues until the business model is proven and working, and you can't secure things properly until you've got a) the revenues to do so and b) the proven business model to protect! The security field is littered with business models that put security first, but very few of them were successful, and often that very choice was sufficient reason for their failure.
Which is not to dispute the basic logic that most production systems defer security until later and later never comes ... but there is an economic incentives situation working here that more or less explains why - only valuable things are secured, and a system that is not in production is not valuable.
There are several errors here, starting with a badly formed premise leading to can/can't arguments. Secondly, we aren't in general risking our life, just our computer (and our identity, of late...). Thirdly, it's a risk-based thing - there is no Axiomatic Right From On High that you have to have secure computing, nor be able to drive safely to work, nor fly.
Indeed, no less than Richard Feynman is quoted (in support of #3) on how to deal, and misdeal, with the occasional problem.
Richard Fenyman's [sic] "Personal Observations on the Reliability of the Space Shuttle" used to be required reading for the software engineers that I hired. It contains some profound thoughts on expectation of reliability and how it is achieved in complex systems. In a nutshell its meaning to programmers is: "Unless your system was supposed to be hackable then it shouldn't be hackable."
Feynman found that the engineering approach to Shuttle problems was (often or sometimes) to rewrite the procedures. Instead of fixing the problems, the engineers would move them into the safety zone conveniently created by design tolerances. Add to that the normal management pressures, including the temptation to call the reliability 1 in 100,000 when 1 in 100 is more likely! (And even that seems too low to me.)
Predictably, Feynman suggests not doing that, and finishes with this quote:
"For a successful technology, reality must take precedence over public relations, for nature cannot be fooled."
A true engineer :-)
See earlier writings on failure in complex systems and also compare Feynman's comments on the software and hardware reliability with this article and earlier comments.
As discussed here a while back in depth, there is an increasing Human Resources problem in some countries. Here's actual testing of the scope of the issue, whereby employers effectively invite people to lie to them in the interview, and jobseekers happily oblige:
One CV in four is a work of fiction
By Sarah Womack, Social Affairs Correspondent (Filed: 19/08/2005)
One in four people lies on their CV, says a study that partly blames the "laxity" of employers.
The average jobseeker tells three lies but some employees admitted making up more than half their career history.
A report this month from The Chartered Institute of Personnel and Development highlights the problem. It says nearly a quarter of companies admitted not always taking up candidates' references and a similar percentage routinely failed to check absenteeism records or qualifications.
Example snipped... The institute said that the fact that a rising number of public sector staff lie about qualifications or give false references was a problem not just for health services and charities, where staff could be working with vulnerable adults or children, but many public services. The institute said a quarter of employers surveyed "had to withdraw at least one job offer. Others discover too late that they have employed a liar who is not competent to do the job."
Research by Mori in 2001 showed that 7.5 million of Britain's 25.3 million workers had misled potential employers. The figure covered all ages and management levels.
The institute puts the cost to employers at £1 billion.
© Copyright of Telegraph Group Limited 2005.
If it found that 25% of workers admitted to making material misrepresentations, that shows it is not an aberration; rather, lying to get a job is normal. Certainly I'd expect similar results in the computing and banking (private) sectors, and before you get too smug over the pond, I'd say if anything the problem is worse in the US of A.
There is no point in commenting further than to point to this earlier essay: Lies, Uncertainty and Job Interviews. I wonder if it had any effect?
It looks like Microsoft is about to release its anti-phishing tool (first mooted months ago here):
WASHINGTON - Microsoft Corp. will soon release a security tool for its Internet browser that privacy advocates say could allow the company to track the surfing habits of computer users. Microsoft officials say the company has no intention of doing so.
The new feature, which Microsoft will make available as a free download within the next few weeks, is prompting some controversy, as it will inform the company of Web sites that users are visiting.
The browser tool is being called a "Phishing Filter." It is designed to warn computer users about "phishing," an online identity theft scam. The Federal Trade Commission estimates that about 10 million Americans were victims of identity theft in 2005, costing the economy $52.6 billion dollars.
What follows in that article is lots of criticism. I actually agree with that criticism but in the wider picture, it is more important for Microsoft to weigh into the fight than to get it right. We have a crying need to focus the problem on what it is - the failure of security tools and the security infrastructure that Microsoft and other companies (RSADSI, VeriSign, CAs, Mozilla, Opera...) are peddling.
Microsoft, by dint of being the only large player taking phishing seriously, is now the leader in security thinking. They need to win some, I guess.
(Meanwhile, I've reinstated the security top tips on the blog, by popular demand. These tips are designed for ordinary users. They are so brief that even ordinary users should be able to comprehend - without needing further reference. And, they are as comprehensive as I can get them. Try them on your mother and let me know...)
Rick points at a nice page showing lots of OpenPGP web of trust metrics.
The web of trust in OpenPGP is an informal idea based on signing each other's keys. As it was never really specified what this means, there are two schools of thought, being the one where "I'll sign anyone's key if they give me the fingerprint" and the other more European inspired one that Rick lists as "it normally involves reviewing a proof of their identity." Obviously these two are totally in conflict. Yet, the web of trust seems not to care too much, perhaps because nobody would really rely on the web of trust only to do anything serious.
So an open question is due - how many out there believe in the model of "proving identity then signing" and how many out there subscribe to the more informal "show me your fingerprint and I'll trust your nym?"
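Whichever school you belong to, the mechanical step underneath is the same: compute the key's fingerprint locally and compare it against the one the key holder gave you out of band. Here's a minimal sketch in Python; the function names are mine, and real OpenPGP (v4) fingerprints are computed over the formatted public key packet rather than raw key bytes, so treat this as the shape of the check, not the spec.

    import hashlib

    def fingerprint(public_key_bytes):
        # Stand-in for the OpenPGP fingerprint calculation (v4 uses SHA-1
        # over the public key packet); here we just hash the raw key bytes.
        return hashlib.sha1(public_key_bytes).hexdigest().upper()

    def checks_out(public_key_bytes, fingerprint_read_to_me):
        # Compare the locally computed fingerprint with the one the key
        # holder read out over the phone or printed on a business card.
        given = fingerprint_read_to_me.replace(" ", "").upper()
        return fingerprint(public_key_bytes) == given

Whether you then also demand a passport before signing is the policy question above; the cryptography doesn't care either way.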
What's this got to do with Financial Cryptography? PKI, the white elephant of Internet security, is getting a shot in the arm from the web of trust. In order to protect web browsing, CACert is issuing certificates for you, based on your subscription and your entry into a web of trust. In one sense they have outsourced (strong) identity checking to subscribers, in another they've said that this is a much better way to get certificates to users, which is where security begins, not ends.
More pennies: I've got my Thunderbird and Firefox back, so now I can see the RSS feeds. I came across this from Risks: How to build software for use in a den of thieves. We'd call that Governance and insider threats in the FC world - some nice tips there though.
PaymentNews reports that PayPal CEO Jeff Jordan presented to Etail 2005:
Nearly 10 percent of all U.S. e-commerce is funneled through PayPal, according to Jordan. One out of seven transactions crosses national boundaries. Consumers in more than 40 countries send PayPal, and those in more than 20 countries receive this currency.
"Our goal," he said, "is to be the global standard for online payments."
(More on Paypal.) And more from Scott:
Eliminate the banking middle man -- that's what Zopa's about. Rebecca Jarvis reports for Business 2.0 on what the UK's Richard Duvall is up to with Zopa.
Are you a better lender than a bank is? Richard Duvall, who helped launch Britain's largest online bank, Egg, thinks you are. His new venture, Zopa, is an eBay-like website that lets ordinary citizens borrow money from other regular Joes -- no bank needed.
In mailtapping news from Lynn, a US court of appeals reversed a ruling, and said that ISPs could not copy and read emails. Meanwhile a survey found that small firms were failing to copy and escrow emails as instructed. And we now have the joy of companies competing to datamine the outgoing packets in order to spy on insider's net habits. The sales line? "every demo results in a sacked employee..."
E-mail wiretap case can proceed, court says
Study Finds Small Securities Firms Still Fail To Comply With SEC E-mail Archiving Regulations
When E-Mail Isn't Monitored
In closing, Everquest II faced off with hackers who had found a bug to create currency. We've seen this activity in the DGC world, and it no doubt has hit the Paypal world from time to time; it's what makes payment systems serious.
Voting is a particularly controversial application (or feature) for FC because of the difficulty in both setting the requirements, and the 'political requirement' of ensuring a non-interfered vote for each person. I've just got back from an alpine retreat where I participated in a small experiment to test votes with tokens, called Beetle In A Box. The retreat was specifically purposed to do the early work to build a voting application, so it was fun to actually try out some voting.
Following on from our pressed flower production technique, we 'pressed & laminated' about 100 'beetles,' or symbols looking like squashed beetles. These were paired in plus and minus form, and created in sets of similar symbols, with 5 colours for different purposes. Each person got a set of 10, being 5 complementary plus-and-minus pairs.
The essence of the complicated plus and minus tokens was to try out delegated voting. Each user could delegate their plus token to another user, presumably on the basis that this other user would know more and was respected on this issue. But they could always cast their minus to override the plus, if they changed their minds. More, it is a sad fact of voting life that unless you are in Australia, where political voting is compulsory, most people don't bother to turn up anyway.
To simulate this, we set up 4 questions (allocating 4 colours) to be held at 4 different places - a deliberate conflict. One of the questions was the serious issue of naming the overall project and we'd been instructed to test that; the others were not so essential. Then we pinned up 21 envelopes for all the voters and encouraged people to put their plus tokens in the named envelope of their delegatee.
When voting time came, chaos ensued. Many things went wrong, but out of all that we did indeed get a vote on the critical issue (not that this was considered binding in any way). Here's the stats:
Number of direct voters: 4
Number of delegated votes: 3
Therefore, total votes cast: 7
Winning project name: Mana, with 3 votes.
So, delegated voting increased the participation by 75%, taking total participation to 33% (7 of 21 participants). That's significant - that's a huge improvement over the alternative and indicates that delegated voting may really be useful or even needed. But, another statistic indicates there is a lot more that we could have done:
Number of delegated votes, not cast: 9
That is, in the chaos of the game, many more people delegated their votes, but the tokens didn't make it to the ballot. The reasons for this are many: one is just the chaos of understanding the rules and the way they changed on the fly! Another is that many delegatees simply didn't participate at all, and in particular the opinion leaders who collected fat envelopes forgot they were supposed to vote, and just watched the madness around them (in increasing frustration!).
Canny FCers will recognise another flaw in the design - once the tokens had been placed into envelopes, the delegatees then had to collect them from their envelopes. And, if a delegatee was not going to attend that meeting (there were 4 conflicting meetings, recall), they would have to become a delegator in turn and re-delegate. Thus forcing the cycle to start again, ad infinitum.
Most people only went to the pinboard once. So the formal delegation system simply failed on efficiency grounds, and it is lucky that some smart political types did some direct swaps and trades on their delegated votes.
How then to do this with physical tokens is an open question. If one wants infinite delegation, I can't see how to do it efficiently. With a computer system, things become more plausible, but even then how do we model a delegated vote in software?
Is it a token? Not quite, as the delegated vote can be overridden and thus we need a token that can be yanked back. Or overridden or redirected. So it could be considered to be an accounting entry - like nymous account money - but even then, the records of a payment from alice to bob need to be reversible.
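To make that concrete, here is a minimal sketch in Python of the rule we used: a delegated plus token counts with the delegatee's choice unless the delegator cast their minus to yank it back. The data is made up for illustration - it is not the actual ballot from the retreat - and it only handles one level of delegation; chains of re-delegation would need cycle detection, which is exactly the problem described above.

    def tally(direct_votes, delegations, overrides):
        # direct_votes: voter -> choice (a plus token cast in person)
        # delegations:  voter -> delegatee (a plus token handed on)
        # overrides:    voters who cast their minus token to cancel the delegation
        counts = {}
        for choice in direct_votes.values():
            counts[choice] = counts.get(choice, 0) + 1
        for voter, delegatee in delegations.items():
            if voter in overrides:
                continue                          # the minus yanked the plus back
            choice = direct_votes.get(delegatee)
            if choice is not None:                # delegatee actually turned up and voted
                counts[choice] = counts.get(choice, 0) + 1
        return counts

    # Hypothetical ballot: two direct voters, two delegations, one override.
    print(tally({"ann": "Mana", "bob": "Medici"},
                {"cat": "ann", "dan": "bob"},
                overrides={"dan"}))               # -> {'Mana': 2, 'Medici': 1}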
One final result. Because I was omnipresent (running the meeting that took the important vote) I was able to divine which were the delegated votes. And, in this case, if the delegated votes had been stripped out, and only direct voting handled, the result of the election decision would have changed: the winning name would have been Medici, which was what I voted for.
Which I count as fairly compelling evidence that whatever the difficulties in implementing delegated voting, it's a worthwhile goal for the future.
In payment news, two stories are ending. Jim reported on the Blind Signature patent expiry party:
Guest of honor David Chaum challenged us: How to change the world for the better by implementing new protocols.
I like to think that is what FC is about; but I'm also old enough and bloodied enough to know that without revenues we can't sustain the cash flow to employ the programmers to write the protocols to change the world for the better....
And, over in Congress, the CEO of CardSystems is moaning that his company might be out of business if Mastercard were to follow Visa and Amex in dropping them from the credit card processing business.
The head of a payment processing firm that was infiltrated by computer hackers, exposing as many as 40 million credit card holders to possible fraud, told Congress yesterday that his company is "facing imminent extinction" because of its disclosure of the breach and industry's reaction to it.
"As a result of coming forward, we are being driven out of business," John M. Perry, chief executive of CardSystems Solutions Inc., told a House Financial Services Committee subcommittee considering data-protection legislation. He said that if his firm is forced to shut down, other financial companies will think twice about disclosing such attacks.
A curious response - as the company offers little or nothing positive against the damage that it has potentially done to 240,000 credit card holders, I'm not sure what there is to say! (What was the average cost of identity theft to the victim, again?)
There's little help from the credit card companies:
Credit card companies say they are trying to stave off unneeded panic. And costs are an issue as well; if a new card costs $30 to create, 40 million cancelled cards would cost $1.2 billion to replace.
"Obviously." Seriously, this company must die. Like Arthur Andersen, the message has to be sent. Regulation didn't work. The Regulator didn't do anything. Contracts didn't work. Audits do not work, whether it was by Cable & Wireless or Mickey Mouse. Pontifications by a myriad of security experts didn't work.
Nothing worked - and it's time to form a hanging party and go get us some bandits. (If it's any consolation, when the users get to a-lynching in civil courts, CardSystems will appreciate the humane way out.) The Christian Science Monitor goes on to report the considered thought and intelligence seen in Washington DC:
But state lawmakers were skeptical. "It seems there's a very paternalistic theme to those comments, which is 'We know what's best for consumers,'" said Massachusetts state Rep William M. Straus.
He said the issue should be turned over to the victims of ID theft: "Would they trade a 10 percent discount from Sears for everything they've been through?"
Now there's a thought! In closing, looks like ePoints made a big splash in the New Scientist:
The ePoints system set up by Agnes Koltay and Daniel Nagy is different. It allows anonymous person-to-person transactions over the web, and though the software itself costs money, Nagy says every subsequent transaction will be free. Charles Cohen, founder of failed e-currency Beenz, supports this thinking. People will only adopt new payment systems if they are free, he says.
To use ePoints, a person requests an ePoint "note" - in reality an encrypted code that represents some amount of ePoints - from an ePoints issuer. The issuer is the person or body that administers the system and ensures that ePoints aren't duplicated. The issuer cryptographically signs each ePoint note in exchange for some money of equivalent value in another currency, say pounds or dollars, or for some work done, or as payment for some other service.
When someone spends ePoints, the person receiving them in payment contacts the issuer to verify they are not counterfeit. The cryptographic algorithms ensure the issuer cannot tell where the ePoint originated, nor the chain of hands it has passed through, only that he has been asked to confirm an ePoint is authentic.
But anonymity alone is not going to make people use it. If ePoints is going to catch on, it will have to find a niche that makes it attractive to a large pool of users. That's where ePoints' cheap and borderless nature comes in. ePoints can be seen as an international electronic currency and this, Nagy and Koltay believe, along with security and anonymity, will provide the niche it needs.
ePoints may also be attractive to companies that want an electronic method for handling payments of a few pennies. Credit card companies charge a minimum fee for each transaction they process, and for transactions of less than a few dollars this can represent a large slice of the total. In return, credit card companies provide a high level of security. But as Nagy points out, this is overkill when only small sums are changing hands. A penny transaction should not need a lot of security, Nagy says. A thief will gladly invest five pennies of effort to steal a credit card, but no smart thief will spend five pennies to steal a one-penny ePoint.
Nagy and Koltay are not the only ones aiming at the micropayments niche. In spite of the rocky beginning of digital cash in the 1990s, several alternative micropayment systems have sprung up, including Peppercoin, PayCash and Open Money.
And recently a big name has shown interest. Nagy says a test version of the entire ePoints software system was recently downloaded by engineers at Google. News reports suggest the company will soon launch a competing service to PayPal. As with a cash transaction, only the two parties to the transfer need know each other's true identity.
The rest of the article is well worth reading as well.
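As an aside, the issue-and-verify cycle the article describes can be sketched in a few lines. This is my illustration, not the ePoint code: a keyed hash stands in for the issuer's signature, and the unlinkability that a real blind-signature scheme provides is simply assumed rather than implemented.

    import hmac, hashlib, os

    ISSUER_KEY = os.urandom(32)   # the issuer's signing secret (stand-in)
    spent = set()                 # serial numbers already redeemed

    def issue(amount):
        # Issuer signs a fresh note in exchange for value received.
        serial = os.urandom(16).hex()
        msg = f"{serial}:{amount}".encode()
        sig = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
        return {"serial": serial, "amount": amount, "sig": sig}

    def redeem(note):
        # The receiver presents the note; the issuer checks it is genuine
        # and has not been presented before (the double-spend check).
        msg = f"{note['serial']}:{note['amount']}".encode()
        expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(note["sig"], expected) or note["serial"] in spent:
            return False
        spent.add(note["serial"])
        return True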
Robert Murphy defends Hayek's proposals for private issuance against Mises and Rothbard. Such is of course welcome but bemusing: as it does not take into account modern developments in the financial cryptography world, it seems quaint and historical. This is now a solved problem, and I suppose if I'd thought about it I would have written a paper on it!
Private issuance of monies by corporations is easy and within reach of all. The technical problems are solved, as are the economic interests and the value feedback equations. I know this because I - or my company - have been issuing for two (oops not four) years now, and even though our scale is tiny, it still works so efficiently that I cannot see a circumstance where the company would choose not to issue.
Corporate issuance works like this:
A company creates a contract that redeems for any service the company offers. Not gold, not a basket of Walmart items, etc etc. It redeems in its own services and units. All commercial companies have some sort of service or good, by definition, although I grant non-profits sometimes do not. This of course means we need an external unit of account, but that's no problem.
Then, the corporation pays all its people in the issue. Every month their paychecks come in the corporate unit. As this is a digital operation, it is painless and quick.
Finally, all purchasers to the company are instructed to pay all invoices in the corporate issue. In order to get their corporate units they contact the employees and arrange a trade. Purchasers might have gold, dollars or some other unit, and employees often need exactly those things. A trade is arranged, the value moves from the employees to the purchasers, and then back to the company as the invoice is paid.
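A toy sketch of that circular flow, in Python, with one issuer, one employee and one purchaser; the dollar leg of the employee's trade happens outside the ledger, and the issuer's negative balance is simply the float in circulation. Illustrative only - real issuance software keeps rather more state than a dictionary.

    ledger = {"company": 0, "employee": 0, "purchaser": 0}   # balances in corporate units (CU)

    def transfer(frm, to, amount, memo):
        ledger[frm] -= amount
        ledger[to] += amount
        print(f"{memo}: {amount} CU {frm} -> {to}   balances {ledger}")

    transfer("company", "employee", 1000, "monthly paycheck in corporate units")
    # The purchaser needs 800 CU to settle an invoice, so buys them from the
    # employee for dollars (the dollar payment is off-ledger here).
    transfer("employee", "purchaser", 800, "employee sells CU for dollars")
    transfer("purchaser", "company", 800, "invoice settled in corporate units")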
All else is evolution. How does the company pay for bills in local national currency? Simple - we post a bill on the notice board (perhaps virtually) with an offer of X+1% in the corporate unit to get it paid. Maybe we need to up the number to 2% the next day, but soon enough someone will decide to take the profit.
How do buyers find sellers? In our small scenario by word of mouth. But markets would no doubt spring up once a threshold is reached, and a market is nothing but a mailing list or a chatroom.
How does a big purchaser deal with lots of little sellers? Someone stands in as an intermediary, of course, a process that we have seen arise in recent times. How do profits get taken? How do share dividends get paid? Taxes, etc etc - all the same way. These are all bills.
How is accountancy handled? This is perhaps the best bit. The system is its own accountancy system. Everyone has their own books for their own transactions (a la Triple Entry Accounting in FC++ #2) and the need to keep a separate book of accounts disappears.
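A sketch of that idea, assuming the triple entry pattern: the signed receipt is the entry, and payer, payee and issuer each hold the identical record, so there is nothing to reconcile. A plain keyed hash stands in for the issuer's signature; the names and fields are illustrative, not our production format.

    import hashlib, json, time

    def make_receipt(payer, payee, amount, issuer_secret):
        body = {"payer": payer, "payee": payee, "amount": amount, "time": time.time()}
        blob = json.dumps(body, sort_keys=True).encode()
        body["sig"] = hashlib.sha256(issuer_secret + blob).hexdigest()  # stand-in for a real signature
        return body

    receipt = make_receipt("purchaser", "company", 800, b"issuer-signing-key")
    # Three parties, three copies of the same entry - that *is* the bookkeeping.
    books = {"payer": receipt, "payee": receipt, "issuer": receipt}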
How is bankruptcy handled? Perhaps as testament to the power of this system, it works as well in times of lean payments as it does in boom times. That's because the strong recording capabilities of the digital payments system mean that what was liquid converts seamlessly into debt, thus relieving everyone of the need to account for their position. In fact, we originally floated the new issue in a time of quasi-bankruptcy, and within a single weekend, every employee had converted their notes, their promises, their deferred invoices into corporate issue. Because it was so much easier and so much clearer.
Is it efficient? On a small scale, for users, it can be about as much work as dealing with the average invoice, per transaction. As scale ramps up, it reduces to about the same order of cost of getting a coffee out of a coin-powered machine.
For the issuer, it has one final magic component - it raises long term financing on floating terms. Nominally at zero percent, the costs of holding and converting and indeed financing the working capital are worked out in the exchange rate between the corporate rate and local units. Your employees are your investment bank; it's painless and obviates the need for a CFO until you've added another two zeros to the revenues. Yet another saving.
Open is a big word these days. It started out as open source, being the progression of AT&T's distro of Unix leading to BSD and then to GPL. For computer source code, open works well, as long as you are doing the code anyway and can't figure out how to sell it. Instead of just keeping it secret, share the source and profit on the service.
Or something - the economic ramifications are quite interesting and deserve a much wider attention (especially with our new Hayekian and Misean perspectives, and the changing economics of digital property).
People have also applied open to other things: I apply it to Governance, there is a group working on words and music called Open Commons and this blog is under one of their licences. People have even prepared legal cases in Open forums. The list of experiments in Open This and Open That is quite long. I want to apply it to FC, and indeed we've published many papers and much source code without seeing much or any openness in FC at the project level to date, so it remains a big question: just how far does open go?
One of the things we have found is that open source helps security. People have often made too much of this - claiming that open source is necessary for security. No such rule applies: it certainly helps a lot, but so do many other things, and there are plenty of secure systems with closed source. Strangely, open also clashes with the process of fixing bugs. Clearly, if there is a bug in your source, and you know it, you want it fixed before the bad guys find out. Telling everyone that there is a bug might not be the smartest thing to do.
So security is more than source, it's a process. The security process involves many elements. Patching and bug fixes, of course. These are the things that non-security projects know about and the things that the press reports on (like those silly Symantec comments on how many security advisories each competitor has issued).
But there is more, much more, and these are the things that projects with a Security Goal have developed. One of these things is a relatively open process. What this means is that decisions on security are taken openly - in open forums. Even though uncomfortable and noisy, the result is better because the interests of all are heard, including the users, who normally aren't adept enough to enter these highly technical debates. Hopefully someone there will represent the users if they know this is an open process.
The problem with the alternative is "agenda capture" (or co-option?). If a project conducts secret negotiations to do some sort of security thing, then you can bet your bottom dollar that the participants want it secret because they are attempting some sort of coup. They are trying to achieve something that won't work when exposed to the disinfectant of open sunlight. It infringes the interests of one group or another, and if it didn't there wouldn't be any reason to keep it secret.
So it was with sadness that I discovered that the Mozilla Foundation had entered into the smoke filled rooms of secret negotiations for security changes. These negotiations are apparently over the security User Interface. It involves some other browser manufacturers - Microsoft was mentioned - and some of the CAs - Verisign has not been mentioned that I have heard.
There is no doubt that Mozilla has walked into an agenda capture process. It specifically excluded one CA, CACert.org, for what appears to be competitive reasons. Microsoft enters these things frequently for the purposes of a) knowing what people are up to, and b) controlling them. (Nothing wrong with that, unless you aren't Microsoft.) At least one of the participants in the process is in the throes of selling a product to others, one that just happens to leave itself in control. The membership itself is secret, as are the minutes, etc etc.
The rooms were filled with smoke a month or two back. And now, people are reportedly beavering away on the results, which are again secret. Normally, cartel theory will tell you that these sort of approaches won't work positively because of the economics of game theory (check out the Prisoner's Dilemma). But in this case, there was "A Result" and that "Result" is now being used as a justification for not addressing other initiatives in phishing. We don't know what it was but it exists and it has been accepted, secretly, without formal process or proposal, by Mozilla.
I have therefore departed the scene of Mozilla. This is a road to disaster for them, and it is blatantly obvious that they haven't the business acumen to realise what they've been sucked into. As the security process is well and truly compromised at this point, there is no hope for my original objectives, which were to encourage anti-phishing solutions within Firefox.
Personally, I had thought that the notion of open source was enough to assume that an open process in security would follow, and a sterling effort by Frank Hecker seemed to support this. But I was wrong; Mozilla runs a closed security process and even Frank's openly negotiated CA ascendency protocol is stalled in closed session. The actual bug fixing process is documented as closed if there are security issues involved, and from that one small exception, the entire process has closed and to my mind stalled (1, 2). The team is a closed shop (you have to apply, they have to ignore the application), any security decisions are taken in secret forums that we haven't been able to identify, and the whole process flips mysteriously between the security team, the security group and the group sometimes known as "staff". Oh, and in their minds, security is synonymous with PKI, so anything and anyone that challenges the PKI model is rejected out of hand. Which is an issue for those that suggest PKI is at the heart of phishing...
So the security process is either closed or it's not present, which in my mind amounts to the same thing, because one becomes the other in due course. And this is another reason that security processes have to be open - in order to eliminate groupthink and keep themselves alive, the process must defend itself and regenerate itself, out in the open, on a regular basis.
My time there has still been very educational. Coming from the high security world of payments and then entering the medium security world of the browser, I've realised just how much there is yet to learn about security. I have a developing paper on what it means to be a security project, and I've identified about 19 or so factors. Comparing and contrasting Security Goal projects like the BSDs and the OpenPGPs with Mozilla has teased these factors out slowly, over a year or so. The persistence of security myths has led me on a search via Adam's security signals suggestion to Michael Spence's job market signalling theory and into a new model for Silver Bullets, another paper that is evolving (draft in the normal place).
But I have to face facts. I'm interested in security and specifically in the security of the browsing process under its validated threat of phishing. If the Mozilla Foundation cannot produce more than an alpha effort in certificate identification in 2.5 years of my being involved in what passes for their open security forum, that has to suggest that Mozilla is never going to meet new and evolving objectives in security.
I wondered when we'd see this. Tao points to news that 40 million card data units have been breached:
MasterCard International reported today that it is notifying its member financial institutions of a breach of payment card data, which potentially exposed more than 40 million cards of all brands to fraud, of which approximately 13.9 million are MasterCard-branded cards.
MasterCard International's team of security experts identified that the breach occurred at Tucson-based CardSystems Solutions, Inc., a third-party processor of payment card data.
This AP story mentions "the security breach involves a computer virus that captured customer data for the purpose of fraud" and MasterCard "did not know how a virus-like computer script that captured customer data got into CardSystems' network, which MasterCard said was infiltrated by an unauthorized individual."
At this point, Americans may as well get used to the fact that their entire data set is probably in the hands of criminals. (Up until this one broke, the running totals showed about 5 million.)
In my humble opinion, the credit system of the United States of America is totally compromised, security wise. Given the size of the infrastructure, the complexity, the amount of money being made, the existing mess of laws, and the hidden assumptions, it will take decades to clean it up.
No amount of government intervention is going to make you safer, and will probably make things more dangerous for you. Companies have no interest in your security, only in your continuing payments. Get used to it. About all I can suggest is that each and every American learn how the credit system works; take your own steps to secure your identity - there are some cunning tricks. You are on your own, for the foreseeable future.
Also see Emergent Chaos for likely more pervasive coverage. Slashdot has a rash of jokes:
there are some numbers hackers can't steal
for everything else there's MasterCard
(Accepted all over, even if it's not yours.)
And then there's:
Interest rate: 20%
Annual Fee: $40
Randomly being declined because the machine is on the fritz: $1-$1000 purchase down the drain.
Being the target of fraud through no fault of your own: Priceless.
Phishing news: puddle phishing (targeting small banks) is on the rise, as is phishing outside the US. Both of these are to be expected as phishers move around and try new things. One might suspect that the major US financial institutions have been 'phished out' but I wouldn't say that yet. The browser infrastructure remains riddled with too much Swiss cheese security for any but a politician's declaration of victory, and it will be interesting to see how successful DNS attacks are at raising more funds from the hapless victims.
Amid yesterday's flood of identity/phishing news, there are reports that identity theft is bigger in the US than anywhere else. Long reported in these pages as primarily a US problem, it turns out that people inside the US have now noticed this: Lower Overseas Rates Of Identity Theft Could Guide U.S. Lawmakers, and Privacy Advocates: Look Overseas For Lower Identity Theft Rates (both the same content).
And if you think your data is being phished by crooks, think again. The US Department of Justice is seeking to make matters worse by mandating data collection by the ISPs. Perry quotes this Article:
The U.S. Department of Justice is quietly shopping around the explosive idea of requiring Internet service providers to retain records of their customers' online activities.
More data collected, more value. More value, more theft. We saw the same kneejerk reaction a few months back when Eliot Spitzer floated the idea of creating yet another crime. Truly dumb, truly guaranteed to reduce security.
In crypto news, the how-to-crack SHA-1 and SHA-0 papers from the Shandong team are now released on Prof Wang's site. Good work, but attention has now switched to Dan Bernstein's cache-timing attack on AES (also 1 and 2). Slightly embarrassing for the NIST assessment that didn't pick it up, but no need to panic - make sure your AES is constant time.
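For those wondering what "constant time" means in practice, here is the flavour of it in Python. This is not an AES fix - there the leak lives in data-dependent table lookups - but the same class of bug and remedy shows up when comparing a secret MAC, and it's the habit worth checking for:

    import hmac

    def leaky_equal(a, b):
        # Returns at the first mismatching byte, so the running time
        # tells an attacker how much of their guess was correct.
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False
        return True

    def constant_time_equal(a, b):
        # Does the same amount of work wherever the difference lies.
        return hmac.compare_digest(a, b)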
In closing, some rare common sense spotted in security reporting. Gartner, long a packager of other people's nonsense, has issued its list of 5 most hyped security issues. And, they actually aren't that far off the mark:
It's about time someone stood up and pointed out who's selling clothes to the Emperor (someone like Tao?).
Are Security Threats Really Overhyped?
Some experts say VoIP security and mobile viruses already are serious problems.
Grant Gross, IDG News Service Monday, June 13, 2005
Two Gartner analysts released their list of the five most overhyped IT security threats, with IP (Internet Protocol) telephony and malware for mobile devices making the list, but not all IT security vendors agree with the analysts' assessment.
Lawrence Orans, principal analyst at Gartner, and John Pescatore, vice president and Gartner fellow, noted that while attacks on IP telephony and mobile devices may come eventually, current warnings about security problems are ahead of actual attacks.
"Securing IP telephony is very similar to securing a data-only network," Orans said during a presentation last week at the Gartner IT Security Summit in Washington, D.C. "The fact that you could capture packets with e-mail isn't being covered in the trade publications."
Recent concerns about eavesdropping on IP telephony calls have discounted the fact that it's nearly impossible to eavesdrop without being inside of the building where an IP call is initiated or received, with eavesdroppers needing access to the corporate LAN, he said. "It's not really happening on any networks today," he said.
Different Opinion
Not everyone agreed with Gartner's assessment, however. Companies deploying IP telephony or voice over IP services do need to pay attention to security, and users of IP telephony need to protect not only the end-device phones and IP servers, but also signaling and other voice equipment, said Stan Quintana, vice president of managed security services for AT&T. "It's a slightly different, more complex equation than data networks," he said.
The two Gartner analysts see large businesses delaying IT improvements such as wireless LANs because of "overhype" over security threats, they said.
Too much hype on some threats may distract businesses from focusing on other, real threats, added Tom Grubb, vice president of marketing for Vormetric, a data security vendor. This year, a series of massive data breaches at several large companies have occurred, and protecting against data theft, and protecting against insider threats, may be more important than worrying about issues such as malware for mobile devices, he said.
"I think their point was, these things may be threats, but you have to keep your eye on the ball," added Grubb, who attended the Gartner summit.
ID theft and spyware are threats that have gotten a lot of attention lately because they are real, prevalent risks, added Richard Stiennon, vice president of threat research for Webroot Software, an antispyware software vendor.
Going Mobile
Some security vendors have focused on malware for so-called smart phones and other mobile devices, but such devices run on a number of operating systems, unlike the Windows dominance on desktop and laptop computers, Pescatore said. Without a dominant mobile operating system for at least a couple of years, mobile viruses or worms will have a limited impact, he said.
"For any piece of software, somebody can write an attack," Pescatore added. "The key issue is: can somebody write [a mobile attack] that will spread quickly and rapidly and cause more damage to your enterprise than it will cost you to prevent that damage?"
Some security software vendors have hyped mobile malware as a potential problem as a way to expand their business beyond the traditional desktop and laptop markets, Pescatore said. Only about 3 percent of consumers and workers have smart phones and PDAs with always-on wireless connections right now, he added.
"You can see the glint in the antivirus vendors' eyes when they think of the billion mobile phones out there," added Webroot's Stiennon.
A representative of antivirus vendor Symantec said the company isn't trying to hype mobile device threats, but trying to educate users as mobile devices become capable of storing more information. While mobile device security isn't a big issue now, that could change in coming years, said Vincent Weafer, senior director of Symantec Security Response.
"The risk changes dramatically in a short amount of time," Weafer said. "What we're trying to tell people is, if they're deploying these devices, they should deploy them in the right way."
Vormetric's Grubb agreed that mobile malware shouldn't be a top-priority concern for most large businesses, but mobile device security is becoming an issue. As more workers use more powerful mobile devices, companies need to be concerned with the physical security of mobile devices and about what mobile devices are downloading from their networks, he said.
Companies need to be concerned about what kinds of malware mobile devices can bring into a corporate network, added AT&T's Quintana. "The convergence of our networks is a double-edged sword," he said. "It's providing a high level of risk. It's not overhyped."
Also On the List
Also on the list of overhyped security threats, according to Orans and Pescatore:
* Fast-moving worms that infect the entire Internet within minutes will make the Web unreliable for business traffic and virtual private networks (VPNs). While the SQL Slammer worm in 2003 did much of its damage within 15 minutes, that's the only such example so far of a so-called Warhol worm, Orans said. The analysts predicted that the public Internet will continue to remain a low-cost, safe alternative to closed data networks, although they recommended companies consider using VPNs.
* Wireless hot spots are unsafe. While uneducated wireless users can fall victim to hackers, corporations have tools such as VPNs to protect wireless data, Pescatore said. Some wireless carriers and wireless security vendors also offer tools that validate an access point's identity and reduce the risk of connecting to a hacker's access point. Targeted attacks on corporate networks, not picking off wireless user data, is where the money is, said Reed Taussig, chief executive officer of Vormetric. "That's a much larger return on investment than sitting around Starbucks waiting for someone to enter a credit card at Amazon.com," Taussig added. "Hanging around at Starbucks waiting for someone to make a mistake is the definition of a stupid criminal."
* Finally, the Gartner analysts suggested that some vendors are hyping regulatory compliance as a way to achieve security. Regulations such as the U.S. Sarbanes-Oxley financial reporting rules are focused primarily on other issues besides IT, but many corporations remained concerned about compliance reporting, Pescatore said.
"[The hype] often distracts that spending into compliance reporting rather than increasing security," he said.
Steve Roop, vice president of marketing for data loss prevention vendor Vontu agreed. "There's a large number of solutions providers who claim that what they do is the silver bullet," he said.
Pinch me, I must be dreaming!
Software charging for big ticket sellers is getting more complex again, as dual cores from AMD and Intel start to invade the small end. Oracle, which made billions charging on the muscle power of CPUs, will have to do something, and we've by now all seen IBM's adverts on TV suggesting "on demand" with its concomitant charging suggestion: You demand, we charge.
I've done a lot of thinking over the years about how to licence big ticket items like issuance software. In practice it is very difficult, as the only revenue model that makes sense for the supplier is for large up front licence fees to recover large up front capital and sunk costs. But for the demander (issuer and user of the software) the only model that makes sense is to pay later, when the revenues start flowing...
Issuance software has all the hallmarks of an inefficient market, and I don't think there has been a successful case of issuance licencing yet, as those two "sensible" options do not leave any room for agreement. This may be rational but it's very frustrating. Time and again, we see people wanting to get into the issuance market who think they can produce the software themselves for a cheaper price. And they always end up spending more and getting a lesser quality product.
In practice what we (Systemics) have been doing is this: running the software ourselves as "operator", and charging operating costs, with some future licencing or transaction flow revenues. Yet, the deal for future revenues is always based on a promise and a prayer, which is already asymmetrical given that most startups do no more than start up. (And it isn't just me bemoaning here - if you look back through history there are literally hundreds of companies that tried to build value issuance and sell it.)
Which leads to the freeware model. In the freeware world, big ticket items are given away and money is made on the consulting. This has worked relatively well in some areas, but doesn't work so well in issuance. I'm unclear on the full reason why open source software doesn't work in issuance, but I think it is mostly the complexity, the sort of complexity I wrote about in FC7. It's not that the software can't capture that complexity, but that the financial cryptography business often finds itself so squeezed by management complexity that partnering with a strong software supplier is beyond its capabilities.
What will potentially help is p2p issuance. That is, "everyone an issuer." We've always known this model existed even as far back as 1995, but never really considered it seriously because too many questions arose. Little things like how we teach grandma to sign a digital contract. We've now done enough experiments in-house to confirm that the corporate internal issue and the individual issue are workable, sustainable economic models but we have to get other companies and individuals to do that and for the most part they still don't do anything they don't understand.
I'm guessing the way forward here is to turn client software into issuance software. This brings up a whole host of issues in financial cryptographic architecture. For a start it can never seriously scale simply because people do silly things like turn off their laptops at night.
But, more and more, the barriers to issuance and financial cryptography in general I believe are spreading the knowledge, not the tools and tech. Every year our tools and tech get better; but every year our real barriers seem the same - how to get users and customers to make their first tentative issue of a currency of value. Oh, and how to make money so as to keep us all alive, which was the starting point on this long rant of liberal licence.
A couple of footnotes: In a similar thread over at PGP Inc, Will Price reveals how they've managed to get out of the legacy freeware version trap:
"When the 30 Day Trial version of PGP Desktop Home expires, it reverts to a set of functionality comparable to what used to be known as Freeware, and said functionality remains available indefinitely -- under the same license conditions as Freeware used to be under."
Nice one. That works for client software, not for server software.
Here's a further article on how the big companies are also working out how big ticket software isn't the way to go:
The concept of whistleblowing informs our deepest designs. We cannot secure everything, so we go to the next best thing: we document everything. Extraordinarily, we can put together extremely strong systems that use the humble message digest to create chains of signatures and time entanglement, not because this is perfect, but because we know that if someone is looking, they can find.
As our deepest difficulties lie not in external security but in protecting against the insider, audit trails and wide dissemination of information is one of our hottest tools. For the financial cryptographer, our hope is to leave a trail so well buried and indicative that any investigator is supported with some real evidence and doesn't need to rely on anything but the evidence.
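A minimal sketch of such a trail, in Python: each entry carries the digest of the one before it, so deleting or altering any earlier record breaks everything after it. The field names are illustrative; a real system would also sign entries and entangle the head digest with an outside timestamp or publication.

    import hashlib, json, time

    log = []

    def digest(body):
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def append(event):
        prev = log[-1]["digest"] if log else "genesis"
        body = {"time": time.time(), "event": event, "prev": prev}
        log.append({**body, "digest": digest(body)})

    def verify():
        prev = "genesis"
        for entry in log:
            body = {k: v for k, v in entry.items() if k != "digest"}
            if body["prev"] != prev or digest(body) != entry["digest"]:
                return False                      # the chain is broken here
            prev = entry["digest"]
        return True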
That's an ideal, though, and it doesn't normally happen quite so well. Sometimes spectacularly so. Here are two whistleblowing stories from the US that provide colourful background to our efforts to secure systems and processes.
In what has turned into a festival of hand-wringing moralising, Deep Throat has revealed himself to be Mark Felt, the Deputy Director of the FBI during the Watergate Affair....
Deep Throat was the fabled secret source who prodded the journalists and provided the crucial inside tips to keep the story alive until it swept over and destroyed the corrupt and arrogant administration of Richard Nixon. (See google.news for a squillion articles.)
How could he, write many of Washington's finest. The act of treachery, the traitor!
How could he not? I ask. When your administration is corrupt, what do you do? What is the press for? What is that much exported model of freedom there for if not to dig out the dirt and keep politicians honest? And is Mark Felt an employee of a corrupt administration first and always? Or is he human being, a member of a society? (Americans would ask if he was an American, but that always confuses me.)
I think it is pretty clear that all our institutions, and also our models of financial cryptography support the concept and presence of whistle blowers. It may be hell when he's not on your team, but that's a different issue.
And in story #2, the Arthur Andersen conviction was overturned in the US Supreme Court by a unanimous decision. Arthur Andersen went down with Enron, which was done in by a public whistleblower when other, inside whistleblowers failed to do it.
What can one say about the Supreme Court's ruling - one of the most reputable names in accounting was wiped out by the original decision, and now we are told it was wrong?
The obvious - too late for the 28,000 workers - has been written about elsewhere, but I can't help thinking such is simply the wrong way to look at the judicial process. Did it do the right thing or the wrong thing? I can't see the wrong thing having been done here. The prosecutor had a good case, and won the conviction. But he overstepped the mark and now it has been overturned. What else is there to say?
That's the way the process works; it's called checks and balances. Those who think this dreadful mistake means we should scrap the prosecutorial process, or "rein in the prosecution", need to think up a process to replace it. Regardless of the 27,000 or however many innocent workers at Arthur Andersen, that company was selling its soul.
So we need a process to stop that, and the current process just happens to do that. Sometimes. If anything, I think we need another big N accounting firm to go down for just such another scandal, as we know they were *all* doing it (as I've oft reported, I know all but one were doing it, and I just never heard what the other one was doing...).
Literally, yes, if the system needs to work that way, we need another 27,000 innocents to be turned onto the streets in order to get the message to the 1000 or so bad apples who will lie and cheat and basically sell their company's reputation for 30 pieces of silver. Remember, there are thousands of shareholders and millions of California taxpayers who also lost big time, and nobody's bemoaning their fate much. And nobody owns a job, whether they work for a corrupt company or an honest one.
But I'm all ears to a better system. Many older and legacy systems think they can protect themselves with an audit, and for the sake of all those who think that, well, their only real defence is an occasional spectacular bust of those selling unreliable audits. Or, to get serious about auditing and learn about financial cryptography :-)
Adam points to something I've been stating for a year or more now: why is the current security crisis happening in the USA and not elsewhere, asks a nervous troll on Bruce Schneier's blog:
This isn't intended as a troll, but why is it that we never hear about this sort of problem in Europe? Is it because we simply don't hear about overseas breaches, or do the European consumer and personal privacy laws seem to be working? How radical a rethink of American business practices would be required if we _really_ did own our personal data....
Posted by: Anonymous at May 24, 2005 09:41 AM
Go figure. If you need to ask the question, then you're half way to the answer - start questioning the crap that you are sold as security, rights to easy credit, and all the other nonsense things that those who sell thrust down consumers' throats. Stop accepting stuff just because it sounds good or because the guy has a good reputation - why on earth does any sane person think that a social security number is likely to protect a credit system?
Adam also points to an article in which Richard Clarke, one time head of DHS, points out that we should plan for failure. Yes indeed. Standard systems practice! Another thing you shouldn't accept is that you just got offered a totally secure product.
With most of the nation's critical infrastructure owned by private companies, part of the onus falls to companies' C-level executives to be more proactive about security. "The first thing that corporate boards and C-level officials have to accept is that they will be hacked, and that they are not trying to create the perfect system, because nobody has a perfect system," he says.
In the end, hackers or cyberterrorists wanting to infiltrate any system badly enough will get in, says Clarke. So businesses must accept this and design their systems for failure. This is the only sure way to stay running in a crisis. It comes down to basic risk management and business continuity practices.
"Organizations have to architect their system to be failure-tolerant and that means compartmentalizing the system so it doesn't all go down... and they have to design it in a way that it's easy to bring back up," he says.
Relying too heavily on perimeter security and too little on additional host-based security will fall short, says Clarke. Organizations, both public and private, need to be much more proactive in protecting their networks from internal and external threats. "They spend a lot of money thinking that they can create a bullet-proof shield," he says. "So they have these very robust perimeters and nothing on the inside."
It's a long article, the rest is full of leadership/proactive/blah blah and can be skipped.
And Stefan rounds out today's grumbles with one about one more security product in a long list of them. In this case, IBM has (as Stefan claims) a simplistic approach - hash it all before sharing it. Nah, won't work, and won't scale.
Even IBM's notion of salting the hashes won't work, as the salt becomes critical data. And once that happens, ask what happens to your database if you were to reinstall and lose the salt? Believe me, it ain't pretty!
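To make the objection concrete, here's a minimal sketch (my own toy illustration in Python, not IBM's actual scheme) of hashing identifiers with a shared salt so two databases can be matched without revealing the raw values - and of why the salt itself instantly becomes critical data:

```python
import hashlib, os

# Toy illustration of salted-hash data sharing (assumed scheme, not IBM's product).
SALT = os.urandom(16)   # shared by both parties - and now critical data

def blind(identifier: str, salt: bytes) -> str:
    """Hash an identifier with the shared salt so it can be matched, not read."""
    return hashlib.sha256(salt + identifier.encode()).hexdigest()

ours   = {blind("123-45-6789", SALT)}
theirs = {blind("123-45-6789", SALT), blind("987-65-4321", SALT)}
print(ours & theirs)     # matching works while everyone still holds SALT

# Reinstall the box and lose SALT: the stored hashes are still sitting in the
# database, but no fresh identifier can ever be blinded to match them again.
```

Losing the salt doesn't lose the data, it loses the ability to ever use it again - which is why the salt ends up needing the same protection, backup and key management as anything else you'd call a key.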
For those who consider the goal to be Perfect Forward Secrecy (or PFS as the acronymoids have it), think about the $850 million punitive damages awarded by a jury in Florida against Morgan Stanley and to Ronald Perelman, the former owner of Coleman (which he sold to Sunbeam in 1998).
"The billionaire investor started the trial with an advantage after the judge in the case punished Morgan Stanley for failing to turn over e-mails related to the 1998 Coleman deal.""Judge Elizabeth Maass ordered jurors to assume Morgan Stanley helped Sunbeam inflate its earnings. To recover damages, Perelman only had to prove that he relied on misstatements by Morgan Stanley or other parties to the transaction about Sunbeam's finances."
"Maass also allowed Perelman's lawyers to make reference to the missing e-mails in the punitive damage phase of the case Jurors examined whether any bad conduct by Morgan Stanley merited a punishment in addition to compensation they gave Perelman to offset his losses."
This is the core problem with PFS of course - which is the promise that your emails or IMs won't be provable in the future. It matters not that someone eavesdropping can't prove anything cryptographically, because it simply doesn't matter in the scheme of things. The node is where the threat is, and you are thousands of times more likely to come to blows with your partner than any other party.
Your partner has the emails. So whatever you said, no matter how secret, gets plonked in front of you in the court case, and you've got a choice: rely on PFS and lie about the emails, or say "it's a fair cop, I said that!" Unless this issue is addressed, PFS doesn't really affect the vast majority of us.
Unfortunately addressing second-party copies is very hard. I had mused that it would be possible to mark emails as "Without Prejudice", which is a legal term signalling that these conversations were to be kept out of court. Nicholas Bohm set me straight there when he explained that this only applies *after* you have entered a dispute, by which time you are taking all care anyway.
Alternatively it may be possible to enter into what amounts to a contract with your companion to agree to keep these conversations private and ex-judicial. That might work, but it has two big limitations: it doesn't affect other parties seizing them, and it doesn't apply to criminal cases.
Still there may be some merit in that and it will be interesting to experiment with once I get back into IM coding mode. Meanwhile, I'm wondering just what we have to do to convince Morgan Stanley to get into an $850 million tussle with us ... nice work if you can get it.
Research on a potential SSH worm is reported by Bruce Schneier - this is academic work in advance of a threat, looking at the potential for its appearance in the future. Unusual! There is now optional protection available against the threat, so it will be interesting to see if this deploys as and when the threat arises.
Ping writes to say that he "recently started a weblog on usable security at http://usablesecurity.com/:
"I haven't seen any other blogs on the topic, so it seemed like a good idea to get one going. I invite you to read and comment on the entries there, and i hope you will find them interesting."
Great stuff. HCI/security is at the core of phishing. Which also brings us to an article about how the KDE people and the usability people eventually came to see eye to eye and learn respect for each other's strangenesses:
"When trying to set up a mail account with an OpenPGP key in KMail, you have to make settings in three different configuration modules. Users have problems understanding that. This is not a technical issue, because once the user discovers how it works he can set everything up. But to make the developers understand that users might have a problem with the workflow, you have to explain the context of usage and the way common users think."
Which brings me to something I've been meaning to shout about - when I (finally) got KDE 3.4 compiled and running, and started using KMail (Thunderbird has too large a footprint), the GPG encryption feature just started working!
I'll say that again: encryption using my GPG Keys JUST STARTED WORKING!!! Outstanding! That's how encryption should be - and I don't know how they did it, but read the article for some clues.
It's not perfect - for example, the default is some harebrained attachment scheme that nobody I know can read, so I have to remember each time to select "Inline (deprecated)", which is of course how it should be sent out for cross-platform independence. But it sure beats vi&cut&paste.
I've just been reminded of Stefan's post that Microsoft are looking at blinded signatures. To add to that, I've heard related rumours. Firstly, that they are planning to introduce some sort of generic blinding signature technology in the (northern) summer browser release ("Longhorn"). That is, your IE will be capable of signing for you when you visit your bank.
Now, anyone who's followed the travails and architectural twists and turns of the Ricardian contract - with its very complete and proven signature concepts - will appreciate that this is a hard problem. You don't just grab a cert, open a document, slap a sig on it and send it off. Doing any sort of affirming signature - one where the user or company is taking on a commitment or a contract - is a serious multi-disciplinary undertaking that really challenges our FC neurons to the full. Add that to Microsoft's penchant for throwing any old tech into a box, putting shiny paint on it and calling it innovation, and I fear the worst.
We shall see. Secondly, buried in another area (discipline?) totally, there is yet another set of rumours in interesting counterpoint. It appears that Microsoft is under the bright lights of the Attorneys General of the US of A over spyware, malware, and matters general in security. (Maybe phishing, I wasn't able to tie that one down.) And this time, they have the goods - it appears that not only is Microsoft shipping insecure software, which we all knew anyway, but they are deliberately leaving back doors in for future marketing purposes, and have been caught in the act.
Well, you know how these rumours are - everyone loves to poke fun at the big guy. So probably best to write this lot down as scurrilous and woefully off the mark. Or I hope so. I fail to see how Microsoft are ever going to win back the confidence of the public if they ship signing tech with an OS full of backdoors. What do we want to sign today?
Addendum: El Reg pointed at this amusing blog where Microsoft easily forgets what platforms out there are being hacked.
Netcraft publishes the top phishing hosters - and puts Inktomi in pole position. Think class-action, damages, lack of due care, billion dollar losses ... we need more of this naming and shaming.
Rumours abound that Microsoft is about to be dragged into the security mess. I had thought (and wrote occasionally to that effect) that the way phishing would move forward would be by class-action suits against software suppliers, following the lead of Lopez v. Bank of America ([1], [2]). But there is another route - that of regulators deciding to lean on the suppliers. Right now, that latter path is being explored in smoke-filled rooms, or at least that's what those who smoke say.
These two routes aren't exactly in competition, they are more symbiotic. Class-action suits often follow on the judgments of regulatory settlements, so much so that the evidence one side discovers is used by the other side to advance. In this way they work as a team.
Over on CACert, Duane alerts me to a blog that they run, and an emotional cry for help by an auditor from Coopers and Lybrand (now PriceWaterhouseCoopers). Like my own observations, the briefly named 'Gary' points out that CACert's checking procedures are as good or better than the others. He also breaks ranks and wiggles the finger at immoral and probably illegal practices at auditors in security work. I am not surprised, having heard stories that would make your faith in public auditors of security practices wilt and expire forever more. Basically, any *private* audit can be purchased and it costs double to write it yourself. Trust only what is published, and even then cast a skeptical eyebrow skywards.
Speaking of Microsoft and car wrecks to come, this factoid suggests that "25 car models run Microsoft software" ... unfortunately or perhaps luckily there is no reference.
In more damages of the other kind, RIAA File-Sharing Lawsuits Top 10,000 People Sued. In the Threats department: One-Third Of Companies Monitoring Email. Also, a nice discussion on fraud by fraudsters. A Newspaper interviews two fraudsters behind bars for what we now know as identity theft. Good background material on why and how easy.
AOL in Britain reports that one in 20 say that they have been phished. I find these sorts of surveys somewhat "phishy", given that if one in 20 of the population had been phished, we'd have rioting in the streets and politicians and phishers alike strung from lampposts. But it's important to keep an eye on these datapoints, as we want to know whether the status of phishing as primarily an "American disease" is likely to go global.
And in closing, a somewhat meandering article that links Sarbanes-Oxley, IT and security products. It asks:
"But here is the fundamental question - has there ever been a pervasive and material financial fraud which has resulted directly or indirectly from a failure of an IT security control? Would IT controls have prevented or detected the frauds at Enron, WorldCom, Tyco, and the like?"
The author might be a closet financial cryptographer.
And, if you got this far, it is only fair to warn you that you've now lost 10 points of your IQ level. (Sorry, no URL for the following ...)
It's the technology, stupid
By Michael Horsnell in London
April 23, 2005
THE regular use of text messages and emails can lower the IQ more than twice as much as smoking marijuana. Psychologists have found that tapping away on a mobile phone or computer keypad or checking them for electronic messages temporarily knocks up to 10 points off the user's IQ.
This rate of decline in intelligence compares unfavourably with the four-point drop in IQ associated with smoking marijuana, according to British researchers, who have labelled the fleeting phenomenon of enhanced stupidity as "infomania".
Research on sleep deprivation suggests that the IQ drop caused by electronic obsession is also equivalent to a wakeful night.
The study, commissioned by technology company Hewlett Packard, concludes that infomania is mainly a problem for adult workers, especially men.
The noticeable drop in IQ is attributed to the constant distraction of "always on" technology, when employees should be concentrating on what they are paid to do. They lose concentration as their minds remain fixed in an almost permanent state of readiness to react to technology instead of focusing on the task at hand.
The brain also finds it hard to cope with juggling lots of tasks at once, reducing its overall effectiveness, the study has found. And while modern technology can have huge benefits, excessive use can be damaging not only to a person's mind, but also their social life.
Eighty volunteers took part in clinical trials on IQ deterioration and 1100 adults were interviewed.
Sixty-two per cent of people polled admitted that they were addicted to checking their email and text messages so assiduously that they scrutinised work-related ones even when at home or on holiday. Half said they always responded immediately to an email and 21 per cent would interrupt a meeting to do so.
Furthermore, infomania is having a negative effect on work colleagues, increasing stress and dissenting feelings. Nine out of 10 polled thought that colleagues who answered emails or messages during a face-to-face meeting were extremely rude. Yet one in three Britons believes that it is not only acceptable to do so, but actually diligent and efficient.
The effects on IQ were studied by Glenn Wilson, a University of London psychologist, as part of the research project.
"This is a very real and widespread phenomenon," he said. "We have found that infomania, if unchecked, will damage a worker's performance by reducing their mental sharpness."
The report suggests that firms that give employees gadgets and devices to help them keep in touch should also produce guidelines on use. These "best-practice tips" include turning devices off in meetings and using "dead time", such as travelling time, to read messages and check emails.
David Smith, commercial manager of Hewlett Packard, said: "The research suggests that we are in danger of being caught up in a 24-hour, always-on society.
"This is more worrying when you consider the potential impairment on performance and concentration for workers, and the consequent impact on businesses.
"Always-on technology has proven productivity benefits, but people need to use it responsibly. We know that technology makes us more effective, but we also know that misuse of technology can be counter-productive."
From The Times of London in The Australian
Sarbanes-Oxley victims are counting pennies. They know, or they have been told, it will bring benefits. But at what cost? Audit costs seem anecdotally to be up by 50% or so. Honest folk think it might not be worth the cost. Chiefs keep silent; it isn't worth their salary to rock the boat. Interestingly, the article suggests that this year is a hump, and next year should be cheaper as the systems are in place.
Which reminds me of another set of victims counting cost - the Brits. For some reason they've noticed that it is now very difficult to open a bank account, which might have unintended consequences.
Martin Hall, chairman of the JMLSG editorial panel, said: 'We have taken a radical approach. The new guidance reflects the reality, that most customers are neither money launderers nor terrorists.'
Over in certification land, the recent insider job at an Indian outsourcing firm is being played up by those who hate outsourcing. Another article points out:
" Ironically it shows the weakness of the certification system, which is supposed to guard against things like this. The centre in Pune was BS 7799- and CMM Level 5-certified, yet the fact that such a theft took place shows that such assurances probably aren’t worth that much."
It's just one cute data point; we'd need a survey to really decide whether it is statistically meaningful. Here's another data point: the alleged #8 spammer in the world got 9 years in the slammer.
Let's work that out. If each spam costs a lost second to delete, then 3 million spams is worth a year. Nine years is worth 27 million spams. Now, if #8, a.k.a. Jeremy Jaynes, sent a mailshot of a million a day, and he'd been doing it for a month, that's about right. An eye for an eye, a second for a spam. If however he had consumed say 70 spam-years, then that's a death sentence: 220 million spams means we lost a life somewhere, in the aggregate.
Looks like he got off lightly.
Meanwhile, some great figures are appearing from an e-crime conference where the CEO of HSBC spoke.
"The UK apparently leads the world in terms of 'bot nets', or collections of compromised computers that are rented out by criminal gangs. In March of 2004, German police uncovered a network of 476 hackers in 32 countries who had turned more than 11,000 computers into such 'zombies'. In September 2004 a Norwegian internet company shut down a bot-net controlling 10,000 machines. And SpamHaus estimates suggest 50,000 new zombie systems may be appearing each week."
And in the proportionality stakes, the unintended consequences of criminalising theft of IP strike home: one games manufacturer has complained to the FBI about several years of illegal selling of their game. By rights, the FBI ought to swoop in and bust the place up ...
I wonder if anyone has thought of making a game of strategy out of IP theft?
The Champion of NerdHerders points to the pathological habit of nerds-gone-binary to do either all of it or nothing. It's true that we all face this inability to create a sensible compromise and to recognise when our binary extremes are unacceptable.
But I don't think it is so black and white. It's alright to say "why not use Jakarta this/that/other" but in practice it tends to hit up against some real barriers.
Firstly, what happens if you don't spend your entire life following the esoteric and mindboggling silly arguments as to whether this tool is better than that tool? How are you to know what tool to use?
In practice, in a big enough team, there is a role for tool-meister. Or package-tamer. But for most teams, that luxury simply isn't there. My own feeling is that any tool I know about, if I can't see the fit within an hour of reading, then that's it, it's all over, and most tools aren't set up to come anywhere close to that. (So to eat my own dogfood I spend a lot of time writing executive summaries to bridge that gap, but I also recognise how few times I've succeeded!)
My general strategy is to ignore all tools, all competitors, all everything, until I've had about 3 independent comments from uncorrelated sources. The alternative is to drown. My strategy gets about a 99% hit rate, as within a year, last year's flavour is gone, replaced, forgotten. (Last year it was all Struts, this year, Spring. Do I add to last year's wasted effort with another month on Spring this year?)
Secondly, it is my overwhelming impression that most tools out there are schlock. They are quite literally junk, pasted over with some sugar coating and lots of enthusiasm. Now, that's nice and I'd fight for the right to let those people do that, because some of them aren't, but in practice, I don't want to spend a month fighting someone else's schlock when I could do the same with my own code.
Sometimes this works out: the most extreme case was the accounting engine I wrote. It took a month to do the whole thing. I estimated it would take a month just to create the _recovery strategy_ for an off-the-shelf database engine. (It will still take a month, no matter what, I think. That's because it has to work, in FC.) So for one month's effort, we've got a free engine that is totally tuned to our needs. The next best thing is Oracle, and that starts at five figures. Per unit. And climbs...
Sometimes this doesn't work out: our approach to writing a website framework for payments was a costly 2.5 year lesson in how hard it is to create good frameworks. But, even when we wanted to replace our stuff, the choice was and is schlock. I spent some months learning the J2EE frameworks, and concluded they are not going to cut down the time by much. Plus, they're schlock; I have no confidence in the security of J2EE and no confidence in the manageability of it. (Now, 4 years after the fact, someone wrote a J2EE framework that does the job. Even that had to be rewritten by the first programmer on the job.........)
Thirdly, when you are dealing with other people's tools, you create an admin load. The more tools, the more load. The more you have to install, the more that can break. And this not only hits you in the cost-of-server shorts, it can blow you away legally, as some tools come with poison pills that just aren't predictable (and I'm not just speaking of licences here, but needs for support, costs in programmers, etc etc). The same with languages: Java is a cost, because there is so little support for non-preferred platforms; Perl is a cost because it isn't 64-bit compatible in all senses yet; PHP is a cost because every time they up the revision, the language shifts on you ... on and on it goes, always trouble, and it grows at least in proportion to the number of tools.
It's tough. I think there is a desperate desire in the programming world to churn out a solution so damn quickly it's as if it came out of a recipe book, and anything else is unacceptable. That's not a real picture of the world, especially if you are building software that has to be relied upon.
On the other hand, what is easy is when someone says "oh, you should use XYZ" and really mean it. It's extraordinarily rare that they know what they are talking about, and it's a great differentiator between some junior nerd who's still in slashdot space, and someone who's actually been stabbed by tools and knows stuff.
I've written before about how a major milestone in phishing was reached when Lopez sued Bank of America in Florida, USA. If you don't see that, click and read this article. It is maybe not obvious on the outside, but for once, a press journalist has talked to some people in the banking world and discovered something new: Fear.
Regardless of what a judgement or settlement brings to the actual litigants, dotted-line association with the BofA case will likely cause financial institutions to spend at least some additional money on security to prevent fraud. And since North American banks already spend more than $1 billion per year on such technology, the notion that they're not spending it in the right place or in the right amount raises temperatures. "I just came from Washington, where I was at a meeting of 40 financial institutions, regulators and the government," says Ilieva Ageenko, director of emerging enterprise applications for Wachovia. "We all said there's a press euphoria [about on-line crime] and pretty much all institutions have a very well-defined risk management strategy that allows us to identify fraud."
Banks are scared of the Lopez case. What does that tell us? It tells us that banks know this is not a frivolous case and furthermore banks don't know what to do about it.
All the buzz is about 2 factor authentication tokens, but in their hearts, banking people know this isn't the answer to the problem. The reasons are several-fold: one is that they are expensive, and the banks likely will have to foot the bill - one hardware gizmo for every customer. A second reason is that the banks also suspect that the secure tokens being peddled by irresponsible companies are not a real answer to the problem, but are only a short term hack.
The banks suspect this but the peddlers aren't telling the truth. Security people have known for a long time that these tokens are subject to phishing; all they do is force the phisher to do a dynamic real time phish instead of doing the connection to the bank in their own sweet time.
Yes, ladies and gentlemen, the secure tokens guarantee that the user and the bank are talking together right now, but they don't guarantee someone isn't in the middle passing packets back and forth and listening happily to traffic! Spoofing - phishing - is a class of attack called man-in-the-middle (MITM), and these tokens ... fall to the MITM. Or will do when the phishers get around to it.
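To make the point concrete, here's a purely conceptual sketch - toy code, no real bank, token or protocol; the seed, the 30-second window and the six-digit code are all assumptions for illustration - of why a one-time code proves possession of the token but says nothing about who is relaying the traffic:

```python
import hmac, hashlib, time

# Purely conceptual: a time-based one-time code does not bind the channel.
SEED = b"token-seed-shared-with-bank"   # hypothetical shared secret

def token_code(window: int) -> str:
    """What the gizmo on the keyring displays for a given time window."""
    return hmac.new(SEED, str(window).encode(), hashlib.sha256).hexdigest()[:6]

def bank_login(code: str, window: int) -> bool:
    """The bank checks the code is fresh - and nothing more."""
    return code == token_code(window)

window = int(time.time()) // 30
# The phisher's fake site simply asks the victim for the current code and
# relays it to the real bank within the same window - a real-time MITM.
code_typed_into_fake_site = token_code(window)        # victim reads it off the token
print(bank_login(code_typed_into_fake_site, window))   # True: the phisher is in
```

The code verifies the token, not the channel; unless the token's output is somehow bound to the session the user actually opened, the relay goes unnoticed.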
So what's the solution? FCers know the solution is in bringing the user and her browser back into the security model. Banks know they can't do that (alone), but they also know that at the end of the day, they are going to have to carry the can (also alone). Even if the Lopez case goes against them, all the posturing tells us one more thing: banks know the FDIC or whoever will eventually put the onus on them to solve the problem.
So who can solve the problem? Who do the poor phishing victims have to sue?
"Wachovia offers the standard 128-bit encryption and requires on-line customers to have user IDs and passwords."
Who told you that would secure your customers?
Some more snippets: stats suggest that users (now, still) trust online banking more than branch banking. Yet corporate customers would change banks if they could get fraud controls from a new bank (sorry, PDF).
You'll already have seen the recent stories about the .net contract going back to VeriSign. The decision was made on the technical capabilities report, with the report being accepted without discussion, without input from the stakeholder public, and, more disturbingly, with consultation limited to DNS experts and no wider input. (See ICANNWatch.)
Addendum: from the report, Eric claims it was a tie and VeriSign was chosen arbitrarily from the two leaders.
It seems that ICANN does not consider overall governance issues as important in its decisions on domain renewals, as mooted in comments by ICANNWatch's Michael Froomkin.
Further, one of the "losers", Denic, was apparently knocked out on a false claim that it has an in-house built database! That's a sad, bad decision if true, and I speak as someone who builds in-house, own-built databases for the most sensitive of tasks - because they are the most sensitive of tasks!
Back over on ICANNWatch, they announced that the ITU is merging with ICANN and that WIPO lost its domain to a baby wipe company. That's about par for the course...
In other news, two new TLDs have been launched as .travel and .jobs, and ICANN does consider itself in a position to charge a tax of $2 per domain. The .net contract is being re-negotiated with a 75 cent tax.
Some people swear by them. Others think they are just paper. What are they really? I am minded of the Stigler observation that, paraphrased, all certifications are eventually captured by their stakeholders. Here is a Register article that bemoans one such certification dropping its practical test, reducing it to boot camp / brain dump status.
I have no certifications, and I can declare myself clearly on this: given what I already know, it would be uneconomic to obtain one. Frankly, it is always better to study something you know little of (like an MBA) than something you already know a lot of (like how to build a payment system, the pinnacle of security problems because, unlike other areas, you are guaranteed to have real threats).
I recently looked at CISSP (as described in the above article). With some hope, I downloaded their PDF, after tentatively signing my life away because of their fear of revealing their deepest security secrets to some potential future student. Yet what was in the PDF was ... less than in any reasonable half-dozen articles in any net rag with security in their name. Worse, the PDF finishes with a 2 or 3 page list of references with no organisation and no hope of anyone reading or even finding more than a few of them.
So in contrast to what today's article suggests, CISSP is already set up to be a funnel for revenue purposes. When a certification draws you in to purchase the expensive training materials then you know that they are on the road to degree mills, simply because they now no longer have a single security goal. Now they have a revenue incentive ... It's only a matter of time before they forget their original purpose.
Which all leads to the other side of the coin - if an employer is impressed by your certification, then that employer hasn't exactly thought it through. Not so impressive in return; do you really want to do security work for an organisation that has already reduced its security process to one of borrowing other organisations' best practices? (Which leads to the rather difficult question of how to identify an organisation that thinks for itself in security matters. Another difficult problem in security signalling!)
So what does an employer do to find out if someone knows any security? How does an _individual_ signal to the world that he or she knows something about the subject? Another question, another day!
Rarely does anyone bother to sit down and ponder why the world is so crazy, and ask why those people over the other side are so different. Asking questions is anathema to the times we live in, and I have living proof of that - I occasionally throw out completely unbelievable statements and rarely if ever am I asked about them. I'm told, I'm challenged, and I'm damned. But never asked...
So it is with some surprise that an American (!) has sat down and thought about why Europeans email the way they do, and why Americans email the way they do. A thoughtful piece. Once you've read it, I'd encourage you to try something different: ask a question, try and work out the answer.
(Oh, and the relevance to Financial Cryptography is how people communicate and don't communicate, where communication is the meta-problem that FC is trying to solve. Thanks to Jeroen for the pointer... And for a more amusing perspective on asking questions, try Dilbert.)
Euromail
What Germans can teach us about e-mail.
By Eric Weiner
Posted Friday, March 25, 2005, at 4:17 AM PT
North America and Europe are two continents divided by a common technology: e-mail. Techno-optimists assure us that e-mail—along with the Internet and satellite TV—make the world smaller. That may be true in a technical sense. I can send a message from my home in Miami to a German friend in Berlin and it will arrive almost instantly. But somewhere over the Atlantic, the messages get garbled. In fact, two distinct forms of e-mail have emerged: Euromail and Amerimail.
Amerimail is informal and chatty. It's likely to begin with a breezy "Hi" and end with a "Bye." The chances of Amerimail containing a smiley face or an "xoxo" are disturbingly high. We Americans are reluctant to dive into the meat of an e-mail; we feel compelled to first inform hapless recipients about our vacation on the Cape which was really excellent except the jellyfish were biting and the kids caught this nasty bug so we had to skip the whale watching trip but about that investors' meeting in New York. ... Amerimail is a bundle of contradictions: rambling and yet direct; deferential, yet arrogant. In other words, Amerimail is America.
Euromail is stiff and cold, often beginning with a formal "Dear Mr. X" and ending with a brusque "Sincerely." You won't find any mention of kids or the weather or jellyfish in Euromail. It's all business. It's also slow. Your correspondent might take days, even weeks, to answer a message. Euromail is also less confrontational in tone, rarely filled with the overt nastiness that characterizes American e-mail disagreements. In other words, Euromail is exactly like the Europeans themselves. (I am, of course, generalizing. German e-mail style is not exactly the same as Italian or Greek, but they have more in common with each other than they do with American mail.)
These are more than mere stylistic differences. Communication matters. Which model should the rest of the world adopt: Euromail or Amerimail?
A California-based e-mail consulting firm called People-onthego sheds some light on the e-mail divide. It recently asked about 100 executives on both sides of the Atlantic whether they noticed differences in e-mail styles. Most said yes. Here are a few of their observations:
"Americans tend to write (e-mails) exactly as they speak."
"Europeans are less obsessive about checking e-mail."
"In general, Americans are much more responsive to email—they respond faster and provide more information."
One respondent noted that Europeans tend to segregate their e-mail accounts. Rarely do they send personal messages on their business accounts, or vice versa. These differences can't be explained merely by differing comfort levels with technology. Other forms of electronic communication, such as SMS text messaging, are more popular in Europe than in the United States.
The fact is, Europeans and Americans approach e-mail in a fundamentally different way. Here is the key point: For Europeans, e-mail has replaced the business letter. For Americans, it has replaced the telephone. That's why we tend to unleash what e-mail consultant Tim Burress calls a "brain dump": unloading the content of our cerebral cortex onto the screen and hitting the send button. "It makes Europeans go ballistic," he says.
Susanne Khawand, a German high-tech executive, has been on the receiving end of American brain dumps, and she says it's not pretty. "I feel like saying, 'Why don't you just call me instead of writing five e-mails back and forth,' " she says. Americans are so overwhelmed by their bulging inboxes that "you can't rely on getting an answer. You don't even know if they read it." In Germany, she says, it might take a few days, or even weeks, for an answer, but one always arrives.
Maybe that's because, on average, Europeans receive fewer e-mails and spend less time tending their inboxes. An international survey of business owners in 24 countries (conducted by the accounting firm Grant Thornton) found that people in Greece and Russia spend the least amount of time dealing with e-mail every day: 48 minutes on average. Americans, by comparison, spend two hours per day, among the highest in the world. (Only Filipinos spend more time on e-mail, 2.1 hours.) The survey also found that European executives are skeptical of e-mail's ability to boost their bottom line.
It's not clear why European and American e-mail styles have evolved separately, but I suspect the reasons lie within deep cultural differences. Americans tend to be impulsive and crave instant gratification. So we send e-mails rapid-fire, and get antsy if we don't receive a reply quickly. Europeans tend to be more methodical and plodding. They send (and reply to) e-mails only after great deliberation.
For all their Continental fastidiousness, Europeans can be remarkably lax about e-mail security, says Bill Young, an executive vice president with the Strickland Group. Europeans are more likely to include trade secrets and business strategies in e-mails, he says, much to the frustration of their American colleagues. This is probably because identity theft—and other types of hacking—are much less of a problem in Europe than in the United States. Privacy laws are much stricter in Europe.
So, which is better: Euromail or Amerimail? Personally, I'm a convert—or a defector, if you prefer—to the former. I realize it's not popular these days to suggest we have anything to learn from Europeans, but I'm fed up with an inbox cluttered with rambling, barely cogent missives from friends and colleagues. If the alternative is a few stiffly written, politely worded bits of Euromail, then I say … bring it on.
Thanks to Pierre Khawand for research assistance.
Eric Weiner is a correspondent for NPR's Day to Day program.
Article URL: http://slate.msn.com/id/2115223/
My first foray into S/MIME is ongoing. I say "ongoing" because it has taken three attempts, but I now have signing going! The first was with the help of one of CACert's experts, and within 10 minutes or so of massive clicking around, I had a cert installed in my Thunderbird.
10 minutes to create a cert is total failure right there. There should be ONE button and it should take ONE second. No excuses. The notion that I need a cert to tell other people who I am - people who already know me - is so totally off the charts there are no words to describe. None that are polite anyway.
(Actually, there shouldn't even be a button, it should be created when the email account is created! Thanks to Julien for that observation.)
Anyway, to be a crypto scribbler these days one has to have an open mind to all cryptosystems, no matter who designed them, so I plough on with the project to get S/MIME working. No matter how long it takes. Whip me again, please.
There are three further signing problems with S/MIME I've seen today, beyond the lack of the button to make it work.
Firstly, it seems that the key exchange is based on signed messages. The distribution of your public key only happens when you sign a message! Recalling yesterday's recommendation for S/MIME signing (do not sign messages unless you know what that means), this represents a barrier to deployment. The workaround is to send nonsense signed messages to people who you want to communicate with, but to otherwise turn signing off. Techies will say that's just stoopid, but consider this: it's just what your lawyer would say, and you don't want to be the first one to feel really stoopid in front of the judge.
Secondly, Daniel says that implementations first encrypt a message and then sign it. That means that to show that a message is signed, you must de-sign it *and* decrypt it in one operation. As only the owner has the key to decrypt, only the owner can show it is signed! Dispute resolution is looking messy, even buggy. How can anyone be sure that a signed message is indeed signed if there are layers separating message from signature? The RFC says:
In general, the best strategy is to "be liberal in what you receive and conservative in what you send"
Which might be good advice in Gnuland, but is not necessarily the best security advice. (I think Dan Bernstein said this.) Further, this puts the application in a big dilemma. To properly look after messages, they should be decrypted and then re-encrypted within the local database using the local keys. Otherwise the message is forever dependent on that one cert, right?! (Revoke the cert, revoke all messages?)
But if the message is decrypted, the signature is lost, so the signature can only ever form part of a message integrity function. Luckily, the RFC goes on to say that one can sign and encrypt in any order, so this would seem to be an implementation issue.
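To see why the layering matters for dispute resolution, here's a deliberately toy sketch - an HMAC stands in for the signature and a keystream XOR for the recipient-only encryption, so this is emphatically not real S/MIME - showing that whichever order the layers go in, only the holder of the decryption key can exhibit the signed plaintext to a third party:

```python
import hashlib, hmac

# Toy stand-ins, NOT real S/MIME: an HMAC plays the signature, a keystream
# XOR plays the recipient-only encryption. Keys are made up for illustration.
SIGNING_KEY = b"sender-signing-key"
RECIP_KEY   = b"recipient-only-key"

def sign(msg: bytes) -> bytes:
    """Prefix the message with a 'signature' over it."""
    return hmac.new(SIGNING_KEY, msg, hashlib.sha256).digest() + msg

def crypt(msg: bytes) -> bytes:
    """XOR against a keystream derived from RECIP_KEY; XOR is its own inverse."""
    stream, counter = b"", 0
    while len(stream) < len(msg):
        stream += hashlib.sha256(RECIP_KEY + bytes([counter])).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(msg, stream))

message = b"I agree to pay 100 units"

sign_then_encrypt = crypt(sign(message))   # signature hidden inside the envelope
encrypt_then_sign = sign(crypt(message))   # signature visible, but only over ciphertext

# Either way, to show a third party that *this plaintext* was signed, RECIP_KEY
# must be applied first - so only the message's owner can demonstrate the signature.
```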
That's good news. And (thirdly) it also takes the edge off of the RFC's suggestion that signatures are non-repudiable. FCers know well that humans repudiate, and dumb programs can't do a darn thing about it.
Adam points to an essay by Paul Graham on A Unified Theory of VC Suckage. Sure, I guess, and if you like learning how and why, read it and also Adam's comments. Meanwhile, I'll just leave you with this amusing footnote:
[2] Since most VCs aren't tech guys, the technology side of their due diligence tends to be like a body cavity search by someone with a faulty knowledge of human anatomy. After a while we were quite sore from VCs attempting to probe our nonexistent database orifice. No, we don't use Oracle. We just store the data in files. Our secret is to use an OS that doesn't lose our data. Which OS? FreeBSD. Why do you use that instead of Windows NT? Because it's better and it doesn't cost anything. What, you're using a freeware OS?
How many times that conversation was repeated. Then when we got to Yahoo, we found they used FreeBSD and stored their data in files too.
Flat files rule.
(It turns out that the term of art for "we just use files on FreeBSD" is flat files. They are much more common than people would admit, especially among old timers who've got that "been there, done that" experience of seeing their entire database puff into smoke because someone plugged in a hair dryer or the latest security patch just blew away access to that new cryptographic filesystem with journalling mirrored across 2 continents, a cruise liner and a nuclear bunker. Flat files really do rule OK. Anyway, back to debugging my flat file database ...)
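For the curious, this is roughly all that is meant by a flat file in this context - a hypothetical append-only record file, one record per line, flushed to disk before anyone relies on it (the file name and record format here are made up for the sketch):

```python
import json, os

LEDGER = "ledger.jsonl"   # hypothetical file: one JSON record per line

def append_record(record: dict) -> None:
    """Append a record and force it to disk before returning."""
    with open(LEDGER, "a") as f:
        f.write(json.dumps(record) + "\n")
        f.flush()
        os.fsync(f.fileno())   # no relying on the OS to get around to it

def read_records() -> list:
    """Re-read the whole file; recovery is just reading it again."""
    if not os.path.exists(LEDGER):
        return []
    with open(LEDGER) as f:
        return [json.loads(line) for line in f if line.strip()]

append_record({"account": "alice", "amount": -10})
append_record({"account": "bob", "amount": 10})
print(read_records())
```

Nothing clever, which is rather the point: the recovery strategy is "read the file", and there is no engine to reinstall, tune or lose.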
We don't often get the chance to do the Rip van Winkle experience, and this makes Christopher Allen's essay on how he returned to the security and crypto field after exiting it in 1999 quite interesting.
He identifies two things that are in common: Fear and Gadgets. Both are still present in buckets, and this is a worry, says Christopher. On Fear:
"To simplify, as long as the risks were unknown, we were in a business feeding off of 'fear' and our security industry 'pie' was growing. But as we and our customers both understand the risks better, and as we get better at mitigating those risks cheaply, this "fear" shrinks and thus the entire 'pie' of the security industry becomes smaller. Yes, new 'threats' keep on coming: denial-of-service, worms, spam, etc., but businesses understanding of past risks make them believe that these new risks can be solved and commodified in the same way."
The observation that fear fed on the unknown risks is certainly not a surprise. But I find the conclusion that as the risks were better understood, the pie shrunk to be a real eye opener. It's certainly correlated, but it misses out a world of causality - fear is and was the pie, and now the pie buyers are jaded.
I've written elsewhere on what's wrong with the security industry, and I disagree with Christopher's assumptions, but we both end up with the same question (and with a nod to Adam's question): how indeed to sell a viable, real security product into a market where, statistically, we are probably selling FUD?
Addendum: Adam's signalling thread: from Adam: 1, 2. From Iang: 1, 2
A security model generally includes the human in the loop; totally automated security models are generally disruptable. As we move into our new era of persistent attacks on web browsers, research on human failure is very useful. Ping points to two papers done around 2001 on what web browsing security means to users.
Users' Conceptions of Web Security (by Friedman, Hurley, Howe, Felten, Nissenbaum) explored how users treat the browser's security display. Unsurprisingly, non-technical users showed poor recognition of spoof pages, based on the presence of a false padlock/key icon. Perhaps more surprisingly, users derived a different view of security: they interpreted security as meaning it was safe to put their information in. Worse, they tended to derive this conclusion as much from the padlock as from the presence of a form:
"That at least has the indication of a secure connection; I mean, it's obviously asking for a social security number and a password."
Which is clearly wrong. But consider the distance between what is correct - the web security model protects the information on the wire - and what it is that is relevant to the user. For the user, they want to keep their information safe from harm, any harm, and they will assume that any signal sent will be to that end.
But as the threat to information is almost entirely at the end node, and not on the wire, users are faced with an odd choice: interpret the signal according to the harms that they know about, which is wrong, or ignore the signal because it is irrelevant to the harms. Which is unexpected.
It is therefore understandable that users misinterpret the signals they are given.
The second paper, Users' Conceptions of Risks and Harms on the Web (by Friedman, Nissenbaum, Hurley, Howe, Felten), is also worth reading, but it was conducted in a period of relative peace. It would be very interesting to see the same study conducted again, now that we are in a state of perpetual attack.
Stefan writes:
"Back in 1996–1998, I worked in my spare time on a book titled Electronic Money and Privacy. Due to career priorities from 1999 onward, I never got around to finishing the book alas. Since I will not have any time in the foreseeable future to get back to working on the book, I am hereby making the first four draft chapters freely available."
My own story is similar. Back in 97 or so I started a book with a working title of FC. In 98 I rewrote it along the lines of the then evolving 7 layer model of financial cryptography. Unfortunately I did not get the time to wrap the book up, and it remains somewhat incomplete.
Perhaps I should put it on the net. I recently put all my draft papers up on the net, as some are a year or more old and aren't getting closer! Comments? Maybe there is too much stuff on the net already ...
Jim points at this article by Ivan Schneider on the attempts of airlines to reduce payment cost.
Airlines Aim for Expense Reduction in Payments
...
After labor and fuel, the passenger airline industry's largest expenses involve distribution costs. These are comprised of travel agency commissions, fees to global distribution systems such as Sabre and finally, the merchant discount rate paid to their banks.
Already, the airlines have effectively slashed their distribution costs through hard negotiations with travel agencies and the global distribution systems, yielding a 26 percent decrease in average annual distribution costs from 1999 to 2002, according to Edgar, Dunn & Company (EDC, Atlanta), a financial-services and payments consultancy.
Now, the airlines are targeting the estimated $1.5 billion it spends on accepting credit cards from its customers. "The airlines definitely have payments on their radar screens," says Pascal Burg, a San Francisco-based director at EDC. "They used to look at accepting cards and paying merchant fees as the cost of doing business, and now they're trying to proactively manage the cost associated with doing payments."
The first approach is for an airline to have a friendly chat with its affinity card co-brand partner. But that's often a difficult conversation to have, both for the bank and for the airline. "Traditionally, the co-brand relationships have been managed in the marketing department, while the acquiring merchant side has been handled through the corporate treasury," says Thad Peterson, also a director at EDC.
...
http://www.banktech.com/news/showArticle.jhtml?articleID=60401062
Hop on a plane, land, and discover Adam has posted 13 blog entries, including one that asks for more topics! Congrats on 500 posts! He posts on some testimony: "the only part of our national security apparatus that actually prevented casualties on 9/11 was the citizenry." More on security measurements ("fundamentally flawed"). Tons of stuff on ChoicePoint.
Axel talks about what it means to be a security professional. Yes, there are some media stars out there, but remember "don't believe Vendor pitches." Sounds like something I would write.
Scott points at an article on the inside story of how plastic payments are battled over in Australia. Sadly, the article requires yet another subscription to yet another newspaper that you only read once and they have your data for ever. No thanks.
Stefan does some critical analysis of pseudonyms; very welcome, there is an absence of good stuff on this. A must read for me, so remind me please... Meanwhile, he comments that laws won't help identity theft, but "Schwarzenegger’s administration ... should point legal fingers ... at organizations that hold, distribute, and make consumer-impacting decisions based on identity information..." It is correct to recognise that the problem lies fundamentally with using the identity as the "hitching post" for animals in the future, but finger pointing isn't going to help. (It's a case of the One True Number.) More on that later, when I've got my draft exposé on finger pointing in reasonable shape.
In terms of definitions for FC, applying crypto to banking and finance doesn't work. Mostly because those doors are simply closed to us, but also because that's simply not how it is done. And this brings us to the big difference between Bob's view and FC7.
In Bob's view, we use crypto on anything that's important or valuable - which is much more open than, say, the 'bank' view. But this is still bottom-up thinking, and it is in the implicit assumption of crypto that the trouble lies.
Applications are driven top down. That means, we start from the application, develop its requirements and then construct the application by looking downwards to successively more detailed and more technical layers. Of course, we bounce up and down and around all the time, but the core process is tied to the application, and its proxy, the requirements. The requirements drive downwards, and the APIs drive upwards.
Which means that the application drives the crypto, not the other way around. Hence it follows that FC might include some crypto, or it might not - it all depends on the app! In contrast, if we assume crypto from the beginning, we're building inventions looking for a market, not solving real world problems.
This is at heart one of the major design failures in many systems. For example, PKI/SSL/HTTPS assumed crypto, and assumed the crypto had to be perfect. Now we've got phishing - thanks guys. DigiCash is the same: start from an admittedly wonderful formula, and build a money system around it. Conventional and accepted systems building practices have it that this methodology won't work, and it didn't for DigiCash. Another example is digital signatures. Are we seeing the pattern here? Assume we are using digital signatures. Further assume they have to be the same as pen&ink signatures.... Build a system out of that! Oops.
Any systems methodology keeps an open mind on the technologies used, and that's how it is with FC7. Unlike the other definitions, it doesn't apply crypto, it starts with the application - which we call the finance layer - and then drives down. Because we *include* crypto as the last layer, and because we like crypto and know it very well, chances are there'll be a good component of it. But don't stake your rep on that; if we can get away with just using a grab bag of big random numbers, why wouldn't we?
And this is where FC7 differs from Bob H's view. The process remains security-oriented in nature. The people doing it are all steeped in crypto; we all love to add in more crypto if we can possibly justify it. But the goal we drive for is the goal of an application, and our solution is fundamentally measured on meeting that goal. Indeed, elegance is not found in sexy formulas, but in how little crypto is included to meet that goal, and how precisely it is placed.
The good news about FC7 is that it is a darn sight more powerful than the 'important' view, and a darn sight more interesting than the banking view. You can build anything this way - just start with an 'important' application (using Bob's term) and lay out your requirements. Then build down.
Try it - it's fun. There's nothing more satisfying than starting with a great financially motivated idea, and then building it down through the layers until you have a cohesive, end-to-end financial cryptography architecture. It's so much fun I really shouldn't share it!
Just a week or two before the SHA-1 news from Xiaoyun Wang, Yiqun Lisa Yin, Hongbo Yu, I wrote a paper on what it means to be secure. In a flash of inspiration, I hit on applying the methodology of Pareto-efficiency to security.
In brief, I define a Pareto-secure improvement to be one where a change made to a component results in a measurable and useful improvement in security results, at no commensurate loss of security elsewhere. And a component is Pareto-secure if there is no further improvement to be made. I further go on to say that a component is Pareto-complete if it is Pareto-secure under all reasonable assumptions and in all reasonable systems. See the draft for more details!
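In case a more formal statement helps, here is one way to write the definitions down (my own notation, sketched from the prose above; it is not a quote from the draft):

```latex
% Sketch of the definitions in my own notation, not the draft's.
% Let $S(c, a, m)$ denote the security that component $c$ delivers to
% application $a$, as measured by metric $m$.
\begin{description}
  \item[Pareto-secure improvement] a change $c \to c'$ such that
    $S(c', a, m) \ge S(c, a, m)$ for every metric $m$, with strict
    inequality for at least one $m$ --- a measurable, useful gain at no
    commensurate loss elsewhere.
  \item[Pareto-secure] $c$ is Pareto-secure for application $a$ if no
    Pareto-secure improvement $c'$ remains to be made.
  \item[Pareto-complete] $c$ is Pareto-complete if it is Pareto-secure
    under all reasonable assumptions and in all reasonable systems $a$.
\end{description}
```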
SHA-1 used to be Pareto-complete. I fear this treasured status was cast in doubt when Wang's team of champions squared up against it at Crypto 2004 last year; the damaging note, 'Collision Search Attacks on SHA1,' of a week or two back has dealt a further blow.
Here's the before and after. I wrote this before the new information, but I haven't felt the need to change it yet.
| | Before Crypto 2004: Pareto-secure (signatures) | Before: Pareto-complete | After Crypto 2004: Pareto-secure (signatures) | After: Pareto-complete |
|---|---|---|---|---|
| MD5 | ? | No | No | No |
| SHA0 | Yes | ? | No | No |
| SHA1 | Yes | Yes | ? | No |
| SHA256 | Yes | Yes | Yes | ? |
To be Pareto-complete, the algorithm must be good for all uses. To be Pareto-secure, the algorithm must be good - offer no Pareto-improvement - for a given application and set of assumptions. In order to stake that claim, I considered above the digital signature, in the sense of for example DSA.
Triage is one thing, security is another. Last week's ground-shifting news was widely ignored in the press. Scanning a bunch of links, the closest I found to any acknowledgement of what Microsoft announced is this:
In announcing the plan, Gates acknowledged something that many outside the company had been arguing for some time--that the browser itself has become a security risk. "Browsing is definitely a point of vulnerability," Gates said.
Yet no discussion on what that actually meant. Still, to his sole credit, author Steven Musil admitted he didn't follow what Microsoft were up to. The rest of media speculated on compatibility, Firefox as a competitor and Microsoft's pay-me-don't-pay-me plans for anti-virus services, which I guess is easier to understand as there are competitors who can explain how they're not scared.
So what does this mean? Microsoft has zero, zip, nada credibility in security.
...earlier this week the chairman of the World's Most Important Software Company looked an auditorium full of IT security professionals in the eye and solemnly assured them that "security is the most important thing we're doing." And this time he really means it.
That, of course, is the problem: IT pros have heard this from Bill Gates and Microsoft many times before ...
Whatever they say is not only discounted, it's even reversed in the minds of the press. Even when they get one right, it is assumed there must have been another reason! The above article goes on to say:
Indeed, it's no accident that Microsoft is mounting another security PR blitz now, for the company is trying to reverse the steady loss of IE's browser market share to Mozilla's Firefox 1.0.
Microsoft is now the proud owner of a negative reputation in security.
Which leads to the following strategy: actions, not words. Every word said from now until the problem is solved will just generate wheel spinning for no productivity, at a minimum (and notwithstanding Gartner's need to sell those same words on). The only way that Microsoft can change their reputation for insecurity is to actually change their product to be secure. And then be patient.
Microsoft should shut up and do some security. Which isn't entirely impossible. If it is a browser v. browser question, it is not as if the competition has an insurmountable lead in security. Yes, Firefox has a reputation for security, but showing that objectively is difficult: its brand is indistinguishable from "hasn't got a large enough market share to be worth attacking as yet."
"This is a work in progress," Wilcox says. "The best thing for Microsoft to do is simply not talk about what it's going to do with the browser."
The note on the SHA1 attack from the team from Shandong - Xiaoyun Wang, Yiqun Lisa Yin, Hongbo Yu - is now available in PDF. Firstly, it is a summary, not the real paper, so the attack is not outlined. Examples are given of _reduced rounds in SHA1_ which is not the real SHA1. However, they established their credibility at Crypto 2004 by turning around attacks over night on new challenges. Essential text, sans numbers, below...
Collision Search Attacks on SHA1
Xiaoyun Wang, Yiqun Lisa Yin, Hongbo Yu
February 13, 2005

1 Introduction
In this note, we summarize the results of our new collision search attacks on SHA1. Technical details will be provided in a forthcoming paper.
We have developed a set of new techniques that are very effective for searching collisions in SHA1. Our analysis shows that collisions of SHA1 can be found with complexity less than 2^69 hash operations. This is the first attack on the full 80-step SHA1 with complexity less than the 2^80 theoretical bound. Based on our estimation, we expect that real collisions of SHA1 reduced to 70 steps can be found using today's supercomputers.
In the past few years, there have been significant research advances in the analysis of hash functions. The techniques developed in the early work provide an important foundation for our new attacks on SHA1. In particular, our analysis is built upon the original differential attack on SHA0, the near collision attack on SHA0, the multi-block collision techniques, as well as the message modification techniques used in the collision search attack on MD5. Breaking SHA1 would not be possible without these powerful analytical techniques.
Our attacks naturally apply to SHA0 and all reduced variants of SHA1. For SHA0, the attack is so effective that we were able to find real collisions of the full SHA0 with less than 2^39 hash operations. We also implemented the attack on SHA1 reduced to 58 steps and found collisions with less than 2^33 hash operations. Two collision examples are given in this note.
2 A collision example for SHA0
<skip some numbers>
Table 1: A collision of the full 80-step SHA0. The two messages that collide are (M0, M1) and (M0 , M'1). Note that padding rules were not applied to the messages.
3 A collision example for 58-step SHA1
<skip some numbers>
"Table 2: A collision of SHA1 reduced to 58 steps. The two messages that collide are M0 and M'0. Note that padding rules were not applied to the messages."
The last footnote generated some controversy which is now settled: padding is irrelevant. A quick summary of our knowledge is that the Wang, Yin, Yu attack can reduce the strength of SHA-1 from 80 bits to 69 bits. This still falls short of a practical attack, as it leaves SHA-1 stronger than MD5 (only 64 bit strength), but SHA-1 is now firmly on the "watch" list. To use my suggested lingo, it is no longer Pareto-complete, so any further use would have to be justified within the context of the application.
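To get a feel for what dropping from 80 bits to 69 bits means, a back-of-envelope sketch helps; the hash rate below is an assumption picked purely to give a sense of scale.

    # Rough work-factor arithmetic for the result above: the birthday bound on
    # SHA-1 collisions is 2^80 hash operations, the new attack needs about 2^69.
    # The hash rate is an assumed figure, chosen only for illustration.
    birthday_bound = 2 ** 80
    attack_cost = 2 ** 69
    speedup = birthday_bound // attack_cost            # 2**11 == 2048

    hashes_per_second = 10 ** 9                        # assume one billion hashes/sec
    seconds_per_year = 365 * 24 * 3600
    years = attack_cost / (hashes_per_second * seconds_per_year)

    print(f"attack is {speedup}x cheaper than brute force")
    print(f"still ~{years:,.0f} years at 10^9 hashes/sec")   # roughly 18,700 years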
To paraphrase Adi, "[security] is bypassed not attacked."
Skype's success has caused people to start looking at the security angle. One easy claim is that because it is not open source, then it's not secure. Well, maybe. What one can say is that the open source advantage to security is absent, and there is no countervailing factor to balance that lack. We have trust, anecdotal snippets, and a little analysis, but how much is that worth? And how long will it last?
fm spotted these comments and attributed them to Sean D. Sollé:
Shame Skype's not open source really. Actually, it's pretty worrying that Skype's *not* open source.

Given that you're happily letting a random app send and receive whatever it likes over an encrypted link to random machines all over the interweb, the best place for it is on a dedicated machine in a DMZ.

Trouble is, you'd then have to VNC into it, or split the UI from the telephony bits, but then you're back to needing the source again.

Say what you like about Plain Old Telephones, but at least you know they're not going to run through your house emptying the contents of your bookcases through the letterbox out onto the pavement.

Sean.
Over on the mozilla-crypto group, discussions circulated as to how to fix the Shmoo bug. And phishing, of course. My suggestion has been to show the CA's logo, as that has to change if a false cert is used (and what CA wants to be caught having issued the false cert themselves?). I've convinced very few of this, although TrustBar does include this alongside its own great idea, and it looks great! Then comes this stunning revelation from Bob Relyea:
"These same arguments played around and around at Netscape (back when it mattered to Verisign) about whether or not to include the signer's brand. In the end it was UI realestate (or the lack thereof given to security) argument that won the day. In the arena where realestate was less of an issue, but security was, the signer's logo and name *WERE* included (remember the 'Grant' dialogs for signed apps). They still contain those logos today."
Putting the CA logo on the web page was *always* part of the security model. So, that's why it always felt so right and so obvious. But why then have so many people argued against it? Is it the real estate - that never-ending battle to get as much on the screen as possible? Is it the status quo? Is it the triviality of pictures? The dread of marketing?
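The mechanics are not hard. As a minimal sketch of the idea - not TrustBar's or Netscape's actual code, and the logo table here is a hypothetical local file - the chrome only needs to read the issuer out of the presented certificate and show whatever branding is on file for that CA; a forged cert issued by some other CA then surfaces as the wrong logo.

    # Sketch only: fetch a site's certificate, read the issuing CA's name,
    # and map it to a stored logo.  The ca_logos.json table is hypothetical.
    import json
    import ssl
    from cryptography import x509
    from cryptography.x509.oid import NameOID

    def issuer_organisation(host, port=443):
        pem = ssl.get_server_certificate((host, port))
        cert = x509.load_pem_x509_certificate(pem.encode())
        org = cert.issuer.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)
        return org[0].value if org else cert.issuer.rfc4514_string()

    def logo_for(host, logo_table="ca_logos.json"):
        with open(logo_table) as f:
            logos = json.load(f)        # e.g. {"VeriSign, Inc.": "verisign.png"}
        return logos.get(issuer_organisation(host), "unknown-ca.png")

    # A site whose cert is suddenly issued by an unfamiliar CA shows a
    # different logo - which is the cue the user is meant to notice.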
Which brings to mind experiences I've had chatting with people.
Not something I normally do, and to be honest, not something many of the readers of FC do, it seems. Techies have a very limited view of what people are and do, which is why the technical world has not yet grasped the importance of IM. It's still considered a sort of adjunct cousin to email. And even the whole p2p thing is considered fashionable for its technical interest, not for its social ramifications.
Here's what I've discovered in talking to people. Text is boring. Smileys are interesting. Pictures that move are better, and ones that talk and jump and say silly things are best of all! Out in userland, they love to have wonderful fun jumping little conversations with lots of silly smileys. Think of it as ring tones in a smiley ...
(Oh, and one other thing. It seems that users don't care much for real estate. They are quite happy to load up all these plugins and use them all ... to the point where the actual web page might be left with a third of the screen! This would drive me nuts, I'm one of those that turns off all the tool bars so I can get extreme vertical distance.)
Which leaves me somewhat confused. I know nothing about this world - I recognise that. But I know enough to recognise that here is one more very good idea. Think back to TrustBar. It allows the user to select their own logo to be associated with a secure site. Great idea - it dominated all the ideas I and others had thought of (visit counts, pet names, fingerprints) because of how it reaches out to that world - the world that is being hit by phishing. But over on the cap-talk list, Marc Stiegler goes one further:
"While I agree that an icon-only system might be unsatisfactory, there's more than one way to skin this cat. One of the ideas I have been inspired to think about (Ken Kahn got me started thinking about this, I don't remember how), was to assign faces to the identities. You could start with faces like pirates, scientists, bankers, priests, etc., and edit them with mustaches, freckles, sunglasses. For creating representations of trust relationships, it would be really entertaining, and perhaps instructive, to experiment with such mechanisms to allow users to create such icons, which could be very expressive."
Fantastic idea. If that doesn't kill phishing, it deserves to! If you aren't confused enough by now, you should re-examine your techie roots... and read this closing piece on how we can use a computer to deliver mental abuse. That's something I've always wanted!
Rude software causes emotional trauma
By Will Knight, published Monday 7th February 2005 13:03 GMT
Scientists at California University in Los Angeles (UCLA) have discovered computers can cause heartache simply by ignoring the user. When simulating a game of playground catch with an unsuspecting student, boffins showed that if the software fails to throw the ball to the poor student, he is left reeling from a psychological blow as painful as any punch from a break-time bully.
Matthew Lieberman, one of the experiment's authors and an assistant professor of psychology at UCLA explains that the subject thinks he is playing with two other students sitting at screens in another room, but really the other figures are computer generated. "It's really the most boring game you can imagine, except at one point one of the two computer people stop throwing the ball to the real player," he said.
The scientists used functional magnetic resonance imaging (fMRI) to monitor brain activity during a ball-tossing game designed to provoke feelings of social exclusion. Initially the virtual ball is thrown to the participating student but after a short while the computer players lob the ball only between themselves. When ignored, the area of the brain associated with pain lights up as if the student had been physically hurt. Being the class pariah is psychologically damaging and has roots deep in our evolutionary past. "Going back 50,000 years, social distance from a group could lead to death and it still does for most infant mammals," Lieberman said.
The fact that this pain was caused by computers ignoring the user suggests interface designers and software vendors must work especially hard to keep their customers happy, and it's not surprising that failing and buggy software is so frustrating. If software can cause the same emotional disturbance as physical pain, it won't be long before law suits are flying through the courts for abuse sustained at the hands of shoddy programming. ®
© Copyright 2005 The Register
http://go.theregister.com/news/2005/02/07/rude_software/
Mozilla Foundation is running a project to develop a policy for adding new Certificate Authorities to Firefox, Thunderbird and the like. This is so that more organisations can sign off on more certificates, so more sites can use SSL and you can be more secure in browsing. Frank Hecker, leading the project, has announced his "near ready" draft, intimating it will be slapped in front of the board any day now.
It transpires that this whole area is a bit of a mess, with browser manufacturers having inherited a legacy root list from Netscape, and modified it through a series of ad hoc methods that suit no-one. Microsoft - being the 363kg gorilla on the block - hands the whole lot over to a thing called WebTrust, which is a cartel of accountancy firms that audit CAs and charge for the privilege. Perfectly reasonable, and perfectly expensive; it's no wonder there are so few SSL sites in existence.
Netscape, the original and much missed by some, tried charging the CA to get added to its browser, but frankly that wasn't the answer. The whole practice of offering signed certificates is fraught with legal difficulties, and while the system wasn't under any form of attack, this lack of focus brought up all sorts of crazy notions like trying to make money in this business. Now, with phishing running rampant and certs sadly misdirected against some other enemy, getting out of selling CA additions is a Good Thing (tm). Ask your class action attorney if you are unsure on this point.
Mozilla Foundation, to their credit, recognised this confusion and the absence of any grounding in security or rationality. The draft policy has evolved into a balanced process requiring an external review of the CA's operating practices. That review could be a WebTrust, some technical equivalent, or an independent review by an agreed expert third party.
Now in progress for a year, MF has added dozens of new CAs, and is proceeding on a path to add smaller non-commercial CAs like CACert. This is one of a bevy of non-"classical" CAs that use the power of the net to create their network of relationships. This is the great white hope for browsing - the chance of unleashing the power of SSL to the small business and small community operators.
To get there from where they were was quite an achievement and other browser manufacturers would do well to follow suit. Crypto was never meant to be as difficult as the browser makes it. Go Mozilla!
Recently, I stumbled across a logical economics space where a decision had to be made and no rational information was available. It wasn't exactly that there was no information, but that there was too much noise, and the working hypothesis was that risky decisions would be made without any rational process being successful, or even possible, for the average participant. (I defined 'rational' as being related to the needs in some direct positive sense.)
Which led me to ponder how shared memes arise outside any framework of feedback. Is this a sales activity? A hype activity? A long search (ok, surf) brought me to the following list of possibilities. They are scattered and tangential, and to cut a long story short, I remain irrationally indecisive on this process. I actually don't know where to look for this, so comments are welcome!
(Links: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15)
I posted one theory a week back, the Big Lie, and I was somewhat surprised at the hackles raised. In a perverse sense, the response proved one thing: that information and truth can be hidden behind a subject of revulsion (and there are plenty of contemporary revulsions behind which to hide). Coincidentally, the Big Lie also provides one theory on how shared memes arise, that of a conspiracy by the original big liars. It's a theory, but I'm not convinced it explains the space adequately, or even in more than a small minority of cases of the Big Lie itself.
The next thread is what happens when a person knows the truth, and the world ignores him. For example, the case of Tsunami Smith, who warned in 1998 that a tsunami could hit in the Indian ocean; we know now he was ignored.
Another thread is how to extract the info. You could go and ask people, but people don't want to reveal their information. Here are two links (Educated Guesswork, and sharad) on how to extract sensitive information from users. Such games remind me of the old British Army technique of the firing squad - 6 privates line up and are handed 5 bullets and one dummy. As none of them knows which is the dummy, none of them is totally sure that they were responsible for the death of the victim.
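The firing squad trick has a direct statistical analogue, randomised response: each respondent privately flips a coin, answers truthfully on heads and answers "yes" regardless on tails, so no individual answer incriminates anyone, yet the true rate is recoverable in aggregate. A toy sketch, where the 20% true rate is just an assumed number for the simulation:

    # Toy randomised-response survey: nobody's individual answer is revealing,
    # but the population rate is recoverable.  TRUE_RATE is an assumption.
    import random

    TRUE_RATE = 0.20          # fraction who really have the sensitive attribute
    N = 100_000

    def answer(has_attribute):
        if random.random() < 0.5:        # tails: forced "yes", gives deniability
            return True
        return has_attribute             # heads: answer truthfully

    responses = [answer(random.random() < TRUE_RATE) for _ in range(N)]
    observed_yes = sum(responses) / N

    # P(yes) = 0.5 + 0.5 * p   =>   p = 2 * (P(yes) - 0.5)
    estimated_rate = 2 * (observed_yes - 0.5)
    print(f"observed yes-rate {observed_yes:.3f}, estimated true rate {estimated_rate:.3f}")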
Which leads us to the evolving science of the Ideas market. This is an idea by Robin Hanson whereby many people aggregate their opinions, but there are some tricks to overcome the barriers. Firstly, people get rewarded in some fashion for voting on ideas. Of course, few of us can predict the future, so most of the votes are non-useful. But some of the voters actually know what they are talking about. So, in order to overcome the 'popular vote' effect (which is close to what I'm looking at above), people who vote correctly are rewarded by increased value in their 'shares' in the idea's future, and those who vote incorrectly lose their investment. It's "put your money where your mouth is" time. (I have to of course mention my own contribution, the Task Market where you get to own the results of the choices as well.)
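One concrete way to run such a market is a logarithmic market scoring rule, where an automated market maker quotes a price that moves with every trade, and those who move it in the right direction profit when the idea resolves. A minimal sketch; the liquidity parameter and the trade sizes are made-up numbers:

    # Minimal logarithmic market scoring rule (LMSR) market maker for a
    # yes/no idea.  b controls liquidity; the trades below are illustrative.
    import math

    class IdeaMarket:
        def __init__(self, b=100.0):
            self.b = b
            self.q = {"yes": 0.0, "no": 0.0}    # outstanding shares per outcome

        def cost(self, q):
            return self.b * math.log(sum(math.exp(v / self.b) for v in q.values()))

        def price(self, outcome):
            denom = sum(math.exp(v / self.b) for v in self.q.values())
            return math.exp(self.q[outcome] / self.b) / denom

        def buy(self, outcome, shares):
            """Cost of buying `shares` of `outcome`; each share pays 1 if it wins."""
            before = self.cost(self.q)
            self.q[outcome] += shares
            return self.cost(self.q) - before

    m = IdeaMarket()
    print(f"initial P(yes) = {m.price('yes'):.2f}")          # 0.50
    paid = m.buy("yes", 40)                                   # a believer in the idea
    print(f"paid {paid:.2f} for 40 yes-shares, P(yes) now {m.price('yes'):.2f}")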
Memes are an idea that has been around for a long time - concepts or ideas that pass from person to person. I know this was a hot concept years ago, but I never paid attention to it. Wikipedia has some good starters on it, but it doesn't answer my question; how do these things arise? I do not know, but Wikipedia has a great example of the most popular net meme of all. If only it were that simple!?
You're probably facing some meme resistance by now. Karl Popper advocated this in the strongest possible terms: "the survival value of intelligence is that it allows us to extinct a bad idea, before the idea extincts us." I liked that quote so much I posted it on my SSL page. The only problem is, I don't know where and when he said it, which probably shows its memity.
The self as a meme - I am reminded of a habit I had (have?) when engaging a particularly stupid idea by someone convinced of same. This habit became known by those punished with it, and replicated. So much so that one day I was sitting beside a woman who did it, without realising where it came from ... No, I decline to document the meme, but those who know will.
This post on Boyd and Military Strategy provides an interpretation of what we are observing in certain security goods within OODA (observation-orientation-decision-action) loops. In brief: Observation has initially failed to reward observers, so alternate strategies are formed within Orientation. As there is insufficient feedback in the loop, the Orientation gets more and more powerful, until it is no longer capable of dealing with Observations. That is, those Observations that are in accord with the Orientation are accepted and trumpeted, and those against are discarded. (Those that are ambiguous are open to misinterpretation!)
And finally, Crowds and Power is a book I am reading by Elias Canetti. The mob ruleth, and I shall report back when I've discovered how to rule the mob. Also on the list is Extraordinary Popular Delusions. With a title like that, it just has to have some secrets hidden within.
Which, all tantalising snippets aside, gets me no closer to understanding how decisions are made when there is insufficient information. Maybe that's the way it has to be...
Addendum #1: Adam reminds me to add the Keynesian Beauty Contest:
The Keynesian beauty contest is the view that much of investment is driven by expectations about what other investors think, rather than expectations about the fundamental profitability of a particular investment. John Maynard Keynes, the most influential economist of the 20th century, believed that investment is volatile because investment is determined by the herd-like “animal spirits” of investors. Keynes observed that investment strategies resembled a contest in a London newspaper of his day that featured pictures of a hundred or so young women. The winner of the contest was the newspaper reader who submitted a list of the top five women that most clearly matched the consensus of all other contest entries. A naïve strategy for an entrant would be to rely on his or her own concepts of beauty to establish rankings; but since the prize goes to the entry closest to the consensus, a shrewder entrant would rank according to what the other entrants were expected to find beautiful. Consequently, each contest entrant would try to second guess the other entrants’ reactions, and then sophisticated entrants would attempt to second guess the other entrants’ second guessing. And so on.

Instead of judging the beauty of people, substitute alternative investments. Each potential entrant (investor) now ignores fundamental value (i.e., expected profitability based on expected revenues and costs), instead trying to predict “what the market will do.” The results are (a) that investment is extremely volatile because fundamental value becomes irrelevant, and (b) that the most successful investors are either lucky or masters at understanding mob psychology – strategic game playing. “Animal spirits” are now known as “irrational exuberance,” and this beauty contest model is an explanation for such phenomena as stock market bubbles. Contrast this model with efficient markets and present value.
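The same second-guessing dynamic is easy to see in the classic "guess 2/3 of the average" game, a standard toy model of the beauty contest: a level-0 player guesses blindly, a level-1 player best-responds to level-0, and so on, and the "fundamental" disappears entirely. A sketch, with the levels purely illustrative:

    # Level-k reasoning in the "guess 2/3 of the average" game, a toy stand-in
    # for the beauty contest: each level best-responds to the one below it.
    TARGET_FRACTION = 2 / 3

    def level_guess(k, level0_guess=50.0):
        """Level-0 guesses the midpoint of [0,100]; level-k guesses 2/3 of level-(k-1)."""
        guess = level0_guess
        for _ in range(k):
            guess *= TARGET_FRACTION
        return guess

    for k in range(6):
        print(f"level-{k} player guesses {level_guess(k):.1f}")
    # The only fixed point is 0: with enough rounds of second-guessing, every
    # guess is driven by expectations of expectations, not by any underlying
    # "beauty" at all.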
In the governance section, often seen as squeezed between economics and grass growing in the stakes of dismality, we see an emerging trend to compare everything to Arthur Andersen. Of course, the collapsed audit house was a big (!) data point, one which everyone can agree with. So that makes it special. (Links 1, 2, 3, 4).
But let's get real. What Arthur Andersen actually did was a) obvious and b) routine. A little bit of pre-emptive shredding? Who in their right minds thinks this is not going on? By what theory of human action or agency theory or what-have-you can we show that auditors will not take the money and do the company's bidding?
As far as I can tell, what AA was caught for was some minor infraction. No doubt worse was going on under the covers. How can we tell? Because audits are mostly secret. (Read any audit report, and it doesn't really tell you what they did, and is covered by a whole bunch of weasel words.) If they are secret, then there are two possible reasons: one is to hide the information from you, and the other is .. to hide the information from you! Which is to say, they will tell you that it is competitively sensitive, but that's indistinguishable from "didn't do the job."
Which leaves us with yet another case of lemons. The market doesn't really rely on the audit report, other than its binary existence. The market does its own calculations, and looks to other fraud indicators to see what's what. (When was the last time you saw a company fail, and the auditor knew and warned?)
Which also brings us to the question of just what one is supposed to do about it. Basel II and Sarbanes-Oxley will add more and more regulation, but will not change the overall governance equation, except for the worst. That's because they make things more complex, and we are already seeing signs that boards are losing their original governance and strategic focus in a frenzy of CYA adjustment.
They also add costs, so they are 'bads' on two counts. What then is the underlying source of the rot? I believe it to be secrecy. Corporates that practice keeping things secret set themselves up for the rot to spread internally and eventually bring themselves down. The sunlight for secrecy is called disclosure, and if you look at the Enron case, it was disclosure that triggered the event: some member of the public scrutinised the _publicly accessible filings_ of the company and realised that the numbers so filed didn't accord with reality.
In my emerging theory of open governance, anything that is disclosed is good, anything secret is bad. (How this theory stacks up against competitive intelligence is an unanswered question, for the moment!) In this sense, the existence of the SEC, FSA and various million or so filings that they mandate is a good thing. As long as they are public. Any rule that doesn't result in a public filing is a 'bad'.
Make no mistake, this is not a satisfactory state of affairs - the government has no clue how to mandate useful disclosure. Not because others are smarter, but simple market principles indicate that no one person knows such things. Disclosure is a competitive force, like all other 'goods' and thus an open governance society would encourage differentiation. In my favoured world, one company would decide on an audit, and another would not. Let the market judge.
(Indeed, in the evolving governance world of the 5PM, those practising it know that it costs a bundle to do it "fully," so the more realistic way is a graduated approach.)
Which brings us to Riggs Bank. It is looking like the rot was both secrecy borne of age and influence, and also that a well-known form of banking cancer was lurking within. The reason I say that is mostly intuition, but also, it transpires that Riggs Bank was also a favourite bank of the CIA. What this means is that aside from the normal secrecy infection leading to rot, the bank laboured under huge conflicts of interest. The CIA has a long history of infecting banks and running their own banks, and the result is never pretty, in governance terms (think Nugan-Hand, BCCI, ...).
Perversely, when the news of the CIA connection broke, Riggs shares rose heavily. This shows the market knows that the punishment will be relatively light, as Riggs now have a get out of jail card. This was already confirmed in the early plea bargain for a single criminal conviction - there is no way a bank would take a cop like that without fighting unless some other deal were done.
In closing, what can we say? Governance - it's a mess. If there are secrets there, don't expect it to be pretty when the sunlight hits. And don't expect any auditor to have picked up the Riggs situation. That's just naive.
(JPM reports) Here is a simple example from Eudora (a popular email client) for OS X.
You'll get the idea. Note that "anti phishing technology!" is stunningly, stupidly simple. It's just Not That Complex.
"You need a big warning in email clients when a link is fake."
Oh.
So over at MIT people are making robots that can have intercourse with sound effects, but over in the Email Client Corner, a concept as stunningly simple as this...
"You need a big warning in email clients when a link is fake."
Is just Too Hard.
Note that anti phishing technology is far, far simpler than say "spell checking"
When you use the Eudora email client, and you make a spleliling mistake, it brings up a HUGE, CATACLYSMIC warning - the entire operating system is taken over, massive alerts 50% the size of the screen appear, if you have a printer connected, massive "SPELLING ERROR!" banners immediately shoot out of the printer. The Mac's excellent voice synthesis is employed and suddenly - before you can type the next space key - Stephen Hawking's voice is telling you "YOU HAVE -- MADE A SPELLING - ERROR - ALERT!!!"
That's for a spelling error.
In contrast when an email contains a "phishing" link, the miserable alert attached flashes up for a second -- but only if you mouse over the link:
The bottom line here, as always, is that not so much software engineers, but software designers, are stunningly, hopelessly, pathetically, uselessly, staggeringly, mind-blowingly stupid.
Note that the same piece of consumer software put a HUGE amount of effort in to enable REAL TIME SMILEYS .... if you happen to type a smiley :) it notices that in real time as you type, and animates a yellow and black smiley there for you. Wow!
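For what it's worth, the check JPM is asking for really is Not That Complex. A minimal sketch - assuming a plain HTML mail body, and ignoring redirectors, punycode and the other refinements - just compares the domain the link text displays with the domain the href actually points at:

    # Minimal "fake link" detector: flag anchors whose visible text looks like
    # a URL/domain but whose href points somewhere else.  A real mail client
    # would need to handle redirectors, punycode, etc. - this is the easy core.
    from html.parser import HTMLParser
    from urllib.parse import urlparse

    def domain_of(text):
        text = text.strip()
        if "://" not in text:
            text = "http://" + text
        return urlparse(text).hostname or ""

    class FakeLinkFinder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.href = None
            self.text = ""
            self.suspicious = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.href = dict(attrs).get("href", "")
                self.text = ""

        def handle_data(self, data):
            if self.href is not None:
                self.text += data

        def handle_endtag(self, tag):
            if tag == "a" and self.href is not None:
                shown, actual = domain_of(self.text), domain_of(self.href)
                if "." in shown and shown != actual:
                    self.suspicious.append((self.text.strip(), self.href))
                self.href = None

    mail = '<p>Please log in at <a href="http://evil.example.net/x">www.paypal.com</a></p>'
    finder = FakeLinkFinder()
    finder.feed(mail)
    print(finder.suspicious)   # [('www.paypal.com', 'http://evil.example.net/x')]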
(thanks JPM!)
All the blogs (1, 2, 3) are buzzing about the T-Mobile cracker. 21-year-old Nicolas Jacobsen hacked into the phone company's database and lifted identity information for some 400 customers, and also scarfed up photos taken by various phone users. He sold these and presumably made some money. He was at it for at least 6 months, and was picked up in an international sweep that netted 28 people.
No doubt the celebrity photos were embarrassing, but what was cuter was that he also lifted documents from the Secret Service and attempted to sell them on IRC chat rooms!
One would suppose that he would find himself in hot water. Consider the young guy who tried to steal a few credit cards from a hardware store by parking outside and using his laptop to wirelessly hack in and install a trojan. He didn't succeed in stealing anything, as they caught him beforehand. Even then, the maximum he was looking at was 6 credit card numbers. Clearly, a kid mucking around and hoping to strike lucky, this was no real criminal.
He got 12 years. That's 2 years for every credit card he failed to steal.
If proportionality means anything, Jacobsen is never ever going to see sunlight again. So where are we now? Well, the case is being kept secret, and the Secret Service claim they can't talk about it. This is a complete break with tradition, as normally the prosecution will organise a press circus in order to boost their ratings. It's also somewhat at odds with the press release they put out on the other 19 guys they picked up.
The answer is probably that which "a source" offers: "the Secret Service, the source says, has offered to put the hacker to work, pleading him out to a single felony, then enlisting him to catch other computer criminals in the same manner in which he himself was caught. The source says that Jacobsen, facing the prospect of prison time, is favorably considering the offer."
Which is fine, except the hardware shop hacker also helped the hardware store to fix up their network and still got 12 years. The way I read this message is that proportionality - the punishment matching the crime - is out the window, and if you are going to hack, make sure you hack the people who will come after you to the point of ridicule.
The Year of the Phish has passed us by, and we can relax in our new life swimming in fear of the net. Everyone now knows about the threats, even the users, but what they don't know is what happens next. My call: it's likely to get a lot worse before it gets better. And how it gets better is not going to be life as we knew it. But more on that later.
First... The Good News. There is some cold comfort for those not American. A recent report had British phishing losses under the millions. Most of the rich pickings are 'over there' where credit rules, and identity says 'ok'. And even there, the news could be construed as mildly positive for those in need of good cheer. A judge recently ruled a billion dollar payout against spammers who are identified in name, if not in face. We might never see their faces, but at least it feels good. AOL reported spam down by 75% but didn't say how they did it.
Also, news that Microsoft is to charge extra for security must make us believe they have found the magic pixie dust of security, and can now deliver an OS that's really, truly secure, this time! Either that, or they've cracked the conundrum of how to avoid the liability when the masses revolt and launch the class action suit of the century.
All this we could deal with, I guess, in time, if we could as an industry get our collective cryptographic act together and push the security models over to protecting users (one month's coding in Mozilla should do it, but oh, what a long month it's been!). But there is another problem looming, and it's ...
The Bad News: the politicians are now champing at the bit, looking for yet another reason to whip today's hobby horse of 'identify everyone' along into more lather. Yes, we can all mangle metaphors, just as easily as we can mangle security models. Let me explain.
The current project to identify the humanity of the world will make identity theft the crime of the century. It's really extraordinarily simple. The more everything rests on Identity, the more value will Identity have. And the more value it has, the more it will be worth to steal.
To get a handle on why it is more valuable, put yourself in the shoes of an identity thief. Imagine our phisher is three years old, and has a sweet tooth for data.
How much sugar can there be found in a thousand cooperating databases? Each database perfectly indexed with your one true number and bubbling over with personal details, financial details, searchable on demand. A regulatory regime that creates shared access to a thousand agencies, and that's before they start sharing with other countries?
To me, it sounds like the musical scene in the sweets factory of Chitty Chitty Bang Bang, where the over indulgent whistle of our one true identity becomes our security and dentistry nightmare. When the balance is upset, pandemonium ensues. (I'm thinking here the Year of the Dogs, and if you've seen the movie you will understand!)
Now, one could ask our politicians to stop it, and at once. But it's too late for that, they have the bits of digital identity between their teeth, and they are going to do it to us to save us from phishing! So we may as well be resigned to the fact that there will be a thousand interlinked identity databases, and a 100 times that number of people who have the ability to browse, manipulate, package, steal and sell that data. (This post is already too long, so I'm going to skip the naivete of asking the politicians to secure our identity, ok? )
A world like that means credit will come tumbling down, as we know it. Once you know everything about a person, you are that person, and no amount of digital hardware tokens or special biometric blah blahs will save the individual from being abused. So what do people do when their data becomes a phisher's candyfest?
People will withdraw from the credit system and move back to cash. This will cost them, but they will do it if they can. Further, it means that net commerce will develop more along the lines of cash trading than credit trading. In ecommerce terms, you might know this better as prepaid payment systems, but there are a variety of ways of doing it.
But the problem with all this is that a cash transaction has no relationship to any other event. It's only just tractable for one transaction: experienced FCers know that wrapping a true cash payment into a transaction when you have no relationship to fall back to in event of a hiccup is quite a serious challenge.
So we need a way to relate transactions, without infecting that way with human identity. Enter the nym, more fully known as the pseudonymous identifier. This little thing can relate a bunch of things together without needing any special support.
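A nym can be as little as a key pair: the nym is, say, the hash of the public key, and any message signed by the matching private key is linkable to that nym and to nothing else. A minimal sketch using Ed25519; the class and helper names are mine, not any particular system's:

    # Minimal nym: a keypair whose public-key hash is the identifier.  Signed
    # messages are linkable to the nym, and only to the nym - no human
    # identity is involved anywhere.  Names are illustrative.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    class Nym:
        def __init__(self):
            self._key = ed25519.Ed25519PrivateKey.generate()
            pub = self._key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
            self.id = hashlib.sha256(pub).hexdigest()[:16]   # the nym itself
            self.public_key = self._key.public_key()

        def sign(self, message: bytes) -> bytes:
            return self._key.sign(message)

    def same_nym(public_key, msg1, sig1, msg2, sig2):
        """Two messages relate to the same nym iff both verify under its key."""
        try:
            public_key.verify(sig1, msg1)
            public_key.verify(sig2, msg2)
            return True
        except Exception:
            return False

    trader = Nym()                      # a throwaway label; the nym is trader.id
    order = b"buy 10 units"
    payment = b"payment for order 42"
    print(trader.id, same_nym(trader.public_key, order, trader.sign(order),
                              payment, trader.sign(payment)))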
We already use them extensively in email, and in chat. There are nyms like iang which are short and rather tricky to use because there are more than one of us. We can turn it into an email address, and that allows you to send a message to me using one global system, email. But spam has taught us a lesson with the email address, by wiping out the ease and reliability of the email nym ... leading to hotmail and the throw away address (for both offense and defense) and now the private email system.
Email has other problems (I predict it is dying!) which takes us to Instant Messaging (or chat or IM). The rise of the peer-to-peer (p2p) world has taken nyms to the next level: disposable, and evolutionary.
This much we already know. P2P is the buzzword of the last 5 years. It's where the development of user activity is taking place. (When was the last time you saw an innovation in email? In browsing?)
Walking backwards ... p2p is developing the nym. And the nym is critical for creating the transactional framework for ecommerce. Which is getting beaten up badly by phishing, and there's an enveloping pincer movement developing in the strong human identity world.
But - and here's the clanger - when and as the nymous and cash based community develop and overcome their little difficulties, those aforementioned forces of darkness are going to turn on it with a vengeance. For different reasons, to be sure. For obvious example, the phishers are going to attack looking for that lovely cash. They are going to get rather rabid rather quickly when they work out what the pickings are.
Which means the mother of all security battles is looming for p2p. And unfortunately, it's one that we have to win, as otherwise, the ecommerce thing that they promised us in the late nineties is looking like a bit more like those fairy tales that don't have a happy ending. (Credit's going to be squeezed, remember.)
The good news is that I don't see why it can't be won. The great thing about p2p is the failure of standards. We aren't going to get bogged down by some dodgy 80's security model pulled out of the back pages of a superman comic, like those Mr Universe he-man kits that the guy with the funny name sold. No, this time, when the security model goes down in flames (several already have) we can simply crawl out of the wreckage, dust off and go find another fighter to fly into battle.
Let's reel off those battles already fought and won and lost. Napster, Kazaa, MNet, Skype, BitTorrent. There are a bunch more, I know, I just don't follow them that closely. Exeem this week, maybe I do follow them?
They've had some bad bustups, and they've had some victories, and for those in the systems world, and the security world, the progress is quite encouraging. Nothing looks insurmountable, especially if you've seen the landscape and can see the integration possibilities.
But - and finally we are getting to the BIG BUT - that means whoever these guys are defeating ... is losing! Who is it? Well, it's the music industry. And hollywood.
And here's where it all comes together: ecommerce is going to face a devastating mix of over rich identity and over rich phishers. It'll shift to cash based and nym based, on the back of p2p. But that will shift the battle royale into p2p space, which means the current skirmishes are ... practice runs.
And now we can see why Hollywood is in such a desperate position. If the current battle doesn't see Hollywood go down for the count, that means we are in a world of pain: a troubling future for communication, a poor future for ecommerce, and a pretty stark world for the net. It means we can't beat the phisher.
Which explains why Hollywood and the RIAA have found it so difficult to get support on their fight: everyone who is familiar with Internet security has watched and cheered, not because they like to see someone robbed, but because they know this fight is the future of security.
I like Hollywood films. I've even bought a few kilograms of them. But the notion of losing my identity, losing my ability to trade and losing my ability to communicate securely with the many partners and friends I have over the net fills me with trepidation. I and much of the academic and security world can see the larger picture, even if we can't enunciate it clearly. I'd gladly give up another 10 years of blockbusters if I can trade with safety.
On the scales of Internet security, we have ecommerce on one side and Hollywood on the other. Sorry, guys, you get to take one for the team!
Adam picked up an article analysing Skype. For those on the cutting edge, you already know that Skype is sweeping the boards in VOIP, or turning your computer into a phone. Download it today ... if you have a Mac. Or Linux or even Windows. (I don't.)
What might be less well known is that Skype put in crypto to secure the telephone conversation. This means that eavesdroppers can't ... well, eavesdrop! Great stuff. Now, even better, they built it themselves, so not only do we have a secure VOIP solution, downloadable for free, but we also have a mystery on our hands: is it really secure?
Unfortunately, we don't know for sure as they didn't release the source. And they won't say a thing ... Simson Garfinkel looked at the packets and they sorta look encrypted. Or compressed ... or something.
So where are we? Well, it's still a darn sight better than anything else. Go guys! We have a clear benefit over anything else on the table.
And even if it's not secure, nobody knows that. We have to wait until the cryptanalysts have pored over the packets and found the weaknesses. Or, more likely, the hackers have disassembled the core crypto code, worked out what it does, and handed the crypto guys the easy bit.
Even after they announce a weakness, it's still secure! Because nobody can exploit it, until someone else comes up with a toolkit to breach and exploit the weaknesses. (Generally, it's a different group of people, don't ask me why.)
But, even then it's still secure! Simply because nobody bothers to download the exploit and listen to people's conversation. Get real, there aren't exactly hordes of people driving around listening to poorly secured WEP connections (exploit available!) now are there?
The measure of security is positively dependent on the cost to the *attacker*. So an attacker still has to download the exploit, attach the alligator clips to the ethernet, sit in the van, chew donuts, drink bad coffee and listen to bad jokes while waiting for the call. Well, maybe, but a full analysis of the attacker's costs for eavesdropping shows ... it's too sodding expensive, even with the exploit available. Don't worry about it.
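Put rough numbers on that and the point makes itself. Every figure in this sketch is an assumption pulled out of the air for illustration:

    # Back-of-envelope attacker economics for eavesdropping a single target.
    # Every number here is an assumed, illustrative figure.
    hours_in_van      = 40        # waiting for the interesting call
    attacker_hourly   = 50.0      # what the attacker's time is worth
    gear_and_exploit  = 500.0     # laptop, alligator clips, downloaded exploit
    donuts_and_coffee = 60.0

    attack_cost = hours_in_van * attacker_hourly + gear_and_exploit + donuts_and_coffee

    value_of_call = 200.0         # expected value of what is overheard
    p_useful_call = 0.10          # chance the target says anything worth hearing

    expected_gain = value_of_call * p_useful_call
    print(f"cost {attack_cost:.0f}, expected gain {expected_gain:.0f}, "
          f"rational to attack: {expected_gain > attack_cost}")
    # cost 2560, expected gain 20 - which is roughly why nobody bothers.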
In which case, Skype gives you great security, a bit like the momentous defeat of the GSM crypto protocol over the paparazzi scanners! Scoreboard: Jedi Knights of the Crypto Rebellion, 1. Forces of the Dark Empire, 0.
Axel's blog points to a storm in a teacup over at a professional association called the Computer Security Institute. It seems that they invited Frank Abagnale to keynote at their conference. Abagnale, if you recall, is the infamous fraudster portrayed in the movie Catch me if you can.
Many of the other speakers kicked up a fuss. It seems they had ethical qualms about speaking at a conference where the 'enemy' was also presenting. Much debate ensued, alleges Axel, about forgiveness, holier-than-thou attitudes and cashing in on notoriety.
I have a different perspective, based on Sun Tzu's famous aphorism. He said something to the effect of "Know yourself and you will win half your battles. Know your enemy and you will win 99 battles out of a hundred." Those speakers who complained or withdrew have cast themselves as limited to the first group, the self-knowers, and revealed themselves as reliable only to win every second battle.
Still, even practitioners of narrow horizons should not be above learning from those who see further. So why is there such a paranoia of only dealing with the honest side in the security industry? This is the never-ending white-hat versus black-hat debate. I think the answer can be found in guildthink.
People who are truly great at what they do can afford to be magnanimous about the achievements of others, even those they fight. But most are not like that; they are continually trapped in a sort of middle-level, process-oriented tier, implementing that which the truly great have invented. As such, they are always on the defensive for attacks on their capabilities, because they are unable to deal at the level where they can cope with change and revolution.
This leads the professional tiers to always be on the lookout for ways to create "us" and "them." Creating a professional association is one way, or a guild, to use the historical term.
Someone like Frank Abagnale - a truly gifted fraudster - has the ability to make them look like fools. Thus, he scares them. The natural response to this is to search out rational and defensible ways to keep him and his ilk on the outside, in order to protect the delicate balance of trade. For that reason, it is convenient to pretend to be morally and ethically opposed to dealing with those that are convicted. What they are really saying is that his ability to show up the members for what they are - middle ranking professionals - is against their economic interests.
In essence, all professionals do this, and it should come as no surprise. All associations of professionals spend a lot of their time enhancing the credibility of their members and the dangers of doing business with those outside the association. So much so that you won't find any association - medical, accounting, engineering, or security - that will admit that this is all normal competitive behaviour. (A quick check of the CSI site confirms that they sell training, and they had a cyberterrorism panel. Say no more...)
So more kudos to the CSI for breaking out of the mold of us and them! It seems that common sense won over and Frank attended. He can be seen here in a photo op, confirming his ability to charm the ladies, and giving "us" yet another excuse to exclude him from our limited opportunities with "them" !
Phong Nguyen has edited for STORK a long 'New Trends' discussion of what cryptologists are concentrating on at the moment. It's very much a core, focused scientist's view, and engineers in the field will find it somewhat disjoint from the practical problems being faced in applications today. E.g., no mention of economic or opportunistic models. Still, for all that, it is a useful update on a broad range of areas for the heavy crypto people.
Cypherpunk asks a) why has phishing gone beyond "don't click that link" and b) why can't we educate the users?
A lot of what I wrote in The Year of the Snail is apropos to that first question. In economic terms, we would say that Phishing is now institutionalised. In more general parlance, and in criminal terms, it would be better expressed as organised crime. Phishing is now a factory approach, if you like, with lots of different phases, and different actors all working together. Which is to say that it is now very serious, it's not a simple net bug like viruses or spam, and that generally means telling people to avoid it will be inadequate.
We can look at the second question much more scientifically. The notion of teaching people not to click has been tried for so long now that we have a lot of experience just how effective the approach of 'user education' is. For example, see the research by Ye and Smith and also Herzberg and Gbara, who tested users in user interface security questions. Bottom line: education is worse than useless.
Users defy every effort to be educated. They use common sense and their own eyes: and they click a link that has been sent to them. If they didn't do that, then we wouldn't have all these darn viruses and all this phishing! But viruses spread, and users get phished, so we know that they don't follow any instructions that we might give them.
So why does this silly notion of user education persist? Why is every security expert out there recommending that 'users be educated' with not the least blush of embarrassment at the inadequacy of their words?
I think it's a case of complexity, feedback and some fairly normal cognitive dissonance. It tends to work like this: a security expert obviously receives his training from some place, which we'll call received wisdom. Let's call him Trent, because he is trusted. He then goes out and employs this wisdom on users. Our user, Alice, hears the words of "don't click that link" and because of the presence of Trent, our trusted teacher, she decides to follow this advice.
Then, Alice goes out into the world and ... well, does productive work, something us Internet geeks know very little about. In her office every day she dutifully does not click, until she notices two things. Firstly, everyone else is clicking away like mad, and indeed sending lots of Word documents and photos of kids and those corny jokes that swill around the office environment.
And, secondly, she notices that nobody else seems to suffer. So she starts clicking and enjoying the life that Microsoft taught her: this stuff is good, click here to see my special message. It all becomes a blur and some time later she has totally forgotten *why* she shouldn't click, and cannot work out what the problem is anyway.
(Then of course a virus sweeps the whole office into the seas ...)
So what's going on here? Well, several factors.
Hence, cognitive dissonance. In this case, the security industry has an unfounded view that education is a critical component of a security system. Out in the real world, though, that doesn't happen. Not only doesn't the education happen, but when it does happen, it isn't effective.
Perhaps a better way to look at this is to use Microsoft as a barometer. What they do is generally what the user asks for. The user wants to click on mail coming in, so that's what Microsoft gives them, regardless of the wider consequences.
And, the user does not want to be educated, so eventually, Microsoft took away that awful bloody paperclip. Which leaves us with the lesson of inbuilt, intuitive, as-delivered security. If you want a system to be secure, you have to build it so that it is intuitively so to the user. Each obvious action should be secure. And you have to deliver it so that it operates out of the box, securely. (Mozilla have recently made some important steps in this direction by establishing a policy of delivery to the average user. It's a first welcome step which will eventually lead them to delivering a secure browser.)
If these steps aren't taken, then it doesn't help to say to the user, don't click there. Which brings me to the last point: why is user education *worse* than useless? Well, every time a so-called security expert calls for the users to be educated, he is avoiding the real problems, and he is shifting the blame away from the software to the users. In this sense, he is the problem, and until we can get him out of the way, we can't start thinking of the solutions.
Over at EmergentChaos, Adam asked what happens when "the Snail" gets 10x worse? I need several cups of coffee to work that one out! My first impressions were that ... well, it gets worse, dunnit! which is just an excuse for not thinking about the question.
OK, so gallons of coffee and a week later, what is the natural break on the shift in the security marketplace? This is a systems theory (or "systemics" as it is known) question. Hereafter follows a rant on where it might go.
(Unfortunately, it's a bit shambolic. Sorry about that.)
A lot of ordinary users (right now) are investigating ways to limit their involvement with Windows due to repeated disasters with their PCs. This is the first limiting factor on the damage: as people stop using PCs on a casual basis, they switch to using them on a "must use" basis.
(Downloading Firefox is the easy fix and I'll say no more about it.) Some of those - retail users - will switch to Macs, and we can guess that Mac might well double its market share over the next couple of years. A lot of others - development users and poorer/developing countries - will switch to the open source Unix alternates like Linux/BSD. So those guys will have a few good years of steady growth too.
Microsoft will withdraw from the weaker marketplaces. So we have already seen them pull out of supporting older versions, and we will see them back off from trying to fight Firefox too hard (they can always win that back later on). But it will maintain its core. It will fight tooth and nail to protect two things: the Office products, and the basic windows platform.
To do that, the bottom line is that they probably need to rewrite large chunks of their stuff. Hence the need to withdraw from marginal areas in order to concentrate on protecting that which is core, so as to concentrate efforts. So we'll see a period characterised by no growth or negative growth by Microsoft, during which the alternates will reach a stable significant percentage. But, Microsoft will come back, and this time with a more secure platform. My guess is that it will take them 2 years, but that's because everything of that size takes that long.
(Note that this negative market growth will be accompanied by an increase in revenues for Microsoft as companies are forced to upgrade to the latest releases in order to maintain some semblance of security. This is the perversity known as the cash cow: as the life cycle ends, the cash goes up.)
I'd go out on a limb here and predict that in 2 years, Microsoft will still control about half of the desk top market, down from about 90% today.
There are alternates outside the "PC" mold. More people will move to PDAs/cellular/mobile phones for smaller apps like contact and communications. Pushing this move also is the effect we've all wondered about for a decade now: spam. As spam grows and grows, email becomes worse and worse. Already there is a generation of Internet users that simply do not use email: the teenagers. They are chat users and phone users.
It's no longer the grannies who don't use email, it is now the middle-aged tech groupies (us) who are feeling more and more isolated. Email is dying. Or, at least, it is going the way of the telegram, that slow clunky way in which we send rare messages like birthday, wedding and funeral notices. People who sell email-based product rarely agree with this, but I see it on every wall that has writing on it [1] [2].
But, I hear you say, chat and phones are also subject to all of the same attacks that are going to do Microsoft and the Internet so much damage! Yes, it's true, they are subject to those attacks, but they are not going to be damaged in the same way. There are two reasons for this.
Chat users are much much more comfortable with many many identities. In the world of instant messaging, Nyms are king and queen and all the other members of the royal family at the same time. The same goes for the mobile phone world; there has been a seismic shift in that world over to prepaid billing, which also means that an identity that is duff or a phone that is duff can simply be disposed of, and a new one set up. Some people I know go through phones and SIMs on a monthly basis.
Further, unlike email, there are multiple competing systems for both the phone platform and the IM platform, so we have a competition of technologies. We never had that in email, because we had one standard and nobody really cared to compete; but this time, as hackers hit, different technologies can experiment with different solutions to the cracks in different ways. The one that wins will attract a few percentage points of market share until the solution is copied. So the result of this is that the much lauded standardisation of email and the lack of competition in its basic technical operability is one of the things that will eventually kill it off.
In summary so far; email is dying, chat is king, queen, and anyone you want to be, and your mobile/cellular is your pre-paid primary communications and management device.
What else? Well, those who want email will have to pay *more* for it, because they will be the shrinking few who consume all the bandwidth with their spam. Also, the p2p space will save us from the identity crisis by inventing the next wave of commerce based on the nym. Which means that we can write off the Hollywood block buster for now.
Shambolic, isn't it!
[1] "Scammers Exploit DomainKeys Anti-phishing Weapon"
http://story.news.yahoo.com/news?tmpl=story2&u=/zd/20041129/tc_zd/139951
[2] "Will 2005 be the year of the unanswered e-mail message?"
http://www.iht.com/bin/print_ipub.php?file=/articles/2004/12/06/business/netfraud.html
Over on the Register they are reporting on a rebellion - they called it a strike - by eBay's users in Spain. Initially, this just seemed to be a bunch of grumbling Spaniards, but the rebellion quickly spread to other countries. What seems striking to me is that Spaniards are not the grumbling kind, so things must be pretty bad out there.
It's fun reading! Reading between those lines, it all comes down to the business model goal of pandering to features. Back in the early days of building systems, the architect is faced with a very heavy choice: features or cost. It's generally an either/or, and for the most part features wins in the market place and features wins in the mind space.
A brief explanation of the problem space. When you build a feature that is transactional in intent, there are three standards it must meet, one after the other. Demonstrable, which means you can demo it to your boss, and to analysts, to the extent that they get what it is you are trying to say, and immediately order it into production. Or, if they are analysts, they immediately stock up on options, and then write their Buy recommendations.
Wisely, the developer ignores all that, and then moves the feature to the next standard, which is Usable. This means that under most users and most circumstances the feature actually works. People can bash and bang at it, and any series of 10 user tests makes everyone smile. At this point, even the developer can only smile with joy as his new feature goes into production.
But there is a further standard: Transactional. This means that the feature returns a complete result in every case. It doesn't mean that it always works - that's beyond the ken of mere mortals and most developers. Transactional means that the result that is returned is completely reliable, and can be used to instruct further actions. It is this focus on errors that allows us to get to the real prize for Transactional software: the ability to build other software systems on top.
A few examples: an email is transactional, as you don't ever receive half an email. A bank payment is not, as people are required to find out what happened. A web request can be transactional but a web session is not. That's why there are warnings on some sites "don't push this button twice when buying..."
Most features never make the Transactional standard. Most users don't know it even exists, until they discover their favourite feature doesn't have it. And even then, the mystery just deepens, because nobody says those magic words "oh, the software never made it to Transactional status."
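One concrete way to see the gap: a Transactional payment call always hands back a definite, reusable answer, which in practice usually means the client supplies its own request id so that a retry reports the earlier result instead of charging twice - exactly the failure those "don't push this button twice" warnings paper over. A minimal sketch; the names are illustrative, not any particular payment API:

    # Sketch of the Usable-vs-Transactional distinction: the server keeps the
    # outcome keyed by a client-chosen request id, so a retry (the "pushed the
    # button twice" case) returns the same complete result rather than a
    # second charge or a shrug.  Names are illustrative.
    import uuid

    class PaymentServer:
        def __init__(self):
            self._results = {}                 # request_id -> final outcome

        def pay(self, request_id, account, amount):
            if request_id in self._results:    # retry: report, don't repeat
                return self._results[request_id]
            # ... do the actual transfer here; assume it succeeds ...
            result = {"request_id": request_id, "account": account,
                      "amount": amount, "status": "completed"}
            self._results[request_id] = result
            return result                      # always a complete, definite answer

    server = PaymentServer()
    rid = str(uuid.uuid4())                    # chosen by the client, kept for retries
    first = server.pay(rid, "alice", 100)
    second = server.pay(rid, "alice", 100)     # network hiccup, client retried
    assert first == second                     # other software can safely build on this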
Here's how the progression works:
Firstly, the feature comes into use and it has a lot of bugs. The users are happy enough that they ignore the errors. But then the euphoria dies and the tide of malcontent rises ... then there is a mad push to fix bugs and the easy ones get fixed. But then the bugs get more complex, and each failure involves more and more costs. But by now, also, there are some serious numbers of users involved, and serious numbers of support people, and serious numbers of revenue bux pouring in, and it is no longer clear to the company that there are remaining problems and remaining costs...
So the company institutionalises a system of high support costs, pushing errors back onto compliant users who can be convinced by some convenient reason, and the developers start moving onto other projects. These are all the wrong things to do, but as the user base keeps growing, insiders assume that they are doing something right, not wrong.
As one CEO infamously said, "People keep sending me money." Until they stopped, that is.
To get a full appreciation of what happens next, you have to read Senge's _The Fifth Discipline_. His book uses feedback notions to describe how the machine seems to run so well, and then all of a sudden runs full speed into a brick wall. The reason is partly that which worked at user base X won't work at user base 2X, but it's more than that. It's all the factors above, and it can be summed up in one sentence: the failure to move from Usable to Transactional.
You can see this in many successful companies: Microsoft's current security woes are a result of this, they hit the brick wall about 2001, 2002 when the rising tide of viruses overtook the falling star of Internet/PC growth. e-gold, a payment system, hit this very problem in 2000 when they went from 10 times annual growth to zilch in a quarter.
On the strength of one article (!) I'd hazard a guess that eBay and Paypal are now entering the phase of running fast and achieving no traction. I.e., the transactional Brick Wall. To be fair, it's a bit more than one article; Paypal have never ever had a transactional approach to their payment system, so it has really been a bit of a mystery to me as to why it has taken so long for their quite horrific hidden cost structure to bite.
According to an unattributed source, the SANS people are looking to compile a list of Sarbanes-Oxley horror stories. They might have their work cut out for them!
For those who don't follow governance like it affects our every minute, Sarbanes-Oxley is the Act in the USA to tighten up the rules on how a company does .. well, just about everything. It's the result of the Arthur Andersens, the Worldcoms and other billion dollar collapses. It's big, it's long, it's boring, and if you have heard of it, your only friends are people who have also heard of it...
Notwithstanding its dry accounting background, Sarbanes-Oxley has raised bureaucracy to new levels. Like a scene from Brazil, its solution to every ill is rules and yet more rules. Understandably, people don't like it, but because it applies to all companies equally, there is little to be gained in fighting it, as it is only the customer who has to pay, and it has no bearing on competing against your competitors, only against yourself.
What makes it especially interesting is that this time the risk is entirely within the company; outsourcing of any component is no excuse! A recently seen trend is that new hires in risk management are empowered to develop their own IT teams. Why? Because they can no longer simply shift the burden onto another department; Sarbanes-Oxley requires you to be responsible for the risk, however it's done.
And they're getting tough on compliance. This time, if your company is not up to scratch, you may well be offering yourself up as a sacrifice to a regulatory orgy of fines, inspections, naming and shaming and other terrors.
More great news for suppliers of solutions. More fear, uncertainty and doubt, and less risk. Who could ask for more?
Mini Research Project: Sarbanes-Oxley 404 Horror Stories
SANS is looking for evidence to support an assertion we hear a lot, that there is insufficient IT guidance in SOX 404/COSO to show that your IT systems have the needed controls to demonstrate the audit report is accurate. We have heard reports from the field talking about two auditors from the same firm having opposite findings, or more commonly, organizations that can't figure out what to do so they end up buying a six-figure SAS 70 to have some sort of coverage. If you have a horror story you can share that would be great. I am happy to read non-attributable statements, but what we are looking for are stories where we can name the individuals and organizations. Send your horror stories to stephen@sans.org
In surprise and indeed shock, someone has made smart card identity systems work on a national scale: the Estonians. Last week, I attended Consult Hyperion's rather good Digital Identity forum where I heard the details from one of their techies.
There are 1.35 million Estonians, and 650,000 cards have been issued to them. Each card carries the normal personal data, a nationally established email address, a photo, and two private keys. One key is useful for identification, and the other is used for signing.
Did I say signing? Yes, indeed, there is a separate key there for signing purposes. They sign all sorts of things, right up to wills, which are regularly excluded from other countries' lists of valid uses, and they seem to have achieved this fairy tale result by ensuring that the public sector is obliged to accept documents so signed.
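For the curious, this is all the signing key really has to do. A minimal sketch, assuming the Python cryptography package and a software-generated key purely for illustration; the real card keeps its private key on the chip and never releases it:

```python
# Sign a document and verify the signature -- the core of the signing-key idea.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = signing_key.public_key()

document = b"I, the undersigned, leave my estate to ..."   # e.g. a will

signature = signing_key.sign(
    document,
    padding.PKCS1v15(),
    hashes.SHA256(),
)

# Anyone holding the public key (published via the card's certificate) can
# check that this signature covers exactly this document and nothing else.
public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
print("signature verifies")
```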
I can now stop bemoaning that there are no other extant signatory uses of digital signatures other than our Ricardian Contracts. Check out OpenXades for their open source signatory architecture.
When asked, presenter Tarvi Martens from operator AS Sertifitseerimiskeskus (SK) claimed that the number of applications was in the thousands. How is this possible? By the further simple expedient of not having any applications on the card, and asking the rest of the country to supply the applications. In other words, Estonia issued a zero-application smart card, and banks can use the basic tools as well as your local public transport system.
Anybody who's worked on these platforms will get what I am saying - all the world spent the last decade talking about multi-application cards. This was a seller's dream, but to those with a little structural and markets knowledge this was never going to fly. But even us multi-blah-blah skeptics didn't make the jump to hypersmartspace by realising that the only way to roll out massive numbers of applications is to go for zero-application smart cards.
A few lucky architectural picks written up in the inevitable whitepaper will not however make your rollout succeed, as we've learnt from the last 10 years of financial cryptography. Why else did this work for the Estonians, and why not for anyone else?
Gareth Crossman, presenter from Liberty, identified one huge factor: the Napoleonic Code countries have a legal and institutional basis that protects the privacy of the data. They've had this basis since the days of Napoleon, when that enlightened ruler conquered much of mainland Europe and imposed a new and consistent legal system.
Indeed, Tarvi Martens confirmed this: you can use your card to access your data and to track who else has accessed your data! Can we imagine any Anglo government suggesting that as a requirement?
Further, each of the countries that has had success (Sweden was also presented) in national smart card rollouts has a national registry of citizens. Already. These registries mean that the hard work of "enrollment" is already done, and the user education issues shrink to a change in form factor from paper to plastic and silicon.
Finally, it has to be said that Estonia is small. So small that things can be got done fairly easily.
These factors don't exist across the pond in the USA, nor across the puddle in the UK. Or indeed any of the Anglo world based on English Common Law. The right to call yourself John Smith today and Bill Jones tomorrow is long established in common law, and it arose in response to regular and pervasive interference in basic rights by governments and others. Likewise, much as they might advertise to the contrary, there are no national databases in these countries that can guarantee to have one and only one record for every single citizen.
The lesson is clear; don't look across the fence, or the water, and think the grass is greener. In reality, the colour you are seeing is a completely different ecosystem.
In line with my last post about using payment systems to stupidly commit crimes, here's what's happening over in the hacker world. In brief, some thief is trying to sell some Cisco source code he has stolen, and decided to use e-gold to get the payout. Oops. Even though e-gold has a reputation for being a den of scammers, any given payment can be traced from woe to go. All you have to do is convince the Issuer to do that, and in this case, e-gold has a widely known policy of accepting any court order for such work.
The sad thing about these sorts of crooks and crimes is that we have to wait until they've evolved by self destruction to find out the really interesting ways to crack a payment system.
E-gold Tracks Cisco Code Thief
November 5, 2004 By Michael Myser
The electronic currency site that the Source Code Club said it will use to accept payment for Cisco Systems Inc.'s firewall source code is confident it can track down the perpetrators.
Dr. Douglas Jackson, chairman of E-gold Ltd., which runs www.e-gold.com, said the company is already monitoring accounts it believes belong to the Source Code Club, and there has been no activity to date.
"We've got a pretty good shot at getting them in our system," said Jackson, adding that the company formally investigates 70 to 80 criminal activities a year and has been able to determine the true identity of users in every case.
On Monday, a member of the Source Code Club posted on a Usenet group that the group is selling the PIX 6.3.1 firewall firmware for $24,000, and buyers can purchase anonymously using e-mail, PGP keys and e-gold.com, which doesn't confirm identities of its users.
"Bad guys think they can cover their tracks in our system, but they discover otherwise when it comes to an actual investigation," said Jackson.
The purpose of the e-gold system, which is based on 1.86 metric tons of gold worth the equivalent of roughly $25 million, is to guarantee immediate payment, avoid market fluctuations and defaults, and ease transactions across borders and currencies. There is no credit line, and payments can only be made if covered by the amount in the account. Like the Federal Reserve, there is a finite value in the system. There are currently 1.5 million accounts at e-gold.com, 175,000 of those Jackson considers "active."
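(A quick back-of-envelope check of that figure, assuming a late-2004 gold price of roughly $420 per troy ounce - my assumption, not the article's:)

```python
grams = 1.86 * 1_000_000      # 1.86 metric tons in grams
troy_oz = grams / 31.1035     # ~59,800 troy ounces
print(troy_oz * 420)          # ~ $25.1 million, matching "roughly $25 million"
```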
To have value, or e-gold, in an account, users must receive a payment in e-gold. Often, new account holders will pay cash to existing account holders in return for e-gold. Or, in the case of SCC, they will receive payment for a service.
The only way to cash out of the system is to pay another party for a service or cash trade, which Jackson said creates an increasingly traceable web of activity.
He did offer a caveat, however: "There is always the risk that they are clever enough to figure out an angle for offloading their e-gold in a way that leads to a dead end, but that tends to be much more difficult than most bad guys think."
This is all assuming the SCC actually receives a payment, or even has the source code in the first place.
It's the ultimate buyer beware - the code could be made up, tampered with or may not exist. And because the transaction through e-gold is instantaneous and guaranteed, there is no way for the buyer to back out.
Just a publicity stunt?
Dave Hawkins, technical support engineer with Radware Inc. in Mahwah, N.J., believes SCC is merely executing a publicity stunt.
"If they had such real code, it's more likely they would have sold it in underground forums to legitimate hackers rather than broadcasting the sale on Usenet," he said. "Anyone who did have the actual code would probably keep it secret, examining it to build private exploits. By selling it, it could find its way into the public, and all those juicy vulnerabilities [would] vanish in the next version."
"There's really no way to tell if this is legitimate," said Russ Cooper, senior scientist with security firm TruSecure Corp. of Herndon, Va. Cooper, however, believes there may be a market for it nonetheless. By posting publicly, SCC is able to get the attention of criminal entities they otherwise might not reach.
"It's advertising from one extortion team to another extortion team," he said. "These DDOS [distributed denial of service] extortionists, who are trying to get betting sites no doubt would like to have more ways to do that."
Reading an article on RFIDs, those wonderful little things that will surely be used for everything, next year (like smart cards), I came across this gem:
"Nokia (the largest cellphone manufacturer in the world) is about to release a cellphone that incorporates an RFID reader based on the ISO 14443 standard. The combination allows callers to scan posters and stickers that contain an embedded tag and buy the depicted products with the charge appearing automatically on their next phone bill."
Nokia have experimented with payment systems before, using their cellphones to bill for carwashes and cokes. This makes a lot of sense, as the mobile phone operators have the billing, the communications, and also a secure (to them) token in the hands of the consumer.
It's also in accord with Frank Trotter's observation that the three sectors best placed to develop new payment systems are telcos, couriers and ISPs. One to watch.
Over on Adam's blog, he asks the question, how do we signal security? Have a read of that if you need to catch up on what is meant by signalling, and what the market for lemons is.
It's a probing question. In fact, it goes right to the heart of security's dysfunctionalism. In fact, I don't think I can answer the question. But, glutton for punishment that I am, here's some thoughts.
Signalling that "our stuff is secure" is fairly routine. As Adam suggests, we write blogs and thus establish a reputation that could be blackened if our efforts were not secure. Also, we participate in security forums, and pontificate on matters deep and cryptographic. We write papers, and we write stuff that we claim is secure. We publish our code in open source form. (Some say that's an essential signal, but it only makes a difference if anybody reads it with the view to checking the security. In practice, that simply doesn't happen often enough to matter in security terms, but at least we took the risk.)
All that amounts to us saying we grow peaches, nothing more. Then there are standards. I've employed OpenPGP for this purpose, primarily, but we've also used x.509. Also, it's fairly routine to signal our security by stating our algorithms. We use SHA1, triple DES, DSA, RSA, and I'm now moving over to AES. All wonderful acronyms that few understand, but many know that they are the "safe" ones.
Listing algorithms also points out the paucity of that signal: it still leaves aside how well you use them! For imponderable example, DES used in "ECB mode" achieves one result, whereas in "CBC mode" it achieves a different result. How many know the difference? It's not a great signal, if it is so easy to confuse as that.
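To see just how easy it is to confuse, here's a minimal sketch using the pycryptodome package (assumed installed): the same key, the same plaintext of two identical blocks, two very different results:

```python
from Crypto.Cipher import DES
from Crypto.Random import get_random_bytes

key = get_random_bytes(8)
plaintext = b"ATTACK??ATTACK??"          # two identical 8-byte blocks

ecb = DES.new(key, DES.MODE_ECB).encrypt(plaintext)
print(ecb[:8] == ecb[8:])                # True: ECB leaks the repetition

iv = get_random_bytes(8)
cbc = DES.new(key, DES.MODE_CBC, iv).encrypt(plaintext)
print(cbc[:8] == cbc[8:])                # False: CBC hides it
```

ECB lets the repetition in the plaintext show straight through to the ciphertext; CBC chains each block into the next, so it disappears. Same acronym in the brochure, very different signal in practice.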
So the next level of signalling is to use packages of algorithms. The most famous of these are PGP for email, SSL for browsing, and SSH for Unix administration. How strong are these? Again, it seems to come down to "when used wisely, they are good." Which doesn't imply that the use of them is in any way wise, and doesn't imply that their choice leads to security.
SSL in particular seems to have become a watchword for security, so much so that I can pretty much guarantee that I can start an argument by saying "I don't use SSL because it doesn't add anything to our security model." From my point of view, I'm signalling that I have thought about security, but from the listener's point of view, only a pagan would so defile the brand of SSL.
Brand is very important, and can be a very powerful signal. We all wish we could be the one big name in peach valley, but only a few companies or things have the brand of security. SSL is one, as above. IBM is another. Other companies would like to have it (Microsoft, Verisign, Sun) but for one reason or another they have failed to establish that particular brand.
So what is left? It would appear that there are few positive signals that work, if only because any positive signal that arises gets quickly swamped by the masses of companies lining up for placebo security sales. Yes, everyone knows enough to say "we do AES, we recommend SSL, and we can partner with IBM." So these are not good signals as they are too easy to copy.
Then there are negative signals: I haven't been hacked yet. But this again is hard to prove. How do we know that you haven't been? How do you know? I know one particular company that ran around the world telling everyone that they were the toppest around in security, and all the other security people knew nothing. (Even I was fooled.) Then they were hacked, apparently lost half a mil in gold, and it turned out that the only security was in the minds of the founders. But they kept that bit quiet, so everyone still thinks they are secure...
"I've been audited as unhackable" might be a security signal. But, again, audit companies can be purchased to say whatever is desired; I know of a popular company that secures the planet with its software (or, would like to) that did exactly that - bought an audit that said it was secure. So that's another dead signal.
What's left may well be that of "I'm being attacked." That is, right now, there's a hacker trying to crack my security. And I haven't lost out yet.
That might seem like sucking on a lemon to see if it is sour, but play the game for a moment. If instead of keeping quiet about the hack attacks, I reported the daily crack attempts, and the losses experienced (zero for now), that indicates that some smart cookie has not yet managed to break my security. If I keep reporting that, every day or every month, then when I do get hacked - when my wonderful security product gets trashed and my digital dollars are winging it to digital Brazil - I'm faced with a choice:
Tell the truth, stop reporting, or lie.
If I stop reporting my hacks, it will be noticed by my no longer adoring public. Worse, if I lie, there will be at least two people who know it, and probably many more before the day is out. And my security product won't last if I've been shown to lie about its security.
Telling the truth is the only decent result of that game, and that then forces me to deal with my own negative signal. Which results in a positive signal - I get bad results and I deal with them. The alternatives become signals that something is wrong, so whichever way it goes, sucking on the lemon will eventually result in a signal as to how secure my product is.
Corruption and fraud are ever present. Either as opportunities or as activities, and no good is served by pretending to be above the study of their ways. In financial cryptography, we develop systems that do end-to-end cryptographically protected transactions not because crypto is cool, but because it protects the transactions from inside theft. And it protects the insiders from themselves.
Here's a surprising and contrarian article by Theodore Dalrymple on corruption in Italy. The author argues that the corruption of the leviathan state is the one determining factor that propels Italians to a greater standard of living than their uncorrupted British cousins.
It's well argued and well worth considering (albeit long). Can you unravel the justification and deconstruct the cognitive dissonance to place its views in balance? Just exactly when is corruption a good thing? A challenge!
One of the issues in governance of issuance and other assets is where you site your servers. Just the act of siting your servers opens the business to jurisdictional effects. As the legal and regulatory status of independent issuance is uncertain, many issuers have gone to offshore regimes. This isn't for the popular reason of escaping harmful taxes, but for the more practical reason of seeking small and simple jurisdictions that are efficient to deal with.
By way of example, consider the US, where there is a broad range of agencies that have some sort of interest that could lead them into regulating issuance. Perversely, even though the US is one of the most friendly countries for issuance businesses, it is practically impossible to be in compliance because there are simply too many regulators. Witness the Paypal story, which includes many wrangles with many regulators. In Europe, where there is often a catch-all financial regulator, it's much easier to figure out who one should be in compliance with, but the environment is much less friendly towards issuance (see the eMoney directive for the "like-a-bank" approach).
Offshore islands are just so much simpler, and most of them are only worried about crooks and/or bad perception. Yet, siting offshore exposes you to other risks such as poor connectivity and poor governance. And then there are all the normal disaster issues that nobody likes to talk about.
The Isle of Man has thought about this, and come up with the concept of home jurisdiction recovery siting. If you use an Isle of Man firm to do your disaster recovery, it seems that they will permit you to retain your home jurisdiction.
It's a very curious concept! There are many curious angles to this, and it could well take off in directions they hadn't thought of. One to watch.
Isle Of Man Set For Role As Global IT Disaster Recovery Hub
by Jason Gorringe, Tax-News.com, London
13 September 2004
The Isle of Man government has been actively pursuing measures that could propel the Island towards assuming the mantle of the world’s IT disaster recovery hub in the field of financial services, the local media has reported.
In a bid to achieve this, the Island’s authorities are seeking to agree memoranda of understanding (MOU) with multiple offshore jurisdictions which would allow firms using an Island-based disaster recovery service to operate under the same regulations as in their home jurisdictions, according to the Isle of Man Online.
Legislation has been passed with the aid of the Financial Supervision Commission, and it is said that the measures are the first of their type anywhere in the world.
The report quotes Tim Craine, director of e-business, as noting: “It was a perfect example of government working very closely with the private sector. There was an opportunity for the Isle of Man to become a world leader for disaster recovery if we could make it simple and easy for offshore companies to use.”
He added: “The FSC was happy to comply as long as the businesses using the service were subject to adequate supervision in their own jurisdictions, in order to protect the reputation of the Island.”
The initiative is to target offshore jurisdictions that may be vulnerable to natural disasters, such
The DDOS (distributed denial of service) attack is now a mainstream threat. It's always been a threat, but in some sense or other, it's been possible to pass it off. Either it happens to "those guys" and we aren't affected. Or, the nuisance factor of some hacker launching a DOS on our ISP is dealt with by waiting.
Or we do what the security folks do, which is to say we can't do anything about it, so we won't. Ignoring it is the security standard. I've been guilty of it myself, on both sides of the argument: Do X because it would help with DOS, to which the response is, forget it, it won't stop the DOS.
But DOS has changed. First to DDOS - distributed denial of service. And now, over the last year or two, the application of DDOS has become so well institutionalised that it is a frequent extortion tool.
Authorize.com, a processor of credit cards, suffered a sustained extortion attack last week. The company has taken a blow as merchants have deserted in droves. Of course, they need to, because they need their payments to keep their own businesses alive. This signals a shift in Internet attacks to a systemic phase - an attack on a payment processor is an attack on all its customers as well.
Hunting around for some experience, Gordon of KatzGlobal.com gave this list of motives for DDOS:
Great list, Gordon! He reports that extortion attacks run at about 1 in 100 of the normal attacks, and I'm not sure whether to be happier or more worried.
So what to do about DDOS? Well, the first thing that has to be addressed is the security mantra of "we can't do anything about it, so we'll ignore it." I think there are a few things to do.
Firstly, change the objective of security efforts from "stop it" to "not make it worse." That is, a security protocol, when employed, should work as well as the insecure alternative when under DOS conditions. And thus, the user of the security protocol should not feel the need to drop the security and go for the insecure alternative.
Perhaps better put as: a security protocol should be DOS-neutral.
Connection oriented security protocols have this drawback - SSL and SSH both add delay to the opening of their secure connection from client to server. Packet-oriented or request-response protocols should not, if they are capable of launching with all the crypto included in one packet. For example, OpenPGP mail sends one mail message, which is exactly the same as if it was a cleartext mail. (It's not even necessarily bigger, as OpenPGP mails are compressed.) OpenPGP is thus DOS-neutral.
This makes sense, as connection-oriented protocols are bad for security and reliability. About the best thing you can do if you are stuck with a connection-oriented architecture is to turn it immediately into a message-based architecture, so as to minimise the cost. And, from a DOS pov, it seems that this would bring about a more level playing field.
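As a sketch of what a message-based, DOS-neutral request might look like - the names and the shared-key scheme are my illustrative assumptions, not any particular protocol - everything the server needs arrives in one self-contained packet, and it verifies before doing any real work:

```python
import hmac, hashlib, json, time

SHARED_KEY = b"client-and-server-agreed-key"   # illustrative only

def make_request(account: str, action: str) -> bytes:
    # Identity, request and timestamp travel together in a single datagram.
    body = json.dumps({"account": account, "action": action,
                       "ts": int(time.time())}).encode()
    mac = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return body + b"\n" + mac.encode()

def handle_request(packet: bytes) -> bool:
    # No handshake state to hold open: verify first, do the work only if good.
    body, _, mac = packet.rpartition(b"\n")
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(mac, expected)

packet = make_request("alice", "pay 10 units to bob")
print(handle_request(packet))    # True
```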
A second thing to look at is to be DNS neutral. This means not being slavishly dependent on DNS to convert one's domain names (like www.financialcryptography.com) into the IP number (like 62.49.250.18). Old timers will point out that this still leaves one open to an IP-number-based attack, but that's just the escapism we are trying to avoid. Let's close up the holes and look at what's left.
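A minimal sketch of the DNS-neutral idea: keep a last-known-good address around, so a DNS outage or a spoofed answer doesn't take the application down with it. (The pinned address is just the example above; a real system would persist and sanity-check the cache.)

```python
import socket

PINNED = {"www.financialcryptography.com": "62.49.250.18"}

def resolve(host: str) -> str:
    try:
        addr = socket.gethostbyname(host)
        PINNED[host] = addr       # refresh the cache while DNS is healthy
        return addr
    except OSError:
        return PINNED[host]       # fall back to the last-known-good address

print(resolve("www.financialcryptography.com"))
```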
Finally, I suspect there is merit in make-work strategies in order to further level the playing field. Think of hashcash. Here, a client does some terribly boring and long calculation to find a hash with a particular rare pattern - say, a long run of leading zero bits. Because secure message digests look random, finding a hash that matches the pattern takes lots of work, while checking someone else's answer is cheap.
So how do we use this? Well, one way to flood a service is to send in lots of bogus requests. Each of these requests has to be processed fully, and if you are familiar with the way servers are moving, you'll understand the power needed for each request. More crypto, more database requests, that sort of thing. It's easy to flood them by generating junk requests, which is far easier to do than to generate a real request.
The solution then may be to put guards further out, and insist the client matches the server load, or exceeds it. Under DOS conditions, the guards far out on the net look at packets and return errors if the work factor is not sufficient. The error returned can include the current to-the-minute standard. A valid client with one honest request can spend a minute calculating. A DDOS attack will then have to do much the same - each minute, using the new work factor parameters.
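Here's a rough sketch of that guard, hashcash-style. The parameters are illustrative; the point is only that solving is expensive for the client while checking is nearly free for the guard:

```python
import hashlib, itertools

def solve(request: bytes, bits: int) -> int:
    # Client side: grind nonces until the hash starts with enough zero bits.
    target = "0" * (bits // 4)                   # compare in hex for simplicity
    for nonce in itertools.count():
        digest = hashlib.sha256(request + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def check(request: bytes, nonce: int, bits: int) -> bool:
    # Guard side: one hash to verify the client paid its work factor.
    digest = hashlib.sha256(request + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * (bits // 4))

req = b"pay 10 units to bob"
nonce = solve(req, bits=16)      # costly for the client...
print(check(req, nonce, bits=16))   # ...cheap for the guard: True
```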
Will it work? Who knows - until we try it. The reason I think of this is that it is the way many of our systems are already structured - a busy tight center, and a lot of smaller lazy guards out front. Think Akamai, think smart firewalls, think SSL boxes.
The essence though is to start thinking about it. It's time to change the mantra - DOS should be considered in the security equation, if only at the standards of neutrality. Just because we can't stop DOS, it is no longer acceptable to say we ignore it.
WebTrust is the organisation that sets auditing standards for certificate authorities. Its motto, "It's a matter of trust," is of course the marketing message they want you to absorb, and subject to skepticism. How deliciously ironic, then, that when you go to their site, click on Contact, you get redirected to another domain that uses the wrong certificate!
http://www.cpawebtrust.org/ is the immoral re-user of WebTrust's certificate. It's a presumption that the second domain belongs to the same organisation (the American Institute of Certified Public Accountants, AICPA), but the information in whois doesn't really clear that up due to conflicts and bad names.
What have WebTrust discovered? That certificates are messy, and are thus costly. This little cert mess is going to cost them a few thousand to sort out, in admin time, sign-off, etc etc. Luckily, they know how to do this, because they're in the business of auditing CAs, but they might also stop to consider that this cost is being asked of millions of small businesses, and this might be why certificate use is so low.
Paypal, the low-value credit card merchant processor masquerading as a digital currency, moved to bring its merchant base further into line with a new policy: fines for those who sell naughty stuff [1] [2]. Which, of course, is defined as the stuff that American regulators are vulnerable to, reflecting the pressure from competitive institutions duly forwarded to the upstart.
This time, it includes a new addition: cross-border pharmaceuticals that bust the US-FDA franchise. Paypal is the new bellwether of creative destruction, although strangely, no complaints by the RIAA as yet.
[1] PayPal to impose fines for breaking bans
[2] PayPal to Fine Gambling, Porn Sites
Just the other day, in discussing VeriSign's conflict of interest, I noted the absence of actual theft-inspired attacks on DNS. I spoke too soon - The Register now reports that the German eBay site was captured via DNS spoofing.
What makes this unusual is that DNS spoofing is not really a useful attack for professional thieves. The reason for this is cost: attacking the DNS roots and causing domains to switch across is technically easy, but it also brings the wrath of many BOFHs down on the heads of the thieves. This doesn't mean they'll be caught but it sure raises the odds.
In contrast, if a mail header is spoofed, who's gonna care? The user is too busy being a victim, and the bank is too busy dealing with support calls and trying to skip out on liability. The spam mail could have come from anywhere, and in many cases did. It's just not reasonable for the victims to go after the spoofers in this case.
It will be interesting to see who it is. One thing could be read from this attack - phishers are getting more brazen. Whether that means they are increasingly secure in their crime or whether the field is being crowded out by wannabe crooks remains to be seen.
Addendum 20040918: The Register reports that the Ebay domain hijacker was arrested and admitted to doing the DNS spoof. Reason:
"The 19 year-old says he didn't intend to do any harm and that it was 'just for fun'. He didn't believe the ploy was possible.
So, back to the status quo we go, and DNS attacks are not a theft-inspired attack. In celebration of the false alert to a potential change to the threats model, I've added a '?' to the title of this blog.
James Sherwood of ZDNet reports: "Some Web sites are now offering surfers the chance to download free "phishing kits" containing all the graphics, Web code and text required to construct the kind of bogus Web sites used in Internet phishing scams. "
According to security firm Sophos, the kits allow users to design sites that have the same look and feel as legitimate online banking sites that can then be used to defraud unsuspecting users by getting them to reveal the details of their financial accounts.
"By putting the necessary tools in the hands of amateurs, it's likely that the number of attacks will continue to rise," said Graham Cluley, senior technology consultant at Sophos.
Sophos warned that many of the kits also contain spamming software that enables potential fraudsters to send out thousands of phishing e-mails with direct links to their do-it-yourself fraud sites.
"The emergence of these 'build your own phish' kits means that anyone can now mimic bona fide banking Web sites and convince customers to disclose sensitive information such as passwords," Cluley said.
Many online banking Web sites now carry messages urging users not to open any e-mail that they suspect may be fraudulent and to telephone their bank for further information if they do receive suspicious e-mail.
Phishing has become such a problem that there are now several online antiphishing guides to educate users about the con artists' common tricks.
James Sherwood of ZDNet UK reported from London
Trends in the physical cash world - notes and coins issued by central banks - indicate that the CBs are moving to privatise the distribution and handling of cash float. The Federal Reserve has announced that it will no longer willingly (read: cheaply) take in surplus cash and ship it out on demand.
This makes a lot of sense, and what's more, it echoes the experiences of the DGC world, where back in 2000, the first independent market makers sprung into life and captured the bulk of the retail trading in digital gold. Leaving the issuers with the much more core job of looking after the tech, governing the issue and only doing occasional big movements of digital and metal.
I draw your attention to one aspect: if the CBs are getting out of the heavy end of carting cash around, I wonder if they are also posturing to get out of issuance altogether? It's not inconceivable - it's been permitted in NZ for a decade or more (and thus, is a plausible play for Australia as well), and the Federal Reserve has permitted all sorts of crazy experiments to go along. The Bank of England has been mildly supportive of the idea as well.
Who knows, check back in another decade.
by Ann All, editor * 13 August 2004
The Federal Reserve is poised to make some policy changes that will force many financial institutions to change the way they think about money.
In an effort to reduce its cash handling costs, the Fed has announced its intent to introduce a custodial inventory program which will encourage FIs to hold currency in their vaults rather than shipping it to the Fed.
In 2006, it also plans to begin imposing fees on depository institutions that deposit currency and order currency from Reserve Banks within the same week, a practice it calls cross shipping.
Morris Menasche, managing director of the Americas for Transoft International, a provider of cash management software and consulting services, said the proposed changes "will force practically every financial institution to look at its downstream supply of cash and figure out how they can consume more of their cash inventories."
"The Fed is saying 'enough is enough,'" said Bob Blacketer, director of consulting for Carreker Corporation, another provider of currency management software and consulting services. "It wants to get out of currency handling operations and focus more on policy making and risk management."
World view
The Fed's position is far from unique, Blacketer said. Central banks around the world are adopting a more privatized view of cash handling.
In Australia, the Reserve Bank has virtually exited the role of depository and distributor, leaving commercial banks fully accountable for cash on their balance sheets. As a result, three of the country's leading banks formed a shared utility called Cash Services Australia to provide currency transportation services for FIs.
In the United Kingdom, the Bank of England adopted a Note Circulation Scheme in which verified and sorted notes are segregated to specified NCS inventories, with banks receiving credit for balances placed in the NCS.
As a result, most British FIs began outsourcing cash handling operations or formed joint ventures with other FIs. Only one of Britain's largest banks continues to perform cash handling in-house, Blacketer said.
During 2002, U.S. Reserve Banks processed 34.2 billion notes at a total cost of approximately $342 million, according to the Fed. The number included 19.4 billion $5 through $20 bills -- nearly 6.7 billion of which were followed or preceded by orders of the same denomination by the same institution in the same business week.
Most cross shipping, "probably 75 to 80 percent" occurs at the nation's 100 largest depository institutions, Blacketer said.
Based on the 2002 data, the Fed estimates that it could avoid currency processing costs of up to $35 million a year by cutting down on cross shipping of $5 to $20 notes, the only denominations that would be initially included in the new policy.
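(A back-of-envelope check of those numbers - my arithmetic, not the Fed's:)

```python
cost_per_note = 342_000_000 / 34_200_000_000   # ~ $0.01 per note processed
cross_shipped = 6_700_000_000                  # $5-$20 notes cross-shipped in 2002
print(cross_shipped * cost_per_note)           # ~ $67m of processing attributable
                                               # to cross-shipping; the "up to $35
                                               # million" saving is roughly half
```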
The plan
The Fed's plan includes two parts. First, FIs will be allowed to transfer $5, $10 and $20 bills that they might otherwise cross-ship into custodial inventories. The currency will be owned by a Reserve Bank -- even though it will remain at an FI's facility.
The second part is a proposed penalty of $5 to $6 for each bundle of cross-shipped currency in the $5 to $20 denominations. FIs would not pay a penalty for the first 1,000 cross-shipped bundles in a particular zone or sub-zone each quarter.
According to the Fed, the exemption will limit the impact of the cross-shipping policy on institutions which may not be able to justify investments in sorting equipment, and will help FIs deal with unanticipated customer demands for cash.
To become eligible to hold a custodial inventory, an FI must commit to recirculating a significant amount of currency. Participating FIs also must have facilities large enough to segregate the currency from their own cash.
It's possible, said Blacketer, that some large banks with well developed cash handling infrastructures may be able to provide cash processing services for smaller FIs and other customers -- much as they have provided check processing services for years.
"Instead of a loss leader, they could break even or even make a small profit with their cash handling operations by providing cash products and services for customers like retailers, ISOs and credit unions," he said.
But Menasche said it may be difficult to eke profitability out of cash handling operations -- particularly if transportation costs are included.
"More than anything else, this is a logistics issue," he said. "It's easy to underestimate the costs of transporting cash. They could end up transporting the same cash three or four times."
The good news for cash management software providers like Transoft, Menasche said, is that the proposed changes are driving an increased interest in their products.
"Our decision support tools can help financial institutions assess cash processing and transportation costs, and show them when it may be cheaper to send cash back to the Fed and pay a penalty," he said. "If they allow those decisions to become subjective and decentralized, they could get into serious trouble."
The ATM effect
The tremendous growth of ATMs, from 200,000 machines in 1998 to some 370,000 machines today, has helped drive the increased demand for fit currency.
The Fed's proposed policy change could unduly impact FIs' ATM networks, particularly non-branch machines, according to Amy Dronzek, national manager of Cash Vault Services for KeyBank.
"Most cross shipping of currency in our industry results from the need for currency fit enough for automation, such as for ATMs. Large scale need for this type of currency requires automated fitness processing to be cost effective, historically proven more cost effective in a centralized versus a decentralized environment," Dronzek wrote in a letter submitted to the Fed.
Some FIs will have to invest in more currency sorting equipment to support their ATM networks, Dronzek wrote. The alternative will likely be paying higher fees to ATM service personnel.
"If the armored courier companies obtain currency from depository institutions, then they will increase ATM service fees for the additional handling of the currency that will be required," Dronzek wrote, "as Federal Reserve currency is viewed as 100 percent accurate due to the state-of-the-art, high-speed currency sorting equipment which many depository institutions will be unable to afford."
In KeyBank's comment letter, Dronzek urges the Fed to exempt ATMs from the new policy.
For more info on the Fed's proposal: http://www.federalreserve.gov/boarddocs/press/other/2003/20031008/attachment.pdf
And to read comments on the proposal: http://www.federalreserve.gov/generalinfo/foia/ProposedRegs.cfm
(posted under Federal Reserve Bank Currency Recirculation Policy)
"Enable depository institutions the opportunity for limited cross-shipping activity to support their ATM networks using a separate endpoint or other delimiter," she wrote. "This will minimize impact to the consumer by allowing institutions the opportunity to maintain existing ATM networks, especially those that are remote."
In its comment letter, Huntington Bank raised the possibility that "using recirculated money that does not meet strict fitness levels could cause ATM downtime or additional costs for emergency cash transportation."
Alternative approaches
Some FIs would like to see the Fed adopt an alternative approach.
In a comment letter, Greg Smith, a senior vice president at SunTrust Bank, encouraged the Fed to approach cash processing "in a similar fashion to check clearing and electronic payments types by helping to create a processing utility among the banks and armored carriers that would act as an intermediary between depository institutions and the Federal Reserve."
Jim Roemer, senior vice president of Cash Services for U.S. Bank, said in his letter that U.S. Bank is involved in discussions with other FIs to explore the idea of establishing a "cash clearing house," similar to Cash Services Australia.
"In order for the cash clearing house concept to be successful, the participating depository institutions will require some level of cooperation from the Federal Reserve," Roemer wrote.
In its comment letter, Wells Fargo also signals its intent to "proceed with the creation of a non-profit organization in conjunction with other financial institutions."
The Fed began a pilot of the custodial inventory program earlier this month, with 14 pilot sites and 10 participating depository institutions. According to a Fed spokesperson, the pilot program will run for six months, however "the clock will not begin until the last pilot is set up," likely in September.
Copyright 2004 NetWorld Alliance LLC. All rights reserved.
All forms of security are about cost/benefit and risk analysis. But people have trouble with the notion that something is only secure up to a certain point [1]. So suppliers often pretend that their product is totally secure, which leads to interesting schisms between the security department and the marketing department.
Secrecy is one essential tool in covering up the yawning gulf between the public's need to believe in absolute security, and the supplier's need to deliver a real product. Quite often, anything to do with security is kept secret. This is claimed to deliver more protection, but that protection, known as "security by obscurity," can lead to a false sense of security.
In my experience, another effect often occurs: Institutional cognitive dissonance surrounding the myth of absolute security leads to security paralysis. Not only is the system secure, by fiat, but any attempt to point out the flaws is treated somewhere between an affront and a crime. Then, when the break occurs, regardless of the many unheeded warnings, widespread shock spreads rapidly as beliefs are shattered.
Anyway, getting to the point: banks and other FIs rarely reveal how much security is built in, using real numbers. Below, the article reveals a dollar number for an attack on a PIN Entry Device (PED). For those in a hurry, skip down to the emboldened sections, halfway down.
[1] addendum: This article, Getting Naked for Big Brother amply makes this point.
Behold the modern automated teller machine, a tiny mechanical fortress in a world of soft targets. But even with all those video cameras, audit trails, and steel reinforced cash vaults, wily thieves armed with social engineering techniques and street technology are still making bank. Now the financial industry is working to close one more chink in the ATM's armor: the humble PIN pad.
Last year Visa International formally launched a 50-point security certification process for "PIN entry devices" (PEDs) on ATMs that accept Visa. The review is exhaustive: an independent laboratory opens up the PED and probes its innards; it examines the manufacturing process that produced the device; and it attacks the PED as an adversary might, monitoring it, for example, to ensure that no one can identify which buttons are being pressed by sound or electromagnetic emission. "If we are testing a product that is essentially compliant, we typically figure it's about a four week process," says Ken Kolstad, director of operations at California-based InfoGard, one of three certification labs approved by Visa International worldwide.
If that seems like a lot of trouble over a numeric keypad, you haven't cracked open an ATM lately. The modern PED is a physically and logically self contained tamper-resistant unit that encrypts a PIN within milliseconds of its entry, and within centimeters of the customer's fingertips. The plaintext PIN never leaves the unit, never travels over the bank network, isn't even available to the ATM's processor: malicious code running on a fully compromised Windows-based ATM machine might be able to access the cash dispenser and spit out twenties, but in theory it couldn't obtain a customer's unencrypted ATM code.
The credit card companies have played a large role in advancing the state of this obscure art. In addition to Visa's certification program, MasterCard has set a 1 April 2005 deadline for ATMs that accept its card to switch their PIN encryption from DES to the more secure Triple DES algorithm (some large networks negotiated a more lenient deadline of December 2005). But despite these efforts, the financial sector continues to suffer massive losses to increasingly sophisticated ATM fraud artists, who take home some $50m a year in the U.S. alone, according to estimates by the Electronic Funds Transfer Association (EFTA). To make these mega withdrawals, swindlers have developed a variety of methods for cloning or stealing victims' ATM and credit cards.
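For a feel of what the PED is actually doing, here is a minimal sketch of an ISO 9564 format-0 PIN block encrypted under Triple DES, using the pycryptodome package. The key, PIN and PAN are made-up test values; a real PED holds its key inside tamper-resistant hardware and never lets the plaintext PIN out:

```python
from Crypto.Cipher import DES3

def iso0_pin_block(pin: str, pan: str) -> bytes:
    # PIN field: "0" + PIN length + PIN digits, padded to 16 hex chars with F.
    pin_field = f"0{len(pin):X}{pin}".ljust(16, "F")
    # PAN field: "0000" + the 12 digits to the left of the check digit.
    pan_field = "0000" + pan[:-1][-12:]
    return bytes(a ^ b for a, b in zip(bytes.fromhex(pin_field),
                                       bytes.fromhex(pan_field)))

key = bytes.fromhex("0123456789ABCDEFFEDCBA9876543210")   # test 2-key 3DES key
block = iso0_pin_block("1234", "4000001234567899")
cipher = DES3.new(key, DES3.MODE_ECB)
print(cipher.encrypt(block).hex().upper())   # only this ciphertext leaves the PED
```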
Some techniques are low-tech. In one scam that Visa says is on the rise, a thief inserts a specially-constructed sleeve in an ATM's card reader that physically captures the customer's card. The con artist then lingers near the machine and watches as the frustrated cardholder tries to get his card back by entering his PIN. When the customer walks away, the crook removes the sleeve with the card in it, and makes a withdrawal.
At the more sophisticated end, police in Hong Kong and Brazil have found ATMs affixed with a hidden magstripe reader attached to the mouth of the machine's real reader, expertly designed to look like part of the machine. The rogue reader skims each customer's card as it slides in. To get the PIN for the card, swindlers have used a wireless pinhole camera hidden in a pamphlet holder and trained on the PED, or fake PIN pads affixed over the real thing that store the keystrokes without interfering with the ATM's normal operation. "They'll create a phony card later and use that PIN," says Kurt Helwig, executive director of the EFTA. "They're getting pretty sophisticated on the hardware side, which is where the problem has been."
Visa's certification requirements try to address that hardware assisted fraud. Under the company's standards, each PED must provide "a means to deter the visual observation of PIN values as they are being entered by the cardholder". And the devices must be sufficiently resistant to physical penetration so that opening one up and bugging it would either cause obvious external damage, cost a thief at least $25,000, or require that the crook take the PIN pad home with him for at least 10 hours to carry out the modification.
"There are some mechanisms in place that help protect against some of these attacks... but there's no absolute security," says InfoGard's Kolstad. "We're doing the best we can to protect against it."
That balancing approach - accounting for the costs of cracking security, instead of aspiring to be unbreakable - runs the length and breadth of Visa's PED security standards. Under one requirement, any electronics utilizing the encryption key must be confined to a single integrated circuit with a geometry of one micron or less, or be encased in Stycast epoxy. Another requirement posits an attacker with a stolen PED, a cloned ATM card, and knowledge of the cyphertext PIN for that card. To be compliant, the PED must contain some mechanism to prevent this notional villain from brute forcing the PIN with an array of computer-controlled solenoid fingers programmed to try all possible codes while monitoring the output of the PED for the known cyphertext.
"In fact, these things are quite reasonable," says Hansup Kwon, CEO of Tranax Technologies, an ATM company that submitted three PEDs for approval to InfoGard. Before its PIN pads could be certified, Tranax had the change the design of the keycaps to eliminate nooks and crannies in which someone might hide a device capable of intercepting a cardholder's keystrokes. "We had to make the keypad completely visible from the outside, so if somebody attacks in between, it's complete visible," says Kwon.
Where Visa went wrong, Kwon says, is in setting an unrealistic timetable for certification. When Visa launched the independent testing program last November, it set a 1 July deadline: any ATMs rolling into service after that date would have to have laboratory certified PIN pads, or they simply couldn't accept Visa cards.
That put equipment makers in a tight spot, says Kwon. "It's almost a six months long process... If you make any design modification, it takes a minimum of three months or more to implement these changes," he says. "So there was not enough time to implement these things to meet the Visa deadline."
Visa International's official position is that they gave manufacturers plenty of time - 1 July saw 31 manufacturers with 105 PIN pads listed on the company's webpage of approved PEDs. But in late June, with the deadline less than a week away, Visa suddenly dropped the certification deadline altogether. "I think what we realized was that it was important to work with the other industry players," says spokesperson Sabine Middlemass.
Visa says it's now working with rival MasterCard to develop an industry wide standard before setting a new deadline for mandatory compliance. In the meantime, the company is encouraging vendors to submit their PIN pads for certification under the old requirements anyway, voluntarily, for the sake of security.
Copyright © 2004, SecurityFocus (http://www.securityfocus.com/)
Ever since the BA crash in the early 90s, when an engine failed on takeoff, and the pilots shut down the wrong one from instrument confusion, mobile phones have been banned on British aircraft, and other countries more or less followed suit. Cell phones (mobiles, as they are called in many countries) were blamed initially, and as some say, it's guilty until proven innocent in air safety.
Now there is talk of allowing them again [1] [2]. They should never have been banned in the first place. Here's why.
(As a security engineer, it's often instructive to reverse-engineer the security decisions of other people's systems. Security is like economics, we don't get to try out our hypotheses except in real life. So we have to practice where we can. Here is a security-based analysis on whether it's safe to fly and dial.)
In security, we need a valid threat. Imagined threats are a waste of time and money. Once we identify and validate the threat (normally, by the damage it does) we create a regime to protect against it. Then, we conduct some sort of test to show that the protection works. Otherwise, we are again wasting our time and money. We would be negligent, as it were, because we are wasting the client's money and potentially worse if we get it wrong.
Now consider pocket phones. It's pretty easy to see they are an imagined threat - there is no validated case [3]. But skip that part and consider the protection - banning mobile phones.
Does it work? Hell no. If you have a 747 full of people, what is the statistical likelihood of people leaving their phone on accidentally? Quite significant, really. Enough that there is going to be a continual, ever present threat of transmissions. Inescapably, mobile phones are on when the plane takes off and lands - through sheer accidental activity.
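A back-of-envelope calculation makes the point; the 2% forgetfulness rate is my assumption, purely illustrative:

```python
passengers = 400                    # roughly a full 747
p_left_on = 0.02                    # chance any one passenger forgets to switch off
p_at_least_one = 1 - (1 - p_left_on) ** passengers
print(round(p_at_least_one, 4))     # ~ 0.9997: effectively every single flight
```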
In real safety systems, asking people not to do it is stupid. If it has to be stopped, it has to be stopped proactively. Which means one of three things:
If planes are vulnerable, then the operators have to respond. As they haven't responded, we can easily conclude that the planes are not vulnerable. If it turns out that they are vulnerable, then instead of the warnings being justified as some might have it, we have a different situation:
The operators would be negligent. Grossly and criminally, probably, as if a plane were to go down through cell phone interference, saying "but we said 'turn it off'" simply doesn't cut the mustard.
So, presumably, planes are not vulnerable to cell phones.
PS: so why did operators ban phones? Two reasons that I know of. In the US, there were complaints that the fast moving phones were confusing the cells. Also, the imminent roll-out of in-flight phones in many airlines was known to be a dead duck if passengers could use their cellphones...
[1] To talk or not to talk, Rob Bamforth
http://www.theregister.co.uk/2004/08/09/in_flight_comms/
[2] Miracles and Wonders By Alan Cabal
http://www.nypress.com/17/30/news&columns/AlanCabal.cfm
[3] This extraordinarily flawed security analysis leaves one gaping... but it does show that if a cellphone is blasting away 30cm from flight deck equipment, there might be a problem.
http://www.caa.co.uk/docs/33/FOD200317web.pdf
The Register article below, "America - a nation of corporate email snoops", reports on the trend in email snooping by US corporates. I'll spare you the trouble of reading it - 44% of large companies pay someone to monitor email, and 38% regularly audit the content.
In the search for the eavesdropper, it was always clear that this was a real threat. A small one, but a real one. Unfortunately, the entire crypto industry got distracted on protecting against another threat, the MITM, which was too difficult and obscure to be real. Consequently, the net community fielded systems that didn't really work because of their grossly costly rollouts, and eavesdropping wasn't covered in any real sense (1% of servers use SSL, and 2% of email is encrypted, after a decade of trying).
Since the dawn of Internet crypto time, we've now gone from eavesdropping as a small threat to a potentially large threat. What is really worrying is not so much the corporate eavesdropping, but that we are on the verge of seeing massive ISP-based eavesdropping. All to be reported with a shrug and smile. All because Internet security experts are convinced that the MITM is a threat.
By John Leyden
Published Tuesday 27th July 2004 17:16 GMT
Forget Big Brother, US conglomerates are paying low-tech snoopers to read workers' emails.
According to research from Forrester Consulting, 44 per cent of large US companies (20,000 workers and above) pay someone to monitor the firm's outgoing mail, with 38 per cent regularly auditing email content. According to the study - reported without question in the mainstream press - companies' motivation was mostly due to fears that employees were leaking confidential memos.
Proof, were it needed, that your own staff are the biggest security risk. If the study is to be believed, the dystopian visions of films such as Brazil and George Orwell's 1984 are an everyday reality of today's corporate America. Yes, that's right: "privacy officers" are scouring your email looking for incriminating snippets among the flirtatious email, jokes exchanged between mates and the small amount of work-related stuff you might send during the course of the day.
Scary stuff. And we're asked to believe they are often doing this with little recourse to technology. Even scarier.
Paranoid Android
Joking aside, the 44 per cent figure on corporate snoops struck us as very high. So we got in touch with Forrester asking it to justify its conclusions. Forrester directed our enquiries towards Proofpoint, the email filtering firm which sponsored the research. Forrester Consulting, the custom research arm of Forrester Research, did the leg work for the survey but it was Proofpoint which wrote up the final report.
So how does Proofpoint explain its findings on email monitoring? It's all to do with complying with external regulations.
A wide variety of external regulations applying to email are driving the monitoring trend, according to Keith Crosley, director of corporate communications at Proofpoint. He cited US regulations such as HIPAA (which regulates the handling of personal health information) and Gramm-Leach-Bliley (which regulates the handling of private personal and financial information) as examples.
"It's because of these concerns that companies employ staff to monitor outbound email. Technology solutions for detecting confidential information or for detecting other breaches of email policy or external regulations have, to date, not been particularly effective or popular so the best recourse that companies have has been to have human beings monitor email," he said.
Proofpoint's angle here is that its anti-spam technology can be used as a way of ensuring that outbound emails comply with government regulations. "We believe that companies will, over time, turn to technology to help enforce their internal policies," said Crosley.
The (email) Conversation
If low-tech snooping is currently so widespread, could Proofpoint name a company which is paying someone specifically to check emails? We'd welcome the chance to have a chat to a modern day Harry Caul (the lead character played by Gene Hackman in 70s classic The Conversation) but sadly we're out of luck.
"We have come into contact with numerous companies that employ staff (even full time staff) to monitor or audit outbound email, but I don't have a company name that you could use," said Crosley. "Because of this 'anecdotal' information, I can say that the results of the survey didn't really surprise us. But as you might imagine, most companies are not willing to talk openly about the use of these sorts of techniques even though they are completely legal in the US."
"To people not familiar with this issue, however, the number does seem astonishing. But our findings on other points are not out of line with other recent email related research. In a somewhat similar survey conducted by the ePolicy Institute, which found that about 60 per cent of companies use some sort of technology to monitor incoming and outgoing email."
Readers can review Proofpoint's survey here. ®
Related stories
netReplay is watching you
Google's Gmail: spook heaven?
US defends cybercrime treaty
Security fears over UK 'snooper's charter'
Merrill Lynch shackles employee Net access
Privacy in the workplace is a 'myth'
Naming and shaming was at its finest last night as the Big Brother awards were presented to Britain's worst by Privacy International. The winners included British Gas, the US VISIT fingerprinting programme, and the British Minister for Children, see the articles below for details.
It's hard to measure conclusive success for these awards, as no doubt the winners will pretend to shrug their shoulders and carry on their evil works. But it seems to give some pause for thought: I bumped into a couple of people there who were potential winners, and even though not directly addressed by this year's lists, there was an almost masochistic sense of wanting to see and experience that which was beyond the pale. So I'd conclude that there is a significant knock-on effect - companies know of and are scared of the awards.
It occurs to me that there is room for an award or two in our field. Maybe not FC, which is too small and fragmented as a discipline ... but certainly in the application of cryptography itself.
Negative awards could include
Positive awards also should be given. I'd suggest:
I'm sure there are lots of other ideas. What we need is a credible but independent crypto / infosec body to mount and hand out such an award. Any takers?
Here's a couple of articles on the awards, FTR:
http://news.google.com/url?ntc=5M4B0&q=http://www.theregister.co.uk/2004/07/29/big_brother_awards/
http://management.silicon.com/government/0,39024677,39122716,00.htm
Over in the US of A, the mutual funds scandal continues to rumble on. In this case, a new article "Regulators overstep with mutual fund trustees" brings out one of the conundrums in the structure that is in place. In short, the Bank of America, as part of its $375 million settlement with Eliot Spitzer's office, also decided to sack the Trustees of its funds.
Now people are wondering how it is possible for a manager to sack the Trustee. As the trade group wrote:
"By accepting the proposition that Bank of America has the capacity to settle charges by causing the replacement of mutual fund trustees, the Commission would be suggesting that Bank of America, not the trustees, controls the funds, ..."
Well, it's a good point, isn't it! Reading the article in full and considering every action and event, it is clear that Bank of America, the manager, and the regulators, New York Attorney General and the SEC, have all quite happily ignored the Trustees. Until it is time to sack them, of course.
Trusts are nominally owned by their Trustees, and the Trustee is in charge of all management and governance decisions. He can't be sacked by the manager, but in the Trust documents, there is often provision for a Trust Protector. This person can generally sack the Trustee, and do all sorts of other things, as laid out in the Trust document. Or, at the least, there will be a way to sort out these issues, if only because Trustees are often old, and check out of their own accord.
So, skipping that storm in a teacup, what was the substance of the Trustees' claim to control the management and to guard the funds? Here it is:
".... In a meeting in May 2002, [the Trustees] voted to take action against short-term traders, called market timers. These traders buy and sell shares of foreign funds overnight, to lock in quick profits at the expense of other investors. The board instituted a 2 percent penalty fee for anyone holding shares less than 90 days, in order to dilute any profits from timing.
"But at that same meeting, according to Spitzer's allegations, Nations Fund managers persuaded the trustees to give one elite client a free ride. According to the minutes of the meeting reviewed by Spitzer's investigators, Bank of America wanted to let a hedge fund, Canary Capital Partners, market-time at will, without paying the fees. The eight trustees present at the meeting (two missed the session) agreed to the arrangement, Spitzer said.
Whoops-a-daisy! The Trustees gave a free pass to Canary Capital, which was the market timer that started the whole schemozzle. So the more likely future for the Trustees is that they have to face criminal or civil proceedings for all that money that they let slip by. It may be that being sacked when they can't be sacked is the best thing that ever happened to them.
What I don't understand about this part is why people expect these appointments to actually work. Of course, if you put a Trustee in place, he might honestly do his job to look after the assets. But, more than likely, he won't. Why should he?
The only reason he would do the job properly is if he was monitored. Nobody was monitoring these guys, and the Managers were allegedly conspiring with them to let the fraud continue. Well, of course! There was too much money involved not to try and steal it.
Which is why when I designed the 5PM I started out with the principle that nobody does the job unless monitored. And nobody monitors like the owner (forget auditors, they're just more highly paid versions of the Trustees). So the most important person in the 5PM is the 5th party - the user. Or, the owner, as they like to call themselves. And the most important part of the structure is not who is doing what, but how they tell the user what they are doing.
And monitor they do - when given a chance, and when given some explanation of how they are the only ones really standing between their assets and the crooks that are pretending to guard them.
Addendum 2004.09.06 SEC scrutinizes trustees of mutual funds
I must have written a hundred times that privacy enhancing payments are only private to a degree. From eCash to Ricardo to e-gold to PayPal to greenbacks to ... well, all of them can be tracked somewhat. They help protect honest people from bad trackers, but they don't protect real serious criminals from serious sustained levels of tracking.
In a sort of Darwin award for criminals, here's the story of a major heist that went wrong, because the crooks believed they could use a privacy-based payment system to get their ill-gotten gains. Oh well, no great loss to society here.
http://www.crime-research.org/news/13.07.2004/485/
(one section quoted from the interview)
What kinds of Internet crimes are the most dangerous at the moment? Can you explain them? What can you say about blackmail?

Spread of computer viruses, plastic card frauds, theft of money from bank accounts, theft of computer information and violation of operation rules of computer systems are among the most dangerous offences on the Internet. In order to get one million UAH (about $185-190 thousand), certain people phoned the director of Odessa Airport, Ukraine, and informed him that they had placed an explosive device on board a plane flying to Vienna, and that they had also blown up a bomb in the building opposite the airport building to confirm their serious intentions.
The Security Service of Ukraine and the Air Security Office were informed of the accident right away. Criminals put detailed instructions on fulfillment of their requirements on the Internet. The main demand was one million of UAH. Criminals planned to use Privatbank's "Privat-24" Internet payment system to get the money. The useful feature for criminals in that case was that this system allowed anonymous creation and control over an account knowing only login and password. Therefore they used the Internet to secure anonymous and remote distribution of their threats and receipt of money.
Besides typical operational measures, there was a need to operationally establish data on technical information in computer networks as criminals used the Internet at all stages of their criminal offence. The Security Service decided to engage experts of a unit on fighting crimes in the sphere of high technologies at the Ministry of Internal Affairs. They were committed to establish senders of threat e-mails and the initiators of bank payments.
The response of ISP and the information they provided helped to determine phone numbers and addresses related to criminals, and also allowed to get firm evidences stored in log data bases of ISP and Privatbank.
Logs allowed to find out IP addresses of computers, e-mails and phones that helped to review concrete computers at the scenes.
The chronicle of events proves that prompt and qualified aid, provided by the unit on fighting crimes in the sphere of high technologies at the Ministry of Internal Affairs in January 2002, to officers of departments fighting terrorist and protecting state organization at Security Service allowed to reveal a criminal group, to prevent their criminal activity, and thus to give due to cyber terrorists.
Will K has alerted me to this: Over in the Jabber community, the long-awaited arrival of opportunistic, ad hoc cryptography has spawned a really simple protocol to use OpenPGP messages over chat. It's so simple, you can see everything you want in this piece of XML:
<message to='reatmon@jabber.org/jarl' from='pgmillard@jabber.org/wj_dev2'>
  <body>This message is encrypted.</body>
  <x xmlns='jabber:x:encrypted'>
    qANQR1DBwU4DX7jmYZnncmUQB/9KuKBddzQH+tZ1ZywKK0yHKnq57kWq+RFtQdCJ
    [snip]
    Oin0vDOhW7aC
    =CvnG
  </x>
</message>
Well, life doesn't get much simpler. So far it is lacking a key exchange format, but hey, here's one I knocked up:
<message to='reatmon@jabber.org/jarl' from='pgmillard@jabber.org/wj_dev2'>
  <body>This message includes sender's public key.</body>
  <x xmlns='jabber:x:publickey'>
    mQGiBDjddDERBADBnXl2kf4gLFNSLs4BRt/tt/ilv+wnC5HFgSpKyUk/ja2uKJ2C
    [snip]
    E7U1RhQBQTI=
    =ze8A
  </x>
</message>
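For concreteness, here's a rough sketch of how a client might assemble such a stanza. The element names come straight from the examples above; the armoured blob and the JIDs are placeholders, and a real client would get the ciphertext from an OpenPGP implementation rather than a string constant.

    # Sketch only: build a jabber:x:encrypted stanza like the example above.
    # The armoured blob is a placeholder, not real OpenPGP output.
    import xml.etree.ElementTree as ET

    def encrypted_stanza(to_jid, from_jid, armoured_ciphertext):
        msg = ET.Element('message', {'to': to_jid, 'from': from_jid})
        ET.SubElement(msg, 'body').text = 'This message is encrypted.'
        x = ET.SubElement(msg, 'x', {'xmlns': 'jabber:x:encrypted'})
        x.text = armoured_ciphertext   # base64 PGP message, armour headers stripped
        return ET.tostring(msg, encoding='unicode')

    print(encrypted_stanza('reatmon@jabber.org/jarl',
                           'pgmillard@jabber.org/wj_dev2',
                           'qANQR1DBwU4D...placeholder...=CvnG'))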
Ye gods, it is so simple, it's embarrassing. Here's the wishlist for a portable, popular IM protocol for secure chat:
Hey presto, a secure chat. Yeah, sure, those nasty ugly bogeymen, the MITMs, could sneak in and do something nasty, but when that happens (don't hold your breath), we can worry about that (hint: web of trust is easy to add...).
One thing wasn't clear - the JEP document above indicates that Presence messages were signed. Anybody got a clue as to why anyone would sign a presence message?
An interesting article about how a manager lost his job when an instant messaging virus sent his entire recorded conversations to his buddy list. This brings out the ticking time bomb that is the archive of all one's conversations. I've always treated chat and IM as just that - idle chat, and don't record those comments please!
Yet I seem to be a minority - I know lots of people who record their every message. And they've got good reasons, which makes me feel even worse. This permanent archiving kind of takes the spontaneity out of it, and it certainly makes it un-chat-like. How does one feel about teasing around with one's partner, as one does, when the threat of a divorce case in 4 years time brings out how cruel you were? Or, idle musings on the safety features of a product, in an open whiteboard fashion, get dragged into liability suits?
I don't think there is an easy answer to this dilemma. The medium of chat is as it is, and no amount of wishing for that personal, forgiving experience can make the chat archiving devil vanish.
But there are some things that could be done. Here's one idea - perhaps a chat client could present a policy button with a buddy connection. For example, there might be different buttons for partner/confidential, business/confidential, negotiation/confidential and client/attorney privilege.
The first might extend husband-wife protection, with an intent of making the chat not useable against each party in a court of law. The second might create an internally confidential status, so that any forwarding could be warned against. The third would extend confidentiality to the documents such that they couldn't be used in a dispute (this is a trap for young players that I fell into). And the fourth could make the discussions inaccessible to aggressive plaintiffs. (And, we'd of course need another button for "write your own." Not that many people would of course.)
To support these buttons, there would need to be a textual understanding. A contract, lawyers would call it. If we both selected the same contract, then we'd agreed up front to its provisions. In legal terms, this gives us some protection - the courts generally agree with what you said up front.
There are limits of course - for example contract protection gets weaker if criminal proceedings are being undertaken. In which case, if you murder your spouse, you shouldn't expect the marital confidentiality button to save you. Also, even if the words can't be presented in court, there might be a lot of value in just reading them.
But it could bring back enough peace of mind to enable IM to get chatty again. What say you? Select your chat confidence policy and IM me with comments....
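To make the button idea a little more concrete, here's a rough sketch of how a client might represent these policies and check that both sides pressed the same one before treating the session as covered. All the names and contract texts below are illustrative only.

    # Illustrative sketch: per-buddy chat confidentiality policies, and a
    # check that both parties agreed to the same contract text up front.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ChatPolicy:
        name: str
        contract_text: str   # the up-front agreement both parties accept

    POLICIES = {
        'partner/confidential': ChatPolicy('partner/confidential',
            'Neither party may use this conversation against the other in court.'),
        'business/confidential': ChatPolicy('business/confidential',
            'Internal use only; forwarding outside the organisation needs consent.'),
        'negotiation/confidential': ChatPolicy('negotiation/confidential',
            'Statements made here may not be relied upon in any later dispute.'),
        'client/attorney': ChatPolicy('client/attorney',
            'This conversation is covered by legal professional privilege.'),
    }

    def session_covered(my_choice, their_choice):
        # Coverage applies only if both parties pressed the same button,
        # i.e. agreed to the identical contract text up front.
        return my_choice == their_choice and my_choice in POLICIES

    assert session_covered('partner/confidential', 'partner/confidential')
    assert not session_covered('partner/confidential', 'business/confidential')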
A question I posed on the cryptography mailing list: The phishing thing has now reached the mainstream, epidemic proportions that were feared and predicted in this list over the last year or two. Many of the "solution providers" are bailing in with ill-thought-out tools, presumably in the hope of cashing in on a buying splurge, and hoping to turn the result into lucrative cash flows.
Sorry, the question's embedded further down...
In other news, Verisign just bailed in with a service offering [1]. This is quite cunning, as they have offered the service primarily as a spam protection service, with a nod to phishing. In this way they have something, a toe in the water, but they avoid the embarrassing questions about whatever happened to the last security solution they sold.
Meanwhile, the security field has been deathly silent. (I recently had someone from the security industry authoritatively tell me phishing wasn't a problem ... because the local plod said he couldn't find any!)
Here's my question - is anyone in the security field of any sort of repute being asked about phishing, consulted about solutions, contracted to build? Anything?
Or, are security professionals as a body being totally ignored in the first major financial attack that belongs totally to the Internet?
What I'm thinking of here is Scott's warning of last year [2]:
Subject: Re: Maybe It's Snake Oil All the Way Down
At 08:32 PM 5/31/03 -0400, Scott wrote:
...
>When I drill down on the many pontifications made by computer
>security and cryptography experts all I find is given wisdom. Maybe
>the reason that folks roll their own is because as far as they can see
>that's what everyone does. Roll your own then whip out your dick and
>start swinging around just like the experts.
I think we have that situation. For the first time we are facing a real, difficult security problem. And the security experts have shot their wad.
Comments?
iang
[1] Lynn Wheeler's links below if anyone is interested:
VeriSign Joins The Fight Against Online Fraud
http://www.informationweek.com/story/showArticle.jhtml;jsessionid=25FLNINV0L5DCQSNDBCCKHQ?articleID=22102218
http://www.infoworld.com/article/04/06/28/HNverisignantiphishing_1.html
http://zdnet.com.com/2100-1105_2-5250010.html
http://news.com.com/VeriSign+unveils+e-mail+protection+service/2100-7355_3-5250010.html?part=rss&tag=5250010&subj=news.7355.5
[2] sorry, the original email I couldn't find, but here's the thread, rooted at:
http://www.mail-archive.com/cpunks@minder.net/msg01435.html
Nightblade [1], a coder on p2p networks, wrote about the "tragedy of the commons" and how it tended to destroy systems. His words were too dangerous, as now the site is down. I've scraped the google cache, below [2].
Tragedy of the Commons is something we've all known about [3]. I recall Bob H going on about it all the time. The solution is simple - property where there is scarcity. Unfortunately, property requires rights allocation, and payments, and methods of exchange.
Now, we've built all these things, but those building the p2p systems have not. Instead of going that distance - admittedly way beyond the expectation - comments in reply to his problem description (unpreserved) have stressed that there is a social element to this and if only the right social engineering approach can be found, it will all work. Which of course will not work, but it's a timely reminder of the sort of soft social thinking that has spread from the legacy of GPL.
Then, in our world, we've built the superlative rights allocation mechanisms. We've created strong payment systems (the field is littered with these), and we've done the exchange mechanisms. I've seen these things in operation, and when they are humming, the world moves dangerously quicker. Yet our application side has never carried it far enough.
So we have two worlds. One world knows how to build p2p. The other world knows how to solve the resource allocation problem. The question is, when do the worlds start colliding?
This was the central message of FC7 [4]: Don't underestimate the complexity of the worlds. It is a multidisciplinary problem, which means there is a huge need to reach across to the other disciplines (I identified 7 core areas, and that's a lot to deal with. In brief, p2p chat would be a layer 7 application, and the above rights, payments, exchange are all layers 1 through 6.)
But FC7 was widely ignored, and in my discussions on many (other) issues, I've come to the conclusion that we as a species don't like multidisciplinary concepts. We want the whole solution to live in our space of expertise. We get all woolly and nervous when we're asked to delve into unfamiliar territory, as seen by the desire to cast the tragedy of the commons into the familiar "social" context of the GPL crowd.
Our Internet Commons of Ideas and Potential Applications seems hardly in danger of over-grazing. More, we have a group of deaf dumb and blind developers sitting at the edge of the commons, occasionally falling over dead from starvation, while in the middle there is good grazing.
[1] Nightblade - dead link:
http://www.i2p.net/user/view/13?PHPSESSID=077353367d123d3309d79f4837895bee
[2] Google cache
http://216.239.59.104/search?q=cache:5T_lYTAG1HIJ:www.i2p.net/node/view/215+I2P+P2P+IIP+chat&hl=en&ie=UTF-8
[3] Tragedy of the commons
http://en.wikipedia.org/wiki/Tragedy_of_the_commons
[4] Financial Cryptography in 7 layers
http://iang.org/papers/fc7.html
Submitted by Nightblade on May 16, 2004 - 19:35.
This is something I have been thinking about for a long time. I believe it is one of the main reasons P2P networks fail. (Another related theory is called the "prisoner's dilemma")
Here is an example from my own experiences:
A long time ago, I had a freesite on Freenet. Initially I was able to insert the site with no difficulty. However, as the network conditions grew worse over the weeks and months I found it more and more difficult to insert new editions of my site.
When I had inserted the first editions of my site, I used HTL values like 10 to 13 and only inserted the site once. I figured that would be enough to make it so people could see it, and they could. However as the network degraded I had to increase my HTL value to the maximum of 25, and perform multiple inserts over several hours before my site became viewable to other Freenet users. Part of this was prompted by Freenet coding bugs, but I think part of it was also caused by "overcrowding" - too much data was being inserted, causing sites to be lost too quickly as Freenet became more popular.
At the same time, I imagine other freesite authors were doing the exact same thing as I was, increasing their HTL and doing multiple inserts, possibly from multiple locations. While in the short term this solved the problem, it eventually turned into an "arms race." I remember just before I gave up on Freenet I was spending the entire day - 12 hours or more - continually ramming my site into the network with FIW, hoping it would work!
In addition to this madness there was Frost, which was a very heavy user (some would say abuser) of Freenet network resources. The end result was that the "commons" of Freenet - bandwidth and data stores, were overused and became nonfunctional.
Lest someone say that such problems in Freenet were caused by Freenet itself (i.e. bugs) rather than commons-overuse, let me give another example.
Take for instance Gnutella. At one time I shared about half my hard drive on Gnutella, but over the months and years I used Gnutella I got pissed off at all the "freeloaders" (people who download but don't share much), and I started reducing the number of files I shared. Other Gnutella users have done the same thing as I did. Now when I go on to Gnutella (or any other popular file sharing network) I find it very difficult to find anyone sharing files. Nobody shares anymore, they just leech. This is because the "commons" - bandwidth - was "overgrazed" by a few people, and those who were sharing stopped and began to leech instead, thus destroying the commons of Gnutella.
I also see this same problem potentially occurring in I2P.
At the moment, everyone is friendly and does not overuse the commons (bandwidth), but what will happen when I2P becomes more popular? Those without such altruistic behaviour as ourselves will begin wasting valuable I2P bandwidth.
Define "waste", you say? Suppose I develop a filesharing application which runs on I2P. People use my application to pirate the latest software, for example, Microsoft Office. Suddenly I2P becomes painfully slow. What do we do? We do not know which tunnels carry pirated software, because they are encrypted and anonymous. We can only sit and hope for the software pirates to become bored and go somewhere else.
Perhaps you do not find software piracy a wasteful activity.... then instead consider a government agency which creates multiple I2P routers and sends millions of gigabytes of garbage data across the network (WAV files of white noise, for example). That would have the same effect as the pirates would.
What is the solution to the "Tragedy of the Commons Attack?" This is what I have been thinking about, and so far I have come up with some solutions - private members-only networks in the style of WASTE seem to be the most promising.
What do you think?
Anecdotally, it seems that now that Europe has a world-class currency, it has attracted world-class forgers [1]. Perhaps catching the Issuer by surprise, the ECB and its satellites are facing significant efforts at injecting false currency. Oddly enough, the Euro note is very nice, and hard to forge. Which makes the claim that only the special note departments in the central banks can tell the forgeries quite a surprise.
Still, these are tiny amounts. I've heard estimates that 30% of the dollar issue washing around the poorer regions is forged, so it doesn't seem as though the gnomes in Frankfurt have much to complain about as yet.
And, it has to be taken as an accolade of sorts. If a currency is good, it is worth forging. Something we discovered in the digital cash world was that attacks against the system don't start until the issuer has moved about a million worth in float, and has a thousand or so users. Until then, the crooks haven't got the mass to be able to hide in; with some of these smaller systems it's the case that "everyone knows everyone", and that's a bit limiting if you are trying to fence some value through the market makers.
[1] http://www.dw-world.de/english/0,3367,1431_A_1244689_1_A,00.html
As well as the FT review, in a further sign that phishing is on track to being a serious threat to the Internet, Google yesterday covered phishing on the front page. 37 articles in one day didn't make a top story, but all signs are pointing to increasing paranoia. If you "search news" then you get about 932 stories.
It's mainstream news. Which is in itself an indictment of the failure of Internet security, a field that continues to reject phishing as a threat.
Let's recap. There are about three lines of potential defense. The user's mailer, the user's browser, and the user herself.
(We can pretty much rule out the server, because that's bypassed by this MITM; some joy would be experienced using IP number tracking, but that can be bypassed and it may be more trouble than it's worth.... We can also pretty much rule out authentication, as scammers that steal hundreds of thousands have no trouble stealing keys. Also in the bit bucket are the various strong URL schemes, which would help, but only when they have reached critical mass. No hope there.)
In turn, here's what they can do against phishing:
The user's mailer can only do so much here - its job is to take emails, and that's what it does. It has no way of knowing that an email is a phishing attack, especially if the email carefully copies a real one. Bayesian filters might help here, but they are also testable, and they can be beaten by the committed attacker - which a phisher is. Spam tech is not the right way of thinking, because if one spam slips through, we don't mind. In contrast, if a phish slips through, we care a lot (especially if we believe the 5% hit rate).
Likewise, the user is not so savvy. Most of the users are going to have trouble picking the difference between a real email and a fake one, or a real site and a fake one. It doesn't help that even real emails and sites have lots of subtle problems with them that will cause confusion. So I'd suggest that relying on the user as a way to deal with this is a loser. The more checking the better, and the more knowledge the better, but this isn't going to address the problem.
This leaves the browser. Luckily, in any important relationship, the browser knows, or can know, some things about that relationship. How many times visited, what things done there, etc etc. All the browser has to do then is to track a little more information, and make the user aware of that information.
But to do that, the browsers must change. They've got to change in 3 of these 4 ways:
1. cache certificate use statistics and other information, on a certificate and URL basis. Some browsers already cache some info - this is no big deal.
2. display the vital statistics of the connection in a chrome (protected) area - especially the number of visits. This we call the branding box. This represents a big change to browser security model, but, for various reasons, all users of browsers are going to benefit.
3. accept self-signed certs as *normal* security, again displayed in the chrome. This is essential to get people used to seeing many more cert-protected sessions, so that the above 2 parts can start to work.
4. servers should bootstrap as a normal default behaviour using an on-demand generated self-signed cert. Only when it is routine for certs to be in existence for *any* important website will the browser be able to reliably track a persistency and the user start to understand the importance of keeping an eye on that persistency.
It's not a key/padlock, it's a number. Hey presto, we *can* teach the user what a number means - it's the number of times that you visited BankofAmerica.com, or FunkofAmericas.co.mm or wheresoever you are heading. If that doesn't seem right, don't enter in any information.
These are pretty much simple changes, and the best news is that Ye & Smith's "Trusted Paths for Browsers" showed that this was totally plausible.
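To give a feel for how little is actually involved in points 1 and 2, here's a rough sketch of the bookkeeping - not any real browser's API; the storage path and function names are made up for illustration:

    # Sketch only: count visits per (hostname, certificate fingerprint) and
    # surface the number so the user notices "first visit" anomalies.
    import json, os

    STORE = os.path.expanduser('~/.branding_box.json')   # illustrative path

    def record_visit(hostname, cert_fingerprint):
        stats = json.load(open(STORE)) if os.path.exists(STORE) else {}
        key = hostname + '|' + cert_fingerprint
        stats[key] = stats.get(key, 0) + 1
        with open(STORE, 'w') as f:
            json.dump(stats, f)
        return stats[key]

    def branding_box_text(hostname, cert_fingerprint):
        visits = record_visit(hostname, cert_fingerprint)
        if visits == 1:
            return "FIRST visit to %s with this certificate - expected?" % hostname
        return "You have visited %s %d times with this certificate." % (hostname, visits)

    print(branding_box_text('bankofamerica.com', 'AB:CD:EF:...'))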
Today, the Financial Times leads its InfoTech review with phishing [1]. The FT has new stats: Brightmail reports 25 unique phishing scams per day. Average amount shelled out for 62m emails by corporates that suffer: $500,000. And, 2.4bn emails seen by Brightmail per month - with a claim that they handle 20% of the world's mail. Let's work those figures...
That means some 12bn scam emails per month, worldwide. If 62m emails cause costs of half a million, then that works out at $0.008 per email. 144bn emails per year makes for ... $1.152 billion paid out every year [2].
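For those who like to see the back-of-the-envelope written down, the arithmetic runs like this (figures as above):

    # Back-of-the-envelope check of the figures above.
    cost_per_email = 500_000.0 / 62_000_000      # ~$0.008 per phishing email
    scam_per_year = 2.4e9 * 5 * 12               # 2.4bn/month at ~20% share -> 144bn/year
    print(round(cost_per_email, 3))              # 0.008
    print(round(0.008 * scam_per_year / 1e9, 3)) # ~1.152 (billion dollars per year)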
In other words, each phishing email is generating losses to business of a penny. Black indeed - nobody has been able to show such a profit model from email, so we can pretty much guarantee that the flood is only just beginning.
(The rest of the article included lots of cartels trying to peddle solutions, and a mention that the IETF thinks email authentication might help. Fat chance of that, but it did mention one worrying development - phishers are starting to use viral techniques to infect the user's PC with key loggers. That's a very worrying development - as there is no way a program can defeat something that is permitted to invade by the Microsoft operating system.)
[1] The Financial Times, London, 23rd June 2004,
"Gone phishing," FT-IT Review.
[2] Compare and contrast this 1 billion dollar loss to the $5bn claimed by NYT last week:
"Phishing an epidemic, Browsers still snoozing"
http://www.financialcryptography.com/mt/archives/000153.html
"While it's difficult to pin down an exact dollar amount lost when identity thieves strike such institutions, Jones said 20 cases that have been proposed for federal prosecution involve $300,000 to $1 million in losses each."
This matches the amount reported in the Texas phishing case, although it refers to identity theft, not phishing (yes, they are not the same).
A study by Gartner Research [L04] found that about two million users gave such information to spoofed web sites, and that "Direct losses from identity theft fraud against phishing attack victims -- including new-account, checking account and credit card account fraud" cost U.S. banks and credit card issuers about $1.2 billion last year.
[L04] Avivah Litan, Phishing Attack Victims Likely Targets for Identity Theft, Gartner FirstTake, FT-22-8873, Gartner Research, 4 May 2004
As I've frequently observed, the worst companies to encourage into new money businesses are banks. Frank Trotter, now the Chairman of the highly successful Internet bank, Everbank, once famously observed that the companies best placed to enter the new money business were mass transits, telcos and couriers. I think he was dead right; and he's been proven correct, 2 out of 3 so far.
Here's a report about banks failing to get into the remittances business. The reasons should be fairly easy to spot, so I'll not bore with yet another list of critical factors.
U.S. banks fail to attract immigrant remittance business
Eduardo Porter NYT
Tuesday, June 08, 2004
The entry of big U.S. banks into the business of handling immigrants' remittances of money back home was expected three years ago to produce a financial revolution with potentially powerful policy implications for the United States and Latin America.
So far, the revolution does not seem to have panned out.
According to a report that was set for release on Monday by the Pew Hispanic Center, a nonprofit research group, banks have captured only a trifling share of the money flow. And the price of remitting money - which fell sharply in the late 1990s - has leveled off.
For the 10 million Latin American immigrants in the United States who last year sent $30 billion to their families back home, linked bank accounts and international cash-machine networks offer a cheaper way to send money than using traditional cash-based services like Western Union, a subsidiary of First Data, and Moneygram, which is owned by Viad. Through remittance services, banks also hoped to draw these immigrants - more than half of whom do not have U.S. bank accounts - into the formal financial sector.
Treasury Secretary John Snow has been a major promoter of this cause. In April, he joined finance ministers and central bank governors of the Group of 7 industrialized nations in a statement promising to continue working on ways to make it easier and cheaper for immigrants to send money to their home countries and "to integrate remittance services in the formal financial sector."
The fee for sending small amounts, after dropping from about 15 percent in the late 1990s to 9 percent in 2001, has not fallen much further. The average fee for sending around $200 to Latin America this year is 7.6 percent of the amount sent, according to the study.
"The banks have gone through all this effort; it's still early, but the numbers are still fairly small, given the size of the operation and the extent of their investment," said Roberto Suro, director of the Pew center. "Prices have stabilized, especially in Mexico, the biggest market, despite very substantial increases in volume and competition."
Today, some 50 companies vie for a slice of the roughly $1.2 billion a month that flows to Mexico. And banks have focused on the Mexican market. It was the Mexican government's decision to issue new identity cards for its expatriates in the United States, and the U.S. Treasury's decision to allow banks to accept the cards as identification when opening accounts, that drew many banks into the remittance business three years ago.
But despite hefty marketing efforts, the banks still have only a meager share of the business. According to the Pew report, the four largest banks in the business - Citibank, Wells Fargo, Bank of America and Harris Bank - together handle about 100,000 transfers a month. That is less than 3 percent of the 40 million remittance transactions to Mexico each year. And banks' effect on prices also seems muted. The average fee to send $200 to Mexico is 7.3 percent, compared with 8 percent in 2001.
Offerings from the banks have changed the market to some extent. Flat-fee structures make it less expensive to transfer larger sums of money: The average fee to send $400 to Mexico is 4.4 percent. And some of their products and services are cheaper than other transfer methods. But the banks remain hampered by many immigrants' lack of familiarity with formal banks and financial services as well as by other obstacles - like the small cash-machine networks and bank branch systems of Mexico and other Latin American countries, which seldom extend into rural areas.
Manuel Orozco, a remittance expert at Georgetown University who put together the report for Pew, said that U.S. banks had been aggressively pursuing the business for only about three years and were still learning how to gain new customers.
"The cost will fall," Orozco said, "as banks attract more customer deposits and so they can finance the transfers."
The New York Times
I'm not one for awards, as I usually see them as a career destroyer. But these ones are fun - their careers are already destroyed in the name of truth, justice, etc.
I refer of course to the mutual funds scandal. It has been a bit of an eye-opener for me, partly because of the dirt that was exposed, and partly because so many forces were against fixing the mess. It's known, for example, that regulators knew the game, but "could not do anything about it."
In a surprise move, Compliance Reporter, an industry governance magazine, has decided to give the so-called whistleblowers - Harrington, Nesfield and Goodwin - their award of "Compliance Persons of the Year." While the press were busy digging up dirt on these guys to make them appear like evil participants (in some cases, to directly move attention away from the real evil participants, and in others, because it just made for a better story and they didn't understand the scam anyway) here is a group that recognises the rot in the system, and says:
"Though not compliance officers, Harrington, Nesfield and Goodwin's actions had more impact on the mutual fund industry than anyone's ..."
Eliot Spitzer also picked up the award for "Regulator of the Year." Equally controversial, and equally poignant. You have to hand it to these Compliance Reporter guys - they know how to point out that the rest of the industry ain't worth diddly squat when it comes to governance.
Fuller blurbs follow, FTR:
Compliance Persons Of The Year, May 21, 2004: Noreen Harrington, James Nesfield, Andrew Goodwin, informants - The Canary Capital Partners case
Harrington, Nesfield, and Goodwin sounded the alarm on mutual fund trading abuses by bringing the massive improprieties at Canary Capital Partners to light. By doing so they opened the door for New York Attorney General Eliot Spitzer, and eventually the Securities and Exchange Commission, to cast their nets on the mutual fund industry. Harrington is a "gutsy gal," said a Washington, D.C.-based securities lawyer. Though not compliance officers, Harrington, Nesfield and Goodwin's actions had more impact on the mutual fund industry than anyone's, the lawyer said. The informants are credited with being the catalysts for a complete ethical makeover of the entire industry. As Vanguard Group founder and former CEO John Bogle put it in a recent speech, "The shareholder is the raison d'etre for this industry's existence."
Harrington, whose job at Stern Asset Management was to distribute the Stern family's money to other investment funds, was the first to tell Spitzer's office about Canary Capital's trading irregularities. In June, she informed the Attorney General that Canary was parking investments in mutual funds in exchange for being allowed to rapidly trade other funds--also known as market timing. She also told the Attorney General's office that Canary had engaged in illegal late trading--the practice of executing trades at same-day prices after the 4 p.m. close.
Goodwin was a senior trader at Canary Capital, and Nesfield was a back-office consultant hired by Canary to recruit mutual fund companies that would let the hedge fund market-time their funds. Neither was on Canary's payroll when Harrington contacted Spitzer. Nesfield and Goodwin became informants in detailing Canary's abuses once Spitzer launched the investigation, and Spitzer's office acknowledged Harrington's and the informants' roles in the probe.
Harrington, Nesfield, and Goodwin's actions have opened the door to reforms that will keep continuing, said Donald Weiss, partner at Bell, Boyd & Lloyd in Chicago. As another lawyer added, there is just nothing like "a good squeal." Harrington and Goodwin's whereabouts are unknown. Nesfield lives in North Carolina and makes a living installing piers and working on fishing boats.
Regulator Of The Year, May 21, 2004: Eliot Spitzer, New York Attorney General
In early September, Spitzer grabbed the mutual fund industry by the horns and ushered in an age of regulation the likes of which the industry had not seen in more than half a century. "The mutual fund industry is presently undergoing its most thorough transformation since the enactment of the 1940 Act," said Mitch Herr, partner at Holland & Knight in Miami and a litigator who represents securities firms. "No one could reasonably disagree that this entire process arises out of Spitzer's seminal investigation into Canary Capital [Partners] and the various market participants who allowed it to engage in late trading and market timing," said Herr. "Spitzer has been instrumental in identifying systemic problems in the securities industry that need to be addressed by the entire regulatory community."
In early September, Spitzer's office brought a case against hedge fund Canary Capital for market timing and late trading in several mutual funds. It was a case that will live in financial history as the opening battle in the war on fraudulent mutual fund trading practices. "There is no doubt that Eliot Spitzer was the primary catalyst in what resulted in the biggest mutual fund scandal and reform effort in history," said David Tittsworth, executive director of the Investment Counsel Association of America. After the Canary case, the Securities and Exchange Commission, the NASD and the Attorney General of Massachusetts brought charges against mutual funds and individuals who engaged in the trading practices.
"Spitzer certainly deserves recognition for his vigorous intervention into the securities enforcement arena," said C. Evan Stewart, partner with Brown Raysman Millstien Felder & Steiner in New York. "Whether all of his efforts are consistent with the primacy of the federal securities laws and whether all of his prosecutions have been interposed consistent with public policy, however, remain to be seen." Spitzer has received his share of criticism for attempting to regulate mutual fund fees and for not acting in concert with federal regulators such as the SEC. In spite of the controversy, many feel Spitzer has succeeded in his crusade to improve the markets and investor confidence. "[Whether you] agree with him or not, the fact is the public investor sleeps better at night as a result of Spitzer's recent actions," said Bill Singer, partner with Gusrae Kaplan & Bruno in New York.
The PKI sector successfully pushed the notion that trust could be outsourced. This was a marketing claim that was never quite shown to be the case, in that PKI itself never delivered a workable business model. So maybe the PKI vendors will bounce back in another life, and show us what they meant.
I think not. Outsourcing trust to a PKI vendor is like outsourcing taste to a brewery: you may as well let the brewer drink the beer for you.
This conundrum should be obvious to any serious business person. It is possible to outsource process, and it is possible to outsource substantial elements of due diligence (DD). But in the end, you make the decision, and the document you get from some rating agency is just one input into the full process.
The credit ratings agencies are perhaps the best example. Do they do your trust for you? No, not really. They provide a list of the customer's credit events. As well as that useful input to the process, a good business conducts other checks. A forecourt - car seller - might check the driver's licence, and it might pay more attention to some things on the credit report than others. Car dealers make assessments of integrity by looking and talking to the person. And ultimately they trust in the courts, police and driver and vehicle registration people to provide limits.
There's an easy test. If the trust is outsourced to a firm, the firm can make the decision for you. Does the credit agency decide to sell a car on credit? No chance. Does a third party PKI decide to let the customer in to transfer her life's savings? No way.
There are cases where decisions of trust are made entirely by other organisations. In which case, I'd suggest, the model is back to front. What's happened is that you've outsourced your business to the trust provider. Or, the decision maker has outsourced customer acquisition to you. That which owns the customer, is the business. You're now in the business of providing leads.
So if all this is true, why did PKI vendors make such a big deal of outsourcing trust? They weren't trying to put their customers out of business by acquiring their customers, that's for sure. No, it seems as if it was just another powerful image of marketing. Also, as an evocative reason with no substance, it was a qualifier. If a customer "bought" the message that trust could be outsourced, they were likely to buy into PKI, also.
Identity theft is a uniquely American problem. It reflects the massive - in comparison to other countries - use of data and credit to manage Americans' lives. Other countries would do well to follow the experiences, as "what happens there, comes here." Here are two articles on the modus operandi of the identity thief [1], and the positive side of massive data collection [2].
First up, the identity thief [1]. He's not an individual, he's a gang, or more like a farm. Your identity is simply a crop to process. Surprisingly, it appears that garbage collected from the streets (Americans call it trash) is still the seed material. Further, the database nation's targeting characteristics work for the thief as he doesn't need to "qualify" the victim any. If you receive lots of wonderful finance deals, he wants your business too.
Once sufficient information is collected (bounties paid per paper) it becomes a process of using PCs and innocent address authorities to weasel one's way into the prime spot. For example, your mail is redirected to the farm, the right mails are extracted, and your proper mail is conveniently re-delivered - the classic MITM. We all know paper identity is worthless for real security, but it is still surprising to see how easily we can be brought in to harvest.
[Addendum: Lynn Wheeler reports that a new study by Professor Judith Collins of Michigan State University reveals up to 70% of identity theft starts with employee insider theft [1.b]. This study, as reported by MSNBC, directly challenges the above article.]
Next up, a surprisingly thoughtful article on how data collection delivers real value - cost savings - to American society [2]. The surprise is in the author, Declan McCullagh, who had previously been thought to be a bit of a Barbie for his salacious use of gossip in the paparazzi tech press. The content is good but very long.
The real use of information is to make informed choices - not offer the wrong thing. Historically, this evolved as networks of traders that shared information. To counteract fraud that arose, traders kept blacklists and excluded no-gooders. A dealer exposed as misusing his position of power stood to lose a lot, as Adam Smith argued, far more indeed than the gain on any one transaction [3].
In the large, merchants with businesses exposed to public scrutiny, or to American-style suits, can be trusted to deal fairly. Indeed, McCullagh claims, the US websites are delivering approximately the same results in privacy protection as those in Europe. Free market wins again over centralised regulations.
Yet there is one area where things are going to pot. The company known as the US government, a sprawling, complex interlinking of huge numbers of databases, is above any consumer scrutiny and thus pressure for fair dealings. Indeed, we've known for some years that the policing agencies did an end-run around Congress' prohibition on databases by outsourcing to the private sector. The FBI's new purchase of your data from ChoicePoint is "so secret that even the contract number may not be disclosed." This routine dishonesty and disrespect doesn't even raise an eyebrow anymore.
Where do we go from here? As suggested, the challenge is to enjoy the benefits of massive data conglomeration without losing the benefit of privacy and freedom. It'll be tough - the technological solutions to identity frauds at all levels from financial cryptographers have not succeeded in gaining traction, probably because they are so asymmetric, and deployment is so complicated as to rule out easy wins. Even the fairly mild SSL systems the net community put in place in the '90s have been rampantly bypassed by phishing-based identity attacks, not leaving us with much hope that financial cryptographers will ever succeed in privacy protection [4].
What is perhaps surprising is that we have in recent years redesigned our strong privacy systems to add optional identity tokens - for highly regulated markets such as securities trading [5]. The designs haven't been tested in the full, but it does seem as though it is possible to build systems that are both identity strong and privacy strong. In fact, the result seems to be stronger than either approach alone.
But it remains clear that deployment against an uninterested public is a hard issue. Every company selling privacy to my knowledge has failed. Don't hold your breath, or your faith, and keep an eye on how this so-far American disease spreads to other countries.
[1] Mike Lee & Brian Hitchen, "Identity Theft - The Real Cause,"
http://www.ebcvg.com/articles.php?id=217
[1.b] Bob Sullivan, "Study: ID theft usually an inside job,"
http://www.msnbc.msn.com/id/5015565
[2] Declan McCullagh, 'The upside of "zero privacy,"'
http://www.reason.com/0406/fe.dm.database.shtml
[3] Adam Smith, "Lecture on the Influence of Commerce on Manners," 1766.
[4] I write about the embarrassment known as secure browsing here:
http://iang.org/ssl/
[5] The methods for this are ... not publishable just yet, embarrassingly.
Paypal have announced their new list of "unacceptable goods" as covered by Wired [1]. It includes such odd things as human body parts, event tickets, batteries, food, medical equipment, malls, copies of software, ...
The list is 64 items long and an amazing read [2]. I'd hazard a guess that if anyone complained about some item, on the list it goes! Wired comments that postcards portraying topless subjects are permitted, as is food in the shape of genitalia, yet any other adult content must only be transacted on eBay. How long will it be before the "Mothers against evil uses of fruit" put a stop to that?
Pretty soon, the only thing left will be Paypal subscription fees.
The observation has been made (by Paypal themselves to industry conferences) that Paypal is best understood as a lower segment credit card facility for merchants. They permit small merchants to take payments. Paypal's heritage as a Palm Pilot person-to-person money is long forgotten, and now it seems that they have moved even closer to conservative values when it comes to deciding what's right and what's wrong for you to buy from approved merchants.
Luckily, over in the DGC community, there appears to be an alternate. Instead of focusing on the common carrier principle, and banning certain uses of the product, the gold issuers have adopted a customer rejection approach. Partly because of their historical background as privacy supporters, and partly due to free market leanings, the principle is that any Issuer retains the right to discharge a person's account, for any reason whatever.
I.e., the Issuer of a gold currency does not offer the service to just anyone, and you don't have your normal consumer right of equal service. This seems to have resulted in some quite fierce closures of accounts, but it also seems to have preserved the currencies as, well, currency.
[1] "PayPal Tightens Transaction Reins," By Christopher Null
http://www.wired.com/news/print/0,1294,58208,00.html
[2] "PayPal Acceptable Use Policy,"
http://www.paypal.com/cgi-bin/webscr?cmd=p/gen/ua/use/index_frame-outside
MAY 17, 2004 (IDG NEWS SERVICE) - The European Union plans to invest $13 million during the next four years to develop a secure communication system based on quantum cryptography, using physical laws governing the universe on the smallest scale to create and distribute unbreakable encryption keys, project coordinators said today.
The goal is to create unbreakable encryption keys
News Story by Philip Willan
If successful, the project will produce the cryptographer's Holy Grail -- absolutely unbreakable code -- and thwart the eavesdropping efforts of espionage systems such as Echelon, which intercepts electronic messages on behalf of the intelligence services of the U.S., Britain, Canada, New Zealand and Australia.
"The aim is to produce a communication system that cannot be intercepted by anyone, and that includes Echelon," said Sergio Cova, a professor from the electronics department of Milan Polytechnic and one of the project's coordinators. "We are talking about a system that requires significant technological innovations. We have to prove that it is workable, which is not the case at the moment."
Major improvements in geographic range and speed of data transmission will be required before the system becomes a commercial reality, Cova said.
"The report of the European Parliament on Echelon recommends using quantum cryptography as a solution to electronic eavesdropping. This is an effort to cope with Echelon," said Christian Monyk, the director of quantum technologies at Austrian company ARC Seibersdorf Research GmbH and overall coordinator of the project. Economic espionage has caused serious harm to European companies in the past, Monyk noted.
"With this project, we will be making an essential contribution to the economic independence of Europe," he said.
Quantum cryptography takes advantage of the physical properties of light particles, known as photons, to create and transmit binary messages. The angle of vibration of a photon as it travels through space -- its polarization -- can be used to represent a zero or a one under a system first devised by scientists Charles H. Bennett and Gilles Brassard in 1984. It has the advantage that any attempt to intercept the photons is liable to interfere with their polarization and can therefore be detected by those operating the system, the project coordinators said.
An intercepted key would therefore be discarded and a new one created for use in its place.
The new system, known as SECOQC (Secure Communication based on Quantum Cryptography), is intended for use by the secure generation and exchange of encryption keys, rather than for the actual exchange of data, Monyk said.
"The encrypted data would then be transmitted by normal methods," he said. Messages encrypted using quantum mechanics can currently be transmitted over optical fibers for tens of miles. The European project wants to extend that range by combining quantum physics with other technologies, Monyk said.
"The important thing about this project is that it is not based solely on quantum cryptography but on a combination with all the other components that are necessary to achieve an economic application," he said. "We are taking a really broad approach to quantum cryptography, which other countries haven't done."
Experts in quantum physics, cryptography, software and network development from universities, research institutes and private companies in Austria, Belgium, Britain, Canada, the Czech Republic, Denmark, France, Germany, Italy, Russia, Sweden and Switzerland will be contributing to the project, Monyk said.
In 18 months, project participants will assess progress on a number of alternative solutions and decide which technologies seem most promising and merit further development, project coordinators said. The goal is to have a workable technology ready in four years, but SECOQC will probably require three to four years of work beyond that before commercial use, Monyk said.
Cova was more cautious, noting, "This is the equivalent of the first flight of the Wright brothers, so it is too early to be talking already about supersonic transatlantic travel."
The technological challenges facing the project include the creation of sensors capable of recording the arrival of photons at high speed and photon generators that produce a single photon at a time, Cova said. "If two or three photons are released simultaneously, they become vulnerable to interception," he said.
Monyk believes there will be a global market of several million users once a workable solution has been developed. A political decision will have to be made regarding who those users will be in order to prevent terrorists and criminals from taking advantage of the completely secure communication network, he said.
"In my view, it should not be limited to senior government officials and the military, but made available to all users who need really secure communications," Monyk said, citing banks, insurance companies and law firms as potential clients. A decision will have to be made as to whether and how a key could be made available to law enforcement authorities under exceptional circumstances.
"It won't be up to us to decide who uses our results," said Cova.
Reprinted with permission. For more news from IDG, visit IDG.net
Story copyright 2004 International Data Group. All rights reserved.
See "QC - another hype cycle" for commentary
Here is a work-in-progress mindmap of all threats to the secure browsing process. It purports to be an attack tree, which is a technique for capturing and categorising all possible threats to a process. It is one possible aid to constructing a threat model, which in turn is a required step in constructing a security model. The mindmap supports another work in progress on threat modelling for secure browsing.
This work was inspired by the Mozilla project's new policy on new CAs, coordinated by Frank Hecker. Unpublished as yet, it forms part of the controversial security debate surrounding the CA model.
( To recap: the secure browsing security model uses SSL as a protocol and the Certificate Authority model as the public key authentication regime, all wrapped up in HTTPS within the browser. Technically, the protocol and key regime are separate, but in practice they are joined at the hip, so any security modelling needs to consider them both together. SSL - the protocol part - has been widely scrutinised and has evolved to what is considered a secure form. In contrast the CA model has been widely criticised, and has not really evolved since its inception. It remains the weak link in security.
As part of a debate on how to address the security issues in secure browsing and other applications that use SSL/CA such as S/MIME, the threat model is required before we can improve the security model. Unfortunately, the original one is not much use, as it was a theoretical prediction of the MITM that did not come to pass. )
Professor David Chaum is working on the voting problem. On the face of it, this is an intractable problem given the requirement of voter secrecy. Yet David Chaum is one of the handful of cryptographers who have changed the game - his blinded tokens invention remains one of the half dozen seminal discoveries of the last half-century.
Of course, in financial voting, the requirement for ballot box privacy is not so stringent. Indeed votes are typically transferable as proxies, if not strictly saleable. For this reason, we can pretty much accomplish financial voting with what we know and have already (an addition of a nymous feature or a new issue would be two ways to do it).
But it is always worth following what is happening on the other side of the fence. Here's the abstract for David's paper, Secret Ballot Receipts and Transparent Integrity:
"Introduced here is a new kind of receipt. In the voting booth, it is as convincing as any receipt. And once the voter takes it out of the booth, it can readily be used to ensure that the votes it contains are included correctly in the final tally. But it cannot be used in improper influence schemes to show how the voter voted. The system incorporating the receipts can be proven mathematically to ensure integrity of the election against whatever incorrectly-behaving machines might do to surreptitiously change votes. Not only can receipts and this level of integrity enhance voter confidence, but they eliminate the need for trusted voting machines."
Cryptographers and software engineers are looking askance at the continued series of announcements in the Quantum Cryptography world. They are so ... vacuous, and yet so repetitious. Surely nobody is buying this stuff?
'Fraid so. It's another hype cycle, in the making. Here's my analysis, as posted to the cryptography list.
Subject: Re: Bank transfer via quantum crypto
From: "Ian Grigg" <iang@...>
Date: Sun, April 25, 2004 14:47
To: "Ivan ..."
Cc: "Metzdowd Crypto" <cryptography@metzdowd.com>
Ivan Krstic wrote:
> I have to agree with Perry on this one: I simply can't see a compelling
> reason for the push currently being given to ridiculously overpriced
> implementations of what started off as a lab toy, and what offers - in
> all seriousness - almost no practical benefits over the proper use of
> conventional techniques.
You are looking at QC from a scientific perspective.
What is happening is not scientific, but business.
There are a few background issues that need to be
brought into focus.
1) The QC business is concentrated in the finance
industry, not national security. Most of the
fiber runs are within range. 10 miles not 100.
2) Within the finance industry, the security
of links is done majorly by using private lines.
Put in a private line, and call it secure because
only the operator can listen in to it.
3) This model has broken down somewhat due to the
rise of open-market net carriers, open colos, etc.
So, even though the mindset of "private telco line
is secure" is still prevalent, the access to those
lines is much wider than thought.
4) there is eavesdropping going on. This is clear,
although it is difficult to find confirmable
evidence on it or any stats:
"Security forces in the US discovered an illegally installed fiber
eavesdropping device in Verizon's optical network. It was placed at a
mutual fund company ... shortly before the release of their quarterly
numbers" Wolf Report March, 2003
(some PDF that google knows about.) These things
are known as vampire taps. Anecdotal evidence
suggests that it is widespread, if not exactly
rampant. That is, there are dozens or maybe hundreds
of people capable of setting up vampire taps. And,
this would suggest maybe dozens or hundreds of taps
in place. The vampires are not exactly cooperating
with hard information, of course.
5) What's in it for them? That part is all too
clear.
The vampire taps are placed on funds managers to
see what they are up to. When the vulnerabilities
are revealed over the fibre, the attacker can put
in trades that take advantage. In such a case,
the profit from each single trade might be in the
order of a million (plus or minus a wide range).
6) I have not as yet seen any suggestion that an
*active* attack is taking place on the fibres,
so far, this is simply a listening attack. The
use of the information happens elsewhere, some
batch of trades gets initiated over other means.
7) Finally, another thing to bear in mind is that
the mutual funds industry is going through what
is likely to be the biggest scandal ever. Fines
to date are at 1.7bn, and it's only just started.
This is bigger than S&L, and LTCM, but as the
press does not understand it, they have not
presented it as such. The suggested assumption
to draw from this is that the mutual funds are
*easy* to game, and are being gamed in very many
and various fashions. A vampire tap is just one
way amongst many that are going on.
So, in the presence of quite open use of open
lines, and in the presence of quite frequent
attacking on mutual funds and the like in order
to game their systems (endemic), the question
has arisen how to secure the lines.
Hence, quantum cryptography. Cryptographers and
engineers will recognise that this is a pure FUD
play. But, QC is cool, and only cool sells. The
business circumstances are ripe for a big cool
play that eases the fears of funds that their
info is being collected with impunity. It shows
them doing something.
Where we are now is the start of a new hype
cycle. This is to be expected, as the prior
hype cycle(s) have passed. PKI has flopped and
is now known in the customer base (finance
industry and government) as a disaster. But,
these same customers are desperate for solutions,
and as always are vulnerable to a sales pitch.
QC is a technology whose time has come. Expect
it to get bigger and bigger for several years,
before companies work it out, and it becomes the
same disputed, angry white elephant that PKI is
now.
If anyone is interested in a business idea, now
is the time to start building boxes that do "just
like QC but in software at half the price." And
wait for the bubble to burst.
iang
PS: Points 1-7 are correct AFAIK. Conclusions,
beyond those points, are just how I see it, IMHO.
In response to the most fanatical and interesting debate in recent monetary times, I published the following rant on the Liberty Dollar (LD). (You should read the prior announcement to pick up the context, and also the three-score or so responses, if you can get the archives of DGCChat.)
I don't claim to have nailed it, but nothing that was said later or before shook my suspicion that Liberty Dollar have architected a flimflam currency, and are headed for a fall, some way, some day.
-------- Original Message --------
Subject: [dgc.chat] Liberty Bimetallism
Date: Sun, 11 Apr 2004 16:20:03 -0400
From: Ian Grigg
To: dgcchat@lists.goldmoney.com
It seems to me that Liberty Dollars are Bimetallic.
One metal is the silver, and the other is the USD.
Ignoring the fact that there isn't any more metal in a USD than a shiny strip these days, the notion of a currency trying to balance itself between the movements of two diverging metals may explain the turmoil.
Bimetallic currencies all come to a bad end, some day. This notion of trying to maintain the face value of the Liberty Dollar at something above the cost of silver, and around the price of dollars, has to have a bad end, according to anything I've ever read or heard about.
It's nice that a distribution chain can take a margin of approximately 100% before getting to the user. Really good that someone has figured out how to sell the concept of metallic currencies to the users out there, in a nice easy pretty package.
But, that doesn't mean that we should all drop our economic marbles and squeal for joy like a bunch of teenagers. There's more to music than a good looking pop star.
Apparently, the face value can go up, and we are exhorted to rush in and collect up the old ones. Because, when the change happens - phones ringing hot, must be soon now - we can all change our old notes to new notes. And, *double* our face value, in one deal.
Now, it seems to be a good deal. We seem to gain, coz the users will then take the face value and give us twice the benefit. Sellers are obligated to do some trading, so there is support at some level for this face value.
Great deal. The problem is, if there is money made by some, then there is money *lost* by others. Hence, this is a non-productive move of wealth from one group to another.
As it is non-productive, then it can't be sustainable. It flies against the sense of economic thought much prized in these places; on the face of it, and it is very much a facial issue, this is no better than the taxes, scams, cons and other evils that we bemoan.
Why is Liberty Dollar offering something for nothing?
Or, am I wrong? Is there any viable case to be made, in an economic sense, to support the notion that a solid, important currency can just turn around and rewrite a number from 10 to 20?
iang
This paper, written for publication in a proceedings, covers the background of "why the Ricardian Contract?" It's now in final proofreading mode, so if anyone wants a review copy, let me know (still embargoed so no link posted).
This was a hard paper to write - I had to reverse-engineer a process from many years back. It travels the journey of how we came to place the contract as the keystone of issuance.
(Along that journey, or revisiting thereof, I had to dispose of any notions of making this paper the one and only for Ricardian Contracts - they suddenly sprang an 8-page limit on me, which threw the 22-page draft into turmoil. So, I've stripped out requirements and also any legal commentary, which means - oh joy - two more papers needed...)
Many thanks to Hasan for the metaphor. The more I think about it, and write about it, the contract really does have a critical place in financial cryptography, such that it deserves that title: the keystone. Because, only when it is in place is the archway of governance capable of supporting the real application.
Or something. Expressive writing was never my strong suit, so the metaphor is doubly welcome. Bring them on!
One bright spot in the aforementioned report on cyber security is the section on security modelling [1] [2]. I had looked at this a few weeks back and found ... very little in the way of methodology and guidance on how to do this as a process [3]. The sections extracted below confirm that there isn't much out there, list what steps are known, and provide some references. FTR.
[1] Cybersecurity FUD, FC Blog entry, 5th April 2004, http://www.financialcryptography.com/mt/archives/000107.html
[2] Security Across the Software Development Lifecycle Task Force, _Improving Security Across the Software Development Lifecycle_, 1st April, 2004. Appendix B: Processes to Produce Secure Software, "Practices for Producing Secure Software," pp. 21-25. http://www.cyberpartnership.org/SDLCFULL.pdf
[3] Browser Threat Model, FC Blog entry, 26th February 2004. http://www.financialcryptography.com/mt/archives/000078.html
While principles alone are not sufficient for secure software development, principles can help guide secure software development practices. Some of the earliest secure software development principles were proposed by Saltzer and Schroeder in 1974 [Saltzer]. These eight principles apply today as well and are repeated verbatim here:
1. Economy of mechanism: Keep the design as simple and small as possible.
2. Fail-safe defaults: Base access decisions on permission rather than exclusion.
3. Complete mediation: Every access to every object must be checked for authority.
4. Open design: The design should not be secret.
5. Separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key.
6. Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job.
7. Least common mechanism: Minimize the amount of mechanism common to more than one user and depended on by all users.
8. Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly.
Later work by Peter Neumann [Neumann], John Viega and Gary McGraw [Viega], and the Open Web Application Security Project (http://www.owasp.org) builds on these basic security principles, but the essence remains the same and has stood the test of time.
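To make a couple of these concrete, here is a minimal sketch of fail-safe defaults and complete mediation in an access check; the users, objects and permission table are invented for illustration.

PERMISSIONS = {
    ("alice", "ledger", "read"),
    ("alice", "ledger", "write"),
    ("bob",   "ledger", "read"),
}

def authorise(user, obj, action):
    # Fail-safe default: anything not explicitly granted is denied.
    return (user, obj, action) in PERMISSIONS

def read_object(user, obj):
    # Complete mediation: the check happens on every access, not once at login.
    if not authorise(user, obj, "read"):
        raise PermissionError(f"{user} may not read {obj}")
    return f"contents of {obj}"

print(read_object("bob", "ledger"))        # allowed
try:
    read_object("carol", "ledger")         # denied, because never granted
except PermissionError as e:
    print("denied:", e)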
Threat Modeling
Threat modeling is a security analysis methodology that can be used to identify risks, and guide subsequent design, coding, and testing decisions. The methodology is mainly used in the earliest phases of a project, using specifications, architectural views, data flow diagrams, activity diagrams, etc. But it can also be used with detailed design documents and code. Threat modeling addresses those threats with the potential of causing the maximum damage to an application.
Overall, threat modeling involves identifying the key assets of an application, decomposing the application, identifying and categorizing the threats to each asset or component, rating the threats based on a risk ranking, and then developing threat mitigation strategies that are then implemented in designs, code, and test cases. Microsoft has defined a structured method for threat modeling along these lines [Howard 2002].
Other structured methods for threat modeling are available as well [Schneier].
Although some anecdotal evidence exists for the effectiveness of threat modeling in reducing security vulnerabilities, no empirical evidence is readily available.
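For flavour, here is a minimal sketch of that flow - list the assets, attach the threats, give each a rough risk ranking, and sort so the worst get mitigation attention first. The assets, threats and scores are invented; real methods such as Microsoft's use richer categorisation and rating schemes than a bare likelihood-times-impact number.

from dataclasses import dataclass

@dataclass
class Threat:
    asset: str
    description: str
    likelihood: int          # 1 (rare) .. 5 (expected)
    impact: int              # 1 (minor) .. 5 (catastrophic)
    mitigation: str = "TBD"

    @property
    def risk(self):
        return self.likelihood * self.impact

threats = [
    Threat("server private key", "key theft from disk", 2, 5, "keep the key in hardware"),
    Threat("session traffic", "passive wiretap on the link", 3, 4, "encrypt the channel"),
    Threat("login form", "credential phishing", 4, 4, "mutual authentication"),
]

for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"risk {t.risk:2d}  {t.asset:20s} {t.description:30s} -> {t.mitigation}")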
Attack Trees
Attack trees characterize system security when faced with varying attacks. The use of Attack Trees for characterizing system security is based partially on Nancy Leveson's work with "fault trees" in software safety [Leveson]. Attack trees model the decision-making process of attackers. Attacks against a system are represented in a tree structure. The root of the tree represents the potential goal of an attacker (for example, to steal a credit card number). The nodes in the tree represent actions the attacker takes, and each path in the tree represents a unique attack to achieve the goal of the attacker.
Attack trees can be used to answer questions such as: What is the easiest attack? The cheapest attack? The attack that causes the most damage? The attack that is hardest to detect? Attack trees are used for risk analysis, to answer questions about the system's security, to capture security knowledge in a reusable way, and to design, implement, and test countermeasures to attacks [Viega] [Schneier] [Moore].
Just as with Threat Modeling, there is anecdotal evidence of the benefits of using Attack Trees, but no empirical evidence is readily available.
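A toy attack tree in that spirit might look like the following; the goal, actions and costs are invented, and the cheapest-attack query is only one of the analyses the literature describes.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    kind: str = "LEAF"                      # "LEAF", "OR" or "AND"
    cost: int = 0                           # only meaningful for leaves
    children: List["Node"] = field(default_factory=list)

    def cheapest(self):
        # Cost of the cheapest attack that achieves this node's goal.
        if self.kind == "LEAF":
            return self.cost
        child_costs = [c.cheapest() for c in self.children]
        return min(child_costs) if self.kind == "OR" else sum(child_costs)

goal = Node("steal credit card number", "OR", children=[
    Node("tap the network link", "AND", children=[
        Node("gain access to the colo", cost=5000),
        Node("install a vampire tap", cost=2000),
    ]),
    Node("phish the cardholder", cost=500),
    Node("bribe an insider", cost=20000),
])

print("cheapest attack costs:", goal.cheapest())    # 500 - phishing wins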
Attack Patterns
Hoglund and McGraw have identified forty-nine attack patterns that can guide design, implementation, and testing [Hoglund]. These soon-to-be-published patterns include:
1. Make the Client Invisible
2. Target Programs That Write to Privileged OS Resources
3. Use a User-Supplied Configuration File to Run Commands That Elevate Privilege
4. Make Use of Configuration File Search Paths
5. Direct Access to Executable Files
6. Embedding Scripts within Scripts
7. Leverage Executable Code in Nonexecutable Files
8. Argument Injection
9. Command Delimiters
10. Multiple Parsers and Double Escapes
11. User-Supplied Variable Passed to File System Calls
12. Postfix NULL Terminator
13. Postfix, Null Terminate, and Backslash
14. Relative Path Traversal
15. Client-Controlled Environment Variables
16. User-Supplied Global Variables (DEBUG=1, PHP Globals, and So Forth)
17. Session ID, Resource ID, and Blind Trust
18. Analog In-Band Switching Signals (aka "Blue Boxing")
19. Attack Pattern Fragment: Manipulating Terminal Devices
20. Simple Script Injection
21. Embedding Script in Nonscript Elements
22. XSS in HTTP Headers
23. HTTP Query Strings
24. User-Controlled Filename
25. Passing Local Filenames to Functions That Expect a URL
26. Meta-characters in E-mail Header
27. File System Function Injection, Content Based
28. Client-side Injection, Buffer Overflow
29. Cause Web Server Misclassification
30. Alternate Encoding the Leading Ghost Characters
31. Using Slashes in Alternate Encoding
32. Using Escaped Slashes in Alternate Encoding
33. Unicode Encoding
34. UTF-8 Encoding
35. URL Encoding
36. Alternative IP Addresses
37. Slashes and URL Encoding Combined
38. Web Logs
39. Overflow Binary Resource File
40. Overflow Variables and Tags
41. Overflow Symbolic Links
42. MIME Conversion
43. HTTP Cookies
44. Filter Failure through Buffer Overflow
45. Buffer Overflow with Environment Variables
46. Buffer Overflow in an API Call
47. Buffer Overflow in Local Command-Line Utilities
48. Parameter Expansion
49. String Format Overflow in syslog()
These attack patterns can be used to discover potential security defects.
References
[Saltzer] Saltzer, Jerry, and Mike Schroeder, "The Protection of Information in Computer Systems", Proceedings of the IEEE. Vol. 63, No. 9 (September 1975), pp. 1278-1308. Available on-line at http://cap-lore.com/CapTheory/ProtInf/.
[Neumann] Neumann, Peter, Principles Assuredly Trustworthy Composable Architectures: (Emerging Draft of the) Final Report, December 2003
[Viega] Viega, John, and Gary McGraw. Building Secure Software: How to Avoid Security Problems the Right Way, Reading, MA: Addison Wesley, 2001.
[Howard 2002] Howard, Michael, and David C. LeBlanc. Writing Secure Code, 2nd edition, Microsoft Press, 2002
[Schneier] Schneier, Bruce. Secrets and Lies: Digital Security in a Networked World, John Wiley & Sons (2000)
[Leveson] Leveson, Nancy G. Safeware: System Safety and Computers, Addison-Wesley, 1995.
[Moore 1999] Moore, Geoffrey A. Inside the Tornado: Marketing Strategies from Silicon Valley's Cutting Edge. HarperBusiness, reprint edition, July 1, 1999.
[Moore 2002] Moore, Geoffrey A. Crossing the Chasm. Harper Business, 2002.
[Hoglund] Hoglund, Greg, and Gary McGraw. Exploiting Software: How to Break Code. Addison-Wesley, 2004.
As international currency is one of the big possible applications for Financial Cryptography, the way the currencies move makes for an important business backdrop. It's well known that volatility is good for business, and a rising market is good for startups...
In this vein, the prediction that the US Dollar is losing its pole position is starting to come true. James Turk, in goldmoney's Founder's Notes, presents "What Future for the U.S. Dollar?", a discussion by W. Joseph Stroupe of the decision by the central banks of Japan and India to "ease up in buying dollars."
This signalling away from single-minded support of the dollar, by way of reduced reserve purchases, means their currencies will rise and their exports to the US will shrink. But it also means that Japan and India will be less exposed to the shrinking international value of their reserves as the dollar moves further down.
This article "US complicit in its own decline", in the Asia Times, by the same author, is much longer and broader, and raises the surprising claim that Russia is manouvering to take an important position in oil supply. To become, it seems, the other Opec. Interesting stuff, which I mostly placed in the "reserved for future evidence" basket.
This article by Professor Burke, of the Riga Graduate School of Law, explains how the common law tradition creates a framework of contracts based on negotiation by equal parties, and tries to squeeze all agreements into that framework. Yet form contracts do not squeeze so readily, and only the presence of legal fictions - ones fraught with potential for flaky rulings - can make these form contracts work under the regime of the classical negotiated contract.
Standard form contracts are those written by a vendor, for their clients, to create the terms and conditions for some product.
The difference is in the absence of negotiation, consideration of terms, and meeting of the minds. A form contract is presented, and there is no discussion as to terms. Indeed, Burke says, the counterparty, the customer, has no necessary appreciation of the terms, and not even any especial knowledge that there are terms to consider!
Thus, as an inescapable conclusion, there can be no meeting of the minds in a form contract. Yet, form contracts are totally legal, totally acceptable, and people travel to work every day on them: we derive huge economic benefit from them, from bus and airline tickets to dry cleaning stubs, from insurance contracts to software licences...
In fact, Burke proposes that form contracts are 99% of all contracts (albeit with recognition that there is no empirical study to back this up). Whereas 99% of the tradition of contract law is negotiated contracts. This is, to my layman's eye, a huge criticism of the state of the law, but we must drag ourselves back to the here and now.
Ricardian Contracts are Form Contracts. In this sense they are like airline tickets. The user acts according to a purchase of a product. The product is a payment, a transaction, and the product is "purchased" as a whole entity, including the terms and conditions that apply.
What the user does not do is negotiate the contract. The user of a Ricardo transaction doesn't enter into a bargain, nor examine the terms, nor suggest their own terms. They simply buy a product called a payment.
Indeed, in a form contract, as there is no meeting of the minds, there is no symbol to record that event - so, there is no "need" of signatures.
We've known for a long time that the digital signature of the Issuer was ... an act of some pretence, in the sense that it was completely overdone: the hash entanglement, the publication of the document, and original sales create far more of a record of the intent of the Issuer than any mathematical nonsense dressed up as a signature. But we have puzzled over the question of the user's intent.
Surely, goes classical contract logic, we needed the user's signature to record their intent and act of entering into the contract? No, it appears not, if Burke's critique is to be taken at face value. This is a form contract, and the usage of the product is as much as we can expect, and as much as we need.
This discovery doesn't solve every unanswered question about Ricardian Contracts, but it does shift emphasis away from trying to craft a user's signature as a legal symbol (a task of some contortion, if you know your digsig politics). In the general case, the Issuer of a Ricardian Contract needs a signature from the user no more than an auditorium owner needs a signature from members of the audience, as they walk in.
Both are still covered by terms and conditions of the contract for admittance. It may be that a signature is collected for entrance to a shareholders' meeting, as opposed to a concert, but that's a matter of content, not form.
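As an aside, the hash entanglement mentioned above can be sketched in a few lines: the issuer's contract document is hashed, and that hash travels inside every transaction, so the act of using the payment is an act of buying that exact product, terms and all. The field names below are purely illustrative - they are not the Ricardo wire format.

import hashlib, json

contract_text = """ISSUER: Example Metals Ltd
UNIT: one gram of silver
TERMS: redeemable on presentation; disputes go to XYZ arbitration
"""

contract_hash = hashlib.sha256(contract_text.encode()).hexdigest()

def make_payment(payer, payee, amount):
    # Every transaction names the contract by its hash, not by a mutable title.
    return json.dumps({
        "contract": contract_hash,
        "payer": payer,
        "payee": payee,
        "amount": amount,        # in units defined by the contract
    })

tx = make_payment("alice", "bob", 10)
print(tx)

# Anyone holding the contract document can confirm it is the one the payment
# refers to, simply by re-hashing it.
assert json.loads(tx)["contract"] == hashlib.sha256(contract_text.encode()).hexdigest()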
A good article on the tracking of terror cells, drawing on some weaknesses in cell commsec. The article appears, and purports, to be complete only because the methods described have already been rendered useless: a new weapon, a new defence. Anti-terror battles are like that; it shows how much more effective police-style investigation is against terrorism than a military posture.
http://www.iht.com/articles/508783.html
Terror network was tracked by cellphone chips
Don Van Natta Jr. and Desmond Butler/NYT
Thursday, March 4, 2004
How cellphones helped track global terror web
LONDON - The terrorism investigation code-named Mont Blanc began almost by accident in April 2002, when authorities intercepted a cellphone call that lasted less than a minute and involved not a single word of conversation.
Investigators, suspicious that the call was a signal between terrorists, followed the trail first to one terror suspect, then to others, and eventually to terror cells on three continents.
What tied them together was a computer chip smaller than a fingernail. But before the investigation wound down in recent weeks, its global net caught dozens of suspected Qaeda members and disrupted at least three planned attacks in Saudi Arabia and Indonesia, according to counterterrorism and intelligence officials in Europe and the United States.
The investigation helped narrow the search for one of the most wanted men in the world, Khalid Shaikh Mohammed, who is accused of being the mastermind of the Sept. 11 attacks, according to three intelligence officials based in Europe. The U.S. authorities arrested Mohammed in Pakistan last March.
For two years, investigators now say, they were able to track the conversations and movements of several Qaeda leaders and dozens of operatives after determining that the suspects favored a particular brand of cellphone chip. The chips carry prepaid minutes and allow phone use around the world.
Investigators said they believed that the chips, made by Swisscom of Switzerland, were popular with terrorists because they could buy the chips without giving their names.
"They thought these phones protected their anonymity, but they didn't," said a senior intelligence official based in Europe. Even without personal information, the authorities were able to conduct routine monitoring of phone conversations.
A half-dozen senior officials in the United States and Europe agreed to talk in detail about the previously undisclosed investigation because, they said, it was completed. They also said they had strong indications that terror suspects, alert to the phones' vulnerability, had largely abandoned them for important communications and instead were using e-mail, Internet phone calls and hand-delivered messages.
"This was one of the most effective tools we had to locate Al Qaeda," said a senior counterterrorism official in Europe.
The officials called the operation one of the most successful investigations since Sept. 11, 2001, and an example of unusual cooperation between agencies in different countries. Led by the Swiss, the investigation involved agents from more than a dozen countries, including the United States, Pakistan, Saudi Arabia, Germany, Britain and Italy.
In 2002, the German authorities broke up a cell after monitoring calls by Abu Musab al-Zarqawi, who has been linked by some top U.S. officials to Al Qaeda, in which he could be heard ordering attacks on Jewish targets in Germany. Since then, investigators say, Zarqawi has been more cautious.
"If you beat terrorists over the head enough, they learn," said Colonel Nick Pratt, a counterterrorism expert and professor at the George C. Marshall European Center for Security Studies in Garmisch-Partenkirchen, Germany. "They are smart."
Officials say that on the rare occasions when operatives still use mobile phones, they keep the calls brief and use code words.
"They know we are on to them and they keep evolving and using new methods, and we keep finding ways to make life miserable for them," said a senior Saudi official. "In many ways, it's like a cat-and-mouse game."
Some Qaeda lieutenants used cellphones only to arrange a conversation on a more secure telephone. It was one such brief cellphone call that set off the Mont Blanc investigation.
The call was placed on April 11, 2002, by Christian Ganczarski, a 36-year-old Polish-born German Muslim who the German authorities suspected was a member of Al Qaeda. From Germany, Ganczarski called Khalid Shaikh Mohammed, said to be Al Qaeda's military commander, who was running operations at the time from a safe house in Karachi, Pakistan, according to two officials involved in the investigation.
The two men did not speak during the call, counterterrorism officials said. Instead, the call was intended to alert Mohammed of a Qaeda suicide bombing mission at a synagogue in Tunisia, which took place that day, according to two senior officials. The attack killed 21 people, mostly German tourists.
Through electronic surveillance, the German authorities traced the call to Mohammed's Swisscom cellphone, but at first they did not know it belonged to him. Two weeks after the Tunisian bombing, the German police searched Ganczarski's house and found a log of his many numbers, including one in Pakistan that was eventually traced to Mohammed. The German police had been monitoring Ganczarski because he had been seen in the company of militants at a mosque in Duisburg, and last June the French police arrested him in Paris.
Mohammed's cellphone number, and many others, were given to the Swiss authorities for further investigation. By checking Swisscom's records, Swiss officials discovered that many other Qaeda suspects used the Swisscom chips, known as Subscriber Identity Module, or SIM cards, which allow phones to connect to cellular networks.
For months the Swiss, working closely with counterparts in the United States and Pakistan, used this information in an effort to track Mohammed's movements inside Pakistan. By monitoring the cellphone traffic, they were able to get a fix on Mohammed, but the investigators did not know his specific location, officials said.
Once Swiss agents had established that Mohammed was in Karachi, the U.S. and Pakistani security services took over the hunt with the aid of technology at the U.S. National Security Agency, said two senior European intelligence officials. But it took months for them to actually find Mohammed "because he wasn't always using that phone," an official said. "He had many, many other phones."
Mohammed was a victim of his own sloppiness, said a senior European intelligence official. He was meticulous about changing cellphones, but apparently he kept using the same SIM card.
In the end, the authorities were led directly to Mohammed by a CIA spy, the director of central intelligence, George Tenet, said in a speech last month. A senior U.S. intelligence official said this week that the capture of Mohammed "was entirely the result of excellent human operations."
When Swiss and other European officials heard that U.S. agents had captured Mohammed last March, "we opened a big bottle of Champagne," a senior intelligence official said.
Among Mohammed's belongings, the authorities seized computers, cellphones and a personal phone book that contained hundreds of numbers. Tracing those numbers led investigators to as many as 6,000 phone numbers, which amounted to a virtual road map of Al Qaeda's operations, officials said.
The authorities noticed that many of Mohammed's communications were with operatives in Indonesia and Saudi Arabia. Last April, using the phone numbers, officials in Jakarta broke up a terror cell connected to Mohammed, officials said.
After the suicide bombings of three housing compounds in Riyadh, Saudi Arabia, on May 12, the Saudi authorities used the phone numbers to track down two "live sleeper cells." Some members were killed in shootouts with the authorities; others were arrested.
Meanwhile, the Swiss had used Mohammed's phone list to begin monitoring the communications and activities of nearly two dozen of his associates. "Huge resources were devoted to this," a senior official said. "Many countries were constantly doing surveillance, monitoring the chatter."
Investigators were particularly alarmed by one call they overheard last June. The message: "The big guy is coming. He will be here soon."
An official familiar with the calls said, "We did not know who he was, but there was a lot of chatter." Whoever "the big guy" was, the authorities had his number. A Swisscom chip was in the phone.
"Then we waited and waited, and we were increasingly anxious and worried because we didn't know who it was or what he had intended to do," an official said.
But in July, the man believed to be "the big guy," Abdullah Oweis, who was born in Saudi Arabia, was arrested in Qatar. "He is one of those people able to move within Western societies and to help the mujahedeen, who have lesser experience," an official said. "He was at the very center of the Al Qaeda hierarchy. He was a major facilitator."
In January, the operation led to the arrests of eight people accused of being members of a Qaeda logistical cell in Switzerland.
Some are suspected of helping with the suicide bombings of the housing compounds in Riyadh, which killed 35 people, including eight Americans.
Later, the European authorities discovered that Mohammed had contacted a company in Geneva that sells Swisscom phone cards. Investigators said he ordered the cards in bulk.
The New York Times
Copyright © 2003 The International Herald Tribune
Pelle writes: The old NeuClear web site has been replaced by a much improved collaborative wiki style web site.
I am trying to document the concepts, applications as well as legal and governance aspects of NeuClear.
Please feel free to register, comment and even write articles or snippets.
If there is interest I will setup a general purpose financial cryptography space within the site.
John Snow, the US Treasury Secretary, has stated that the US Government no longer stands behind Fannie Mae and Freddie Mac. (1, below, 2, 3)
This takes me back to the late eighties, when the Old Lady's Governor announced that the Bank of England would no longer necessarily bail out a bank just because it was a bank. Many years later, driving back after b-school, I heard Eddie George on the radio announcing the bankruptcy and immediate wind-up of Barings Bank.
Barings was old and venerable, but tiny. A ripple in the pond. Fannie Mae and Freddie Mac are ... none of that!
WASHINGTON (Reuters) - U.S. Treasury Secretary John Snow took direct aim Tuesday at mortgage finance firms Fannie Mae and Freddie Mac, repeating previous warnings to investors that government-sponsored enterprises are not financially backed by the U.S. government.
"We don't believe in a 'too big to fail' doctrine, but the reality is that the market treats the paper as if the government is backing it. We strongly resist that notion," he said in prepared remarks before a bankers group here.
"You know there is that perception. And it's not a healthy perception and we need to disabuse people of that perception. Investments in Fannie (FNM: Research, Estimates) and Freddie (FRE: Research, Estimates) are uninsured investments," he said.
...
A new Group of Thirty (G30) report, Enhancing Public Confidence in Financial Reporting, commissioned after the last few years' spate of corporate failures, has stated that it is Governance that has failed, not Accounting.
It is true that governance was the core failure in these cases. But accounting is asleep at the wheel, and asking not to be woken up right now is hardly useful.
Accounting, according to the G30 team, has integrity. Which, they drill down to mean these five criteria (see the doc for their definitions): Consistency, Neutrality, Reliability, Relevance, and Understandability.
These things can be done better. Consistency and Neutrality are achieved by more and deeper automation - this is widely known.
Building on the former two, Reliability is then created by liberal dashes of crypto - sign and hash everything in sight.
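For the flavour of it, here is a minimal sketch of what that can mean for a series of records: each entry carries the hash of the one before it, so the series cannot be quietly rewritten after the fact. The entries are invented, and an HMAC with a throwaway key stands in for a proper issuer signature.

import hashlib, hmac, json

SIGNING_KEY = b"issuer-signing-key (illustrative only)"

def append_record(ledger, entry):
    prev_hash = ledger[-1]["this_hash"] if ledger else "0" * 64
    body = json.dumps({"entry": entry, "prev_hash": prev_hash}, sort_keys=True)
    ledger.append({
        "entry": entry,
        "prev_hash": prev_hash,
        "this_hash": hashlib.sha256(body.encode()).hexdigest(),
        "sig": hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest(),
    })

def verify(ledger):
    prev_hash = "0" * 64
    for rec in ledger:
        body = json.dumps({"entry": rec["entry"], "prev_hash": prev_hash}, sort_keys=True)
        if rec["prev_hash"] != prev_hash:
            return False
        if rec["this_hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if rec["sig"] != hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest():
            return False
        prev_hash = rec["this_hash"]
    return True

ledger = []
append_record(ledger, {"date": "2004-03-01", "debit": "cash", "credit": "sales", "amount": 100})
append_record(ledger, {"date": "2004-03-02", "debit": "cash", "credit": "sales", "amount": 250})
print("ledger intact:", verify(ledger))
ledger[0]["entry"]["amount"] = 1000000       # a quiet rewrite of history...
print("after tampering:", verify(ledger))    # ...is no longer quiet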
Once these three things are in place, Relevance and Understandability follows with public disclosure: not the sort that the accountants are thinking about - regulated, limited, formally filed reports - rather the new, open and dynamic engagement with the scrutinising public. Detail that is *outside* the regulatory environment, records that are in excess of requirements, but contribute to making a fair and open picture of a corporation.
Not, as the accountants think, by reducing the amount of information and simplifying it so that the public can understand it, but the total reverse: more quantity and more quality, so the public can ascertain for themselves what is important.
Why don't accountants think in these terms? I'd stab at this: they can't move because of the momentum of current practice and regulations. Which explains why the new trends appear in unregulated sectors such as DGCs, or previously unlisted companies such as eBay which reveals detailed statistics of its auction business.
All security models call for a threat model; it is one of the key inputs or factors in the construction of the security model. Secure browsing - SSL / HTTPS - lacked this critical analysis, and recent work over on the Mozilla Browser Project is calling for the rectification of this. Here's my attempt at a threat model for secure browsing, in draft.
Comments welcome. One thing - I've not found any doco on how a threat model is written out, so I'm in the dark a bit. But, ignorance is no excuse for not trying...
Today's GoldMoney Alert from James Turk postulates that OPEC oil sales are already priced in Euro terms.
It's based on correlations over three years...
Three data points is not enough to draw a conclusion, but it's a very interesting postulation. One would need to look at the quarterly or monthly figures to develop any confidence in the claim.
Mind you, it is to be expected. If OPEC had started pricing in Euros, they would probably not have announced it, given the sensitivities. There's nothing like allowing the aggressive dollar traders to discover the fait accompli.
Still, the real question, as James pointed out, is whether they are invoicing in Euros, or accepting payment in Euros without undue conversion penalty. If Oil is in the process of switching from dollar trading to Euro trading, or even a hybrid, this reduces the need for central banks to hold so much dollar reserve, thus releasing more dollars to go back home to the US of A.
A cert for a new CA, conveniently named CACert, is being proposed for addition to Mozilla, the big open source group pushing out a successful browser.
As CACert is not a commercial organisation, and doesn't sell its certs for any sort of real money, this has sparked quite a debate.
Mozilla Foundation has held firm to its non-commercial roots so far, by announcing and publishing a draft policy and faq that espouses no-cost CA Cert addition, and fairly perfunctory checks.
The groundswell for reworking browser approach to the crypto security layer is growing. In 2003, I pressed the debate forward with a series of rants attacking the SSL/HTTPS (in)security design.
I suggest the way is now open for cryptographers to adopt economic cryptography, rather than the no-risk cryptography approach used and since discredited in SSL.
In the specific case of SSL/HTTPS, we recommend moving to:
Copying the successful economic cryptography model of SSH would definitely lift the ugly duckling SSL crypto system up out of the doldrums (1st in above rants page, "How effective is open source crypto?" discusses the woeful statistics for SSL certificate usage).
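For those who haven't looked at how SSH does it, here is a minimal sketch of the key-continuity idea - remember the server's key the first time you see it, and complain only if it later changes. No CA in the loop. The store and fingerprint below are illustrative, not OpenSSH's actual known_hosts format.

import hashlib

known_hosts = {}     # hostname -> key fingerprint, remembered across sessions

def fingerprint(public_key):
    return hashlib.sha256(public_key).hexdigest()[:16]

def check_host_key(host, public_key):
    fp = fingerprint(public_key)
    if host not in known_hosts:
        known_hosts[host] = fp               # trust on first use
        return f"{host}: new key {fp} accepted and remembered"
    if known_hosts[host] == fp:
        return f"{host}: key matches previous sessions"
    return f"{host}: WARNING - key changed (expected {known_hosts[host]}, got {fp})"

print(check_host_key("bank.example.com", b"server-key-1"))
print(check_host_key("bank.example.com", b"server-key-1"))
print(check_host_key("bank.example.com", b"server-key-2"))   # a rotated key, or a MITM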
FC 2004 starts this monday in Key West, Florida, USA. If you're not heading there by now, you're ... probably not going to make it!
http://fc04.ifca.ai/schedule.htm
Hardware tokens from PicoDisk start at about 30 Euros for a 32Mb store that could fit on your keyring. These Italian stallions even have one that boasts AES encryption, and another with a biometric fingerprint sensor, for what it's worth...
Big enough to carry your secret keys, your transactions and your entire copy of WebFunds! You have to pay a bit more to get Java installed on them, but they do go up to a whopping 2Gb.
Of course, serious FCers know that the platform that runs your financial application also has to be secure, but nothing's perfect. These cheap tokens could go a long way to covering for many of the sins of the common windowing environments' predilection for being raided by viruses.
(I've since found out that these tokens are only accessible from Windows, and drivers are "closed". Whoops. My interest just hit the floor - it's hard enough to integrate these sorts of things into real apps without the supplier trying to stop you using them. Apologies for wasting your time!)
http://www.technologyreview.com/articles/huang1203.asp?p=0
A new generation of e-payment companies makes it easy to "pay as you go" for inexpensive Web content, portending big profits for online businesses.
By Gregory T. Huang
December 2003/January 2004
Ask Ron Rivest if he's ever been whisked away by the CIA in the middle of the night, and he laughs-but he doesn't say no. At Peppercoin, a two-year-old MIT spinoff in Waltham, MA, the renowned cryptographer oversees an operation far less secretive than an intelligence agency but almost as intense: a clearinghouse for electronic "micropayments," pocket-change transactions that may finally allow magazines, musicians, and a multitude of others to profit from selling their wares online. It's September, and with only weeks to go until commercial launch, Peppercoin's software engineers troubleshoot at all hours. Marketing executives shout across the room and over the phone, making deals.
But in the eye of the storm, Rivest is calm and collected. Eyes sparkling, real change jingling in his pocket, he even wears sandals with authority. What Peppercoin is trying to do, he says, is make it easy to "pay as you go" for inexpensive Web content-so you won't need to pay subscription fees, limit yourself to free content, or share files illegally. With a click of the mouse-and Peppercoin's software churning away behind the scenes-you can now download a single MP3 from an independent-music site, watch a news video clip, or buy the latest installment of a Web comic from your favorite artist. All for just pennies.
It sounds simple, but it wasn't possible a few months ago. Most Web merchants still can't support micropayments-transactions of about a dollar or less-because the processing fees from banks and credit card companies erase any profit. But Peppercoin, the brainchild of Rivest and fellow MIT computer scientist Silvio Micali, is in the vanguard of a new crop of companies-including BitPass of Palo Alto, CA, and Paystone Technologies of Vancouver, British Columbia-that make cash-for-bits transactions superefficient. These companies' founders are well aware of the string of defunct e-payment companies whose virtual currencies have gone the way of the Confederate dollar. But they've got something new up their sleeves: easier-to-use technology that allows Web sites to accept tiny payments by effectively processing them in batches, thereby cutting down on bank fees.
So throw out your current conceptions of Web surfing. Rather than sifting through pop-up ads and subscription offers, imagine dropping a quarter on an independent film, video game, specialized database, or more powerful search engine. If programmers and Web artists could profitably charge a few cents at a time, their businesses could flourish. And with an easy way for users to buy a richer variety of content, experts say, the current deadlock over digital piracy could effectively dissolve, giving way to a multibillion-dollar business stream that rejuvenates the wider entertainment industry the same way video rentals did Hollywood in the 1980s. Down the road, cell phones, personal digital assistants, and smart cards equipped with micropayment technology could even supplement cash in the real world.
"The key is timing and technology," says Rivest, who thinks Peppercoin has both right. The company's technical credibility, at least, is not an issue. Rivest coinvented the RSA public-key encryption system, used by Web browsers to make credit card purchases secure. Micali holds more than 20 patents on data security technologies and won the 1993 Gödel Prize, the highest award in theoretical computer science. Their system uses statistics and encryption to overcome profit-erasing transaction fees; the approach is unique and more efficient than its predecessors.
The timing looks good, too-not just for Peppercoin, but for other micropayment companies as well. "One year ago, it was, 'Will people pay?' Now it's, 'How will they pay?'" says Ian Price, CEO of British Telecommunications' Click and Buy division, which uses micropayments to sell articles, games, and other Web content to customers in more than 100 countries. And in September, Apple Computer announced that its online music store sold more than 10 million 99-cent songs in its first four months. Apple's success was the "starting gun for a track meet of companies" planning to roll out pay-per-download services by 2004, says Rob Carney, Peppercoin's founding vice president of sales and marketing.
Indeed, 40 percent of today's online companies would sell content they're currently giving away if they had a viable micropayment system, says Avivah Litan, an analyst at Gartner Research who specializes in Internet commerce. According to Forrester Research, the market for music downloads is expected to grow from $16 million in 2003 to $3 billion in 2008. And a Strategy Analytics report states that mobile-gaming revenues could top $7 billion by 2008. "The market is ready" for micropayments, says Rivest.
Even so, getting the technology to take off won't be easy. Micropayment companies need to make their systems fully reliable, secure, and easy to use. Just as important, they need to increase demand by working with Web businesses to deliver a broader range of digital products. So on the eve of Peppercoin's commercial launch, the question is not whether the timing and technology are good. It's whether they're good enough.
In Statistics We Trust
Understanding Peppercoin requires a little history. According to old English common law, the smallest unit of payment that could appear in a contract was a peppercorn. Silvio Micali's wife, a professor of law, suggested that as the name for his startup back in 2001, and it stuck (becoming "Peppercoin" for the sake of clarity). Now, in his office at MIT's Computer Science and Artificial Intelligence Laboratory, Micali is explaining what makes Peppercoin tick. On hand are technical books and papers in neat piles, should we need them. It's simple mathematics, says Micali-but don't believe him.
Micali knows two things: cryptography and coffee. His micropayment analogies involve cappuccinos. There are two standard ways of buying digital content, he says. One is like prepaying for a certain number of cappuccinos, the other like getting a bill at the end of the month for all the cappuccinos you've had. That is, the customer either pays up front for a bundle of content-say, 10 archived New York Times articles-or runs a tab that's settled every so often. The problem with both models is that the seller has to keep track of each customer's tab, and the buyer is locked into a particular store or site. But in the spring of 2001 came a "very lucky coffee break" when Micali and Rivest, whose office is just down the hall, put their heads together. "We started discussing this problem, and within minutes we had the basic solution," says Micali. "And we got very excited! First, from the discovery. Second, from the coffee."
What they discovered was a way to cut the overhead cost of electronic payments by processing only a statistical sample of transactions, like taking a poll. On average, Peppercoin might settle, say, one out of every 100 transactions - but it pays the seller 100 times the amount of that transaction. Given enough transactions, it all evens out, says Micali.
It looks simple to the buyer, who only has to click on an icon to charge an item to her Peppercoin account, but the action behind the scenes is pretty complicated. In beta tests, special encryption software runs on both the buyer's and seller's computers, protecting their interactions from hackers and eavesdroppers. And encrypted in each transaction is a serial number that says how many purchases the customer has made over time, for how much, and from whom.
Ninety-nine transactions out of a hundred are not fully processed - but they're still logged by the seller's computer. One transaction out of a hundred, selected at random, is sent to Peppercoin. After Peppercoin pays the seller 100 times the value of that transaction, it bills the customer for all of her outstanding purchases from all sites that use Peppercoin. Since about one out of a hundred purchases is processed, her last bill will have come, on average, a hundred purchases ago. That's the trick: by paying the seller and charging the customer in lump sums every 100 purchases or so, Peppercoin avoids paying the fees charged by credit cards - roughly 25 cents per transaction - on the other 99 purchases. "This is fantastic," says Greg Papadopoulos, chief technology officer at Sun Microsystems and a member of Peppercoin's technical advisory board. "Ron and Silvio have done what needed to be done - get the cost of transactions down without ripping up the existing infrastructure of credit cards and banks."
But what's to keep all this fancy statistical footwork from cheating sellers out of their due? And what's to keep buyers and sellers both from cheating the system? "That's the secret sauce," says Micali.
He's talking about cryptography, the sweet science of codes and ciphers. Its inner workings are, well, cryptic-paper titles at conferences include things like unimodular matrix groups and polynomial-time algorithms-but it's used every day to keep communications, documents, and payments secure. Roughly speaking, says Rivest, statistical sampling of transactions makes the system efficient, while cryptography keeps the random selection process fair and secure. So Peppercoin charges users exactly what they owe, and if Peppercoin's payment to the seller happens to be more or less than the value of the purchases customers actually made, the discrepancy is absorbed by the seller. Over time, this jiggle will become negligible, especially compared to the amount of money Web sites will make that they couldn't make before.
Think about it for too long, and most people get a headache. But Micali and Rivest have been thinking about this sort of thing for 20 years, so they make a formidable and complementary team: Micali is as animated as Rivest is understated, like fire and ice. "They've done brilliant work over the years," says Martin Hellman, a professor emeritus of electrical engineering at Stanford University and a pioneer in cryptography going back to the 1970s. "Peppercoin has a clever approach."
But clever mathematics aside, the proof is in the pudding. In the end, Peppercoin's executives say, their system must be as easy to use as cash. Perry Solomon, Peppercoin's founding CEO, explains it this way, pulling some change out of his pocket. "I can give you this quarter, and you can look at it quickly and say, 'Okay, that's a quarter.' You don't need to call the bank to verify it." Online merchants, however, must check a credit card holder's identity and available credit before approving a purchase. Going to that trouble makes sense for a $50 sweater or a $4,495 Segway transporter, but not for a 50-cent song. So Peppercoin's software stamps each transaction with the digital equivalent of e pluribus unum-a guarantee to the seller that it's Peppercoin handling the transaction, and that payment is forthcoming. The seller can quickly verify this stamp and deliver the goods.
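(Blog aside: the sampling intuition can be checked with a toy simulation - only about one purchase in a hundred is ever settled, but it is settled at a hundred times its value, so the merchant's expected take matches its sales while the per-transaction bank fee is paid only on the settled few. The prices, fee and settlement rate below are illustrative, and the cryptography that keeps the random selection honest is not shown.)

import random

RATE = 100                  # roughly one purchase in RATE is actually settled
FEE_PER_SETTLEMENT = 0.25   # bank/card fee, paid only on settled transactions

def simulate(n_purchases=100000, price=0.50):
    total_sold = settled_payout = fees = 0.0
    for _ in range(n_purchases):
        total_sold += price
        if random.randrange(RATE) == 0:        # this purchase gets settled
            settled_payout += RATE * price     # merchant is paid 100x its value
            fees += FEE_PER_SETTLEMENT
    return total_sold, settled_payout, fees

sold, paid, fees = simulate()
print(f"value of goods sold:        ${sold:,.2f}")
print(f"paid to merchant (sampled): ${paid:,.2f}")
print(f"processing fees incurred:   ${fees:,.2f} (vs $25,000.00 if every sale were settled)")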
Bootstrapping with Bits
The theory may be impeccable, and the founders' credentials outstanding, but how does a startup transform a micropayment system into a practical, sellable product? That's the stuff of late-night whiteboard discussions enhanced by takeout Chinese food and bad TV movies, says Joe Bergeron, Peppercoin's vice president of technology. Bergeron, a baby-faced programming whiz, has the task of translating Rivest and Micali's algorithms into software. Like any good engineer at a startup, he has spent many a night under his desk trying to squeeze in a few hours of sleep. "I'm dreaming in Peppercoins now," he says.
Minting micropayments starts with hardware. A secure data center a few kilometers from company headquarters houses hundreds of thousands of dollars' worth of computing horsepower and memory. All of Peppercoin's money transfers flow electronically through these machines. A rack of 20 processors and backups and four levels of hardware security are set up in a special cage walled off by Plexiglas guaranteed to withstand a 90-minute riot; the rental contract even specifies that the cage will repel "small-arms fire and manual tools."
First Out of the Gate
Andreas Gebauer remembers the pesky young guy well. Five times in 2000, Firstgate Internet founder Norbert Stangl showed up at the Berlin offices of Stiftung Warentest (Product Testing Foundation), Germany's leading consumer reports magazine, to peddle his e-payment technology. Five times Gebauer, the magazine's online editor, said he wasn't interested. Finally, on the sixth trip, Gebauer agreed to give it a try if Stangl would just leave him alone.
Persistence pays off. "We've been very successful," says a converted Gebauer. In the three years since Stiftung Warentest adopted Firstgate's system, its monthly online revenues have skyrocketed from $5,000 to more than $100,000. And today, while the U.S. micropayment market is still in its early stages, Firstgate has some 2,500 merchant users and almost two million paying customers in Europe-and pulls in more than $1 million a month in revenues, making it one of the world's leading e-payment and distribution companies. Its users in media and publishing, the fastest-growing market segment, include the Independent, Der Spiegel, Reader's Digest, Encyclopedia Britannica, and Gruner and Jahr.
Firstgate's software, unlike Peppercoin's, must keep track of every transaction, and most are dollars rather than cents. But it works. Web customers can go to any Firstgate-enabled site, click on an article, and read it. They are billed via their credit card, debit card, or phone bill once they accrue a few dollars in charges. The system works by fetching digital content from Web merchants and delivering it only to paying customers. Firstgate charges a setup fee for merchants and pockets 10 to 30 percent of each transaction. (That may sound steep, but for micropayments, Firstgate can be cheaper than a credit card company.) Meticulously hand-tailored, the system has won a slew of European industry and consumer awards. "It's finely tuned, like a BMW," says Ian Price, CEO of British Telecommunications' Click and Buy division, which has partnered with Firstgate to sell online games, articles, and even a voting mechanism for interactive TV shows.
Most important, Firstgate has proven that a global market exists for Internet content priced in the $1 to $10 range, says Stangl, who is now the company's chairman. In late 2002, the company set up offices in New York. How will its success in signing up newspapers, magazines, and other media groups translate to the U.S. market? "We have experience working with so many online companies," says George Cain, Firstgate's CEO in North America. "What people are thinking about here, we've already got built into our system."
But Peppercoin's system must also be bulletproof against electronic problems. Take transaction speed, for instance. Peppercoin is working with one Web site that delivers 1,000 digital maps per second. For Peppercoin to handle that many purchases, and for buyers to get their content without waiting, the behind-the-scenes computations must happen in milliseconds. As Bergeron explains, sketching a flow chart on a whiteboard, the software module that identifies what the buyer is paying for, verifies that the payment is good, and sends the digital content to the buyer has been taking a few milliseconds too long in beta tests. The solution: do these steps in parallel, and manage customer queries in a flexible way by devoting more computing resources to the steps that take longer. Trimming bits of fat like this saves precious processing time per click - and ultimately keeps the system running efficiently.
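To illustrate the parallel-steps idea - a toy Python sketch, not Peppercoin's actual code; the function names and the 2 ms / 3 ms timings are invented - the item lookup and the payment check run concurrently, so the slowest step rather than the sum of the steps sets the latency:

# Hypothetical sketch: run the independent per-purchase checks concurrently
# instead of sequentially, so the slowest step sets the latency floor.
from concurrent.futures import ThreadPoolExecutor
import time

def identify_item(order_id):       # what is the buyer paying for?
    time.sleep(0.002)              # pretend this lookup takes 2 ms
    return {"order": order_id, "item": "digital-map-42"}

def verify_payment(order_id):      # is the payment stamp good?
    time.sleep(0.003)              # pretend verification takes 3 ms
    return True

def process_purchase(order_id):
    # The lookup and the payment check do not depend on each other,
    # so run them in parallel; delivery waits for both.
    with ThreadPoolExecutor(max_workers=2) as pool:
        item_f = pool.submit(identify_item, order_id)
        ok_f = pool.submit(verify_payment, order_id)
        item, ok = item_f.result(), ok_f.result()
    if ok:
        return f"deliver {item['item']}"   # send content to the buyer
    return "reject"

print(process_purchase("A-1001"))  # total latency ~3 ms, not ~5 ms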
Perhaps even more crucial to Peppercoin's success, though, is its sales strategy. "The challenge isn't getting people to buy the math. It's enabling a new business model for the Web," says Rob Carney. In two respects, micropayment startups are fundamentally different from online person-to-person payment companies like Mountain View, CA-based PayPal, one of the most successful of e-payment companies. First, they are enabling Web merchants to sell low-priced digital content, not physical items. Second, they don't have anything approaching the captive market that PayPal has in the customers who use eBay, the San Jose, CA, online auction house that purchased PayPal in 2002.
So Peppercoin's plan - similar to those of other micropayment startups (see table "The Micropayment Movement") - is to go after Web merchants, work with them to decide what kinds of content to sell, and build up a brand name with which to approach larger distributors. It's a painstaking process; Solomon and Carney have attended more than 400 sales meetings in two years, trying to persuade merchants that Peppercoin's own fees - which work out to be much lower than the flat transaction fees charged by credit cards - are a small price to pay for the extra business micropayments will generate.
But all this work is starting to pay off. "Peppercoin has been a huge benefit for us," says Rex Fisher, chief operations officer at Music Rebellion, a Terre Haute, IN, company that last June started selling 99-cent MP3s by the download, using a beta version of Peppercoin's system. The bottom line: micropayments allow the music site to triple its profit margin, as compared with traditional payment methods. As for the user interface-buyers sign up for a Peppercoin account and then click on music icons to charge songs-Fisher says he's working with Peppercoin to make it "easy and hassle free." He acknowledges that it's still early, however, and that results in the next year will say more about the overall success of micropayments.
Other users go further in their praise for e-payments as enablers of new kinds of Web content. "The promised land is filled with micropayments," gushes David Vogler, a digital-entertainment executive formerly in charge of online content at Disney and Nickelodeon. One of Vogler's current ventures is a humor site called CelebrityRants.com. There, using Peppercoin's software, you can buy animated recordings of embarrassing diatribes or confessions from celebrities caught on tape - everyone from Britney Spears to new California governor Arnold Schwarzenegger. "We explored many solutions, but Peppercoin seemed like the right horse to bet on," says Vogler. Moreover, he adds, it was "insanely easy" to get the system up and running. That and a painless consumer experience seem to be the keys to early adoption.
So this is how it starts: not with a conglomerate of media giants adopting micropayments, but with pockets of small entertainment and Web-services sites. Plenty of sites will still be free, supported by advertising, says Carney. But micropayments, alongside ad sales and subscriptions, will become another leg of the stool that supports Web businesses. And micropayment companies are hoping that their systems will give entrepreneurs and consumers the freedom to try out new kinds of commerce on the Web, and to buy and sell an ever wider variety of digital goods. "The Web was dying," says Kurt Huang, CEO of BitPass, a micropayment startup he cofounded while he was a graduate student at Stanford University. "We needed to do something to change its economics."
Take Web comics. Today there are more than 3,000 online cartoonists worldwide, and that number is growing fast, says Scott McCloud, an author and Web comic artist based in Newbury Park, CA. "Micropayments are the missing piece of the puzzle," he says. Using a beta version of BitPass's technology - users prepay a few dollars into an account - McCloud sold 1,500 copies of his comics for 25 cents each in eight weeks. Not huge numbers, to be sure, but the potential for steady growth is there. And it's not supplementary income - this is how Web artists will make their money. "We're not just slapping a price tag on what could be free," says McCloud. "This is allowing us to do work that we couldn't do before."
The Micropayment Movement
Company | Technology | Market/Status
BitPass (Palo Alto, CA) | Costs of Web content and services are deducted from an account prepaid via credit card or PayPal | Independent artists, publishers, musicians; beta trials under way; commercial release in late 2003
Firstgate Internet (Cologne, Germany) | Servers fetch Web content and deliver it to customers; charges appear on credit card or phone bill | News and analyst reports; in operation since 2000; nearly two million customers in Europe
PayLoadz (New York, NY) | Delivers digital items via e-mail after users have paid using PayPal | Electronic books, music, software; commercial release in May 2002; 9,600 sellers signed up
Paystone Technologies (Vancouver, British Columbia) | Customer accesses Web content after paying via bank account | Music, publishing; commercial release in May 2003; 700 sellers signed up
Peppercoin (Waltham, MA) | Uses statistics and encryption to process a sample of transactions; users pay via credit card once per 100 or so transactions (a sketch of the sampling idea follows this table) | Music, games, publishing; commercial release in late 2003
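To make the Peppercoin row above concrete, here is a toy simulation of statistical settlement; the 10-cent price, 1-in-100 sampling rate, and 100x multiplier are illustrative assumptions, not Peppercoin's published parameters:

# Toy illustration of statistical settlement (not Peppercoin's real scheme):
# settle roughly 1 transaction in 100, but for 100x its face value, so the
# expected payout equals the sum of the tiny purchases while only about 1%
# of transactions ever touch the credit card network.
import random

random.seed(1)
PRICE = 0.10           # 10-cent purchases
SAMPLE_RATE = 1 / 100  # roughly one in a hundred is settled
MULTIPLIER = 100       # each settled transaction pays 100x face value

purchases = 100_000
settled = sum(1 for _ in range(purchases) if random.random() < SAMPLE_RATE)

face_value = purchases * PRICE
paid_out = settled * PRICE * MULTIPLIER

print(f"face value of all purchases: ${face_value:,.2f}")
print(f"settled transactions:        {settled} (~1%)")
print(f"amount actually settled:     ${paid_out:,.2f}")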
The Coin-Op Web?
In the 1990s, e-payment startups like DigiCash, Flooz, and Beenz crashed because dot-com companies didn't think they needed the technology to make money, and because consumers expected Web content to be free. Times have changed, but there are still plenty of skeptics who doubt micropayments will catch on broadly, considering that MP3 listeners and Web-comics fans are the technology's main U.S. consumers so far. Even those who have made their fortunes in the online-payments world acknowledge that it's an uphill battle. "It's quite possible they could fail miserably in this economic climate," says Max Levchin, cofounder and former chief technology officer of PayPal (see sidebar "The PayPal Precedent").
But both the supply of digital content and consumers' willingness to pay for it are increasing, and the micropayment companies' strategy of signing up Web merchants, one at a time, has promise. "There will be small companies who figure out how to play this chicken-and-egg game," says Andrew Whinston, director of the Center for Research in Electronic Commerce at the University of Texas at Austin. "The key is to become successful before big companies like Microsoft get into it."
The PayPal Precedent
Max Levchin believes that micropayment companies' two keys to success are a simple user interface and an aggressive distribution strategy. TR's 2002 Innovator of the Year, Levchin is the cofounder and former chief technology officer of PayPal, the online-payments pioneer that was sold to eBay for $1.5 billion in October 2002.
Technology Review: Are micropayments ready to take off?
Max Levchin: The Apple music store is a good example that 99-cent payments are a reality. What is uniquely different about the market now is that personal publishing has become a lot more pervasive than it was three to five years ago. There are literally thousands of Web sites that specialize in comics, music, and art that's only available on the Internet. [Artists] look to the Internet to actually make money. So demand is definitely increasing. The question is, are these solutions actually what the market needs?
TR: What do Peppercoin and other micropayment startups need to do to become successful?
Levchin: Most of the technical challenge is about the user interface, not the billing process. Overall, Peppercoin's [beta version] user interface is very raw. I have to download software. I have to wait for a confirmation e-mail. What if my computer crashes? You should never force people to download software. The security is a good thing, but it adds complexity.
TR: What's the greatest challenge, going forward?
Levchin: The biggest difficulty, by far, is distribution. How do you get all these people to start using the system? At PayPal, as soon as we "infected" a couple popular eBay merchants, very quickly we saw this massive growth, where buyers started pushing other merchants to sign up. But there isn't a giant market online right now where you can go to look at all digital content available. Digital merchants are very disparate. And consumers aren't going to sign up, download software, or prepay for a card, because there are not that many places to spend it yet. So marketing to digital merchants directly is one way to go. But it will take an incredible amount of human effort to get enough people to sign up.
For a glimpse into the future of micropayments, look overseas. In Japan, most mobile content and services, such as games and ring tones downloaded to cell phones, are paid for. And micropayments are becoming prevalent in Europe's publishing and news-media markets. Firstgate Internet, a digital content distributor in Cologne, Germany, has nearly two million customers and 2,500 clients, including British Telecommunications' Click and Buy, and it is bringing in more than $1 million a month in revenues, says founder and chairman Norbert Stangl (see sidebar "First Out of the Gate"). Its most successful kinds of low-price content: news, research articles, and financial reports.
But Firstgate tallies each purchase separately and pays credit card fees, so its own fees are higher for merchants than most micropayment startups'. Peppercoin and BitPass hope to succeed in the U.S. market by being more efficient for small payments. So will micropayments take off here? "The truth is, nobody knows," says Guy Kawasaki, CEO of Garage Technology Ventures, a venture capital firm that is funding BitPass. "But I look around and I see 50,000 unsigned bands in the world. I see thousands of bloggers, analysts, and artists who want to publish their stuff. And how many databases would you want to search for 50 cents?" Asked when he expects to see a return on his investment, the former Apple guru laughs and says, "Before I die!"
Other observers see a clear path to adoption. "The future of micropayments is very simple," says Sun's Papadopoulos. "You'll get to a critical mass on the network. It will become the equivalent of pocket change, and you'll see fierce price competition on digital content." Falling prices, companies hope, will only increase demand. And as digital content gets cheaper, the temptation to pirate should diminish.
We're already seeing competition: last summer, the music-download store BuyMusic.com put up billboards parodying Apple's music ads and undercutting Apple's 99-cent pricing by selling songs for as little as 79 cents. With America Online, MusicMatch, and Roxio (Napster 2.0) launching stores as well, the music industry will be a proving ground-or perhaps a killing field-for e-payment technologies.
As the contest begins, most micropayment startups have enough capital to see them through the rollout phase. In September, Peppercoin announced that it had raised $4.25 million in its second round of venture funding. But in the long run, how will micropayment companies stay in business? Signing up Web merchants is fine now - deals are quick and the need is there - but an eventual goal is to hook up with a distributor that will become the eBay of bits.
So as Peppercoin makes final preparations for its commercial launch, Carney and Solomon make sales calls. Engineers sit on the edges of their seats, watching the ebb and flow of processing loads and user levels on their monitors. Rivest and Micali, ever patient, stay out of the limelight. If victory arrives, it won't come thundering out of the sky. For companies like Peppercoin, success will build up gradually, like coins clinking into a piggy bank, one by one.
A couple of articles on Insider Trading by Sheldon Richman: the first part is an analysis of the Martha Stewart case. The second part is a broader look at the concept of Insider Trading.
Is Insider Trading good? Or bad? Here's some personal comments...
It's a tricky question. On the face of it, Insider Trading is a straight out-and-out fraud. An insider has internal information that will - in the insider's opinion - cause the stock price to move. So, she buys or sells ahead of the move, and takes the profits.
This is a straight fraud because it takes money from the shareholder base. The shareholders are poorer because they did not enjoy the benefits of the change in price. Of course, this assumes that an insider cannot also be a shareholder, and therein lies the conflict of interest: an insider has a fiduciary duty to shareholders, which may be breached if she acts on the basis of her own shareholdings.
However, the real issue that is at the heart of this fraud is that, economically, it's pretty nigh on impossible to detect and prosecute. In practical terms, the information is a) in the heads of the insiders, b) subject to misinformation constraints as much as any market noise, and c) hard to determine as being "inside" or "outside" some magic circle.
Thus in purely transaction cost terms, making Insider Trading illegal is a very difficult sell. It's a bit like the Music intellectual property debate: songs became property when records were invented, because it was now possible to control their sales by following the shellac and the pianola rolls and sheet music. Of course it took a few decades for this to shake out.
Songs lost their property characteristics with the invention of the personal MP3 player, and we are into the first decade of shaking out right now....
Invisiblog - anonymous weblog publishing (added: 15-Dec-2003)
invisiblog.com lets you publish a weblog using GPG and the Mixmaster anonymous remailer network. You don't ever have to reveal your identity - not even to us. You don't have to trust us, because we'll never know who you are.
File-Exchange - File dump and public key retrieval mechanism (added: 15-Dec-2003)
File-Exchange allows you to exchange files with other people without giving away your identity or harming your privacy by just using a web browser.
XCA (added: 14-Dec-2003)
This application is intended for creating and managing X.509 certificates and RSA keys (DSA keys may be supported in a later release since they are not widely used in PKI cryptography). Everything that is needed for a CA is implemented. All CAs can sign sub-CAs recursively. These certificate chains are shown clearly in a list-view. For easy company-wide use there are customisable templates that can be used for certificate or request generation. All crypto data is stored in a local Berkeley database.
Workshop on Electronic Contracting (WEC) (added: 22-Dec-2003)
http://tab.computer.org/tfec/cec04/cfpWEC.html
Real world commerce is largely built on a fabric of contracts. Considered abstractly, a contract is an agreed framework of rules used by separately interested parties to coordinate their plans in order to realize cooperative opportunities, while simultaneously limiting their risk from each other's misbehavior. Electronic commerce is encouraging the growth of contract-like mechanisms whose terms are partially machine understandable and enforceable.
Digital Money Forum (DM7) (added: 22-Dec-2003)
http://www.consult.hyperion.co.uk/digmon7.html
The programme will cover the key aspects surrounding the implementation and use of digital money, i.e. the regulatory, technical, social, and economic.
Financial Cryptography '04 (added: 22-Dec-2003)
Money and trust in the digital world. Dedicated to the relationship between cryptography and data security and cutting-edge financial and payment technologies and trends... Emerging financial instruments and trends, legal regulation of financial technologies and privacy issues, encryption and authentication technologies, digital cash, and smartcard payment systems...
Financial Cryptography Payments Events Circuit (added: 22-Dec-2003)
http://www.financialcryptography.com/content/circuit/
A list of events in the Financial Cryptography space, including ones back to the birth of the field.
OpenMoney (added: 22-Dec-2003)
http://www.openmoney.org/omp/brief.html
OpenMoney's Brief on how community currencies can communicate. By way of a slide presentation.
GoldMoney (added: 22-Dec-2003)
More staid, stable and regular than the others. Users tend to be "store of value" people. Holders of the US patent on fully reserved digital gold currencies. Strong governance, fully implemented 5PM.
Pecunix (added: 22-Dec-2003)
Small, offshore, technically adept DGC. Notable for many related businesses such as an integrated securities exchange and a real-time gold exchange. Strong governance model, well in excess of its size.
eBullion (added: 22-Dec-2003)
Has cool cryptocard security widget to stop others from stealing value.
1mdc (added: 22-Dec-2003)
Derivative gold system, reserved in e-gold rather than physical metal. 1mdc is effectively a layered DGC providing protected and private claims on another DGC. This model is representative of future developments, where one Issuer varies and improves the offerings of another.
e-gold (added: 22-Dec-2003)
The market leader, with some $20m in reserves, and 50k transactions per day (mostly tiny). The only one with decent statistics, but governance model is incomplete.
PayPal (added: 22-Dec-2003)
PayPal are the biggest splash in the FC world. Historically based on First Virtual, they are a bank/credit card derivative dollar currency. As they charge Retailers high fees, they are a retail payment system, rather than being a true money, but they do permit user-to-user payments (a feature many a retail system mistakenly omits).
May Scale (added: 23-Dec-2003)
http://www.interestingsoftware.com/mayscale.html
A simple chart showing how different monies achieve different hardnesses. The May Scale puts digital monies into a perspective for loading and retail considerations.
WIN is not WASTE (added: 29-Dec-2003)
WINW is a small worlds networking utility. It was inspired by WASTE, a P2P darknet product released by Nullsoft in May 2003 and then withdrawn a few days later. The WINW project has diverged from its original mission to create a clean-room WASTE clone. Today, the WINW feature set is different from that of WASTE, and its protocol is incompatible with WASTE's protocol. However, WINW and WASTE achieve similar goals: they allow people who trust each other to communicate securely.
Phonebook - Linux Virtual Disk (added: 15-Dec-2003)
http://www.freenet.org.nz/Wiki/PhoneBook
Phonebook is an encrypted Linux filesystem (virtual disk) with unique 'plausible deniability' and 'disinformation' features.
COIN-OR - Computational Infrastructure for Operational Research (added: 30-Dec-2003)
http://www-124.ibm.com/developerworks/opensource/coin/
The Computational Infrastructure for Operations Research (COIN-OR, or simply COIN) project is an initiative to spur the development of open-source software for the operational research community.
Computer Programs for Social Network Analysis (added: 30-Dec-2003)
http://www.sfu.ca/~insna/INSNA/soft_inf.html
Comprehensive and up to date list of SNA software.
Making reliable distributed systems in the presence of software errors (added: 12-Dec-2003)
http://www.sics.se/~joe/thesis/armstrong_thesis_2003.pdf
This thesis presents Erlang together with a design methodology and set of libraries for building robust systems (called OTP). The central problem addressed by this thesis is the problem of constructing reliable systems from programs which may themselves contain errors. I argue that certain of the requirements necessary to build a fault-tolerant system are solved in the language, and others are solved in the standard libraries. Together these form a basis for building fault-tolerant software systems.
The Financial Cryptography 2004 conference has quietly (!) announced their accepted papers:
http://fc04.ifca.ai/program.htm
Click above, or read on for the full programme....
The Ephemeral Pairing Problem
Jaap-Henk Hoepman
Efficient Maximal Privacy in Voting and Anonymous Broadcast
Jens Groth
Practical Anonymity for the Masses with MorphMix
Marc Rennhard and Bernhard Plattner
Call Center Customer Verification by Query-Directed Passwords
Lawrence O'Gorman, Amit Bagga, and John Bentley
A Privacy-Friendly Loyalty System Based on Discrete Logarithms over Elliptic Curves
Matthias Enzmann, Marc Fischlin, and Markus Schneider
Identity-based Chameleon Hash and Applications
Giuseppe Ateniese and Breno de Medeiros
Selecting Correlated Random Actions
Vanessa Teague
Addressing Online Dictionary Attacks with Login Histories and Humans-in-the-Loop
S. Stubblebine and P.C. van Oorschot
An Efficient and Usable Multi-Show Non-Transferable Anonymous Credential System
Pino Persiano and Ivan Visconti
Electronic National Lotteries
Elisavet Konstantinou, Vasiliki Liagokou, Paul Spirakis, Yannis C. Stamatiou, and Moti Yung
Mixminion: Strong Anonymity for Financial Cryptography
Nick Mathewson and Roger Dingledine
Interleaving Cryptography and Mechanism Design: The Case of Online Auctions
Edith Elkind and Helger Lipmaa
The Vector-Ballot E-Voting Approach
Aggelos Kiayias and Moti Yung
Microcredits for Verifiable Foreign Service Provider Metering
Craig Gentry and Zulfikar Ramzan
Stopping Timing Attacks in Low-Latency Mix-Based Systems
Brian N. Levine, Michael K. Reiter, and Chenxi Wang
Secure Generalized Vickrey Auction without Third-Party Servers
Makoto Yokoo and Koutarou Suzuki
Provable Unlinkability Against Traffic Analysis
Ron Berman, Amos Fiat, and Amnon Ta-Shma
http://tab.computer.org/tfec/cec04/cfpWEC.html
Real world commerce is largely built on a fabric of contracts. Considered abstractly, a contract is an agreed framework of rules used by separately interested parties to coordinate their plans in order to realize cooperative opportunities, while simultaneously limiting their risk from each other's misbehavior. Electronic commerce is encouraging the growth of contract-like mechanisms whose terms are partially machine understandable and enforceable.
The First IEEE International Workshop on Electronic Contracting (WEC) is the forum to discuss innovative ideas at the interface between business, legal, and formal notions of contracts. The target audiences will be researchers, scientists, software architects, contract lawyers, economists, and industry professionals who need to be acquainted with the state-of-the-art technologies and the future trends in electronic contracting. The event will take place in San Diego, California, USA on July 6, 2004. The workshop will be held in conjunction with The International Conference on Electronic Commerce (IEEE CEC 2004).
Topics of interest include but are not limited to the following:
* Contract languages and user interfaces
* Computer aided contract design, construction, and composition
* Computer aided approaches to contract negotiation
* "Smart Contracts"
* "Ricardian Contracts"
* Electronic rights languages
* Electronic rights management and transfer
* Contracts and derived rights
* Relationship of electronic and legal enforcement mechanisms
* Electronic vs legal concepts of non-repudiation
* The interface between automatable terms and human judgement
* Kinds of recourse, including deterrence and rollback
* Monitoring compliance
* What is and is not electronically enforceable?
* Trans-jurisdictional commerce & contracting
* Shared dynamic ontologies for use in contracts
* Dynamic authorization
* Decentralized access control
* Security and dynamism in Supply Chain Management
* Extending "Types as Contracts" to mutual suspicion
* Contracts as trusted intermediaries
* Anonymous and pseudonymous contracting
* Privacy vs reputation and recourse
* Instant settlement and counter-party risk
Submissions and Important Dates:
Full papers must not exceed 20 pages printed using at least 11-point type and single spacing. All papers should be in Adobe Portable Document Format (PDF). The paper should have a cover page, on a separate page, which includes a 200-word abstract, a list of keywords, and the authors' e-mail addresses. Authors should submit a full paper via electronic submission to boualem@cse.unsw.edu.au. All papers selected for this conference are peer-reviewed. The best papers presented in the conference will be selected for special issues of a related computer science journal.
* Submissions must be received no later than January 10, 2004.
* Authors will be notified of their submission's status by March 2, 2004
* Camera-Ready versions must be received by April 2, 2004
Microsoft's new bounty program has all the elements of a classic movie script [1]. In Sergio Leone's third spaghetti western, The Man with No Name makes good money as a bounty hunter [2]. Is this another chance for him?
Microsoft's theory is that they can stick up a few wanted posters, and thus rid the world of these Ugly virus writers. Law Enforcement Officers with angel eyes will see this as a great opportunity. Microsoft has all this cash, and the LEOs need a helping hand. Surely with the right incentives, they can file the report to the CyberSecurity Czar in Washington that the tips are flooding in?
Wait for the tips, and go catch the Uglies. (And pay out the bounty.) Nothing could be simpler than that. Wait for more tips, and go catch more Uglies. And...
Wait a minute! In the film, Tuco (the Ugly) and The Man with No Name (the Good) are in cahoots! Somehow, Tuco always gets away, and the two of them split the bounty. Again and again... It's a great game.
Make no mistake, $250,000 in Confederate Gold just changed the incentive structure for your average virus writer. Up until now, writing viruses was just for fun. A challenge. Or a way to take down your hated anti-spam site. Some way for the frustrated ex-soviet nuclear scientist to stretch his talents. Or a way to poke a little fun at Microsoft.
Now, there is financial incentive. With one wanted poster, Microsoft has turned virus writing into a profitable business. All Blondie has to do is write a virus, blow away a few million user installations, and then convince Tuco to sit still for a while in a Yankee jail.
The Man with No Name may just ride again!
[1] http://news.com.com/2100-1083-5103923.html
[2] http://www.amazon.com/exec/obidos/ASIN/6304698798/
Thierry Meyssan writes a useful summary of the "most optimistic" or the "most pessimistic" case against the dollar, depending on which side of the geo-politico-economic divide one falls. In essence, the move to switch away from the dollar as international unit of account is gaining some momentum. Perhaps surprisingly to some, the notion of an Islamic gold unit is also making some ground, so it may be that there is a triumvirate of currency stability in earth's future:
Dollars, Euros, and gold.
What's this got to do with anything? In essence, money is an application of FC. Only in understanding the way the world is looking at - and pushing - money, can we understand how a new money project might unfold.
(Ed: I added an emphasis to stress the prediction within [and 2004] now confirmed in 2006, 2006-2 and 2008)
(interesting but non-financial geopolitical backdrop - snipped, see URL)
Whatever happens, Washington can no longer backtrack. In fact, the survival of the U.S. is menaced - not by an external enemy, but by internal economic weakness and tensions running between its communities. Many are becoming conscious of the fact that U.S. power is based upon a mirage, the dollar. These are only pieces of paper, printed when more are needed, while the rest of the world feels obliged to use them.
For the past three years, Jacques Chirac and Gerhard Schroder have engaged France and Germany in a pitiless war against the United States. They have sent emissaries world wide to convince other States to convert their monetary reserves to euros. The first to accept were Iran, Iraq and North Korea. Precisely the countries described by George W. Bush as those of the "axis of evil".
Meanwhile, Vladimir Putin has begun restoring the economic independence of the Russian Federation. He has reimbursed - ahead of time - the debts that Yeltsin had contracted with the International Monetary Fund and will also make an early repayment, before the end of the year, of the remaining debts to the Club of Paris.
Putin has calmly announced that he plans State control of the natural riches of his country. He has reminded others that the oligarchs made their fortunes overnight by appropriating, with the complicity of Yeltsin, all that belonged to the U.S.S.R. and that the State can demand the return of wealth which should never have been handed over to the oligarchs.
When Gerhard Schroder visited Putin at the beginning of October, Putin intimated that he would begin by regaining control over Russian gas and petrol and that he would convert their trading, now in dollars, to euros.
On his part, the Prime Minister of Malaysia, Mahatir Mohammad, has experimented with the abandonment of the dollar in international exchanges, but in order to replace it with gold. He has signed bilateral agreements with his country's business partners. Malaysian exports and imports will in future be traded in gold.
Buoyed by this, he has suggested to the Islamic Bank of Development a plan that would put an end to U.S. dominance. Inspired by the Arab cartel which created the oil shortage of 1974, he has outlined a decisive monetary assault. The idea is to switch world petrol trading to gold, thereby provoking the fall of the dollar and the collapse of the U.S. economy.
At first, Saudi Arabia, menaced by Washington neo-conservatives, was opposed to this plan, but became agreeable to it. The Islamic Bank of Development presented the proposal at the summit meeting of the Organisation of the Islamic Conference (OIC) which was held recently in Malaysia, presided over by Dr. Mahatir Mohammad. It was agreed that bilateral agreements for a transition to the gold standard would be prepared between Islamic countries, during the course of next year. At the next summit meeting, to be held in Senegal, the fifty-seven Member-States of the Islamic Conference would be invited to sign a multilateral agreement.
Vladimir Putin who also attended the Islamic Conference, since a large number of the citizens of the Russian Federation are Muslims, encouraged the plan.
Abandoning the dollar presents a long and difficult struggle, for Europe as well as for Muslim countries. While an international campaign accusing Dr. Mahatir Mohammed of resuscitating anti-semitism has been launched, Henry Kissinger and Condoleezza Rice appealed to the oligarch, Mikhail Khodorkovsky, for help in order to neutralise Putin. Last weekend, however, Putin had him arrested and detained.
Whatever happens, the monetary war has been declared. In the short-term, U.S. domination is menaced.
Thierry Meyssan
Journalist, writer and President of Réseau Voltaire
http://www.reseauvoltaire.net
http://www.mises.org/fullstory.asp?control=1333
The importance of the Austrian school of economics is nowhere better demonstrated than in the area of monetary theory. It is in this realm that the simplifying assumptions of mainstream economic theory wreak the most havoc. In contrast, the commonsensical, "verbal logic" of the Austrians is entirely adequate to understand the nature of money and its valuation by human actors.
Menger on the Origin of Money
The Austrian school has offered the most comprehensive explanation of the historical origin of money. Everyone recognizes the benefits of a universally accepted medium of exchange. But how could such a money come into existence? After all, self-interested individuals would be very reluctant to surrender real goods and services in exchange for intrinsically worthless pieces of paper or even relatively useless metal discs. It's true, once everyone else accepts money in exchange, then any individual is also willing to do so. But how could human beings reach such a position in the first place?
One possible explanation is that a powerful ruler realized, either on his own or through wise counselors, that instituting money would benefit his people. So he then ordered everyone to accept some particular thing as money.
There are several problems with this theory. First, as Menger pointed out, we have no historical record of such an important event, even though money was used in all ancient civilizations. Second, there's the unlikelihood that someone could have invented the idea of money without ever experiencing it. And third, even if we did stipulate that a ruler could have discovered the idea of money while living in a state of barter, it would not be sufficient for him to simply designate the money good. He would also have to specify the precise exchange ratios between the newly defined money and all other goods. Otherwise, the people under his rule could evade his order to use the newfangled "money" by charging ridiculously high prices in terms of that good.
Menger's theory avoids all of these difficulties. According to Menger, money emerged spontaneously through the self-interested actions of individuals. No single person sat back and conceived of a universal medium of exchange, and no government compulsion was necessary to effect the transition from a condition of barter to a money economy.
In order to understand how this could have occurred, Menger pointed out that even in a state of barter, goods would have different degrees of saleableness or saleability. (Closely related terms would be marketability or liquidity.) The more saleable a good, the more easily its owner could exchange it for other goods at an "economic price." For example, someone selling wheat is in a much stronger position than someone selling astronomical instruments. The former commodity is more saleable than the latter.
Notice that Menger is not claiming that the owner of a telescope will be unable to sell it. If the seller sets his asking price (in terms of other goods) low enough, someone will buy it. The point is that the seller of a telescope will only be able to receive its true "economic price" if he devotes a long time to searching for buyers. The seller of wheat, in contrast, would not have to look very hard to find the best deal that he is likely to get for his wares.
Already we have left the world of standard microeconomics. In typical models, we can determine the equilibrium relative prices for various real goods. For example, we might find that one telescope trades against 1,000 units of wheat. But Menger's insight is that this fact does not really mean that someone going to market with a telescope can instantly walk away with 1,000 units of wheat.
Moreover, it is simply not the case that the owner of a telescope is in the same position as the owner of 1,000 units of wheat when each enters the market. Because the telescope is much less saleable, its owner will be at a disadvantage when trying to acquire his desired goods from other sellers.
Because of this, owners of relatively less saleable goods will exchange their products not only for those goods that they directly wish to consume, but also for goods that they do not directly value, so long as the goods received are more saleable than the goods given up. In short, astute traders will begin to engage in indirect exchange. For example, the owner of a telescope who desires fish does not need to wait until he finds a fisherman who wants to look at the stars. Instead, the owner of the telescope can sell it to any person who wants to stargaze, so long as the goods offered for it would be more likely to tempt fishermen than the telescope.
Over time, Menger argued, the most saleable goods were desired by more and more traders because of this advantage. But as more people accepted these goods in exchange, the more saleable they became. Eventually, certain goods outstripped all others in this respect, and became universally accepted in exchange by the sellers of all other goods. At this point, money had emerged on the market.
The Contribution of Mises
Even though Menger had provided a satisfactory account for the origin of money, this process explanation alone was not a true economic theory of money. (After all, to explain the exchange value of cows, economists don't provide a story of the origin of cows.) It took Ludwig von Mises, in his 1912 The Theory of Money and Credit, to provide a coherent explanation of the pricing of money units in terms of standard subjectivist value theory.
In contrast to Mises's approach, which as we shall see was characteristically based on the individual and his subjective valuations, most economists at that time clung to two separate theories. On the one hand, relative prices were explained using the tools of marginal utility analysis. But then, in order to explain the nominal money prices of goods, economists resorted to some version of the quantity theory, relying on aggregate variables and in particular, the equation MV = PQ.
Economists were certainly aware of this awkward position. But many felt that a marginal utility explanation of money demand would simply be a circular argument: We need to explain why money has a certain exchange value on the market. It won't do (so these economists thought) to merely explain this by saying people have a marginal utility for money because of its purchasing power. After all, that's what we're trying to explain in the first place—why can people buy things with money?
Mises eluded this apparent circularity by his regression theorem. In the first place, yes, people trade away real goods for units of money, because they have a higher marginal utility for the money units than for the other commodities given away. It's also true that the economist cannot stop there; he must explain why people have a marginal utility for money. (This is not the case for other goods. The economist explains the exchange value for a Picasso by saying that the buyer derives utility from the painting, and at that point the explanation stops.)
People value units of money because of their expected purchasing power; money will allow people to receive real goods and services in the future, and hence people are willing to give up real goods and services now in order to attain cash balances. Thus the expected future purchasing power of money explains its current purchasing power.
But haven't we just run into the same problem of an alleged circularity? Aren't we merely explaining the purchasing power of money by reference to the purchasing power of money?
No, Mises pointed out, because of the time element. People today expect money to have a certain purchasing power tomorrow, because of their memory of its purchasing power yesterday. We then push the problem back one step. People yesterday anticipated today's purchasing power, because they remembered that money could be exchanged for other goods and services two days ago. And so on.
So far, Mises's explanation still seems dubious; it appears to involve an infinite regress. But this is not the case, because of Menger's explanation of the origin of money. We can trace the purchasing power of money back through time, until we reach the point at which people first emerged from a state of barter. And at that point, the purchasing power of the money commodity can be explained in just the same way that the exchange value of any commodity is explained. People valued gold for its own sake before it became a money, and thus a satisfactory theory of the current market value of gold must trace back its development until the point when gold was not a medium of exchange.
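The shape of that argument can be put in a few lines of Python; the numbers mean nothing, the point is only that the recursion bottoms out at the barter-era valuation instead of regressing forever:

# Toy sketch of the structure of Mises's regression argument (not economics,
# just the shape of the reasoning): today's purchasing power is explained by
# yesterday's, and the chain terminates at the pre-money barter valuation.
def purchasing_power(day, barter_value=1.0, drift=1.0):
    if day == 0:
        # Base case: before money existed, the commodity (say, gold) was
        # valued for its own sake - ordinary subjective value theory applies.
        return barter_value
    # Each day's expected purchasing power rests on memory of the day before.
    return drift * purchasing_power(day - 1)

print(purchasing_power(10))   # traces back 10 steps to the barter base case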
The two great Austrian theorists Carl Menger and Ludwig von Mises provided explanations for both the historical origin of money and its market price. Their explanations were characteristically Austrian in that they respected the principles of methodological individualism and subjectivism. Their theories represented not only a substantial improvement over their rivals, but to this day form the foundation for the economist who wishes to successfully analyze money.
Peppercoin, a venture by Ron Rivest and Silvio Micali to monetarise certain token money ideas based on statistical settlement, raised some money ($4 million on top of $1.7 million).
This is a standard crypto-hype-venture capital-DRM play. The crypto is cool, the people are the doyens of the cryptography field, and the market is open. What more perfect combination?
But, this is no new money venture. It is striking in its ignorance. Peppercoin ignores all the lessons of the past, in so complete a fashion that one wonders what they were thinking.
It has been very clear since about the late 90's that the retail model is bankrupt. Both Paypal and e-gold - the two successful money models so far - cracked this problem in innovative ways. Yet Peppercoin decided to ignore their work and go back to the merchant-consumer model.
It's also been more or less clear that the downloaded client model is a dead loss. I personally have been guilty of belatedly recognising that, and in the late 90s we rectified at least our understanding, if not our technology line. (The XML-X project was our answer to that.) There are ways to make the downloaded client model work, but they require integration with the application in a way that is decidedly absent in the Peppercoin model.
A further sin is the micropayments trap. Simple mathematics will show that micropayments don't work. Simply take any given merchant, calculate the largest plausible number of transactions, multiply by the low amount of each transaction, and work out how much revenue you get. Digital and IBM already discovered this at a cost of countless millions, and you can too, with a $5 pocket calculator.
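For instance, with deliberately generous, made-up numbers, the back-of-envelope calculation looks like this:

# Back-of-envelope check on micropayment revenue (illustrative numbers only).
downloads_per_day = 10_000   # an optimistic estimate for a single merchant
price = 0.25                 # 25 cents per item
processor_cut = 0.25         # payment system keeps, say, 25% of each sale

gross_per_year = downloads_per_day * price * 365
merchant_keep = gross_per_year * (1 - processor_cut)
processor_revenue = gross_per_year * processor_cut

print(f"gross sales per year:  ${gross_per_year:,.0f}")
print(f"merchant keeps:        ${merchant_keep:,.0f}")
print(f"payment system earns:  ${processor_revenue:,.0f}")
# Even with a heroic 10,000 sales a day, the payment system clears only
# about $230,000 a year from this one merchant - against which it must
# recover development, hardware, and sales costs.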
The only thing left is that Peppercoin has some sort of secret weapon. Always possible, and always unlikely. Expect them to raise another round, and then get absorbed somewhere and quietly forgotten.
This perceptive remark was made by Hasan (Martin Bramwell) in a private document. Actually, he said that the keystone of the issuance is the contract! So I am toying around with the phrase to see what rings the loudest bell.
No matter, they are Hasan's words. And, it is a remarkable observation that is worthy of deep attention. Consider this - can you find another project that pays even lip service to the contract in its architecture?
I never have. We invented the Ricardian Contract back in 1995, and at the time, even though we announced its form and discussed it widely, we fell into the trap of assuming it was too obvious. Indeed, that's what people told us, lulling us into complacency about its obviousness!
We actually thought that there was no point in pressing the place and case of the contract in Financial Cryptography, because fairly soon, all and sundry would sweep up the obvious construct, and our "first" would become just another forgotten footnote in the rubble of FC history.
Yet, none of that came to pass. Even though every small fact in this construct is easily established, just like static physics, and every piece of logic stands strong, the resultant archway appears too tall to see.
Why is that? It's not as if the contract is hard to understand. You take some text, you shove in some parsable elements, you sign it with OpenPGP's cleartext signature, and you hash the document. Really basic crypto, as it should be.
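A minimal sketch of that construction in Python - the contract fields are invented for illustration, the digest algorithm here is SHA-256 by way of example, and it assumes a local gpg installation with a usable signing key:

# Sketch of the Ricardian Contract construction described above (field names
# are illustrative; assumes a local `gpg` install with a signing key).
import hashlib
import subprocess

contract_text = """\
Name of Issue:   Example Gold Gram
Issuer:          Example Reserve Ltd (hypothetical)
Unit:            1 gram of gold, 99.99% fine
Terms:           Redeemable on demand at the issuer's published office.
"""

# 1. Clear-sign the human-readable text with OpenPGP, keeping it readable.
signed = subprocess.run(
    ["gpg", "--clearsign"],
    input=contract_text.encode(),
    capture_output=True, check=True,
).stdout

# 2. Hash the signed document; the digest becomes the contract's identifier,
#    carried in every transaction that trades in this instrument.
contract_id = hashlib.sha256(signed).hexdigest()
print("contract identifier:", contract_id)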
That then becomes the starting point for ... everything!
Maybe, if it's not in the construct, it's in what we did with it, that the mystery lies. Like a keystone, we built an entire aqueduct of governance over it. We took that hash, and tied it into the servers and the transactions and the repudiability. We took the signature and tied that into the Issuer. We took the text and tied that as a contract into the reserves.
And, we took the hash again and showed how the user was now part and party to the contract.
And... And...
Maybe our emphasis is wrong. Instead of looking at the keystone, we should be looking at the arches. Or, maybe the top-down view of the edifice is preferred, and how we got to the top of the world should be covered with hand waving and press releases.
Whichever. The water of governance flows without pause because it rides over something built around the keystone of a contract. The Ricardian Contract supports a civilisation of Financial Cryptography in a way that makes one realise that these are words yet to be appreciated.
Why is there no layer for Security in FC?
(Actually, I get this from time to time. "Why no X?!?" It takes a while to develop the answer for each one. This one is about security, but I've also been asked about Law and Economics.)
Security is all pervasive. It is not an add on. It is a requirement built in from the beginning and it infects all modules.
Thus, it is not a layer. It applies to all, although, more particularly, Security will be more present in the lower layers.
Well, perhaps that is not true. It could be said that Security divides into internal and external threats, and the lower layers are more normally about external threats. The Accounting and Governance layers are more normally concerned with the insider threat.
Superficially, security appears to be lower in the stack. But, a true security person recognises that an internal threat is more damning, more dangerous, and more frequent in reality than an external threat. In fact, real security work is often more about insider threats than outsider threats.
So, it's not even possible to be vaguely narrow about Security. Even the upper layers, Finance and Value, are critical, as you can't do much security until you understand the application that you are protecting and its concomitant values.