April 08, 2014

A very fast history of cryptocurrencies BBTC -- before Bitcoin

Before Bitcoin, there was cryptocurrency. Indeed, it has a long and deep history. If only for the lessons learnt, it is worth studying; in my ABC of Bitcoin investing, I consider not knowing anything before the paper to be a red flag. Hence, a very fast history of what came before.


The first known (to me) attempt at cryptocurrencies occurred in the Netherlands, in the late 1980s, which makes it around 25 years ago, or 20BBTC. In the middle of the night, the petrol stations in the remoter areas were being raided for cash, and the operators were unhappy about putting guards at risk there. But the petrol stations had to stay open overnight so that the trucks could refuel.

Someone had the bright idea of putting money onto the new-fangled smartcards that were then being trialled, and so electronic cash was born. Drivers of trucks were given these cards instead of cash, and the stations were now safer from robbery.

At the same time, the dominant retailer, Albert Heijn, was pushing the banks to invent some way to allow shoppers to pay directly from their bank accounts, which eventually became known as POS or point-of-sale.

Even before this, David Chaum, an American cryptographer, had been investigating what it would take to create electronic cash. His views on money and privacy led him to believe that in order to do safe commerce, we would need a token money that emulated physical coins and paper notes: specifically, the privacy feature of being able to pay someone hand-to-hand, and have that transaction complete safely and privately.

As far back as 1983, or 25BBTC, David Chaum invented the blinding formula, an extension of the RSA algorithm still used in the web's encryption. It enables a person to pass a number to another person in such a way that the receiver can modify it. When the receiver deposits her coin, as Chaum called it, at the bank, it bears the original signature of the mint, but it is not the same number as the one the mint signed. Chaum's invention allowed the coin to be modified untraceably without breaking the signature of the mint; hence the mint or bank was 'blind' to the transaction.
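To make the blinding trick concrete, here is a minimal sketch in Python -- my own toy illustration using textbook RSA with an absurdly small key, no hashing or padding, and certainly not DigiCash's actual protocol:

    import secrets
    from math import gcd

    # Mint's toy RSA keypair (a real mint would use ~2048-bit primes).
    p, q = 10007, 10009
    n = p * q
    e = 65537
    d = pow(e, -1, (p - 1) * (q - 1))   # mint's private signing exponent

    # 1. Alice picks a coin serial number and a random blinding factor r.
    coin = 123456789 % n
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break

    # 2. Alice blinds the coin; the mint sees only the blinded number.
    blinded = (coin * pow(r, e, n)) % n

    # 3. The mint signs blindly, never learning the coin itself.
    blind_sig = pow(blinded, d, n)      # equals coin^d * r (mod n)

    # 4. Alice unblinds, leaving a valid mint signature on the coin.
    sig = (blind_sig * pow(r, -1, n)) % n

    # 5. Anyone can verify the signature against the mint's public key,
    #    yet the mint cannot link it to the number it actually signed.
    assert pow(sig, e, n) == coin

The magic is in step 4: the signature verifies, yet the number the bank sees at deposit time is not the number the mint signed, which is exactly the unlinkability Chaum was after.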

All of this interest, together with the Netherlands' historically feverish attitude to privacy, probably had a lot to do with David Chaum's decision to migrate to the Netherlands. While working in the late 1980s at CWI, a hotbed of cryptography and mathematics research in Amsterdam, he started DigiCash and proceeded to build his Internet money invention, employing, amongst many others, names that would later become famous: Stefan Brands, Niels Ferguson, Gary Howland, Marcel "BigMac" van der Peijl, Nick Szabo, and Bryce "Zooko" Wilcox-Ahearn.

The invention of blinded cash was extraordinary, and it caused an unprecedented wave of press attention. Unfortunately, David Chaum and his company made some missteps, and fell foul of the central bank (De Nederlandsche Bank or DNB). The private compromise they agreed to was that DigiCash's e-cash product would only be sold to banks. This accommodation led the company on a merry dance, attempting to field a viable digital cash through many banks, and ending eventually in bankruptcy in 1998. The press attention brought very exciting deals to the table, with Microsoft, Deutsche Bank and others, but David Chaum was unable to use them to get to the next level.

On the coattails of DigiCash there were hundreds of startups a year working in this space, including my own efforts. In the mid-1990s, the attention switched from Europe to North America for two reasons: the Netscape IPO had released a huge amount of VC interest, and Europe had brought in the first regulatory clampdown on digital cash, the 1994 EU Report on Prepaid Cards, which morphed into a reaction against DigiCash.

Yet the first great wave of cryptocurrencies spluttered and died, overtaken by a second wave of web-based monies. First Virtual provided a brief spurt of excitement, only to be almost immediately replaced by PayPal, which did more or less the same thing.

The difference? PayPal allowed the money to go from person to person, whereas FV had insisted that to accept money you must "be a merchant" -- a restriction popular with banks and regulators, but hated by the people. PayPal also leapt forward by proposing its system as a hand-to-hand cash, literally: the first versions ran on the Palm Pilot, which was extraordinarily popular with geeks. But this geek-focus was quickly abandoned as PayPal discovered that what people -- real users -- really wanted was money in the web browser. Also, having found a willing userbase in the eBay community, its future was more or less guaranteed as long as it avoided the bank/regulatory minefield laid out for it.

As PayPal proved, the web had become the protocol of choice, even for money, and so Chaum's ideas were more or less forgotten in the wider western marketplace, although the tradition stayed alive in Russia with WebMoney, and there were isolated pockets of interest in the crypto communities. In contrast, several ventures started up chasing a variant of PayPal's web-hybrid: gold on the web. The company that succeeded initially was called e-gold, an American-run operation incorporated in Nevis in the Caribbean.

e-gold was a fairly simple idea: you sent in your physical gold or 'junk' silver, and they would credit e-gold to your account. Or you could buy new e-gold by sending a wire to Florida, and they would buy and hold the physical gold. By tramping the streets and winning customers over, the founder managed to get the company into the black and growing by around 1999. As e-gold the currency issuer was offshore, it did not require US onshore approval, and this enabled it for a time to target the huge American market of 'goldbugs' and also a growing worldwide community of Internet traders who needed to make cross-border payments. With its popularity on the increase, the independent exchange market exploded into life in 2000, and its future seemed set.

e-gold, however, ran into trouble over its libertarian ideal of allowing anyone to have an account. While in theory this is a fine concept, the steady stream of ponzis, HYIPs, 'games' and other scams attracted the attention of the Feds. In 2005, e-gold's Florida offices were raided, and that was the end of the currency as an effective force. The Feds then proceeded to mop up any of the competitors and exchange operations they could lay their hands on, ensuring the end of the second great wave of new monies.

In retrospect, 9/11 marked a huge shift in focus. Beforehand, the USA was fairly liberal about alternative monies, seeing them as potential business, innovation for the future. After 9/11 the view switched dramatically, albeit slowly: all cryptocurrencies were assumed to be hotbeds of terrorists and drug dealers, and therefore valid targets for total control. It's probably fair to speculate that e-gold didn't react so well to the shift. Meanwhile, over in Europe, they were going the other way. It had become abundantly clear that the attempt to shut down cryptocurrencies had been too successful: Internet business preferred to base itself in the USA, and there had never been any evidence of the bad things the regulators were scared of. Successive generations of the eMoney law were enacted to open up the field, but, being Europeans, they never really understood what a startup was, and the lowered barriers remained deal killers.

Which brings us forward to 2008, and the first public posting of the Bitcoin paper by Satoshi Nakamoto.



What's all this worth? The best way I can make this point is an appeal to authority:

Satoshi Nakamoto wrote, on releasing the code:
> You know, I think there were a lot more people interested in the 90's,
> but after more than a decade of failed Trusted Third Party based systems
> (Digicash, etc), they see it as a lost cause. I hope they can make the
> distinction that this is the first time I know of that we're trying a
> non-trust-based system.

Bitcoin is a result of history; decisions made long before rebounded through time and into the design. Nakamoto may have been the mother of Bitcoin, but it is a child of many fathers: David Chaum's blinded coins and the fateful compromise with DNB, e-gold's anonymous accounts and the post-9/11 realpolitik, the cypherpunks and their libertarian ideals, the banks and their industrial control policies -- these were the whole cloth out of which Nakamoto cut the invention.

And, finally, it must be stressed: almost all of the successes and missteps we see in the growing Bitcoin sector have been seen before. History is not just humming and rhyming, it's singing loudly.

Posted by iang at 07:14 PM

April 06, 2014

The evil of cryptographic choice (2) -- how your Ps and Qs were mined by the NSA

One of the excuses touted for the Dual_EC debacle was that the magical P & Q numbers chosen by a secret process were supposed to be defaults. Anyone was at liberty to change them.

Epic fail! It turns out that this might have been just that: a liberty, a hope, a dream. From last week's paper on attacking Dual_EC:

"We implemented each of the attacks against TLS libraries described above to validate that they work as described. Since we do not know the relationship between the NIST- specified points P and Q, we generated our own point Q′ by first generating a random value e ←R {0,1,...,n−1} where n is the order of P, and set Q′ = eP. This gives our trapdoor value d ≡ e−1 (mod n) such that dQ′ = P. (Our random e and its corresponding d are given in the Appendix.) We then modified each of the libraries to use our point Q′ and captured network traces using the libraries. We ran our attacks against these traces to simulate a passive network attacker.

In the new paper, which measures how hard it was to crack open TLS when corrupted by Dual_EC, the authors changed the Qs to match the P delivered, so as to attack the code. Each of the four libraries they had was in binary form, and it appears that each had to be hard-modified in binary in order to mind its own Ps and Qs.
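For the curious, here is a minimal sketch in Python of the procedure quoted above -- my own illustration of the trapdoor generation, not the paper's code, assuming NIST curve P-256, whose standard base point plays the role of Dual_EC's P:

    import secrets

    # NIST P-256 domain parameters: field prime, curve coefficient a,
    # group order n, and the base point, standing in for Dual_EC's P.
    p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
    a = p - 3
    n = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551
    P = (0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296,
         0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5)

    def ec_add(A, B):
        # Affine point addition/doubling; coefficient b is not needed here.
        if A is None: return B
        if B is None: return A
        (x1, y1), (x2, y2) = A, B
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                          # point at infinity
        if A == B:
            s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            s = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (s * s - x1 - x2) % p
        return (x3, (s * (x1 - x3) - y1) % p)

    def ec_mul(k, A):
        # Double-and-add scalar multiplication.
        R = None
        while k:
            if k & 1:
                R = ec_add(R, A)
            A = ec_add(A, A)
            k >>= 1
        return R

    e = secrets.randbelow(n - 1) + 1             # random trapdoor value
    Q1 = ec_mul(e, P)                            # Q' = eP, the published point
    d = pow(e, -1, n)                            # d = e^-1 mod n, kept secret
    assert ec_mul(d, Q1) == P                    # the trapdoor: dQ' = P

The trapdoor works because, given an output point R = sQ′, the holder of d can compute dR = sP, whose x-coordinate is the generator's next internal state -- at which point all future output is predictable.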

So did (a) the library implementors forget that issue? Or did (b) NIST/FIPS in its approval process fail to stress the need for users to mind their Ps and Qs? Or (c) did the NSA know all along that this would be a fixed quantity in every library, derived from the standard, which was itself pre-derived from their exhaustive internal search for a special friendly pair? In other words:

"We would like to stress that anybody who knows the back door for the NIST-specified points can run the same attack on the fielded BSAFE and SChannel implementations without reverse engineering.

Defaults, options, choices of any form have always been known to be bad for users, great for attackers and a downright nuisance for developers. Here, the libraries did the right thing by eliminating the chance for users to change those numbers. Unfortunately they, NIST and all points thereafter took the originals without question. Doh!

Posted by iang at 07:32 PM

April 01, 2014

The IETF's Security Area post-NSA - what is the systemic problem?

In the light of yesterday's newly revealed attack by the NSA on Internet standards, what are the systemic problems here, if any?

I think we can question the way the IETF is approaching security. It has taken a lot of thinking on my part, and not a few rants, to identify the flaw(s), against many aggressive defences and counterattacks from defenders of the faith. Where I am thinking today is this:

First, the good news. The IETF's Working Group concept is far better at developing general standards than anything we've seen so far (by this I mean ISO, national committees, industry cartels and what have you). However, it still suffers from two shortfalls.

1. the Working Group system is more or less easily captured by the players with the largest budget. If one views standards as the property of the largest players, then this is not a problem. If OTOH one views the Internet as a shared resource of billions, designed to serve those billions back for their efforts, the WG method is a recipe for disenfranchisement. Perhaps apropos, spotted on the TLS list by Peter Gutmann:

Documenting use cases is an unnecessary distraction from doing actual work. You'll note that our charter does not say "enumerate applications that want to use TLS".

I think reasonable people can debate and disagree on the question of whether the WG model disenfranchises the users, because even though a company can out-manoeuvre the open Internet through sheer persistence and money, we can at least see it happen. In this, the IETF stands in violent sunlight compared to that travesty of mouldy dark closets, CABForum, which shut users out while industry insiders prepared the base documents in secrecy.

I'll take the IETF any day, except when...

2. the Working Group system is less able to defend itself from a byzantine attack. By this I mean the security concept of an attack from someone who doesn't follow the rules, and breaks them in ways meant to break your model and assumptions. We can suspect byzantine disclosures in the fingered ID:

The United States Department of Defense has requested a TLS mode which allows the use of longer public randomness values for use with high security level cipher suites like those specified in Suite B [I-D.rescorla-tls-suiteb]. The rationale for this as stated by DoD is that the public randomness for each side should be at least twice as long as the security level for cryptographic parity, which makes the 224 bits of randomness provided by the current TLS random values insufficient.

Assuming the story as told so far, the US DoD should have added "and our friends at the NSA asked us to do this so they could crack your infected TLS wide open in real time."

Such byzantine behaviour maybe isn't a problem when the industry players are, for example, subject to open observation, as best behaviour can be forced, and honesty at some level is necessary for long-term reputation. But it likely is a problem where the attacker is accustomed to that other world: lies, deception, fraud, extortion, or any of a number of other tricks which are the tools of trade of the spies.

Which points directly at the NSA. Spooks being spooks, every spy novel you've ever read will attest to the deception and rule breaking. So where is this a problem? Well, only in the one area they are interested in: security.

Which is irony itself, as security is the field where byzantine behaviour is our meat and drink. Would the Working Group concept pass muster in an IETF security WG? Whether it does or not depends on whether you think it can defend against the byzantine attack. Likely it will pass by fiat because of the loyalty of those involved; I have been one of those WG stalwarts for a period, so I do see the dilemma. But in the cold hard sunlight, who is comfortable supporting a WG that is assisted by NSA employees who will apply all available SIGINT and HUMINT capabilities?

Can we agree or disagree on this? Is there room for reasonable debate amongst peers? I refer you now to these words:

On September 5, 2013, the New York Times [18], the Guardian [2] and ProPublica [12] reported the existence of a secret National Security Agency SIGINT Enabling Project with the mission to “actively [engage] the US and foreign IT industries to covertly influence and/or overtly leverage their commercial products’ designs.” The revealed source documents describe a US $250 million/year program designed to “make [systems] exploitable through SIGINT collection” by inserting vulnerabilities, collecting target network data, and influencing policies, standards and specifications for commercial public key technologies. Named targets include protocols for “TLS/SSL, https (e.g. webmail), SSH, encrypted chat, VPNs and encrypted VOIP.”
The documents also make specific reference to a set of pseudorandom number generator (PRNG) algorithms adopted as part of the National Institute of Standards and Technology (NIST) Special Publication 800-90 [17] in 2006, and also standardized as part of ISO 18031 [11]. These standards include an algorithm called the Dual Elliptic Curve Deterministic Random Bit Generator (Dual EC). As a result of these revelations, NIST reopened the public comment period for SP 800-90.

And as previously written here, the NSA has conducted a long-term programme to breach the standards-based crypto of the net.

As evidence of this claim, we now have *two attacks*, being clear attempts to trash the security of TLS and friends, and we have their own admission of intent to breach, in their own words. There is no shortage of circumstantial evidence that NSA people have pushed, steered and nudged the WGs into making bad decisions.

I therefore suggest we have the evidence to take to a jury. Obviously we won't be allowed to do that, so we have to do the next best thing: use our collective wisdom and make the call in the public court of Internet opinion.

My vote is -- guilty.

One single piece of evidence wasn't enough. Two was enough to believe, but alternate explanations sounded plausible to some. But we now have three solid bodies of evidence. Redundancy. Triangulation. Conclusion. Guilty.

Where it leaves us is in difficulties. We can try to avoid all this stuff by, e.g., avoiding American crypto, but it is a bit broader than that. Yes, they attacked and broke some elements of American crypto (and you know what I'm expecting to fall next). But they also broke the standards process, and that had even more effect on the world.

It has to be said that the IETF security area is now under a cloud. Not only do they need to analyse things back in time to see where it went wrong, but they also need some concept to stop it happening in the future.

The first step, however, is to actually see the clouds, and admit that rain might be coming soon. May the security AD live in interesting times. Borrow my umbrella?

Posted by iang at 11:56 PM

March 31, 2014

NSA caught again -- deliberate weakening of TLS revealed!?

In a scandal that now invites that legal term of art, "slam-dunk", there is news of a new weakness introduced into the TLS suite by the NSA:

We also discovered evidence of the implementation in the RSA BSAFE products of a non-standard TLS extension called “Extended Random.” This extension, co-written at the request of the National Security Agency, allows a client to request longer TLS random nonces from the server, a feature that, if enabled, would speed up the Dual EC attack by a factor of up to 65,000. In addition, the use of this extension allows for attacks on Dual EC instances configured with P-384 and P-521 elliptic curves, something that is not apparently possible in standard TLS.

This extension to TLS was introduced three distinct times through the open IETF Internet-Draft process, twice by an NSA employee and a well-known TLS specialist, and once by another. The way the extension works is that it increases the quantity of random numbers fed into the cleartext negotiation phase of the protocol. If the attacker has a heads-up on those random numbers, his task of divining the state of the PRNG becomes a lot easier. Indeed, the extension definition states more or less exactly that:

4.1. Threats to TLS

When this extension is in use it increases the amount of data that an attacker can inject into the PRF. This potentially would allow an attacker who had partially compromised the PRF greater scope for influencing the output.

The use of Dual_EC, the previously fingered dodgy standard, makes this possible. Which gives us two compromises of the standards process that, when combined, magically work together.
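As an aside, my own back-of-envelope reading -- an assumption on my part, not a claim from the paper -- is that the "factor of up to 65,000" is simply 2^16, the brute-force work saved when the attacker gets to see a whole Dual_EC output block rather than a truncated one:

    # Back-of-envelope arithmetic; my own reading, not the paper's claim.
    # Dual EC on P-256 emits 30 bytes per output block (the top 16 of a
    # point's 256 bits are discarded). A standard 28-byte TLS random shows
    # the attacker less than one block, so the missing bytes must be
    # brute-forced; Extended Random exposes whole blocks and saves that work.
    block_bytes = (256 - 16) // 8        # 30-byte Dual EC output block
    standard_random_bytes = 28           # TLS random minus 4-byte timestamp
    saved = 2 ** (8 * (block_bytes - standard_random_bytes))
    print(saved)                         # 65536, i.e. "up to 65,000"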

Our analysis strongly suggests that, from an attacker’s perspective, backdooring a PRNG should be combined not merely with influencing implementations to use the PRNG but also with influencing other details that secretly improve the exploitability of the PRNG.

Red faces all round.

Posted by iang at 06:12 PM