A post on Matthew Green's blog highlights that the Snowden revelations helped the push for HTTPS everywhere.
Firefox also reports a similar result, indicating a web-wide figure of around 80%.
(It should be noted that Google's decision to reward HTTPS sites by prioritising them in search results probably helped more than anything, but before you jump for joy at this new-found love for security socialism, note that it isn't working too well in the fake news department.)
The significance of this is that back in around 2005 some of us first worked out that we had to move the entire web to HTTPS. Logic at the time was:
Why is this important? Why do we care that a small group of sites is still running SSL v2? Here's why - it feeds into phishing:

1. In order for browsers to talk to these sites, they still perform the SSL v2 Hello.
2. Which means they cannot open with the full TLS hello.
3. Which means that servers like Apache cannot implement TLS features to operate multiple web sites securely through multiple certificates.
4. Which further means that the spread of TLS (a.k.a. SSL) is slowed down dramatically (only one protected site per IP number - schlock!), and
5. This finally means that anti-phishing efforts at the browser level haven't a leg to stand on when it comes to protecting 99% of the web.

Until *all* sites stop talking SSL v2, browsers will continue to talk SSL v2. Which means the anti-phishing features we have been building and promoting are somewhat held back because they don't so easily protect everything.
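(For context on point 3: the TLS feature that eventually unblocked multiple secure sites per IP is Server Name Indication, SNI, which an SSL v2-compatible hello cannot carry. Below is a minimal server-side sketch of SNI-based certificate selection in Python; the hostnames and certificate paths are hypothetical.)

```python
import ssl

# Hypothetical certificates for two sites sharing one IP address.
contexts = {}
for host in ("alice.example", "bob.example"):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(f"/etc/tls/{host}.crt", f"/etc/tls/{host}.key")
    contexts[host] = ctx

default_ctx = contexts["alice.example"]

def pick_certificate(ssl_socket, server_name, default_context):
    # The TLS hello carries the requested hostname (SNI), so one IP address
    # can serve many certificates - something an SSL v2 hello cannot do.
    if server_name in contexts:
        ssl_socket.context = contexts[server_name]

default_ctx.sni_callback = pick_certificate
# default_ctx.wrap_socket(listening_socket, server_side=True) would then
# negotiate the right certificate for each requested hostname.
```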
For the tl;dr: we can't protect the web while HTTP is still possible. Having both HTTP and HTTPS as alternatives broke the rule - there is only one mode, and it is secure - and allowed attackers like phishers to just use HTTP and *pretend* it was secure.
The significance of this for me is that, from that point in time until now, we can show that a typical turn around the OODA loop (observe, orient, decide, act) of Information Security Combat took about 14 years - albeit in the Internet protocol world, but that happens to be a big part of it.
During that time, a new bogeyman turned up - the NSA listening to everything - but that's ok. Decent security models should cover multiple threats, and we don't so much care which threat gets us to a comfortable position.
So says NIST...
10 years ago I annoyed the entire crypto-supply industry:
Hypothesis #1 -- The One True Cipher Suite

In cryptoplumbing, the gravest choices are apparently on the nature of the cipher suite. To include latest fad algo or not? Instead, I offer you a simple solution. Don't.
There is one cipher suite, and it is numbered Number 1.
Ciphersuite #1 is always negotiated as Number 1 in the very first message. It is your choice, your ultimate choice, and your destiny. Pick well.
The One True Cipher Suite was born of watching projects and groups wallow in the mire of complexity, as doubt caused teams to add multiple algorithms - a complexity that easily doubled the cost of the protocol, with consequent knock-on effects & costs & divorces & breaches & wars.
It - The One True Cipher Suite as an aphorism - was widely ridiculed in crypto and standards circles. Developers and standards groups like the IETF just could not let go of crypto agility, the term that was born to champion the alternative. This sacred cow led the TLS group to field something like 200 standard suites in SSL/TLS, before radically reducing them to 30 or 40 over time.
Now, NIST has announced that AES, as a single standard algorithm, is worth $250 billion in economic benefit over the 20 years of its project lifetime - from 1998 to now.
h/t to Bruce Schneier, who also said:
"I have no idea how to even begin to assess the quality of the study and its conclusions -- it's all in the 150-page report, though -- but I do like the pretty block diagram of AES on the report's cover."
One good suite based on AES allows agility within the protocol to be dropped. Entirely. Instead, upgrade the entire protocol to an entirely new suite, every 7 years. I said, if anyone was asking. No good algorithm lasts less than 7 years.
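To make the aphorism concrete, here is a toy sketch of what a hello looks like when there is nothing to negotiate - the protocol version implies exactly one suite, and a whole-protocol upgrade is the only way to change it. This is my own illustration, not any deployed protocol:

```python
# Toy sketch: one suite per protocol version, nothing to negotiate.
SUITES = {
    1: ("AES-256-GCM", "SHA-256", "X25519"),   # version 1: the One True Cipher Suite
    # version 2, some years later, would map to a completely new suite
}

def make_hello(version=1):
    return {"version": version, "suite": 1}     # the suite is always Number 1

def accept_hello(hello):
    if hello["version"] not in SUITES or hello["suite"] != 1:
        raise ValueError("unknown protocol version - upgrade, don't negotiate")
    return SUITES[hello["version"]]

print(accept_hello(make_hello()))   # ('AES-256-GCM', 'SHA-256', 'X25519')
```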
Crypto-agility was a sacred cow that should have been slaughtered years ago, but maybe it took this report from NIST to lay it down: $250 billion of benefit.
In another footnote, we of the Cryptix team supported the AES project because we knew it was the way forward. Raif built the Java test suite and others in our team wrote and deployed contender algorithms.
Jimmy Nguyen criticises SegWit on the basis that it breaks the signature of a contract according to US law. This is a reasonable argument to make but it is also not a particularly relevant one. In practice, this only matters in the context of a particularly vicious and time-wasting case. You could argue that all of them are, and lawyers will argue on your dime that you have to get this right. But actually, for the most part, in court, lawyers don’t like to argue things that they know they are going to lose. The contract is signed, it’s just not signed in a particularly helpful fashion. For the most part, real courts and real judges know how to distinguish intent from technical signatures, so it would only be relevant where the law states that a particular contract must be signed in a particular way, and then we’ve got other problems. Yes, I know, UCC and all that, but let’s get back to the real world.
But there is another problem, and Nguyen’s post has triggered my thinking on it. Let’s examine this from the perspective of triple entry. When we (by this I mean to include Todd and Gary) were thinking of the problem, we isolated each transaction as being essentially one atomic element. Think of an entry in accounting terms. Or think of a record in database terms. However you think about it, it’s a list of horizontal elements that are standalone.
When we sign it using a private key, we take the signature and append it to the entry. By this means, the entry becomes stronger - it carries its authorisation - but it still retains its standalone property.
So, with the triple entry design in that old paper, we don’t actually cut anything out of the entry, we just make it stronger with an appended signature. You can think of it as a strict superset of the old double entry, and even the older single entry if you wanted to go that far. Which makes it backwards compatible, which is a nice property: we can extract double entry from triple entry and still use all the old software we’ve built over the last 500 years.
And, standalone means that Alice can talk to Bob about her transactions, and Bob can talk to Carol about his transaction without sharing any irrelevant or private information.
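Here is a minimal sketch of such a standalone signed entry, assuming the PyNaCl library for Ed25519 signatures; the field names are illustrative, not taken from the paper:

```python
import json
from nacl.signing import SigningKey   # PyNaCl: pip install pynacl

# An entry is a standalone record; the signature is appended, not split off.
alice = SigningKey.generate()

entry = {"from": "Alice", "to": "Bob", "asset": "EUR",
         "amount": "100.00", "date": "2017-08-01"}
payload = json.dumps(entry, sort_keys=True).encode()

signed_entry = dict(entry, signature=alice.sign(payload).signature.hex())

# Anyone holding Alice's verify key can check the entry on its own,
# with no reference to any wider ledger.
alice.verify_key.verify(payload, bytes.fromhex(signed_entry["signature"]))
print("entry verifies standalone")
```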
Now, Satoshi’s design for triple entry broke the atomicity of transactions for consensus purposes. But it is still possible to extract the entries out of the UTXO set, and they remain standalone because they carry their signature. This is especially important for, say, an SPV client, but it’s also important for any external application.
Like this: I’m flying to Shanghai next week on Blockchain Airlines, and I’ve got to submit expenses. I hand the expenses department my Bitcoin entries, sans signatures, and the clerk looks at them and realises they are not signed. See where this is going? Because, compliance, etc, the expenses department must now be a full node. Not SPV. It must now hold the entire blockchain and go searching for that transaction to make sure it’s in there - it’s real, it was expended. Because, compliance, because audit, because tax, because that’s what they do - check things.
If Bitcoin is triple entry, this is making it a more expensive form of triple entry. We don’t need those costs, bearing in mind that these costs are replicated across the world - every user, every transaction, every expenses report, every accountant. For the cost of including a signature, an EC signature at that, the extra bytes gain us a LOT of strength, flexibility and cost savings.
(You could argue that we have to provide external data in the form of the public key. So whoever’s got the public key could also keep the sigs. This is theoretically true but is starting to get messy and I don’t want to analyse right now what that means for resource, privacy, efficiency.)
Some might argue that this causes more spread of Bitcoin, more fullnodes and more good - but that’s the broken window fallacy. We don’t go around breaking things to cause the economy to boom. A broken window is always a dead loss to society, although we need to constantly remind the government to stop breaking things to fix them. Likewise, we do not improve things by loading up the accounting departments of the world with additional costs. We’re trying to remove those costs, not load them up, honestly!
Then, but malleability! Yeah, that’s a nuisance. But the goal isn’t to fix malleability. The goal is to make the transactions more certain. Segwit hasn’t made transactions more certain if it has replaced one uncertainty with another uncertainty.
Today, I’m not going to compare one against the other - perhaps I don’t know enough, and perhaps others can do it better. Perhaps it is relatively better if all things are considered, but it’s not absolutely better, and for accounting, it looks worse.
Which does rather put the point on one’s worldview. SegWit seems to retain the certainty, but only as outlined above: when one’s worldview is full nodes, Bitcoin is your hammer and your horizon. E.g., if you’re only thinking about validation, then signatures are only needed for validation. Nailed it.
But for everyone else? Everyone else, everyone outside the Bitcoin world is just as likely to simply decline as they are to add a full node capability. “We do not accept Bitcoin receipts, thanks very much.”
Or, if you insist on Bitcoin, you have to go over to this authority and get a signed attestation from them that the receipt data is indeed valid. They’ve got a full node. Authenticity as a service. Some will think “business opportunity!” whereas others will think “huh? Wasn’t a central authority the sort of thing we were trying to avoid?”
I don’t know what the size of the market for interop is, although I do know quite a few people who obsess about it and write long unpublished papers (daily reminder - come on guys, publish the damn things!). Personally I would not make that tradeoff. I’m probably biased tho, in the same way that Bitcoiners are biased: I like the idea of triple entries, in the same way that Bitcoiners like UTXO. I like the idea that we can rely on data, in the same way that Bitcoiners like the idea that they can rely on a bunch of miners.
Now, one last caveat. I know that SegWit in all its forms is a political food fight. Or a war, depending on your use of the language. I’m not into that - I keep away from it because to my mind war and food fights are a dead loss to society. I have no position one way or the other. The above is an accounting and contractual argument, albeit with political consequences. I’m interested to hear arguments that address the accounting issues here, and not at all interested in arguments based on “omg you’re a bad person and you’re taking money from my portfolio.”
I’ve little hope of that, but I thought I’d ask :-)
Financial cryptography is that space between crypto and finance which, by nature of its inclusiveness of all economic activities, is pretty close to most of life as we know it. We bring human needs together with the net marketplace in a secure fashion. It’s all interconnected, and I’m not talking about IP.
Today I’m trying to solve my messaging problem. In short, tweak my messaging design to better support the use case or community I have in mind, moving it from the old client-server days into a p2p world. But to solve this I need to solve the institutional concept of persons, i.e. those who send messages. To solve that I need an identity framework. To solve the identity question, I need to understand how to hold assets, as an asset not held by an identity is not an asset, and an identity without an asset is not an identity. To resolve that, I need an authorising mechanism by which one identity accepts another for asset holding, that which banks would call "onboarding" but it needs to work for people not numbers, and to solve that I need a voting solution. To create a voting solution I need a resolution to the smart contracts problem, which needs consensus over data into facts, and to handle that I need to solve the messaging problem.
Bugger.
A solution cannot therefore be described in objective terms - it is circular, like life, recursive, dependent on itself. Which then leads me to thinking of an evolutionary argument, which, assuming an argument based on a higher power is not really on the table, makes the whole thing rather probabilistic. Hopefully, the solution is more probabilistically likely than human evolution, because I need a solution faster than 100,000 years.
This could take a while. Bugger.
Timeline on a hash collision attack:
| Year  | Event |
|-------|-------|
| 1993  | SHA0 published |
| 1995  | SHA1 published due to weaknesses found in SHA0 |
| 2001  | SHA2 published due to expectations of weakness in SHA1 |
| 2005  | Shandong team attacks MD5; SHA1 worried |
| 2009? | RocketSSL breached for using MD5 |
| 2014  | Chrome responds and starts phasing out SHA1 |
| 2017  | CWI & Google announce collision attack on SHA1 |
The point I wish to make here is that SHA1 was effectively deprecated in 2001 with the publication of SHA2. If you are vulnerable to a collision attack, then you had your moment of warning sixteen years ago.
On the other hand, think about this for a moment - in 2005 the Shandong shot was heard around the cryptographic world. Everyone knew! But we now see that SHA1 lasted an additional 12 years before it crumbled to a collision attack. That shows outstanding strength, an incredible run.
On the third hand, let's consider your protocol. If your protocol is /not/ vulnerable to a collision attack then SHA1 is still good. As are SHA0 and MD5. And, as an aside, no protocol should be vulnerable to a collision attack - such weakness is probably a bug.
So SHA1 is technically only a problem if you have another weakness in your protocol. And if you have that weakness, well, it's a pretty big one, and you should be worried for everything, not just SHA1.
On the fourth hand, however, institutions are too scared to understand the difference, and too bureaucratic to suggest better practices like eliminating collision vulnerabilities. Hence, all software suppliers have been working to deprecate SHA1 from consideration. To show you how asinine this gets, some software suppliers are removing older hash functions entirely, so that, presumably, you can't use them at all - neither to make new hashes nor to check old ones. (Doh!)
Security moves as a herd not as a science. Staying within the herd provides sociability in numbers, but social happiness shouldn't be mistaken for security, as the turkey well knows.
Finally, on the fifth hand, I still use SHA1 in Ricardo for naming Ricardian Contracts. Try as I might, for the life of me I still can't see how to attack it with collisions. After all, the issuer signs his own contract, and if he collides, he's up for both contracts, and there are copies of both distributed...
There is no cause for panic, if you've done your homework.
Today is the day that Corda goes open source. Which will be commented far and wide, so not a lot of point in duplicating that effort. But perhaps a few comments on where this will lead us, as a distributed ledger sector.
For a long time, since 2009, Bitcoin dominated the scene. Ethereum successfully broke that monopoly on attention, not without a lot of pain, but it is safe to say that for a while now, there have been two broad churches in town.
As Corda comes out, it will create a third force. From today, the world moves to three centers of gravity. As with the fabled three-body problem of astrophysics, it's a little difficult to predict how this will pan out, but some things can be said.
This post is to predict that third force. First, a recap of features, and shortfalls. Then, direction, and finally interest.
Featurism. It has to be said again and again (and over and over) that Corda is a system that was built for what the finance world wanted. It wasn't ever a better blockchain, indeed, it's not even a blockchain - that was considered optional and in the event, discarded. It also wasn't ever a smarter contract, as seen against say Ethereum.
Corda was what made sense to corporates wanting to trade financial instruments - a focus which remains noticeably lacking in 'the incumbent chains' and the loud startups.
Sharing. In particular, as is well hashed in the introductory paper: Corda does not share the data except with those who are participants to the contract. This is not just a good idea, it's the law - there are lots and lots of regulations in place that make sharing data a non-starter if you are in the regulated world. Selling a public, publishing blockchain to a bank is like selling a prime beef steak to a vegetarian - the feedgrain isn't going to improve your chances of a sale.
Toasting. Corda also dispenses with the planet-warming proof of work thing. While an extraordinary innovation, it just will not fly in a regulated world. Sorry about that, guys. But, luckily, it turns out we don't need it in the so-called private chain business - because we are dealing with a semi-trusted world in financial institutions, they can agree on a notary to sign off on some critical transactions. And -- innovation alert here -- as it happens, the notary is an interface or API. It can be a single server, or if you feel like going maximal, you can hook up a blockchain at that point. In theory at least, Corda can happily use Bitcoin to do its coordination, if you write the appropriate notary interface. If that's your thing. And for a few use cases, a blockchain works for the consensus part.
These are deviations. Then there are similarities.
Full language capability. Corda took one lead from Ethereum which was the full Turing-complete VM - although we use Java's JVM as it's got 20 years of history, and Java is the #1 language in finance. Which we can do without the DAO syndrome because our contracts will be user-driven, not on an unstoppable computer - if there's a problem, we just stop and resolve it. No problem.
UTXO. Corda also took the UTXO transaction model from Bitcoin - for gains in scalability and flexibility.
There's a lot more, but in brash summary - Corda is a lot closer to what the FIs might want to use.
Minuses. I'm not saying it's perfect, so let me say some bad things: Corda is not ready for production, has zero users, zero value on-ledger. It has not been reviewed for security, nor does that make sense until it's built out. It's missing some key things (which you can see in the docs or the new technical paper). It hasn't been tested at scale, neither with a regulator nor with a real user base.
Direction. Corda has a long long way to go, but the way it is going is ever closer to that same direction - what financial institutions want. The Ethereum people and the Bitcoin people have not really cottoned on to user-driven engineering, and remain bemused as to who the users of their system are.
Which brings us to the next point - interest. Notwithstanding all the above, or perhaps because of it - Corda already has the attention of the financial world.
All of which is to say: I predict that Corda will shoot to pole position. That's because it is powered by its members, and it is focussed on their needs. A clear feedback loop which is totally absent in the blockchain world.
The Game. Today, Corda becomes the third force in distributed ledger technologies. But I also predict it's not only the game changer, it's the entire game.
The reason I say that is because it is the only game that has asked the users what they want. In contrast, Bitcoin told the users it wanted an unstoppable currency - sure, works for a small group but not for the mass market. Ethereum told their users they need an unstoppable machine - which worked how spectacularly with the DAO? Not. What. We. Wanted.
Corda is the only game in town because it's the only one that asked the users. It's that simple.
(Editor's note: Dates are approximate. Written in May of 2014 as an educational presentation to lawyers, notes forgotten until now. Freshened 2016.09.11.)
Where is the contract? This is a question that has bemused the legal fraternity, bewitched the regulator, and sent the tech community down the proverbial garden path. Let's track it down.
Within the financial cryptography community, we have seen the discussion of contracts in approximately these ways: (a) the smart contract, (b) the Ricardian Contract, and (c) combinations of the two.
Let's look at each in turn.
a(i) Nick Szabo theorised the notion of smart contracts as far back as 1994. His design postulated the ability of our emerging financial cryptography technology to automate the performance of human agreements within computer programs that also handled money. That is, they are computer programs that manage performance of a contract with little or no human intervention.
At an analogous level at least, smart contracts are all around. So much of the performance of contracts is now built into the online services of corporations that we can't even count them anymore. Yet these corporate engines of performance were written once then left running forever, whereas Szabo's notion went a step further: he suggested smart contracts as more of a general service to everyone: your contractual-programmer wrote the smart contract and then plugged it into the stack, or the service or the cloud. Users would then come along and interact with this machine, to get services.
a(ii). Bitcoin. In 2009 Bitcoin deployed a limited form of Smart Contracts in an open service or cloud setting called the blockchain. This capability was almost a side-effect of its versatile payments transaction design. After author Satoshi Nakamoto left, the power of smart contracts was reduced in scope somewhat due to security concerns.
To date, success has been limited to simple uses such as Multisig, which provides a separation-of-concerns governance pattern by allowing multiple signers to release funds.
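As a toy illustration of that separation-of-concerns pattern, a 2-of-3 release check might look like the following sketch (using PyNaCl signatures as a stand-in; this is not Bitcoin's actual script machinery):

```python
from nacl.signing import SigningKey        # PyNaCl, as a stand-in signature scheme
from nacl.exceptions import BadSignatureError

directors = [SigningKey.generate() for _ in range(3)]
verify_keys = [d.verify_key for d in directors]

def release_funds(payment, signatures, threshold=2):
    # Funds move only if at least `threshold` of the known keys have signed.
    good = 0
    for vk in verify_keys:
        for sig in signatures:
            try:
                vk.verify(payment, sig)
                good += 1
                break
            except BadSignatureError:
                continue
    return good >= threshold

payment = b"pay 10 BTC to supplier"
sigs = [directors[0].sign(payment).signature, directors[2].sign(payment).signature]
print(release_funds(payment, sigs))        # True: two of the three signers agreed
```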
If we look at the above graphic we can see a fairly complicated story that we can now reduce into one smart contract. In a crowd funding, a person will propose a project. Many people will contribute to a pot of money for that project until a particular date. At that date, we have to decide whether the pot of money is enough to properly fund the project and if so, send over the funds. If not, return the funds.
To code this up, the smart contract has to do these steps: collect the pledges into a pot, hold them until the closing date, compare the total raised against the project's target, and then either send the funds to the proposer or return them to the contributors.
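A minimal sketch of that logic, in plain Python rather than blockchain script, with hypothetical names throughout:

```python
import datetime

class Crowdfund:
    # Sketch only: pledges are held until the closing date, then either
    # paid over to the proposer or refunded to the contributors.
    def __init__(self, proposer, target, deadline):
        self.proposer, self.target, self.deadline = proposer, target, deadline
        self.pledges = {}                              # contributor -> amount

    def pledge(self, who, amount, now):
        if now < self.deadline:
            self.pledges[who] = self.pledges.get(who, 0) + amount

    def settle(self, now):
        if now < self.deadline:
            return None                                # not closed yet
        total = sum(self.pledges.values())
        if total >= self.target:
            return [(self.proposer, total)]            # fund the project
        return list(self.pledges.items())              # refund everyone

fund = Crowdfund("film-project", target=100, deadline=datetime.date(2015, 6, 1))
fund.pledge("alice", 60, datetime.date(2015, 5, 1))
fund.pledge("bob", 50, datetime.date(2015, 5, 2))
print(fund.settle(datetime.date(2015, 6, 2)))          # [('film-project', 110)]
```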
A new service called Lighthouse now offers crowdfunding, but keep your eyes open for crowdfunding in Ethereum, as their smart contracts are more powerful.
b. Back in 1996, as part of a startup doing bond trading on the net, I created a method to bring a classical 'paper' contract into touch with a digital accounting system such as a cryptocurrency. The form, which became known as the Ricardian Contract, was readily usable for anything that you could put into a written contract, beyond its original notion of bonds.
In short: write a standard contract such as a bond. Insert some machine-readable tags that would include parties, amounts, dates, etc that the program also needed to display. Then sign the document using a cleartext digital signature, one that preserves the essence as a human-readable contract. OpenPGP works well for that. This document can be seen on the left of this bow-tie diagram.
Then - hash the document using a cryptographic message digest function that creates a one-for-one identifier for the contract, as seen in the middle. Put this identifier into every transaction to lock in which instrument we're paying back and forth. The transactions start from one genesis transaction and then fan out to many transactions with many users, all of them including the Ricardian hash - this is the right-hand part of the bow-tie.
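A minimal sketch of the bow-tie mechanics, with an abbreviated stand-in for the clearsigned contract text:

```python
import hashlib

# Left of the bow-tie: the cleartext-signed contract (abbreviated stand-in;
# a real one would be a full OpenPGP clearsigned bond document).
contract_text = b"""-----BEGIN PGP SIGNED MESSAGE-----
Name: Example Bond
Issuer: Example Corp
-----BEGIN PGP SIGNATURE-----
(signature)
-----END PGP SIGNATURE-----
"""

# Knot of the bow-tie: a one-for-one identifier for the contract.
contract_id = hashlib.sha1(contract_text).hexdigest()

# Right of the bow-tie: every transaction locks in which instrument it moves.
genesis = {"tx": 0, "contract": contract_id, "issue": 1_000_000}
payment = {"tx": 1, "contract": contract_id, "from": "issuer", "to": "alice", "amount": 50}
print(contract_id)
```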
See the 2004 paper and the Wikipedia page on the Ricardian Contract. We then have a contract form that is readable by person and machine, and can be locked into every transaction - from the genesis transaction, value trickles out to all others.
The Ricardian Contract is now emerging in the Bitcoin world. Enough businesses are looking at the possibilities of doing settlement and are discovering what I found in 1996 - we need a secure way to lock tangible writings of a contract on to the blockchain. A highlight might be NASDAQ's recent announcements, and Coinprism's recent work with OpenAssets project [1, 2, 3], and some of the 2nd generation projects have incorporated it without much fuss.
c(i). Around 2006 Chris Odom built OpenTransactions, a cryptocurrency system that extended the Ricardian Contract beyond issuance. The author found:
"While these contracts are simply signed-XML files, they can be nested like russian dolls, and they have turned out to become the critical building block of the entire Open Transactions library. Most objects in the library are derived, somehow, from OTContract. The messages are contracts. The datafiles are contracts. The ledgers are contracts. The payment plans are contracts. The markets and trades are all contracts. Etc.I originally implemented contracts solely for the issuing, but they have really turned out to have become central to everything else in the library."
In effect Chris Odom built an agent-based system using the Ricardian Contract to communicate all its parameters and messages within and between its agents. He also experimented with Smart Contracts, but I think they were a server-upload model.
c(ii). CommonAccord constructs small units containing matching smart code and prose clauses, and then composes these into full contracts using the browser. Once composed, the result can be read, verified and hashed a la Ricardian Contracts, and performed a la smart contracts.
c(iii) Let's consider person to person trading. With face-to-face trades, the contract is easy. With mail order it is harder, as we have to identify each component, follow a journey, and keep the paperwork. With the Internet it is even worse because there is no paperwork; it's all pieces of digital data that might be displayed, might be changed, might be lost.
Shifting forward to 2014, OpenBazaar decided to create a version of eBay or Amazon and put it onto the Bitcoin blockchain. To handle the formation of the contract between people distant and anonymous, they make each component into a Ricardian Contract, and place each one inside the succeeding component until we get to the end.
Let's review the elements of a contract in a cycle:
✓ Invitation to treat is found on blockchain similar to web page.
✓ offer by buyer
✓ acceptance by merchant
✓ (performance...)
✓ payment (multisig partner controls the money)
The Ricardian Contract finds itself as individual elements in the formation of the wider contract around a purchase. In each step, the prior step is included within the current contractual document. Like the lego blocks above, we can create a bigger contract by building on top of smaller components, thus implementing the trade cycle in Chris Odom's vision of Russian dolls.
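A toy sketch of that nesting, where each document in the trade cycle carries a hash of the prior one (hashes stand in for full embedding; the field names are mine):

```python
import hashlib, json

def wrap(step, body, prior=None):
    # Each contractual document carries the hash of the prior step,
    # so the whole trade cycle can be reassembled and verified later.
    doc = {"step": step, "body": body}
    if prior is not None:
        doc["prior"] = hashlib.sha256(
            json.dumps(prior, sort_keys=True).encode()).hexdigest()
    return doc

listing    = wrap("invitation to treat", {"item": "boots", "price": 40})
offer      = wrap("offer",               {"buyer": "alice"},     listing)
acceptance = wrap("acceptance",          {"merchant": "bob"},    offer)
payment    = wrap("payment",             {"multisig": "2-of-3"}, acceptance)
```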
In conclusion, the question of the moment was:
Where is the contract?
So far, as far as the technology field sees it, in three areas: the smart contract, the Ricardian Contract, and compositions of the two.
I see the future as convergence of these primary ideas: the parts or views we call smart & legal contracts will complement each other and grow together, being combined as elements into fuller agreements between people.
For those who think nothing much has changed in the world of contracts for a century or more, I say this: We live in interesting times!
(Editor's reminder: Written in May of 2014, and the convergence notion fed straight into "The Sum of all Chains".)
The Nakamoto Signature might be a thing. In 2014, the Sidechains whitepaper by Back et al introduced the term Dynamic Membership Multiple-party Signature or DMMS -- because we love complicated terms and long impassable acronyms.
Or maybe we don't. I can never recall DMMS nor even get it right without thinking through the words; in response to my cognitive poverty, Adam Back suggested we call it a Nakamoto signature.
That's actually about right in cryptology terms. When a new form of cryptography turns up and it lacks an easy name, it's very often called after its inventor. Famous companions to this tradition include RSA for Rivest, Shamir, Adleman; Schnorr for the name of the signature that Bitcoin wants to move to. Rijndael is our most popular secret key algorithm, from the inventors' names, although you might know it these days as AES. In the old days of blinded formulas to do untraceable cash, the frontrunners were signatures named after Chaum, Brands and Wagner.
On to the Nakamoto signature. Why is it useful to label it so?
Because, with this literary device, it is now much easier to talk about the blockchain. Watch this:
The blockchain is a shared ledger where each new block of transactions - the 10 minutes thing - is signed with a Nakamoto signature.
Less than 25 words! Outstanding! We can now separate this discussion into two things to understand: firstly: what's a shared ledger, and second: what's the Nakamoto signature?
Each can be covered as a separate topic. For example:
the shared ledger can be seen as a series of blocks, each of which is a single document presented for signature. Each block consists of a set of transactions built on the previous set. Each succeeding block changes the state of the accounts by moving money around; so given any particular state we can create the next block by filling it with transactions that do those money moves, and signing it with a Nakamoto signature.
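As a bare-bones sketch of that view - purely illustrative, nothing like Bitcoin's real block format:

```python
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def next_block(prev_block, balances, moves):
    # A block is just the next document presented for signature: the money
    # moves that take the shared state from one position to the next.
    for frm, to, amount in moves:
        balances[frm] = balances.get(frm, 0) - amount
        balances[to] = balances.get(to, 0) + amount
    return {"prev": block_hash(prev_block), "moves": moves, "state": dict(balances)}

genesis = {"prev": None, "moves": [], "state": {"alice": 10, "bob": 0}}
block_1 = next_block(genesis, {"alice": 10, "bob": 0}, [("alice", "bob", 5)])
print(block_1)
```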
Having described the shared ledger, we can now attack the Nakamoto signature:
A Nakamoto signature is a device to allow a group to agree on a shared document. To eliminate the potential for inconsistencies aka disagreement, the group engages in a lottery to pick one person's version as the one true document. That lottery is effected by all members of the group racing to find a hash over their copy of the document that beats an agreed difficulty target (loosely, the hash with the most leading zeros). The winning hash takes the prize and also becomes a verifiable 'token' of the one true document for members of the group: the Nakamoto signature.
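And a toy sketch of the lottery itself, with an artificially easy difficulty target:

```python
import hashlib

def nakamoto_sign(document, difficulty=4):
    # Every member races to find a nonce whose hash over the shared document
    # meets the difficulty target; the winning hash is the 'signature'.
    nonce = 0
    while True:
        digest = hashlib.sha256(document + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, sig = nakamoto_sign(b"block 42: alice pays bob 5")
print(nonce, sig)   # anyone can re-hash with this nonce and verify instantly
```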
That's it, in a nutshell. That's good enough for most people. Others however will want to open that nutshell up and go deeper into the hows, whys and whethers of it all. You'll note I left plenty of room for argument above; Economists will look at the incentive structure in the lottery, and ask if a prize in exchange for proof-of-work is enough to encourage an efficient agreement, even in the presence of attackers. Computer scientists will ask 'what happens if...' and search for ways to make it not so. Entrepreneurs might be more interested in what other documents can be signed this way. Cryptographers will pounce on that winning-hash thing.
But for most of us we can now move on to the real work. We haven't got time for minutiae. The real joy of the Nakamoto signature is that it breaks what was one monolithic incomprehensible problem into two more understandable ones. Divide and conquer!
The Nakamoto signature needs to be a thing. Let it be so!
In news today, BebaPay, the Google transit payment system in Nairobi, is shutting down. As predicted in this blog, the payment system was a disaster from the start, primarily because it did not understand the governance (aka corruption) flow of funds in the industry. This resulted in the erstwhile operators of the system conspiring to make sure it would not work.
How do I know this? I was in Nairobi when it first started up, and we were analysing a lot of market sectors for payments technology at the time. It was obvious to anyone who had actually taken a ride on a Matatu (the little buses that move millions of Kenyans to work) that automating their fares was a really tough sell. And, once we figured out how the flow of funds for the Matatu business worked, from inside sources, we knew a digital payments scheme was dead on arrival.
As an aside there is a play that could have been done there, in a nearby sector, which is the tuk-tuks or motorbike operators that are clustered at every corner. But that's a case-study for another day. The real point to take away here is that you have to understand the real flows of money, and when in Africa, understand that what we westerners call corruption means that our models are basically worthless.
Or in shorter terms, take a ride on the bus before you decide to improve it.
Meanwhile, in other news, Safaricom are now making a big push into the retail POS world. This was also in the wings at the time, and when I was there, we got the inside look into this field due to a friend who was running a plucky little mPesa facilitation business for retailers. He was doing great stuff, but the elephant in the room was always Safaricom, and it was no polite toilet-trained beast. Its reputation for stealing other companies' business ideas was legendary; in the payment systems world, you're better off modelling Safaricom as a bank.
Ah, that makes more sense... You'll note that Safaricom didn't press over-hard to enter the transit world.
The other great takeaway here is that westerners should not enter into the business of Africa lightly, if at all. Westerners' biggest problem is that they don't understand the conditions there, and consequently they will be trapped in a self-fulfilling cycle of western pseudo-economic drivel. Perhaps even more surprising, they also can't turn to their reliable local NGOs or government partners or consultancies, because these people are trained & paid by the westerners to feed back the same academic models.
How to break out of that trap economically is a problem I've yet to figure out. I've now spent a year outside the place, and I can report that I have met maybe 4 or 5 people amongst say 100 who actually understand the difference. Not a one of these is employed by an NGO, aid department, consultant, etc. And, these impressive organisations around the world that specialise in Africa are in this situation -- totally misinformed and often dangerously wrong.
I feel very badly for the poor of the world, they are being given the worst possible help, with the biggest smile and a wad of cash to help it along its way to failure.
Which leads me to a pretty big economic problem - solving this requires teaching what I learnt in a few years over a single coffee - can't be done. I suspect you have to go there, but even that isn't saying what's what.
Luckily however the developing world -- at least the parts I saw in Nairobi -- is now emerging with its own digital skills to address their own issues. Startup labs abound! And, from what I've seen, they are doing a much better job at it than the outsiders.
So, maybe this is a problem that will solve itself? Growth doesn't happen at more than 10% pa, so patience is perhaps the answer, not anger. We can live and hope, and if an NGO does want to take a shot at the title, I'm in for the 101st coffee.
It is commonly touted that Bitcoin solves the Byzantine Generals Problem because it deals with coordination between geographically separated points, and there is an element of attack on the communications. But I am somewhat suspicious that this is not quite the right statement.
To review "the Byzantine Generals Problem," let's take some snippets from Lamport, Shostack and Pease's seminal paper of that name:
We imagine that several divisions of the Byzantine army are camped outside an enemy city, each division commanded by its own general. The generals can communicate with one another only by messenger. After observing the enemy, they must decide upon a common plan of action. However, some of the generals may be traitors, trying to prevent the loyal generals from reaching agreement. The generals must have an algorithm to guarantee that

A. All loyal generals decide upon the same plan of action.
The loyal generals will all do what the algorithm says they should, but the traitors may do anything they wish. The algorithm must guarantee condition A regardless of what the traitors do.
The loyal generals should not only reach agreement, but should agree upon a reasonable plan. We therefore also want to insure that

B. A small number of traitors cannot cause the loyal generals to adopt a bad plan.
Lamport, Shostak and Pease, "The Byzantine Generals Problem", ACM Transactions on Programming Languages and Systems, Vol. 4, No. 3, July 1982, pp. 382-401.
My criticism is one of strict weakening. Lamport et al addressed the problem of Generals communicating, but there are no Generals in the Bitcoin design. If we read Lamport, although it doesn't say it explicitly, there are exactly N Generals, and they are all identified, being loyal or disloyal as the paper states. Which means that the Generals Problem only describes a fixed set in which everyone can authenticate each other.
While still a relevant problem, the Internet world of p2p solutions has another issue -- the sybil attack. Consider "Exposing Computationally-Challenged Byzantine Impostors" from 2005 by Aspnes, Jackson and Krishnamurthy:
Peer-to-peer systems that allow arbitrary machines to connect to them are known to be vulnerable to pseudospoofing or Sybil attacks, first described in a paper by Douceur [7], in which Byzantine nodes adopt multiple identities to break fault-tolerant distributed algorithms that require that the adversary control no more than a fixed fraction of the nodes. Douceur argues in particular that no practical system can prevent such attacks, even using techniques such as pricing via processing [9], without either using external validation (e.g., by relying on the scarceness of DNS domain names or Social Security numbers), or by making assumptions about the system that are unlikely to hold in practice. While he describes the possibility of using a system similar to Hashcash [3] for validating identities under certain very strong cryptographic assumptions, he suggests that this approach can only work if (a) all the nodes in the system have nearly identical resource constraints; (b) all identities are validated simultaneously by all participants; and (c) for "indirect validations," in which an identity is validated by being vouched for by some number of other validated identities, the number of such witnesses must exceed the maximum number of bad identities. This result has been abbreviated by many subsequent researchers [8, 11, 19-21] as a blanket statement that preventing Sybil attacks without external validation is impossible.

J. Aspnes, C. Jackson, and A. Krishnamurthy, "Exposing computationally-challenged byzantine impostors," Tech. Report YALEU/DCS/TR-1332, Yale University, 2005, http://www.cs.yale.edu/homes/aspnes/papers/tr1332.pdf
Prescient, or what? The paper then goes on to argue that the solution to the sybil attack is precisely in weakening the restriction over identity: *The good guys can also duplicate*.
We argue that this impossibility result is much more narrow than it appears, because it gives the attacking nodes a significant advantage in that it restricts legitimate nodes to one identity each. By removing this restriction...
This is clearly not what Lamport et al's Generals were puzzling over in 1982, but it is as clearly an important problem, related, and one that is perhaps more relevant to Internet times.
It's also the one solved according to the Bitcoin model. If Bitcoin solved the Byzantine Generals Problem, it did it by shifting the goal posts. Where then did Satoshi move the problem to? What is his problem?
With p2p in general and Bitcoin in particular, we're talking more formally about a dynamic membership set, where the set comes together once to demand strong consensus and that set is then migrated to a distinct set for the next round, albeit with approximately the same participants.
What's that? It's more like a herd, or a school of fish. As it moves forward, sudden changes in direction cause some to fall off, others to join.
The challenge then might be to come up with a name. Scratching my head to come up with an analogue in human military affairs, it occurs to me that the Crusades were something like this: a large group of powerful knights, accompanied by their individual retainers, with a forward goal in mind. The group was not formed on the state lines typical of warfare but on religious lines, and it would change with circumstances as it moved. Some joined, while some crusaders never made it to the front; others made it and died in the fighting, and indeed some entire crusades never made it out of Europe.
Crusaders were typically volunteers, often motivated by greed, force or threat of reputation damage. There were plenty of non-aligned interests in a crusade, and for added historical bonus, they typically travelled through Byzantium, or Constantinople as it was then known. And, as often as not, they bogged down there, literally falling to the Byzantine attacks of the day.
Perhaps p2p faces the Byzantine Crusaders Problem, and perhaps this is what Bitcoin has solved?
In the alternate, I've seen elsewhere that the problem is referred to as the Revolutionaries' Problem. This term also works in that it speaks to the democracy of the moment. As a group of very interested parties come together they strike out at the old ways and form a new consensus over financial and other affairs.
History will be the judge of this, but it does seem that for the sake of pedagogy and accuracy, we need a new title. Byzantine Crusaders' Problem? Democratic Revolutionaries' Problem? Consensus needed!
Just got tipped to Andrew Poelstra's FAQ on ASICs, where he says of Adam Back's Proof of Work system in Bitcoin:
In places where the waste heat is directly useful, the cost of mining is merely the difference between electric heat production and ordinary heat production (here in BC, this would be natural gas). Then electricity is effectively cheap even if not actually cheap.
Which is an interesting remark. If true -- assume we're in Iceland where there is a need for lots of heat -- then Bitcoin mining can be free at the margin. Capital costs remain, but we shouldn't look a gift horse in the mouth?
My view remains, and was from the beginning of BTC when Satoshi proposed his design, that mining is a dead-weight loss to the economy because it turns good electricity into waste heat. And, the capital race adds to that, in that SHA2 mining gear is solely useful for ... Bitcoin mining. Such a design cannot survive in the long run, which is a reflection of Gresham's law, sometimes expressed as the simplistic aphorism of "bad money drives out good."
Now, the good thing about predicting collapse in the long run is that we are never proven wrong, we just have to wait another day ... but as Ben Laurie pointed out somewhere or other, the current incentives encourage the blockchain mining to consume the planet, and that's not another day we want to wait for.
Not a good thing. But if we switch production to some more socially aligned pattern /such as heating/, then likely we could at least shift some of the mining to a cost-neutrality.
Why can't we go further? Why can't we make the information calculated socially useful, and benefit twice? E.g., we can search for SETI, fold some proteins, crack some RSA keys. Andrew has commented on that too, so this is no new idea:
7. What about "useful" proofs-of-work?

These are typically bad ideas for all the same reasons that Primecoin is, and also bad for a new reason: from the network's perspective, the purpose of mining is to secure the currency, but from the miner's perspective, the purpose of mining is to gain the block reward. These two motivations complement each other, since a block reward is worth more in a secure currency than in a sham one, so the miner is incentivized to secure the network rather than attacking it.
However, if the miner is motivated not by the block reward, but by some social or scientific purpose related to the proof-of-work evaluation, then these incentives are no longer aligned (and may in fact be opposed, if the miner wants to discourage others from encroaching on his work), weakening the security of the network.
I buy the general gist of the alignments of incentives, but I'm not sure that we've necessarily unaligned things just by specifying some other purpose than calculating a SHA2 to get an answer close to what we already know.
Let's postulate a program that calculates some desirable property. Because that property is of individual benefit, some individual can pay for it. Then, the missing link would be to create a program that takes in a certain amount of money, and distributes it to the nodes that run it, according to some fair algorithm.
What's a program that takes in and holds money, gets calculated by many nodes, and distributes it according to an algorithm? It's Nick Szabo's smart contract distributed over the blockchain. We already know how to do that, in principle, and in practice there are many efforts out there to improve the art. Especially, see Ethereum.
So let's assume a smart contract. Then, the question arises how to get your smart contract accepted as the block calculation for 17:20 on this coming Friday evening? That's a consensus problem. Again, we already know how to do consensus problems. But let's postulate one method: hold a donation auction and simply order these things according to the amount donated. Close the block a day in advance and leave that entire day to work out which is the consensus pick on what happens at 17:20.
Didn't get a hit? If your smart contract doesn't win a slot, then at 17:30 it expires and sends back the money. Try again, put in more money? Or we can imagine a variation where it has a climbing ramp of value: it starts at 10,000 at 17:20 and then adds 100 for each of the next 100 blocks, then expires. This then allows an auction crossing, which can be efficient.
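Pulling those pieces together, a toy model of the donation auction might look like this - entirely my own construction, not an existing system:

```python
# Toy model: candidate smart contracts bid with donations for the right to be
# Friday 17:20's block calculation; losers expire and refund their funders.
candidates = [
    {"contract": "fold-proteins-v1", "donation": 12_000, "funder": "alice"},
    {"contract": "crack-rsa-512",    "donation": 10_000, "funder": "bob"},
    {"contract": "seti-batch-77",    "donation":  9_500, "funder": "carol"},
]

def close_auction(candidates, slots=1):
    ranked = sorted(candidates, key=lambda c: c["donation"], reverse=True)
    winners, losers = ranked[:slots], ranked[slots:]
    refunds = [(c["funder"], c["donation"]) for c in losers]   # sent back at 17:30
    return winners, refunds

winners, refunds = close_auction(candidates)
print(winners[0]["contract"], refunds)
```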
An interesting attack here might be that I could code up a smartcontract-block-PoW that has a backdoor, similar to the infamous DUAL_EC random number generator from NIST. But, even if I succeed in coding it up without my obfuscated clause being spotted, the best I can do is pay for it to reach the top of the rankings, then win my own payment back as it runs at 17:20.
With such an attack, I get my cake calculated and I get to eat it too. As far as incentives to the miner go, I'd be better off going to the pub. The result is still at least as good as Andrew's comment, "from the network's perspective, the purpose of mining is to secure the currency."
What about the 'difficulty' factor? Well, this is easy enough to specify, it can be part of the program. The Ethereum people are working on the basis of setting enough 'gas' to pay for the program, so the notion of 'difficulty' is already on the table.
I'm sure there is something I haven't thought of as yet. But it does seem that there is more of a benefit to wring from the mining idea. We have electricity, we have capital, and we have information. Each of those is a potential for a bounty, so as to claw some sense of value back instead of just heating the planet to keep a bunch of libertarians with coins in their pockets. Comments?
This is what financial cryptography is about (h/t to Jeroen). Copied from Qi of The Codeless Code:
A novice asked of master Bawan: “Say something about the Heartbleed Bug.”
Said Bawan: “Chiuyin, the Governor’s treasurer, is blind as an earthworm. A thief may give him a coin of tin, claim that it is silver and receive change. When the treasury is empty, which man is the villain? Speak right and I will spare you all blows for one week. Speak wrong and my staff will fly!”
The novice thought: if I say the thief, Bawan will surely strike me, for it is the treasurer who doles out the coins. But if I say the treasurer he will also strike me, for it is the thief who takes advantage of the situation.
When the pause grew too long, Bawan raised his staff high. Suddenly enlightened, the novice cried out: “The Governor! For who else made this blind man his treasurer?”
Bawan lowered his staff. “And who is the Governor?”
Said the novice: “All who might have cried out ‘this man is blind!’ but failed to notice, or even to examine him.”
Bawan nodded. “This is the first lesson. Too easily we praise Open Source, saying smugly to each other, ‘under ten thousand eyeballs, every bug is laid bare’. Yet when the ten thousand avert their gaze, they are no more useful than the blind man. And now that I have spared you all blows for one week, stand at ease and tell me: what is the second lesson?”
Said the novice: “Surely, I have no idea.”
Bawan promptly struck the novice’s skull with his staff. The boy fell to the floor, unconscious.
As he stepped over the prone body, Bawan remarked: “Code as if everyone is the thief.”
One of the excuses touted for the Dual_EC debacle was that the magical P & Q numbers that were chosen by secret process were supposed to be defaults. Anyone was at liberty to change them.
Epic fail! It turns out that this might have been just that, a liberty, a hope, a dream. From last week's paper on attacking Dual_EC:
"We implemented each of the attacks against TLS libraries described above to validate that they work as described. Since we do not know the relationship between the NIST- specified points P and Q, we generated our own point Q′ by first generating a random value e ←R {0,1,...,n−1} where n is the order of P, and set Q′ = eP. This gives our trapdoor value d ≡ e−1 (mod n) such that dQ′ = P. (Our random e and its corresponding d are given in the Appendix.) We then modified each of the libraries to use our point Q′ and captured network traces using the libraries. We ran our attacks against these traces to simulate a passive network attacker.
In the new paper that measures how hard it was to crack open TLS when corrupted by Dual_EC, the authors changed the Qs to match the P delivered, so as to attack the code. Each of the four libraries they had was in binary form, and it appears that each had to be hard-modified in binary in order to mind their own Ps and Qs.
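To see the trapdoor relation in miniature, here is a toy calculation using a small multiplicative group in place of the real elliptic curve; the numbers are illustrative only (and pow(e, -1, n) needs Python 3.8+):

```python
# Toy stand-in for the curve: integers mod a small prime, generator g in place
# of the point P. The arithmetic mirrors Q' = e*P, d = e^-1 mod n, d*Q' = P.
p = 467            # small prime, group of order n = p - 1
n = p - 1
g = 2              # stands in for the point P

e = 123                        # secret scalar chosen by whoever makes Q'
Q = pow(g, e, p)               # Q' = e*P in the paper's additive notation
d = pow(e, -1, n)              # trapdoor d = e^-1 mod n (Python 3.8+)

assert pow(Q, d, p) == g       # d*Q' = P: holding d links Q'-based output back to P
print("trapdoor d =", d)
```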
So did (a) the library implementors forget that issue? or (b) NIST/FIPS in its approval process fail to stress the need for users to mind their Ps and Qs? or (c) the NSA knew all along that this would be a fixed quantity in every library, derived from the standard, which was pre-derived from their exhaustive internal search for a special friendly pair? In other words:
"We would like to stress that anybody who knows the back door for the NIST-specified points can run the same attack on the fielded BSAFE and SChannel implementations without reverse engineering.
Defaults, options, choice of any form have always been known to be bad for users, great for attackers and a downright nuisance for developers. Here, the libraries did the right thing by eliminating the chance for users to change those numbers. Unfortunately, they, NIST and all points thereafter took the originals without question. Doh!
In the light of yesterday's newly revealed attack by the NSA on Internet standards, what are the systemic problems here, if any?
I think we can question the way the IETF is approaching security. It has taken a lot of thinking on my part to identify the flaw(s), and not a few rants, with many and aggressive defences and counterattacks from defenders of the faith. Where I am thinking today is this:
First the good news. The IETF's Working Group concept is far better at developing general standards than anything we've seen so far (by this I mean ISO, national committees, industry cartels and whathaveyou). However, it still suffers from two shortfalls.
1. the Working Group system is more or less easily captured by the players with the largest budget. If one views standards as the property of the largest players, then this is not a problem. If OTOH one views the Internet as a shared resource of billions, designed to serve those billions back for their efforts, the WG method is a recipe for disenfranchisement. Perhaps apropos, spotted on the TLS list by Peter Gutmann:
Documenting use cases is an unnecessary distraction from doing actual work. You'll note that our charter does not say "enumerate applications that want to use TLS".
I think reasonable people can debate and disagree on the question of whether the WG model disenfranchises the users, because even though a company can out-manoeuvre the open Internet through sheer persistence and money, at least we can see it happen. In this, the IETF stands in violent sunlight compared to that travesty of mouldy dark closets, CABForum, which shut users out while industry insiders prepared the base documents in secrecy.
I'll take the IETF any day, except when...
2. the Working Group system is less able to defend itself from a byzantine attack. By this I mean the security concept of an attack from someone who doesn't follow the rules, and breaks them in ways meant to break your model and assumptions. We can suspect byzantine disclosures in the fingered ID:
The United States Department of Defense has requested a TLS mode which allows the use of longer public randomness values for use with high security level cipher suites like those specified in Suite B [I-D.rescorla-tls-suiteb]. The rationale for this as stated by DoD is that the public randomness for each side should be at least twice as long as the security level for cryptographic parity, which makes the 224 bits of randomness provided by the current TLS random values insufficient.
Assuming the story as told so far, the US DoD should have added "and our friends at the NSA asked us to do this so they could crack your infected TLS wide open in real time."
Such byzantine behaviour maybe isn't a problem when the industry players are for example subject to open observation, as best behaviour can be forced, and honesty at some level is necessary for long term reputation. But it likely is a problem where the attacker is accustomed to that other world: lies, deception, fraud, extortion or any of a number of other tricks which are the tools of trade of the spies.
Which points directly at the NSA. Spooks being spooks, every spy novel you've ever read will attest to the deception and rule breaking. So where is this a problem? Well, only in the one area where they are interested in: security.
Which is irony itself, as security is the field where byzantine behaviour is our meat and drink. Would the Working Group concept pass muster in an IETF security WG? Whether it does or not depends on whether you think it can defend against the byzantine attack. Likely it will pass by fiat because of the loyalty of those involved; I have been one of those WG stalwarts for a period, so I do see the dilemma. But in the cold hard light of sunlight, who is comfortable supporting a WG that is assisted by NSA employees who will apply all available SIGINT and HUMINT capabilities?
Can we agree or disagree on this? Is there room for reasonable debate amongst peers? I refer you now to these words:
On September 5, 2013, the New York Times [18], the Guardian [2] and ProPublica [12] reported the existence of a secret National Security Agency SIGINT Enabling Project with the mission to “actively [engage] the US and foreign IT industries to covertly influence and/or overtly leverage their commercial products’ designs.” The revealed source documents describe a US $250 million/year program designed to “make [systems] exploitable through SIGINT collection” by inserting vulnerabilities, collecting target network data, and influencing policies, standards and specifications for commercial public key technologies. Named targets include protocols for “TLS/SSL, https (e.g. webmail), SSH, encrypted chat, VPNs and encrypted VOIP.”
The documents also make specific reference to a set of pseudorandom number generator (PRNG) algorithms adopted as part of the National Institute of Standards and Technology (NIST) Special Publication 800-90 [17] in 2006, and also standardized as part of ISO 18031 [11]. These standards include an algorithm called the Dual Elliptic Curve Deterministic Random Bit Generator (Dual EC). As a result of these revelations, NIST reopened the public comment period for SP 800-90.
And as previously written here: the NSA has conducted a long-term programme to breach the standards-based crypto of the net.
As evidence of this claim, we now have *two attacks*, being clear attempts to trash the security of TLS and friends, and we have their own admission of intent to breach. In their own words. There is no shortage of circumstantial evidence that NSA people have pushed, steered, nudged the WGs to make bad decisions.
I therefore suggest we have the evidence to take to a jury. Obviously we won't be allowed to do that, so we have to do the next best thing: use our collective wisdom and make the call in the public court of Internet opinion.
My vote is -- guilty.
One single piece of evidence wasn't enough. Two was enough to believe, but alternate explanations sounded plausible to some. But we now have three solid bodies of evidence. Redundancy. Triangulation. Conclusion. Guilty.
Where it leaves us is in difficulties. We can try and avoid all this stuff by e.g., avoiding American crypto, but it is a bit broader than that. Yes, they attacked and broke some elements of American crypto (and you know what I'm expecting to fall next). But they also broke the standards process, and that had even more effect on the world.
It has to be said that the IETF security area is now under a cloud. Not only do they need to analyse things back in time to see where it went wrong, but they also need some concept to stop it happening in the future.
The first step however is to actually see the clouds, and admit that rain might be coming soon. May the security AD live in interesting times; would he care to borrow my umbrella?
In a scandal that is now entertaining that legal term of art "slam-dunk" there is news of a new weakness introduced into the TLS suite by the NSA:
We also discovered evidence of the implementation in the RSA BSAFE products of a non-standard TLS extension called "Extended Random." This extension, co-written at the request of the National Security Agency, allows a client to request longer TLS random nonces from the server, a feature that, if enabled, would speed up the Dual EC attack by a factor of up to 65,000. In addition, the use of this extension allows for attacks on Dual EC instances configured with P-384 and P-521 elliptic curves, something that is not apparently possible in standard TLS.
This extension to TLS was introduced 3 distinct times through an open IETF Internet Draft process, twice by an NSA employee and a well known TLS specialist, and once by another. The way the extension works is that it increases the quantity of random numbers fed into the cleartext negotiation phase of the protocol. If the attacker has a heads up to those random numbers, that makes his task of divining the state of the PRNG a lot easier. Indeed, the extension definition states more or less that:
4.1. Threats to TLS
When this extension is in use it increases the amount of data that an attacker can inject into the PRF. This potentially would allow an attacker who had partially compromised the PRF greater scope for influencing the output.
The use of Dual_EC, the previously fingered dodgy standard, makes this possible. Which gives us 2 compromises of the standards process that when combined magically work together.
Our analysis strongly suggests that, from an attacker's perspective, backdooring a PRNG should be combined not merely with influencing implementations to use the PRNG but also with influencing other details that secretly improve the exploitability of the PRNG.
Red faces all round.
Ricardo is now cloud-enabled.
Which I hasten to add, is not the same thing as cloud-based, if your head is that lofty. Not the same thing, at all, no sir, feet firmly placed on planet earth!
Here's the story. Apologies in advance for this self-indulgent rant, but if you are not a financial cryptographer, the following will appear to be just a lot of mumbo jumbo and your time is probably better spent elsewhere... With that warning, let's get our head up in the clouds for a while.
As a client-server construction, much like a web arrangement, and like Bitcoin in that the client is in charge, the client is of course vulnerable to loss or theft. So a backup of some form is required. Much analysis revealed that the backup had to be complete, it had to be off-client, and it had to be system provided.
That work has now taken shape and is delivering backups in bench-conditions. The client can backup its entire database into a server's database using the same point-to-point security protocol and the same mechanics as the rest of the model. The client also now has a complete encrypted object database using ChaCha20 as the stream cipher and Poly1305 as the object-level authentication layer. This gets arranged into a single secured stream which is then uploaded dynamically to the server, which in turn offers a service that allows a stream to be built up over time.
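For the curious, here's roughly the shape of that arrangement, as a sketch only -- this is not the Ricardo code, the names are made up, and it leans on the Python `cryptography` package's ChaCha20Poly1305 for the cipher and authenticator:

```python
# Sketch only: an append-only client log, each record AEAD-sealed, with a
# running hash over the ciphertext stream for later confirmation with the server.
import hashlib, struct
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

class EncryptedLog:
    def __init__(self, key: bytes):
        self.aead = ChaCha20Poly1305(key)    # ChaCha20 stream cipher + Poly1305 authenticator
        self.counter = 0                     # record counter, also used as nonce material
        self.running = hashlib.sha256()      # running hash over the uploaded stream

    def append(self, record: bytes) -> bytes:
        nonce = struct.pack(">4xQ", self.counter)      # 12-byte nonce built from the counter
        blob = self.aead.encrypt(nonce, record, None)  # object-level authentication included
        self.counter += 1
        self.running.update(blob)
        return blob                          # this ciphertext is what streams up to the server

    def stream_hash(self) -> bytes:
        return self.running.digest()         # compare against the server's hash of its file

key = ChaCha20Poly1305.generate_key()        # 256-bit key known only to the client
log = EncryptedLog(key)
for tx in (b"pay alice 100", b"pay bob 25"): # each transaction appended, encrypted, uploaded
    upload_to_server = log.append(tx)
```

The running hash is the piece the client and server later compare to confirm the stream arrived intact.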
Consider how a client works: Do a task? Make a payment? Generate a transaction!
Remembering always it's only a transaction when it is indeed transacted, this means that the transaction has to be recorded into the database. Our little-database-that-could now streams that transaction onto the end of its log, which is now stream-encrypted, and a separate thread follows the appends and uploads additions to the server. (Just for those who are trying to see how this works in a SQL context, it doesn't. It's not a SQL database, it follows the transaction-log-is-the-database paradigm, and in that sense, it is already stream oriented.)
In order to prove this, client-to-server and beginning-to-end, there is a hash confirmation over the local stream and over the server's file. When they match, we're golden. It is not a perfect backup because the backup trails by some number of seconds; it is not therefore /transactional/. People following the latency debate over Bitcoin will find that amusing, but I think that is possibly a step too far in our current development; a backup that is latent to a minute or so is probably OK for now, and I'm not sure we want to try transactional replication on phone users.
This is a big deal for many reasons. One is that it was a quite massive project, and it brought our tiny startup to a complete standstill on the technical front. I've done nothing but hack for about 3 months now, which makes it a more difficult project than say rewriting the entire crypto suite.
Second is the reasoning behind it. Our client side asset management software is now going to be used in a quite contrary fashion to our earlier design anticipations. It is going to manage the entire asset base of what is in effect a financial institution (FI), or thousands of them. Yet, it's going to live on a bog-standard Android phone, probably in the handbag of the Treasurer as she roves around the city from home to work and other places.
Can you see where this is going? Loss, theft, software failure, etc. We live in one of the most crime ridden cities on the planet, and therefore we have to consider that the FI's entire book of business can be stolen at any time. And we need to get the Treasurer up and going with a new phone in short order, because her customers demand it.
Add in some discussions about complexity, and transactions, and social networking in the app, etc etc and we can also see pretty easily that just saving the private keys will not cut the mustard. We need the entire state of the phone to be saved, and recovered, on demand.
But wait, you say! Of course the solution is cloud, why ever not?
No, because, cloud is insecure. Totally. Any FI that stores their customer transactions in the cloud is in a state of sin, and indeed it is in some countries illegal to even consider it. Further, even if the cloud is locally run by the institution, internally, this exposes the FI and the poor long suffering customer to fantastic opportunities for insider fraud. What I failed to mention earlier is that my user base considers corruption to be a daily event, and is exposed to frauds continually, including from their FIs. Which is why Ricardo fills the gap.
When it comes to insider fraud, cloud is the same as fog. Add in corruption and it's now smog. So, cloud is totally out, or, cloud just means you're being robbed blind like you always were, so there is no new offering here. Following the sense of Digicash from 2 decades earlier, and perhaps Bitcoin these days, we set the requirement: the server or center should not be able to forge transactions. This is a long-standing requirement (insert digression here into end-to-end evidence and authentication designs leading to triple entry and the Ricardian Contract, and/or recent cases backing FIs doing the wrong thing).
To bring these two contradictions together however was tricky. To resolve, I needed to use a now time-honoured technique theorised by the capabilities school, and popularised by amongst others Pelle's original document service called wideword.net and Zooko's Tahoe-LAFS: the data that is uploaded over UDP is encrypted to keys only known to the clients.
And that is what happens. As my client software database spits out data in an append-only stream (that's how all safe databases work, right??) it stream-encrypts this and then sends the stream up to the server. So the server simply has to offer something similar to the Unix file metaphor: create, read, write, delete *and append*. Add in a hash feature to confirm, and we're set. (It's similar enough to REST/CRUD that it's worth a mention, but different enough to warrant a disclaimer.)
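To make the metaphor concrete, here is a toy sketch of the server's end -- hypothetical names, nothing more than the file metaphor plus append and a digest, and the server only ever holds ciphertext:

```python
# Toy sketch of the server side: dumb storage of ciphertext streams.
# Keys and plaintext never leave the client; the server just appends and hashes.
import hashlib

class StreamStore:
    def __init__(self):
        self.files = {}                                  # name -> ciphertext bytes

    def create(self, name):        self.files[name] = b""
    def read(self, name):          return self.files[name]
    def write(self, name, data):   self.files[name] = data
    def delete(self, name):        del self.files[name]
    def append(self, name, chunk): self.files[name] += chunk

    def digest(self, name):
        # The client compares this against its own running hash over the stream.
        return hashlib.sha256(self.files[name]).hexdigest()

store = StreamStore()
store.create("treasurer-phone-backup")
store.append("treasurer-phone-backup", b"...ciphertext chunk...")
print(store.digest("treasurer-phone-backup"))
```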
A third reason this is a big deal is because the rules of the game have changed. In the 1990s we were assuming a technical savvy audience, ones who could manage public keys and backups. The PGP generation, if you like. Now, we're assuming none of that. The thing has to work, and it has to keep working, regardless of user foibles. This is the Apple-Facebook generation.
This benchmark also shines adverse light on Bitcoin. That community struggles to deal with theft, lurching from hack to bug to bankruptcy. As a result of their obsession over The Number One Criminal (aka the government) and with avoiding control at the center, they are blinded to the costly reality of criminals 2 through 100. If Bitcoin hypothetically were to establish the user-friendly goal that they can keep going in the face of normal 2010s user demands and failure modes, it'd be game over. They basically have to handwave stuff away as 'user responsibility' but that doesn't work any more. The rules of the game have changed, we're not in the 1990s anymore, and a comprehensive solution is required.
Finally, once you can do things like cloud, it opens up the possibilities for whole new features and endeavours. That of course is what makes cloud so exciting for big corporates -- to be able to deliver great service and features to customers. I've already got a list of enhancements we can now start to put in, and the only limitation I have now is capital to pay the hackers. We really are at the cusp of a new generation of payment systems; crypto-plumbing is fun again!
Editorial note: this rant was originally posted here but has now moved to a permanent home where it will be updated with new thoughts.
As many have noticed, there is now a permathread (Paul's term) on how to do random numbers. It's always been warm. Now the arguments are on solid simmer, raging on half a dozen cryptogroups, all thanks to the NSA and their infamous breach of NIST, American industry, mom's apple pie and the privacy of all things from Sunday school to Angry Birds.
Why is the topic of random numbers so bubbling, effervescent, unsatisfying? In short, because generators of same (RNGs) are *hard*. They are in practical experience trickier than most of the other modules we deal with: ciphers, HMACs, public key, protocols, etc.
Yet, we have come a long way. We now have a working theory. When Ada put together her RNG this last summer, it wasn't that hard. Out of our experience, herein is a collection of things we figured out; with the normal caveat that, even as RNs require stirring, the recipe for 'knowing' is also evolving.
In short, many collectors of entropy feed their small contributions into a Mixer, which uses the melded result to seed an Expander. The high level caller (application) uses this Expander to request her random numbers.

Entropy collector ----\
                       \      _____           _________
                        \    /     \         /         \
Entropy collector ------>(  mixer  )------->( expansion )-----> RNs
                        /    \_____/         \_________/
                       /
Entropy collector ----/
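To make that picture concrete, here is a bare-bones sketch -- emphatically not a production RNG (no entropy estimation, no reseeding, no locking), just the collector/mixer/expander shape, using SHA-256 for both the mixing and the counter-mode expansion:

```python
# Bare-bones sketch of the collector -> mixer -> expander shape above.
import hashlib, os, time, struct

class Mixer:
    def __init__(self):
        self.pool = hashlib.sha256()

    def collect(self, sample: bytes):
        self.pool.update(sample)              # each collector melds its small contribution

    def seed(self) -> bytes:
        return self.pool.digest()             # 32-byte seed handed to the expander

class Expander:
    def __init__(self, seed: bytes):
        self.seed = seed
        self.counter = 0

    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:                   # hash(seed || counter), counter mode
            out += hashlib.sha256(self.seed + struct.pack(">Q", self.counter)).digest()
            self.counter += 1
        return out[:n]

mixer = Mixer()
mixer.collect(os.urandom(32))                 # collector 1: the OS pool
mixer.collect(struct.pack(">d", time.time())) # collector 2: a timing sample
rng = Expander(mixer.seed())
print(rng.random_bytes(16).hex())
```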
That all said, good luck! Comments to the normal place, please, and Ed's note: this will improve in time.
I was pointed to Ripple and found it was actually a protocol (I thought it was a business, that's the trap with slick marketing). Worth a quick look. To my surprise, it was actually quite neat. However, tricks and traps abound, so this is a list of criticisms. I am hereby going to trash certain features of the protocol, but I'm trying to do it in the spirit of, please! Fix these things before it is too late. Been there, got the back-ache from the t-shirt made of chain mail. The cross you are building for yourself will be yours forever!
Ripple's low level protocol layout is about what Gary Howland's SOX1 tried to look like, with more bells and whistles. Ripple is a protocol that tries to do the best of today's ideas that are around (with a nod towards Bitcoin), and this is one of its failings: It tries to stuff *everything* into it. Big mistake. Let's look at this with some choice elements.
Numbers: Here are the numbers it handles:
1: 16-bit unsigned integer
2: 32-bit unsigned integer
3: 64-bit unsigned integer
6: Currency Amount
16: 8-bit unsigned integer
17: 160-bit unsigned integer
Positive. One thing has been spotted and spotted well: in computing and networking we typically do not need negative numbers, and on the rare occasions we do, we can handle it with flags. Same with accounting. Good!
Now, the negatives.
First bad: Too many formats! It may not be clear to anyone doing this work de novo, but it is entirely clear to me now that I am in SOX3 - that is, the third generation of not only the basic formats but the suite of business objects - that the above is way too complicated.
x.509 and PGP formats had the same problem: too many encodings. Thinking about this, I've decided the core problem is historical and philosophical. The engineers doing the encodings are often highly adept at hardware, and often are seduced by the layouts in hardware. And they are often keen on saving every darn bit, which requires optimising the layout up the wazoo! Recall the old joke, sung to the Monty Python tune:
But this has all changed. Now we deal in software, and scripting languages have generally pointed the way here. In programming and especially in network layouts, we want *one number*, and that number has to cope with all we throw at it. So what we really want is a number that goes from 0 to infinity. Luckily we have that, from the old x.509/telco school. Here's a description taken from SDP1:
Compact Integer A Compact Integer is one that expands according to the size of the unsigned integer it carries. ...
A Compact Integer is formed from one to five bytes in sequence. If the leading (sign) bit in each byte is set, then additional bytes follow. If a byte has the sign bit reset (0) then this is the last byte. The unsigned integer is constructed by concatenating the lower order 7 bits in each byte.
A one byte Compact Integer holds an integer of 0 to 127 in the 7 lower order bits, with the sign bit reset to zero. A two byte Compact Integer can describe from 128 to 16383 (XXXX check). The leading byte has the sign bit set (1) and the trailing byte has the sign bit reset (0).
That's it (actually, it can be of infinite length, unlike the description above). Surprisingly, everything can be described in this. In the evolution of SOX, we started out with all the above fields listed by Ripple, and they all fell by the wayside. Now, all business objects use CompactInts, all the way through, for everything. Why? Hold onto that question, we'll come back to it...
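Here's a sketch of one plausible reading of that description -- big-endian 7-bit groups, with the continuation ("sign") bit set on every byte except the last. Illustrative only, not the actual SOX code:

```python
# Sketch of a Compact Integer: base-128, most significant group first,
# top bit of each byte set except on the final byte.
def compact_encode(n: int) -> bytes:
    groups = []
    while True:
        groups.append(n & 0x7F)
        n >>= 7
        if n == 0:
            break
    groups.reverse()                              # most significant group first
    return bytes([g | 0x80 for g in groups[:-1]] + [groups[-1]])

def compact_decode(data: bytes) -> int:
    n = 0
    for b in data:
        n = (n << 7) | (b & 0x7F)
        if not (b & 0x80):                        # continuation bit clear: last byte
            break
    return n

assert compact_decode(compact_encode(0)) == 0
assert compact_decode(compact_encode(127)) == 127      # one byte
assert compact_decode(compact_encode(16383)) == 16383  # two bytes
```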
Second bad: Let's look at Ripple's concept of currencies:
Native Currency

Native amounts are indicated by the most-significant bit (0x8000000000000000) being clear. The remaining 63 bits represent a sign-and-magnitude integer. Positive amounts are encoded with the second highest bit (0x4000000000000000) set. The lower 62 bits denote the absolute value.
Ripple/IOU Currencies
Amounts of non-native currencies are indicated by the most-significant bit (0x8000000000000000) being set. They are encoded as a 64-bit raw amount followed by a 160-bit currency identifier followed by a 160-bit issuer. The issuer is always present, even if zero to indicate any issuer is acceptable.
The 64-bit raw amount is encoded with the most-significant bit set and the second most significant bit set if the raw amount is greater than zero. If the raw amount is zero, the remaining bits are zero. Otherwise, the remaining bits encode the mantissa (between 10^15 and 10^16-1) and exponent.
Boom! *Ripple puts semantics into low level syntax*. Of course, the result is a mess. Trying to pack too much business information into the layout has caused an explosion of edge cases.
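To see how much business meaning is being packed into one 64-bit word, here is a rough sketch of just the flag bits described above -- the mantissa/exponent split and the trailing currency/issuer fields are elided, and the names are mine:

```python
# Rough sketch of the flag bits in Ripple's 64-bit amount word, per the quote above.
NOT_NATIVE = 0x8000000000000000     # most-significant bit: clear = native XRP, set = IOU
POSITIVE   = 0x4000000000000000     # second bit: sign (for IOUs, set iff raw amount > 0)
LOW_62     = 0x3FFFFFFFFFFFFFFF

def classify_amount(word: int):
    if not (word & NOT_NATIVE):                    # native: sign-and-magnitude integer
        sign = "+" if (word & POSITIVE) else "-"
        return ("native", sign, word & LOW_62)
    if word == NOT_NATIVE:                         # IOU zero: all remaining bits are zero
        return ("iou", "zero", 0)
    sign = "positive" if (word & POSITIVE) else "non-positive"
    return ("iou", sign, "mantissa/exponent not decoded in this sketch")

print(classify_amount(0x4000000000000005))         # ('native', '+', 5)
```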
What is going on here is that the architects of the Ripple protocol have not understood the power of OO. The notion of a currency is a business concept, not a layout. The packets that do things like transactions are best left to the business layer, and those packets will define what a currency amount means. And, they'll do it in the place where limits can be dealt with:
Rationale

The Ripple code uses a specialized internal and wire format to represent amounts of currency. Because the server code does not rely on a central authority, it can have no idea what ranges are sensible with currencies. In addition, it's handy to be able to use the same code both to handle currencies that have inflated to the point that you need billions of units to buy a loaf of bread and to handle currencies that have become extremely valuable and so extremely small amounts need to be tracked.
The design goals are:
Accuracy - 15 decimal digits
Wide representation range - 10^80 units of currency to 10^-80 units
Deterministic - no vague rounding by unspecified rules.
Fast - Integer math only.
(my emphasis) They have recognised the problems well. Now come back to that question: why does SOX3 use CompactInts everywhere? Because of the above Rationale. In the business object (recall, the power of OO) we can know things like "billions of units to buy a loaf of bread" and also get the high value ones into the same format.
Next bad: Contractual form. This team has spotted the conundrum of currency explosion, because that's the space they chose: everyone-an-issuer (as I termed it). Close:
Custom Currencies

Currency 160-bit identifier is the hash of the currency definition block, which is also its storage index. Contains: Domain of issuer. Issuer's account. Auditor's account (if any). Client display information. Hash of policy document.
So, using the hash leads to an infinite currency space, which is the way to handle it. Nice! Frequent readers know where I'm going with this: their currency definition block is a variation of the Ricardian Contract, in that it contains, amongst other things, a "Hash of the policy document."
It's very close, it's almost a good! But it's not close enough. One of the subtleties of the Ricardian Contract is that because it put that information into the contract, *and not in some easily cached record*, it forced the following legal truth on the entire system: the user has the contract. Only with the presence of the contract can we now get access to data above, only with the presence of the contract can we even display to the user simple things like decimalisation. This statement -- the user has the contract -- is a deal changer for the legal stability of the business. This is your get out of jail free card in any dispute, and this subtle power should not be forgone for the mere technical benefit of data optimisation of cached blocks.
Next bad:
There are three types of currencies on ripple: ripple's native currency, also known as Ripples or XRP, fiat currencies and custom currencies. The latter two are used to denominate IOUs on the ripple network.

Native currency is handled by the absence of a currency indicator. If there is ever a case where a currency ID is needed, native currency will use all zero bits.
Custom currencies will use the 160-bit hash of their currency definition node as their ID. (The details have not been worked out yet.)
National currencies will be handled by a naming convention that specifies the three-letter currency code, a version (in case the currency is fundamentally changed), and a scaling factor. Currencies that differ only by a scaling factor can automatically be converted as transactions are processed. (So whole dollars and millionths of a penny can both be traded and inter-converted automatically.)
What's that about? I can understand the decision to impose one microcurrency into the protocol, but why a separate format? Why four separate formats? This is a millstone that the software will have to carry, a cost that will drag and drag.
There is no reason to believe that XRP or Natives or Nationals can be handled any differently from Customs. Indeed, the quality of the software demands that they be handled equivalently; the last thing you want is exceptions and multiple paths and easy optimisations. Indeed, the concept of contracts demands it, and the false siren of the Nationals is just the journey you need to go on to understand what a contract is. A USD is not a greenback is not a self-issued dollar is not a petrodollar, and this:
Ripple has no particular support for any of the 3 letter currencies. Ripple requires its users to agree on meaning of these codes. In particular, the person trusting or accepting a balance of a particular currency from an issuer, must agree to the issuer's meaning.
is a cop-out. Luckily the solution is simple, scrap all the variations and just stick with the Custom.
Next: canonical layouts. Because this is a cryptographic payment system, in the way that only financial cryptographers understand, it is required that there be for every element and every object and every packet a single reliable canonical layout. (Yeah, so that rules out XML, JSON, PB, Java serialization etc. Sorry guys, it's that word: security!)
The short way to see this is signing packets. If you need to attach a digital signature, the recovery at the other end has to be bit-wise perfect because otherwise the digital signature fails.
We call this a canonical layout, because it is agreed and fixed between all. Now, it turns out that Ripple has a canonical layout: binary formats. This is Good. Especially, binary works far better with code, quality, networking, and canonicalisation.
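A tiny illustration of why, entirely made up but it makes the point: serialise the same payment two ways, and only the fixed binary layout gives you the same bytes -- and therefore the same hash to sign -- every time:

```python
# Why signed packets want one canonical byte layout.
import hashlib, json, struct

payment = {"from": 17, "to": 42, "units": 1000}

# JSON: key order, whitespace and number formatting can vary by producer.
a = json.dumps(payment).encode()
b = json.dumps(payment, sort_keys=True, separators=(",", ":")).encode()
print(hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest())   # False

# A fixed binary layout: every implementation emits exactly these bytes,
# so a signature over the hash verifies bit-for-bit at the other end.
canonical = struct.pack(">QQQ", payment["from"], payment["to"], payment["units"])
print(hashlib.sha256(canonical).hexdigest())
```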
But Ripple also has a non-canonical format: JSON. This is a waste of energy. It adds little benefit because you need the serialisation methods for both anyway, and the binary will always be of higher quality because of those more demanding needs mentioned above. I'd say this is a bad, although as I'm not aware of what benefit they get from the JSON, I'll reserve judgement on that one.
Field Name Encodings -- good. This list recognises the tension for security coding. There needs to be a single place where all the tags are defined. I don't like it, but I haven't seen a better way to do this, and I think what we are seeing here in the Field Name Encodings' long list of business network objects is just that: the centralised definition of what format to expect to follow each tag.
Another quibble -- I don't see much mention of versions. In practice, business objects need them.
Penultimate point, if you're still with us. Let's talk layering. As is mooted above, the overall architecture of the Ripple protocol bundles the business information in with the low level packet stuff. In the same place as numbers, we also have currencies defined, digital signing and ledger items! That's just crazy. Small example, hashes:
4: 128-bit hash
5: 256-bit hash
And then there is the Currency hash of the contract information, which is a 160-bit encoding... Now, this is an unavoidable tension. The hash world won't stay still -- we started out with MACs of 32 bytes, then MD5 of 128 bits, then SHA1 of 160 bits (and, important to realise, SHA1 is deprecated); we are now faced with SHA2 in 4 different lengths, Keccak with variable length sponge function hashes, and emerging Polys of 64 bits.
Hashes won't sit still. I've also got in my work hash truncations of 48 bits or so, and pretend-hashes of 144 bits! Those latter are for internal float accounts for things like Chaumian blinded money (c.f., these days ZeroCoin).
So, Hashes are as much a business object decision as anything else. Hashes therefore need to be treated as locally designated but standardised units. Just setting hashes into the low layer protocol isn't the answer, you need a suite of higher level objects. The old principle of the one true cipher suite just doesn't work when it comes to hashes.
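By way of a sketch of what "locally designated but standardised units" might look like -- hypothetical names, the idea being simply that the algorithm tag and length travel with the digest inside the business object rather than being frozen into the wire layer:

```python
# Sketch: a hash as a small business-level object rather than a fixed wire slot.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class HashRef:
    algo: str          # e.g. "sha256", or a locally designated algorithm
    digest: bytes      # possibly truncated, by the business object's own rules

def make_ref(data: bytes, algo: str = "sha256", length: int = 32) -> HashRef:
    return HashRef(algo, hashlib.new(algo, data).digest()[:length])

ref = make_ref(b"contract text", length=6)     # a 48-bit truncation, as mentioned above
print(ref.algo, ref.digest.hex())
```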
One final point. In general, it has to be stressed: in order to do network programming *efficiently* one has to move up the philosophical stack and utilise the power of Object Oriented Programming (used to be called OOP). Too many network protocols fall into a mess because they think OOP is an application choice, and they are at a lower place in the world. Not so; if there is anywhere that OOP makes a huge difference it is in network programming. If you're not doing it, you're probably costing yourself around 5 times the effort *and reducing your security and reliability*. That's at least according to some informal experiments I've run.
Ripple's not doing it, and this will strongly show in the altLang family.
(Note to self, must publish that Ouroboros paper which lays this out in more detail.)
It being Christmas and we're all looking for a little fun, David Wagner has posted a challenge that was part of a serious study conducted by Ka-Ping Yee and himself:
Are you up to it? Are you a hacker-hero or a manager-mouse? David writes:
I believe I've managed to faithfully reconstruct the version of Ping's code that contains the deliberately inserted bug. If you would like to try your hand at finding the bug, you can look at it yourself:
http://www.cs.berkeley.edu/~daw/tmp/pvote-backdoored.zip
I'm copying Ping, in case he wants to comment or add to this.
Some ground rules that I'd request, if you want to try this on your own:
Here's some additional information that may help you.
We told reviewers that there exists at least one bug, in Navigator.py, in a region that contains 100 lines of code. I've marked the region using comments. So, you are free to focus on only that part of the code (I promise you that we did not deliberately insert any bug anywhere else outside that region). Of course, I'm providing all the code, because you may need to understand how it all interacts. The original Pvote code was written to be as secure and verifiable as we could make it; I'm giving you a modified version that was modified to add a bug after the fact. So, this is not some "obfuscated Python" contest where the entire thing was designed to conceal a malicious backdoor: it was designed to be secure, and we added a backdoor only as an afterthought, as a way to better understand the effectiveness of code review.
To help you conduct your code review, it might help to start by understanding the Pvote design. You can read about the theory, design, and principles behind Pvote in our published papers:
The Pvote code probably won't make sense without understanding some aspects of its design and how it is intended to be used, so this background material might be helpful to you.
We also gave reviewers an assurance document, which outlines the "assurance case" (a detailed argument describing why we believe Pvote is secure and fit for purpose and free of bugs). Here's most of it:
http://www.cs.berkeley.edu/~daw/tmp/pvad-excerpts.pdf
Why not all of it? Because I'm lazy. The full assurance document contains the actual, unmodified Pvote code. We wrote the assurance document for the unmodified version of Pvote (without the deliberately inserted bug), and the full assurance document includes the code of the unmodified Pvote. If you were to look at that and compare it to the code I gave you above, you could quickly identify the bug by just doing a diff -- but that would completely defeat the purpose of the exercise. If I had copious free time, I'd modify the assurance document to give you a modified document that matches the modified code -- but I don't have time to do that. So, instead, I've just removed the part of the assurance document that contained the region of the code where we inserted our bug (namely, Navigator.py), and I'm giving you the rest of the assurance document.
In the actual review, we provided reviewers with additional resources that won't be available to you. For instance, we outlined for them the overall design principles of Pvote. We also were available to interactively answer questions, which helped them quickly get up to speed on the code. During the part where we had them review the modified Pvote with a bug inserted, we also answered their questions -- here's what Ping wrote about how we handled that part:
Since insider attacks are a major unaddressed threat in existing systems, we specifically wanted to experiment with this scenario. Therefore, we warned the reviewers to treat us as untrusted adversaries, and that we might not always tell the truth. However, since it was in everyone’s interest to use our limited time efficiently, we settled on a time-saving convention. We promised to truthfully answer any question about a factual matter that the reviewers could conceivably verify mechanically or by checking an independent source — for example, questions about the Python language, about static properties of the code, about its runtime behaviour, and so on.
Of course, since this is something you're doing on your own, you won't get the benefit of interacting with us and having us answer questions for you (to save you time). I realize this does make code review harder. My apologies.
You can assume that someone else has done some runtime testing of the code. We deliberately chose a bug that would survive "Logic & Accuracy Testing" (a common technique in elections, where election officials conduct a test in advance where they cast some ballots, typically chosen so that at least one vote has been cast for each candidate, and then check that the system accurately recorded and tallied those votes). Focus on code review.
-- David
Dan Bernstein discussed various attempts to resolve bugs in ciphersuites in his keynote "Failures of secret-key cryptography" at the March Fast Software Encryption event. Then (fast-forwarding to pages 35-38) he says:
“Cryptographic algorithm agility”:
(1) the pretense that bad crypto is okay if there’s a backup plan +
(2) the pretense that there is in fact a backup plan.

SSL has a crypto switch that in theory allows switching to AES-GCM.
But most SSL software doesn’t support AES-GCM.
The software does support one non-CBC option: RC4.
Now widely recommended, used for 50% of SSL traffic.
after which, DJB then proceeds to roundly trash RC4 as a favoured algorithm... Which is entirely fair, as RC4 has been under a cloud and deprecated since the late 1990s. In the slides, he reports results from AlFardan–Bernstein–Paterson–Poettering–Schuldt that somewhere between 2^24 and 2^32 ciphertexts is what it takes to recover a plaintext byte. Ouch!
This attack on algorithm agility is strongly reminiscent of the One True Cipher Suite, which hypothesis also takes as a foundational assumption that algorithm agility doesn't work. So, abandon algorithm agility if you value your users.
(For further amusement, in slides 2-3, he takes aim at what I and Gutmann pronounced in 2011, and at Adi Shamir's statement that crypto is bypassed, which I'll post on later.)
Quotes that struck me as on-point: Chris Skinner says of SEPA, the Single Euro Payments Area:
One of the key issues is that when SEPA was envisaged and designed, counterparty credit risk was not top of the agenda; post-Lehman Brothers crash and it is.
What a delight! Oh, to design a payment system without counterparty risk ... Next thing they'll be suggesting payments without theft!
Meanwhile Dan Kaminsky says in delicious counterpoint, commenting on Bitcoin:
But the core technology actually works, and has continued to work, to a degree not everyone predicted. Time to enjoy being wrong. What the heck is going on here?

First of all, yes. Money changes things.
A lot of the slop that permeates most software is much less likely to be present when the developer is aware that, yes, a single misplaced character really could End The World. The reality of most software development is that the consequences of failure are simply nonexistent. Software tends not to kill people and so we accept incredibly fast innovation loops because the consequences are tolerable and the results are astonishing.
BitCoin was simply developed under a different reality.
The stakes weren’t obscured, and the problem wasn’t someone else’s.
They didn’t ignore the engineering reality, they absorbed it and innovated ridiculously
Welcome to financial cryptography -- that domain where things matter. It is this specialness, that one's code actually matters, that makes it worthwhile.
Meanwhile, from the department of lolz, comes Apple with a new patent -- filed at least.
The basic idea, described in a patent application “Ad-hoc cash dispensing network” is pretty simple. Create a cash dispensing server at Apple’s datacenter, to which iPhones, iPads and Macs can connect via a specialized app. Need some quick cash right now and there’s no ATM around? Launch the Cash app, and tell it how much do you need. The app picks up your location, and sends the request for cash to nearby iPhone users. When someone agrees to front you $20, his location is shown to you on the map. You go to that person, pick up the bill and confirm the transaction on your iPhone. $20 plus a small service fee is deducted from your iTunes account and deposited to the guy who gave you the cash.
The good thing about being an FCer is that you can design that one over beers, and have a good belly laugh for the same price. I don't know how to put it gently, but hey guys, don't do that for real, ok?!
All by way of saying, financial cryptography is where it's at!
Coming across this retreat from the wonders of OAuth 2.0 in the daily Lynnogram reminded me of my own Hypotheses, little seeds of experience that can sometimes defend the diligent designer from the wayward path to hell:
OAuth 2.0 and the Road to HellBy Eran Hammer Thursday July 26, 2012
They say the road to hell is paved with good intentions. Well, that’s OAuth 2.0.
Last month I reached the painful conclusion that I can no longer be associated with the OAuth 2.0 standard. I resigned my role as lead author and editor, withdrew my name from the specification, and left the working group. Removing my name from a document I have painstakingly labored over for three years and over two dozen drafts was not easy. Deciding to move on from an effort I have led for over five years was agonizing.
There wasn’t a single problem or incident I can point to in order to explain such an extreme move. This is a case of death by a thousand cuts, and as the work was winding down, I’ve found myself reflecting more and more on what we actually accomplished. At the end, I reached the conclusion that OAuth 2.0 is a bad protocol. WS-* bad. It is bad enough that I no longer want to be associated with it. It is the biggest professional disappointment of my career.
Eran Hammer was instrumental in OAuth 1.0. He then moved the efforts into an IETF working group so as to "improve" it to Enterprise grade. He didn't survive the transition.
Which reminded me of what I'd written in "Hypothesis 4 - The First Requirement of Security is Usability." Sadly, until today it seems, I had neglected to publish the relevant seed! Hardly a usable Hypothesis, indeed, and apologies to all. Here it is:
#4.4 No Committees!

It should be clear by now that committees are totally out of the question. They are like whirlpools, great spiralling sinks of talent, so paddle as fast as possible in the other direction.
On the other hand, if you are having trouble shrinking your team or agreeing with them, a committee over yonder can be useful as a face saving idea. Point them in that direction of the whirlpool, give them a nudge, and then get back to work.
And, for my sins, I have also packed in H4.5-7, and the entirety of Hypothesis 4 is now published.
As the Internet wars of the 21st Century roll on, it's worth thinking about how to build systems in the new context. My ruminations follow.
The map has changed substantially from that of the 1990s, or the 20th century. In the last decade, 3 groups became paramount and are now battling it out: Apple, google and Facebook. That war isn't settled yet, which keeps the media hacks writing something vaguely wise and exciting, keeps our headlines breathless and tempting (as above :) .
Although no winner is declared, the protagonists have changed the privacy map. Before, it was about keeping one's data carefully partitioned, giving corporations only what they needed to know, leaning on them to at least try to protect it, and preserving some sense of privacy policy. And keeping away from those who were just scarfing it up for sales.
Privacy used to be relatively simple, although we didn't know it then, nor appreciate it.
What we have now is cloud - all your data are being collected in one place by one person. To stress the positive, this works. At least, the sum total of apps and services offered by each of the behemoths seems to be better for us if we follow without question, and we carry the risk unknowingly. And, far better than previous models even dreamed of (here, I'm thinking of Microsoft's 1990s worldview).
But there are many negatives. Privacy is a total nightmare -- all our eggs are in their baskets.
Reliability seems to suffer too. The great migrations we have witnessed from client tools to cloud apps have seen many failures. We see reversals, costly absences, and drop-outs too. I personally no longer trust my Calendar - one of the earliest and simplest apps of all - because Apple's migration to cloud failed and I could not recover the data nor usability of same without just signing on completely and utterly. Most of you will have your own stories (I see them every day on the lists).
Worse, most users will not realise that it wasn't their fault, that they are victims of the slow suffocation of a vaporous white fluffy juggernaut. It isn't just Apple; there are so many insistent "offerings" in google's products that, just by clicking NO so many times, you will by mistake end up in cloud 9 without intending it, knowing it, or having any chance of redemption. I hear similar things about Facebook.
This is all by way of reaching the following question: what is a poor app designer supposed to do? My choice is to use google apps everywhere, or not. Or Apple, or Facebook.
I think I have to design my own version of a privacy model. And, I have to do it in such a fashion that it is both implementable without the resources of a major slice of the stock market, and in a fashion that delivers most of the benefits in a controllable way. OK, and it has to run seamlessly across platforms, not chew up bandwidth on the android, and make tea on demand.
I would rather trust Apple with my privacy, and the privacy of my customers. Honestly, they do a good job at most things. But somehow, the switch to cloud has stripped that very trust away. Google & Facebook the same - whatever one thought in the past, things have changed.
(For those hoping for the solution, sorry. I'm still thinking. :) Today's rambling is just about why I can't see how to place my customer's data in their hands.)
The question of the moment has been answered by WaPo - Flame was from the same USA/Israeli campaign that used Stuxnet to destroy 1000 centrifuges:
The United States and Israel jointly developed a sophisticated computer virus nicknamed Flame that collected intelligence in preparation for cyber-sabotage aimed at slowing Iran’s ability to develop a nuclear weapon, according to Western officials with knowledge of the effort.
USA and Israel have launched a campaign of cyber attacks against Iran. Unilateral, undeclared, surprise attacks.
Last year the Pentagon concluded “that computer sabotage coming from another country can constitute an act of war” according to the Wall Street Journal. Back in 2010, Secretary of State Hillary Clinton declared “countries or individuals that engage in cyber attacks should face consequences and international condemnation.”
Given the evidentiary nature of tracking a nation's actions in war, a newspaper expose is generally about as good as it gets. For non-USA readers, the Washington Post is one of the top handful of papers, and they won't be making mistakes on this one. In this case, we get one better - the FBI are investigating leaks into the Stuxnet disclosures, and have thus confirmed the basic accuracy of the WaPo revelations.
Targeting and destroying centrifuges is about as clear and directed an attack as it gets in warfare. By way of analogy, it's little different to the classical bombing attacks of WWII directed against ball-bearing factories in the Ruhr. Like the city/factory bombing campaign of those days, this campaign also appears to be flexible in target scope:
Flame came to light last month after Iran detected a series of cyberattacks on its oil industry. The disruption was directed by Israel in a unilateral operation that apparently caught its American partners off guard, according to several U.S. and Western officials who spoke on the condition of anonymity.
OK, let's take down their oil industry. Who the heck cares?
Well, perhaps there are some responses possible. We'll leave with just this one thought from Bruce Schneier:
There's a common belief within the U.S. military that cyberweapons [mitigation] are not in our best interest: that we currently have a military advantage in cyberspace that we should not squander. That's not true. We might have an offensive advantage -- although that's debatable -- but we certainly don't have a defensive advantage. More importantly, as a heavily networked country, we are inherently vulnerable in cyberspace.
(I edited one word out for context. In his discussion he is talking about cyberweapon treaties, but the wider point remains true.)
Ye reap what ye sow, and all that. For the USA, it's gonna get biblical - the new question for Washington DC is whether DHS is prepared for Iranian counterstrikes in cyberspace.
Bruce Schneier points at SkypeMorph:
To prevent the Tor traffic from being recognized by anyone analyzing the network flow, SkypeMorph uses what's known as traffic shaping to convert Tor packets into User Datagram Protocol packets, as used by Skype. The traffic shaping also mimics the sizes and timings of packets produced by normal Skype video conversations. As a result, outsiders observing the traffic between the end user and the bridge see data that looks identical to a Skype video conversation.

The SkypeMorph developers chose Skype because the software is widely used throughout the world, making it hard for governments to block it without arousing widespread criticism. The developers picked the VoIP client's video functions because its flow of packets more closely resembles Tor traffic. Voice communications, by contrast, show long pauses in transmissions, as one party speaks and the other listens.
Yup, it is a time honoured tradition in firewall hopping to pretend to be someone else who gets an easier ride. Typically, we have webapps because applications needed to be let through the firewall. To respond to firewalls getting pernickety, everyone started layering their simple traffic over HTTP. It worked, but annoyed the firewall people who realised they'd been bypassed.
But, if HTTP wants to do this, it can't. Or Skype - how does it pretend to be Skype?
There is an answer. If we scale up this approach - copying the best over time - then the best will try to be better. Better, in this case, means removing any and all information that can be analysed.
In other words, Indistinguishable from Random.
In the past we would have said that defeating traffic analysis is too hard. But that's what is going on at the coal face, regardless of the theoretical view. So let's take it to its logical conclusion - the end game is not to copy someone else, but to be indistinguishable from random.
When everyone is doing that, we all win. Skype is already a long way there, it seems. Can your protocol do better?
I'm sitting here minding my own business in early hours of Sunday morning - OK, completing a risks analysis - and my machine starts churning away. Now, if you're an old time guy like me, you'll probably recall when cycles were expensive, kilobytes had deep emotional meaning, and noise delta was trauma. Either way, even though I have the late-gen Mac Air and it has very nice heat management, as well as more sex-appeal than any 10 shares in Facebook, I remain ... sensitive!
Hunting around, I find the culprit: google's Chrome. OK, so what page? Well, none and all of them - Google's algorithm seems to have decided to cover my surfing sites with adverts for the DSD - Defence Signals Directorate - which is Australia's spook agency copy of the fabled NSA. Indeed, 2 adverts to a page, all in impeccable Flash with zany cool graphics.
Blech. DSD is inviting me - and anyone, seems they're not that fussy - to join their great cyber-warfare jihad against the anonymous planet (us? they ain't saying).
And they've started the campaign by stealing my cycles. No fair!
Not to mention, even Blind Freddy could tell what was wrong with their lame HR campaign. OK, maybe not, but here it is anyway: Firstly, people who are great defenders need great attack skills. Where do these budding ueber-cyber-guerillas get their skills from?
Attacking, is what. Hacking, to use the street parlance.
So we already have an issue - if you've led an exciting life of packet crime, you'll be knocked out in the security process. I know this at first hand - among the hundred-odd interesting factoids of my international life of mystery, the only time alarm bells went off: a mildly amusing moment a few dog-lives ago doing some crypto-manipulation and social-engineering for private pleasure.
Where are we then? Anyone we want working at DSD simply won't be allowed, and anyone who is allowed is suspicious for the same reason!
Which brings us to point 2: the security process. To work at DSD you require TOP SECRET. That's if you're the janitor, and even he's not allowed to know it. And, while it sounds sexy, security clearances are a drag of immense proportions. One year's delay *minimum* (some guy was telling me it took him 13 months to get his TS back again because it lapsed for 12 months ...).
It doesn't take a PhD in Economics to realise this is as dead a duck can be, and that's before we look at public service pay scales.
But wait, there's more! While you're waiting on that coveted TS to come through (and filling in reams of data concerning the mating habits of your first 10 pets) you can't work in the field. They can't offer you a job. They can't even offer you another job.
Bottom line: DSD are conducting a massive poaching campaign against .. themselves. They are advertising for people working in /other government departments/ who already have TS ... so they can steal them. Can you say beggar thy neighbour? DSD can. And when they do manage to steal the hard-won cyber-slave of some poor other schmuck department, they won't pay anywhere near enough to cover the anguish of TS.
Wake up guys!
Save your money! You've been doing this for 2 years now. The pool is fished out. Try another strategy. Sack your PR guy and hire some hackers. And, further, I've shut down Chrome... Cop that!
Over on New School, my threat-modelling-is-for-birds rant last month went down like a lead balloon.
Now, I'm gonna take the rap for this one. I was so happy to have finally found the nexus between threat modelling and security failure that has been bugging me for a decade, that I thought everyone else would get it in a blog post. No such luck, schmuck. Have another go!
Closest pin to the donkey tail went to David, who said:
Threat modeling is yet another input into an over all risk analysis.
Right. And that's the point. Threat modelling by itself is incomplete. What's the missing bit? The business. Look at this gfx, risk'd off some site. This is the emerging ISO 31000 risk management typology (?) in slightly cut down form.
The business is the 'context' as shown by the red arrow. When you get into "the biz" you discover it's a place of its own, a life, a space, an entire world. More pointedly, the biz provides you with (a) the requirements and (b) a list of validated threats. E.g., history, as we already deal with them.
The biz provides the foundation and context for all we do - we protect the business, without which we have no business meddling.
(Modelling distraction: As with the graphic, the source of requirements is often painted at the top of the diagram, and requirements-driven architecture is typically called top-down. Alternatively and depending on your contextual model, we can draw it as a structural model: our art or science can sit on top of the business. We are not lifting ourselves up by our bootstraps; we are lifting the business to greater capabilities and margins. So it may be rational and accurate to call the business a bottom-up input.)
Either way, business is a mess, and one we can't avoid. We have to dive in and absorb, and in our art we filter out the essence of the problem from the business into a language we call 'requirements'.
Then, the "old model" of threat modelling is somewhere in that middle box. For sake of this post, turn the word risk into threat. Follow the blue arrow, it's somewhere in there.
The point then of threat modelling is that it is precisely the opposite of expected: it's perfect. In more particular words, it lives without a context. Threat modelling proceeds happily without requirements that are grounded in a business problem or a customer base, without a connection to the real world.
Threat modelling is perfect by definition, which we achieve by cutting the scope off at the knees.
Bring in the business and we get real messy. Human, tragic. And out of the primordial swamp of our neighbors crawl the living, breathing, propagating requirements that make a real demand on us - survival, economy, sex, war, whatever it is that real people ask us for.
Adam talks about Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service and Elevation of Privilege... which sounds perfect. And I'd love to play the game :)
But real life is these things: insider trading, teenager versus teenager versus parent versus parent, revolutions of colour, DVD sharing, hiding in plain sight, movement tracking of jealous loved ones, don't ask, don't tell, whistleblowing, revenge,...
Which philosophy of business context first and foremost also explains a lot of other things. Just by way of one example, it gives an essential clue as to why only end-to-end security is worth anything. Application security automatically has a better chance of including the business; point-to-point designs like IPSec, SSL, DNSSec, etc have no chance. They've already handwaved anything past the relevant nodes into utter obscurity.
Last week's post on the jaws of Trust sparked a bit of interest, and Chris asks what I think about Convergence in comments. I listened to this talk by Moxie Marlinspike, and it is entertaining.
The 'new idea' is not difficult. The idea of Convergence is for independent operators (like CAcert or FSFE or FSF) to run servers that cache certificates from sites. Then, when a user browser comes across a new certificate, instead of accepting the fiat declaration from the CA, it gets a "second opinion" from one of these caching sites.
Convergence is best seen as conceptually extending or varying the SSH or TOFU model that has already been tried in browsers through CertPatrol, Trustbar, Petnames and the like.
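For flavour, here's a toy sketch of what the client-side check amounts to -- `notary_lookup` is a hypothetical stand-in for querying a Convergence-style notary, the certificate fetch is the Python stdlib call, and the risk argument for why this works comes next:

```python
# Toy sketch of a TOFU-plus-second-opinion check on a site's certificate.
import hashlib, ssl

def fingerprint(host: str, port: int = 443) -> str:
    pem = ssl.get_server_certificate((host, port))     # stdlib: fetch the site's certificate
    return hashlib.sha256(pem.encode()).hexdigest()

def notary_lookup(host: str):
    # Hypothetical placeholder: a real client would ask several independent
    # caching notaries what certificate they see for this host, and compare.
    return set()

def check(host: str, cache: dict) -> bool:
    fp = fingerprint(host)
    if host not in cache:                    # first use: low risk, take a second opinion
        cache[host] = fp
        return fp in notary_lookup(host)
    return cache[host] == fp                 # later visits must match what we cached
```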
In the Trust-on-first-use model, we can make a pretty good judgement call that the first time a user comes to a site, she is at low risk. It is only later on when her relationship establishes (think online banking) that her risk rises.
This risk profile works because the likelihood of an event is inversely aligned with the cost of mounting the attack. One single MITM might cost X, two might cost X+delta, and so on, getting more costly as it goes. In two ways: firstly, maintaining the MITM over time against Alice pushes costs up more dramatically than linear additions of a small delta; in this sense, MITMs are like DOSs, they are easier to mount for brief periods. Secondly, because we don't know of Alice's relationship beforehand, we have to cast a very broad net, so a lot of MITMs are needed to find the minnow that becomes the whale.
First-use-caching or TOFU works then because it forces the attacker into an uneconomic position - the easy attacks are worthless.
Convergence then extends that model by using someone else's cache, thus further boxing the attacker in. With a fully developed Convergence network in place, we can see that the attacker has to conduct what amounts to being a perfect MITM closer to the site than any caching server (at least at the threat modelling level).
Which in effect means he owns the site at least at the router level, and if that is true, then he's probably already inside and prefers more sophisticated breaches than mucking around with MITMs.
Thus, the very model of a successful mitigation -- this is a great risk for users to accept if only they were given the chance! It's pretty much ideal on paper.
Now move from paper threat modelling to *the business*. We can ask several questions. Is this better than the fiat or authority model of CAs which is in place now? Well, maybe. Assuming a fully developed network, Convergence is probably in the ballpark. A serious attacker can mount several false nodes, something that was seen in peer2peer networks. But a serious attacker can take over a CA, something we saw in 2011.
Another question is, is it cheaper? Yes, definitely. It means that the entire middle ground of "white label" HTTPS certs as Mozilla now shows them can use Convergence and get approximately the same protection. No need to muck around with CAs. High end merchants will still go for EV because of the branding effect sold to them by vendors.
A final question is whether it will work in the economics sense - is this going to take off? Well, I wish Moxie luck, and I wish it work, but I have my reservations.
Like so many other developments - and I wish I could take the time to lay out all the tall pioneers who provided the high view for each succeeding innovation - where they fall short is they do not mesh well with the current economic structure of the market.
In particular, one facet of the new market strikes me as overtaking events: the über-CA. In this concept, we re-model the world such that the vendors are the CAs, and the current crop are pushed down (or up) to become sub-CAs. E.g., imagine that Mozilla now creates a root cert and signs individually each root in their root list, and thus turns it into a sub-root list. That's easy enough, although highly offensive to some.
Without thinking of the other ramifications too much, now add Convergence to the über-CA model. If the über-CA has taken on the responsibility, and manages the process end to end, it can also do the Convergence thing in-house. That is, it can maintain its set of servers, do the crawling, do the responding. Indeed, we already know how to do the crawling part, most vendors have had a go at it, just for in-house research.
Why do I think this is relevant? One word - google. If the Convergence idea is good (and I do think it is) then google will have already looked at it, and will have already decided how to do it more efficiently. Google have already taken more steps towards the über-CA with their decision to rewire the certificate flow. Time for a bad haiku.
Google sites are pinned now / All your 'vokes are b'long to us / Cache your certs too, soon.
And who is the world's expert at rhyming data?
Which all goes to say that Convergence may be a good idea, a great one even, but it is being overtaken by other developments. To put it pithily, the market is converging in another direction. 1-2 years ago maybe, yes, as google was still working on the browser at the standards level. Now google are changing the way things are done, and this idea will fall out easily in their development.
(For what it is worth, google are just as likely to make their servers available for other browsers to use anyway, so they could just "run" the Convergence network. Who knows. The google talks to no-one, until it is done, and often not even then.)
I've long realised that threat modelling isn't quite it.
There's some malignancy in the way the Internet IT Security community approached security in the 1990s that became a cancer in our protocols in the 2000s. Eventually I worked out that the problem with the aphorism What's Your Threat Model (WYTM?) was the absence of a necessary first step - the business model - which lack permitted threat modelling to be de-linked from humanity without anyone noticing.
But I still wasn't quite there, it still felt like wise old men telling me "learn these steps, swallow these pills, don't ask for wisdom."
In my recent risk management work, it has suddenly become clearer. Taking from notes and paraphrasing, let me talk about threats versus risks, before getting to modelling.
A threat is something that threatens, something that can cause harm, in the abstract sense. For example, a bomb could be a threat. So could an MITM, an eavesdropper, or a sniper.
But, separating the abstract from the particular, a bomb does not necessarily cause a problem unless there is a connection to us. Literally, it has to be capable of doing us harm, in a direct sense. For this reason, the methodologists say:
Risk = Threat * Harm
Any random bomb can't hurt me, approximately, but a bomb close to me can. With a direct possibility of harm to us, a threat becomes a risk. The methodologists also say:
Risk = Consequences * Likelihood
That connection or context of likely consequences to us suddenly makes it real, as well as hurtful.
A bomb then is a threat, but just any bomb doesn't present a risk to anyone, to a high degree of reliability. A bomb under my car is now a risk! To move from threats to risks, we need to include places, times, agents, intents, chances of success, possible failures ... *victims* ... all the rest needed to turn the abstract scariness into direct painful harm.
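As a toy illustration of Risk = Consequences * Likelihood, with numbers invented purely for the example:

```python
# Toy illustration of Risk = Consequences * Likelihood.
# All numbers are invented; the point is only that the same threat
# yields wildly different risks once context is added.
def risk(consequences, likelihood):
    return consequences * likelihood

# The abstract threat: some bomb, somewhere in the world.
random_bomb = risk(consequences=1_000_000, likelihood=1e-9)   # ~0.001

# The personal risk: a bomb under my car.
my_car_bomb = risk(consequences=1_000_000, likelihood=1e-2)   # ~10,000

print(random_bomb, my_car_bomb)
```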
We need to make it personal.
To turn the threatening but abstract bomb from a threat to a risk, consider a plane, one which you might have a particular affinity to because you're on it or it is coming your way:
⇒ people dying
⇒ financial damage to plane
⇒ reputational damage to airline
⇒ collateral damage to other assets
⇒ economic damage caused by restrictions
⇒ war, military raids and other state-level responses
Lots of risks! Speaking of bombs as planes: I knew someone booked on a plane that ended up in a tower -- except she was late. She sat on the tarmac for hours in the following plane.... The lovely lady called Dolly who cleaned my house had a sister who should have been cleaning a Pentagon office block, but for some reason ... not that day. Another person I knew was destined to go for coffee at ground zero, but woke up late. Oh, and his cousin was a fireman who didn't come home that day.
Which is perhaps to say, that day, those risks got a lot more personal.
We all have our very close stories to tell, but the point here is that risks are personal, threats are just theories.
Let us now turn that around and consider *threat modelling*. By its nature, threat modelling only deals with threats and not risks and it cannot therefore reach out to its users on a direct, harmful level. Threat modelling is by definition limited to theoretical, abstract concerns. It stops before it gets practical, real, personal.
Maybe this all amounts to no more than a lot of fuss about semantics?
To see if it matters, let's look at some examples: If we look at that old saw, SSL, we see rhyme. The threat modelling done for SSL took the rather abstract notions of CIA -- confidentiality, integrity and authenticity -- and ended up inverse-pyramiding on a rather too-perfect threat of MITM -- Man-in-the-Middle.
We can also see, through the lens of threat analysis versus risk analysis, that the notion of creating a protocol to protect any connection - an explicit choice of the designers - left them unable to do any risk analysis at all; the notion of protecting certain assets such as credit cards, as stated in the advertising blurb, was therefore conveniently not part of the analysis (which we knew, because any risk analysis of credit cards reveals different results).
Threat modelling therefore reveals itself to be theoretically sound but not necessarily helpful. It is then no surprise that SSL performed perfectly against its chosen threats, but did little against the risks that users actually face. Indeed, arguably, as much as it might have stopped some risks, it helped other risks to proceed in natural evolution. Because SSL dealt perfectly with all its chosen threats, it ended up providing a false sense of security against harm-incurring risks (remember SSL & Firewalls?).
OK, that's an old story, and maybe completely and boringly familiar to everyone else? What about the rest? What do we do to fix it?
The challenge might then be to take Internet protocol design from the very plastic, perfect but random tendency of threat modelling and move it forward to the more haptic, consequences-directed chaos of risk modelling.
Or, in other words, we've got to stop conflating threats with risks.
Critics can rush forth and grumble, and let me be the first: Moving to risk modelling is going to be hard, as any Internet protocol at least at the RFC level is generally designed to be deployed across an extremely broad base of applications and users.
Remember IPSec? Do you feel the beat? This might be the reason why we say that only end-to-end security cuts the mustard, because end-to-end implies an application, and this draws in the users, permitting us to do real risk modelling.
It might then be impossible to do security at the level of an Internet-wide, application-free security protocol, a criticism that isn't new to the IETF. Recall the old ISO layer 5, sometimes called "the security layer" ?
But this doesn't stop the conclusion: threat modelling will always fail in practice, because by definition, threat modelling stops before practice. The place where users are being exposed and harmed can only be investigated by getting personal - including your users in your model. Threat modelling does not go that far, it does not consider the risks against any particular set of users that will be harmed by those risks in full flight. Threat modelling stops at the theoretical, and must by the law of ignorance fail in the practical.
Risks are where harm is done to users. Risk modelling therefore is the only standard of interest to users.
Zooko writes a case study in support of H3,
there is only One Mode, and it is Secure:
When Phil Zimmermann and I were designing ZRTP, we had a complete protocol design for the secure stream and a working prototype, and then we decided, against my better judgment, to add the feature of users being able to switch back and forth between secure mode and insecure ("clear") mode. (This was before [Jon Callas] got involved in the protocol-level details of the ZRTP that eventually became [RFC 6189]. I think it was in 2006.)
This caused the state transition diagram (which lived on a piece of 8.5x11 paper pinned to the wall above my screen) to balloon from, if I recall correctly, approximately five states with around ten transitions to approximately ten states with closer to twenty transitions. We then added and removed several successive bugs in the protocol or in the prototype implementation over the next few months, all of which were in this newly added feature of the protocol.
Phil was concerned about our schedule and frustrated by the delay. He remarked that if he had known adding "clear mode" was going to cost so much development time he wouldn't have done it.
Later, the existence of the insecure mode again added complexity when we wanted to add support for PBXes and call transfer, and instead of working out how to handle call transfer from one state (secure mode), we had to work out how to do it from several states—secure mode, clear mode, the state where you've indicated that you wish to switch from secure mode to clear mode but the other party hasn't yet ACKed, and the state where you've indicated that you wish to switch from clear mode to secure mode but the two parties haven't finished negotiating the new session. (By this time I had left the ZRTP project to begin work on what eventually became Tahoe-LAFS, and my contribution to the PBX/call-transfer issue was merely a little bit of consulting.)
I think that in retrospect ZRTP would have been better if we had followed Grigg's H3.
Hm, digging around in my keepsakes cabinet, I unfortunately do not find the original state transition diagram that I mentioned above, but I do find an artifact that I wrote a few months later—a sketch of a protocol that I called "ZRTP lite" which was ZRTP as it existed at that time minus insecure mode, algorithm negotiation, the "confirm" packet, and the max-retries timeout. The resulting state transition diagram has only three states (five if you count the beginning state—before the phone call has begun—and the ending state—after the call has ended), and only five transitions (counting start and stop).
I never showed this to anyone until now. See attachment.
My next major project after ZRTP was Tahoe-LAFS, and in that project my co-designers and I have adhered to Grigg's H3. We occasionally get requests to add an insecure mode to Tahoe-LAFS and so far we have declined to do so. I could be provoked into posting further musings on that issue. (The fact that we are refusing to accommodate repeated requests from users is certainly a warning flag in my view and deserves careful justification.)
Regards,
Zooko
[RFC 6189] http://zfone.com/docs/ietf/rfc6189.html
Google researchers say they've devised a way to significantly reduce the time it takes websites to establish encrypted connections with end-user browsers, a breakthrough that could make it less painful for many services to offer the security feature. ....The finding should come as welcome news to those concerned about online privacy. With the notable exceptions of Twitter, Facebook, and a handful of Google services, many websites send the vast majority of traffic over unencrypted channels, making it easy for governments, administrators, and Wi-Fi hotspot providers to snoop or even modify potentially sensitive communications while in transit. Companies such as eBay have said it's too costly to offer always-on encryption.
The Firesheep extension introduced last year for the Firefox browser drove home just how menacing the risk of unencrypted websites can be.
Is this a case of, NIST taketh away what Google granteth?
I have long held an opinion that the exciting work in Computer Science has been over in the p2p area. Now, Arnaud Legout writes about his course in p2p:
Hi, I have given a course on P2P for 4 years now, and I decided to release it publicly due to the lack of such a public resource.
I believe this course can be of interest to a large audience: developers of P2P protocols that want to better understand how it works, researchers that want a comprehensive introduction to the field, students that are looking for a document on the subject, etc.
This course is focused on P2P content replication with a very comprehensive part on BitTorrent. I also address some very important notions of distributed security. I usually give this course on seven lectures of 3 hours.
I put the slides of my course online (with a Creative Commons license BY-NC-SA, which means you can reuse anything as long as you give credit and don't make money out of it).
http://cel.archives-ouvertes.fr/cel-00544132/en/
I recommend to download the PowerPoint version (PPTX), as the PDF version will not render the animations and the animated slides will look cluttered.
Of course, comments are highly welcomed.
Regards,
Arnaud Legout.
You can contact him at arnaud.legout at inria dot fr. Happy learning!
Iran's Bushehr nuclear power plant in Bushehr Port:
"An error is seen on a computer screen of Bushehr nuclear power plant's map in the Bushehr Port on the Persian Gulf, 1,000 kms south of Tehran, Iran on February 25, 2009. Iranian officials said the long-awaited power plant was expected to become operational last fall but its construction was plagued by several setbacks, including difficulties in procuring its remaining equipment and the necessary uranium fuel. (UPI Photo/Mohammad Kheirkhah)"
Click onwards for full sized image:
Compliant? Minor problem? Slight discordance? Conspiracy theory?
(spotted by Steve Bellovin)
There appears to be a wave of something going through the infosec industry. There are reports like this:
In the past month, we've had several customers at work suddenly insist that we make modifications to their firewalls and/or load balancers to redirect *all* incoming HTTP traffic to HTTPS (which of course isn't always entirely sane to do on proxying devices, but they apparently don't trust their server admins to maintain an HTTP redirect). Most of them cited requirements from their PCI-DSS auditors. One apparently was outright told that their redirects were "a security problem" because they presented an open socket on port 80, and they needed to be refusing all HTTP to their servers at the firewall. I think we gave them sufficient wording to convince their auditor that blocking access to the redirect itself wasn't going to do anyone any good.
Then, there have been long discussions circulating around the meaning of this hypothesis in security design:
there is only one Mode, and it is Secure
Which, if I can say in small defence, is an end-point, a result, an arrival that does not in the slightest hint at how or why we got there. Or by what path, which by way of example is the topic of this very blog post.
The Electronic Frontier Foundation has announced and pushed a new experimental browser plugin to take browsing on that very path towards more and pervasive HTTPS:
HTTPS Everywhere is a Firefox extension produced as a collaboration between The Tor Project and the Electronic Frontier Foundation. It encrypts your communications with a number of major websites.
Many sites on the web offer some limited support for encryption over HTTPS, but make it difficult to use. For instance, they may default to unencrypted HTTP, or fill encrypted pages with links that go back to the unencrypted site.
The HTTPS Everywhere extension fixes these problems by rewriting all requests to these sites to HTTPS.
And Jeff Hodges, Collin Jackson and Adam Barth have published an Internet Draft called Strict Transport Security based on this paper, which in essence tells anyone who connects to the insecure HTTP service to instead switch across to the secure HTTPS service.
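The mechanics are small enough to sketch. Something along these lines would do the two moves described above: bounce any cleartext request across to HTTPS, and have the secure side announce Strict Transport Security so the browser stops asking for HTTP at all. The handler name, the port and the max-age value below are my assumptions, and this is an illustration of the idea, not the draft's reference code.

```python
# Minimal sketch of HTTP-to-HTTPS redirection plus a Strict Transport
# Security header, in the spirit of the draft. Handler name, port and
# max-age are assumptions for illustration only.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHTTPS(BaseHTTPRequestHandler):
    """Answers on the cleartext port and bounces everything to HTTPS."""
    def do_GET(self):
        host = self.headers.get("Host") or "example.com"
        self.send_response(301)
        self.send_header("Location", "https://" + host + self.path)
        self.end_headers()

# On the HTTPS side, each response would additionally carry:
#   Strict-Transport-Security: max-age=31536000
# so that a returning browser never asks for the HTTP version at all.

if __name__ == "__main__":
    # Port 8080 for the sketch; a real deployment listens on port 80.
    HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()
```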
Now, leaving aside whether these innovations will cause a few confusions in compatibility, site-redesign and access, all common ills that we would normally bow and scrape before, ... it would seem that this time there is some emphasis behind it: As well as the EFF plugin above, Paypal and NoScript have adopted STS. As Paypal was at one time the number 1 target for phishing-style attacks, this carries some weight. And as NoScript is allegedly used by practically all the security people on the planet, this could be influential.
A few words on what appears to be happening here. The Internet security field has its 800lb gorilla -- breaches, viruses, botnets, phishing, ... -- and it seems that outsiders like the EFF, Paypal and PCI-DSS auditors are starting to clean up the field. And in this case, they're looking for easy targets.
One such target was long identified: turn off HTTP. Yup, throw another bundle of tinder on that fire that's burning around me ... but meanwhile, switch all HTTP traffic to HTTPS.
In an ideal world, every web request could be defaulted to HTTPS.
It's a simple thing, and an approximate first step towards the hypothesis of there is only one mode and it is secure. Switching to HTTPS for everything does a few things, obvious and subtle.
1. The obvious thing is that the user can now be seriously expected to participate in the "watch the padlock" protocol, because she's no longer being trained to ignore the padlock by the rest of the site. The entire site is HTTPS, and that's easy enough for the users to understand.
2. The second thing that the notion of pervasive HTTPS does is to strip away some (not all) of the excuses for other parties. Right now, most decision makers almost totally ignore HTTPS. Browser manufacturers, server manufacturers, CAs, cartels, included. It is all compliance thinking, all eyes are turned elsewhere. If, for any circumstance, for any user, for any decision maker, there is a failure, then there is also an easy excuse as to "why not." Why it didn't work, why I'm not to blame, why someone else should fix their problems.
Probably, there are more excuses than we can count (I once counted 99...) (the latest excuse from NIST, Mozilla and Microsoft).
However, if the PCI-DSS auditors make HTTPS the normal and only mode of operation, that act will strip away a major class of excuses. It's on, always on, only on. Waddya mean, it went off? This means the security model can actually be pinned on the suppliers and operators, more of the time. At least, the outrage can be pinned on them, when it doesn't work, and it will have some merit.
3. The third thing it does is move a lot of attention into the HTTPS sphere. This is much more important, but more subtle.
More attention, a growing market, more expectations means more certs, more traffic, more reliance on the server cert, etc. But it also means more attention to client certs, more programmers, more admins, more more more ...
Which will elevate the use of HTTPS and its entire security model overall; which will hopefully get the people who can make a difference -- here I'm thinking of Mozilla and Microsoft and the other browser security UI production teams -- to put a bit more effort and thinking into this problem.
Some would say they are working hard, and I know they are. But let me put this in context. Last I heard, Jonathan had a team of 2-3 programmers working on Firefox security UI. (And, yes, that's a set-up for a correction, thanks!)
This team is so tiny that we can even know their names.... Let me know and I'll post their names :)
Yet, this security UI is protecting the online access of probably 100 million people across the planet. And the current viewpoint of the owning organisation is that this is a backwater that isn't really important, other areas are much more important.
(Just to point out that I'm not picking on Mozilla unfairly: Microsoft is no better, albeit more clear in their economic reasoning. Google probably has one person, Opera may have one too. Konqueror? Apple won't say a thing...)
(Prove me wrong, guys! Ya know ya want to! :)
That which I described a month back relates to an application called payments, and a variation designed to make it better. One aspect that probably rankles is that a lot of people in the so-called Architecture role do something else, and that's not it:
This is what is killing Enterprise Architecture… every computer programmer, systems designer, software architect, solutions architect, technology architect, computer operator, PC owner, data architect, database architect, network architect, business analyst, systems analyst, enterprise architect, service architect, object architect, project manager and CIO calls whatever they want to or maybe, whatever they are doing, “Architecture.” It is chaos. No wonder we don’t have Enterprises that are coherent, integrated, flexible, dynamic, interoperable, reusable, aligned, lean and mean and working.
So sayeth John Zachman, who is credited with having invented a term or two in the space.
Here's one variation in the workspace of Architecture, which some call programming in the large. Consider the app from the last post, payments. It's cohesive, tight, and operates to a set of rules. It has strong boundaries. We can analogise this process to be like a building; hence it is probably fair to claim someone who designs and builds a payment system can call themselves an architect. If not fair, uncontroversial.
There are also many other applications, and each of them can be analogised to a building. They are each cohesive units, with strict rules within and without, and they are generally focussed on particular purposes. For each building, we have different Architects, and for each style of building, we have different styles of Architecture.
Then, there are places where there are many applications. Larger companies, government departments, banks, etc, have dozens or hundreds of these applications. And, what's more, these applications are often connected in long, difficult transactions. A particular client's request might go wandering through from app to app, department to department, database to database until it finally pops out the other end.
This is a very different building process, it's a macroworld compared to the individual building's microworld. In order for the whole organisation to deliver on its goals, all of the apps and people must somehow play towards the same goal; which means we have to bring all these together somehow. Architects have to talk to Architects, apps to apps, people to people, and many variations.
It is tempting then to extend the application scenario and simply view an organisation as a very large application. But this runs into trouble very quickly. Firstly, all the individual apps run to their own timeline. Secondly, departments, and people, are jealous of their independence. Thirdly, the complexity is such that without very good oversight, very good understanding, and quite deft tuning and aligning, an Architect of an entire organisation will likely fail to make a mark.
In order to make all this work, and improve, we need to set some guidelines. Firstly, once delivered, applications do not change much, if at all. Just like we don't go moving buildings around just because they are now inconveniently placed. Secondly, the interconnections between the apps are capable of changing, and being re-used for new and diverse purposes. This is the streets and walkways between the buildings, which typically become public ownership; they always seem to be being dug up and moved around for one reason or another.
Hence we start to look at how the applications can look to each other, and we also start to standardise this as a first step. From this view comes the march of webservices, XML, WSDL, and friends: if all apps deliver their output in standardised forms, then it becomes a matter of cheap scripting for other apps to tap that output. Once that is done, we can start to re-connect, inter-connect and so forth. Hypothetically...
Which leads to the next issue. Although we've tackled ways to reconfigure the I/O of the business units, just like we used to reconnect computer peripherals with cat(1), and we've created a fast & flexible form of BPR, we still haven't reduced the complexity. For that, a wave of tools and methodologies has rushed in like a tsunami to try and map out the complexity in some fashion to allow our supreme leader to be the true master of all she surveys: service discovery, service cataloguing, service levels, dependency mapping, defect discovery, transaction troubleshooting, model-driven reconfiguration, use-case mapping for large transactions, etc etc. With the flood of tools comes the flotsam of buzzwords, jetsam of key phrases, and sales calls for yet more tools. Anyone in the business will tell you this is a very costly area, as every new tool promises far more than any other tool has ever delivered, and you only find out after paying the price that there is a new must-have tool just around the corner.
All of which goes to suggest that this is a very different game to Architecture. This is really City Planning, and although it is a sub-branch of what the Architecture Professionals like to consider their domain, it is a very separate field. An application Architect is very focussed on the tech, and needs to tear himself away from it to understand enough of the business in order to build something that is wanted. But in the wider organisation domain, the reverse is more true: this person is very focussed on the business, and she needs to tear herself away from the business just enough to understand what the tech can do for her, in the way of interfaces, data formats, etc.
Just like the Architect meets the City Planner at the boundary line, driveway and other pipes, the Application Architect meets the "Company Planner" at the external I/O interfaces of the App. The desire for every building to have the same boring driveway is seen in webservices, which can be considered to be a lowest common denominator of application driveway.
What then to call this person? Because the domain Architects grew out and upwards from this field, on the supremacy of their applications, they are sometimes called Enterprise Architects. But this is a parochial, bottom-up view by technologists, who press the case for more technological tools. We can also look at the question top-down, business oriented, and there we find a familiar thread: we were talking about Business Process Re-engineering back in the early 1990s, with the rampant charge of the MBAs at that time, and then a stampede of competing methodologies from consultancy companies with different names and variations but one common theme: interconnecting the units of business in better and more efficient ways.
In the last decade, we saw the Enterprise Architecture approach possibly succeeding and outlasting, where others have not. However, it is not easy to attribute the success to its rightness; more likely it is attributable to the rise of the computer and information within modern service business. Enterprise Architecture rose upwards on the back of tech companies' assault on the business of business, and less so on its own merits.
All of these have something of a fad in them, but something of the truth. That old dirty word BPR was essentially what we wanted, but we want it to be done every week, not every decade. Every other buzz/fad/gee-whiz word like TQM, JIT, 6 Sigma, Lean, Business Transformation etc etc that followed (or led) tried to repair some of the gaps in the previous (or following) fad, but they all suffered their blind spots and their seduction to increasing complexity. Any recipe teachable to ordinary people in ordinary business fails to understand the whole complexity of business, it seems. Leaving plenty of room for the next fad...
Meanwhile, the MBAs had the ability to talk to all the departments in their language, understand the papers and draw the lines they wanted, dominate and improve on all the various methodologies ... but they didn't have the technical credibility to make it happen in a computer-driven world. (They would have loved Agile.)
The IT-focussed Architects have the latter, but not the former; they know how to program in the small with data and functions and objects. But when it comes to programming in the large, with people and customers and processes and refunds and laws and giveaways and expediting and sickleave and and and, not only is their experience simply lacking, but the rules of programming are completely different.
Which all leads to what, exactly? Although I think some very good stuff is happening in the Enterprise Architecture world, I also sense the mismatch in viewpoints. To me, I get the feeling that this evolutionary game isn't over, there's a few more cycles before we figure out who is the real master of the high-density information city. It is like we are back in the 1970s, admiring the traffic jams and the smog in Los Angeles, thinking that all this activity means we're on the right track.
We know that Cities have City Planners, and it is a distinct discipline. We just haven't found, named and understood the City Planner for information-driven business as yet. Right now, I feel we're back in last century, 50 years ago, Los Angeles: admiring the traffic jams and pile-ups, selling traffic lights, thinking the info-smog tells us we're on the right track, and complaining that City Hall is digging up the streets. Again :-(
(Hattip to Martin Bramwell who provided much of the helicopter view for this essay!)
Somebody asked me how I do Architecture, and it wasn't a question I could easily answer. Darn! I am one, but what is one, and how do I do it?
Since then, I've been doing a bit of reading, and I think I have the answer. Here it is, by way of an anecdote called Rafi's, which was a project, and also a ribs & beer place. Crack open a beer, order in some ribs, and settle in.
In 2000 or thereabouts, my team and I were sitting on the most sophisticated and working Internet payments system around. Having got it in place, we were then faced with the problem that bedevilled most all payment systems: people don't do payments, so much as, they do trade. This was a conundrum, one I wrote about in FC7, in which I claim that a financial cryptography system without a "financial application" is approximately worthless.
Some slight context. To answer the obvious question here, we already had our application, being the swapping of financial instruments for cash, and back again. Good as it was, I was looking for more. A wider complication is that many obvious applications have beartraps in them lurking for unwitting payment systems, so there wasn't an easy answer such as copy amazon or google or expedia or Rafi's ribs place. And, finally, luckily, no customer was demanding this, it was an internally-generated strategic demand, so I could take my time.
In trying to do more payments, then, the problem evolved into figuring out what trade people did, and trying to participate in more of that. In order to get a grounding in that, I surveyed how payments integrated into life. Initially, I thought about invoice cycles, because a payment is generated out of such a cycle. This seemed fairly tractable, but I wasn't comfortable with the variations. I went wider, and thought about trading of all form, the wider business exchange that goes on that generates an invoice cycle, and finally a payment.
In principle, as we'd already done financial trades, it was simply a matter of substituting finance with some other business, walking the transaction, architecting it and implementing it.
Or so I initially thought, but it wasn't to be. Take a hotel check-in, an example I sometimes used to convince people *not* to get into the retail payments business. When you finally get to the hotel, and walk in the door, you start the process, something like this: "I'm after a room ..."
"We have singles, doubles, and the presidential suite ..." How much is the double? "100 per night" OK, I'd like three nights. "We only have doubles available for 2 nights, but we have the singles." OK, and do you have a room at the back, not facing the highway? "Ah, yes you'll be wanting the presidential suite then ..."
And on and on it goes. The point is that, while invoicing cycles include some variability, trade cycles are all variability, to the point of arbitrary, unpredictable chaos.
Examining such a process and trying to automate it presents rather special challenges. It is somewhat clear that we can create an ontology to capture all the probable paths of the hotel reception cycle. And indeed, this is what many projects tried to do: define the process, automate the use-cases. Consider flight booking systems. It's also possible to define an exceptions protocol, to catch those tricky "backside room," special meal requests or the more prosaic desire for a welcoming Carib.
But it's hard. And risky, as having built it, how do we know the customers will agree with the ontology? Building such things only makes sense if they work, guaranteed, and that's unlikely in this case.
But the real killer is that having done all that, one's grand structure is almost useless for the next business problem. That is, a flight booking system won't translate so easily to a hotel process, even though we want to offer both of them together (and many do). And it certainly doesn't have much relationship to a book drop-shipping business. Unless, that is, one believes that there is an all-powerful super-meta-hyper methodology that can capture all interactions between all humans. An AP-SMH-UML, if you like, or perhaps the über-SAP. If you believe in that, just let me know how many decades you'll finance the building of it, and I'm willing to take your money :) if I can get to the head of the queue...
In the alternate, it is possible to understand the essence of trade, approximate, and find some benefits. And this is where my months of thinking started to pay out some small glimmers of inspiration.
The first thing I realised is that a payment is but a tiny part. A bigger part is the invoicing process, which often involves a bill being delivered, checked, paid for, and confirmed. On both sides. This is an application in and of itself, and it is probably 5 times bigger than a payment.
And we still haven't given it any semantics, any meaning for the end-user. Intuitively, we have to deliver meaning in order to make it reach a customer (or, academically, recall the FC7 hypothesis above). But as soon as I tried to add some sort of semantics around the invoicing, I ended up with the killer issue above: a mess of interactions of no real structure surrounding an already challenging invoicing protocol, with a payment or three tacked on. What started out as simple modelling revealed an intractable mess, which by size dominated the original mission. My finger-in-the-air estimate is this: 1% payment, 4% invoice, 95% the messages of chaos.
Logic therefore said to me that if I could improve the 95% and reduce the chaos, as it were, then this would likely dominate any efforts to improve the 4%, at a ratio of around 20 to 1! And, following this logic, the payment was now almost optional, almost vestigial. The less I thought about payments, the better.
My next glimmer was to treat trade as a series of requests and responses, back and forth. But, that didn't quite work out because many of the so-called requests were unanticipated, and unresponded. Think of advertising's part in trade, and the request-response model is dead, which may explain why so many billing systems layered over HTTP look so ghoulish. So eventually I was led to treating interaction as a somewhat arbitrary series of messages, showing patterns of no particular import to us.
The inspiration was then to flip the architecture system around: trade is messages, and to improve trade, improve the messaging capability. I didn't want semantics, I wanted freedom from semantics. I wanted a messaging system, with payments tacked on, rather than a payments system based on messaging. Indeed, I wanted a messaging system that could support arbitrary messages, and payments were just a trivial example of a message, within the full set of possible messages. (Trivial, because we already had them, and trivial, to focus the mind on the full possible set. Not trivial just from being payments, as these are anything but trivial.)
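To show the shape of that flip rather than the actual design: imagine the core type is an arbitrary message, and a payment is merely one kind of message among many. The class names below are invented for illustration; this is not the system we built.

```python
# Illustrative only: a messaging core where a payment is just one kind
# of message among arbitrary others. The names are invented to show the
# shape of "messaging-with-payments", not the actual design.
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class Message:
    """The primitive: who said what to whom. No semantics imposed."""
    sender: str
    recipient: str
    body: str
    attachments: Dict[str, Any] = field(default_factory=dict)

class Payment(Message):
    """A payment is merely a message with a well-known attachment."""
    def __init__(self, sender, recipient, amount, currency, memo=""):
        super().__init__(sender, recipient, memo,
                         attachments={"amount": amount, "currency": currency})

# Chat, invoices, haggling over the presidential suite -- all Messages.
chat = Message("alice", "hotel", "Do you have a room at the back?")
pay = Payment("alice", "hotel", amount=300, currency="EUR", memo="3 nights")
```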
So the mission became to convert our payments-system-built-over-messaging into a messaging-system-with-payments. It seemed elegant enough, so over several nights at Rafi's, over ribs and beer, I outlined the story for my team:
messaging-with-payments, not payments-over-messaging.
That's how Project Rafi's was born. I sold it to my team through beer & ribs, toasted the name, and commissioned a demo.
Of course, when people talk of messaging as an application, they think chat or Instant Messaging or SMS. So I seized that metaphor, and turned it into the popular expression of the mission. We were adding IM to our payment system, or so we said. Which was a subtly different subset to what I wanted, but close enough to be easy to sell, easy to discuss, and easy to tune in the detailed requirements and later implementation.
Sounds simple! Let's look back to the meat on the ribs of the original question: how did I do the architectural part? Looking at the above, the serial or static view is like this:
But this assumes too much omniscience, reality is far rougher, full of errors, omissions, unknowns. So, we need an error-correcting overlay, a feedback cycle. The dynamic view is then cyclical:
e. at each step, I test my results & conclusions against the earlier parts. E.g., test the solution against the requirements .. then test against the known business variations ... then similar business problems ... then the business statement.
e-bis. something breaks.
f. Out of which breach, identify the broken part (solution, requirements, research, problem).
f-bis. Jump back to a,b,c or d, depending. Evolve and re-state the broken input.
g. keep doing it until elegance strikes me on the forehead and I can't shake it off!
Now, this might be special, but it's not unique. Indeed, you can find this on the net if you look around. Here's one from Malan and Bredemeyer:
A 9 word summary might be set the mission, start high, go deeper, test, iterate. Or, if you want the 1 word secret, it is Iterate! Those with a sense of military history will see Boyd's OODA loop in there, with the problem being the enemy, the enemy being the problem, and the challenge to spin faster than the problem spins you :) And those with appetite will appreciate now why there are always so many ribs on the plate, and why architecture and fast food don't go well together.
What might offend some people is that it is so simple. I think the response to simplicity is that this easy model hides a lot, but that which it hides is not the essence of Architecture. Rather, the hidden part is the essence of something else, which the Architect is able to successfully integrate without spoiling the secret. And as this post is about Architecture, and is already long enough, I'll stop right there.
the secret of software architecture is . . . iteration!
Hence that age-old complaint of the frustrated architect, "there aren't enough ribs on my plate!"
Sometime in 1995 I needed a high performance database to handle transactions. Don't we all have these moments?
Like everyone, I looked at the market, which was then Oracle, Informix and so forth. But unlike most, I had one advantage. I had seen the dirty side of that business because I'd been called in several times to clean up clagged & scrunted databases for clients who really truly needed their data back up. From that repetitive work came a sad realisation that no database is safe, and the only safe design is one that records, in good time, a clean, unchangeable and easily repairable record of what was happening at any one time, for the inevitable rebuild.
Recovery is the key, indeed, it is the primary principle of database design. So, for my transaction database, I knew I'd have to do that with Oracle, or the rest, and that's where the lightbulb came on. The work required to wrap a reliability layer around a commercial database is approximately as much as the work required to write a small, purpose-limited database. I gambled on that inspiration, and it proved profitable. In one month, I wrote a transaction engine that did the work for 10 years, never losing a single transaction (it came close once though!).
My design process also led me to ponder the truism that all the fast stuff happens in memory, and also that all the reliability stuff happens at the point of logging the transaction request. Between these two points is the answer, which SAP seems to have stumbled on:
... As memory chips get cheaper, more and more of them are being packed into servers. This means that firms, instead of having to store their data on separate disks, can put most of them into their servers’ short-term memory, where they can be accessed and manipulated faster and more easily. The software SAP is releasing next week, a new version of Business ByDesign, its suite of online services for small companies, aims to capitalise on this trend, dubbed “in-memory”. SAP also plans to rewrite other programs along similar lines. ...
The answer is something almost akin to a taboo: the database is only in memory, and the writes to slow storage are only transaction logging, not database actions. Which leads to the conclusion that when it crashes, all startups are recoveries, from the disk-based transaction log. If this were an aphorism, it would be like this: There is only one startup, and it is a recovery.
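A toy sketch of that aphorism, assuming a simple key-value store and an append-only log file (both my own assumptions for illustration, not SAP's design): every write goes to the log first, the working data lives only in memory, and startup is nothing more than replaying the log.

```python
# Toy sketch of "the database is only in memory, writes to disk are only
# the transaction log, and every startup is a recovery". The key-value
# model and the JSON-lines log format are assumptions for illustration.
import json
import os

class InMemoryStore:
    def __init__(self, log_path="transactions.log"):
        self.log_path = log_path
        self.data = {}
        self._recover()          # there is only one startup: a recovery

    def _recover(self):
        """Rebuild the in-memory state by replaying the transaction log."""
        if not os.path.exists(self.log_path):
            return
        with open(self.log_path) as log:
            for line in log:
                entry = json.loads(line)
                self.data[entry["key"]] = entry["value"]

    def put(self, key, value):
        """Log first (the durable act), then update memory."""
        with open(self.log_path, "a") as log:
            log.write(json.dumps({"key": key, "value": value}) + "\n")
            log.flush()
            os.fsync(log.fileno())
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)       # reads never touch the disk

# db = InMemoryStore(); db.put("acct:42", 100); print(db.get("acct:42"))
```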
In-memory technology is already widespread in systems that simply analyse data, but using it to help process transactions is a bigger step. SAP’s software dispenses with the separate “relational databases” where the data behind such transactions are typically stored, and instead retains the data within the server’s memory. This, says Vishal Sikka, the firm’s chief technologist, not only speeds up existing programs—it also makes them cheaper to run and easier to upgrade, and makes possible real-time monitoring of a firm’s performance.
In its space, an in-memory database will whip the standard SQL-based database in read-times, which is the majority usage, and it doesn't have to be a slouch in write times either, because a careful design can deliver writes-per-transaction on par with the classical designs. Not only in performance but in ROI, because the design concept forces it into a highly-reliable, highly-maintainable posture which reduces on-going costs.
But this inversion of classical design is seen as scary by those who are committed to the old ways. Why such a taboo? Partly because, in contrast to my claim that recovery is the primary principle of database design, it has always been seen as an admission of failure, as very slow, as fraught with danger, in essence, something to be discriminated against. And, it is this discrimination that I've seen time and time again: nobody bothers to prove their recovery, because "it never happens to them." Recovery is insurance for databases, and is not necessary except to give your bosses a good feeling.
But that's perception. Reality is different. Recovery can be very fast for all the normal reasons, the processing time for recovering each individual record is about the same as reading in the record off-disk anyway. And, if you really need your data, you really need your recovery. The failure and fall-back to recovery needs to be seen in balance: you have to prove your recovery, so you may as well make it the essence not the fallback.
That said, there are of course limitations to what SAP calls the in-memory approach. This works when you don't mind the occasional recovery, in that always-on performance isn't really possible. (Which is just another way of re-stating the principle that the data never fails, because transaction integrity takes priority over other desires like speed.) Also, complexity and flexibility. It is relatively easy to create a simple database, and it is relatively easy to store a million records in the memory available to standard machines. But this only works if you can architecturally carve that particular area out of your business and get it to stand alone. If you are more used to the monolithic silos with huge databases, datamining, data-ownership fights and so forth, this will be as irrelevant to you as a McDonalds on the Moon.
Some observers are not convinced. They have not forgotten that many of SAP’s new products in the past decade have not been big successes, not least Business ByDesign. “There is healthy scepticism as to whether all this will work,” says Brent Thill of UBS, an investment bank. Existing customers may prefer not to risk disrupting their customised computer systems by adopting the new software.
And don't forget that 3 entire generations of programmers are going to be bemused, at sea, when they ask for the database schema and are told there isn't one. For most of them, there is no difference between SQL and database.
On a closing note, my hat's off to the Economist for picking up this issue, and recognising the rather deeper questions being asked here. It is rare for anyone in the media to question the dogma of computing architecture, let alone a tea-room full of economists. Another gem:
These efforts suggest that in-memory will proliferate, regardless of how SAP will fare. That could change the way many firms do business. Why, for example, keep a general ledger, if financial reports can be compiled on the fly?
Swoon! If they keep this up, they'll be announcing the invention of triple entry bookkeeping in a decade, as that is really what they're getting at. I agree, there are definitely many other innovations out there, waiting to be mined. But that depends on somewhat adroit decision-making, which is not necessarily in evidence. Unfortunately, this in-memory concept is too new an idea for many, so SAP will need to plough the ground for a while.
Larry Ellison, the boss of Oracle, which makes most of its money from such software, has ridiculed SAP’s idea of an in-memory database, calling it “wacko” and asking for “the name of their pharmacist”.
But, after they've done that, after this idea is widely adopted, we'll all be looking back at the buggy whip crowd, and voting who gets the "Ken Olsen of the 21st Century Award."
In a paper Certified Lies: Detecting and Defeating Government Interception Attacks Against SSL, by Christopher Soghoian and Sid Stamm, there is a reasonably good layout of the problem that browsers face in delivering their "one-model-suits-all" security model. It is more or less what we've understood all these years, in that by accepting an entire root list of 100s of CAs, there is no barrier to any one of them going a little rogue.
Of course, it is easy to raise the hypothetical of the rogue CA, and even to show compelling evidence of business models (they cover much the same claims with a CA that also works in the lawful intercept business that was covered here in FC many years ago). Beyond theoretical or probable evidence, it seems the authors have stumbled on some evidence that it is happening:
The company’s CEO, Victor Oppelman confirmed, in a conversation with the author at the company’s booth, the claims made in their marketing materials: That government customers have compelled CAs into issuing certificates for use in surveillance operations. While Mr Oppelman would not reveal which governments have purchased the 5-series device, he did confirm that it has been sold both domestically and to foreign customers.
(my emphasis.) This has been a lurking problem underlying all CAs since the beginning. The flip side of the trusted-third-party concept ("TTP") is the centralised-vulnerability-party or "CVP". That is, you may have been told you "trust" your TTP, but in reality, you are totally vulnerable to it. E.g., from the famous Blackberry "official spyware" case:
Nevertheless, hundreds of millions of people around the world, most of whom have never heard of Etisalat, unknowingly depend upon a company that has intentionally delivered spyware to its own paying customers, to protect their own communications security.
Which becomes worse when the browsers insist, not without good reason, that the root list is hidden from the consumer. The problem that occurs here is that the compelled CA problem multiplies to the square of the number of roots: if a CA in (say) Ecuador is compelled to deliver a rogue cert, then that can be used against a CA in Korea, and indeed all the other CAs. A brief examination of the ways in which CAs work, and browsers interact with CAs, leads one to the unfortunate conclusion that nobody in the CAs, and nobody in the browsers, can do a darn thing about it.
So it then falls to a question of statistics: at what point do we believe that there are so many CAs in there, that the chance of getting away with a little interception is too enticing? Square law says that the chances are say 100 CAs squared, or 10,000 times the chance of any one intercept. As we've reached that number, this indicates that the temptation to resist intercept is good for all except 0.01% of circumstances. OK, pretty scratchy maths, but it does indicate that the temptation is a small but not infinitesimal number. A risk exists, in words, and in numbers.
One CA can hide amongst the crowd, but there is a little bit of a fix to open up that crowd. This fix is to simply show the user the CA brand, to put faces on the crowd. Think of the above, and while it doesn't solve the underlying weakness of the CVP, it does mean that the mathematics of squared vulnerability collapses. Once a user sees their CA has changed, or has a chance of seeing it, hiding amongst the crowd of CAs is no longer as easy.
Why then do browsers resist this fix? There is one good reason, which is that consumers really don't care and don't want to care. In more particular terms, they do not want to be bothered by security models, and the security displays in the past have never worked out. Gerv puts it this way in comments:
Security UI comes at a cost - a cost in complexity of UI and of message, and in potential user confusion. We should only present users with UI which enables them to make meaningful decisions based on information they have.
They love Skype, which gives them everything they need without asking them anything. Which therefore should be reasonable enough motive to follow those lessons, but the context is different. Skype is in the chat & voice market, and the security model it has chosen is well-excessive to needs there. Browsing on the other hand is in the credit-card shopping and Internet online banking market, and the security model imposed by the mid 1990s evolution of uncontrollable forces has now broken before the onslaught of phishing & friends.
In other words, for browsing, the writing is on the wall. Why then don't they move? In a perceptive footnote, the authors also ponder this conundrum:
3. The browser vendors wield considerable theoretical power over each CA. Any CA no longer trusted by the major browsers will have an impossible time attracting or retaining clients, as visitors to those clients’ websites will be greeted by a scary browser warning each time they attempt to establish a secure connection. Nevertheless, the browser vendors appear loathe to actually drop CAs that engage in inappropriate behavior — a rather lengthy list of bad CA practices that have not resulted in the CAs being dropped by one browser vendor can be seen in [6].
I have observed this for a long time now, predicting phishing until it became the flood of fraud. The answer is, to my mind, a complicated one which I can only paraphrase.
For Mozilla, the reason is simple lack of security capability at the *architectural* and *governance* levels. Indeed, it should be noticed that this lack of capability is their policy, as they deliberately and explicitly outsource big security questions to others (known as the "standards groups" such as IETF's RFC committees). As they have little of the capability, they aren't in a good position to use the power, no matter whether they would want to or not. So, it only needs a mildly argumentative approach on the behalf of the others, and Mozilla is restrained from its apparent power.
What then of Microsoft? Well, they certainly have the capability, but they have other fish to fry. They aren't fussed about the power because it doesn't bring them anything of use to them. As a corporation, they are strictly interested in shareholders' profits (by law and by custom), and as nobody can show them a bottom line improvement from CA & cert business, no interest is generated. And without that interest, it is practically impossible to get the various many groups within Microsoft to move.
Unlike Mozilla, my view of Microsoft is much more "external", based on many observations that have never been confirmed internally. However it seems to fit; all of their security work has been directed to market interests. Hence for example their work in identity & authentication (.net, infocard, etc) was all directed at creating the platform for capturing the future market.
What is odd is that all CAs agree that they want their logo on their browser real estate. Big and small. So one would think that there was a unified approach to this, and it would eventually win the day; the browser wins for advancing security, the CAs win because their brand investments now make sense. The consumer wins for both reasons. Indeed, early recommendations from the CABForum, a closed group of CAs and browsers, had these fixes in there.
But these ideas keep running up against resistance, and none of the resistance makes any sense. And that is probably the best way to think of it: the browsers don't have a logical model for where to go for security, so anything leaps the bar when the level is set to zero.
Which all leads to a new group of people trying to solve the problem. The authors present their model as this:
The Firefox browser already retains history data for all visited websites. We have simply modified the browser to cause it to retain slightly more information. Thus, for each new SSL protected website that the user visits, a Certlock enabled browser also caches the following additional certificate information:
A hash of the certificate.
The country of the issuing CA.
The name of the CA.
The country of the website.
The name of the website.
The entire chain of trust up to the root CA.
When a user re-visits a SSL protected website, Certlock first calculates the hash of the site’s certificate and compares it to the stored hash from previous visits. If it hasn’t changed, the page is loaded without warning. If the certificate has changed, the CAs that issued the old and new certificates are compared. If the CAs are the same, or from the same country, the page is loaded without any warning. If, on the other hand, the CAs’ countries differ, then the user will see a warning (See Figure 3).
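Paraphrasing that description as code, under the assumption of a simple per-site cache record (this is my sketch of the logic as described, not the authors' implementation):

```python
# Sketch of the Certlock-style check described above: warn only when a
# site's certificate changes AND the new issuing CA is from a different
# country. This paraphrases the paper's description; the cache record
# layout is my own assumption, not their code.
def certlock_check(cache, site, cert_hash, ca_name, ca_country):
    record = cache.get(site)
    if record is None:
        # First visit: remember the certificate and its issuer.
        cache[site] = {"hash": cert_hash, "ca": ca_name, "country": ca_country}
        return "load"
    if record["hash"] == cert_hash:
        return "load"                       # unchanged certificate
    if record["ca"] == ca_name or record["country"] == ca_country:
        # Changed cert, but same CA or same country: load without warning.
        cache[site] = {"hash": cert_hash, "ca": ca_name, "country": ca_country}
        return "load"
    return "warn"                           # cross-country CA change
```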
This isn't new. The authors credit recent work, but no further back than a year or two. Which I find sad because the important work done by TrustBar and Petnames is pretty much forgotten.
But it is encouraging that the security models are battling it out, because it gets people thinking, and challenging their assumptions. Only actual produced code, and garnered market share is likely to change the security benefits of the users. So while we can criticise the country approach (it assumes a sort of magical touch of law within the countries concerned that is already assumed not to exist, by dint of us being here in the first place), the country "proxy" is much better than nothing, and it gets us closer to the real information: the CA.
From a market for security pov, it is an interesting period. The first attempts around 2004-2006 in this area failed. This time, the resurgence seems to have a little more steam, and possibly now is a better time. In 2004-2006 the threat was seen as more or less theoretical by the hoi polloi. Now however we've got governments interested, consumers sick of it, and the entire military-industrial complex obsessed with it (both in participating and fighting). So perhaps the newcomers can ride this wave of FUD in, where previous attempts drowned far from the shore.
Which reminds me to push out yet another outrageous chapter in secure protocol design. In my hypothesis #4 on Protocol Design, I claim this:
#4.3 Simplicity is Inversely Proportional to the Number of Designers
Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.
Margaret Mead
Simplicity is proportional to the inverse of the number of designers. Or is it that complexity is proportional to the square of the number of designers?
Sad but true, if you look at the classic best of breed protocols like SSH and PGP, they delivered their best results when one person designed them. Even SSL was mostly secure to begin with, and it was only the introduction of PKI with its committees, world-scale identity models, digital signature laws, accountants and lawyers that sent it into orbit around Pluto. Committee-designed monsters such as IPSec and DNSSEC aren't even in the running.
Sometimes a protocol can survive a team of two, but we are taking huge risks (remember the biggest failure mode of all is failing to deliver anything). Either compromise with your co-designer quickly or kill him. Your users will thank you for either choice, they do not benefit if you are locked in a deadly embrace over the sublime but pernickety benefits of MAC-then-encrypt over encrypt-then-MAC, or CBC versus Counter-mode, or or or...
More at hypotheses on Secure Protocol Design.
The systems world will shortly be celebrating a major anniversary milestone. UNIX® is turning 40 years old! Most of us know the story of how UNIX was born, but what about why? Was it born strictly because its founders wanted to play a computer game on a different platform? And why does UNIX continue to thrive 15 years after an (in)famous Byte Magazine article that asked, "Is UNIX dead?"
Good questions for systems architects. You might not want to study Unix, but you should in your time study the full evolution of a few systems, and compare and contrast: why did this one succeed and others fail? Surprisingly, the reasons are generally different each time. It's almost as if the gods of the market learn, and spite us each time we copy past successes.
The very early story:
Ken Thompson developed a game on the GE-645 mainframe called Space Travel. Unfortunately, the game was just running too slow on the GE box, so Thompson rewrote (in assembly) the game for DEC's PDP-7, with the help of Dennis Ritchie. The porting experience led Ken to develop a new OS for the PDP-7. This included a file system as well as the new multi-tasking operating system itself. They included a command-line interpreter and some small utility programs.

The project was originally named Unics, and it could eventually support two simultaneous users, which led to some financial support from Bell. In 1970, UNIX became the official name for the operating system, which ran on the PDP-11/20. It also included roff (a text formatting program) and a text editor.
The things to drag out of that are: Very Small Team. No Corporate Direction. And, they did what they needed. What did they need? Roff. Which was used for preparing papers ... which just happens to have been what all academic institutions need from their computers (above all other needs).
Ultimately, it was rewritten in C in 1973, which made it portable and changed the history of operating systems.
OK, sort of. More clearly, it made it readable. So when Computer Science departments started installing it, they found themselves with a readable, understandable and advanced operating system. What does that mean? Teaching! Unix became a favourite in the Universities because it gave a full pedagogical experience. Which led to a generation of engineers in the early 1980s demanding it.
The portability came later, because these engineers wanted it. In practice, the first port off PDP-11s was done in Wollongong, as an academic exercise. But in the mid 1980s, the cost of hardware had come down, and the cost of software was rapidly rising. At the crossing point was a demand for a cheap operating system, and this meant "one you didn't have to write yourself" (curiously echoed by Bill Gates' purchase of QDOS for $50k).
Unix took over the market for all challengers to IBM, because they couldn't beat IBM on its ability to invest in software. See _The Soul of a New Machine_ for a counterpoint, a book every architect should read. Or to be fair, Unix took over the "sophisticated" end of the market, and MS-DOS took over the "basic PC" end. Between them, they destroyed IBM -- until then the greatest tech company ever -- in a single decade.
IBM of course fought back in the 1990s, and adopted Unix as well. However, marketing games roll on. In this article, IBM adopts the "licensed" view of what Unix is, which is either polite to the licensors or deceptive depending on who butters your bread. It misses out the other two gorillas in the room: BSD on Mac and Linux. Does anyone have any figures on operating systems revenues for Mac OS X and the Linux family?
Compare the chart in the IBM article to the one posted a month back.
Why does Unix live on? In part because it can do the serious things, in part because it is simply available, and in part because its history was so much more oriented to the net, and it has gradually evolved to be good for it. Whereas the main competitor comes from the user-interface world, and eschewed the net in its core principles until 2002. During the 2000s, Microsoft lost a lot of ground because of security and lack of openness, ground that others were well prepared to take up. To read Sun Tzu:
The good fighters of old first put themselves beyond the possibility of defeat, and then waited for an opportunity of defeating the enemy. To secure ourselves against defeat lies in our own hands, but the opportunity of defeating the enemy is provided by the enemy himself. Thus the good fighter is able to secure himself against defeat, but cannot make certain of defeating the enemy.
Patience also is necessary. Why think about these things? Because the same battles are being fought out today. As in the charts here, but also between facebook and myspace, between Blu-ray and HD-DVD, between the gold currencies, between the rebel currencies and the banks, between old media and the new. In each of these battles, the same factors emerge, as do winners and losers, and the only easy way to predict that future is to really study the examples of the past.
(Yes, there are harder and more expensive ways to learn, such as playing it yourself.)
SSL is a protocol that gives a point-to-point connection with some protection against MITM (man-in-the-middle). Attacks against SSL as a security paradigm can be characterised in three levels:
The reason we have to see this holistically, and include all three levels, is that SSL as a tool is worthless if the wider attacks are not dealt with, SSL is only valuable to the extent that they are dealt with, and SSL gets in the way to the extent that it imposes unreasonable responsibilities or expectations on the application or user.
Classical Phishing is the latter case, in that an email is sent to a user, who then clicks on a link. The application is not attacked, the "business" is. This is the MITM that caused the explosion in phishing around 2003. MITB (man-in-the-browser) is the next level down, attacking the application directly. The perverse linkage to the MITM of phishing is that the phishing explosion financed the move into application-level attacks.
Now, all of that is security as it is delivered to users. There is then protocol security, and here SSL has generally excelled. It meets its internal protocol goals very well, and always has done so. Now news has surfaced of a rare thing: an in-protocol attack against SSL:
Members of an industry-wide consortium had been meeting in secret since late September to hash out a fix that will be rolled into thousands of individual pieces of hardware and software that implement the TLS protocol. The bug became public when it was disclosed by a researcher not involved in the project. So far, OpenSSL is the only provider that is close to releasing a patch, according to this status page.
How serious is this? Initial comments were that it was hard to exploit, but now news is filtering through that Anil Kurmus has written an exploit against Twitter.
It's an evolving topic. What I find interesting however is that this represents a good event to make some OODA observations. Here's what I've gathered so far:
"late September" | bug announced to industry consortium secretly? (name?) | 0m |
20091104 | public disclosure by Marsh Ray and Steve Dispensa | 2m |
20091110 | twitter exploit announced | 2m |
The interesting conclusion to draw here is we can possibly measure the size of the OODA loop for the SSL industry, at least in this one event.
And here is what is emerging over time:
20091207 | fix for Mozilla's NSS crypto library announced by Nelson Bolyard | 3m |
20100208 | Firefox nightly builds now include partial patch announced by Daniel Veditz | 5m |
20130409 | SSL Pulse reports 80.3% of SSL sites patched and protected (picture shows 20130905) | 41m |
(Editor's note: Second table added and augmented after original publication.)
Around three weeks ago, I had a data disaster. In a surprise attack, 2 weeks' worth of my SQL DATABASE was wiped out. Right after my FIRST weekend demo of some new code. The horror!
On that Monday morning I went into shell-shock for hours as I tried to trace where the results of the demo -- the very first review of my panzer code -- had disappeared to. By 11:00 there was no answer, and finger-of-blame pointed squarely at some SQL database snafu. The decision was reached to replace the weaponry with tried and trusty blades of all wars previous: flat files, the infantry of data. By the end of the day, the code was written to rebuild the vanguard from its decimated remains, and the next day, work-outs and field exercises proved the results. Two tables entirely replaced, reliably.
That left the main body, a complex object split across many tables, and the rearguard of various sundry administrative units. It took another week to write the object saving & restoring framework, including streaming, model-objects for each element along the lines of MVC (model-view-controller), and unit-testing. (I need a name for this portable object pattern. It came from SOX and I didn't think much of it at the time, but it seems nobody else does it.) Then, some days of unit tests, package tests, field tests, and so forth. Finally, 4-5 days of application re-working to use object database methods, not SQL.
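For flavour, a minimal sketch of the sort of flat-file object save and restore described above, with invented names; the real framework streams far richer model-objects than this:

```python
import json
from pathlib import Path

class Account:
    """Toy model-object that owns its own persistence (no SQL in sight)."""
    def __init__(self, ident, balance=0):
        self.ident = ident
        self.balance = balance

    def save(self, directory):
        # One flat file per object; the object decides its own representation.
        Path(directory, f"{self.ident}.json").write_text(
            json.dumps({"ident": self.ident, "balance": self.balance}))

    @classmethod
    def restore(cls, directory, ident):
        # Rebuild the object from its flat file, under the object's own control.
        data = json.loads(Path(directory, f"{ident}.json").read_text())
        return cls(data["ident"], data["balance"])
```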
16 days later, up and going. The army is on the march, SQL is targeted, acquired, destroyed. Defeated, wiped off the map, no longer a blot on the territory of my application. 2 days of mop-up and I'm back to the demo.
Why go on a holy crusade against SQL? There are several motives for this war:
And then there's the interface. Let us not shame the mere technical _casus belli_ above, let us put the evil that is SQL in a section of abomination, all of its own:
SQL is in the top 5 of the computing world's most awful anachronisms. It's right up there with ASN.1, X.509, LDAP, APL, and other embarrassments to the world of computer science. In this case, there is one reason why SQL stands out like the sideways slice of death of a shamed samurai: data! These things, SQL included, were all designed when data was king and we all had to bow before the august power of our corporate bytes, while white-suited priests inserted the holy decks and the tapes of glory into bright shining temples of the mainframe of enlightenment.
But those imperial times are over. The false data-is-god was slain, discarded and buried, in the glorious & bloody revolution of the Object, that heathen neologism that rose up and slew and enslaved data during the early 1980s. Data-only is slain, data is dead. Data is captured, enslaved, owned. It is now objects, objects, objects. Data is an immoral ghost of its former self, when let out from its rightful context of semantic control.
These are the reasons why I leapt to the field to do battle to the death with the beast that is SQL. My report from the field is as follows:
I continue to mop up. What's the bottom line? Control. The application controls the objects and the objects control their data.
So what went wrong with SQL? Fundamentally it was designed in a day when data was seriously different. Those days are gone; now data is integrally related to code. It's called "object oriented" and even if you don't know how to do it, it is being done to you. The object owns the data, not the database. Data seen naked is an abomination, and SQL is just rags on a beast; it's still a beast.
Sadly, the world is still a decade or two away from this. And, to be fair, hattip to Jeroen van Gelderen for the OO database he designed and built for Webfunds. Using that was a lesson in how much of a delight it was to be OO all the way down to the disk-byte.
I've had a rather troubling rash of blog comment failure recently. Not on FC, which seems to be ok ("to me"), but everywhere else. After about four failures in the last couple of days, I'm starting to get annoyed. I like to think that my time in writing blog comments for other blogs is valuable, and sometimes I think for many minutes about the best way to bring a point home.
But more than half the time, my comment is rejected. The problem is, on the one hand, overly sophisticated comment boxes that rely on exotica like javascript and SSO through some place or other ... and spam, on the other.
These things have destroyed the credibility of the blog world. If you recall, there was a time when people used blogs for _conversations_. Now, most blogs are self-serving promotion tools. Trackbacks are dead, so the conversational reward is gone, and comments are slow. You have to be dedicated to want to follow a blog and put a comment on there, or stupid enough to think your comment matters, and you'll keep fighting the bl**dy javascript box.
The one case where I know clearly "it's not just me" is John Robb's blog. This was a *fantastic* blog where there was great conversation, until a year or two back. It went from dozens to a couple in one hit by turning on whatever flavour of the month was available in the blog system. I've not been able to comment there since, and I'm not alone.
This is denial of service. To all of us. And, this denial of service is the greatest evidence of the failure of Internet security. Yet it is easy, at least theoretically easy, to avoid. Here, it is avoided by the simplest of tricks; maybe one spam per month gets through to me, but if I got spam like others get spam, I'd stop doing the blog. Again, denial of service.
Over on CAcert.org's blog they recently implemented client certs. I'm not 100% convinced that this will eliminate comment spam, but I'm 99.9% convinced. And it is easy to use, and it also (more or less) eliminates that terrible thing called access control, which was delivering another denial of service: the people who could write weren't trusted to write, because the access control system said they had to be access-controlled. Gone, all gone.
According to the blog post on it:
The CAcert-Blog is now fully X509 enabled. From never visited the site before and using a named certificate you can, with one click (log in), register for the site and have author status ready to write your own contribution.
Sounds like a good idea, right? So why don't most people do this? Because they can't. Mostly they can't because they do not have a client certificate. And if they don't have one, there isn't any point in the site owner asking for it. Chicken & egg?
But actually there is another reason why people don't have a client certificate: it is because of all sorts of mumbo jumbo brought up by the SSL / PKIX people, chief amongst which is a claim that we need to know who you are before we can entrust you with a client certificate ... which I will now show to be a fallacy. The reason client certificates work is this:
If you only have a WoT unnamed certificate you can write your article and it will be spam controlled by the PR people (aka editors).

If you had a contributor account and haven't posted anything yet you have been downgraded to a subscriber (no comment or write a post access) with all the other spammers. The good news is once you log in with a certificate you get upgraded to the correct status just as if you'd registered.
We don't actually need to know who you are. We only need to know that you are not a spammer, and you are going to write a good article for us. Both of these are more or less an equivalent thing, if you think about it; they are a logical parallel to the CAPTCHA or turing test. And we can prove this easily and economically and efficiently: write an article, and you're in.
Or, in certificate terms, we don't need to know who you are, we only need to know you are the same person as last time, when you were good.
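A minimal sketch of that "same person as last time" test, assuming the web server hands the application the client certificate in DER form (names invented):

```python
import hashlib

# Site-side memory of certificate fingerprints that have behaved well before.
known_good = set()

def classify_commenter(client_cert_der):
    """We don't care who you are, only whether you are who you were."""
    fingerprint = hashlib.sha256(client_cert_der).hexdigest()
    if fingerprint in known_good:
        return "author"          # was good last time: publish straight away
    return "moderated"           # unknown cert: route through the editors

def mark_good(client_cert_der):
    # Called once a first article passes the human spam check.
    known_good.add(hashlib.sha256(client_cert_der).hexdigest())
```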
This works. It is an undeniable benefit:
There is no password authentication any more. Making sure both certificate and password login behaved reliably was not possible in the time the admins had available.
That's two more pluses right there: no admin de-spamming time lost to us and general society (when there were about 290 in the wordpress click-delete queue) and we get rid of those bl**dy passwords, so another denial of service killed.
Why isn't this more available? The problem comes down to an inherent belief that the above doesn't work. Which is of course a complete nonsense. 2 weeks later, zero comment spam, and I know this will carry on being reliable because the time taken to get a zero-name client certificate (free, it's just your time involved!) is well in excess of the trick required to comment on this blog.
No matter the *results*, because of the belief that "last-time-good-time" tests are not valuable, the feature of using client certs is not effectively available in the browser. That which I speak of here is so simple to code up it can actually be tricked from any website to happen (which is how CAs get it into your browser in the first place, some simple code that causes your browser to do it all). It is basically the creation of a certificate key pair within the browser, with a no-name in it. Commonly called the self-signed certificate or SSC, these things can be put into the browser in about 5 seconds, automatically, on startup or on absence or whenever. If you recall that aphorism:
There is only one mode, and it is secure.
And contrast it to SSL, we can see what went wrong: there is an *option* of using a client cert, which is a completely insane choice. The choice of making the client certificate optional within SSL is a decision not only to allow insecurity in the mode, but also a decision to promote insecurity, by practically eliminating the use of client certs (see the chicken & egg problem).
And this is where SSL and the PKIX deliver their greatest harm. It denies simple cryptographic security to a wide audience, in order to deliver ... something else, which it turns out isn't as secure as hoped because everyone selects the wrong option. The denial of service attack is dominating, it's at the level of 99% and beyond: how many blogs do you know that have trouble with comments? How many use SSL at all?
So next time someone asks you, why these effing passwords are causing so much grief in your support department, ask them why they haven't implemented client certs? Or, why the spam problem is draining your life and destroying your social network? Client certs solve that problem.
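To make the no-name part concrete, here is a rough sketch of generating such a self-signed certificate outside the browser, assuming the pyca/cryptography library; the browser-side key generation referred to above does the equivalent internally:

```python
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a key pair and wrap it in a certificate with an empty subject:
# no name, no claimed identity, just a stable public key to recognise later.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
no_name = x509.Name([])                      # deliberately empty: a no-name cert

cert = (
    x509.CertificateBuilder()
    .subject_name(no_name)
    .issuer_name(no_name)                    # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .sign(key, hashes.SHA256())
)

pem = cert.public_bytes(serialization.Encoding.PEM)
print(pem.decode())
```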
SSL security is like Bismarck's sausages: "making laws is like making sausages, you don't want to watch them being made." The difference is, at least Bismarck got a sausage!
Footnote: you're probably going to argue that SSCs will be adopted by the spammer's brigade once there is widespread use of this trick. Think for a minute before you post that comment, the answer is right there in front of your nose! Also you are probably going to mention all these other limitations of the solution. Think for another minute and consider this claim: almost all of the real limitations exist because the solution isn't much used. Again, chicken & egg, see "usage". Or maybe you'll argue that we don't need it now we have OpenID. That's specious, because we don't actually have OpenID as yet (some few do, not all) and also, the presence of one technology rarely argues against another not being needed, only marketing argues like that.
I got some good criticism on the post about accounting as a profession. Clive said this which I thought I'd share:
As an engineer whose father was an accountant I will give you three guesses as to what he told me not to do when I grew up... Oddly it is the same for engineers, we tend to tell our children to do other things. As I've said before if you want to get on in life you should learn to speak the language that the man who cuts your cheque at the end of the month does, or more correctly his boss ;)

So even if you are just a humble team leader, get yourself three courses,
- MBA,
- Vocal training,
- Psychology or Method acting.
And no I'm not joking about 3.
He's talking about what we do when we get to 30 and beyond, e.g., most readers of this blog. For us older folks looking back, it is depressing that the world looks so sucky; but this is a time-honoured thing. The myths have been stripped away, the rot revealed.
But the youth of today is perpetually optimistic, and the question they ask is eternal and (Spence-like) opinionated: what to study, first?
What then do we recommend for a first degree for someone near 20? It seems that nobody promotes the accountancy field, including the incumbents. Accountants don't practice accountancy, if they are any good. The only accountant I ever knew well committed suicide.
An MBA doesn't work, this is something that should be done after around 5-10 years of experience. Hence, I'm not convinced a straight business degree ("Bachelors in Business Studies" ?) makes sense either, because all that additional stuff doesn't add value until experience is there to help it click into place.
I wouldn't suggest economics. It is like law and accounting, in that it helps to provide a very valuable perspective throughout higher business planes. But it doesn't get you jobs, and it is too divorced from practical life, too hard to apply in detail. Engineering seems far too specialised these days, and a lot of it is hard to work in and subject to outsourcing. Science is like engineering but without the focus.
To my mind, the leading contenders as a first degree are (in no particular order):
⇒ law,
⇒ computer science,
⇒ biotech, and
⇒ marketing.
Firstly, they seem to get you jobs; secondly, law, compsci and marketing are easy to apply generally and broadly, and pay dividends throughout life. I'm not quite sure about Biotech in the "broad" sense, but it is the next big thing, it is the wave to ride in.
Comp sci was the wave of the 1980s and 1990s. Now it is routine. Any technical degree these days tends to include a lot of comp sci, so if there is a tech you enjoy, do that degree and turn it into a comp sci degree on the inside.
Law is in my list because it is the ultimate defensive strategy. Headline Law tends to offend with its aggressively self-serving guild behaviour ("a man who represents himself has a fool for a client and a fool for a lawyer") and as a direct practice (courts) the field seems made for crooks. More technically, all disputes are win-lose by definition, and therefore litigation is destructive by definition, not productive. This is offensive to most of humanity.
But litigation is only the headline, there are other areas. You can apply the practical aspects of law in any job or business, and you can much more easily defend yourself and your business against your future fall, if you have a good understanding of the weapons of mutual destruction (a.k.a. lawsuits). About half of the business failures I've seen have occurred because there was no good legal advisor on the team; this is especially true of financial cryptography which is why I've had to pick up some of it; what one person I know calls "bush lawyering."
The downside to studying law is that you can lose your soul. But actually the mythology in law is not so bad because it is grounded in fundamental rights, so keep those in mind, and don't practice afterwards. It's nowhere near as bad as the computing scene (no grounding at all, e.g., open source) or the marketing blah blah (your mission is to unground other's perceptions!).
Marketing is there because every successful business needs it, and you can only be successful with it. MBAs are full of marketing, which reflects its centrality (and also gives a good option for picking it up later). But marketing is also dangerous because it gives you the tools to fool yourself and all around you, and once you've become accustomed to the elixir, your own grounding is at risk.
I don't advise any of the arts (including Clive's points 2,3) as a primary degree for youth, because businesses hire on substance, so it is important to have some to offer. E.g., people who study psychology tend to end up doing HR ("human resources"), badly, perhaps because they lack the marketing sense to make HR the most important part of the business.
Likewise, avoid anything that is popular, soft, fun, nice and that all your touchy-feely friends want to do. When there are too many people and too little substance, the competition suppresses everyone and makes you all poor. That's the best result because at least it is honest; a very few dishonest ones become rich because they figure out the game. The notion that you can study acting, media, history, photography or any of the finer arts, and then make a living, doesn't bear talking about. It is literally gambling with lives, and has no place in advice to young people.
Editor's note: TOdd wrote this long comment to Audits V and I thought it had to be a post:
Regarding the failure of financial auditing, or statutory audits, there is probably a body of knowledge to be found in academia and business journals. There is certainly a lot of wisdom and knowledge among the accounting profession, although it is heavily suppressed, and auditors, like bankers, start out opaque and unself-aware. All three of these things grow deeper over lifelong habit (lack of honest self appraisal, lack of communication skills to talk about their business in anything but literal terms, and lack of any motive or impulse to be honest or candid even if they wanted to.) So, you'll find the best research on this problem in the business schools and press, for whom auditors are a business problem to be understood, and in the accountancy schools who still harbor a lot of great minds, with too much integrity to survive in the global audit firms. The audit profession took root in the 1930s and I would have to guess that it was captured from day one, by the publicly listed companies they were supposed to be auditing.
Accountants have had the choice to improve themselves at several historic points in time; the 1929 crash, the end of WW2, when every other economy was demolished, and the end of the Soviet threat. What they've actually done was continue fiddling with their false definitions of economic substance, called GAAP, which is really intended to modulate the lies and maintain as much opaqueness as the public would tolerate.
The greatest opportunity to improve business reporting, if that were the intention, has come from improvements in database, computing, and the internet. Internally of course, companies have built information tools to navigate and understand their customers, suppliers, financial structures and inner working. All of it conceived, developed and maintained for the benefit of senior executives. The host-centric, server-centric architecture of the dominant computing architectures (ibm, att, dec, sun, microsoft etc) reflect this.
There is nothing that reveals the intent and will of the AICPA more clearly than its design choices in XBRL. And I doubt if anybody will ever write the book about XBRL, since the people who realized what a betrayal it was, while it was being cooked up, were physically nauseated and left the standards bodies, myself included. Outside the meeting room and convention halls, there were more than a few people who saw what was happening-- and why would they pay annual dues of $thousands, plus travel costs, to attend the next XBRL conference, unless they were part of the corrupt agenda themselves?
I am reminded of the State of Washington democratic party convention I attended a few years ago-- more than 2/3s of the 1000 delegates from the precincts, statewide had never been to a convention before. And, by the end of the convention, a percentage even larger than that, was in open rebellion against the selection of candidates and railroading of the platform and agenda, by top party officials. So, 2/3s of them would never bother participating in the Democratic Party in the next election cycle either.
The people responsible for the sabotage and corruption of the AICPA's XBRL and other technologies, are Barry Melancon, working on behalf of opaque interests in the audit firms and wall street, and, the young turks they hired, Charlie Hoffman and Eric Cohen. Hoffman bubbled up in the Seattle area as an evangelist for microsoft technologies in accounting firms and probably never understood where the money and support for his magic carpet ride was coming from. Microsoft itself being a front-end for IBM and wall street. There have been a few, who try from time to time, to make these technologies honest, such as David RR Weber, Glen Gray, Bill McCarthy...
A more hopeful technology, ebXML emerged shortly after XBRL, and again the history is so vast, somebody should write a book---indeed would write a book-- if they had the stomach for it. Now, here, we ran into a different set of entrenched interests, the EDI industry and adjacent companies and interests. It was a fabulous project, with at least ten different workgroups, each with a lot of dedicated people, supported by many great companies.
To sum it all up-- there are people who want to use the power of computers and communications to reach process improvements, labor savings, AND transparency for all stakeholders. These people have developed over many years, a very complete understanding of business processes in their industries and somewhat less completely, a generalized architecture for all economic transactions. However, there are a plutocracy who own all their companies and make all of the hiring and firing decisions. Obviously, these people at the very top, have leaned hard on the tiller, since the early days.
And the accounting and auditing profession knows where its bread is buttered, see Bob Elliot's diagram of "five stage value chain."
I'm not sure why I ended up on this link ... but it was something very strange. And then I saw this:
"Null References: The Billion Dollar Mistake"
That resonated! So what's it about? Well, some dude named Tony Hoare says:
I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
I'm in for some of that. Where do I sign?
It gets better! Hidden in there was an entire track of bad ideas! I'm in awe! Look at this:
"Standards are Great, but Standardisation is a Really Bad Idea"Standards arise from consensus between competitors signaling maturity in a marketplace. A good standard can ensure interoperability and assist portability, allowing the switching of suppliers. A widely adopted standard can create new markets, and impose useful constraints which in turn foster good design and innovation. Standards are great, and as the old joke goes, that's why we have so many of them!
If standards represent peace, then formal standardisation can be war! Dark, political, expensive and exclusive games played out between large vendors often behind closed doors. There are better ways to forge consensus and build agreements and the notion of a committee taking often a number of years to writing a specification, especially in the absence of implementation experience appears archaic in today's world of Agile methods, test driven development, open source, Wikis and other Web base collaborations.
When I die and go to heaven's conferences, that's what I want to hear. But there's more!
"RPC and its Offspring: Convenient, Yet Fundamentally Flawed"... A fundamental goal of these and countless other similar technologies developed over the years was to shoehorn distributed systems programming into the local programming models so customary and convenient to the average developer, thereby hiding the difficult issues associated with distribution such as partial failure, availability and location issues, and consistency and synchronization problems. ....
The old abstract-away-the-net trick... yes, that one never works, but we'll throw it in for free.
But wait, there's MORE:
"Java enterprise application standards..."In the late 90's, the J2EE standard was greeted with much enthusiasm, as a legion of Java developers were looking to escape proprietary vendor lock-in and ill-conceived technologies and embrace a standard, expert-designed specification for building enterprise applications in Java.
Unfortunately, the results were mixed, and included expensive failures like CMP/BMP entity beans. History has shown that both committee-led standards and container-managed frameworks gave way to open source driven innovation and lightweight POJO-based frameworks.
oh, POJO, Oh, OH, PO JO! (Why am I thinking about that scene in _When Harry met Sally_?)
If you're a good software engineer (and all readers of this blog are *good* at something) you'll also appreciate this one:
"Traditional Programming Models: Stone knives and bearskins in the Google age"Programming has been taught using roughly the same approach for decades, but today's systems use radically different architectures -- consider the explosion in the count of processors and cores, massively distributed environments running parallel computations, and fully virtualized operating environments.
Learn how many of yesterday's programming principles have inadvertently become today's worst practices, and how these anti-patterns continue to form the basis of modern programming languages, frameworks and infrastructure software.
...
How come these guys have all the fun?
When and where is this assembly of net angels? Unlike Dave, I am not giving away tickets. Worse, you can't even pay for it with money. It's over! March 2009, and nobody knew. Who do I kill?
It's terrifically cliched to say it these days, but the net is one of the great engineering marvels of science. The Economist reports it as 40 years old:
Such contentious issues never dawned on the dozen or so engineers who gathered in the laboratory of Leonard Kleinrock (pictured below) at the University of California, Los Angeles (UCLA) on September 2nd, 1969, to watch two computers transmit data from one to the other through a 15-foot cable. The success heralded the start of ARPANET, a telecommunications network designed to link researchers around America who were working on projects for the Pentagon. ARPANET, conceived and paid for by the defence department’s Advanced Research Projects Agency (nowadays called DARPA), was unquestionably the most important of the pioneering “packet-switched” networks that were to give birth eventually to the internet.
Right, ARPA funded a network, and out of that emerged the net we know today. Bottom-up, not top-down like the European competitor, OSI/ISO. Still, it wasn't about doing everything from the bottom:
The missing link was supplied by Robert Kahn of DARPA and Vinton Cerf at Stanford University in Palo Alto, California. Their solution for getting networks that used different ways of transmitting data to work together was simply to remove the software built into the network for checking whether packets had actually been transmitted—and give that responsibility to software running on the sending and receiving computers instead. With this approach to “internetworking” (hence the term “internet”), networks of one sort or another all became simply pieces of wire for carrying data. To packets of data squirted into them, the various networks all looked and behaved the same.
I hadn't realised that this lesson is so old, but that makes sense. It is a lesson that will echo through time, doomed to be re-learnt over and over again, because it is so uncomfortable: The application is responsible for getting the message across, not the infrastructure. To the extent that you make any lower layer responsible for your packets, you reduce reliability.
This subtlety -- knowing what you could push down into the lower layers, and what you cannot -- is probably one of those things that separates the real engineers from the journeymen. The wolves from the sheep, the financial cryptographers from the Personal-Home-Pagers. If you thought TCP was reliable, you may count yourself amongst the latter, the sheepish millions who believed in that myth, and partly got us to the security mess we are in today. (Related, it seems, is that cloud computing has the same issue.)
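For illustration, a toy sketch of the principle, with an invented acknowledgement convention: the application keeps resending until it sees its own application-level acknowledgement, rather than trusting any lower layer to guarantee delivery:

```python
import socket

def send_until_acked(payload: bytes, addr, attempts=5, timeout=2.0):
    """The application, not the network, decides when the message has arrived."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    for _ in range(attempts):
        sock.sendto(payload, addr)            # fire into the unreliable network
        try:
            reply, _ = sock.recvfrom(1024)
            if reply == b"ACK " + payload[:16]:
                return True                   # our peer, not the wire, confirmed it
        except socket.timeout:
            continue                          # no confirmation seen: try again
    return False
```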
Curiously, though, from the rosy eyed view of today, it is still possible to make the same layer mistake. Gunnar reported on the very same Vint Cerf saying today (more or less):
Internet Design Opportunities for Improvement

There's a big Gov 2.0 summit going on, which I am not at, but at the event apparently John Markoff asked Vint Cerf the following question: "what would you have designed differently in building the Internet?" Cerf had one answer: "more authentication"
I don't think so. Authentication, or authorisation or any of those other shuns is again something that belongs in the application. We find it sits best at the very highest layer, because it is a claim of significant responsibility. At the intermediate layers you'll find lots of wannabe packages vying for your corporate bux:
* IP * IP Password * Kerberos * Mobile One Factor Unregistered * Mobile Two Factor Registered * Mobile One Factor Contract * Mobile Two Factor Contract * Password * Password Protected transport * Previous Session * Public Key X.509 * Public Key PGP * Public Key SPKI * Public Key XML Digital Signature * Smartcard * Smartcard PKI * Software PKI * Telephony * Telephony Nomadic * Telephony Personalized * Telephony Authenticated * Secure remote password * SSL/TLS Client Authentication * Time Sync Token * Unspecified
and that's just in SAML! "Holy protocol hodge-podge Batman!" says Gunnar, and he's not often wrong.
Indeed, as Adam pointed out, the net works in part because it deliberately shunned the auth:
The packet interconnect paper ("A Protocol for Packet Network Intercommunication," Vint Cerf and Robert Kahn) was published in 1974, and says "These associations need not involve the transmission of data prior to their formation and indeed two associates need not be able to determine that they are associates until they attempt to communicate."
So what was Vint Cerf getting at? He clarified in comments to Adam:
The point is that the current design does not have a standard way to authenticate the origin of email, the host you are talking to, the correctness of DNS responses, etc. Does this autonomous system have the authority to announce these addresses for routing purposes? Having standard tools and mechanisms for validating identity or authenticity in various contexts would have been helpful.
Right. The reason we don't have standard ways to do this is because it is too hard a problem. There is no answer to what it means:
people like me could and did give internet accounts to (1) anyone our boss said to and (2) anyone else who wanted them some of this internet stuff and wouldn't get us in too much trouble. (Hi S! Hi C!)
which therefore means, it is precisely and only whatever the application wants. Or, if your stack design goes up fully past layer 7 into the people layer, like CAcert.org, then it is what your boss wants. So, Skype has it, my digital cash has it, Lynn's X959 has it, and PGP has it. IPSec hasn't got it, SSL hasn't got it, and it looks like SAML won't be having it, in truck-loads :) Shame about that!
Digital signature technology can help here but just wasn't available at the time the TCP/IP protocol suite was being standardized in 1978.
(As Gunnar said: "Vint Cerf should let himself off the hook that he didn't solve this in 1978.") Yes, and digital signature technology is another reason why modern clients can be designed with it, built in and aligned to the application. But not "in the Internet" please! As soon as the auth stuff is standardised or turned into a building block, it has a terrible habit of turning into treacle. Messy brown sticky stuff that gets into everything, slows everyone down and gives young people an awful insecurity complex derived from pimples.
Oops, late addition of counter-evidence: "US Government to let citizens log in with OpenID and InfoCard?" You be the judge!
Two hard numbers affecting the attack model. The cost of attacking a CAPTCHA system with people in developing regions, from the Economist's report on the state of the CAPTCHA nation:
The biggest flaw with all CAPTCHA systems is that they are, by definition, susceptible to attack by humans who are paid to solve them. Teams of people based in developing countries can be hired online for $3 per 1,000 CAPTCHAs solved. Several forums exist both to offer such services and parcel out jobs. But not all attackers are willing to pay even this small sum; whether it is worth doing so depends on how much revenue their activities bring in. “If the benefit a spammer is getting from obtaining an e-mail account is less than $3 per 1,000, then CAPTCHA is doing a perfect job,” says Dr von Ahn.
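To spell out the arithmetic behind Dr von Ahn's remark, a back-of-envelope sketch using the $3 per 1,000 figure:

```python
solve_cost = 3.00 / 1000          # roughly $0.003 paid per CAPTCHA solved by a human

def captcha_is_working(revenue_per_account):
    """The CAPTCHA 'works' only while an account earns the spammer
    less than it costs to have a human solve the puzzle."""
    return revenue_per_account < solve_cost

print(captcha_is_working(0.001))   # True: not worth paying the solvers
print(captcha_is_working(0.05))    # False: pay the $3 and keep spamming
```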
And here, outside our normal programme, is news from RAH that people pay for the privilege of being a suicide bomber:
A second analysis with Palantir uncovered more details of the Syrian networks, including profiles of their top coordinators, which led analysts to conclude there wasn't one Syrian network, but many. Analysts identified key facilitators, how much they charged people who wanted to become suicide bombers, and where many of the fighters came from. Fighters from Saudi Arabia, for example, paid the most -- $1,088 -- for the opportunity to become suicide bombers.
It's important to examine security models remote to our own, because it gives us neutral lessons on how the economics affects the result. An odd comparison there: that number, $1,088, is about the value required to acquire a good-but-false set of identity documents.
Following yesterday's post, here's a today example of thinking about unknowns -- yesterday's, today's and tomorrow's. Currently the experts in crypto and protocol circles are championing "algorithm agility". Why? Because SHA1 is under a cloud, and MD5 is all-but-drowned. It should be replaced!
MD5 is yesterday's unknown. I vaguely recall that MD5 was "not best" in 1995, and the trick was to put in SHA1 (and not SHA0). So is SHA1: this blog has frequently reported on SHA1 running into stormy weather, since the original Wang et al papers. I even placed it into a conceptual framework of Pareto security. We've known this since mid 2004, which makes it very yesterday's news.
Unfortunately, a lot of groups did not heed the warnings, and are still running systems based either loosely or completely on SHA1, MD5 or worse. And now the researchers have the bit between their academic teeth, and are attacking the CAs and their certs.
The good news is that people are now working to replace SHA1. The bad news is that they are looking around for someone (else) to blame. And one of the easy targets is the protocol itself, which is "insufficiently agile". Developers can do something about this, they can add "algorithm agility" and then the problem will go away, so the theory goes.
But another story can be seen in the SSL and OpenPGP communities. Both of these places spent inordinate amounts of time on algorithm agility in the past: RSA, DSA, bit strength, hashes within signatures, AES, DES, Blowfish, ... these are all negotiable, replaceable, variable _within the protocol_ at some level or other. In fact, during the 1990s, a great and glorious war was fought against patents, governments, businesses, individuals and even bits&bytes. For various reasons, each of these battlegrounds became a siren call for more agility.
But it seems that both camps forgot to make the hash function fully agile and replaceable. (Disclosure: I was part of the OpenPGP group, so I have more of a view on that camp. There were occasional muted discussions on this, but in general, the issue was always deferred. There was a mild sense of urgency, knowledge of the gap, but finishing the document was always more important than changing the protocol. I still don't disagree with that sense of priorities.)
Hash replacement was forgotten because all of the efforts were spent on fighting the last war. Agility wasn't pursued in its generality, because it was messy, complex and introduced more than its share of problems; instead, only known threats from the last war were made agile. And one was forgotten, of course, being the one never in dispute.
Instead of realising the trap, and re-thinking the approach, the cries of algorithm agility are getting louder. In 2019 we'll be here again, with complete and perfect hash agility (NIST-approved no doubt) and what will happen? Something else will break, and it'll be something that we didn't make agile. The cycle will start again.
Instead, cut the gordian knot, and go with The One:
"There is one cipher suite, and it is numbered Number 1."
When you see a cloud, don't buy an umbrella; replace the climate. With #2. Completely. This way, you and your users will benefit from all of the potential for a redesign of the entire protocol, not just the headline things from the last war that catch up to you today.
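A sketch, with invented algorithm names, of what that looks like in code: each protocol version carries exactly one fixed suite, so "replacing the hash" means shipping the next version whole, not negotiating options inside the current one:

```python
# One suite per protocol version; nothing inside a version is negotiable.
# Algorithm names below are illustrative only.
SUITES = {
    1: {"cipher": "AES-128-CBC", "mac": "HMAC-SHA1",   "kex": "RSA-2048"},
    2: {"cipher": "AES-256-GCM", "mac": "HMAC-SHA256", "kex": "ECDH-P256"},
}

def negotiate(client_versions, server_versions):
    """The only 'negotiation' is picking the highest version both sides ship."""
    common = set(client_versions) & set(server_versions)
    if not common:
        raise ValueError("no common protocol version")
    version = max(common)
    return version, SUITES[version]

# Example: an upgraded client talking to an old server still lands on suite 1,
# and the upgrade path is simply to ship version 2 everywhere.
print(negotiate([1, 2], [1]))
```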
Most well-designed systems will last at least 5 years. Which gives room for a general planning cycle for replacing the lot. Which -- once you understand the strategy -- is a tractable design problem, because in modern software product development, most products are replaced completely every few years. A protocol need be no different.
So, once every cycle, you might spend a year cleaning up from yesterday's war. And then, we get back to thinking about tomorrow's war for several years.
The CA and PKI business is busy this week. CAcert, a community Certification Authority, has a special general meeting to resolve the trauma of the collapse of their audit process. Depending on who you ask, my resignation as auditor was either the symptom or the cause.
In my opinion, the process wasn't working, so now I'm switching to the other side of the tracks. I'll work to get the audit done from the inside. Whether it will be faster or easier this way is difficult to say, we only get to run the experiment once.
Meanwhile, Mike Zusman and Alex Sotirov are claiming to have breached the EV green bar thing used by some higher end websites. No details available yet, it's the normal tease before a BlabHat style presentation by academics. Rumour has it that they've exploited weaknesses in the browsers. Some details emerging:
With control of the DNS for the access point, the attackers can establish their machines as men-in-the-middle, monitoring what victims logged into the access point are up to. They can let victims connect to EV SSL sites - turning the address bars green. Subsequently, they can redirect the connection to a DV SSL session under a certificate they have gotten illicitly, but the browser will still show the green bar.
Ah that old chestnut: if you slice your site down the middle and do security on the left and no or lesser security on the right, guess where the attacker comes in? Not the left or the right, but up the middle, between the two. He exploits the gap. Which is why elsewhere, we say "there is only one mode and it is secure."
Aside from that, this is an interesting data point. It might be considered that this is proof that the process is working (following the GP theory), or it might be proof that the process is broken (following the sleeping-dogs-lie model of security).
Although EV represents a good documentation of what the USA/Canada region (not Europe) would subscribe as "best practices," it fails in some disappointing ways. And in some ways it has made matters worse. Here's one: because the closed proprietary group CA/B Forum didn't really agree to fix the real problems, those real problems are still there. As Extended Validation has held itself up as a sort of gold standard, this means that attackers now have something fun to focus on. We all knew that SSL was sort of facade-ware in the real security game, and didn't bother to mention it. But now that the bigger CAs have bought into the marketing campaign, they'll get a steady stream of attention from academics and press.
I would guess less so from real attackers, because there are easier pickings elsewhere, but maybe I'm wrong:
"From May to June 2009 the total number of fraudulent website URLs using VeriSign SSL certificates represented 26% of all SSL certificate attacks, while the previous six months presented only a single occurrence," Raza wrote on the Symantec Security blogs.... MarkMonitor found more than 7,300 domains exploited four top U.S. and international bank brands with 16% of them registered since September 2008.
.... But in the latest spate of phishing attempts, the SSL certificates were legitimate because "they matched the URL of the fake pages that were mimicking the target brands," Raza wrote.VeriSign Inc., which sells SSL certificates, points out that SSL certificate fraud currently represents a tiny percentage of overall phishing attacks. Only two domains, and two VeriSign certificates were compromised in the attacks identified by Symantec, which targeted seven different brands.
"This activity falls well within the normal variability you would see on a very infrequent occurrence," said Tim Callan, a product marketing executive for VeriSign's SSL business unit. "If these were the results of a coin flip, with heads yielding 1 and tails yielding 0, we wouldn't be surprised to see this sequence at all, and certainly wouldn't conclude that there's any upward trend towards heads coming up on the coin."
Well, we hope that nobody's head is flipped in an unsurprising fashion....
It remains to be seen whether this makes any difference. I must admit, I check the green bar on my browser when online-banking, but annoyingly it makes me click to see who signed it. For real users, Firefox says that it is the website, and this is wrong and annoying, but Mozilla has not shown itself adept at understanding the legal and business side of security. I've heard Safari has been fixed up so probably time to try that again and report sometime.
Then, over to Germany, where a snafu with a HSM ("high security module") caused a root key to be lost (also in German). Over in the crypto lists, there are PKI opponents pointing out how this means it doesn't work, and there are PKI proponents pointing out how they should have employed better consultants. Both sides are right of course, so what to conclude?
Test runs with Germany's first-generation electronic health cards and doctors' "health professional cards" have suffered a serious setback. After the failure of a hardware security module (HSM) holding the private keys for the root Certificate Authority (root CA) for the first-generation cards, it emerged that the data had not been backed up. Consequently, if additional new cards are required for field testing, all of the cards previously produced for the tests will have to be replaced, because a new root CA will have to be generated. ... Besides its use in authentication, the root CA is also important for card withdrawal (the revocation service).
The first thing to realise was that this was a test rollout and not the real thing. So the test discovered a major weakness; in that sense it is successful, albeit highly embarrassing because it reached the press.
The second thing is the HSM issue. As we know, PKI is constructed as a hierarchy, or a tree. At the root of the tree is the root key of course. If this breaks, everything else collapses.
Hence there is a terrible fear of the root breaking. This feeds into the wishes of suppliers of high security modules, who make hardware that protect the root from being stolen. But, in this case, the HSM broke, and there was no backup. So a protection for one fear -- theft -- resulted in a vulnerability to another fear -- data loss.
A moment's thought and we realise that the HSM has to have a backup. Which has to be at least as good as the HSM. Which means we then have some rather cute conundrums, based on the Alice in Wonderland concept of having one single root except we need multiple single roots... In practice, how do we create the root inside the HSM (for security protection) and get it to another HSM (for recovery protection)?
Serious engineers and architects will be reaching for one word: BRITTLE! And so it is. Yes, it is possible to do this, but only by breaking the hierarchical principle of PKI itself. It is hard to break fundamental principles, and the result is that PKI will always be brittle, the implementations will always have contradictions that are swept under the carpet by the managers, auditors and salesmen. The PKI design is simply not real world engineering, and the only thing that keeps it going is the institutional deadly embrace of governments, standards committees, developers and security companies.
Not the market demand. But, not all has been bad in the PKI world. Actually, since the bottoming out of the dotcom collapse, certs have been on the uptake, and market demand is present albeit not anything beyond compliance-driven. Here comes a minor item of success:
VeriSign, Inc. [SNIP] today reported it has topped the 1 billion mark for daily Online Certificate Status Protocol (OCSP) checks.[SNIP] A key link in the online security chain, OCSP offers the most timely and efficient way for Web browsers to determine whether a Secure Sockets Layer (SSL) or user certificate is still valid or has been revoked. Generally, when a browser initiates an SSL session, OCSP servers receive a query to check to see if the certificate in use is valid. Likewise, when a user initiates actions such as smartcard logon, VPN access or Web authentication, OCSP servers check the validity of the user certificate that is presented. OCSP servers are operated by Certificate Authorities, and VeriSign is the world's leading Certificate Authority.
[SNIP] VeriSign is the EV SSL Certificate provider of choice for more than 10,000 Internet domain names, representing 74 percent of the entire EV SSL Certificate market worldwide.
(In the above, I've snipped the self-serving marketing and one blatant misrepresentation.)
Certificates are static statements. They can be revoked, but the old design of downloading complete lists of all revocations was not really workable (some CAs ship megabyte-sized lists). We now have a new thing whereby if you are in possession of a certificate, you can do an online check of its status, called OCSP.
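As a minimal sketch of such a status check, assuming the pyca/cryptography library and that the responder URL is already known (real clients read it out of the certificate's Authority Information Access extension):

```python
import urllib.request
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

def check_revocation(cert_pem: bytes, issuer_pem: bytes, ocsp_url: str):
    """Ask the CA's OCSP responder whether this certificate is still good."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    issuer = x509.load_pem_x509_certificate(issuer_pem)

    # Build the OCSP request identifying the certificate by issuer and serial.
    builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
    request = builder.build().public_bytes(serialization.Encoding.DER)

    # POST it to the responder and parse the DER-encoded answer.
    http_req = urllib.request.Request(
        ocsp_url, data=request,
        headers={"Content-Type": "application/ocsp-request"})
    with urllib.request.urlopen(http_req) as reply:
        response = ocsp.load_der_ocsp_response(reply.read())

    return response.certificate_status   # GOOD, REVOKED or UNKNOWN
```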
The fundamental problem with this, and the reason why it took the industry so long to get around to making revocation a real-time thing, is that once you have that architecture in place, you no longer need certificates. If you know the website, you simply go to a trusted provider and get the public key. The problem with this approach is that it doesn't allow the CA business to sell certificates to web site owners. As it lacks any business model for CAs, the CAs will fight it tooth & nail.
Just another conundrum from the office of security Kafkaism.
Here's another one, this time from the world of code signing. The idea is that updates and plugins can be sent to you with a digital signature. This means variously that the code is good and won't hurt you, or someone knows who the attacker is, and you can't hurt him. Whatever it means, developers put great store in the apparent ability of the digital signature to protect themselves from something or other.
But it doesn't work with Blackberry users. Allegedly, a Blackberry provider sent a signed code update to all users in United Arab Emirates:
Yesterday it was reported by various media outlets that a recent BlackBerry software update from Etisalat (a UAE-based carrier) contained spyware that would intercept emails and text messages and send copies to a central Etisalat server. We decided to take a look to find out more....
Whenever a message is received on the device, the Recv class first inspects it to determine if it contains an embedded command — more on this later. If not, it UTF-8 encodes the message, GZIPs it, AES encrypts it using a static key (”EtisalatIsAProviderForBlackBerry”), and Base64 encodes the result. It then adds this bundle to a transmit queue. The main app polls this queue every five seconds using a Timer, and when there are items in the queue to transmit, it calls this function to forward the message to a hardcoded server via HTTP (see below). The call to http.sendData() simply constructs the POST request and sends it over the wire with the proper headers.
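The packaging pipeline described there is mundane stuff: UTF-8, GZIP, AES under the static key, Base64. A rough Python sketch follows; the cipher mode and padding are my assumptions, since the analysis above does not name them.

    import base64, gzip
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY = b"EtisalatIsAProviderForBlackBerry"   # 32 bytes, i.e. AES-256

    def package(message: str) -> bytes:
        data = gzip.compress(message.encode("utf-8"))        # UTF-8, then GZIP
        padder = padding.PKCS7(128).padder()                 # padding assumed
        data = padder.update(data) + padder.finalize()
        enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()  # mode assumed
        return base64.b64encode(enc.update(data) + enc.finalize())  # Base64 bundle

One consequence of the hardcoded key is that anyone who has read the analysis can presumably unwrap the intercepted traffic themselves, which rather undercuts the point of encrypting it.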
Oops! A signed spyware from the provider that copies all your private email and sends it to a server. Sounds simple, but there's a gotcha...
The most alarming part about this whole situation is that people only noticed the malware because it was draining their batteries. The server receiving the initial registration packets (i.e. “Here I am, software is installed!”) got overloaded. Devices kept trying to connect every five seconds to empty the outbound message queue, thereby causing a battery drain. Some people were reporting on official BlackBerry forums that their batteries were being depleted from full charge in as little as half an hour.
So, even though the spyware provider had a way to turn it on and off:
It doesn’t seem to execute arbitrary commands, just packages up device information such as IMEI, IMSI, phone number, etc. and sends it back to the central server, the same way it does for received messages. It also provides a way to remotely enable/disable the spyware itself using the commands “start” and “stop”.
There was something wrong with the design, and everyone's BlackBerry went mad. Two points. First: if you want to spy on your own customers, be careful, and test it. Get quality engineers onto that part, because you are perverting a brittle design, and that is tricky stuff.
Second point. If you want to control a large portion of the population who hold these devices, the centralised hierarchy of PKI and its one-root-to-bind-them-all principle would seem to be perfectly designed. Nobody can control it except the center, which puts you in charge. In this case, the center can use its powerful code-signing abilities to deliver whatever it likes to you. (You trust what it tells you to trust, of course.)
Which has led some wits to label the CAs as centralised vulnerability partners. Which is odd, because some organisations that should know better than to outsource the keys to their security continue to do so.
But who cares, as long as the work flows for the consultants, the committees, the HSM providers and the CAs?
Let's talk about why we want Identity. There appear to be two popular reasons why Identity is useful. One is as a handle for the customer experience, so that our dear user can return day after day and maintain her context.
The other is as a vector of punishment. If something goes wrong, we can punish our user, no longer dear.
It's a sad indictment of security, but it does seem as if the state of the security nation is that we cannot design, build and roll-out secure and safe systems. Abuse is likely, even certain, sometimes relished: it is almost a business requirement for a system of value to prove itself by having the value stolen. Following the inevitable security disaster, the business strategy switches smoothly to seeking who to blame, dumping the liability and covering up the dirt.
Users have a very different perspective. Users are well aware of the upsides and downsides, they know well: Identity is for good and for bad.
Indeed, one of the persistent fears of users is that an identity system will be used to hurt them. Steal their soul, breach their privacy, hold them to unreasonable terms, ultimately hunt them down and hurt them: these are some of the thoughts that invasive systems bring to the mind of our dear user.
This is the bad side of identity: the individual and the system are "in dispute," it's man against the machine, Jane against Justice. Unlike the usage case of "identity-as-a-handle," which seems to be relatively well developed in theory and documentation, the "identity-as-punishment" metaphor seems woefully inadequate. It is little talked about; it is the domain of lawyers and investigators, police and journalists. It's not the domain of technologists. Outside the odd and forgettable area of law, disputes are a non-subject, not covered at all where I believe they are needed most: marketing, design, systems building, customer relations, cost analysis.
Indeed, disputes are taboo for any business.
Yet, this is unsustainable. I like to think of good Internet (or similar) systems as an inverted pyramid. On the top, the mesa, is the place where users build their value. It needs to be flat and stable. Efficient, and able to expand horizontally without costs. Hopefully it won't shift around a lot.
Dig slightly down, and we find the dirty business of user support. Here, the business faces death by a thousand tiny support cuts. Each is trivial, cheap and ignorable, except in the aggregate. Below them, deeper down, are the 100 interesting support issues. Deeper still, the 10 or so really serious red alerts. Of which one becomes a real dispute.
The robustness of the pyramid is based on the relationship between the dispute at the bottom, the support activity in the middle, and the top, as it expands horizontally for business and for profit.
Your growth potential is teetering on this one thing: the dispute at the apex of the pyramid. And, if you are interested in privacy, this is the front line, for a perverse reason: this is where it is most lost. Support and full-blown disputes are the front line of privacy and security. Events in this area are destroyers of trust, they are the bane of marketing, the nightmare of PR.
Which brings up some interesting questions. If support is such a destroyer of trust, why is it an afterthought in so many systems? If the dispute is such a business disaster, why is resolution not covered at all? Or hidden, taboo? Or, why do businesses think that their dispute resolution process starts with their customers' identity handles? And ends with the lawyers?
Here's a thought: If badly-handled support and dispute events are leaks of privacy, destroyers of trust, maybe well-handled events are builders of trust? Preservers of privacy?
If that is plausible, if it is possible that good support and good dispute handling build good trust ... maybe a business objective is to shift the process: support designed up front, disputes surfaced, all of it open? A mature and trusted provider might say: we love our disputes, we promote them. Come one, come all. That's how we show we care!
An immature and untrusted provider will say: we have no disputes, we don't need them. We ask you, the user, to believe in our promise.
The principle that the business hums along on top of an inverted pyramid, that rests ultimately on a small powerful but brittle apex, is likely to cause some scratching of technophiliac heads. So let me close the circle, and bring it back to the Identity topic.
If you do this, if you design the dispute mechanism as a fully cross-discipline business process for the benefit of all, not only will trust go positive and privacy become aligned, you will get an extra bonus. A carefully constructed dispute resolution method frees up the identity system, as the latter no longer has to do double duty as the user handle *and* the facade of punishment. Your identity system can simply concentrate on the user's experience. The dark clouds of fear disappear, and the technology has a chance to work how the techies said it would.
We can pretty much de-link the entire identity-as-handles from the identity-as-punishment concept. Doing that removes the fear from the user's mind, because she can now analyse the dispute mechanism on its merits. It also means that the Identity system can be written only for its technical and usability merits, something that we always wanted to do but never could, quite.
(This is the rough transcript of a talk I gave at the Identity & Privacy conference in London a couple of weeks ago. The concept was first introduced at LexCybernetoria, was initially tried by WebMoney, partly explored in digital gold currencies, and finally was built in CAcert's Arbitration project.)
Dani writes: Here is an in-depth analysis of one of the (if not THE) most advanced malware currently in circulation. Please note the wide selection of defensive and offensive measures, including extensive use of strong cryptography.
we summarize the inner workings and practical implications of this latest malicious software application produced by the Conficker developers. In addition to the dual layers of packing and encryption used to protect A and B from reverse engineering, this latest variant also cloaks its newest code segments, along with its latest functionality, under a significant layer of code obfuscation to further hinder binary analysis.
Its choice of crypto suite is RSA4096, RC4, MD6 (all designed by Ron Rivest, as the authors note). A fascinating read for all interested in information security.
In comments to yesterday's H5, "AC2" asked:
Confused by the applicability of this particular hypothesis (and in general all of them)...Are you primarily concerned with over the wire comms, especially outside the 'boundaries' (whatever they are) of an organisation's network?
If not, then would App to DB communication fall in its remit? H5 is pretty much broken in this space afaik, as such communication IS dependent on underlying infrastructure/ protocols to a great extent.. Unless you suggest that the app encrypt everything apart from the SELECT WHERE FROM etc in the SQL...
Posted by: AC2 at February 16, 2009 03:05 AM
Good question, and here comes a sweeping, hand waving answer.
Security is all-encompassing, top-to-bottom, left-to-right, from the requirements all the way out. This hypothesis says that the requirement to secure covers the entire space.
"App to DB" comms and SQL fall within that. So the answer is, yes, H5 decidedly covers everything, including SELECT WHERE FROM etc in SQL. That is, there is no point in securing the rest of it if your database is an open love letter to the attacker.
How do we apply this in practice? Here are two answers. The first answer could be called "don't do that!" I personally have not followed the literature on this, because I looked at SQL databases a long time ago (I used to port them into Unix systems, and repair broken ones for desperate clients) and decided they can't be economically made secure. So I don't ever use them. Period. I use things I can secure, which means, I write my own databases. Sometimes, these are known as "flat files" in the industry; I once saw an estimate that something like 30% of databases are of this form.
The second answer could be called "kick-back." Once we decide that the need is for end-to-end security, this changes a lot of things. One of them is that we assume the data can become public, or should be able to become public, without any loss. For example, in my digital cash work, if all the server-side transaction databases were viewable by an attacker, it wouldn't mean that he could steal money (although there is an obvious privacy angle here). For a second example, refer to any Lynn.std.rant on x9.59 and you will find the same deal: because the analysis revealed miles and miles of processes and steps, it was reasoned that there was no way to keep the data unviewable, so the only response was to neutralise it. Cf. "naked and vulnerable transactions" cannot be protected with encryption.
So to come back to AC2's comment, yes, something is pretty much broken in this space. Whether one thinks it is H5, or whether one thinks it is the original decision to use an unsecurable and unreliable tool is rather a political choice for you. If your job is SQL programming, and you want to keep your job, I know which way you're going to call it, and nobody can fault you for that :) OTOH, if you did set the requirements for security, ... well, how serious was that requirement?
PS: in all the above, where I use the word "secure" I really mean the superset of "reliable", including in this case, security.
So much of what we do today is re-inventing what others have already done, but has been lost in the noise. Today, Ian Brown said:
EIFFEL is a group of leading networking researchers funded by the European Commission to "provide a place for discussing and exchanging ideas and research trajectories on the future of the Internet architecture and governance building as a foundation of the future networked society." I'm proud they have asked me to speak on Tuesday at their second meeting, alongside such luminaries as MIT's Dr David "end-to-end principle" Clark. You can see my presentation below — any comments (before or after Tuesday morning) most welcome!
What caught my eye, other than the grandiose goal, was the mention of an end-to-end principle. Lo and behold, Ian linked to this over on wikipedia:
The principle states that, whenever possible, communications protocol operations should be defined to occur at the end-points of a communications system, or as close as possible to the resource being controlled. According to the end-to-end principle, protocol features are only justified in the lower layers of a system if they are a performance optimization; hence, Transmission Control Protocol (TCP) retransmission for reliability is still justified, but efforts to improve TCP reliability should stop after peak performance has been reached.
Aha! Well, that makes sense. This is very close to what I say in this unpublished hypothesis, here:
Hypothesis #5 -- Security Begins at the Application and Ends at the Mind
The application must do most of the work [1]. For really secure systems, only application security is worthwhile. That is simply because most applications are delivered into environments where they have little control or say over how the underlying system is set up.
#5.1 Security below the application layer is unreliable
For security needs, there is little point in, for example, relying on (or specifying) IPSec, the authentication capabilities of PKI, trusted rings, or suchlike devices. These things are fine under laboratory conditions, but you have no control once it leaves your door. Out there in the real world, and even within your own development process, there is just too much scope for people to forget or re-implement parts that will break your model.
If you need it, you have to do it yourself. This applies as much to retries and replay protection as to authentication and encryption; in a full secure system you will find yourself dealing with all these issues at the high layer eventually, anyway, so build them in from the start.
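As one illustration of doing it yourself at the application layer, here is a minimal sketch of replay protection using per-message nonces; the helper names and the shared-key setup are mine, for illustration only.

    import hashlib, hmac, os

    SECRET = os.urandom(32)          # shared application key (illustrative)
    seen_nonces = set()              # in production: bounded, persistent storage

    def send(payload: bytes):
        nonce = os.urandom(16)
        tag = hmac.new(SECRET, nonce + payload, hashlib.sha256).digest()
        return nonce, payload, tag

    def receive(nonce: bytes, payload: bytes, tag: bytes) -> bytes:
        expected = hmac.new(SECRET, nonce + payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("bad authenticator")
        if nonce in seen_nonces:
            raise ValueError("replay detected")   # the check the lower layers won't do for you
        seen_nonces.add(nonce)
        return payload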
Try these quick quizzes. It helps if you close your eyes and think of the question at the deeper levels.
- Is your firewall switched on? How do you know? Or, how do you know when it is not switched on?
- Likewise, is your VPN switched on? How do you know?
- Does TLS provide replay protection [2]?
These questions lead to some deeper principles.
Super hint! Now out there and published ... leaving only one more to go.
As an aside, the set of 7 hypotheses covering secure design is around 60% published. As time goes on, I find new info that I try and integrate in, which slows down the process. Some of them are a little rough (or, a mess) and some of them fight for the right to incorporate a key point. One day, one day...
From the "articles I wish I'd written" department, Chandler points to an article by Daniels Geer & Conway on all the ways security is really a subset of reliability. Of course!
I think this is why the best engineers who've done great security things start from the top; from the customer, the product, the market. They know that in order to secure something, they had better know what the something is before even attempting to add a cryptosec layer over it.
Which is to say, security cannot be a separate discipline. It can be a separate theory, a bit like statics is a theory from civil engineering, or triage is a part of medicine. You might study it in University, but you don't get a job in it; every practitioner needs some basic security. If you are a specialist in security, your job is more or less to teach it to practitioners. The alternate is to ask the practitioners to teach you about the product, which doesn't seem sensible.
Gunnar lauds a post on why there are few architects in the security world:
Superb post by Mark on what I think is the biggest problem we have in security. One thing you learn in consulting is that no matter what anyone tells you when you start a project about what problem you are trying to solve, it is always a people problem. The single biggest problem in security is too many breakers not enough builders. Please understand I am not saying that breakers are not useful, we need them, and we need them to continue to get better so we can build more resilient systems. But the industry is about 90% breaking and 10% building and that's plain bad.
It’s still predominantly made up of an army of skilled hackers focused on better ways to break systems apart and find new ways to exploit vulnerabilities than “security architects” who are designing secure components, protocols and ultimately secure systems.
Hear hear! And why is this? One easy answer: breaking something is a solid metric. It's either broken or not, in general. Any journo can understand it.
On the other hand, building is too difficult a signal to measure. There is no easy number, there is no binary result. It takes a business focus over decades to understand that one architecture delivers more profits for users and corporates alike than another, and by then the architects have moved on, so even then the result may not be clear.
Let's take an old example. Back around 1996, a couple of bored uni students cracked Netscape's secure browsing. The first time was by crunching the 40 bit crypto using the idle lab computers, and the second time was by predicting the less-than-random numbers injected into the protocol. These students were lauded in the press for having done something grand.
They then went on to a much harder task, and were almost never heard of again. What was that harder task? Building secure and usable systems. One of them tried to build a secure communications platform, another is trying to build a secure computing platform. So far, they have not succeeded at either task, but these are much harder tasks.
The true heroes back in the mid-1990s were the Netscape engineers who got something going and delivered it to the public, not the kids who scratched the paint off it. The breaches mentioned above were jokes, and bad ones at that, because they distracted attention from what was really being built. Case in point: even today, if we had twice as much 40-bit crypto as we do 128-bit crypto, we'd probably be twice as secure, because the attack of relevance simply isn't the bored uni student, it is the patient phisher.
If you recall names in this story, recall them for what they tried to build, and not for what they broke.
Since the famous Bill Gates Memo, around the same time as phishing and related frauds went institutional, Microsoft has switched around to deal with the devil within: security. In so doing, it has done what others should have done, and done it well. However, there was always going to be a problem with turning the super-tanker called Windows into a battleship.
I predicted a while back that (a) Vista would probably fail to make a difference, and (b) the next step was to start thinking of a new operating system. This wasn't the normal pique, but the cold-hearted analysis of the size of the task. If you work for 20 years making your OS easy but insecure, you don't have much chance of fixing that, even with the resources of Microsoft.
The Economist brings an update on both points. Firstly, on Vista's record after 18 months in the market:
To date, some 140m copies of Vista have been shipped compared with the 750m or more copies of XP in daily use. But the bulk of the Vista sales have been OEM copies that came pre-installed on computers when they were bought. Anyone wanting a PC without Vista had to order it specially.

Meanwhile, few corporate customers have bought upgrade licences they would need to convert their existing PCs to Vista. Overwhelmingly, Windows users have stuck with XP.
Even Microsoft now seems to accept that Vista is never going to be a blockbuster like XP, and is hurrying out a slimmed-down tweak of Vista known internally as Windows 7. This Vista lite is now expected late next year instead of 2010 or 2011.
It's not as though Vista is a dud. Compared with XP, its kernel—the core component that handles all the communication between the memory, processor and input and output devices—is far better protected from malware and misuse. And, in principle, Vista has better tools for networking. All told, its design is a definite improvement—albeit an incremental one—over XP.
Microsoft tried and failed to turn it around, security+market-wise. We might now be looking at the end of the franchise known as Windows. To be clear, while we are past the peak, any ending is a long way off in the distant future.
Classical strategy thinking says that there are two possible paths here: invest in a new franchise, or go "cash-cow". The latter means that you squeeze the revenues from the old franchise as long as possible, and delay the termination of the franchise as long as possible. The longer you delay the end, the more revenues you get. The reason for doing this is simple: there is no investment strategy that makes money, so you should return the money to the shareholders. There is a simple example here: the music majors are decidedly in cash-cow, today, because they have no better strategy than delaying their death by a thousand file-shares.
Certainly, with Bill Gates easing out, it would be possible to go cash-cow, but of course we on the outside can only cast our auguries and wonder at the signs. The Economist suggests that they may have taken the investment route:
Judging from recent rumours, that's what it is preparing to do. Even though it won't be in Windows 7, Microsoft is happy to talk about “MinWin”—a slimmed down version of the Windows core. It’s even willing to discuss its “Singularity” project—a microkernel-based operating system written strictly for research purposes. But ask about a project code-named “Midori” and everyone clams up.

By all accounts, Midori (Japanese for “green” and, by inference, “go”) capitalises on research done for Singularity. The interesting thing about this hush-hush operating system is that it’s not a research project in the normal sense. It's been moved out of the lab and into incubation, and is being managed by some of the most experienced software gurus in the company.
With only 18 months before Vista is to be replaced, there's no way Midori—which promises nothing less than a total rethink of the whole Windows metaphor—could be ready in time to take its place. But four or five years down the road, Microsoft might just confound its critics and pleasantly surprise the rest of us.
Comment? Even though I predicted Microsoft would go for a new OS, I think this is a tall order. There are two installed bases in the world today, being Unix and Windows. It's been that way for a long time, and efforts to change those two bases have generally failed. Even Apple gave up and went Unix. (The same economics works against the repeated attempts to upgrade the CPU instruction set.)
The flip-side of this is that the two bases are incredibly old and out-of-date. Unix's security model is "ok" but decidedly pre-PC; much of what it does is simply irrelevant to the modern world. For example, all the user-to-user protection is pointless in a one-user-one-PC environment, and the major protection barrier has accidentally become a hack known as TCP/IP, legendary for its inelegant grafting onto Unix. Windows has its own issues.
So we know two things: a redesign is decades over-due. And it won't budge the incumbents; both are likely to live another decade without appreciable change to the markets. We would need a miracle, or better, a killer-app to budge the installed base.
Hence the cold-hearted analysis of cash-cow wins out.
But wait! The warm-blooded humanists won't let that happen for one and only one reason: it is simply too boring to contemplate. Microsoft has so many honest, caring, devoted techies within that if a decision were made to go cash-cow, there would be a mass-defection. So the question then arises, what sort of a hybrid will be acceptable to shareholders and workers? Taking a leaf from recent politics, which is going through a peak-energy-masquerade of its own these days, some form of "green platform" has appeal to both sides of the voting electorate.
Last week's discussion (here and here) over how there is only one mode, and it is secure, brought forth the delicious contrast with browsing and security: yes, you can do that but it doesn't work well. No, I'm not talking about the logos being cross-sited, but all of the 100 little flaws that you find when you try and do a website for secure purposes.
So why bother? Financial cryptography eats its own medicine, but it doesn't do it for breakfast, lunch and dessert. Which reminds me to introduce another of the sub-hypotheses for critique:
#4.2 Usability Determines the Number of Users
Ease of use is the most important determinant of the number of users. Ease of implementation is important, ease of incorporation is also important, and even more important is the ease of use by end-users. This reflects a natural subdivision into several classes of users: implementors, integrators and end-users, each class of which can halt the use of the protocol if they find it ... unusable. As they are laid out serially between you and the marketplace, you have to consider usability to all of them.
The protocol should be designed to be easy to code up, so as to help implementors help integrators help users. It should be designed to be easy to interface to, so as to help integrators help users. It should be designed to be easy to configure, so as to help users get security.
If there are any complex or tricky features, ask yourself whether the benefit is really worth the cost of coder's time. It is not that developers cannot do it, it is simply that they will not do it; nobody has all the time in the world, and a protocol that is twice as long to implement is twice as likely to not get done.
Same for integrators of systems. If the complexity provided by the protocol and the implementation causes X amount of work, and another protocol costs only X/2 then there is a big temptation to switch. Regardless of absolute or theoretical security.
Same for users.
Anon asks:
> ian: I never understood why you insist on using HTTPS for the blog... maybe you can shed light ?
Fair question, and often I ask myself whether it is worth the extra effort. As succinctly as I can put it, it is because of the fundamental principle: there is only one mode, and it is secure.
This principle is not so well understood in today's Internet security business, so I'll explain. Whenever a system has two modes, there is always weakness as it switches from one mode to another. In security systems, we get security weakness as we switch from unsecured mode to secured mode.
A very basic problem with security is that attackers are intelligent and active, while users are distracted and passive. Not exactly dumb, just paying attention to other things. So attackers will search out weaknesses, and users will notice neither the weaknesses nor the attacks at those weaknesses.
Then, attackers will attack at the switch in mode, and users won't really notice. Easy to say, but how does this work in practice? Consider browsing. You go to the website of your bank by typing its name into the google search bar at the top right of the browser (ok, *you* might not, but you have a friend who will...) and clicking on the top result [1]. Or you could do it any number of other ways. Whichever, you end up at a website. Then you click around looking for the place to type your password and username.
The session started out "insecure", and ended up "secure". Hopefully. Normal users will pay attention at the beginning, but their attention wanes with each click. So in general, they won't notice when they switch into secure mode. Which also means they won't notice who they switched to, which in turn leads to an easy attack: around the time the user is not paying attention, switch them somewhere else that looks like what they expect.
Hence, phishing, in all its variations.
The fundamental flaw here is that we browse insecurely and then switch to secure mode for something important. We can eliminate a whole class of attacks here by being secure always. Never having to switch. Hence the principle; in that *if* you are doing anything that requires security, you are a million times better off if you *always* do everything secure [2].
Financial Cryptography is using HTTPS to remind people doing serious security work of that principle: you should design your systems to be always secure. Every time you click on the FC website and see that it is in HTTPS, I want you to remember that the application known as secure browsing is fundamentally broken, because it breaches the 3rd hypothesis: There is only one mode, and it is secure [3].
You, and your friend, are at risk because of that. To paraphrase an old saying, you enter a state of sin when you design a system with a security switch in it. It follows that, if we want to do anything that involves security on the web, then, everything should be in HTTPS, not just this blog. All blogs, all wikis, all websites, all the REST, everything.
[1] I was looking for an example, so I googled Bank of America. The first link took me straight to a https site (via a redirect). Outstanding!
Clicking around from the second link on google, I found that it switched me (via North Carolina) across to a login box in HTTPS with this site: https://businessconnect.ebanking-services.com/nubi/signin.aspx . Firefox says (oddly) "This web site does not supply identity information." .... but the certificate says it is Metavante Corporation ... ok, so this is a bad example, even I am totally confused by what is happening here...
[2] What "secure" means and how you do that are other questions that can only be answered with reference to the specific circumstances. E.g., for browsing, secure means that you are talking to the right site, and nobody else.
[3] How we got to this state of affairs, where practically everyone on the planet believes that insecure browsing is normal, should be considered as a research question.
Duane points to a recent debate, something about DNSSEC and its added value, which resulted in this comment by one Thierry Moreau: DNSSEC is almost worthless! The reasons appear to be based on an analysis of three usage models, and each usage model heads for the rocks for one reason or other. Thierry points out that user #1 is not interested, user #2 is small, and user #3 will not be allowed. The analysis is worth a read, as it is nicely laid out (regardless of whether you agree or not).
What is going wrong with DNSSEC? From the outside, the results are clear: it was never used. In my terms it breaches parts of my 4th hypothesis, which is, in short, "Usability is Number One." To start off with:
#4.1 Zero Users means Zero Security
The most common failure mode of any security protocol is not being used by users, at all.
There have been thousands of attempts at secure protocols in recent Internet times. Many did not get completed, many were completed but were rejected as too hard to use, and many great protocols missed the boat and were swamped by bad protocols. These are therefore all failures; their delivered security is zero. Zip, zilch, nada.
Perfect security, multiplied by zero users, always equals zero security. Try it with any variation of zero you like, and any grade of security. Count up as many security projects as you like, and look at the very strong correlation: security reaches precisely zero, under all known forms of mathematics, if it has zero users.
Only a delivered protocol that protects and ships packets for actual, warm, live, talkative users can deliver security. A good protocol with some gaping holes will always outperform a perfect protocol that remains undelivered, in security terms. A good protocol in widespread use will generally outperform a better protocol that is poorly used.
Again, simple mathematics tells us why: a perfect protocol that protects one person perfectly is still limited to that one person. The mathematics of security says that is a one. If you can reduce your protocol's theoretical security from 100% to 99% and thereby gain ten users, you reach 9.9 in delivered security to those ten users. Approximately. If you can reduce it to 98% but gain 100 users, your delivered security reaches 98.
Security is as delivered to users, and is summed across them. Therefore, it goes up almost perfectly with the number of users. By far the biggest determinant of security is then the number of users that you can gain. Consider that first and foremost.
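The arithmetic is simple enough to spell out in a few lines of Python, using the numbers above:

    # Delivered security = theoretical strength times the users who actually get it.
    for strength, users in [(1.00, 1), (0.99, 10), (0.98, 100)]:
        print(strength, "x", users, "=", round(strength * users, 1))
    # prints roughly 1.0, 9.9 and 98.0 respectively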
This result certainly applies to DNSSEC, and the hypothesis title of Usability may hint at why. Or not.
Life is slowly improving with that old tired security model called secure browsing. Here's a roundup:
Firefox have their new security UI in place, whereby you can click on exceptions to store the certificates as accepted and trusted by you (being your most important authority). There is an annoying bug where it loses the cache from time to time, and this causes the user to have to re-do the exceptions. That is being pursued in the bug channels, and will be cracked as soon as someone figures out a way to reproduce it.
Meanwhile, this blog's certificate expired, causing much confusion because it was misdiagnosed as the above. Once I had figured out the real cause, the cert was re-issued within an hour or so. Thanks to Philipp for that one! You might also note that the certificate is one of those subjectAltName specials, where you can shove many names into the one certificate. This is a kludge of the highest order, but it is what is required for the small end of the market when IP#s are not handed out like candy.
Which brings us to the ServerNameIndication news. This is the long-awaited fix in TLS that enables Apache webservers to do virtual hosting of secured sites, something that has been missing since forever. Once we can do virtual hosting of secured sites, all the smaller operators running their own Linux and BSD machines can move just about anything to TLS. This means people can finally start to employ security on websites as it was meant to be:
There is only one mode, and it's secure.
Unfortunately, the SSL model broke about five minutes after deployment, when people separated the websites into non-SSL and SSL. Military people will quickly realise that this is a split-forces pattern, and a disaster that must and did happen. _Do not split your forces!_ is one thing that is hammered into the new recruit until the soldier mumbles it in their sleep.
ServerNameIndication, or SNI, is the most important fix there is in secure browsing today. I argue this in the face of strong candidates, such as security UI improvements, key continuity models (a.k.a. SSH), secure password protocols, EV, CardSpace, etc. The reason this is more important is that it is a structural & market-forces change, not a technical change, and market forces trump tech every time.
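To see what SNI changes mechanically, here is a server-side sketch in today's Python (3.7 or later); the hostnames and key paths are placeholders. The client names the site it wants inside the TLS hello, and the server picks the matching certificate, so many secured sites can share one IP number.

    import ssl

    # One context per virtual host, each loading its own certificate.
    contexts = {}
    for host in ("blog.example.org", "wiki.example.org"):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(f"/etc/ssl/{host}.crt", f"/etc/ssl/{host}.key")
        contexts[host] = ctx

    default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    default_ctx.load_cert_chain("/etc/ssl/default.crt", "/etc/ssl/default.key")

    def pick_certificate(sock, server_name, original_ctx):
        # The SNI extension tells us which site the client wants.
        if server_name in contexts:
            sock.context = contexts[server_name]

    default_ctx.sni_callback = pick_certificate
    # default_ctx would then be used to wrap the listening socket's connections.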
<obligatory><rant>
Take EV for example, as the most popular answer to phishing. It adds around 1000 new certs to the market. The top end. And it changes little for those big companies, other than a new paint job. Green is in, this season, it seems. The industry is divided as to whether it adds nothing, or just a little. Even the EV people agree that it is not intended to solve phishing... Either way, most agree it isn't worth the bother (to resist or to implement), and it consumed significant resources which were therefore wasted.

In comparison, TLS/SNI will unleash one million Linux boxes that can now start serving 10 million websites in TLS. This is no paint job, SNI is a revolution in blood; most of those new certs will trigger IE's new warning colour as well. Currently, the Linux multitudes cannot serve security, more or less, because they have only one IP# each. It's just not worth the bother for one site, see the split-forces issue. With SNI, that massive barrier, the IP# limitation, is removed, and we no longer have to compromise our security between the two models.
I predict we'll add a million new TLS websites over the few years after Apache SNI is released.
Which will then have a massive, truly massive knock-on effect on all developers of all software applications: Suddenly, developers will find their insecure applications being put into security areas, because TLS secures, right? Suddenly, ordinary developers will have to start thinking about security. Because if users mount secure websites, that means they need security, right? Suddenly, developers will discover an itch to get more familiar with security programming, practices and tricks. And this will all flow into their applications and across to users.
The humble cert will be reborn. Can this massive claim be true? The good part is that even if only a small part of it is true, it's a win for everyone except phishers...
</rant></obligatory>
So, story aside, where are we at? The Apache HTTPD team are now debating the patch to put it into the next release, version 2.2.9. Here's a snippet from joe from redhat:
Changing the dirconf structure fields in-place seems ugly and may even be thread-unsafe (not sure). I still can't see how this handles half the cases it needs to, as I've said several times now - SSLVerifyClient is only one part of this. From a quick look I can't see how a reneg would be forced for any of:
1) SSLCipherSuite changed since original vhost
2) SSLCACertificate* changed since original vhost (where both
3) SSLOCSP* changed since original vhost
but it certainly should be. A lot of the mod_ssl code will need to be very carefully reviewed since some core assumptions are being broken by supporting SNI. I would go through each of the config directives which support vhost context in turn. What about SSLCertificateChainFile? What about CRLs? etc etc.
It is also a complete cop-out to claim these issues aren't specific to SNI since we explicitly don't support any non-SNI configuration in which these paths can be triggered. And for very good reason: *they don't work properly*.
joe
Which is to say, OK, we're looking at it, but SNI is such a dramatic change to the codebase that it needs to be carefully reviewed. If you know the codebase, you could do the world a huge favour by piling in and helping the review.
My opinion? Like any manager, I've seen the demo, so ship it now :) Indeed, I encourage you to call your supplier and ask why it didn't ship 10 years ago. In fact, if your website was unencrypted and unsecured because your sysadm grumbled about TLS support, call your supplier and ask why he shouldn't be sued for a decade of breaches, angst and confusion?
More practically, SNI could be released now with a caveat that says "not for security production use yet!" Compromise on security before delivery. A caveat can be in the config files, and the use of the fix to TLS will require special configuration anyway.
This is the sheer-economics theory of security: we need to get it into production use with a non-production status, because we need that one-million "insecure secure" singularity, as above. Do the maths: a million sites at 99.99% secure is way more than our current numbers.
And, if a few Linux boxen have a few security losers amongst them, they take some for the team. That's what Linux is there for, right? To carve the territory for the rest of us, and take some arrows in the back. Others agree. Some Linux distros already ship with the fix to TLS:
Oden wrote:
> FYI. SNI is in Mandriva Linux 2008.1.
Then you should pull it out ASAP, as noted by others the patch currently in trunk is broken in several ways, with possible security configuration implications...
Apache of course will leap forth and blame the Mandriva guys for breaching the cosy protocol. Tut, tut, look what happened to Debian! But in this case they are wrong. Security on this scale is far too important to be held back by concern over security bugs. If there is one thing we learnt in the 1990s, it is that perfect security is more of an enemy than any attacker.
In the aftermath of the OpenSSL failure due to a vendor patch (which bit the vendor badly...) there has been more analysis. Clearly, early attempts to describe this were flawed, and mine was no exception, as the precise failure was not well described.
I did miss out on one important thing, pointed out by Philipp Güring: when doing high-sec apps, it is necessary to mix in different sources of randomness, because we should assume that the lower layers will fail. But, while necessary, that is not sufficient. We still need to show that our mixed randoms are getting into the right place. Then, we need some form of testing strategy that shows this. I suggest:
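As a rough sketch of the mixing step, and one crude test that the mix really depends on every source, in Python; the sources and function names are mine, not Philipp's:

    import hashlib, os, time

    def gather_sources():
        return [os.urandom(32),                               # the OS pool
                time.perf_counter_ns().to_bytes(8, "big"),    # a timing source
                str(os.getpid()).encode()]                    # process-local state

    def mixed_random(sources):
        h = hashlib.sha256()
        for s in sources:
            h.update(s)
        return h.digest()

    # Test idea: flipping any single source must change the output,
    # otherwise that source never made it into the mix.
    base = gather_sources()
    out = mixed_random(base)
    for i in range(len(base)):
        tweaked = list(base)
        tweaked[i] = bytes(b ^ 0xFF for b in tweaked[i])
        assert mixed_random(tweaked) != out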
In addition to other's comments, I found this extremely useful, from David Brodbeck, posted in comments on EC:
In aviation it's common to talk about an "accident chain," the series of small mistakes that lead up to an incident. Breaking any one link in the chain would have stopped the accident from happening. That's kind of what happened here.
I suggest we incorporate the accident chain into the security lingo. We can guess that each mistake by itself was probably innocuous, otherwise we would have fixed it. Instead of the current 'best practices' of fingerpointing, it is much better to document the chain of mistakes that led to the accident. And think about how you are going to deal with them. Ideally, be able to show that any one component could fail completely, and disaster would not then follow.
Finally, as I mentioned, it's better to own the problem than to avoid it. I was heartened then to see that Eric Young wrote:
I just re-checked, this code was from SSLeay, so it pre-dates OpenSSL taking over from me (about 10 years ago, after I was assimilated by RSA Security).

So in some ways I'm the one at fault for not being clear enough about why 'purify complains' and why it was not relevant. Purify also incorrectly complained about a construct used in the digest gathering code which functioned correctly, but purify was also correct (a byte in a read word was uninitialised, but it was later overwritten by a shifted byte).
One of the more insidious things about Purify is that once its complaints are investigated, and deemed irrelevant (but left in the library), anyone who subsequently runs purify on an application linking in the library will get the same purify warning. This leads to rather distressed application developers. Especially if their company has a policy of 'no purify warnings'.
One needs to really ship the 'warning ignore' file for purify (does valgrind have one?).
I personally do wonder why, if the original author had purify-related comments, which means he was aware of the issues but had still left the code in place, the reviewer would not consider that the code did something important enough to ignore purify's complaints.
So what we saw was a timebomb that had been ticking for around 10 years. As Ben Laurie mentioned, an awful lot has changed in our knowledge and available tools since then, and as mentioned above, the original code author has long since left the scene. Definitely a case for aviation-style accident investigation. I wonder if they write books on that sort of stuff?
Firefox 3 has reworked the ability to manage your certificates. After some thought back at Mozo central, they've introduced a clearer method for dealing with those sites that you know are good. For people who are new to this: there are those sites that you know are good, and there are those sites where others tell you they are good. People have spent decades deciding which of these spaces owns the term "trust", so I won't bore you with that sad and sorry tale today.
Meanwhile, I've worked my way through the process and added the FC blog as an Exception. The process is OK: The language is better than before, as it now says that the site is not trusted. Before it said "*you* don't trust this site!" which was so blatantly wrong as to be rude to me and confusing to everyone else. Now it just fudges the issue by saying the site is untrusted, but not indicating by whom. Most people will realise this is a meaningless statement, as trust comes from a person, it isn't something that you can get from an advert.
There are multiple clicks, possibly intended to indicate that you should really know what you are doing. I don't think that is going to help much. What would help better is a colour. So far, it is white, indicating that ... well, nothing. So there is a confusion between sites you trust and those that have nothing; they are both cast into the nothing bucket.
However, this will all come in time. There is no doubt that KCM, or Key Continuity Management, is the way to go, because users need to work with their sites, and when their communities install and use certs, that's it. KCM is needed to let the rest of the world outside Main Street, USA use the thing called SSL and secure browsing. So it will come in time, as the people at Firefox work out how to share the code between the two models.
One thing, however: I did this around a week ago, carefully following the exception process. Now, just a moment before I started this post, Firefox suddenly lost its memory! As I was saving another blog post it decided to blow away the https site. Suddenly, we were back to the untrusted regime, and I had to do the whole "I trust, I know I trust, I trust I know, I know I trust more than any blah blah trust blah!" thing all over again. And then there was no post left ... luckily it was a minor change and the original was saved.
This could be an MITM. Oh, that I would be so important... oh, that someone would want to sneak into my editorial control over the influential Financial Cryptography blog and change the world-view of the thousands of faithful readers... well, fantasies aside, this isn't likely to be an MITM.
It could be a sysadm change, but the cert looks the same, although there is little info there to check (OK, this is the fault of the servers, because the only way to tell is to go to the server, and ... it doesn't give you any info worth talking about. SSH servers have the same problem.) And the sysadm would have told me.
So Occam's razor suggests this is a bug in Firefox. Well, we'll see. I cannot complain too loudly about that, as this is RC1. Release Candidates might have bugs. This is a big change to the way Firefox works, bugs are expected. One just bit me. As someone once said, the pioneers are the ones with the arrows in the back.
It is often remarked that Information Security specialists are so good at their work that they lock out all threats, including the users. Meanwhile the rest of the world has moved on and done things like insecure browsing, insecure email, google datawarehousing of your every click, and Facebook. These ideas are worth billions in the right hands!
What happens when Information Security people wake up and smell the future?
I recently was invited to join a social network of Information Security people and got a chance to find out. I entered a few brief details in their browser interface. Let's be clear here: just because I was invited to a social network doesn't mean I am going to expose myself to phishing, mindlessly misdirected employment advertising, and other failures of the information age. So I entered some brief words, such as the stuff that you already know from reading the blog.
Unfortunately this wasn't open enough for the newly-socialised IS people:
Could you please provide a more complete biography and photo? We have been trying hard to encourage the social part of the network and that is difficult when people are reluctant to provide information. I'm sure you will appreciate there needs to be a certain amount of disclosure in order to encourage mutual trust.
Just to be clear, *I totally agree with that sentiment!* I've worked all my life in the old meatspace equivalent of social networks - teams - and building trust is something that is done with disclosure. (In an appropriate setting.) So let's just sweep away all the Internet heebie jeebies of Identity theft, win-win negotiation, rule of threes, and so forth, and let me in.
I wanna disclose, guys, I wanna do it like I saw in those Hollywood movies where we all tell our story to a tearful circle of listeners! Problem is, I can't:
From: Some nice guy <mail@infosecuk.ning.com>
Reply-To: do-not-reply@infosecuk.ning.com
To: Iang <iang@i.o>
Subject: Some nice guy has sent you a message on Information Security...[big snip]
Once you have provided this information we will be pleased to grant you access and look forward to you taking an active part in contributing to, and promoting, the network.

To reply to this message, click here: http://infosecuk.ning.com/profile/Some_nice_guy/?xgp=messages
--
To control which e-mails you receive on Information Security, go to:
http://infosecuk.ning.com/profiles/profile/emailSettings
I can't reply to the message, or more precisely, all replies to public email addresses are binned. Because of Information Security, as we know -- spamming, etc. OK, so I click on all the links, and it says:
Your Profile is Pending Approval
Hello, Iang (Sign Out)
Your profile details must be approved by the Administrator before you can become a member of Information Security. You will receive an e-mail once your profile is approved.
Clunk. This happened a few weeks ago and I'm now stuck with receiving a stream of well-meaning messages asking me to communicate with them, but they aren't listening. It's like being married; information security has now entered into a deadly embrace with social networking, and the result is enough to make one shave one's head and become a monk.
Hasan reports that he has been working on an ontology for currency systems, mainly oriented at builders of local monies. Download is here, in Protege form which probably means you need to use that package to build with it.
Monetary_ontology_0.1d.zip: A zipped Protégé project of an ontology for currency creation and use. Also contains an "owl". The objective is an active description of all forms of "money" from barter to clearing systems, from precious metal coinage to debt-based fiat. It is oriented towards designers of payment systems and community currency systems. Here is a preview GIF image ! First begun 2008/05/04 and last updated 2008/05/12. v0.0 Contributed by Martin "Hasan" Bramwell
Here's another in my little list of hypotheses, this one from the set known as "Divide and Conquer." The delay in publishing them is partly the cost of preparing the HTML, and partly that some of them really need a good editorial whipping. Today's is sparked by this comment made by Dan, at least as attributed by Jon:
... you can make a secure system either by making it so simple you know it's secure, or so complex that no one can find an exploit.
allegedly Dan Geer, as reported by Jon Callas.
In the military, KISS stands for "keep it simple, stupid," because soldiers are dumb by design (it is very hard to be smart when being shot at). Crypto often borrows from the military, but we always need to change things a bit. In this case, KISS stands for
Keep the Interface Stupidly Simple.
When you divide your protocol into parts, KISS tells you that even an ordinary programmer should find every part easy to understand. This is more than fortuitous, it is intentional, because the rank-and-file programmer is your target audience. Remember that your hackers also have to understand all the other elements of good programming and systems building, so their attention span will be limited to around 1% of their knowledge space.
A good example of this is a block cipher. A smart programmer can take a paper definition of a secret-key cipher or a hash algorithm and code it up in a weekend, and know he has got it right.
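For instance, XTEA (my choice of example, not one named here) fits on a page; a programmer can transcribe it from the published description and then check it against the published test vectors to know it is right:

    def xtea_encrypt_block(v0, v1, key, rounds=32):
        # One 64-bit block as two 32-bit words; key is four 32-bit words.
        MASK, delta, s = 0xFFFFFFFF, 0x9E3779B9, 0
        for _ in range(rounds):
            t = ((((v1 << 4) & MASK) ^ (v1 >> 5)) + v1) & MASK
            v0 = (v0 + (t ^ ((s + key[s & 3]) & MASK))) & MASK
            s = (s + delta) & MASK
            t = ((((v0 << 4) & MASK) ^ (v0 >> 5)) + v0) & MASK
            v1 = (v1 + (t ^ ((s + key[(s >> 11) & 3]) & MASK))) & MASK
        return v0, v1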
Why is this? Three reasons:
An excellent protocol example of a simple interface is SSL. No matter what you think of the internals of the protocol or the security claims, the interface is designed to exactly mirror the interface to TCP. Hence the name, Secure Socket Layer. An inspired choice! The importance of this is primarily the easy understanding achieved by a large body of programmers who understand the metaphor of a stream.
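The metaphor is still visible in today's Python, as a minimal sketch (the host name is illustrative): open a TCP stream, wrap the very same socket, and keep reading and writing as before.

    import socket, ssl

    raw = socket.create_connection(("www.example.com", 443))        # plain TCP stream
    ctx = ssl.create_default_context()
    conn = ctx.wrap_socket(raw, server_hostname="www.example.com")  # same stream, now secured
    conn.sendall(b"GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n")
    print(conn.recv(4096))
    conn.close()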
Passports were always meant to help track citizens. According to lore, they were invented in the 19th century to stop Frenchmen evading the draft (conscription), which is still an issue in some countries. BigMac points to a Dutch working paper, "Fingerprinting Passports," which indicates that passports can now be used to discriminate against the bearer's country of issue, from a distance of maybe 25cm. Future Napoleons will be happy.
Because terrorising the reader over breakfast is currently good writing style by governments and media alike, let's highlight the dangers first. The paper speculates:
Given that we can remotely detect the presence of a passport of a particular country, how could this functionality be abused? One abuse case that has been suggested is a passport bomb, designed to go off if someone with a passport of a certain nationality comes close. One could even send such a bomb by post, say to an embassy. A less spectacular, but possibly more realistic, use of this functionality would be by passport thieves, who can remotely check if someone is carrying a passport and if it is of a ‘suitable’ nationality, before they decide to rob them.
From the general fear department, we can also add that overseas travellers sometimes have a fear of being mugged, kidnapped, hijacked or simply shot because of their mere membership of a favourable or unfavourable country.
Now that we have the FUD off our chest, let's talk details. The trick involves sending a series of commands (up to 4) to the RFID chip in the passport, each of which is presumably rejected by the passport. The manner of rejection differs from country to country, so a precise fingerprint-of-country can be formed simply by examining each rejection, and then choosing a different command to further narrow the choices.
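The logic is no deeper than a lookup table keyed on how the chip says no. A toy Python sketch, with the probes, rejection codes and country labels all invented for illustration (the real values are in the paper):

    # Send a couple of probes, look at exactly how each is rejected,
    # and look the pattern of rejections up.
    PROBES = [b"probe-1", b"probe-2"]

    FINGERPRINTS = {
        (b"error-A", b"error-X"): "country 1",
        (b"error-B", b"error-X"): "country 2",
        (b"error-B", b"error-Y"): "country 3",
    }

    def fingerprint(send_to_passport):
        responses = tuple(send_to_passport(p) for p in PROBES)
        return FINGERPRINTS.get(responses, "unknown")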
How did this happen? I would speculate that the root failure is derived from bureaucrats' never-ending appetite for complex technological solutions to simple problems. In this case, the first root cause is the use of the RFID, being by intention and design something that can be read from up to 10 cm.
It is inherently attackable, and therefore by definition a very odd choice for security. The second complexity, then, involved implementing something to stop the attackers reading off the RFIDs without permission. The solution to an active read-off attack is encryption, of course! Which leads to our third complexity, a secret key, which is written inside the passport, of course! Which immediately raises issues of brute-forcing (of course!) and, as the paper references, it turns out, brute forcing attacks work on some countries' passports because the secret key is .. poorly chosen.
All of this complexity, er, solution, means something called Basic Access Control is added to the RFID in order to ensure the use of the secret key. Which means a series of commands meant to defend the RFID. If we factor in the tendency for each country to implement passports entirely alone (because they are more scared of each other than they are of their citizens), we can see that each solution is proprietary and home-grown. To cope with this, the standard was written to be very flexible (of course!). Hence, it permits wide diversity in response to errors.
Whoops! Security error. In the world of security, we say that one should be precise in what one sends, and precise in what one returns.
From that point of view, this is poor security work by the governments of the world, but that's to be expected. The US State Department can now derive some satisfaction from earlier blunders; because of their failure to implement any form of encryption or access control, American passports can be read by all (terrorists and borderists alike), which apparently forced them to add aluminium foil into the passport cover to act as a Faraday cage. Likely, the other countries will now have to follow suit, and the smugness of being sophisticated and advanced in security terms ("we've got BAC!") will be replaced by a dawning realisation that they should have adopted the simpler solutions in the first place.
In our ELTEcrypt research group [writes Dani Nagy], we discussed opportunistic public key exchange from a cost-benefit point of view and came up with an important improvement over the existing schemes (e.g. ssh), which, I think, must be advertised as broadly as possible. It may even merit a short paper to some conference, but for now, I would like to ask you to publish it in your blog.
Opportunistic public key exchange is when two communicating parties perform an unauthenticated key exchange before the first communication session, assume that this key is trustworthy and then only verify that the same party uses the same key every time. This lowers the costs of defense significantly by not imposing authentication on the participants, while at the same time it does not significantly lower the cost of the dominant attack (doing MITM during the first communication session is typically not the dominant attack). Therefore, it is a Pareto-improvement over an authenticated PKI.
One successful implementation of this principle is ssh. However, it has one major flaw, stemming from misplaced costs: when an ssh host is re-installed or replaced by a new one, the cost of migrating the private key of the host is imposed on the host admin, while most of the costs resulting from not doing so are imposed on the clients.
In the current arrangement, when a new system is installed, the ssh host generates itself a new key pair. Migrating the old key requires extra work on the system administrator's part. So, he probably won't do it.
If the host admin fails to migrate the key pair, clients will get a frightening error message that won't let them do their job, until they exert significant effort for removing the "offending" old public key from their key cache. This is their most straightforward solution, which both weakens their security (they lose all protection against MITM) and punishes them for the host admin's mistake.
This could be improved in the following way: if the client detects that the host's public key has changed, instead of quitting after warning the user, it allows the user to accept the new key temporarily for this one session by hitting "yes" and SENDS AN EMAIL TO THE SYSTEM ADMINISTRATOR.
Such a scheme metes out punishment where it is due. It does not penalize the client too much for the host admin's mistake, and provides the latter with all the right incentives to do his duty (until he fixes the migration problem, he will be bombarded with emails from all the clients, and the most straightforward solution to his problem is to migrate the key, which also happens to be the right thing to do).
As an added benefit, in some attack scenarios, the host admin will learn about an ongoing attack.
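A minimal sketch of that client-side behaviour, using only Python's standard library; the key-cache handling, admin address and local mail relay are stand-ins for illustration, not anything ssh actually specifies:

import smtplib
from email.message import EmailMessage

def handle_changed_host_key(host, cached_key, offered_key, admin_addr,
                            smtp_host="localhost"):
    """Accept the offered key for this one session only (after the user hits
    "yes"), leave the cached key untouched, and notify the host's admin."""
    msg = EmailMessage()
    msg["From"] = "ssh-client@localhost"
    msg["To"] = admin_addr
    msg["Subject"] = "host key for %s has changed" % host
    msg.set_content(
        "Cached key:  %s\nOffered key: %s\n"
        "If this host was re-installed, please migrate the old key." %
        (cached_key, offered_key))
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
    return offered_key      # used for this session; not written to the cache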
They're called wars because afterwards, everyone agrees that they were senseless wastes of resources, and faithfully promises never to let it happen again. In this case (at Mozo), the absolutely-everything format of XML (that stuff where everything is surrounded by <angle> bracketed words </angle>) is up against a newcomer called JSON.
Because we are software engineers in the financial cryptography world, I prefer the haptic approach to decision making. That is, we have to understand at least enough in order to build it. Touch this:
Here’s an example data structure, of the kind you might want to transmit from one place to another (represented as a Python dictionary; mentally replace with the syntax from your programming language of choice).

person = {
    "name": "Simon Willison",
    "age": 25,
    "height": 1.68,
    "urls": [
        "http://simonwillison.net/",
        "http://www.flickr.com/photos/simon/",
        "http://simon.incutio.com/"
    ]
}
Speaking strictly from the point of view of security: the goals are to have all your own code, and to be simple. Insecurity lurks in complexity, and other people's code represents uncontrollable complexity. Not because the authors are evil but because their objectives differ from yours in ways that you cannot see and cannot control.
Generally, then, in financial cryptography you should use your own format. Because that ensures that it is your own code doing the reading, and especially, that you have the skills and assets to maintain that code, and fix it.
To get to the point, I think this rules out XML. If one were considering security as the only goal, then it's out: XML is far too complex, it drags in all sorts of unknown stuff which the average developer cannot control, and you are highly dependent on the other people's code sets. I've worked on a few large projects with XML now, and this is the ever-present situation: out of control.
What then about JSON? I'm not familiar with it, but a little googling and I found the page above that describes it ... in a page. From a strictly security pov, that gives it a hands-down win.
I already understand what JSON is about, so I can secure it. I can't and never will be able to say I can secure XML.
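To make the comparison concrete, here is roughly all there is to handling JSON from the application's side, using the person dictionary quoted above (a sketch with Python's standard json module; any JSON library would look much the same):

import json

person = {
    "name": "Simon Willison",
    "age": 25,
    "height": 1.68,
    "urls": [
        "http://simonwillison.net/",
        "http://www.flickr.com/photos/simon/",
        "http://simon.incutio.com/"
    ]
}

wire = json.dumps(person)        # serialise for transmission
received = json.loads(wire)      # parse on the other side
assert received == person        # round-trips exactly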
We've all heard it a hundred times: what's your threat model? But how many of us have been able to answer that question? Sadly, fewer than we would want, and I myself would not have a confident answer to the question. As writings on threat modelling are few and far between, it is difficult to draw a hard line under the concept. Yet more evidence of gaping holes in the security thinker's credibility.
Adam Shostack has written a series of blog posts on threat modelling in action at Microsoft (read in reverse order). It's good: readable, and a better starting point, if you need to do it, than anything else I've seen. Here are a couple of plus points (there are more) and a couple of criticisms:
Surprisingly, the approach is written to follow the practice that it is the job of the developers to do the security work:
We ask feature teams to participate in threat modeling, rather than having a central team of security experts develop threat models. There’s a large trade-off associated with this choice. The benefit is that everyone thinks about security early. The cost is that we have to be very prescriptive in how we advise people to approach the problem. Some people are great at “think like an attacker,” but others have trouble. Even for the people who are good at it, putting a process in place is great for coverage, assurance and reproducibility. But the experts don’t expose the cracks in a process in the same way as asking everyone to participate.
What is written between the lines is that the central security team at Microsoft provides a moderator or leader for the process. This is good thinking, as it brings in the experience, but it still makes the team do the work. I wonder how viable this is for general practice? Outside the megacorps where they have made this institutional mindshift happen, would it be possible to ask a security expert to come in, swallow 2 decades of learning, and be a leader of a process, not a doer of a process?
There are many ramifications of the above discovery, and it is fascinating to watch them bounce around the process. I'll just repeat one here: simplification! Adam hit the obvious problem that if you take the mountain to Mohammad, it should be a small mountain. Developers are employed to write good code, and complex processes just slow that down, and so an aggressive simplification was needed to come up with a very concise model. A more subtle point is that the moderator wants to impart something as well as get through the process, and complexity will kill any retention. Result: one loop on one chart, and one table.
The posts are not a prescription on how to do the whole process, and indeed in some places, they are tantalisingly light (we can guess that it is internal PR done through a public channel). With that understanding, they represent a great starting point.
There are two things that I would criticise. One major error, IMHO: Repudiation. This was an invention by PKI-oriented cryptographers in the 1990s or before, seeking yet another marketing point for the so-called digital signature. It happens to be wrong. Not only is the crypto inadequate to the task, the legal and human processes implied by the Repudiation concept are wrong. Not just misinformed or misaligned, they are reversed from reality, and in direct contradiction, so it is no surprise that after a decade of trying, Non-Repudiation has never ever worked in real life.
It is easy to fix part of the error. Where you see Non-Repudiation, put Auditing (in the sense of logging) or Evidence (if looking for a more juridical flavour). What is a little bit more of a challenge is how to replace "Repudiation" as the name of the attack ... which on reflection is part of the error. The attack alleged as repudiation is problematic, because, before it is proven one way or the other, it is not possible to separate a real attack from a mistake. Then, labelling it as an attack creates a climate of guilty until proven innocent, but without the benefit of evidence tuned to proving innocence. This inevitably leads to injustice, which leads to mistrust and finally, (if a fair and open market is in operation) rejection of the technology.
Instead, think of it as an attack born of confusion or uncertainty. This is a minor issue when inside one administrative or trust boundary, because one person elects to carry the entire risk. But it becomes a bigger risk when crossing into different trust areas. Then, different agents are likely to call a confusing situation by different viewpoints (incentives differ!).
At this point the confusion develops into a dispute, and that is the real name for the attack. To resolve the dispute, add auditing / logging and evidence. Indeed, signatures such as hashes and digsigs make mighty fine evidence so it might be that a lot of the work can be retained with only a little tweaking.
I then would prefer to see the threat - property matrix this way:
Threat | | Security Property |
Spoofing | --> | Authentication |
Tampering | --> | Integrity |
Dispute | --> | Evidence |
Information Disclosure | --> | Encryption |
Denial of Service | --> | Availability |
Elevation of Privilege | --> | Authorisation |
A minor criticism I see is in labelling. I think the whole process is not threat modelling but security modelling. It's a minor thing, which Adam neatly disposes of by saying that arguing about terms is not only pointless but distracts from getting the developers to do the job. I agree. If we end up disposing of the term 'security modelling' then I think that is a small price to pay to get the developers a few steps further forward in secure development.
Well, it had to happen one day. A major player has finally broken the code of silence and blamed the browsers. In this case, it is PayPal, and Safari.
Infoworld last week quoted Michael Barrett, PayPal’s CIO, saying the following:“Apple, unfortunately, is lagging behind what they need to do, to protect their customers. Our recommendation at this point, to our customers, is use Internet Explorer 7 or 8 when it comes out, or Firefox 2 or Firefox 3, or indeed Opera.”
The browser is the user's security tool. The browser is the only thing between you and the phisher. The browser is the point of all attack attention. The browser is it. That's why it had SSL built in -- to correctly identify the website as the one you wanted to go to.
So above, Paypal blames Safari for not doing enough about phishing. It's true, Safari does nothing (as I found out recently and had to switch back to Firefox). It likely had to be Paypal because the regulated banks won't say boo without permission, and Paypal might be supposed to be net-savvy. It had to be Safari because (a) there is that popular alternate now, and (b) Apple is still small enough not to be offended, and (c) others have done something in the phishing area.
A take-away then is not the names involved, but the fact that a large player has finally lost patience and is pointing fingers at those who are not addressing phishing:
At issue is the fact that Safari lacks a built-in phishing filter to warn users about shady Web sites. Safari also doesn’t support so-called Extended Validation certificates, which turn the address bar green if a site is legit. Extended Validation certificates aren’t the complete answer but are a help.
OK, so those are some ideas, and Safari could do something. However there may be more to this than meets the eye:
An emerging technology, EV certificates are already supported in Internet Explorer 7, and they've been used on PayPal's Web site for more than a year now. When IE 7 visits PayPal, the browser's address bar turns green -- a sign to users that the site is legitimate. Upcoming versions of Firefox and Opera are expected to support the technology.
Aha! It's not a general complaint to Apple at all. It is a complaint that EV has not been implemented in Safari. It's a very specific complaint!
( Long term readers know that EV implements the basic steps necessary to complete the SSL security model: By naming the CA that makes the claim, it clearly encapsulates the statement. By making it clearer to the user what was going on, the final step was taken: the statement was delivered to the risk-bearing party. )
Paypal has purchased a green certificate. And now they want it to work. It works on IE, but not on others. (Firefox and Opera say "soon" and so are given a pass. For now.) Apple rarely comments on its plans, so it has been named and shamed for not adopting the agreed solution. More for not playing the game than anything.
The sad thing about the EV is that it is (approximately) what the browsers should have done years ago, when phishing became apparent.
But nothing could be done. I know, I tried. If there is any more elegant proof of the market for silver bullets, I'm hard pressed to find it. To break the equilibrium around SSL+cert-user-CA (that reads SSL plus cert minus user minus CA), EV had to be packaged as an industry consortium agreeing on an expensive product. Once so packaged, it was then sold to Microsoft and to some major websites. Once in the major places, influence is then brought to bear to get the rest to come into line.
The problem with this, as I lay out in silver bullets, is that shifting from one equilibrium to another is a strictly weaker strategy. Firstly, we are not that confident in our choice of equilibrium. That's by definition; we wouldn't play this game if we knew how to play the game. Secondly, and to spin a leaf from John Boyd, the attacker can turn inside our OODA loop. Which is to say, he can create and modify his attacks faster than we can change equilibrium. Or, he is better at playing his game than we are.
You can read a much more extended argument in the essay (new, improved with extra added focus!). But for now, what I find interesting is the questions we don't yet have answers to.
What would be the attacker's best strategy, knowing all we do about the market and our claim that this is equilibrium shifting? Would the attacker destroy EV? Would he protect EV? Would he milk it?
Another question is, what is Apple's best strategy? It is currently outside the consortium, but has been attacked. Should it join and implement EV? Go it alone? Ignore? Invent an own strategy?
Some of the people at U.Cambridge have successfully and easily attacked the card readers and cards used in retail transactions in Britain. This is good work. You should skim it for the documented attacks on the British payments systems, if only because the summary is well written.
My takeaways are these:
Many others will write about the failure to use some security protocol, etc. Also note the failure of modes. So I shall add a postscript on the last point, secrecy of architecture. (I have written before about this problem, but I don't recall where.)
By placing the security of the system under the wrap of secrecy, this allows a "secure-by-design" myth to emerge. Marketing people and managers cannot resist the allure of secret designs, and the internal security team has little interest in telling the truth (their job is easier if no informed scrutiny!). As the designs and processes are secret, there is no independent check on the spread of this myth of total security.
At some point the organisation (a bank or a banking sector) internalises the myth of total security and starts to lean heavily on the false belief. Other very-needed parts of the security arrangement are slowly stripped away because "the devices are secure so other parts are not needed."
The reality is that the system is not "secure" but is "economically difficult to attack". Defence in depth was employed to raise and balance the total security equation, which means other parts of the business become part of the security. By way of example, the card readers are often required to be tamper-resistant, and to be "controlled items" which means they cannot be purchased openly. These things will be written into the security architecture.
But, because all the business outside the security team cannot see the security architecture -- it's secret! -- they do not know this. So we see the cost cutting and business changes indicated above. Because the business believes the system to be totally secure -- the myth! -- they don't bother to tell the security team that the cards have dual modes, that readers are now made of plastic, or that they are sold on eBay. The security team doesn't need to know this because they built a secure system.
In this way, the security-model-wrapped-in-secrecy backfires and destroys the security. I sometimes term the emergence of this myth within banks organisational cognitive dissonance, for want of a better term.
Peter Gutmann made the following comment:
Hmm, given this X-to-key-Y pattern (your DTLS-for-SRTP example, as well as OpenVPN using ESP with TLS keying), I wonder if it's worth unbundling the key exchange from the transport? At the moment there's (at least):

TLS-keying --+-- TLS transport
             |
             +-- DTLS transport
             |
             +-- IPsec (ESP) transport
             |
             +-- SRTP transport
             |
             +-- Heck, SSH transport if you really want

Is the TLS handshake the universal impedance-matcher of secure-session mechanisms?
Which reminds me to bring out another hypothesis in secure protocol design, being #2: Divide and conquer. The first part is close to the above remark:
#2.1 Protocols Divide Naturally Into Two Parts
Good protocols divide into two parts, the first of which says to the second,
trust this key completely!
Frequently we see this separation between a Key Exchange phase and Wire Encryption phase within a protocol. Mingling these phases seems to result in excessive confusion of goals, whereas clean separation improves simplicity and stability.
Note that this principle is recursive. That is, your protocol might separate around the key, being part 1 for key exchange, and part 2 for encryption. The first part might then separate into two more components, one based on public keys and the other on secret keys, which we can call 1.a and 1.b. And, some protocols separate further, for example primary (or root) public keys and local public keys, and, session negotiation secret keys and payload encryption keys.
As long as the borders are clean and simple, this game is good because it allows us to conquer complexity. But when the borders are breached, we are adding complexity which adds insecurity.
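To see the border in code form, here is a minimal sketch in Python (standard library only). The toy Diffie-Hellman group and the HMAC-only wire layer are illustrations, not a real design; the point is that the only thing crossing from part 1 to part 2 is a key.

import hashlib
import hmac
import secrets

def key_exchange():
    """Part 1: a toy Diffie-Hellman run, both ends in one function.
    Its sole output is the shared key; nothing else crosses the border."""
    p, g = 0xFFFFFFFB, 5                       # toy-sized parameters
    a = secrets.randbelow(p - 2) + 1
    b = secrets.randbelow(p - 2) + 1
    A, B = pow(g, a, p), pow(g, b, p)          # the two public halves
    shared = pow(B, a, p)                      # equals pow(A, b, p)
    return hashlib.sha256(str(shared).encode()).digest()

class WireLayer:
    """Part 2: trusts the key completely; knows nothing of how it was agreed.
    (HMAC integrity stands in here for full wire encryption.)"""
    def __init__(self, key):
        self.key = key
    def seal(self, msg):
        return msg + hmac.new(self.key, msg, hashlib.sha256).digest()
    def open(self, blob):
        msg, tag = blob[:-32], blob[-32:]
        if not hmac.compare_digest(tag, hmac.new(self.key, msg, hashlib.sha256).digest()):
            raise ValueError("message tampered with")
        return msg

wire = WireLayer(key_exchange())               # "trust this key completely!"
assert wire.open(wire.seal(b"hello")) == b"hello"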
A Warning. An advantage from this hypothesis might appear to be that one can swap in a different Key Exchange, or upgrade the protection protocol, or indeed repair one or other part without doing undue damage. But bear in mind Hypothesis #1: just because these natural borders appear doesn't mean you should slice, dice, cook and book like software people do.
Divide and conquer is #2 because of its natural tendency to turn one big problem into two smaller problems. More later...
Some good news: after a long hard decade, OpenPGP is now on standards track. That means that it is a standard, more or less, for the rest of us, and the IETF process will make it a "full standard" according to their own process in due course.
RFC4880 is now OpenPGP and OpenPGP is now RFC4880. Hooray!
Which finally frees up the OpenPGP community to think what to do next?
Where do we go from here? That's an important question because OpenPGP provides an important base for a lot of security work, and a lot of security thinking, most of which is good and solid. The OpenPGP web of trust is one of the seminal security ideas, and is used by many products (referenced or not).
However, it is fair to say that OpenPGP is now out of date. The knowledge was good around the early 1990s, and is ready for an update. (I should point out that this is not as embarrassing as it sounds, as one competitor, PKI/x.509, is about 30 years out of date, deriving its model from pre-Internet telco times, and there is no recognition in that community of even the concept of being out of date.)
Rising to the challenge, the OpenPGP working group are thinking in terms of remodelling the layers such that there is a core/base component, and on top of that, a major profile, or suite, of algorithms. This will be a long debate about serious software engineering, security and architecture, and it will be the most fun that a software architect can have for the next ten years. In fact, it will be so much fun that it's time to roll out my view, being hypothesis number one:
H1: The One True Cipher Suite

In cryptoplumbing, the gravest choices are apparently on the nature of the cipher suite. To include latest fad algo or not? Instead, I offer you a simple solution. Don't.
There is one cipher suite, and it is numbered Number 1. Ciphersuite #1 is always negotiated as Number 1 in the very first message. It is your choice, your ultimate choice, and your destiny. Pick well.
If your users are nice to you, promise them Number 2 in two years. If they are not, don't. Either way, do not deliver any more cipher suites for at least 7 years, one for each hypothesis.

And then it all went to pot... We see this with PGP. Version 2 was quite simple and therefore stable -- there was RSA, IDEA, MD5, and some weird padding scheme. That was it. Compatibility arguments were few and far between. Grumbles were limited to the padding scheme and a few other quirks.
Then came Versions 3-8, and it could be said that the explosion of options and features and variants caused more incompatibility than any standards committee could have done on its own.
Avoid the Champagne Hangover

Do your homework up front.
Pick a good suite of ciphers, ones that are Pareto-Secure, and do your best to make the combination strong. Document the shortfalls and do not worry about them after that. Cut off any idle fingers that can't keep from tweaking. Do not permit people to sell you on the marginal merits of some crazy public key variant or some experimental MAC thing that a cryptographer knocked up over a weekend or some minor foible that allows an attacker to learn your aunty's birth date after asking a million times.
Resist the temptation. Stick with The One.
I've been using S/MIME for around a year now for encrypted comms, and I can report that the overall process is easier than OpenPGP. The reasons are twofold:
Sadly, S/MIME sucks. I reported previously on Thunderbird's most-welcome improvements to its UI (from unworkable to woeful) and also its ability to encrypt-not-sign, which catapulted the tool into legal sensibility. Recall, we don't know what a signature means, and the lawyers say "don't sign anything you don't read" ... I'd defy you to read an S/MIME signed email.
The problem that then occurs is that the original S/MIME designers (early 1990s?) used an unfortunate trick which is now revealed as truly broken: the keys are distributable with signing.
Ooops. Worse, the keys are only distributable with signing as far as I can see, which uncovers the drastic failings of tools designed by cryptographers and not software engineers. This sort of failure derives from such claims as, you must sign everything "to be trusted" ... which we disposed of above.
So, as signing is turned off, we now need to distribute the keys. This occurs by a 2-part protocol that works like this:
With various error variations built in. OK, our first communal thought was that this would be workable but it turns out not to scale.
Consider that we change email clients every 6 months or so, and there appears to be no way to export your key collection. Consider that we use other clients, and we go on holidays every 3 months (or vacations every 12 months), and we lose our laptops or our clients trash our repositories. Some of us even care about cryptographic sanitation, and insist on locking our private keys in our secured laptop in the home vault with guards outside. Which means we can't read a thing from our work account.
Real work is done with a conspiracy of more than 2. It turns out that with around 6 people in the ring, someone is AFK ("away from keys"), all the time. So, someone cannot read and/or write. This either means that some are tempted to write in clear text (shame!), or we are all running around Alice-Bobbing each other. All the time.
Now, of course, we could simply turn on signing. This requires (a) a definition of signing, (b) written somewhere like a CPS, (c) which is approved and sustainable in agreements, (d) advised to the users who receive different signature meanings, and (e) acceptance of all the preceding points as meaningful. These are very tough barriers, so don't hold your breath, if we are talking about emails that actually mean something (kid sister, knock yourself out...).
Turning on the signing also doesn't solve the core problem of key management, it just smothers it somewhat by distributing the keys every chance we get. It still doesn't solve the problem of how to get the keys when you lose your repository, as you are then locked out of posting until you have everyone's keys. In every conspiracy, there's always one important person who's notoriously shy of being called Alice.
This exposes the core weakness of key management. Public Key cryptography is an engineering concept of 2 people, and beyond that it scales badly. S/MIME's digsig-distro is just a hack, and something like OpenPGP's key server mechanism would be far more sensible, far more scaleable. However, I wonder if we can improve on even OpenPGP, as the mere appearance of a centralised server reduces robustness by definition (TTPs, CVP, central points of attack, etc).
If an email can be used to send the key (signed), then why can't an email be used to request a key? Imagine that we added an email convention, a little like those old maillist conventions, that did this:
Subject: GETSMIME fc@example.com
and send it off. A mailclient like Thunderbird could simply reply by forwarding the key. (How this is done is an exercise for the reader. If you can't think of 3 ways in the next 3 minutes, you need more exercise.)
Now, the interesting thing about that is that if Tbird could respond to the GETSMIME, we wouldn't need key servers. That is, Alice would simply mail Bob with "GETSMIME Carol@example.com" and Bob's client could respond, perhaps even without asking because Bob already knows Alice. Swarm key distro, in other words. Or, Dave could be a key server that just sits there waiting for the requests, so we've got a key server with no change to the code base.
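A minimal sketch of one of those 3 ways, as a Python fragment a mail client might run on an incoming message; the certificate store, the exact subject convention and the attachment type are illustrative assumptions, not part of any standard:

import re
from email.message import EmailMessage

CERT_STORE = {       # hypothetical local store of known S/MIME certificates
    "carol@example.com": b"(DER-encoded certificate bytes would go here)",
}

def answer_getsmime(subject, sender):
    """If the subject matches the GETSMIME convention, reply with the
    requested certificate as an attachment; otherwise do nothing."""
    m = re.match(r"GETSMIME\s+(\S+)", subject)
    if not m:
        return None
    wanted = m.group(1).lower()
    cert = CERT_STORE.get(wanted)
    if cert is None:
        return None                      # or: reply "not known here"
    reply = EmailMessage()
    reply["To"] = sender
    reply["Subject"] = "SMIME %s" % wanted
    reply.set_content("Certificate for %s attached." % wanted)
    reply.add_attachment(cert, maintype="application",
                         subtype="pkcs7-mime", filename=wanted + ".p7c")
    return reply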
In closing, I'll just remind that the opinion of this blog is that the real solution to the almost infinite suckiness of S/MIME is that the clients should generate the keys opportunistically, and enable use of crypto as and when possible.
This solution will never be ideal, and that's because we have to deal with email's legacy. But the goal with email is to get to some crypto, some of the time, for some of the users. Our current showing is almost no crypto, almost none of the time, for almost none of the users. Pretty dire results, and nothing a software engineer would ever admit to.
So how do you know if you are a utility? Crash the system, and see how far the ripples spread.
Skype is down. Their blog reports some sort of network issue that blocks logins.
It's worth tracking to see how the world responds, how dependent the users have become on their Skype feed.
If you really want to be reliable, you need to let the accidents happen. Even, inject some unreliability. I like failure. I for one get perverse enjoyment from bugs. Once I see bugs in software, I like to leave them in there to shake out other bugs. Problems beget more problems, and once I've shaken a system for all it's worth, I like to work from the furthest and easiest, inwards, testing as I go.
It's always a moment of sadness when the last, deepest most convoluted prize bug gets fixed, and the system bursts back into stable, reliable life. Boring, and sad. No more art in life.
Skype to date has been far too reliable for my liking, which may mean it's heading for some reliability problems. You just can't buy the sort of entertainment that the Skype engineers are enjoying today :)
Reading this post from Robert Watson:
I presented, “Exploiting Concurrency Vulnerabilities in System Call Wrappers,” a paper on the topic of compromising system call interposition-based protection systems, such as COTS virus scanners, OpenBSD and NetBSD’s Systrace, the TIS Generic Software Wrappers Toolkit (GSWTK), and CerbNG. The key insight here is that the historic assumption of “atomicity” of system calls is fallacious, and that on both uniprocessor and multiprocessing systems, it is trivial to construct a race between system call wrappers and malicious user processes to bypass protections. ...The moral, for those unwilling to read the paper, is that system call wrappers are a bad idea, unless of course, you’re willing to rewrite the OS to be message-passing.
And it sparked a thought that systems can only be secure if message-based. Maybe it's a principle, maybe it's a hypothesis, or maybe it's a law.
(My underlining.) To put this in context, if a system was built using TCP/IP, it's not built with security as its number one, overriding goal. That might be surprising to some, as pretty much all systems are built that way; which is the point, such an aggressive statement partly explains where we are, and partly explains how tricky this stuff is.
(To name names: SSH is built that way. Sorry, guys. Skype is a hybrid, with both message-passing and connections. Whether it is internally message-passing or connection-oriented I don't know, but I can speculate from my experiences with Ricardo. The latter started out message-passing over connection-oriented, and required complete client-side rewrites to remove the poison. AADS talks as if it is message-passing, and that might be because it is from the payments world, where there is much better understanding of these things.)
Back to theory. We know from the coordination problem, or the Two Generals problem, that protocols cannot be reliable about what they have sent to the other side. We also know from cryptography that we can create a reliable message on the receive-side (by means of an integrated digsig). We know that so-called reliable connections are not.
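As a sketch of that receive-side point, assuming the PyNaCl library (the message content and the out-of-band distribution of the verification key are illustrative assumptions):

from nacl.signing import SigningKey

# Sender: every message carries its own signature, so its validity does not
# depend on any connection state.
sender_key = SigningKey.generate()
signed = sender_key.sign(b"pay Bob 5 units, sequence 42")

# Receiver: holds the verification key (distributed out of band) and checks
# each message on its own, regardless of what was or wasn't delivered before.
verify_key = sender_key.verify_key
message = verify_key.verify(signed)    # raises BadSignatureError if tampered with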
Also, above, read that last underlined sentence again. Operating systems guys have known for the longest time that the cleanest OS design was message passing (which they didn't push because of the speed issues):
Concurrency issues have been discussed before in computer security, especially relating to races between applications when accessing /tmp, unexpected signal interruption of socket operations, and distributed systems races, but this paper starts to explore the far more sordid area of OS kernel concurrency and security. Given that even notebook computers are multiprocessor these days, emphasizing the importance of correct synchronization and reasoning about high concurrency is critical to thinking about security correctly. As someone with strong interests in both OS parallelism and security, the parallels (no pun intended) seem obvious: in both cases, the details really matter, and it requires thinking about a proverbial Cartesian Evil Genius. Anyone who’s done serious work with concurrent systems knows that they are actively malicious, so a good alignment for the infamous malicious attacker in security research!
But none of that says what I asserted above: that if security is your goal, you must choose message-passing.
Is this intuition? Is there a theory out there? Where are we on the doh!-to-gosh scale?
Chris points to an indepth article on the Greek phone breach. Most will be looking for the whodunnit, but the article doesn't do more than stir the waters there. It is however a nice explanation of the technical breach. To summarise,
This ran for about a year, creating invisible wiretaps to some unknown cellphones for external reporting. Why was it then discovered? The normal story: something went wrong. Apparently a perpetrator upgrade on 24th January 2005 didn't quite work, broke text message forwarding, and started generating log messages. Ericsson investigators had to dig deep and discovered unauthorised patches....
Whoops. As I say, this is the normal story, and the authors say "It's impossible to overstate the importance of logging."
So, what can we say? This involved someone who had substantial knowledge of the system (hard-core programmers), access to the source code, access to a test system, and a substantial budget. E.g., a quasi-insider attack, and the bone is pointed at Ericsson, Vodafone, or a Greek contractor, Intracom Telecom. And (not or) some spook friends of the aforementioned.
Costas Tsalikidis, a telco engineer responsible for networks planning, was found hanged, an apparent suicide, just before the Athens affair became public.
Some people have that as their enemy, their threat, and given the target (politicians in Greece in the time of the Olympics) this is not exactly a national security issue (wot, protecting over-steroided athletes is now spook business?) but it's almost certainly a state actor, and probably one run amok. So it moves the threat model closer to the domain of normal business, if in a mysterious fashion.
How to protect against that? The French offer one answer, tell their pollies to stop using foreign controlled services...
RAH points to WSJ on security and paper from HP:
H-P Designs 'Digital Signature' Against Forgeries

BANGALORE, India -- As anyone who does business in India knows, you can't get very far without the right piece of paper. That makes forgery a big problem and one of the most common types of fraud. ... In a research and development laboratory here in Bangalore, India's tech hub, Hewlett-Packard Co. is researching a way of marking new paper documents with a bar code that helps prevent forgeries....
With Trusted Hardcopy, which is just one of the tailored efforts, rather than requiring holograms or watermarked paper, the system uses equipment no more complicated than some software, a scanner and a computer. The bar code is designed to act like a digital signature, "thus bringing network level security to the world of paper," says H-P in a company document. H-P envisages government entities, public offices and companies as potential users, such as for the registration of land ownership.
... Yet India also is famously bureaucratic and forms-ridden. So the company has focused some of its efforts on trying to bridge the gap between tech and paper.... "We sort of assume that paper's not going to go away," said Mr. Kuchibhotla. "And we say that if paper's not going to go away, what technology do you need to bridge what's happening in the paper world with what's happening in the IT world?"
H-P's goal was to make a product that allowed paper to be read by a machine and secure against forgery. An early version of Trusted Hardcopy puts a bar code on paper that is similar to the bar codes on groceries. It contains the data that are also printed on the document and can be read by a scanner. It also encodes the data in the bar code to prevent tampering.
Because the bar code contains the authentic data, any changes to the document should be identifiable. The bar codes can't be easily copied. The bar code is printed at the bottom of a page of regular size copier paper....
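In sketch form, the mechanism described comes down to packing the printed fields plus an authenticator over them into the bar code, then comparing on verification. The Python below is only an illustration of that shape; the field names and key handling are invented, and an HMAC stands in for whatever signature scheme H-P actually uses.

import hashlib
import hmac
import json

ISSUER_KEY = b"issuing-office-secret"      # hypothetical; a real system would use a digsig

def barcode_payload(fields):
    """Pack the data printed on the document, plus a tag over it; this is
    what gets rendered as the bar code at the foot of the page."""
    data = json.dumps(fields, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, data, hashlib.sha256).hexdigest()
    return {"data": fields, "tag": tag}

def verify(printed_fields, payload):
    """Check the bar code itself hasn't been re-forged, and that the human-
    readable text on the page still matches what the bar code says."""
    data = json.dumps(payload["data"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, payload["tag"]) and printed_fields == payload["data"]

record = {"owner": "A. N. Other", "plot": "42", "district": "Bangalore"}
assert verify(record, barcode_payload(record))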
For some obscure reason, this morning I ploughed through the rather excellent but rather deep tome of Peter Gutmann's Cryptographic Security Architecture - Design and Verification (or at least an older version of chapter 2, taken from his thesis).
He starts out by saying:
Security-related functions which handle sensitive data pervade the architecture, which implies that security needs to be considered in every aspect of the design, and must be designed in from the start (it’s very difficult to bolt on security afterwards).
And then spends much of the chapter showing why it is very difficult to design it in from the start.
When, then, to design security in at the beginning and when to bolt it on afterwards? In my Hypotheses and in the GP essays I suggest it is impractical to design the security in up-front.
But there still seems to be a space where you do exactly that: design the security in up-front. If Peter G can write a book about it, if security consultants take it as unquestionable mantra, and if I have done it myself, then we need to bring these warring viewpoints closer to defined borders, if not actual peace.
Musing on this, it occurs to me that we design security up front when the mission is security. And not, if not. What this means is open to question, but we can tease out some clues.
A mission is that which when you have achieved it, you have succeeded, and if you have not, you have failed. It sounds fairly simple when put in those terms, and perhaps an example from today's world of complicated product will help.
For example a car. Marketing demands back-seat DVD players, online Internet, hands-free phones, integrated interior decorative speakers, two-tone metallised paint and go-faster tail trim. This is really easy to do, unless you are trying to build a compact metal box that also has to get 4 passengers from A to B. That is, the mission is transporting the passengers, not their entertainment or social values.
This hypothesis would have it that we simply have to divide the world's applications into those where security is the mission, and those where some other mission pertains.
E.g., with payment systems, we can safely assume that security is the mission. A payment system without security is an accounting system, not a payment system. Similar logic with an Internet server control tool.
With a wireless TCP/IP device, we cannot be so dismissive; an 802.11 wireless internet interface is still good for something if there is no security in it at all. A wireless net without security is still a wireless net. Similar logic with a VoIP product.
(For example, our favourite crypto tools, SSH and Skype, fall on opposing sides of the border. Or see the BSD's choice.)
So this speaks to requirements; a hypothesis might be that in the phase of requirements, first establish your mission. If your mission speaks to security, first, then design security up front. If your mission speaks to other things, then bolt on the security afterwards.
Is it that simple?
Bruce Schneier's popular cryptogram is out with the essay that I challenged earlier. Here's another point that I missed early on:
I see this kind of thing happening over and over in computer security. In the late 1980s and early 1990s, there were more than a hundred competing firewall products. The few that "won" weren't the most secure firewalls; they were the ones that were easy to set up, easy to use and didn't annoy users too much. Because buyers couldn't base their buying decision on the relative security merits, they based them on these other criteria. The intrusion detection system, or IDS, market evolved the same way, and before that the antivirus market. The few products that succeeded weren't the most secure, because buyers couldn't tell the difference.
This is simply the dominance of Kerckhoffs' 6th principle. It is the most important ("as I've said many times before"), although cryptographers still believe the 2nd is more important. One could validly ask how many product and industry evolutions we have to watch before we work this out. Will this happen, again and again?
For those who do not recall, K6 is simply that the tool has to be usable:
6. Finally, it is necessary, given the circumstances that command its application, that the system be easy to use, requiring neither mental strain nor the knowledge of a long series of rules to observe.
By users. It's almost as if, as suggested above, buyers cannot base their buying decision on the security metrics, so they cast around and decide on usability.
But not quite. The truth is far more fundamental than mere ignorance and random behaviour by users. It is that a tool that is used delivers security, and a tool that is not used cannot deliver any security at all. So an easy-to-use tool will generally win over a hard-to-use tool, and this will dominate the security equation.
Or to put it in other terms (another hypothesis!) the most common security failure of all is to not be used. The security business is more about business rules than security rules, which leads me into today's hypothesis:
#6.4 Compromise on Security before Delivery

Security experts will tell you that if you don't build in security at the beginning, then you'll have trouble building it in later. This is true, but it hides a far worse evil: if you spend too much effort on security up front, you'll either lose time and lose the market, or you'll create a security drag on the application side before it has shown its true business model.
The security-first argument speaks to security convenience, the business-first argument speaks to survival. Make sure you understand your business's need for survival, and pray that they understand the inconveniences you put up with to ensure that goal.
This will be unpopular. But, read the snippet above, again. Kerckhoffs wrote K6 in 1883, so we've got plenty of time. Again and again.
Thoughts from the modern world:
My incompetence with electronics and computers has always been offset by help from friends in the software industry. The lesson I learned from trying to work with an older-generation iPod is different. Technologies of different generations don't totally understand each other -- just like humans of different generations. However, in the case of electronics, generations pass in a matter of months, if not weeks.
He suggests that iPods don't talk to cars unless they are both up to date. Gadgets now work in generations!
Meanwhile, the IESG -- Internet Engineering Something Group -- is resisting the OpenPGP community's push to standardise its protocol, because the document isn't good enough to be implemented from the document itself. (As discussed in their open maillist, I'll spare you the details....)
Who's right in this clash of worldviews? The market is always right. Although the IESG has a noble endeavour and a mission derived from the long history of lock-out by major corporations, the market moves too fast for such things.
We as a marketplace simply accept that some consumers, some developers, some standards will lose out. Because they weren't fast enough. OpenPGP faces that battle, and in the end, the long delay from 1997 or so until now has imposed a fair cost on its adoption. (Yes, that was our fault, we didn't write it fast enough.)
Or, in contrast, those that did implement OpenPGP and PGP (Cryptix did both at insistence of myself, for example) didn't wait for the standard. PGP itself has migrated seriously, with losses at each step; we are now on protocol version 4 and this is distinctly different to protocol version 1!
This then means we have to be flexible. Today's hypothesis, then, is directed at the security protocol industry entering the throwaway world:
H6.4: Design One to Throwaway. You Will Anyway.

Carefully partition your protocol to be replaced. Not some portion of it (see Hypothesis #1 -- the One True Cipher Suite) but the whole sodding thing. If you like, get blatant about it and slot in ROT13 with a checksum made from your cat's ink dippings and a key exchange from putty and pigeon post.
Likely your application will not need real security for the first 1000 users or so (pick your own number here, or read up on GP). Once you've proven the business model, then it's time to secure, and start the tortuous and expensive process of rolling out a new protocol.
Here however note carefully: you are taking on a risk on behalf of the business, and the cryptoguild will massacre you! Make sure that you tell your boss and his boss and her boss too, but do it quietly.
... and we need to replace Dodgy Crypto Protocol #1 when the heat starts

Don't just tell them, write it down, email it to the auditors, tattoo it on their daughters' foreheads in a secret code that only they know.
Meanwhile, design Well Needed Protocol #2 over your private weekends. Tell your spouse you need to save the world, or better yet, make him do it. Just make sure you are ready with a replacement in advanced form.
Over on EC and other places they are talking about the .bank TLD as a possibility for solving phishing. Alex says it's an idea who's time has come.
No chance: Adam correctly undermines it:
- Crooks are already investing in their attacks. If that money will have a high return, by convincing more people that the URL is safe, then crooks will invest it.
- Some banks, such as credit unions, can't really afford $50,000 for a domain name, and so won't have one. (Thanks to Alex at RiskAnalys.is, ".bank TLD, An Idea Whose Time Has Come?")
- Finally, and most importantly, it won't work. People don't understand URLs, and banks create increasingly complex URLs. The phishers will make foo.bar.cn/.bank/ and people won't understand that's bad.
It's useful to throw these ideas around ... even if only as an exercise in what doesn't work. .bank won't work, or, if it did, why did all the URL based stuff fail in the past?
Human names aren't secure. Check out Zooko's triangle. Something more is needed. Adam goes on to say:
The easy solution to preserving the internet channel against phishers is to use bookmarks.
The secure bookmark may be it. Adam is right. If there is such a thing as a time for a solution, the bookmark's time may have come.
The reason is that we as a security community (I mean here the one that took phishing seriously as a threat) have done a *lot* of work that has pushed in that direction. In excruciatingly brief form:
Etc, etc. The bookmark was the core result. Its time has come, in that many of the people looking at phishing have now clicked into alignment on the bookmark.
But Adam also raises a good point in that it is hard to compete with sold security:
But that's too simple for anyone to make money at it. Certainly, no one's gonna make $50,000 a bank. That money is better spent on other things. .Bank is a bad idea.
I think that is a truism in general, but also the converse is as much a truism: many security developments were done for free in the 1st instance. SSH, Skype, firewalls, SSLeay, etc were all marked by strong not-for-profit contributions, and these things changed the world.
Being #6.2 in a series of hypotheses.
It's OK to design your own protocol. If there is nothing out there that seems to fit, after really thrashing #6.1 for all she is worth, design one that will do the job -- your job.
Consider the big weakness of SSL - it's a connection protocol!
The mantra of "you should use SSL" is just plain stupid. For a start, most applications are naturally message-oriented and not connection-oriented. Even ones that are naturally stream-based -- like voip -- are often done as message protocols because they are not stream-based enough to make it worthwhile. Another example is HTTP which is famously a request/response protocol, so it is by definition a messaging protocol layered over a connection protocol. The only real connection-oriented need I can think of is for the secure terminal work done by SSH; that is because the underlying requirement (#6.1) is for a terminal session for Unix ttys that is so solidly connection based that there is no way to avoid it. Yet, even this slavish adoption of connection-oriented terminal work is not so clearcut: If you are old enough to remember, it does not apply to terminals that did not know about character echo but instead sent lines or even entire pages in batch mode.
What this principle means is that you have to get into the soul of your application and decide what it needs (requirements! #6.1 ! again!) before you decide on the tool. Building security systems using a bottom-up toolbox approach is fraught with danger, because the limitations imposed by the popular tools cramp the flexibility needed for security. Later on, you will re-discover those tools as shackles around your limbs; they will slow you down and require lots of fumbling and shuffling and logical gymnastics to camouflage your innocent journey into security.
Number 6.1 in a series of Hypotheses, as sparked by the earlier post that suggests that S/MIME and its critics alike lack foundation.
Lay down your requirements before you start.
I find it takes longer to lay down the security requirements of the application than to actually design the protocol to meet the needs. And, it takes longer to design than it does to code up the first version. The hard work is in knowing what to build, not in building it.
During later phases, you will find challenges that bounce you back to your original requirements.
Let that happen. Bounce things back and forward for as long as it takes to prove your requirements are really what is needed for the application. Don't be surprised if you get half-way through and discover a real doozy.
The alternative is to pick up someone else's requirements out of a textbook or off a cypherpunks list and assume they make sense. If that happens you'll be into a perpetual battle that will eventually end up in a dog's breakfast when some real canine attacker works out the joke. If you're in that game, use a pseudonym when designing, and change jobs before it gets serious.
To come: 6.2 through 6.6, to complete Hypothesis #6: It's your job. Do it!
As inspired by this paper on S/MIME signing, I (quickly) surveyed what the RFCs say about S/MIME signature semantics. In brief, RFCs suggest that the signature is for the purpose of:
In increasing order of sophistication. What the RFCs do not say is that a digital signature used in an S/MIME packet should be for a particular purpose, and should be construed to a particular meaning.
That is, they do not say that the signature means precisely some set of the above, and excluding others. Is it one? Is it all? Is one more important than another?
RFC2459 sheds more light on where this is defined:
A certificate user should review the certificate policy generated by the certification authority (CA) before relying on the authentication or non-repudiation services associated with the public key in a particular certificate. To this end, this standard does not prescribe legally binding rules or duties.
There, we can probably make a guess that the RFC uses the term "non-repudiation" when it means all of the legal semantics that might apply to a human act of signing. (Non-repudiation, as we know, is a fundamentally broken concept and should not be used.) So we can guess that any human semantics are deferred to the CA's documents.
Indeed, RFC2459 goes even further by suggesting that the user refer to the CP before relying on the authentication services. This is a direct recognition that all CAs are different, and therefore the semantics of identity must also differ from CA to CA. In this sense, RFC2459 is correct, as we know that many certificates are issued according to domain-control, others are issued according to web-of-trust, and yet others are issued to the EV draft.
So when the paper mentioned above refers to weaknesses, it seems to be drifting into semantics that are not backed up by its references. Although the commentary raises interesting problems, it is not easy to ascribe the problem to any particular area. Indeed, few of the documents specified have a definition of security, as indicated by (lack of) requirements, and neither does the paper supply one. Hence the complaint that "naïve sign & encrypt is vulnerable" is itself vulnerable to definitions assumed but not clearly stated by the author.
Where did these confused semantics come from? This is a common trap that most of the net and much of the cryptographic community has fallen into. The wider problem is simply not doing the requirements, something that serious software engineers know is paramount.
A narrower problem is that the digital signature can be confused with a human signature, perhaps by using the same word for very different concepts. Many people have then thought it has something to do with a human signature, or human identification, confusing its real utility and complicating architectures built on such confusion.
MOSS:
As long as the private keys are protected from disclosure, i.e., the private keys are accessible only to the user to whom they have been assigned, the recipient of a digitally signed message will know from whom the message was sent and the originator of an encrypted message will know that only the intended recipient is able to read it.
ESS:
Some of the features of each service use the concept of a "triple wrapped" message. A triple wrapped message is one that has been signed, then encrypted, then signed again. The signers of the inner and outer signatures may be different entities or the same entity....

1.1.1 Purpose of Triple Wrapping
Not all messages need to be triple wrapped. Triple wrapping is used when a message must be signed, then encrypted, and then have signed attributes bound to the encrypted body. Outer attributes may be added or removed by the message originator or intermediate agents, and may be signed by intermediate agents or the final recipient.
The inside signature is used for content integrity, non-repudiation with proof of origin, and binding attributes (such as a security label) to the original content. These attributes go from the originator to the recipient, regardless of the number of intermediate entities such as mail list agents that process the message. The signed attributes can be used for access control to the inner body. Requests for signed receipts by the originator are carried in the inside signature as well.....
The outside signature provides authentication and integrity for information that is processed hop-by-hop, where each hop is an intermediate entity such as a mail list agent. The outer signature binds attributes (such as a security label) to the encrypted body. These attributes can be used for access control and routing decisions.
S/MIME Version 3 Certificate Handling:
2.3 ... Agents MAY send CA certificates, that is, certificates that are self-signed and can be considered the "root" of other chains. Note that receiving agents SHOULD NOT simply trust any self-signed certificates as valid CAs, but SHOULD use some other mechanism to determine if this is a CA that should be trusted. Also note that in the case of DSA certificates the parameters may be located in the root certificate. This would require that the recipient possess the root certificate in order to perform a signature verification, and is a valid example of a case where transmitting the root certificate may be required.
S/MIME Version 3 Message Specification
1. Introduction

S/MIME (Secure/Multipurpose Internet Mail Extensions) provides a consistent way to send and receive secure MIME data. Based on the popular Internet MIME standard, S/MIME provides the following cryptographic security services for electronic messaging applications: authentication, message integrity and non-repudiation of origin (using digital signatures) and privacy and data security (using encryption).
Internet X.509 Public Key Infrastructure Certificate and CRL Profile
This specification profiles the format and semantics of certificates and certificate revocation lists for the Internet PKI. ...In order to relieve some of the obstacles to using X.509 certificates, this document defines a profile to promote the development of certificate management systems; development of application tools; and interoperability determined by policy.
Some communities will need to supplement, or possibly replace, this profile in order to meet the requirements of specialized application domains or environments with additional authorization, assurance, or operational requirements. However, for basic applications, common representations of frequently used attributes are defined so that application developers can obtain necessary information without regard to the issuer of a particular certificate or certificate revocation list (CRL).
A certificate user should review the certificate policy generated by the certification authority (CA) before relying on the authentication or non-repudiation services associated with the public key in a particular certificate. To this end, this standard does not prescribe legally binding rules or duties.
This is getting boring. Yet another post from Gunnar Peterson where the only thing I can do is copy it verbatim:
Tim Bray on REST:

Messages All The Way Down · HTTP is a decent and under-appreciated protocol, but in the end maybe the most important thing is that it forces you to think about the messages and how you exchange them. There’s no pretense that Remote Procedures are being called, or Object Models are being shared; I send you some bits and some metadata about them, and you respond with the same. It turns out that in the HTTP universe, at that point our conversation is over, and that turns out to be a good basis for building applications, but the key thing is putting the messages, rather than what they allegedly stand for, at the center.

That's great - the message is the center - now REST just needs a message level security model and mechanisms like WS-Security. SSL is what is usually bandied about as a security model by Restafarians, but we know from Deutsch, Gosling, and Joy that "the network is secure" is the fourth fallacy of distributed computing.
Ok, I admit to being highly skeptical about WS-anything, but that's his area :) The concept however is right: REST is great but totally insecure, and there isn't any way to help that right now, SSL being a famously connection-oriented design that doesn't meet a message-oriented requirement.
( For the record, SOX was more or less a basic form of REST, done in 1995 by Gary Howland. Not because he was some sort of seer or genius, but because that was the only serious way to engineer it. Well, that's what we thought at the time. Maybe he was a genius. )
Some time ago, I used Thunderbird as a foil for introducing a hypothesis of mine:
There is only one mode, and it is secure.
Thunderbird, although long suffering and unfairly maligned, is a perfect example of how a security application fails to deliver security, because it believes there are "modes" and a user must "enable modes" in order to get security. It is, in the case of Thunderbird, and practically all S/MIME products, a disaster, because it reduces takeup to the numbers that don't permit any valuable market feedback to develop.
Here, then is Hypothesis #3 all written out in simple, direct form.
#3 There is only one Mode, and it is Secure.
Good Protocols bootstrap into secure mode, immediately. On first time starting up, the protocol goes into secure mode. From there on, it only gets better.
There is no insecure mode. No button to turn off, no feature to forget, no way to trick the application.
There should be no necessary setup by the user. If a public key is needed, make it. If the other side needs a secret key, send it. If you need a password, invent it and suggest the user replaces it, don't ask for it and block the action.
It should take an Act of God to start up in anything but the best possible security you can manage. A Presidential Order doesn't cut it. Make your protocol immorally promiscuous and your users will thank you, even if your reputation will take a dive for about 5 years. Don't worry, your users will outlive their users.
In practice, a perfect start-up is impossible, and we do not let a perfect theory be the enemy of a good protocol. Your protocol should leap into life with joy and abandon, and reward your users with instantly secure communication. Let an expert add some extra features if they want to get that warm gooey feeling of no-risk security, but never let him tell you that near enough isn't good enough. Instead, send him paddling over to a committee.
The above says what to do, but it doesn't say why. Let's talk about why.
It's all about take-up: In the building and especially the deployment of a secure protocol, every time we present a choice to the user, we face the binary destruction of our market share. To our poor, overloaded and easily distracted user, if there are two modes, and if everything is equal, she has a 50% chance of turning off the security.
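To put rough numbers on that binary destruction (a back-of-the-envelope sketch, not a measurement):

```python
# Each choice the user must get right halves the surviving secure user base;
# 50% per choice is the optimistic figure, as the next paragraph argues.
def surviving_share(choices, keep_rate=0.5):
    return keep_rate ** choices

for n in range(1, 5):
    print(f"after {n} choice(s): {surviving_share(n):.1%} of users still secure")
# four choices leave roughly 6% of the user base with security still on
```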
Actually, things aren't equal, and it is much worse than 50%. Some brief examples:
(Thunderbird now permits encryption *without signing* so it is possible to do a simple "signed nonsense message" to send your key, and avoid the trauma of the above paragraph. This is a big win, as before it was unprofessional and potentially dangerous to recommend S/MIME.)
I could go on. All these problems have solutions, of course, if you are serious about that sort of thing. Or you could take another tack and sweep all the problems away, by simply employing H3:
There is only one mode, and it is secure.
H3 says "be kind to your users," and don't ask them to turn off your software. H3 suggests "be kind to your shareholders," and don't ask them to apply the maths of binary destruction to the predicted user base in your business plan.
(cleaning up old stats from the Times ... there must be money here, as otherwise, why didn't I publish these months ago?)
ONLINE, ON THE PHONE, ON THE UP
50 billion - the number of e-mails dispatched every day worldwide; in 2001 the traffic was less than 12 billion
88 per cent - of e-mails are junk, including about 1 per cent which are virus-infected
32 - the average number of e-mail messages received per person per day. This is rising by 84 per cent each year
440 million - the number of electronic mailboxes in use, including 170 million corporate ones, growing by 32 per cent per year
1,035 million - the total number of mobile phone text messages sent each month in Britain
37 - the average number of texts a user sends per month, compared with 21 in 2001
1 million - the number of children aged under 10 in Britain - one in three - who own a phone
8 - the average age at which a child gets a mobile phone in Britain
Copyright 2006 Times Newspapers Ltd.
Paul Crowley rants in anguish over the status of current day cryptoplumbing, and attacks the usual suspects (SSL, SSH, *pgp*, IPSec). They are easy targets so I'll let you read them there. (Where Paul goes further is that he actually attacks SSH, which takes it beyond the norm. I like.)
Leaving that aside, what is it that Paul is really interested in? He needs a protocol.
I'm now in the position of wanting to make crypto recommendations for the next generation of the Monotone revision control system. I wish I had a better idea what to tell them. They need transport-level crypto for server-to-server connections, but I hesitate to recommend SSL because the poison that is X.509 is hard to remove and it makes all the libraries for using SSL ugly and hard to use. They need to sign things, but I don't want to recommend OpenPGP: it's hard to talk to and the Web of Trust is a truly terrible fit for their problem; on top of which, OpenPGP has no systematic way to assert the type of what you're signing. They need a way for one key to make assertions about another, and we're going to invent all that from scratch because nothing out there is even remotely suitable.
After long hard thought ... Paul says he has problems with what is offered. Here's what I think, and in my scratch pad of Hypotheses, I call this
Hypothesis #6: It's your job. Do it.
(It's your job to click. Click on ... thanks to Zooko for the pointer! Oh, and the editor reminds that Hypothesis #3 was introduced earlier.)
The Grnch asks, in all honesty:
> What is the point of encrypting information that is publicly visible?
To which the answer is:
To remove the security weakness of the decision.
This weakness is the decision required to determine what is "publicly visible" or not. Or, more generally, what is sensitive or not.
Unfortunately, there is no algorithm to determine those two sets. Software can't make that decision, so it falls to humans. Unfortunately, humans have no good algorithm either -- they can make quick value judgements, they can make bad value judgements, or they can turn the whole thing off. Worse, even, it often falls to software engineers to make the decision (e.g., HTTPS) and not only are engineers poor at making such judgements, they don't even have local contextual information to inform them. That is, they have no clue what the user wants protected.
The only strategy then is to encrypt everything, all the time. This feeds into my third hypothesis:
There is only one mode, and it is secure.
I'm being very loose here in the use of the word "secure" but suffice to say, we include encryption in the definition. But it also feeds into another hypothesis of mine:
Only end-to-end security is secure.
For the same reasons ... if we introduce a lower layer security mechanism, we again introduce a decision problem. Following on from the above, we can't trust the software to decide whether to encrypt or not, because it has no semantic wisdom with which to decide. And we can't trust the user...
Which brings rise to the knowledge problem. Imagine a piece of software that has a binary configuration for own-security versus rely-on-lower-layers. A button that says "encrypt / no-encrypt" which you set if the lower layer has its own security or not, for example. There is, so the theory goes, no point in encrypting if the lower layer does it.
But, how does it know? What can the software do to reliably determine whether the lower layer has encryption? Consider IPSec ... how do we know whether it is there? Consider your firewall sysadmin ... who came in on the weekend and tweaked the rules ... how do we know he didn't accidentally open something critical up? Consider online account access through a browser ... how do we know that the user has secured their operating system and browser before opening Internet Explorer or Firefox?
You can't. We can't, I can't, nobody can rely on these things. Security models built on "just use SSL" or similar are somewhere between fatally flawed and nonsense for these reasons; for real security, security models that outsource the security to some other layer just don't cut the mustard.
But, the infrastructure is in place, which counts for something. So are there some tweaks that we can put in place to at least make it reasonably secure, whatever that means? Yes, they include these fairly minor tweaks:
Spread the word! This won't stop phishing, but it will make it harder. And, it gets us closer to doing the hard re-engineering ... such as securing against MITB.
Lynn mentioned in comments yesterday:
I guess I have to admit to being on a roll.
:-) Lynn grasped the nexus between the tea-room and the systems room yesterday:
One of the big issues is inadequate design and/or assumptions ... in part, failing to assume that the operating environment is an extremely hostile environment with an enormous number of bad things that can happen.
What I didn't stress was the reasons behind why security training was so important -- more important than your average CSO knows about. Lynn spots it above: reliability.
The reason we benefit from teaching security (think Fight Club here not the American Football) is that it clearly teaches how to build reliable systems. The problem addressed here is that unreliable systems fall foul of statistical enemies, and they are weak, few and far between. But when you get to big systems and lots of transactions, they become significant, and systems without reliability die the death of a thousand cuts.
Security training solves this because it takes the statistical enemy up several notches and makes it apparent and dangerous even in small environments. And, once a mind is tuned to thinking of the attack of the aggressor, dealing with the statistical failure is easy, it's just an accidental case of what an aggressor could do.
I would even assert that the enormous amounts of money spent attempting to patch an inadequate implementation can be orders of magnitude larger than the cost of doing it right in the first place.
This is the conventional wisdom of the security industry -- and I disagree. Not because it doesn't make sense, or because it isn't true (it makes sense! and it's true!) but because time and time again, we've tried it and it has failed.
The security industry is full of examples where we've spent huge amounts of money on up-front "adequate security," and it's been wasted. It is not full of examples where we've spent huge amounts of money up front, and it's paid off...
Partly, the conventional security industry wisdom fails because it is far too easy for us to hang it all out in the tea-room and make like we actually know what we are talking about in security. It's simply too easy to blather such received wisdom. In the market for silver bullets, we simply don't know, and we share that absence of knowledge with phrases and images that lose meaning for their repetition. In such a market, we end up selling the wrong product for a big price -- payment up front, please!
We are better off -- I assert -- saving our money until the wrong product shows itself to be wrong. Sell the wrong product by all means, but sell it cheaply. Live life a little dangerously, and let a few frauds happen. Ride the GP curve up and learn from your attackers.
But of course, we don't really disagree, as Lynn immediately goes on to say:
Some of this is security proportional to risk ... where it is also fundamental that what may be at risk is correctly identified.
Right.
To close with reference to yesterday's post: Security talk also easily impresses the managerial class, and this is another reason why we need "hackers" to "hack", to use today's unfortunate lingo. A breach of security, rendered before our very servers, speaks for itself, in terms that cut through the sales talk of the silver bullet sellers. A breach of security is a hard fact that can be fed into the above risk analysis, in a world where Spencarian signals abound.
Once upon a time we all went to CompSci school and had lots of fun.
Then it all stopped. It stopped at different times at different places, of course, but it was always for the same reason. "You can't have fun," wiggled the finger of some professor talking a lot of Greek.
Well, it wasn't quite like that, but there is a germ of truth in the story. Have a look over at this post (spotted on EC) where the poster declares -- after a long-winded description of the benefits of a classical education -- that the computer science schools should add security to their canons, or core curricula. (Did I get the plurals right? If you know the answer you will have trouble following the rest of this post...)
Teaching someone to validate input is easy. Teaching someone why they need to be rabid about doing it every single time - so that they internalize the importance of security - is hard. It's the ethics and values part of secure coding I really hate having to retrofit, not the technical bit. As it says in Proverbs 22:6, "Train a child in the way he should go, and when he is old he will not turn from it."
This poster has it wrong, and I sense years in the classroom, under the ruler, doing verbs, adverbs and whatnots. No fun at all.
Of course, the reason security is hard is because they -- the un-fun classical scholars -- don't provide any particular view as to why it is necessary, and modern programmers might not be able to eloquently fill in the gap, but they do make very economic decisions. Economics trumps ethics in all known competitions, so talking about "ethics and values of secure coding" is just more Greek.
So what happened to the fun of it all? I speak of those age-old core skills now gone, the skills now bemoaned in the board rooms of just-hacked corporations the world around, as they frown over their SB1386s. To find out what happened to them, we have to go back a long time, to a time when titles mattered.
Time was, a junior programmer didn't have a title, and his first skills were listening, coding and frothing.
He listened, he coded and frothed, all under close supervision, and perchance entered the world's notice as a journeyman, being an unsupervised listener, coder and frother.
In those days, the journeyman was called a hacker, because he was capable of hacking some bits and pieces together, of literally making a hack of something. It was a bit of a joke, really, but our hacker could get the job done, and it was better to be known as one than not being known at all. (One day he would aspire to become a guru or a wizard, but that's another story.)
There was another lesser meaning to hacker, which derived from one of the journeyman's core classes in the canon -- breaching security. He was required to attack others' work. Not so as to break it but so as to understand it. To learn, and to appreciate why certain odd things were done in very certain but very odd ways.
Breaching security was not only fun, it was a normal and necessary part of computer science. If you haven't done it, you aren't well rounded, and this is partly why I muck around in such non-PC areas as poking holes in SSL. If I can poke holes in SSL, so the theory of secure protocols goes, then I'm ready -- perhaps -- to design my own.
Indeed breaching security or its analogue is normal and essential in many disciplines; cryptology for example teaches that you should spend a decade or so attacking others' designs -- cryptanalysis -- before you ever should dare to make your own -- cryptography. Can you imagine doing a civil engineering course without bending some beams?
(Back to the story.) And in due course, when a security system was breached, it became known as being hacked. Not because the verb was so intended, but by process of elimination some hacker had done it, pursuant to his studies. (Gurus did not need to practice, only discuss over cappuccinos. Juniors listened and didn't do anything unless told, mostly cleaning out the Atomic for another brew.)
You can see where we are going now, so I'll hit fast-forward. More time passed ... more learning a.k.a. hacking ... The press grasped the sexy term and reversed the senses of the meaning.
Some company had its security breached. Hacking became annoying. The fun diminishes... Then the viruses, then the trojans, the DDOS, the phishing, the ....
And it all got lumped together under one bad name and made bad. Rustication for some, "Resigned" for others, Jail for some unfortunates.
That's how computer science lost the security skills from the canon. It was dropped from the University curricula by people who didn't understand that it was there for a reason. Bureaucrats, lawmakers, police, and especially corporates who didn't want to do security and preferred to blame others, etc etc, the list of naysayers is very long.
Having stripped it from computer science, we are now in a world where security is not taught, and we need to ask what they suggest in its place:
There are some bright security spots in the academic environs. For example, one professor I talked to at Stanford in the CS department - in a non-security class, no less - had his students "red team" and "blue team" their homework, to stress that any and all homework had to be unhackable. Go, Stanford! Part of your grade was your homework, but your grade was reduced if a classmate could hack it. As it should be.
Right, pretend hacking exercises. Also, security conferences. Nice in spirit, but the implementation is a rather poor copy of the original. Indeed, if you think about the dynamics of hacking games and security conferences ("Nobody ever got hacked going to Blue Hat??") we aren't likely to sigh with relief.
And then:
One of my colleagues in industry has an even more draconian (but necessary) suggestion for enforcing change upon universities. ... He decided that one way to get people's attention was to ask Congress to tie research funds to universities to changing the computer science curriculum. I dare say if universities' research grants were held up, they might find the extra time or muster the will to change their curricula!
Heaven help us! Are we to believe that the solution to the security training quandary is to ask the government to tell us how to do security training? Tell me this is a joke, and the Hello Kitty People haven't taken over our future:
The Hello Kitty people are those teenagers who put their personal lives on MySpace and then complain that their privacy is being violated. They are the TV viewers who think that the Hurricane Katrina rescue or the Iraq war were screwed up only because we don't, they belatedly discover, have actual Hello Kitties in political power. When inevitably some of world's Kitties, unknown beyond their cute image, turn out to be less than fully trustworthy, the chorus of yowling Kitty People becomes unbearable cacophony.
(We've written before about how perhaps the greatest direct enemy of Internet security is the government, so we won't repeat today.)
Here is a test. A corporation such as Oracle could do this, instead of blaming the government or the hackers or other corporations for its security woes. Or Microsoft could do it, or anyone, really.
Simply instruct all your people to breach security. Fill in the missing element in the canon. Breach something, today, and learn.
Obviously, the Greeks will complain about the errant irresponsibility of such support for crime ... but just as obviously, if they were to do that, their security knowledge would go up by leaps and bounds.
Sadly, if this were taken seriously, modern corporations such as Oracle would collapse in a heap. It's far cheaper to drop the training, blame "the hackers" and ask the government for a subsidy.
Even more sadly, we just don't have a better version of training than "weapons free." But let's at least realise that this is the issue: you classicals, you bureaucrats, you Sonys and Oracles and Suns, you scared insecure corporations have brought us to where we are now, so don't blame the Universities, blame yourselves.
And, in the name of all that is sacred, *don't ask the government for help!*
Dave Birch posts on telco operators in Britain and NFC -- near field communications. You should read the lot, but here's most of it as it's mostly FC:
I think they are alluding to a tussle between service providers (eg, banks) and mobile operators over NFC. The mobile operators want everything to go through the SIM, generally speaking, including NFC applications. This frames NFC applications in the kind of portal strategy that was so successful for the operators when they introduced WAP. Naturally, the service providers want NFC applications to be directly addressable without going through the SIM, so that anyone can load any NFC application into their phone.

I have to say that my sympathies are with the service providers here. We've been having fun building and loading NFC applications for a variety of clients using -- for instance -- the Nokia NFC phone (see diagram below) and it would be crazy to have to get permission from (or, indeed, pay) an operator to load a new application: it would kill creativity.
In fact, I'm sure it would be bad (even for the operators) to have NFC locked down by the operators. For one thing, contactless payments are only one potential use of NFC phones and the operators should not let this one application (and the idea of charging banks a transaction fee) dominate their roadmap. There is a much bigger picture, which is around the integration of mobile phones into their local environments. Look at smart posters, as an obvious example: we've been playing with these for a while now. Incidentally, one of the demonstrations that we will be presenting at the Intelligent Transport Systems World Congress in London next month allows someone to find out a bus timetable by holding their phone up to a poster at a bus stop and then subsequently buy a ticket by waving their phone over a "purchase ticket" label (we'll also be demonstrating phone-to-phone transfer of tickets, which is pretty interesting). Anyway, the point is that no-one accesses these kinds of services by typing URLs into WAP screens: no smart posters means no service discovery and therefore no data usage.
We saw this same process with WAP. Some people tried and were constrained (notably, e-gold did WAP based payments from 1999 onwards). Other people didn't bother, as it was clear from the outside that the control structure was too tight to innovate.
So telco operators get another chance with NFC. Will they try (and fail) to control NFC or destroy it in the attempt? As Dave indicates, getting "Internet" thinking to telcos is no easy sale.
I think the answer is they will fail to control it. The reason for this is that there are enough rebel handset manufacturers out there now; there are even, I am told, handsets that can route calls through bluetooth and VoIP (the shame! the horror!). An open market will counter-balance control attempts, and NFC will just become another option independent of the network.
While on the conjunction of Mozo tools and security, woeful or otherwise ... a month or so back I used Thunderbird as a foil to introduce a hypothesis (which you can call a law when I'm dead):
* There is only one mode, and it's secure. *
Predictably, most people thought I was grumbling "about Thunderbird", when in fact I was grumbling about the brain-dead insecurity known as PKI.
Unfortunately, Thunderbird has a great name, but the security thing it follows does not. The absolutely dire architecture loosely and variously known as S/MIME or x.509 identity certificates, as applied to email, provides a textbook case of how a system breaches Kerckhoffs' 6th law -- that the system should be usable -- and therefore is bypassed by the users. As this is his most important law by far (by simple economic proof, if you wish to demur) the system is insecure, and the 5 lesser Kerckhoffs do not enter into the discussion.
I also pointed out that I had two newbies up and going in 3 minutes flat with Skype - both with next to no help (over some other chat thing like ICQ). So let's call that the benchmark for a superlative security solution that meets Kerckhoffs' 6th law.
Which leads me to last night's observation - Jimbo, a moderately experienced computer person, downloaded, installed and was sending and receiving email with Thunderbird within 10 minutes. "Less probably," he says, which was welcome relief after debugging what turned out to be a Microsoft Outlook failure brought on by a re-install of Microsoft Office. About an hour, times two, wasted, but at least we got another Tbird flying in <10mins.
From which we can conclude that Thunderbird is in the ballpark, that's actually a good result, albeit with no security. We could suggest that Thunderbird would potentially meet Kerckhoffs' 6th if it could boot up in some sort of quasi-secure mode (even though it and every other mailer won't meet my hypothesis and probably never will, due to the tired and aged infrastructure of email).
So, why is this important? Well, it's important because there are some relatively simple fixes to put in that would bring an x.509 identity based mail agent closer to being in line with modern security thinking (if we can stretch the point with the late 19th century).
Here they are:
The key is to be opportunistic, pun intended. By creating and adding the missing key, we bootstrap from nothing -- insecure comms -- to a security level which is good-enough-for-most-folks.
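As a rough illustration of the bootstrap -- not the actual fix list for Thunderbird, and using NaCl keys rather than x.509/S/MIME; every name and path here is hypothetical:

```python
# Opportunistic keying: make a key if we have none, advertise it on every
# outgoing message, and encrypt whenever we already know the peer's key.
import os
from nacl.public import PrivateKey, PublicKey, SealedBox

KEYFILE = os.path.expanduser("~/.mail-opportunistic.key")   # hypothetical location

def my_key():
    # If a key is needed, make it -- never ask the user first.
    if not os.path.exists(KEYFILE):
        with open(KEYFILE, "wb") as f:
            f.write(bytes(PrivateKey.generate()))
    with open(KEYFILE, "rb") as f:
        return PrivateKey(f.read())

def prepare_outgoing(body, peer_public_key=None):
    me = my_key()
    headers = {"X-Public-Key": bytes(me.public_key).hex()}   # advertise our key
    if peer_public_key is None:
        return headers, body          # first contact: cleartext, but our key rides along
    sealed = SealedBox(PublicKey(peer_public_key)).encrypt(body)
    return headers, sealed            # thereafter: encrypted, no questions asked
```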
Let's step back and put this in context. The above simple crypto mechanism is better than the alternate as it encrypts when it can. The alternate is nothing, recall. Yet, some might say that it is worse than the PKI alternate, to which I say: comparing real world user comms -- email in flight -- to some paper ideal is an irrelevant comparison.
Keep in mind that the alternate is no crypto email, due to the immutable K6. If however you are one of those unfortunate souls locked in the world of PKI, and/or to die in the attempt, this method comes with an added bonus:
I don't want to stress the last point, above. Perhaps that's because I audit a CA in my spare time (can you say conflict of interest?), or perhaps because I have a fairly pessimistic view of the ability of the players to deliver something the market wants. Or perhaps I don't want to encourage software suppliers to do yet more secret deals with yet more opaque partners which users can't control or even have reasonable security expectations about? Who knows...
But, notwithstanding all the above, points 1 thru 5 will approximately deliver a startup that gets a Thunderbird client into something like secure comms some of the time. And once something like that is done, there are plenty of ways to *improve* that to get more crypto, more of the time, more securely.
I wonder if it could be done with a plugin?
After SeaLand burnt up a month or so back ...
... we've been on the lookout for a new site for an ultra-cool offshore datahaven. And now we're proud to announce we've found it:
US missile defence radar hits stormy seas - $815m behemoth flounders - By Lester Haines
The Sea-Based X-Band Radar. Photo: Boeing
The future of the US's much-hyped, $815m Sea-Based X-Band Radar (SBX) platform is in serious doubt after a raft of sceptics expressed doubts that the sea monster could ever effectively operate in its intended final mooring off Alaska's Aleutian Islands.
Check it out ...
Is that the mother of all radomes, or what?
Jane Adams reports over on the Digital Money blog:
Now Visa International is working with InCard Technologies, an American company that has come up with something called Power Inlay Technology. The result is a smart card with a screen (or a smart card with a built in light or one that plays annoying jingles - isn't technology wonderful?) and a random number generator. That means the card is capable of generating a dynamic cvv2 for example.
Hopping over to InCardTech, we find, right down the bottom of the page:
DisplayCard with Additional Information
Depending on the type of payment card in question, a wealth of information useful to the consumer can be conveyed on an easy to read display. All this at the consumer’s fingertips!
Offering you what you need to know, when you need to know it.
Potentially interesting! What does this mean? Jane says:
It doesn't come with a keypad yet but Visa seems confident that it will. What does that mean? That means that the arguments about who pays for readers for token based authentication for secure sign on to online banking will become redundant. Who needs a reader if you can tap your PIN directly into the card and the card itself displays your one time passcode?
Whoa, slight confusion there. You still need a reader ... to communicate with the smart card. What you don't necessarily need, in the fullness of time, is an expensive Class 4 secured reader, with its own display, keypad and Go button.
So while this will eliminate some costs, indeed a major cost, costs will remain.
And those costs are still rather traumatic, especially in comparison to your other smart card that already comes with its secure reader, including display and keypad, and soon to include GPS and other handy dandy tracking tools. (your cell/mobile/handy.) So while this may eventually cause smart cards to be independently useful in some universe, I'd suspect it is too little, too late at this stage.
Also from the Maginot product line, they even have a one time pad card.
In talking with Hagai, it was suggested that I try using the TLS/IMAP capabilities of Thunderbird, which I turned on (it's been a year or two since the last time I tried it). Unfortunately, nothing happened. Nothing positive, nothing negative. Cue in here a long debate about whether it was working or not, and how there should be a status display, at least, and various other remedies, at most.
A week later, the cleaning lady came in and cleaned up my desk. This process, for her, also involves unpowering the machine. Darn, normally I leave it on for ever, like a couple of months or so.
On restarting everything, Thunderbird could not connect to the mail servers. Our earlier mystery is thus resolved - the settings don't take effect until restart. Doh!
So, how then did Thunderbird handle? Not so well, but it may have got there in the end. This gives me a chance to do a sort of case study in 1990s design weaknesses, a critique in (un)usability, leading to design principles updated for this decade.
To predict the punch line, the big result is that there should only be one mode, and it should be secure. To get there more slowly, here's what I observed:
Firstly, Thunderbird grumbled about the certificate being in the wrong name. I got my negative signal, and I knew that there was something working! Hooray!
But, then it turned out that Thunderbird still could not connect, because "You have chosen secure authentication, but this server does not offer it. Therefore you cannot log in..." Or somesuch. Then I had to go find that option and turn it off. This had to be done for all mail accounts, one by one.
Then it worked. Well, I *guess* it did... because funnily enough it already had the mail, and again had not evidenced any difference.
Let's break this up into point form. Further, let's also assume that all competing products are as bad or worse. I actually *choose* Thunderbird as my preferred email client, over say Kmail. So it's not as bad as it sounds; I'm not "abandoning Thunderbird", I'm just not getting much security benefit from it, and I'm not recommending it to others for security purposes.
Another demerit: multiple options with no clear relationship, but unfortunate consequences.
(So, we now start to see what "need" means when used by users... it means "I need my email and I'll switch the darned security rubbish off and/or move to another system / supplier / etc.")
We can conclude that this is a nightmare in terms of:
Let's put this in context: when this system was designed, we didn't have the knowledge we have now. Thunderbird's security concept is at least 3 years old, probably 8-10 years old. Since those years have passed, we've got phishing, usability studies, opportunistic crypto, successful user-level cryptoapps (two, now), and a large body of research that tells us how to do it properly.
We know way more than we did 3 years ago - which was when I started on phishing. (FTR, I suggested visit counts! How hokey!)
Having got the apologies off our chest, let's get to the serious slamming: If you look at any minor mods to the Thunderbird TLS-based security, like an extra popup, or extra info or displays, you still end up with a mess. E.g., Hagai suggested that there should be an icon to display what is going on - but that only helps *me*, an experienced user who knows exactly what it is trying to tell me. I know what is meant by 'secure authentication' but if you ask grandma, she'll offer you some carrot cake and say "yes, dear. now have some of this, I grew the carrots myself!"
(And, in so doing, she'll prove herself wiser than any of us. And she grows carrots!)
Pigs cannot be improved by putting them in dresses - this security system is a pig and won't be improved by frills.
The *design* is completely backwards, and all it serves to do is frustrate the use of the system. The PKI view is that the architecture is in place for good reasons, and therefore the user should be instructed and led along that system path. Hence,
"We need to educate the users better."
That is a truly utterly disastrous recommendation. No! Firstly, the system is wrong, for reasons that we can skip today. Secondly, the technical choices being offered to the users are beyond their capabilities. This can never be "educated." Thirdly, it's a totally inefficient use of the user's time. Fourthly, the end effect is that most users will not ever get the benefit.
(That would be a mighty fine survey -- how many users get the benefit of TLS security in Thunderbird? If it is less than 10%, that's a failure.)
The system should be reversed in logic. It should automatically achieve what it can achieve and then simply display somewhere how far it got:
The buttons to turn these modes on are totally unnecessary. We have computers to figure that sort of nonsense out.
Even the above is not the best way. Fallback modes are difficult to get right. They are very expensive, brittle even. (But, they are better - far far far cheaper - than asking the user to make those choices.) There is still one way to improve on this!
Hence, after 5 demerits and a handful of higher-level critiques, we get to the punchline:
To improve, there should only be one mode. And that mode is secure. There should be only one mode, because that means you can eliminate the fallback code. Code that falls back is probably twice as large as code that does not fall back. Twice as brittle, four times as many customer complaints. I speak from experience...
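For contrast, here is roughly what single-mode code looks like (a sketch with placeholder host and credentials): the client speaks IMAP over verified TLS or it raises an error, and there is no second code path to write, test or fall back into.

```python
import imaplib
import ssl

def fetch_mail(host, user, password):
    ctx = ssl.create_default_context()              # verifies certificates and hostnames
    client = imaplib.IMAP4_SSL(host, 993, ssl_context=ctx)
    client.login(user, password)
    client.select("INBOX")
    typ, data = client.search(None, "ALL")
    return data                                      # secure, or an exception -- nothing in between
```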
The principle, which I call my 3rd Hypothesis in Secure Protocol Design, reads like this:
There is only one mode, and it is secure.
If you compare and contrast that principle with all the above, you'll find that all the above bugs magically disappear. In fact, a whole lot of your life suddenly becomes much better.
Now, again, let's drag in some wider context. It is interesting that email can never ever get away from the fact that it will always have this sucky insecure mode. Several of them, indeed. So we may never get away from fallbacks, for email at least.
That unfortunate legacy should be considered as the reality that clashes with the Hypothesis. It is email that breaches the Hypothesis, and it and all of us suffer for it.
There is no use bemoaning the historical disaster that is email. But: new designs can and will get it right. Skype has adopted this Hypothesis, and it took over - it owns VoIP space in part because it delivered security without the cost. SSH did exactly the same, before.
In time, other communication designs such as for IM/chat and emerging methods will adopt Hypothesis #3, and they will compete with Skype. Some of the mail systems (Start/TLS ?) have also adopted it, and where they do, they do very well, allegedly.
(Nobody can compete with SSH, because we only need one open source product there - the task is so well defined there isn't any room for innovation. Well, that's not exactly true - there are at least two innovations coming down the pipeline that I know of but they both embrace and extend. But that's topic drift.)
SNI is slowly coming to fruition. Quick reminder: SNI is the extension that supports multiple SSL servers on the same machine, and its absence is one huge barrier to the routine employment of TLS as an aid against phishing and other threats.
Googling around I found that Apache 2.2.0 may be slated to support SNI. This is important -- until Apache or the Microsoft equivalent supports it as shipped, we will be blocked as browsers will be waiting, waiting, ...
Over on ThoughtTorrent, Nik says that most browsers are good already or are going to be ready to support it. What's missing is Apple's Safari. Shocking! File a bug:
Title: Support for TLS Server Name Indication (RFC 3546)
Summary: This is a request for server name indication (SNI) support, per RFC 3546. It's basically TLS' equivalent of the Host header, allowing the correct SSL certificate to be served when multiple domains share the same IP address.
Steps to Reproduce: Connect to a server that supports SNI.
Expected Results: An extended hello, with a server_name extension, is sent.
Actual Results: A perfectly normal hello is sent.
Regression: This occurs consistently in my tests using Safari/WebKit as frontends.
Notes: There is a test server available at https://sni.corelands.com/
Other browsers support this:
- Opera 8.0
- Internet Explorer 7, beta 2
- Firefox 2.0
- Konqueror 4.0
Granted, the last three aren't released yet, but it does show that support is widespread.
But you need an Apple user account to do this. Bummer! Has anyone got one?
Someone else is also running test servers:
https://sni.velox.ch
https://alice.sni.velox.ch
https://carol.sni.velox.ch
https://bob.sni.velox.ch
https://dave.sni.velox.ch
https://mallory.sni.velox.ch
https://www.sni.velox.ch
https://test.sni.velox.ch
What more needs to be done? Does Apache need some help in getting this done? Given the importance of spreading more websites to TLS usage (so we can use the certs to deal with phishing on a routine basis) this is one of those projects where it is worth it to go the distance, identify the blockages and see what we can do to remove them.
(I note that Frank has posted more information on the Mofo grants project. Here's a totally random thought .... Is there merit in sponsoring a TLS/SNI compatibility meet? A bit like George's of last year...)
Finally some figures! We've known for a decade that the SSH model consumes all in its path. What we haven't known is relative quantities. Seen somewhere on the net, this week's report shows encrypted traffic: in SSH form, 3.42%; in HTTPS form, 1.11%, by volume. For number of packets, it is 3.51% and 1.67% respectively.
Protocol | Share (bytes) | Bytes | Share (packets) | Packets
---|---|---|---|---
SSH | 3.42% | 17.45T | 3.51% | 20.98G
HTTPS | 1.11% | 5.677T | 1.67% | 10.00G
IPsec ESP | 0.14% | 0.697T | 0.21% | 1.211G
IPsec AH | 0.01% | 0.054G | 0.01% | 0.089G
IPsec IKE | 0.00% | 0.001G | 0.00% | 0.006G
Approximately a three-times domination, which is our standard benchmark for a good whipping in military terms. Although this is not a pitched battle of like armies contesting the same space (like the VPN bloodletting to come), it is important to establish that SSH usage is significant, non-trivial and exceeds HTTPS on all measures.
IPsec barely twitched the needle and others weren't reported. Curiously, the amount of HTTPS is way up compared to HTTP: about 7-8%. I would have expected much less; the many silent but resilient readers of FC have more impact than previously thought.
There's one monster catch: this is "Internet 2" which is some weird funded different space, possibly as relevant to the real net as candy prices are to the economy. Also, no mention of Skype. Use of Rsync and FTP slightly exceeds that of all encrypted traffic. Hmmm.... people still use Rsync. What is wrong here? I have not come across an rsync user since ... since ... Any clues?
Still it's a number. Any number is good for an argument.
Financial cryptographer Mark Miller has finished his PhD thesis, formally entitled Robust Composition: Towards a Unified Approach to Access Control and Concurrency Control.
This is a milestone! The thrust of Dr Miller's work could be termed "E: a language for financial cryptography applications." Another way of looking at it could be "Java without the bugs" but that is probably selling it short.
If this stuff was in widespread usage, we would probably reduce our threat surface area by 2 orders of magnitude. You probably wouldn't need to read the other two papers at all, if.
http://www.erights.org/talks/thesis/
http://www.caplet.com/
Abstract
When separately written programs are composed so that they may cooperate, they may instead destructively interfere in unanticipated ways. These hazards limit the scale and functionality of the software systems we can successfully compose. This dissertation presents a framework for enabling those interactions between components needed for the cooperation we intend, while minimizing the hazards of destructive interference.
Great progress on the composition problem has been made within the object paradigm, chiefly in the context of sequential, single-machine programming among benign components. We show how to extend this success to support robust composition of concurrent and potentially malicious components distributed over potentially malicious machines. We present E, a distributed, persistent, secure programming language, and CapDesk, a virus-safe desktop built in E, as embodiments of the techniques we explain.
Advisor: Jonathan S. Shapiro, Ph.D.
Readers: Scott Smith, Ph.D., Yair Amir, Ph.D.
Presented here as the lead article in FC++. Mark's generous licence:
Copyright © 2006, Mark Samuel Miller. All rights reserved.
Permission is hereby granted to make and distribute verbatim copies of this document without royalty or fee. Permission is granted to quote excerpts from this document provided the original source is properly cited.
[Anne & Lynn Wheeler do a guest post to introduce a new metaphor! Editorial note: I've done some minor editorial interpretations for the not so eclectic audience.]
From the ISO, a new standard aims to ensure the security of financial transactions on the Internet:
ISO 21188:2006, 'Public Key Infrastructure for financial services - practices and policy framework', offers a set of guidelines to assist risk managers, business managers and analysts, technical designers and implementers and operational management and auditors in the financial services industry.
My two bits [writes Lynn], in light of the recent British pin&chip vulnerability thread, is to consider another metaphor for viewing the session authentication paradigm: it tends to leave the transaction naked and vulnerable.
In the early 1990s, we had worked on the original payment gateway for what came to be called e-commerce 1, 2 (as a slight aside, we also assert it could be considered the first SOA implementation 3 - Token-ring vs Ethernet - 10 years later).
To some extent part of the transaction vulnerability analysis for x9.59 transaction work done in the mid-90s was based on analysis and experience with that original payment gateway as it was implemented on the basis of the session-oriented paradigm 4.
This newer work resulted in something that very few other protocols did -- defined end-to-end transactions with strong authentication. Many of the other protocols would leave the transaction naked and vulnerable at various steps in the processing. For example, session-oriented protocols would leave the entire transaction naked and vulnerable. In other words, the bytes that represent the transaction would not have a complete end-to-end strong authentication related to exactly that transaction, and therefore leave it naked and vulnerable for at least some part of the processing.
This then implies that the complete end-to-end business process has to be heavily armored and secured, and even minor chinks in the business armoring would then result in exposing the naked transaction to the potential for attacks and fraud.
If outsider attacks aren't enough, naked transactions are also extremely vulnerable to insider attacks. Nominally, transactions will be involved in a large number of different business processes, exposing them to insider attacks at every step. End-to-end transactions including strong authentication armors the actual transaction, and thus avoids leaving the transaction naked and vulnerable as it travels along a vast array of processing steps.
The naked transaction paradigm also contributes to the observation that something like seventy percent of fraud in such environments involves insiders. End-to-end transactions with strong authentication (armoring the actual transaction) then also alleviates the need for enormous amounts of total business process armoring. As we find it necessary to protect naked and vulnerable transactions, we inevitably find that absolutely no chinks in the armor can be allowed, resulting in expensive implications in the business processing - the people and procedures employed.
The x9a10 working group (for what became the x9.59 financial standard) was given the requirement to preserve the integrity of the financial infrastructure for all retail payments. This meant not only having countermeasures to things like replay attacks (static data that could be easily skimmed and resent), but also having end-to-end transaction strong authentication (eliminating the vulnerabilities associated with having naked and vulnerable transactions at various points in the infrastructure).
The x9.59 financial standard for all retail payments then called for armoring and protecting all retail transactions. This then implied the business rule that account numbers used in x9.59 transactions could not be used in transactions that didn't have end-to-end transaction strong authentication. This eliminated the problem with knowledge leakage; if the account number leaked, it no longer represents a vulnerability. I.e. an account number revealed naked was no longer vulnerable to fraudulent transactions 5.
Part of the wider theme on security proportional to risk is that if the individual transactions are not armored then it can be extraordinarily expensive to provide absolutely perfect total infrastructure armoring to protect naked and vulnerable transactions. Session-hiding cryptography especially is not able to absolutely guarantee that naked, vulnerable transactions have 100% coverage and/or be protected from all possible attacks and exploits (including insider attacks) 6.
There was yet another issue with some of the payment-oriented protocols in the mid-90s looking at providing end-to-end strong authentication based on digital signature paradigm. This was the mistaken belief in appending digital certificates as part of the implementation. Typical payment transactions are on the order of 60-80 bytes, and the various payment protocols from the period then appended digital certificates and achieved a payload bloat of 4k to 12k bytes (or a payload bloat of one hundred times). It was difficult to justify an enormous end-to-end payload bloat of one hundred times for a redundant and superfluous digital certificate, so the protocols tended to strip the digital certificate off altogether, leaving the transaction naked and vulnerable during subsequent processing.
My response to this was to demonstrate that it is possible to compress the appended digital certificates to zero bytes, opening the way for x9.59 transactions with end-to-end strong authentication based on digital signatures. Rather than viewing x9.59 as using certificate-less digital signatures for end-to-end transaction strong authentication, it can be considered that an x9.59 transaction appends a compressed zero-byte digital certificate to address the severe payload bloat problem 7.
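[Editor again: to make the zero-byte-certificate idea concrete, here is a minimal sketch using Ed25519 via PyNaCl; the field layout and key handling are invented for illustration. The signature travels with the small transaction, and the relying party verifies it against the public key already registered for the account -- no certificate is appended, so there is no bloat to strip off and nothing left naked.]

```python
from nacl.signing import SigningKey

account_key = SigningKey.generate()        # held by the account holder
registered_key = account_key.verify_key    # lodged with the bank when the account was opened

transaction = b"PAY;from=1234567890;to=0987654321;amount=42.00;seq=17"
signed = account_key.sign(transaction)     # 64-byte signature, zero bytes of certificate

# At the bank: look up the key by account number and verify exactly these bytes.
registered_key.verify(signed)              # raises BadSignatureError if altered anywhere en route
print(len(transaction), "byte transaction,", len(signed) - len(transaction), "bytes of overhead")
```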
To return briefly to Britain's Chip&Pin woes, consider that the issue of SDA (static data authentication vulnerable to replay attacks) or DDA (countermeasure to replay attacks) is somewhat independent of using a session-oriented implementation. That is, the design still leaves transactions naked and vulnerable at various points in the infrastructure.
Opera talks about security features in Opera 9. The good parts - they have totally rewritten their protocol engine, and:
3. We have disabled SSL v2 and the 40 and 56 bit encryption methods supported by SSL and TLS.
The somewhat silly part, they've added more warnings for use of 40 and 56 bit ciphers, but forgotten to warn about that popular zero-bit cipher called HTTP.
The slow part - it must have been two years since we first started talking about it to get to now: SSLv2 in Opera 9 disabled, and TLS Extensions enabled. (SNI itself is not mentioned. I forget where the other suppliers are on this question.)
Why so long?
2. TLS 1.1 and TLS Extensions are now enabled by default.
When we tested these features in Opera 7.60 TP in September 2004, our users found 100+ sites that would not work with either or both of these features. So why enable these features now, have we lost our minds?
The reason isn't that the problematic sites have disappeared. Unfortunately, they haven't. The reason we are activating these features now is that we have worked around the problems.
It makes me giddy, just thinking about those 100 sites ... So Opera backed off, just like the others. The reason that this is taking so long is that as far as your average browser goes, security is a second or third priority. Convenience is the priority. Opera's page, above, has lots of references to how they want to be fully standards compliant etc etc, which is good, but if you ask them about security, they'll mumble on about irrelevancies like bits in algorithms and needing to share information with other browsers before doing anything.
SSL v2, originally developed by Netscape, is more than 11 years old. It was replaced by SSL v3 (also developed by Netscape) in 1996. SSL v2 is known to have at least one major weakness. And considering the age of the servers that only support SSL v2, one can certainly wonder how secure the servers themselves are.

Some may wonder why disabling an outdated protocol is news. *Shouldn't these "features" have been removed long ago?*
I like standards as much as the next guy, and I use a browser that is proud of user experience. I even suggest that you use it for its incidental security benefits - see the Top Tips on the front page!
But when it comes to protecting people's online bank accounts, something has to break. Is it really more important to connect to a hundred or a thousand old sites than it is to protect users from security attacks from a thousand phishers or a hundred thousand attacks?
The fact is, as we found out when we tried to disable these methods during the 7.60 TP and 8.0 Beta testing, that despite both the new versions of the protocol and the SSL v2 protocol's security problems, there were actually major sites using it as their single available SSL version as recently as a year ago! It is actually only a few years since a major US financial institution upgraded their servers from SSL v2 to TLS 1.0. This meant that it was not practical from a usability perspective to disable the protocol.
Come on guys, don't pussyfoot around with user security! If you want to get serious, name names. Pick up the big stick and whack those sites. If you're not serious, then here's what is going to happen: government will move in and pass laws to make you liable. Your choice - get serious or get sued.
The connection between all this dancing around with TLS and end-user security is a bit too hard to see in simple terms. It is all to do with SNI - server name indication. This is a feature only available in TLS. As explained above, to use TLS, SSLv2 must die.
Once we get SNI, each Apache server can *share* a single IP number among TLS certificates for multiple sites. Right now, sharing TLS sites is clumsy and only suitable for the diehards at CAcert (this site is set up that way, and the certs stuff causes problems). This makes TLS a high-priced option - only for "e-commerce", not for the net masses. Putting each site over a separate IP# is just a waste of good sysadmin time and other resources, and someone has to pay for that.
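Mechanically, SNI is just the client naming the site it wants inside its TLS hello, so the server can pick the matching certificate before the handshake completes. A minimal sketch of the server side using Python's ssl module -- hostnames, port and certificate paths are made up:

```python
import socket
import ssl

# One context per virtual host, each loaded with its own certificate.
contexts = {}
for host in ("alice.example.com", "bob.example.com"):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(f"/etc/ssl/{host}.crt", f"/etc/ssl/{host}.key")
    contexts[host] = ctx

default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
default_ctx.load_cert_chain("/etc/ssl/default.crt", "/etc/ssl/default.key")

def pick_certificate(sslsocket, server_name, original_context):
    # server_name is what the client put in the server_name extension of its hello.
    if server_name in contexts:
        sslsocket.context = contexts[server_name]

default_ctx.sni_callback = pick_certificate

with socket.create_server(("", 8443)) as listener:
    with default_ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()   # the handshake serves the matching certificate
```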
(Do you think the commerciality of the equation might explain the laggardness of browser manufacturers here?)
With SNI, using TLS then has a goodly chance of becoming virtual - like virtual HTTP sites sometimes called VHosts. Once we start moving more web servers into TLS _by default_, we start to protect more and more ... and we shift the ground of phishing defence over to certificates. Which can defend against phishing but only if used properly (c.f. toolbars).
Yes, I know, there are too many steps in that strategy to make for easy reading. I know, we already lost the battle of phishing, and it has moved into the Microsoft OS as the battleground. I know, TLS is bypassed as a security system. And, I know, SSL was really only for people who could pay for it anyway.
Still, it would be nice, wouldn't it? To have many more sites using crypto by default? Because they can? Imagine if websites were like Skype - always protected? Imagine how much easier it would be to write software - never needing to worry about whether you are on the right page or not.
And if that's not convincing, consider this: what's happening in "other countries" is coming to you. Don't be the one to hold back ubiquitous, opportunistic crypto. You might be reading about RIP and massive datamining in the press today about other countries; in a decade or so you'll be reading about their unintended victims in your country, and you will know the part you played.
In other crypto news:
Microsoft buys SSL VPN vendor Whale
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9000608
Government to force handover of encryption keys
http://news.zdnet.co.uk/0,39020330,39269746,00.htm
UK law will criminalise IT pros, say experts
http://news.zdnet.co.uk/business/legal/0,39020651,39270045,00.htm
Great things going on over at FreeBSD. Last month was the surprising news that Java was now distro'd in binary form. Heavens, we might actually see Java move from "write once, run twice" to numbers requiring more than 2 bits, in our lifetimes. (I haven't tried it yet. I've got other things to do.)
More serious is the ongoing soap opera of security. I mean over all platforms, in general. FreeBSD still screams in my rankings (sorry, unpublished, unless someone blogs the secret link again, darnit) as #2, a nose behind OpenBSD for the top dog spot in the race for hard core security. Here's the story.
Someone cunning (Colin?) noticed that a real problem existed in the FreeBSD world - nobody bothers to update, and that includes critical security patches. That's right. We all sit on our haunches and re-install or put it off for 6-12 months at a time. Why?
Well, why's a tricky word, but I have to hand it to the FreeBSD community - if there is one place where we can find out, that's where it is. Colin Percival, security czar and general good sort, decided to punch out a survey and ask the users: why? Or, why not? We haven't seen the results of the survey, but something already happened:
Polite, professional, thoughtful debate.
No, really! It's unheard of on an Internet security forum to see such reasoned, considered discussion. At least, I've never seen it before, I'm still gobsmacked, and searching for my politeness book, long lost under the 30 volume set of Internet Flames for Champions, and Trollers Almanacs going back 6 generations.
A couple of (other) things came out. The big message was that the upgrade process was either too unknown, too complex, too dangerous, or just too scary. So there's a big project for FreeBSD sitting right there - as if they need another. Actually this project has been underway for some time, it's what Colin has been working on, so to say this is unrecognised is to short change the good work done so far.
But this one's important. Another thing that delicately poked its nose above the waterline was the contrast between the professional sysadmin and the busy other guy. A lot of people are using FreeBSD who are not professional sysadmins. These people haven't time to explore the arcana of the latest tool's options. These people are impressed by Apple's upgrade process - a window pops up and asks if it's a good time, please, pretty please? These people not only manage a FreeBSD platform or 10, but they also negotiate contracts, drive buses, organise logistics, program big apps for big iron, solve disputes with unions and run recruiting camps. A.k.a., business people. And in their lunchbreaks, they tweak the FreeBSD platforms. Standing up, mouth full.
In short, they are gifted part-timers. Or, like me, trained in another lifetime. And we haven't the time.
So it is no longer - I suggest - acceptable for the process of upgrades and installs to be seriously technical. Simplification is called for. The product is now in too many places, too many skill sets and too many critical applications to demand a tame, trained sysadmin full time, right time.
Old hands will say - that's the product. It's built for the expert. Security comes at a cost.
Well, sort of - in this case, FreeBSD is hoisted on its own petard. Security comes at a risk-management cost. FreeBSD happens to give the best compromise for the security minded practitioner. I know I can install my machine, not do a darn thing for 6 months, and still be secure. That's so valuable, I won't even bother to install Linux, let alone look up the spelling of whatever thing the Microsoft circus are pushing this month. I install FreeBSD because I get the best security bang for buck: No necessary work, and all the apps I can use.
Which brings us to another thing that popped out of the discussion - every one of the people who commented was using risk management. Seriously! Everyone was calculating their risk of compromise versus work put in. There is no way you would see that elsewhere - where the stark choice is either "you get what you're given, you lucky lucky microsoft victim" all the way across to the more colourful but unprintable "you will be **$ secure if you dare *@$*@^# utter the *#&$*#& OpenBSD install disk near your (*&@*@! machine in vain."
Not so on FreeBSD. Everyone installs, and takes on their risks. Then politely turns around and suggests how it would be nice to improve the upgrade process, so we can ... upgrade more frequently than those big anniversaries.
(If you are not a cryptoplumber, the following words will be indistinguishable from random... that might be a good thing!)
When Zooko and I created the SDP1 layout (for "Secure Datagram Protocol #1"), avoiding traffic analysis was not one of the requirements. So much so that we explicitly left all of that out.
But, the times, they are a-changing. SDP1 is now in use in one app full-time, and 3 other apps are being coded up. So I have more experience in how to drive this process, and not a little idea about how to inform a re-design.
And the war news is bleak, we are getting beaten up in Eurasia. Our boys over there sure could use some help.
So how to avoid traffic analysis? A fundamental way is to be indistinguishable from random, as that reveals no information. Let's revisit this and see how far we can get.
One way is to make all packets around the same size, and same frequency. That's somewhat easy. Firstly, we expand out the (internal) Pad to include more mildly random data so that short packets are boosted to the average length. There's little we can do about long packets, except break them up, which really challenges the assumptions of datagrams and packet-independence.
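As a sketch of that padding trick - the 512-byte target and the two-byte inner length prefix are assumptions for illustration, not the actual SDP1 Pad layout:

    import os

    TARGET = 512  # assumed "average" datagram length, for illustration only

    def pad_to_target(payload: bytes, target: int = TARGET) -> bytes:
        # Prefix the real payload with its length, then fill out to the target
        # with mildly random bytes so short packets and average packets look alike.
        filler = os.urandom(max(0, target - len(payload) - 2))
        return len(payload).to_bytes(2, "big") + payload + filler

    def unpad(padded: bytes) -> bytes:
        n = int.from_bytes(padded[:2], "big")
        return padded[2:2 + n]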
Also, we can statistically generate some chit chat to keep the packets ticking away ... although I'd personally hold short of the dramatically costly demands of the Freedom design, which asked you to devote 64 or 256k permanent on-traffic to its cause. (A laughable demand, until you investigate what Skype is doing, now, successfully, on your computer this very minute.)
But a harder issue is the outer packet layer as it goes over the wire. It has structure, so someone can track it and learn information from it. Can we get rid of the structure?
The open network layout consists of three parts - a token, a ciphertext and a MAC.
Token | . . . enciphered . . . text . . . | SHA1-HMAC |
Each of these is an array, which itself consists of a leading length field followed by that many bytes.
Length Field | . . . many . . . bytes . . . length . . . long . . . |
(Also, in all extant systems, there is a leading byte 0x5D that says "this is an SDP1, and not something else." That is, the application provides a little wrapping because there are cases where non-crypto traffic has to pass.)
5D | Token | . . . enciphered . . . text . . . | SHA1-HMAC |
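To pin the layout down, here's a sketch of that wire format in Python. The single length byte is an assumption for illustration; the text above only says each array carries a leading length field.

    def put_array(buf: bytearray, data: bytes) -> None:
        # Each array is a leading length field followed by that many bytes.
        # A one-byte length is assumed here purely for illustration.
        assert len(data) < 256
        buf.append(len(data))
        buf += data

    def sdp1_wire(token: bytes, ciphertext: bytes, mac: bytes) -> bytes:
        buf = bytearray([0x5D])       # "this is an SDP1, and not something else"
        put_array(buf, token)         # names the cryptographic context
        put_array(buf, ciphertext)    # the enciphered text
        put_array(buf, mac)           # SHA1-HMAC over the ciphertext
        return bytes(buf)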
Those arrays were considered necessary back then - but today I'm not so sure. Here's the logic.
Firstly the token creates an identifier for a cryptographic context. The token indexes into the keys, so it can be decrypted. The reason that this may not be so necessary is that there is generally another token already available in the network layer - in particular the UDP port number. At least one application I have coded up found itself having to use a different port number (create a new socket) for every logical channel, not because of the crypto needs, but because of how NAT works to track the sender and receiver.
(This same logic applies to the 0x5D in use.)
Secondly, skip to the MAC. This is in the outer layer - primarily because there is a paper (M. Bellare and C. Namprempre, "Authenticated Encryption: Relations among notions and analysis of the generic composition paradigm", Asiacrypt 2000, LNCS 1976) that advises this. That is, SDP1 uses Encrypt-then-MAC mode.
But it turns out that this might have been an overly conservative choice. Earlier fears that MAC-then-Encrypt mode was insecure may have been overdone. That is, if due care is taken, then putting the MAC inside the cryptographic envelope could be strong. And thus eliminate the MAC as a hook to hang some traffic analysis on.
So let's assume that for now. We ditch the token, and we do MAC-then-encrypt. Which leaves us with the ciphertext. Now, because the datagram transport layer - again, UDP typically - will preserve the length of the content data, we do not need the array constructions that tell us how long the data is.
Now we have a clean encrypted text with no outer layer information. One furphy might have been that we would have to pass across the IV, as is normally done in crypto protocols. But not in SDP1, as this is covered in the overall "context" - the session management that manages the keys structure also manages the IVs for each packet. By design, there is no IV passing needed.
Hey presto, we now have a clean encrypted datagram which is indistinguishable from random data.
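As a sanity check on the argument, here is a minimal sketch of that stripped-down datagram: no token (the UDP socket already names the context), MAC-then-encrypt, no length fields (UDP preserves them), and the per-packet IV derived from shared session state rather than sent on the wire. The key sizes, AES-CTR, HMAC-SHA1, the IV derivation and the sequence handling are my illustrative assumptions, not a new SDP specification - sequence tracking is left to the session management described above.

    import hashlib
    import hmac
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    class SessionContext:
        """Shared state both ends already hold; nothing here travels on the wire."""

        def __init__(self, enc_key: bytes, mac_key: bytes, iv_seed: bytes):
            self.enc_key = enc_key      # 32 bytes assumed (AES-256)
            self.mac_key = mac_key
            self.iv_seed = iv_seed

        def _iv(self, seq: int) -> bytes:
            # Both ends derive the per-packet IV from the session, so no IV
            # (and no structure) appears in the datagram itself.
            return hashlib.sha256(self.iv_seed + seq.to_bytes(8, "big")).digest()[:16]

        def seal(self, seq: int, plaintext: bytes) -> bytes:
            mac = hmac.new(self.mac_key, plaintext, hashlib.sha1).digest()
            enc = Cipher(algorithms.AES(self.enc_key), modes.CTR(self._iv(seq))).encryptor()
            # MAC-then-encrypt: the whole datagram is ciphertext, and so
            # indistinguishable from random bytes to a watcher on the wire.
            return enc.update(plaintext + mac) + enc.finalize()

        def open(self, seq: int, wire: bytes) -> bytes:
            dec = Cipher(algorithms.AES(self.enc_key), modes.CTR(self._iv(seq))).decryptor()
            recovered = dec.update(wire) + dec.finalize()
            plaintext, mac = recovered[:-20], recovered[-20:]
            expected = hmac.new(self.mac_key, plaintext, hashlib.sha1).digest()
            if not hmac.compare_digest(mac, expected):
                raise ValueError("MAC failure - drop the datagram")
            return plaintext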
Am I right? At the time, Zooko and I agreed it couldn't be done - but now I'm thinking we were overly cautious about the needs of encrypt-then-MAC, and the need to identify each packet coming in.
(Hat tip to Todd for pushing me to put these thoughts down.)
Someone blogged a draft essay of mine on the limits of reliability in connections. As it is now wafting through the blog world, I need to take remedial action and publish it. It's more or less there, but I reserve the right to tune it some :)
When is a reliable connection not a reliable connection? The answer is - when you really truly need it to be reliable. Connections provided by standard software are reliable up to a point. If you don't care, they are reliable enough. If you do care, they aren't as reliable as you want.
In testing the reliability of the connection, we all refer to TCP/IP's contract and how it uses the theory of networking to guarantee the delivery. As we shall see, this is a fallacy of extension and for reliable applications - let's call that reliability engineering - the guarantee is not adequate.
And, it specifically is not adequate for financial cryptography. For this reason, most reliable systems are written to use datagrams, in one sense or another. Capabilities systems are generally done this way, and all cash systems are done this way, eventually. Oddly enough, HTTP was designed with datagrams, and one of the great design flaws in later implementations was the persistent mashing of the clean request-response paradigm of datagrams over connections.
To summarise the case for connections being reliable and guaranteed, recall that TCP has resends and checksums. It has a window that monitors the packet reception, orders them all and delivers a stream guaranteed to be ordered correctly, with no repeats, and what you got is what was sent.
Sounds pretty good. Unfortunately, we cannot rely on it. Let's document the reasons why the promise falls short. Whether this affects you in practice depends, as we started out, on whether you really truly need reliability.
Mark posts this story by Mike Williams to risks. Recorded here as a nice case study in unique transaction numbers.
I used to work on the S.W.I.F.T. payments system, and even that wasn't 100% perfect at eliminating duplicates and spotting omissions. In the many years that I worked with the system, we had one situation where everybody followed the rules, and yet a payment for ten million pounds got lost.
It all started when an operator at Bank A mistyped 10,000,000 instead of 20,000,000 on the initial payment. The error was spotted pretty quickly - banks have systems in place for double checking the total amount that gets paid in and out.
The operator could have sent a cancellation for 10,000,000 and a new payment for 20,000,000 and all would have been well, but cancellations can take days to process and someone would have to pay the overnight interest. What actually happened was that they sent a second payment for the remaining 10,000,000.
Now Bank A happened to use a system whereby the SWIFT Transaction Sequence Number is the same as the initial paperwork that caused the payment to be made, so the two payment messages were sent with the same TSN, the same amount, date, payer and payee. In fact the two payment messages were identical. (My bank didn't work like that, my programs always used a unique TSN, but that's partly because I wanted to use our TSN as a unique index on our outgoing files to make the coding simpler).
Unfortunately, at some point in its journey round the world, the initial payment hit a comms glitch. These were the days when electronic data communications were far less reliable than they are now. The relay station didn't get a confirmation ("ACK") so it sent a copy of the message with a "PDS" marker (Possibly duplicated by SWIFT network).
When the payments arrived at Bank B, they got passed to one of my programs that checks the possible duplicates. Because the payments were 100% identical, and one of them was flagged "PDS", that payment was dumped into the "real duplicate" file.
Mike Williams, Gentleman of Leisure [Forwarded to RISKS by Mark Brader]
A day rarely passes in the crypto community where people do not preach that you should use standard protocols for all your crypto work. Many systems have foundered on this advice, something I tried to explain in more depth in the GP rants. Thankfully, not Skype. They wrote the whole thing from scratch, and they did it well.
Arguably the world's most successful crypto application (with approximately 5 million people enjoying its protection right now), it had to run the gauntlet of full industry skepticism for doing the cryptoplumbing thing on its own.
I earlier wrote that even if they bungled the crypto protocol, they still did the right thing. Philipp pointed me at some work from a few months back that claims their protocols have been audited and are relatively A-OK. Even better!
The designers of Skype did not hesitate to employ cryptography widely and well in order to establish a foundation of trust, authenticity, and confidentiality for their peer-to-peer services. The implementers of Skype implemented the cryptographic functions correctly and efficiently. As a result, the confidentiality of a Skype session is far greater than that offered by a wired or wireless telephone call or by email and email attachments.
So wrote Tom Berson in "Skype Security Evaluation," a name I've not come across. His analysis is worth reading if you are into reading up on cryptoprotocols for fun and profit. Although he doesn't reveal the full story, he reveals enough to know what they are up to at the crypto level, making up somewhat for the absence of open source. Here are some observations on his observations, spiced up with two other pieces of research listed below.
The nymous identity creation is more or less the same as SOX, with a CA layered over the top. That is, the client creates the key and registers it with a pseudonym at the central server. The CA then signs that key, presumably making a statement that the pseudonym is unique in the Skype space.
I'm not entirely sure the addition of the CA is worth the cost. Given what we know about petnaming and so forth, and the fact that it opens up the vulnerability of the TTP MITMs, this appears to be a weakspot in the protocol - if Skype are going to be listening, then this is where they are going to do it. The weakness was identified by the Blackhat presentation (see below) and the Blackhat guys also claim that it is possible to set up a separate net and trick users into that net - not good news if true, and an indictment on the use of CAs over more modern constructs if it can't stop a darknet.
The key exchange is not entirely described. Both sides exchange their certificates and can then encrypt and sign to each other. They exchange 128 random bits each and combine this into the shared key of 256 bits - makes sense given the other assumptions. Before that, however, they do this, which I did not understand the point of:
To protect against playback, the peers challenge each other with random 64-bit nonces, and respond by returning the challenge, modified in a standard way, and signed with the responder’s private signing key.
How can there be a replay unless both sides have clagged PRNGs and are generating the same 128 bit inputs each time? The effect of this is, AFAICS, to duplicate the key exchange process by exchanging nonces ... but then throw that useful key entropy away! If you can explain this, please do so.
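For what the published description does say, here is a sketch of the 128 + 128 = 256 bit combination. Plain concatenation in an agreed order is my assumption for illustration, since Berson does not publish the actual combining function:

    import os

    def contribute() -> bytes:
        # Each peer contributes 128 fresh random bits (sent under the other's key).
        return os.urandom(16)

    def session_key(caller_half: bytes, callee_half: bytes) -> bytes:
        # Both sides must combine in the same agreed order; concatenation is
        # assumed here for illustration, yielding the 256-bit session key.
        assert len(caller_half) == len(callee_half) == 16
        return caller_half + callee_half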
The data stream is encrypted by XORing the sound onto the output of an AES algorithm running in a stream generation mode. I'm not sure why this is done. My first guess is that any data corruption is self-correcting; a useful property in phones as you can just drop the bad data. But checksums over the packets seem to also cover that. Alternatively, it might be that it results in rather small amounts of packet expansion. (My own efforts at SDP1, with Zooko, resulted in significant expansion of packets, something I find annoying, but acceptable.) (I should note that the cryptophone.de design XORs the sound with *two* cipher streams, in case one is considered dodgy.)
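A sketch of that stream construction, with AES run in counter mode as the keystream generator. The mode and the 16-byte counter block layout are assumptions on my part, as Berson describes the arrangement only loosely:

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def xor_with_keystream(key: bytes, counter_block: bytes, audio_frame: bytes) -> bytes:
        # CTR mode turns AES into a keystream generator; encryption is a plain XOR
        # of that keystream onto the audio, so the same call also decrypts, and a
        # dropped or corrupted packet damages only itself.
        stream = Cipher(algorithms.AES(key), modes.CTR(counter_block)).encryptor()
        return stream.update(audio_frame) + stream.finalize()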
Other plus points - the Skype engineers wisely chose their own key formats, a device that pays off by reducing the amount of code needed dramatically, and reduces dependencies on outside formats like X.509 and ASN.1. Minus points appear to be in the complexity of the use of TCP and UDP, and a lot of duplicated packet flows. This is brought out more in the other presentations though.
In closing, Tom writes:
4. The Bottom Line
I started as a skeptic. I thought the system would be easy to defeat. However, my confidence in the Skype grows daily. The more I find out about it, the more I like. In 1998 I observed that cryptography was changing from expensive to cheap, from arcane to usual, from difficult to easy, from scarce to abundant. My colleagues and I tried to predict what difference, if any, abundant cryptography might make in the world. What new forms of engineering, business, economics, or society would be possible? We did not predict Skype. But, now that I am coming to know it well, I recognize that Skype is an early example of what abundant cryptography can yield.
I don't think it is quite that rosy, but it is on the whole good.
Juicy and more Skeptical Addendums:
The Nokia 770 has just been released ... without a phone or a camera (photo taken from an Ericsson). But it does have a full-model Linux install and can be used for all things Unix.
(And, a VoIP phone is coming soon, they say, in the "2006 Edition." I wonder if Nokia reads the news...)
Seriously though, the nexus with FC is more than reading this morning's other blog entry - we want our own pocket PC so we can stick on our secure online payments program. Either a completely secure browser environment for online banking or WebFunds for secure payments. Unfortunately, the small model Java Mobile isn't worth the bother to rewrite all the apps.
Often people expect your transaction systems to be scaleable, fast and capable of performing prodigious feats. Usually these claims reduce down to "if you're not using my favourite widget" then you are somehow of minimal stature. But even when exposed as a microdroid sales tactic, it is very hard to argue against someone who is convinced that a system just must use the ACME backend, or it's not worth spit.
One issue is transaction flow. How many transactions must you handle? Here's some numbers, suitable for business plans or just slapping on the table as if your plan depended on it. Please note that this is a fairly boring post - only read it when you need some numbers for your b-plan. I'll update it as new info comes in.
Earlier Wednesday, the Tokyo exchange had issued a warning it would stop trading if the system capacity limit of 4 million transactions was reached. As it reached 3.5 million about an hour before the session's close, it announced it would stop trading 20 minutes early.
According to the 2004 fourth-quarter report issued by Western Union's parent company, First Data Corp., an estimated 420 Western Union transactions occur every minute each day -- amounting to an average of seven transfers every second of the year.
Craig's stellar plotting system for e-gold's transaction flow gives us loads of 34k per day last I looked. Also Fee Income.
eBay's annual report for 2005 reported that "PayPal processed an average of approximately 929,000 transactions per day during 2004." There are contradictory numbers here and here: Total number of payments grew to 117.4 million, up 41 percent year over year and 4 percent vs. the prior quarter. This would imply 320k transactions per day, but we don't expect Paypal to be accurate in filings to the SEC.
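The 320k figure is just that annual total spread over a calendar year; a quick back-of-envelope check, assuming 365 days:

    payments_per_year = 117_400_000
    per_day = payments_per_year / 365
    per_second = per_day / 86_400
    print(f"{per_day:,.0f} per day, {per_second:.1f} per second")
    # roughly 321,644 per day, or about 3.7 per second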
Payment News reports:
Robin Sidel reports for the Wall St. Journal on how credit card issuers are now pursuing the market for smaller payments less than $5: ... The market for transactions valued at less than $5 accounted for $1.32 trillion in consumer spending in 2003, representing more than 400 billion transactions, according to research by TowerGroup, a unit of MasterCard International Inc. By comparison, Visa processes a total of about $2 trillion of global transactions each year.
During the busiest hour on December 23 [2005], Visa processed an average of 6,363 transaction messages each second. That's a 14 percent increase over the average of 5,546 transaction messages per second Visa processed during the peak hour on December 24, 2004. Consider that Visa's payment network, VisaNet, can process more transactions over the course of a coffee break than all the stock exchanges of the world in an entire day.
Nice quip! I'll check that out next time I get my exchange running. In closing, let's let Lynn Wheeler have the last word. He reports that the old white elephant of transaction processing, SET, performed like this:
...If even a small percentage of the 2000 transactions/sec that typically go on were to make the transition to SET, the backend processing institution would have to increase their legacy computational processing capability by three orders of magnitude. The only way that SET could ever be successful was if it totally failed, since the backend processing couldn't build out to meet the SET computational requirements. It was somewhat satisfying to see the number of people that the thot stopped in their tracks. The best case demo of SET at a show a year later was on an emulated processing environment with totally idle dedicated boxes. The fastest that they could get was 30 seconds elapsed time, with essentially all of that being RSA crypto computational processing. Now imagine a real-world asymmetric environment that is getting 1000 of those a second. My statement was that a realistic backend processing configuration would require on the order of 30,000 dedicated machines just doing the RSA crypto processing.
There were then remarks from the workstation vendors on the prospect of selling that many machines, and somebody from RSA about selling that many licenses. From then on my description that SET was purely an academic toy demo, since nobody could ever take it as a serious proposal with operational characteristics like that.
private email, 14th December 2005. Note how he's talking about the mid 90's, using 10 year old figures for credit card processing networks like Visa.
In a 2005 document entitled Trends and Attitudes in Information Security that someone sent to me, RSA Security, perhaps the major company in the security world today, surveys users in 4 of the largest markets and finds that most know about identity theft, and most are somewhat scared of ecommerce today. (But growth continues, so it is not all doom and gloom.)
This is an important document so I'll walk through it, and I hope you can bear with me until we get to the important part. As we all know all about identity theft, we can skip to the end of that part. RSA concludes its longish discussion on identity theft with this gem:
Conclusion
Consumers are, in many respects, their own worst enemies. Constantly opening new accounts and providing personal information puts them at risk. Ally this to the naturally trusting nature of people and it is easy to see why Man-in-the-middle attacks are becoming increasingly prevalent. The next section of this e-Book takes a closer look at these attacks and considers how authentication tokens can be a significant preventative.
Don't forget to blame the users! Leaving that aside, we now know that MITM is the threat of choice for discerning security companies, and it's on the rise. I thought that last sentence above was predicting a routine advertisement for RSA tokens, which famously do not cover the dynamic or live MITM. But I was wrong, as we head into what amounts to an analysis of the MITM:
9. Offline [sic] Man-in-the-Middle attack
With online phishing, the victim receives the bogus e-mail and clicks through to the falsified Web site. However, instead of merely collecting rapidly changing passwords and contact information, the attacker now inserts himself in the middle of an online transaction stream. The attacker asks for and intercepts the user’s short-time-window, onetime password and stealthily initiates a session with the legitimate site, posing as the victim and using the victim’s just-intercepted ID and OTP.
Phishing is the MITM. More importantly, the hardware tokens that are the current rage will not stop the realtime attack, that which RSA calls "online phishing." That's a significant admission, as the RSA tokens have a lot to do with their current success (read: stock price). The document does not mention the RSA product by name, but that's an understan