May 22, 2007

No such thing as provable security?

I have a lot of skepticism about the notion of provable security.

To some extent this is just efficient hubris -- I can't do it, so it can't be any good. Call it chutzpah if you like, but there is a little more to it than egotism: if I can't do it, that generally signals that businesses will have a lot of trouble dealing with it. Not because there aren't enough people better than me, but because, if those who can do it cannot explain it to me, then they haven't got much of a chance of explaining it to the average business.

Added to that, there has been a steady stream of "proofs" that have been broken, and "proven systems" that have been bypassed. If you look at it from a scientific, investigative point of view, the proof generally only works because the assumptions are so constrained that they eventually leave the realm of reality, and that is a particularly dangerous thing to do in security work.

Added to all that: the ACM is awarding its Gödel Prize for a proof that there is no proof:

In a paper titled "Natural Proofs" originally presented at the 1994 ACM STOC, the authors found that a wide class of proof techniques cannot be used to resolve this challenge unless widely held conventions are violated. These conventions involve well-defined instructions for accomplishing a task that rely on generating a sequence of numbers (known as pseudo-random number generators). The authors' findings apply to computational problems used in cryptography, authentication, and access control. They show that other proof techniques need to be applied to address this basic, unresolved challenge.

The findings of Razborov and Rudich, published in a journal paper entitled "Natural Proofs" in the Journal of Computer and System Sciences in 1997, address a problem that is widely considered the most important question in computing theory. It has been designated as one of seven Prize Problems by the Clay Mathematics Institute of Cambridge, Mass., which has allocated $1 million for solving each problem. It asks - if it is easy to check that a solution to a problem is correct, is it also easy to solve the problem? This problem is posed to determine whether questions exist whose answer can be quickly checked, but which require an impossibly long time to solve.

The paper proves that there is no so-called "Natural Proof" that certain computational problems often used in cryptography are hard to solve. Such cryptographic methods are critical to electronic commerce, and though these methods are widely thought to be unbreakable, the findings imply that there are no Natural Proofs for their security.
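
The "quickly checked, impossibly long to solve" distinction in the quote above is easy to make concrete with a toy subset-sum instance: checking a proposed answer is one line of arithmetic, while the obvious way to find one tries every subset. (A sketch of the flavour only; whether that gap is provable is exactly what the prize-winning result speaks to.)

    from itertools import combinations

    numbers = [3, 34, 4, 12, 5, 2]
    target = 9

    def check(candidate):
        # Verifying a proposed solution is trivial: sum it and confirm each
        # element is drawn from the list.
        return sum(candidate) == target and all(x in numbers for x in candidate)

    def solve():
        # The obvious search tries every subset: 2^n work as the list grows.
        for r in range(len(numbers) + 1):
            for combo in combinations(numbers, r):
                if sum(combo) == target:
                    return combo
        return None

    print(check((4, 5)))   # True -- one addition to verify
    print(solve())         # (4, 5) -- found only by trying subsets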

If there is indeed no Natural Proof of security, that counts as a plus point for risk management, and a minus point for the school of no-risk security. However hard you try, any system you put in place will have some chance of falling flat on its face. Deal with it; the savvy financial cryptographer puts in place a strong system, then moves on to addressing what happens when it breaks.

The "Natural Proofs" result certainly matches my skepticism, but I guess we'll have to wait for the serious mathematicians to prove that it isn't so ... perhaps by proving that it is not possible to prove that there is no proof?

Posted by iang at 08:21 AM | Comments (3) | TrackBack

May 21, 2007

Choose your hatchet: when governance models collide

As mentioned, I advised e-gold on governance models way back when, and now we can see how the company deals with its relationship to the US government. Someone has posted a video over on YouTube of some 2006 testimony before Senate hearings on child pornography, wherein governance models are much discussed.

The video looks like hatchet job versus hatchet job, as governance of Internet child pornography collides with governance over payment integrity.

There is a widespread belief that the case against e-gold is likely to be fought on the battleground of public opinion and regulated insiders, rather than that of law and public policy. Same as it ever was, perhaps, but the wider question for future FCers is what to do about it.

The governance models that were provided to e-gold were relatively sound, albeit incompletely implemented. No matter the incompleteness, the models were strong enough to preserve the gold for many years, at least to the extent that the recent court-ordered seizure found gold there to seize. That is proof of a sort that some gold existed, although the victims of the seizure will have other choice words.

But those governance models are designed in general to deal with routine fraud of an inside nature. They are not designed to deal with the sort of difficulties facing e-gold. How then to do better in the future?

I see three lessons here for FCers at the governance layer.

1. A lot depends on the contract Ivan has with his users. We might say that an issuer should have a clear contract with its users. And, in that contract we might expect to find certain relationships with governments, law enforcement, regulators and other interested parties.

The question then would be to see whether these relationships were sustainable, stable and reasonable. In the case of e-gold, they were somewhat unbalanced at least due to its nominal offshore status, so e-gold ended up serving two or three jurisdictions instead of one. Complication, not simplification, without any offsetting protection.

2. Normal business process is to arbitrage existing structures and regulatory postures, but this is not to say that sustainable business includes simply facilitating crime. There is some grave doubt as to whether child pornography was more severe within the e-gold system than in classical banking, but there is much less doubt about ponzi schemes and pyramids. e-gold seems to have permitted these to a far greater extent than desirable, and that was unlikely to make friends in the long run.

The need, then, might be interpreted as creating a business process that delivers an overwhelming good without delivering an overwhelming bad. Truly a matter of judgement, but an easy call might be to keep clear of the more popular crimes.

3. Finally, a reasonable and sustainable dispute system is required. Neither Paypal nor e-gold achieved this, and indeed the banks are widely criticised for it (or its absence). The only payment system that seems to have achieved this is WebMoney, although the story takes on fairy tale proportions due to the lack of English documentation.

Either way, the e-gold dispute resolution system is coming in for a hammering, so this aspect should be noted well by the FC community. As a matter of record, alternative dispute resolution (ADR) was well studied by the e-gold founders, who gave speeches at conferences on subjects such as arbitration, but the study did not apparently transfer into implementation.

Posted by iang at 10:02 AM | Comments (0) | TrackBack

When to bolt on the security afterwards...

For some obscure reason, this morning I ploughed through the rather excellent but rather deep tome of Peter Gutmann's Cryptographic Security Architecture - Design and Verification (or at least an older version of chapter 2, taken from his thesis).

He starts out by saying:

Security-related functions which handle sensitive data pervade the architecture, which implies that security needs to be considered in every aspect of the design, and must be designed in from the start (it’s very difficult to bolt on security afterwards).

And then spends much of the chapter showing why it is very difficult to design it in from the start.

When, then, to design security in at the beginning and when to bolt it on afterwards? In my Hypotheses and in the GP essays I suggest it is impractical to design the security in up-front.

But there still seems to be a space where you do exactly that: design the security in up-front. If Peter G can write a book about it, if security consultants take it as unquestionable mantra, and if I have done it myself, then we need to bring these warring viewpoints closer to defined borders, if not actual peace.

Musing on this, it occurs to me that we design security up front when the mission is security, and not when it is not. What this means is open to question, but we can tease out some clues.

A mission is that which when you have achieved it, you have succeeded, and if you have not, you have failed. It sounds fairly simple when put in those terms, and perhaps an example from today's world of complicated product will help.

For example, a car. Marketing demands back-seat DVD players, online Internet, hands-free phones, integrated interior decorative speakers, two-tone metallised paint and go-faster tail trim. This is really easy to do, unless you are trying to build a compact metal box that also has to get 4 passengers from A to B. That is, the mission is transporting the passengers, not their entertainment or social values.

This hypothesis would have it that we simply have to divide the world's applications into those where security is the mission, and those where some other mission pertains.

E.g., with payment systems, we can safely assume that security is the mission. A payment system without security is an accounting system, not a payment system. Similar logic with an Internet server control tool.

With a wireless TCP/IP device, we cannot be so dismissive; an 802.11 wireless internet interface is still good for something if there is no security in it at all. A wireless net without security is still a wireless net. Similar logic with a VoIP product.

(For example, our favourite crypto tools, SSH and Skype, fall on opposing sides of the border. Or see the BSD's choice.)

So this speaks to requirements; a hypothesis might be that, in the requirements phase, you first establish your mission. If your mission speaks to security, then design the security in up front. If your mission speaks to other things, then bolt the security on afterwards.

Is it that simple?

Posted by iang at 07:01 AM | Comments (5) | TrackBack

May 17, 2007

The Myth of the Superuser, and other frauds by the security community

The meme is starting to spread. It seems that the realisation that the security community is built on self-serving myths leading to systemic fraud has now entered the consciousness of the mainstream world.

Over on the Volokh Conspiracy, Paul Ohm exposes the Myth of the Superuser. His view is that, too often, the Superuser is cast as an uber-attacker of overpowering ability. Once this sense enters the security agenda, the belief that there is this all-powerful evil criminal mastermind out there, watching and waiting, leads us into dangerous territory.

Ohm does not make the case that they do not exist, but that their effect or importance is greatly exaggerated. I agree, and this is exactly the case I made for the MITM. In brief, the man-in-the-middle is claimed to be out there lurking, and we must protect against him, at any cost. Wrong on all counts, and the result is a security disaster called phishing, which is itself an MITM.

Then, phishing can be interpreted as a result of our obsession with Ohm's Superuser, the uber-attacker. In part, at least, and I'd settle for running the experiment without the uber-obsession. Specifically, Ohm points at some bad results he has identified:

Very briefly, in addition to these two harms — overbroad laws and civil liberties infringements — the other four harms I identify are guilt by association (think Ed Felten); wasted investigative resources (Superusers are expensive to catch); wasted economic resources (how much money is spent each year on computer security, and is it all justified?); and flawed scholarship (See my comment from yesterday about DRM).

All of which we can see, and probably agree on. What makes this essay stand out is that he goes the extra mile and examines what the root causes might be:

I have essentially been saying that we (policymakers, lawyers, law professors, computer security experts) do a lousy job calculating the risks posed by Superusers. This sounds a lot like what is said elsewhere, for example involving the risks of global warming, the safety of nuclear power plants, or the dangers of genetically modified foods. But there is a significant, important difference: researchers who study these other risks rigorously analyze data. In fact, their focus on numbers and probabilities and the average person’s seeming disregard for statistics is a central mystery pursued by many legal scholars who study risk, such as Cass Sunstein in his book, Laws of Fear.

In stark contrast, experts in the field of computer crime and computer security are seemingly uninterested in probabilities. Computer experts rarely assess a risk of online harm as anything but, “significant,” and they almost never compare different categories of harm for relative risk. Why do these experts seem so willing to abdicate the important risk-calculating role played by their counterparts in other fields?

Does that sound familiar? To slide into personal anecdote, consider the phishing debate (the real one back in 2004 or so, not the current phony one).

When I was convinced that we had a real problem, and people were ignoring it, I reasoned that the lack of a scientific approach was what was holding people back from accepting the evidence. So I started collecting numbers on costs, breaches, and so forth (you'll see frequent posts on the blog, and also mail postings around), and pushing these numbers out there so that we had some grounding in what we were talking about.

What happened? Nothing. Nobody cared. I was able around 2004 to state that phishing already cost the USA about a billion dollars a year, and sometime shortly after that, that basically all data was compromised. In fact, I'm not even sure when we passed these points, because ... it's not worth my trouble to even go back and identify it!

Worse than nobody caring, the security field simply does not have the conceptual tools to deal with this. It is a little like "everyone was in denial", but worse: there is a collective glazed view of the whole problem.

What's going on? Ohm identifies 4 explanations (in point form here, but read his larger descriptions):

  1. Pervasive secrecy...
  2. Everyone is an Expert...
  3. Self-Interest...
  4. The Need for Interdisciplinary Work...

No complaint there! Readers will recognise those frequent themes, and we could probably collectively get it to a list of 10 explanations without too much mental sweat.

But I would go further. Deeper. Harsher.

I would suggest that there is one underlying cause, and it is structural. It is because security is a market for silver bullets, in the deep and economic sense explained in that long paper. All of the above arise, in varying degrees, in the market first postulated by Michael Spence.

The problem with this is that it forces us to face truths that few can deal with. It asks us to choose between the awful grimness of helplessness and the temptation of the dark side of security fraud, before we can enter any semblance of a professional existence. Nobody wants to go there.

Posted by iang at 08:21 AM | Comments (7) | TrackBack

May 15, 2007

And now the phoney war on cash (a.k.a., give us another subsidy, ma!)

Dave commented on the "war on cash" ... and Adam picked up on that. Now, that sounds like FC! For someone who once had something to do with the Financial Cryptography community, Adam has a strange comment:

Having the government provide a means for a reasonable functioning economy, and removing the costs of worrying about the gold content of a coin, or the solvency of DavidBucks adds huge efficiencies. There's quite a few things that I'd take the government out of before I took them out of coining currency. (Know thy customer regulations, for example.)

Well... to separate out some issues. Private issuance of money has a long and powerful history. Although the evidence is not entirely a slam dunk, for the most part the jury is in on this question. The envelope, please:

Where private money fails as an industry, it is because of government interference.

The US free banking tradition is the clearest example, in that several different areas had different results. The Scottish free banking tradition has the best history, with a century or more of solid gains, only to be finally destroyed by the English, who had already lost free banking to the long, dark and dismal history of the Bank of England.

There is, while we are engaged in this old pub topic, one flaw to free banking that no-one can figure out: it, and government currencies backed by gold reserves, tend to fail in the face of total war. There have been approximately three of these -- the US Civil War, WWI, and WWII -- and each resulted in destroyed financial systems as governments raided them for value.

On to McKinsey's comments on Dave's blog:

Cash needs to be priced appropriately. The fact is that, today, the pricing of cash is not in line with its costs. Consumers and merchants in most countries do not pay the real cost of cash, and so merchants and consumers have no reason to reduce their use of cash. One problem is that there is no clear ownership of cash. Another is that governments often position cash as a public good -- to be offered free by banks -- thereby inhibiting an economic debate on cash versus other instruments.

Adam is right to be skeptical. Basically, it is easy to champion their case against cash, as cash is indeed subsidised competition. But there is an easy retort:

Let's strip both sides of their subsidies!

Like it? I sure do ... just as surely, every warrior against cash will run for the hills when they figure out how naked they'd be.


Henry Moore's Fallen Warrior

Start with the issuance of cash. Make it free to any operator. Leave them their know thy customer regulations, and see what happens.

(Oops, maybe we already know!)

Posted by iang at 12:34 PM | Comments (2) | TrackBack

May 11, 2007

US government seizes the gold in frozen accounts

As discussed earlier, the US government, in cases against e-gold and presumably yet-to-be-charged account holders, has apparently completed seizure of some part of the gold:

In an unprecedented move on or just before Wednesday May 9th, 2007, the United States of America has forced Omnipay et al E-gold to redeem all the gold backing the 58 previously frozen accounts owned by e-gold, 1mdc, icegold and a handful of other exchangers and customers to be liquidated effective immediately to a us dollar account owned by the federal government. ...

MoneyNetNews has learned from a reliable source that e-gold has been ordered to hand over a fresh copy of the customer database when the redemption is completed.

Bear in mind this is unconfirmed at this stage. HT-RAH! I'll update this article if anything more comes to hand.

Posted by iang at 10:34 AM | Comments (3) | TrackBack

May 09, 2007

Solution to phishing -- an idea whose time has come?

Over on EC and other places they are talking about the .bank TLD as a possibility for solving phishing. Alex says it's an idea whose time has come.

No chance: Adam correctly undermines it:

  1. Crooks are already investing in their attacks. If that money will have a high return, by convincing more people that the URL is safe, then crooks will invest it.
  2. Some banks, such as credit unions, can't really afford $50,000 for a domain name, and so won't have one. (Thanks to Alex at RiskAnalys.is, ".bank TLD, An Idea Whose Time Has Come?")
  3. Finally, and most importantly, it won't work. People don't understand URLs, and banks create increasingly complex URLs. The phishers will make foo.bar.cn/.bank/ and people won't understand that's bad.

It's useful to throw these ideas around ... even if only as an exercise in what doesn't work. .bank won't work; or, if it would, why did all the URL-based stuff fail in the past?

Human names aren't secure. Check out Zooko's triangle. Something more is needed. Adam goes on to say:

The easy solution to preserving the internet channel against phishers is to use bookmarks.

The secure bookmark may be it. Adam is right. If there is such a thing as a time for a solution, the bookmark's time may have come.

The reason is that we as a security community (I mean here the one that took phishing seriously as a threat) have done a *lot* of work that has pushed in that direction. In excruciatingly brief form:

  • ZT is the theory,
  • Trustbar was the first experiment,
  • Petname Toolbar simplified it down to the essentials of petnames *and* discovered the bookmark,
  • the caps community re-thought out all their theory (!) and said "it's a secure bookmark!"
  • the human security UI people like Ping mixed in different flavours and different ideas.

Etc, etc. The bookmark was the core result. Its time has come, in that many of the people looking at phishing have now clicked into alignment on the bookmark.
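
As a sketch of what such a secure bookmark could hold -- illustrative only, and not the Petname Toolbar's or any browser's actual data model -- the essential step is binding the user's own name for a site to the exact URL and key seen at enrolment:

    import hashlib

    bookmarks = {}  # petname -> (url, certificate fingerprint)

    def remember(petname, url, certificate_der):
        # Bind the user's own name for the site to the URL and key seen now.
        bookmarks[petname] = (url, hashlib.sha256(certificate_der).hexdigest())

    def visit(petname, presented_url, presented_cert_der):
        url, fingerprint = bookmarks[petname]
        if presented_url != url:
            raise ValueError("bookmark does not point here -- possible phish")
        if hashlib.sha256(presented_cert_der).hexdigest() != fingerprint:
            raise ValueError("site key has changed -- do not enter credentials")
        return url

    remember("my bank", "https://bank.example.com/login", b"certificate bytes")
    print(visit("my bank", "https://bank.example.com/login", b"certificate bytes"))

The user navigates by the petname, never by reading or typing a URL, so a lookalike URL in a phishing mail has no petname to match against.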

But Adam also raises a good point in that it is hard to compete with sold security:

But that's too simple for anyone to make money at it. Certainly, no one's gonna make $50,000 a bank. That money is better spent on other things. .Bank is a bad idea.

I think that is a truism in general, but the converse is just as much a truism: many security developments were done for free in the first instance. SSH, Skype, firewalls, SSLeay, etc. were all marked by strong not-for-profit contributions, and these things changed the world.

Posted by iang at 06:52 AM | Comments (11) | TrackBack

May 08, 2007

H6.2 Most Standardised Security Protocols are Too Heavy

Being #6.2 in a series of hypotheses.

It's OK to design your own protocol. If there is nothing out there that seems to fit, after really thrashing #6.1 for all she is worth, design one that will do the job -- your job.

Consider the big weakness of SSL -- it's a connection protocol!

The mantra of "you should use SSL" is just plain stupid. For a start, most applications are naturally message-oriented, not connection-oriented. Even ones that are naturally stream-based -- like VoIP -- are often done as message protocols because they are not stream-based enough to make it worthwhile. Another example is HTTP, which is famously a request/response protocol, so it is by definition a messaging protocol layered over a connection protocol. The only real connection-oriented need I can think of is the secure terminal work done by SSH; that is because the underlying requirement (#6.1) is for a terminal session for Unix ttys that is so solidly connection-based that there is no way to avoid it. Yet even this slavish adoption of connection-oriented terminal work is not so clear-cut: if you are old enough to remember, it does not apply to terminals that did not know about character echo but instead sent lines or even entire pages in batch mode.
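
As a sketch of how light a message-oriented design can be -- illustrative only, in standard-library Python, with integrity and replay protection but no encryption or key management, which a real design would need to add -- each request is a self-contained message rather than a connection:

    import hashlib
    import hmac
    import json
    import os

    key = os.urandom(32)   # shared key, established out of band
    send_seq = 0
    last_seen = 0

    def wrap(payload):
        # Each message carries its own sequence number and MAC: no handshake,
        # no connection state to set up or tear down.
        global send_seq
        send_seq += 1
        body = json.dumps({"seq": send_seq, "payload": payload})
        tag = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
        return json.dumps({"body": body, "mac": tag})

    def unwrap(message):
        global last_seen
        outer = json.loads(message)
        expected = hmac.new(key, outer["body"].encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(outer["mac"], expected):
            raise ValueError("bad MAC")
        inner = json.loads(outer["body"])
        if inner["seq"] <= last_seen:      # reject replays and stale messages
            raise ValueError("replayed or stale message")
        last_seen = inner["seq"]
        return inner["payload"]

    print(unwrap(wrap({"pay": "100.00", "to": "alice"})))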

What this principle means is that you have to get into the soul of your application and decide what it needs (requirements! #6.1 ! again!) before you decide on the tool. Building security systems using a bottom-up toolbox approach is fraught with danger, because the limitations imposed by the popular tools cramp the flexibility needed for security. Later on, you will re-discover those tools as shackles around your limbs; they will slow you down and require lots of fumbling and shuffling and logical gymnastics to camouflage your innocent journey into security.

Posted by iang at 05:39 PM | Comments (3) | TrackBack

Threatwatch: Still searching for the economic MITM

One of the things we know is that MITMs (man-in-the-middle attacks) are possible, but almost never seen in the wild. Phishing is a huge exception, of course. Another fertile area is wireless LANs, especially around coffee shops. Correctly, people have pointed to this as a likely area where MITMs would break out.

Incorrectly, people have typically confused possibility with action. Here's the latest "almost evidence" of MITMs, breathtakingly revealed by the BBC:

In a chatroom used to discuss the technique, also known as a 'man in the middle' attack, Times Online saw information changing hands about how security at wi-fi hotspots – of which there are now more than 10,000 in the UK – can be bypassed.

During one exchange in a forum entitled 'T-Mobile or Starbucks hotspot', a user named aarona567 asks: "will a man in the middle type attack prove effective? Any input/suggestions greatly appreciated?"

"It's easy," a poster called 'itseme' replies, before giving details about how the fake network should be set up. "Works very well," he continues. "The only problem is,that its very slow ~3-4 Kb/s...."

Another participant, called 'baalpeteor', says: "I am now able to tunnel my way around public hotspot logins...It works GREAT. The dns method now seems to work pass starbucks login."

Now, the last paragraph is something else: it is referring to the ability to tunnel through DNS to get uncontrolled access to the net. This is typically possible if you run your own DNS server and install some patches and stuff. It is useful, and economically sensible for anyone to do, although technically it may be infringing behaviour to gain access to the net from someone else's infrastructure (laws and attitudes varying...).

So where's the evidence of the MITM? Guys talking about something isn't the same as doing it (and the penultimate paragraph seems to be talking about DNS tunnelling as well). People have been demoing this sort of stuff at conferences for decades ... we know it is possible. What we also know is that it is not a good use of your valuable time as a crim. People who do this sort of thing for a living search for techniques that give low visibility and high gathering capability. Broadcasting in order to steal a single username and password fails on both counts.

If we were scientists, or risk-based security scholars, what we would need is evidence that they did the MITM *and* that they committed a theft in so doing. Only then can we know enough to allocate the resources to solving the problem.

To wrap up, here is some *credible* news that indicates how to economically attack users:

Pump and dump fraudsters targeting hotels and Internet cafes, says FBI

Cyber crooks are installing key-logging malware on public computers located in hotels and Internet cafes in order to steal log-in details that are used to hack into and hijack online brokerage accounts to conduct pump and dump scams.

The US Federal Bureau of Investigation (FBI) has found that online fraudsters are targeting unsuspecting hotel guests and users of Internet cafes.

When investors use the public computers to check portfolios or make a trade, fraudsters are able to capture usernames and passwords. Funds are then looted from the brokerage accounts and used to drive up the prices of stocks the fraudsters had bought earlier. The stock is then sold at a profit.

In an interview with Bloomberg reporters, Shawn Henry, deputy assistant director of the FBI's cyber division, said people wouldn't think twice about using a computer in an Internet cafe or business centre in a hotel, but he warns investors not to use computers they don't know are secure.

Why is this credible, and the other one not? Because the crim is not sitting there with his equipment -- he's using the public computer to do all the dangerous work.

Posted by iang at 02:18 PM | Comments (1) | TrackBack

May 07, 2007

WSJ: Soft evidence on a crypto-related breach

Unconfirmed claims are being made in the WSJ that the hackers in the TJX case did the following:

  1. sat in a carpark and listened into a store's wireless net.
  2. cracked the WEP encryption.
  3. scarfed up user names and passwords ....
  4. used that to access the centralised databases and download the CC info.

The TJX hackers did leave some electronic footprints that show most of their break-ins were done during peak sales periods to capture lots of data, according to investigators. They first tapped into data transmitted by hand-held equipment that stores use to communicate price markdowns and to manage inventory. "It was as easy as breaking into a house through a side window that was wide open," according to one person familiar with TJX's internal probe. The devices communicate with computers in store cash registers as well as routers that transmit certain housekeeping data.

After they used that data to crack the encryption code the hackers digitally eavesdropped on employees logging into TJX's central database in Framingham and stole one or more user names and passwords, investigators believe. With that information, they set up their own accounts in the TJX system and collected transaction data including credit-card numbers into about 100 large files for their own access. They were able to go into the TJX system remotely from any computer on the Internet, probers say.

OK. So assuming this is all true (and no evidence has been revealed other than the identity of the store where it happened), what can we say? Lots, and it is all unpopular. Here's a scattered list of things, with some semblance of connectivity:

a. Notice how the crackers still went for the centralised database. Why? It is validated information, and is therefore much more valuable and economic to steal. The gang was serious and methodical. They went for the databases.

Conclusion: Eavesdropping isn't much of a threat to credit cards.

b. Eavesdropping is a threat to passwords, assuming that is what they picked up. But, we knew that way back, and that exact threat is what inspired SSH: eavesdroppers sniffing for root passwords. It's also where SSL is most sensibly used.

c. Eavesdropping is a threat, but MITM is not: by the looks of it, they simply sat there and sucked up lots of data, looking for the passwords. MITMs are just too hard to make them economic, *and* they leave tracks. "Who exactly is it that is broadcasting from that car over there....?"

(For today's almost evidence of the threat of MITMs see the BBC!)

d. Therefore, SSL v1 would have been sufficient to protect against this threat level. SSL v2 was overkill, and over-expensive: note how it wasn't deployed to protect the passwords from being eavesdropped. Neither was any other strong protocol. (Standard problem: most standardised security protocols are too heavy.)

TJX and 45 million Americans say "thanks, guys!" I reckon it is going to take the other 255 million Americans to lose big time before this lesson is attended to.

e. Why did they use a weak crypto protocol? Because it is the one delivered in the hardware.

Question: Why is hardware often delivered with weak crypto?

f. And, why was a weak crypto protocol chosen by the WEP people? And why are security observers skeptical that the new replacement for WEP will last any longer? The solution isn't in the "guild" approach I mentioned earlier, so forget ranting about how people should use a good security expert. It's in the institutional factors: security is inversely proportional to the number of designers. And anything designed by an industry cartel has a lot of designers.

g. Even if they had used strong crypto, could the breach have happened? Yes, because the network was big and complex, and the hackers could simply have plugged in somewhere else. Check out the clue here:

The hackers in Minnesota took advantage starting in July 2005. Though their identities aren't known, their operation has the hallmarks of gangs made up of Romanian hackers and members of Russian organized crime groups that also are suspected in at least two other U.S. cases over the past two years, security experts say. Investigators say these gangs are known for scoping out the least secure targets and being methodical in their intrusions, in contrast with hacker groups known in the trade as "Bonnie and Clydes" who often enter and exit quickly and clumsily, sometimes strewing clues behind them.

Recall that transactions are naked and vulnerable. Because the data is seen in so many places, savvy FCers assume the transactions are visible by default, and thus vulnerable unless intrinsically protected.

h. And, even if the entire network had been protected by some sort of overarching crypto protocol like WEP, the answer is to take over a device. Big stores means big networks means plenty of devices to take over.

i. Which leaves end-to-end encryption. The only protection you can really count on is end-to-end. WEP, WPA, IPSec and other such infrastructure-level systems are only a hopeful answer to an easy question; end-to-end security protocols are the real answer to application-level questions.

(e.g., they could have used SSL for protecting the password access to the databases, end to end. But they didn't.)

j. The other requirement is to make the data insensitive to breaches. That is, even if a crook gets all the packets, he can't do anything with them. Not naked, as it were. End-to-end encryption then becomes a privacy benefit, not a security necessity.

However, to my knowledge, only Ricardo and AADS deliver this, and most other designers are still wallowing around in the mud of encrypted databases. A possible exception is the selective disclosure approach ... but for various business reasons that is even less likely to be fielded than Ricardo and AADS were.
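
As a sketch of the simplest version of "insensitive to breaches" -- illustrative only, and not how Ricardo, AADS or any real tokenisation service works -- the database stores a keyed token rather than the card number, so a copied table is useless without the key:

    import hashlib
    import hmac

    tokenisation_key = b"kept outside the database, e.g. in an HSM"

    def tokenise(card_number):
        # Keyed one-way token: stable per card, but not reversible without the key.
        return hmac.new(tokenisation_key, card_number.encode(),
                        hashlib.sha256).hexdigest()

    transactions = [
        {"amount": "49.99", "card_token": tokenise("4111111111111111")},
    ]

    # The merchant can still link repeat customers (same card, same token) ...
    assert transactions[0]["card_token"] == tokenise("4111111111111111")
    # ... but a crook who copies the transactions table gets no card numbers to resell.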

k. Why don't we use more end-to-end encryption with naked transaction protocols? One reason is that they don't scale: we have to write one for each application. Another reason is that we have been taught not to, for generations: "you should use a standard security product" ... as seen by TJX, who *did* use a standard security product.

Conclusion: Security advice is "lowest common denominator" grade. The best advice is to use a standard product that is inapplicable to the problem area, and if that's the best advice, that also means there aren't enough FC-grade people to do better.

l. "Oh, we didn't mean that one!" Yeah, right. Tell us how to tell? Better yet, tell the public how to tell. They are already being told to upgrade to WPA, as if improving 1% of their network from 20% security to 80% security is going to help.

m. In short, people will seize on the encryption angle as the critical element. It isn't. If you are starting to get to the point of confusion due to the multiplying conflicts, silly advice, and sense of powerlessness the average manager has, you're starting to understand.

This is messy stuff, and you can pretty much expect most people to not get it right. Unfortunately, most security people will get it wrong too in a mad search for the single most important mistake TJX made.

The real errors are systemic: why are they storing SSNs anyway? Why are they using a single number for the credit card? Why are retail transactions so pervasively bound with identity, anyway? Why is it all delayed credit-based anyway?

Posted by iang at 02:27 PM | Comments (4) | TrackBack

May 05, 2007

survey of RFC S/MIME signature handling

As inspired by this paper on S/MIME signing, I (quickly) surveyed what the RFCs say about S/MIME signature semantics. In brief, the RFCs suggest that the signature is for the purpose of:

  • integrity of content or message
  • authenticity of the sender / originator, and/or
  • non-repudiation of origin
  • (advanced usages such as) preservation of signing labels

In increasing order of sophistication. What the RFCs do not say is that a digital signature used in an S/MIME packet is for one particular purpose, or should be construed to have one particular meaning.

That is, they do not say that the signature means precisely some set of the above, excluding others. Is it one? Is it all? Is one more important than another?

RFC2459 sheds more light on where this is defined:

A certificate user should review the certificate policy generated by the certification authority (CA) before relying on the authentication or non-repudiation services associated with the public key in a particular certificate. To this end, this standard does not prescribe legally binding rules or duties.

There, we can probably make a guess that the RFC uses the term "non-repudiation" when it means all of the legal semantics that might apply to a human act of signing. (Non-repudiation, as we know, is a fundamentally broken concept and should not be used.) So we can guess that any human semantics are deferred to the CA's documents.

Indeed, RFC2459 goes even further by suggesting that the user refer to the CP before relying on the authentication services. This is a direct recognition that all CAs are different, and therefore that the semantics of identity must also differ from CA to CA. In this sense, RFC2459 is correct, as we know that many certificates are issued according to domain control, others according to web-of-trust, and yet others according to the EV draft.

So when the paper mentioned above refers to weaknesses, it seems to be drifting into semantics that are not backed up by its references. Although the commentary raises interesting problems, it is not easy to ascribe the problem to any particular area. Indeed, few of the documents cited have a definition of security, as indicated by their (lack of) requirements, and neither does the paper supply one. Hence the complaint that "naïve sign & encrypt is vulnerable" is itself vulnerable to definitions assumed but not clearly stated by the author.

Where did these confused semantics come from? This is a common trap that most of the net and much of the cryptographic community have fallen into. The wider problem is simply not doing the requirements, something that serious software engineers know is paramount.

A narrower problem is that the digital signature can be confused with a human signature, perhaps by using the same word for very different concepts. Many people have then thought it has something to do with a human signature, or human identification, confusing its real utility and complicating architectures built on such confusion.


MOSS:

As long as the private keys are protected from disclosure, i.e., the private keys are accessible only to the user to whom they have been assigned, the recipient of a digitally signed message will know from whom the message was sent and the originator of an encrypted message will know that only the intended recipient is able to read it.

ESS:

Some of the features of each service use the concept of a "triple wrapped" message. A triple wrapped message is one that has been signed, then encrypted, then signed again. The signers of the inner and outer signatures may be different entities or the same entity....

1.1.1 Purpose of Triple Wrapping

Not all messages need to be triple wrapped. Triple wrapping is used when a message must be signed, then encrypted, and then have signed attributes bound to the encrypted body. Outer attributes may be added or removed by the message originator or intermediate agents, and may be signed by intermediate agents or the final recipient.

The inside signature is used for content integrity, non-repudiation with proof of origin, and binding attributes (such as a security label) to the original content. These attributes go from the originator to the recipient, regardless of the number of intermediate entities such as mail list agents that process the message. The signed attributes can be used for access control to the inner body. Requests for signed receipts by the originator are carried in the inside signature as well.....

The outside signature provides authentication and integrity for information that is processed hop-by-hop, where each hop is an intermediate entity such as a mail list agent. The outer signature binds attributes (such as a security label) to the encrypted body. These attributes can be used for access control and routing decisions.
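
To make the layering in that ESS excerpt concrete, here is a structural sketch; sign() and encrypt() are hypothetical stand-ins that only record the nesting of SignedData and EnvelopedData layers, not a real S/MIME or CMS API:

    def sign(content, signer, signed_attrs=None):
        # Hypothetical stand-in for a CMS SignedData layer (structure only).
        return {"SignedData": {"content": content, "signer": signer,
                               "signedAttrs": signed_attrs or {}}}

    def encrypt(content, recipient):
        # Hypothetical stand-in for a CMS EnvelopedData layer (structure only).
        return {"EnvelopedData": {"content": content, "recipient": recipient}}

    message = "original content"

    # Inner signature: bound to the plaintext, carrying origin and security
    # labels end-to-end, regardless of intermediate mail list agents.
    inner = sign(message, "originator", {"security-label": "internal"})

    # Encryption layer: only the intended recipient can open the inner signature.
    enveloped = encrypt(inner, "recipient")

    # Outer signature: hop-by-hop authentication over the ciphertext; agents can
    # verify it, and add or strip outer attributes, without seeing the content.
    triple_wrapped = sign(enveloped, "originator or agent",
                          {"security-label": "internal"})

    print(list(triple_wrapped["SignedData"]["content"]["EnvelopedData"]))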

S/MIME Version 3 Certificate Handling:

2.3 ... Agents MAY send CA certificates, that is, certificates that are self-signed and can be considered the "root" of other chains. Note that receiving agents SHOULD NOT simply trust any self-signed certificates as valid CAs, but SHOULD use some other mechanism to determine if this is a CA that should be trusted. Also note that in the case of DSA certificates the parameters may be located in the root certificate. This would require that the recipient possess the root certificate in order to perform a signature verification, and is a valid example of a case where transmitting the root certificate may be required.


S/MIME Version 3 Message Specification

1. Introduction

S/MIME (Secure/Multipurpose Internet Mail Extensions) provides a consistent way to send and receive secure MIME data. Based on the popular Internet MIME standard, S/MIME provides the following cryptographic security services for electronic messaging applications: authentication, message integrity and non-repudiation of origin (using digital signatures) and privacy and data security (using encryption).


Internet X.509 Public Key Infrastructure Certificate and CRL Profile

This specification profiles the format and semantics of certificates and certificate revocation lists for the Internet PKI. ...

In order to relieve some of the obstacles to using X.509 certificates, this document defines a profile to promote the development of certificate management systems; development of application tools; and interoperability determined by policy.

Some communities will need to supplement, or possibly replace, this profile in order to meet the requirements of specialized application domains or environments with additional authorization, assurance, or operational requirements. However, for basic applications, common representations of frequently used attributes are defined so that application developers can obtain necessary information without regard to the issuer of a particular certificate or certificate revocation list (CRL).

A certificate user should review the certificate policy generated by the certification authority (CA) before relying on the authentication or non-repudiation services associated with the public key in a particular certificate. To this end, this standard does not prescribe legally binding rules or duties.

Posted by iang at 09:23 AM | Comments (2) | TrackBack

May 04, 2007

US moves to seize the gold

Someone pointed out that the indictment unsealed last week against e-gold et al. includes this clause:

78. As a result of the offenses alleged in Counts One and Three of this indictment, the defendants E-GOLD, LTD., GOLD & SILVER RESERVE, INC., DOUGLAS JACKSON, REID JACKSON, and BARRY DOWNEY shall forfeit to the United States any property, real or personal, involved in, or traceable to such property involved in money laundering, in violation of Title 18, United States Code, Section 1956, and in operation of an unlicensed money transmitting business, in violation of Title 18, United States Code, Section 1960; including, but not limited to the following:

(a) The sum of money equal to the total amount of property involved in, or traceable to property involved in those violations. Fed.R.Crim.P. 32.2(b)(1).

(b) All the assets, including without limitation, equipment, inventory, accounts receivable and bank accounts, of E-GOLD, LTD., and GOLD & SILVER RESERVE, INC., whether titled in those names or not, including, but not limited to all precious metals, including gold, silver, platinum, and palladium, that "back" the e-metal electronic currency of the E-GOLD operation, wherever located.

If more than one defendant is convicted of an offense, the defendants so convicted are jointly and severally liable for the amount derived from such offense. By virtue of the commission of the felony charged in Counts One and Three of this indictment, and and all interest that the defendant has in the property involved in, or traceable to property involved in money laundering is vested in the United States and hereby forfeited to the United States pursuant to Title 18, United States Code, Section 982(a)(1).

(My emphasis. I included the whole clause, intending to allow each to form their own opinions. Of course, you and I should read the whole thing...)

A little background. In the gold community, it is a dictum that the US is no respecter of property rights. Twice in the 20th century, the financial rebels will point out, the US seized the gold of its private citizens, generally as a result of its own bad management of the currency and its attempts to stop people from fleeing the over-inflated dollar.

In this case, the US government is seeking to seize all of the gold held within the system as reserves to the currency, without due regard to the operation of property rights for the rest of the user base. It would seem an over-broad reach by the government, but sadly, it is expected and unsurprising to the digital currency community.

Whatever one thinks of e-gold, its operators and their actions, this is likely to reinforce the reputation of complete and utter disrespect that the US has for property rights around the world.

Unfortunately, this is no isolated case, but is in fact a concerted and long-lived programme by the US government to undermine property rights the world around. The US-invented Anti-Money Laundering (AML) regime stretches back 20 years or more to Ronald Reagan's war on drugs, and is now sufficiently strong to destroy the effect of property laws, which are themselves nothing if not strong.

AML takes implementing countries backwards in time and history. Although England and its former colonies inherited strong property rights from the days of the Magna Carta, it is as well to realise that the English experiment may have been more an exception than the rule. Consider Russia as counterpoint:

Since only the Tsar or the Party had property, no individual Russian could be sure of long-term usage of anything upon which to create wealth. And it is the poor to whom the property right matters most of all because property is the poor man's ticket into the game of wealth creation. The rich, after all, have their money and their friends to protect their holdings, while the poor must rely upon the law alone.

"The Rape of Russia" was not ancient history according to Williamson's 1999 testimony before the US House of Representatives, but living times: during the decade of the 1990s, the same group conspired with Russians to launder much of the residual value of the Russian people.

One would hope that the court is a little wise to the fact that if e-gold Ltd's system was used for crimes, it was also used for good purposes; that is, it was the operators and some users who were responsible. Normally we would expect the system to be placed under administration, and then a wholesale cleanout of any "bad" accounts to occur, under court supervision.

If not, the author above warned the US Congress what will happen to those who dismantle what property rights there are:

Connections

In the absence of property, it was access - the opportunity to seek opportunity - and favor in which the Russians began to traffic. The connections one achieved, in turn, became the most essential tools a human being could grasp, employ and, over time, in which he might trade. Where relationships, not laws, are used to define society's boundaries, tribute must be paid. Bribery, extortion and subterfuge have been the inevitable result. What marks the Russian condition in particular is the scale of these activities, which is colossal. Russia, then, is a negotiated culture, the opposite of the openly competitive culture productive markets require.

It is fairly clear that the e-gold operation was presented to the world and operated as a property rights operation. Although not without shenanigans, the very basis of the digital gold community has been a joyful expression of the right of property, and what good it can do when it is left to run free.

Unfortunately, the US government may have learnt too much from the Russian experience. The substantial crime in the indictment is "property rights without a licence" and the fine for that is "seizing all the property." Welcome to the world of tribute; bribery, extortion and subterfuge to follow.

Posted by iang at 07:54 AM | Comments (9) | TrackBack

May 03, 2007

Hal Finney on 'AACS and Processing Key'

Hal Finney posts an explanation of the AACS movie encryption scheme. This FC scheme has just been cracked, and the primary keys published, to much media and legal attention. As digital rights management is a core financial cryptography application, it's worth recording the technology story as a case study, even if the detail is overwhelming!


Since this is the cryptography mailing list, there might be interest in the cryptography behind this infamous key. This is from AACSLA Specifications page, particularly the first spec, Common Cryptographic Elements. The basic cryptography is from Naor, Naor and Lotspiech.

The AACS system implements broadcast encryption. This is a scheme which has also been used for satellite TV. The idea is that you want to encrypt data such that any of a large number of devices can decrypt it, with also the possibility of efficiently revoking the keys in a relatively small subset of devices. The revocation is in case attackers manage to extract keys from devices and use them to decrypt data without authorization.

Broadcast encryption schemes such as that used by AACS equip each device with a set of keys, and encrypt a content key to various subsets of these keys such that each authorized device can decrypt the content key but the revoked devices cannot. Various methods have been proposed for achieving this, with tradeoffs between the number of keys held by each device and the amount of data which must be broadcast to hold all of the necessary encryptions.

AACS uses a binary tree based method, where each device corresponds to the leaf node of a tree. It uses a tree with depth of 31, so there are 2^31 leaf nodes and 2^32 - 1 nodes in all. At this point it is believed that software players of a particular type are all considered a single device, while hardware players are each a unique device. This will allow individual hardware players to be revoked, while requiring all software players of a given brand or type to be revoked at once. This tradeoff is assumed to be acceptable because it is easy to get a new version of a software player.

The method of assigning and handling the keys is called subset-difference. It allows a single encryption to be decrypted by any of the devices in a given subtree of the main tree, minus any sub-subtree of that subtree. In this way, any set of revoked nodes can be handled by the union of an appropriate chosen set of subset-difference encryptions. For example, suppose two nodes A and B are to be revoked. Let A be to the left of B, and call their lowest common ancestor node C. Encrypt to the whole tree minus the subtree rooted at C; also to C's left child's subtree minus A; also to C's right child's subtree minus B. This will cover all nodes except A and B.
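
A toy sketch of that cover computation -- the node naming, levels and (level, index) pairs here are mine for illustration, not anything from the AACS spec -- for the two-revoked-leaf example just described:

    def ancestor(leaf, level, depth):
        # (level, index) of a leaf's ancestor: leaves sit at level `depth`.
        return (level, leaf >> (depth - level))

    def cover_all_but(a, b, depth):
        # Subset-difference sets covering every leaf except revoked leaves a and b.
        # C is the lowest common ancestor: the deepest level where the paths agree.
        c_level = max(l for l in range(depth)
                      if ancestor(a, l, depth) == ancestor(b, l, depth))
        c = ancestor(a, c_level, depth)
        return [
            ("whole tree", "minus subtree at", c),
            (ancestor(a, c_level + 1, depth), "minus leaf", (depth, a)),
            (ancestor(b, c_level + 1, depth), "minus leaf", (depth, b)),
        ]

    # Revoking leaves 5 and 6 in a 16-leaf (depth 4) tree takes three encryptions.
    print(cover_all_but(5, 6, depth=4))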

To implement subset-difference, the root node of each subtree is assigned a unique key called a device key. Then going down the subtree from that node, each node gets its own device key as a distinct one-way hash of its parent's device key. The result is that if you know a node's device key, you can deduce the device keys of all descendants of that node.
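
A minimal sketch of that top-down derivation, with SHA-256 standing in for AACS's real one-way functions (which differ in construction and key size); only the one-way, parent-to-child property matters here:

    import hashlib

    def child_key(parent_key, direction):
        # One-way step: a child's device key is a hash of its parent's key.
        return hashlib.sha256(parent_key + direction.encode()).digest()

    subtree_root_key = hashlib.sha256(b"example subtree root secret").digest()

    # Anyone holding a node's key can walk downwards to any descendant's key ...
    leaf_key = child_key(child_key(child_key(subtree_root_key, "L"), "R"), "L")

    # ... but the hash cannot be run upwards, so the leaf key reveals nothing
    # about the subtree root's key or about sibling branches.
    print(leaf_key.hex())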

This assignment of keys is carried out independently for each subtree, so a node at level n has n+1 device keys associated with it, one for each of the n+1 subtrees that it is a part of.

Leaf nodes correspond to devices, but devices do not get the device keys for "their" leaf node. Instead, they are given the device keys of the sibling node of their leaf, as well as the device keys of all of the siblings of their ancestor nodes. Because knowing a device key allows deducing the device keys of all its descendants, this assignment allows each physical device to deduce all device keys in the tree except for their "ancestor" nodes: those on the one branch of the tree leading to the leaf node.

To implement subset-difference encryption, suppose we want to encrypt to all nodes in the subtree rooted at node A except those nodes in the sub-subtree rooted at node B. Then we encrypt to the device key of node B that was assigned as part of the device key system rooted at node A. All nodes in the node-A subtree except those below node B can deduce this device key, because B is not one of their ancestors. Nodes below B cannot deduce the device key because B is an ancestor, and nodes not below A cannot deduce it because this set of device keys was unique to the node-A subtree.

In order to get the system started, one node is considered pre-revoked and not assigned to any physical device. Initially, the data is encrypted to the device key assigned to that node as part of the system for the whole tree. Every device will be able to deduce that device key and decrypt the data.

That one key is the "processing key" about which so much fuss is being made. All HD-DVD disks that were initially produced have their content keys encrypted to that single key. Knowing this processing key, along with other information available from the disk, allows determining all necessary decryption keys and provides access to the plaintext of the content. With this value having been published, all of the first generation of HD-DVD disks can be played.

The interesting thing is that publishing a processing key like this does not provide much information about which device was cracked in order to extract the key. This might leave AACSLA in a quandary about what to revoke in order to fix the problem. However in this particular case the attackers made little attempt to conceal their efforts and it was clear which software player(s) were being used. This may not be the case in the future.

AACSLA has announced that they will be changing the processing keys used in disks which will begin to be released shortly. Software players have been updated with new device keys, indicating that the old ones will be revoked. In the context of the subset-difference algorithm, there will now probably be a few encryptions necessary to cover the whole tree while revoking the old software player nodes as well as the pre-revoked node. This will make the processing key which has been published useless for decrypting new disks.

Because processing keys do not unambiguously point to their source, AACSLA may choose to set up subset-difference encryptions in which each software player is part of a different subtree and therefore uses a different processing key. This might require a few more encryptions than the minimal number that subset-difference allows, but it would reduce the chance that AACSLA would find themselves unable to determine the source of a published processing key. This will only work as long as attackers restrict themselves to the relatively few software players. If some group were to succeed in extracting keys from a hardware player and publish a processing key that might apply to the majority of hardware players in use, AACSLA would seemingly have no way to determine how to address the problem.

Now I must confess that this already long message has oversimplified the AACS system in certain respects. First, the subset-difference system is only carried on for the lowest 22 levels of the 31 level tree. There are effectively 512 independent trees where the algorithm is applied, each with a single pre-revoked leaf node. However at this time it appears that only one is in use.

Second, the processing key is not actually the same as the node's device key, but rather is a hash of the device key. Further, the exact details of how you go from the processing key to the various disk content keys involve several levels of indirection and encryption.

Third, even given the processing key, some of the information needed to derive all of the disk's content is not easily available. One piece needed is a per-title Volume ID which is not readable from the disk in a straightforward way. Volume IDs have been discovered by eavesdropping on the USB bus connected to a disk player, or by hacking disk player firmware. At this point it is hard for typical end users to read Volume IDs, so knowing the processing key is not generally sufficient to read disks. Databases of Volume IDs have been published online, but disk media keys could just as easily have been published.

Speculating now, the AACS system is flexible but it is possible that publication of processing keys may not have been fully anticipated by the designers. The difficulty of tracing processing keys to their source in an environment in which new disks may require many weeks or even months of lead time may interfere with the planned revocation system. The current processing key will soon be rendered invalid for new releases, so AACSLA's aggressive legal tactics seem disproportionate compared to the relative unimportance of this particular key. Perhaps these legal actions are primarily intended to limit distribution of future processing keys that are found on the next set of disk releases. That would further point to technical difficulties in revocation strategy when a new processing key is published.

Hal Finney

Posted by iang at 07:53 AM | Comments (1) | TrackBack

May 02, 2007

more Tipping Point evidence - POS vendors sued

I wrote a week or so back about the failure of liability sharing and the consequent failure in market information. In that case, it centred on the expectation that the banks have to back up the consumer every time something goes wrong. Is the TJX case enough to trigger the long-awaited pass-on of liability to the other parties who share some responsibility?

Chris at EC points to Storefront:

Following lawsuits in February against some of the nation's largest retailers for illegally revealing too much credit card information on printed receipts, two of those retailers are now suing their POS vendors.

In the initial lawsuits filed early this year, some 50 of the nation's top retailers... were accused of printing full credit numbers and expiration dates on printed customer receipts, violating a provision of the Fair and Accurate Credit Transactions Act (FACTA) ...

In the last couple of weeks, two of those retail defendants—Charlotte Russe and Shoe Pavillion—have sued their POS vendors, saying that the retailer relied on them and if the retailer is liable, then the POS vendor should pay for it.

Is this a good thing? I think, yes. The alternatives are not good: the vendor has no liability for its actions, a law is passed that suits nobody, and things get worse.

Nick commented "Let the suing begin!" Better to suck up some court time and create an environment where -- no matter how small -- a vendor of security stuff has to work *with the customer's and the end-customer's risk model* and also take on some of the liability when it goes wrong.

Posted by iang at 11:24 AM | Comments (1) | TrackBack