March 25, 2008

Pogo reports: big(gest) bank breach was covered up?

An anomaly surfaces on the breach scene. Lynn reports in comments, via Dark Reading, to Pogo:

With the exception of the Birmingham News, what may be the largest bank breach involving insider theft of data seems to have flown under the mainstream media radar. ...

In light of the details now available, the breach appears to be the largest bank breach involving insider theft of data in terms of number of customers whose data were stolen. The largest incident to date for insider theft from a financial institution involved the theft of data on 8.5 million customers from Fidelity National Information Services by a subsidiary's employee.

It is not clear at the time of this writing whether Compass Bank ever notified the more than 1 million customers that their data had been stolen or how it handled disclosure and notification. A request for additional information from Compass Bank was not immediately answered.

I would guess that the Feds agreed to keep it quiet. And gave the institution a get-out-of-jail card for the disclosure requirement. It would be curious to see the logic, and I'd be skeptical. On the one side, the damage is done, and the potential for a sting or new information would not really be good enough to compensate for a million victims.

On the other side, maybe they were also able to satisfy themselves that no more damage would be done? It still doesn't cut the mustard, because once identity victims get hit, they need the hard information to clear their credit records.

But, in the light of yesterday's post, let's see this as an exception to the current US flavour of breach disclosure, and see if it sheds any light on costs of non-disclosure.

Posted by iang at 08:11 AM | Comments (4) | TrackBack

March 24, 2008

S/MIME: we don't need more reasons why it failed...

Reading up on econ and sec for something that won't be mentioned in this post, I stumbled across this passage by Ozment and Schechter in "Bootstrapping the Adoption of Internet Security Protocols":

If Alice has adopted authentication, she signs all of her email. She thus expects Bob to reject unsigned messages that purport to be from her but cannot be authenticated. If Alice has not adopted authentication, she does not sign her messages. She thus expects Bob to accept messages from her even though they are not signed. To know whether to accept an unsigned message purportedly from Alice, Bob must know whether Alice has adopted authentication.

That's as eloquent a description as I've come across of what we might call the S/MIME signing problem (with hints at other systems like OpenPGP and SSL).

The authors then spend another 12 or so pages addressing the issue, and I've yet to read that, but it does seem that we can shortcut their analysis and say: this market won't work! Here's more:

Solving this problem requires a secure mechanism through which Bob can determine if Alice has adopted authentication. For example, if Bob already knows Alice he might consider it safe to call and ask if she signs her messages. Unfortunately, the Internet has lacked a general mechanism with which to securely determine whether a system or its users has adopted an authentication technology.
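Bob's dilemma in that passage can be sketched in a few lines. This is a hypothetical model, not anything from the paper: `sender_has_adopted` stands in for the secure lookup that the authors note the Internet does not provide.

```python
# A hypothetical sketch of Bob's dilemma from the quoted passage.
# `sender_has_adopted` stands in for the secure adoption lookup that
# the authors note does not exist; None means "unknown".

def should_accept(message_signed, sender_has_adopted):
    """Bob's decision rule for an incoming, purportedly-from-Alice message."""
    if message_signed:
        return True                  # a verified signature always passes
    if sender_has_adopted is None:
        # There is no secure way to learn the sender's adoption status,
        # so Bob must guess; rejecting would lose mail from all non-adopters.
        return True                  # in practice, everyone accepts
    return not sender_has_adopted    # reject unsigned mail only from known adopters

# A forged, unsigned message from "Alice" is accepted by default,
# which is exactly why Alice's signing buys her so little.
print(should_accept(False, None))    # prints: True
```

The `None` branch is the whole problem: without a secure answer to "has Alice adopted?", the only workable default is to accept.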

Students of tautology will find that interesting. What to do? From my podium, I say this:

There is only one mode, and it is secure.

The 3rd hypothesis has the legs to walk this journey, and it would carry S/MIME into securing much more email, if only those legs were set free to walk your secure talk. Now to read the rest of their paper...

Posted by iang at 04:20 PM | Comments (0) | TrackBack

Liability for breaches: do we need new laws?

It is frequently pointed out by economists that incentives are the key to a lot of behaviour. They argue that if incentives are aligned, positive results happen, and if misaligned, damage is done. This argument goes a long way back in the economics tradition, and has recently been highlighted to the Internet security community by Prof. Ross Anderson and others, who point out that the incentives in information security are not aligned.

The point in Information Technology is that a supplier provides the service but disclaims the liability. The nature of this service might range from Microsoft's Windows operating system to banks' online interfaces, from Mozilla's browser to the vast behemoth known as the credit system. In each case, there are security ramifications to the service, and all of them are passed on to the user. However, as the user is generally in no position to fix or even understand the security ramifications, we have an incentives clash.

The classical (liberal?) cry is that we need new laws to shift the liability back to the supplier. The economic argument against that is simple: firstly, we have no clear picture of the efficient way to deal with the liability, and secondly, passing a law is almost always going to make matters less clear. So it will probably be wrong.

Now switch across to the breaches debate. Breaches in the US roll on, and sometimes even jump through the immigration barrier to the UK and other places. That's old news, but what is not is that the legal fraternity are now in on the act, and ready to file class action suits:

In a likely precursor of what's to come, a Philadelphia law firm and an attorney in Maine have filed class-action lawsuits against Hannaford Bros. Co., the Scarborough, Maine-based supermarket chain that this week disclosed a data security breach involving the potential compromise of 4.2 million credit and debit cards.

Philadelphia-based Berger & Montague PC filed its lawsuit yesterday in U.S. District Court in Maine. A similar suit was filed Tuesday by Bangor, Maine-based attorney Samuel Lanham Jr. on behalf of Hannaford customers in all of the states where the grocer does business.

In a class action suit, one suit is filed and all victims join it on one side. The judgement is then awarded and shared out (with a hefty percentage going to the attorneys). You could criticise the concept on several grounds: the lawyers always win, the payouts are often small to each individual, the cases take a long time, the smaller company is blown away by them, there are easy ways to game the payout... etc etc, but from an economics perspective it is also evident that the class action suit achieves a switch in incentives.

Before now, the supplier of online banking, or merchant retailing, or Internet software was untouchable in any big sense for security issues. This was the point of the incentives commentators: there was no incentives alignment. (I went even further in the market for silver bullets by showing how incentives are negatively aligned. Because of the silver bullets effect, the big player is incentivized to deliberately avoid the much bigger extraordinary costs -- fingerpointing -- while absorbing all small, direct losses without noticing. This means that the big player was incentivized to avoid dealing with security, and thus was generally incentivized to make matters worse for the individual.)

Now, some large lump of the incentives for security has switched across to the supplier. At a minimum, there is the threat of a class action suit. Indeed, it is now a validated threat, as we can see the clarity, the presence and the danger (for retailers at least). At the maximum, there may be an actual judgement at the end of an actual filed suit, something less likely, but more tangible, than a threat. Hence, it is now possible to calculate the expected value (loss) from the class action activity.
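That expected-value calculation can be made concrete as a back-of-envelope sketch. Every number below is invented purely for illustration; the point is the shape of the formula, not the figures.

```python
# Hypothetical back-of-envelope: expected loss to a supplier from the
# class action threat. All probabilities and amounts are invented.

def expected_class_action_loss(p_suit, p_judgement_given_suit, damages, defence_costs):
    """E[loss] = P(suit) * (defence costs + P(judgement | suit) * damages)."""
    return p_suit * (defence_costs + p_judgement_given_suit * damages)

loss = expected_class_action_loss(
    p_suit=0.3,                   # chance a breach draws a suit at all
    p_judgement_given_suit=0.2,   # chance the suit ends in a judgement
    damages=50_000_000,           # judgement shared across the class
    defence_costs=2_000_000,      # paid win or lose, once sued
)
print(f"expected loss: ${loss:,.0f}")    # prints: expected loss: $3,600,000
```

Once that number is non-trivial, a rational supplier has a budget line for security that it never had before.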

If, then, the silver bullet economics are shifted to the point where these direct security costs are now more important than the indirect fingerpointing costs, we might also hope that incentives have shifted sufficiently to bring security costs to the user back onto the agenda for the supplier. If we achieve that, then we'll have achieved a good thing.

Which also brings us to another conclusion about the market for security: we don't need any new laws, as the class action system may be sufficient. Well, that's not entirely true. What we do need is this:

1. a breach disclosure law (as SB1386 has been credited with opening the floodgates of breach information), and

2. a mechanism to shift the newly-surfaced incentives, such as the class action system.

It cannot be stressed enough that SB1386 was *necessary* to change the balance. It wasn't, however, sufficient; for that, we still need to allocate the liability more directly. In the presence of class action threats, no more may be needed, and in particular, new liability laws will be damaging: not only will they be too limiting in their understanding, they are likely to damage the (free market) emergence of the class action mechanism.

When do we find out if class action is enough? I first predicted this path many years back with respect to phishing, and eventually gave up waiting. So it is also fair to say that we need one more component:

3. Time. Patience.

Not something I (nor politicians, nor blog writers, nor security sellers) am well-endowed with, apparently, but it seems the market has sufficient endowments of it.

Posted by iang at 10:32 AM | Comments (3) | TrackBack

March 20, 2008

World's biggest PKI goes open source: DogTag is released

One frequent lament about PKI is that it simply didn't scale (IT talk for not delivering enough grunt to power a big user base), and there was no evidence to the contrary. Well, that's not quite true: there is one large-scale PKI on the planet:

Red Hat has teamed up with August Schell to run and support the U.S. Department of Defense’s (DoD) public key infrastructure (PKI). The DoD PKI is the world’s largest and most advanced PKI installation, supporting all military and civilian personnel throughout the DoD worldwide.

Red Hat Certificate Authority (CA) and Lightweight Directory Access Protocol (LDAP) are used to operate the DoD identity management infrastructure with August Schell providing hands-on support for the source-code-level implementation. The Red Hat Certificate System has issued more than 10 million digital credentials, many of which were issued immediately preceding conflicts in Iraq and Afghanistan.

It seems that its success or growth rode the wave of the recent Iraq military expedition (or war or whatever they call it these days). The reason for stressing the military context is that in such circumstances, things can be pushed out which would never evolve of their own pace in the government or private economy. This may make it a trendsetter, because it finally breached the barriers for the rest of the world. Or it may simply underline the reasons why it won't set any trends; only future history will tell us which it is.

Today's news is that the code behind the above PKI has just gone open source under the name of DogTag (a reference to the 2 metal identity tags that US servicemen and women wear around their necks). Bob Lord announces:

In December of 2004, Red Hat purchased several technologies and products from AOL. The most prominent of those products were the Directory Server and the Certificate System. Since then we opened the source code to the Directory Server (see http://directory.fedoraproject.org/ for all the details). However a number of factors kept the work to release the Certificate System on the back burner. That's about to change.

Today, I'm extremely happy to announce the release of the Certificate System source code to the world.

This isn't a “Lite” or demo version of the technology, but the entire source code repository, the one we use to build the Red Hat branded version of the product. It's the real deal.

Another barrier breached! This could change the map for small, boutique or startup CAs. For those who think that this will lead to an explosion of interest in crypto and CAs and PKIs, I cannot resist adding some thoughts from Frank Hecker, who commented:

It's ... the end of an era: When I was working at Netscape ten years ago we were dealing with patented crypto algorithms (RSA), classified crypto algorithms (FORTEZZA), proprietary crypto libraries (RSA BSafe) and crypto applications (Netscape Communicator, Netscape Enterprise Server, Netscape Certificate Management System), and crypto export control. All gone now, or at least gone for most practical purposes (e.g., export control).

Of course, now that people have all the open-source patent-free no-export-hassle crypto that they could possibly want, they're realizing that crypto in and of itself wasn't nearly the panacea they thought it was :-)

I cannot say it better!

Posted by iang at 04:08 AM | Comments (1) | TrackBack

March 13, 2008

Trojan with Everything, To Go!

more literal evidence of ... well, everything really:

Targeting over 400 banks (including my own :( ! ) and having the ability to circumvent two-factor authentication are just two of the features that push Trojan.Silentbanker into the limelight. The scale and sophistication of this emerging banking Trojan is worrying, even for someone who sees banking Trojans on a daily basis.

This Trojan downloads a configuration file that contains the domain names of over 400 banks. Not only are the usual large American banks targeted but banks in many other countries are also targeted, including France, Spain, Ireland, the UK, Finland, Turkey—the list goes on.

The ability of this Trojan to perform man-in-the-middle attacks on valid transactions is what is most worrying. The Trojan can intercept transactions that require two-factor authentication. It can then silently change the user-entered destination bank account details to the attacker's account details instead. Of course the Trojan ensures that the user does not notice this change by presenting the user with the details they expect to see, while all the time sending the bank the attacker's details instead. Since the user doesn’t notice anything wrong with the transaction, they will enter the second authentication password, in effect handing over their money to the attackers. The Trojan intercepts all of this traffic before it is encrypted, so even if the transaction takes place over SSL the attack is still valid. Unfortunately, we were unable to reproduce exactly such a transaction in the lab. However, through analysis of the Trojan's code it can be seen that this feature is available to the attackers.

The Trojan does not use this attack vector for all banks, however. *It only uses this route when an easier route is not available*. If a transaction can occur at the targeted bank using just a username and password then the Trojan will take that information, if a certificate is also required the Trojan can steal that too, if cookies are required the Trojan will steal those....

(spotted by JPM) MITB, MITM, two-factor as silver bullets for online banks, the node is insecure, etc etc.

About the only thing that is a bit of a surprise is the speed of this attack. We first reported the MITB here around 2 years back, and we are still only seeing reports like the above. Although I said earlier that a big problem with the banking world was that the attacker can spin inside your OODA loop, it would appear that he does not take on every attack.

See above for some limits: the attacker is finding and pursuing the easiest attacks first. Is this finally the evidence that cryptographers cannot ignore? Crypto alone has proven not to work. It may be theoretically strong, but it is practically brittle, and easily bypassed. A more balanced, risk-based approach is needed. An approach that uses a lot less crypto, and a lot more engineering and user understanding, would be far more efficacious at delivering what users need.

Posted by iang at 07:18 AM | Comments (2) | TrackBack

March 12, 2008

Format Wars: XML v. JSON

They're called wars because afterwards, everyone agrees that they were senseless wastes of resources, and faithfully promises never to let it happen again. In this case (at Mozo), the absolutely-everything format of XML (that stuff where everything is surrounded by <angle> bracketed words </angle>) is up against a newcomer called JSON.

Because we are software engineers in the financial cryptography world, I prefer the haptic approach to decision making. That is, we have to understand at least enough in order to build it. Touch this:

Here’s an example data structure, of the kind you might want to transmit from one place to another (represented as a Python dictionary; mentally replace with the syntax from your programming language of choice).
person = {
  "name": "Simon Willison",
  "age": 25,
  "height": 1.68,
  "urls": [
    "http://simonwillison.net/",
    "http://www.flickr.com/photos/simon/",
    "http://simon.incutio.com/"
  ]
}
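For comparison, the same structure survives a round trip through Python's standard `json` module in two calls; a minimal sketch of why a format that fits on one page is easy to keep under your own control:

```python
import json

person = {
    "name": "Simon Willison",
    "age": 25,
    "height": 1.68,
    "urls": [
        "http://simonwillison.net/",
        "http://www.flickr.com/photos/simon/",
        "http://simon.incutio.com/",
    ],
}

# Serialise to a JSON string and parse it back: the entire format is
# covered by two calls, with no schemas, namespaces or external entities.
wire = json.dumps(person)
assert json.loads(wire) == person
```

The equivalent XML round trip would already force choices about elements versus attributes, encodings, and which parser's quirks you are willing to inherit.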

Speaking strictly from the point of view of security: the goals are to have all your own code, and to be simple. Insecurity lurks in complexity, and other people's code represents uncontrollable complexity. Not because the authors are evil but because their objectives differ from yours in ways that you cannot see and cannot control.

Generally, then, in financial cryptography you should use your own format. Because that ensures that it is your own code doing the reading, and especially, that you have the skills and assets to maintain that code, and fix it.

To get to the point, I think this rules out XML. If one were considering security as the only goal, then it's out: XML is far too complex, it drags in all sorts of unknown stuff which the average developer cannot control, and you are highly dependent on the other people's code sets. I've worked on a few large projects with XML now, and this is the ever-present situation: out of control.

What then about JSON? I'm not familiar with it, but a little googling and I found the page above that describes it ... in a page. From a strictly security pov, that gives it a hands-down win.

I already understand what JSON is about, so I can secure it. I can't and never will be able to say I can secure XML.

Posted by iang at 08:31 AM | Comments (5) | TrackBack

March 09, 2008

The Trouble with Threat Modelling

We've all heard it a hundred times: what's your threat model? But how many of us can answer that question? Sadly, fewer than we would like, and I myself would not have a confident answer. As writings on threat modelling are few and far between, it is difficult to draw a hard line under the concept. Yet more evidence of gaping holes in the security thinker's credibility.

Adam Shostack has written a series of blog posts on threat modelling in action at Microsoft (read in reverse order). It's good: readable, and a better starting point, if you need to do it, than anything else I've seen. Here are a couple of plus points (there are more) and a couple of criticisms:

Surprisingly, the approach is written around the practice that it is the developers' job to do the security work:

We ask feature teams to participate in threat modeling, rather than having a central team of security experts develop threat models. There’s a large trade-off associated with this choice. The benefit is that everyone thinks about security early. The cost is that we have to be very prescriptive in how we advise people to approach the problem. Some people are great at “think like an attacker,” but others have trouble. Even for the people who are good at it, putting a process in place is great for coverage, assurance and reproducibility. But the experts don’t expose the cracks in a process in the same way as asking everyone to participate.

What is written between the lines is that the central security team at Microsoft provides a moderator or leader for the process. This is good thinking, as it brings in the experience, but it still makes the team do the work. I wonder how viable this is for general practice? Outside the megacorps where they have made this institutional mindshift happen, would it be possible to ask a security expert to come in, swallow 2 decades of learning, and be a leader of a process, not a doer of a process?

There are many ramifications of the above discovery, and it is fascinating to watch them bounce around the process. I'll just repeat one here: simplification! Adam hit the obvious problem that if you take the mountain to Mohammad, it should be a small mountain. Developers are employed to write good code, and complex processes just slow that down, and so an aggressive simplification was needed to come up with a very concise model. A more subtle point is that the moderator wants to impart something as well as get through the process, and complexity will kill any retention. Result: one loop on one chart, and one table.

The posts are not a prescription on how to do the whole process, and indeed in some places, they are tantalisingly light (we can guess that it is internal PR done through a public channel). With that understanding, they represent a great starting point.

There are two things that I would criticise. One major error, IMHO: Repudiation. This was an invention by PKI-oriented cryptographers in the 1990s or before, seeking yet another marketing point for the so-called digital signature. It happens to be wrong. Not only is the crypto inadequate to the task, the legal and human processes implied by the Repudiation concept are wrong. Not just misinformed or misaligned, they are reversed from reality, and in direct contradiction, so it is no surprise that after a decade of trying, Non-Repudiation has never ever worked in real life.

It is easy to fix part of the error. Where you see Non-Repudiation, put Auditing (in the sense of logging) or Evidence (if looking for a more juridical flavour). What is a little more of a challenge is how to replace "Repudiation" as the name of the attack ... which, on reflection, is part of the error. The attack alleged as repudiation is problematic because, before it is proven one way or the other, it is not possible to separate a real attack from a mistake. Labelling it as an attack then creates a climate of guilty until proven innocent, but without the benefit of evidence tuned to proving innocence. This inevitably leads to injustice, which leads to mistrust and finally (if a fair and open market is in operation) rejection of the technology.

Instead, think of it as an attack born of confusion or uncertainty. This is a minor issue when inside one administrative or trust boundary, because one person elects to carry the entire risk. But it becomes a bigger risk when crossing into different trust areas. Then, different agents are likely to call a confusing situation by different viewpoints (incentives differ!).

At this point the confusion develops into a dispute, and that is the real name for the attack. To resolve the dispute, add auditing / logging and evidence. Indeed, cryptographic artifacts such as hashes and digsigs make mighty fine evidence, so it might be that a lot of the work can be retained with only a little tweaking.
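As a sketch of that tweak: instead of claiming non-repudiation, simply log a timestamped digest of each message as evidence for any later dispute. This is a hypothetical helper built only on the standard library, not a prescription.

```python
import hashlib
import time

evidence_log = []   # in practice, an append-only store


def record_evidence(sender, message):
    """Log a timestamped digest so a later dispute can be examined,
    without making any 'non-repudiation' claim about the sender."""
    entry = {
        "sender": sender,
        "digest": hashlib.sha256(message.encode()).hexdigest(),
        "logged_at": time.time(),
    }
    evidence_log.append(entry)
    return entry


entry = record_evidence("alice", "pay Bob 100")
# In a dispute, re-hash the claimed message and compare digests:
assert entry["digest"] == hashlib.sha256(b"pay Bob 100").hexdigest()
```

Nothing here asserts who pressed the button; it only preserves evidence that a given message was seen at a given time, which is what a dispute actually needs.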

I would then prefer to see the threat / property matrix this way:

Threat                  --> Security Property
Spoofing                --> Authentication
Tampering               --> Integrity
Dispute                 --> Evidence
Information Disclosure  --> Encryption
Denial of Service       --> Availability
Elevation of Privilege  --> Authorisation

A minor criticism I see is in labelling. I think the whole process is not threat modelling but security modelling. It's a minor thing, which Adam neatly disposes of by saying that arguing about terms is not only pointless but distracts from getting the developers to do the job. I agree. If we end up disposing of the term 'security modelling' then I think that is a small price to pay to get the developers a few steps further forward in secure development.

Posted by iang at 09:04 AM | Comments (3) | TrackBack