Long-term readers will know that I have often written of the failure of the browser vendors to provide effective security against phishing. I long ago predicted that nothing would change until the class-action lawsuits came. Now signs are appearing that this is coming to pass:
That's changing rapidly. Recently, Sony faced a class action lawsuit for losing the private information of millions of users. And this week, it was reported that Dropbox is already being sued for a recent security breach of its own.

It's too early to know if these particular lawsuits will get anywhere, but they're part of a growing trend. As online services become an ever more important part of the American economy, the companies that create them increasingly find that security problems are hitting them where it really hurts: the bottom line.
See also the spate of lawsuits against banks over losses; although it isn't the banks' direct fault, they are complicit in pushing weak security models, and a law will come to make them completely liable. Speaking of laws:
Computer security has also been an area of increasing activity for the Federal Trade Commission. In mid-June, FTC commissioner Edith Ramirez testified to Congress about her agency's efforts to get companies to beef up their online security. In addition to enforcing specific rules for the financial industry, the FTC has asserted authority over any company that makes "false or misleading data security claims" or causes harm to consumers by failing to take "reasonable security measures." Ramirez described two recent settlements with companies whose security vulnerabilities had allowed hackers to obtain sensitive customer data. Among other remedies, those firms have agreed to submit to independent security audits for the next 20 years.
Skip over the sad joke at the end. Timothy B. Lee of Ars Technica, author of those words, did more than just recycle other stories; he actually did some digging:
[Ars asked] Alex Halderman, a computer science professor at the University of Michigan, to help us evaluate these options. He argued that consumer choice by itself is unlikely to produce secure software. Most consumers aren't equipped to tell whether a company's security claims are "snake oil or actually have some meat behind them." Security problems therefore tend not to become evident until it's too late.

But he argued the most obvious regulatory approach—direct government regulation of software security practices—was also unlikely to work. A federal agency like the FTC has neither the expertise nor the manpower to thoroughly audit the software of thousands of private companies. Moreover, "we don't have really widely regarded, well-established best practices," Halderman said. "Especially from the outside, it's difficult to look at a problem and determine whether it was truly negligent or just the kind of natural errors that happen in every software project."
And when an agency found flaws, he said, it would have trouble figuring out how urgent they were. Private companies might be forced to spend a lot of time fixing trivial flaws while more serious problems get overlooked.
(Buyers don't know. Sellers don't know.)
So what about liability? I, like others, have recognised that liability will eventually arise:
This is a key advantage of using liability as the centerpiece of security policy. By making companies financially responsible for the actual harms caused by security failures, lawsuits give management a strong motivation to take security seriously without requiring the government to directly measure and penalize security problems. Sony allegedly laid off security personnel ahead of this year's attacks. Presumably it thought this would be a cost-saving move; a big class action lawsuit could ensure that other companies don't repeat that mistake in future.
But:
Still, Halderman warned that too much litigation could cause companies to become excessively security-conscious. Software developers always face a trade-off between security and other priorities like cost and time to market. Forcing companies to devote too much effort to security can be as harmful as devoting too little. So policymakers shouldn't focus exclusively on liability, he said.
Actually, it's far worse. Figure out some problem, and go to a company and mention that this issue exists. The company will ignore you. Mention liability, and the company will immediately close ranks and deny-by-silence any potential liability. Here's a close variation on the theme, written up concerning privacy laws:
...For everything else, the only rule for companies is just “don’t lie about what you’re doing with data.”

The Federal Trade Commission enforces this prohibition, and does a pretty good job with this limited authority, but risk-averse lawyers have figured out that the best way to not violate this rule is to not make explicit privacy promises at all. For this reason, corporate privacy policies tend to be legalistic and vague, reserving rights to use, sell, or share your information while not really describing the company’s practices. Consumers who want to find out what’s happening to their information often cannot, since current law actually incentivizes companies not to make concrete disclosures.
Likewise with liability: if a problem is known of beforehand, it is far easier to slap on a claim of gross negligence. Which means, in simple layman's terms: triple damages. Hence, companies have a powerful incentive to ignore liability completely. As above with privacy, where companies are incentivised not to disclose, so it comes to pass with security in general.
Try it. Figure out some user-killer problem in some sector, and go talk to your favourite vendor. Mention damages, liability, etc, and up go the shutters. No word, no response, no acknowledgement. And so, the problem(s) will never get fixed. The fear of liabilities is greater than the fear of users, competitors, change, even fear itself.
Which pretty much guarantees a class-action lawsuit one day. And the problem still won't be fixed, as all thoughts are turned to denial.
So what to do? Halderman drifts in the same direction as I've commented:
Halderman argued that secure software tends to come from companies that have a culture of taking security seriously. But it's hard to mandate, or even to measure, "security consciousness" from outside a company. A regulatory agency can force a company to go through the motions of beefing up its security, but it's not likely to be effective unless management's heart is in it.
It's completely meaningless to mandate, which is the flaw behind the joke of audit. But it is possible to measure. Here's an attempt by yours truly.
What's not clear as yet is how to incentivise companies to pursue that lofty goal, even if we all agree it is good.
How to cope with a financial system that looks like it's about to collapse every time bad news turns up? This is an issue that is causing a few headaches amongst the regulators. Here's some musings from Chris Skinner over a paper from the Financial Stability gurus at the Bank of England:
Third, the paper argues for policies that create much greater transparency in the system.

This means that the committees worldwide will begin “collecting systematically much greater amounts of data on evolving financial network structure, potentially in close to real time. For example, the introduction of the Office of Financial Research (OFR) under the Dodd-Frank Act will nudge the United States in this direction.
“This data revolution potentially brings at least two benefits.
“First, it ought to provide the authorities with data to calibrate and parameterise the sort of network framework developed here. An empirical mapping of the true network structure should allow for better identification of potential financial tipping points and cliff edges across the financial system. It could thus provide a sounder, quantitative basis for judging remedial policy actions to avoid these cliff edges.
“Second, more publicly available data on network structures may affect the behaviour of financial institutions in the network. Armed with greater information on counterparty risk, banks may feel less need to hoard liquidity following a disturbance.”
Yup. Real time data collection will be there in the foundation of future finance.
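To see why an empirical map of the network matters, here is a minimal sketch of the kind of tipping-point analysis the paper alludes to, as a toy threshold-contagion model. All bank names, exposures and capital figures are hypothetical, invented for illustration:

```python
# Toy interbank-contagion sketch: a bank fails when its losses on
# exposures to already-failed counterparties exceed its capital buffer.
# All figures are hypothetical.

def cascade(exposures, capital, initially_failed):
    """exposures[a][b] = amount bank a is owed by bank b.
    Returns the full set of failed banks once the cascade settles."""
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        for bank, buffer in capital.items():
            if bank in failed:
                continue
            losses = sum(amount
                         for counterparty, amount in exposures.get(bank, {}).items()
                         if counterparty in failed)
            if losses > buffer:
                failed.add(bank)
                changed = True
    return failed

# Hypothetical three-bank chain: A is owed 50 by B, B is owed 80 by C.
exposures = {"A": {"B": 50}, "B": {"C": 80}}
capital = {"A": 40, "B": 60, "C": 30}

# C's failure topples B (80 > 60), which in turn topples A (50 > 40).
assert cascade(exposures, capital, {"C"}) == {"A", "B", "C"}
```

With the same network but thicker capital buffers, the same shock stops at C — which is exactly the cliff-edge identification the regulators would want the data for.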
But have a care: you can't use the systems you have now. That's because if you layer regulation over policy over predictions over datamining over banking over securitization over transaction systems … all layered over clunky old 14th century double entry … the whole system will come crashing down like the WTC when someone flies a big can of gas into it.
The reason? Double entry is a fine tool at the intra-corporate level. Indeed, it was material in the rise of the modern corporate form, in the fine tradition of the Italian city states, longitudinal contractual obligations and open employment. But double entry isn't designed to cope with the transactional load of inter-company globalised finance. Once we go outside the corporation, the inverted pyramid gets too big, too heavy, and the forces crush down on the apex.
It can't do it. Triple entry can. That's because it is cryptographically solid, so it can survive the rigours of those concentrated forces at the inverted apex. That doesn't solve the nightmare scenarios like securitization spaghetti loans, but it does mean that when they ultimately unravel and collapse, we can track and allocate them.
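The core of triple entry is the signed receipt: one cryptographically signed record, held identically by payer, payee and issuer, *is* the entry in all three sets of books. Here is a minimal sketch of the idea; HMAC stands in for a proper public-key signature, and all keys and party names are hypothetical:

```python
import hashlib
import hmac
import json

# Triple-entry sketch: a single signed receipt shared by payer, payee
# and issuer. HMAC is used here only as a stand-in; a real system would
# use public-key digital signatures so anyone can verify the issuer.

def canonical(record):
    # Deterministic byte encoding, so all three parties hash the same thing.
    return json.dumps(record, sort_keys=True).encode()

def sign(key, record):
    return hmac.new(key, canonical(record), hashlib.sha256).hexdigest()

def verify(key, record, signature):
    return hmac.compare_digest(sign(key, record), signature)

issuer_key = b"issuer-secret"   # hypothetical issuer signing key
receipt = {"payer": "alice", "payee": "bob", "amount": 100, "unit": "EUR"}
signature = sign(issuer_key, receipt)

# All three parties hold the same (receipt, signature) pair;
# none of them can alter the entry without detection.
assert verify(issuer_key, receipt, signature)
tampered = dict(receipt, amount=1000)
assert not verify(issuer_key, tampered, signature)
```

Because the record is signed rather than merely recorded, it survives being passed between organisations — which is precisely the load that breaks double entry at the inverted apex.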
Message to the regulators: if you want your pyramid to last, start with triple entry.
PS: did the paper really say "More taxes and levies on banks to ensure that the system can survive future shocks;" … seriously? Do people really believe that Tobin tax nonsense?
We've long documented the failure of PKI and secure browsing to provide an effective solution to security needs. Now comes spectacular proof: sites engaged in carding, the trading of stolen credit card information, have always protected their trading sites with SSL certs of the self-signed variety. According to a brief informal search by Peter Gutmann of 'a bunch of generic "sell CVV", "fullz", "dumps" ones,' some of the CAs are now issuing certificates to carders.
This amounts to a new juvenile culinary phase in the target-rich economy of cyber-crime:
Phisher: I've come to eat your babies!
CA: Oh, yes, you'll be needing a certificate for that, $200 thanks!
Although the search was neither scientifically conducted nor verified, both Mozilla and the CA concerned indicated that the criminals can have their certs and eat them too. As long as they follow the same conditions as everyone else, that's fine.
Except, it's not. Firstly, it's against the law in almost all places to aid & abet a criminal. As Blackstone put it (via Wikipedia):
"AN accessory after the fact may be, where a person, knowing a felony to have been committed, receives, relieves, comforts, or assists the felon.17 Therefore, to make an accessory ex post facto, it is in the first place requisite that he knows of the felony committed.18 In the next place, he must receive, relieve, comfort, or assist him. And, generally, any assistance whatever given to a felon, to hinder his being apprehended, tried, or suffering punishment, makes the assistor an accessory. As furnishing him with a horse to escape his pursuers, money or victuals to support him, a house or other shelter to conceal him, or open force and violence to rescue or protect him."
The point made here by Blackstone, and translated into the laws of many lands, is that the assistance in question is not narrowly defined, but broad. If we are in some sense assisting in the commission of the crime, then we are accessories. For which there are penalties.
And, these penalties are as if we were the criminals. For those who didn't follow the legal blah blah above, the simple thing is this: it's against the law. Go to jail, do not pass go, do not collect $200.
Secondly, consider the security diaspora. Users were hoping that browsers such as Firefox, IE, etc, would protect them from phishing. The vendors' position, policy-wise and code-wise, is that their security mechanism for protecting users from phishing is to provide PKI certificates, which might evidence some level of verification of your counterparty. This reduces down to a single statement: if you are ripped off by someone who uses a cert against you, you might know something about them.
This protection is (a) ineffective against phishing, as shown every time a phisher downgrades HTTPS to HTTP, (b) now shared and available equally with the phishers themselves to assist them in their crime, and those phishers now apparently feel that (c) the protection that encryption offers against their enemies outweighs the legal threat of some identity being revealed to those enemies. Users lose.
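The downgrade in (a) works because browsers treat plain HTTP as acceptable: an sslstrip-style phisher simply rewrites https:// links to http:// before they reach the victim, and no certificate is ever checked. A minimal sketch of the client-side check that defeats it, assuming a hypothetical list of hosts previously seen over HTTPS (essentially what was later standardised as HSTS):

```python
from urllib.parse import urlparse

# Hosts this client has previously reached over HTTPS (an HSTS-style
# pin list; contents hypothetical). A plain-HTTP link to any of them
# is a downgrade and should be refused.
PINNED_HTTPS_HOSTS = {"bank.example.com", "mail.example.com"}

def is_downgraded(url):
    """True if the URL reaches a pinned host over plain HTTP."""
    parts = urlparse(url)
    return parts.scheme == "http" and parts.hostname in PINNED_HTTPS_HOSTS

assert is_downgraded("http://bank.example.com/login")       # stripped link
assert not is_downgraded("https://bank.example.com/login")  # intact
assert not is_downgraded("http://news.example.org/")        # never pinned
```

The point of the sketch is that the defence lies in refusing the downgrade, not in anything the certificate says about the counterparty — which is why issuing certs to phishers adds nothing for users.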
On open list in Mozilla's forum, the CA concerned saw no reason to address the situation. Other CAs also seem to issue to equally dodgy sites, so it's not about one CA. The general feeling in the CA-dominated sector is that identity within the cert is sufficient reason to establish a reasonable security protection, notwithstanding that history, logic, evidence and now the phishers themselves show such a claim is about as reasonable and efficacious as selling recipes for marinated babies.
It seems Peter and I stand alone. In some exasperation, I consulted with Mozilla directly. To little end; Mozilla also believe that the phishing community are deserving of certificates, they simply have to ask a CA to be invited into our trusted inner-nursery. I've no doubt that the other vendors will believe and maintain the same, an approach to legal issues sometimes known as the ostrich strategy. The only difference here being that Mozilla maintains an open forum, so it can be embarrassed into a private response, whereas CAs and other vendors can be embarrassed into silence.