August 17, 2011

How Liability is going to kill what little is left of Internet security…

Long-term readers will know that I have often written of the failure of the browser vendors to provide effective security against phishing. I long ago predicted that nothing would change until the class-action lawsuits came. Now signs are appearing that this is coming to pass:

That's changing rapidly. Recently, Sony faced a class action lawsuit for losing the private information of millions of users. And this week, it was reported that Dropbox is already being sued for a recent security breach of its own.

It's too early to know if these particular lawsuits will get anywhere, but they're part of a growing trend. As online services become an ever more important part of the American economy, the companies that create them increasingly find that security problems are hitting them where it really hurts: the bottom line.

See also the spate of lawsuits against banks over losses; although it isn't the banks' direct fault, they are complicit in pushing weak security models, and a law will come to make them completely liable. Speaking of laws:

Computer security has also been an area of increasing activity for the Federal Trade Commission. In mid-June, FTC commissioner Edith Ramirez testified to Congress about her agency's efforts to get companies to beef up their online security. In addition to enforcing specific rules for the financial industry, the FTC has asserted authority over any company that makes "false or misleading data security claims" or causes harm to consumers by failing to take "reasonable security measures." Ramirez described two recent settlements with companies whose security vulnerabilities had allowed hackers to obtain sensitive customer data. Among other remedies, those firms have agreed to submit to independent security audits for the next 20 years.

Skip over the sad joke at the end. Timothy B. Lee of Ars Technica, author of those words, did more than just recycle other stories; he actually did some digging:

[We asked] Alex Halderman, a computer science professor at the University of Michigan, to help us evaluate these options. He argued that consumer choice by itself is unlikely to produce secure software. Most consumers aren't equipped to tell whether a company's security claims are "snake oil or actually have some meat behind them." Security problems therefore tend not to become evident until it's too late.

But he argued the most obvious regulatory approach—direct government regulation of software security practices—was also unlikely to work. A federal agency like the FTC has neither the expertise nor the manpower to thoroughly audit the software of thousands of private companies. Moreover, "we don't have really widely regarded, well-established best practices," Halderman said. "Especially from the outside, it's difficult to look at a problem and determine whether it was truly negligent or just the kind of natural errors that happen in every software project."

And when an agency found flaws, he said, it would have trouble figuring out how urgent they were. Private companies might be forced to spend a lot of time fixing trivial flaws while more serious problems get overlooked.

(Buyers don't know. Sellers don't know.)

So what about liability? I, like others, have recognised that liability will eventually arise:

This is a key advantage of using liability as the centerpiece of security policy. By making companies financially responsible for the actual harms caused by security failures, lawsuits give management a strong motivation to take security seriously without requiring the government to directly measure and penalize security problems. Sony allegedly laid off security personnel ahead of this year's attacks. Presumably it thought this would be a cost-saving move; a big class action lawsuit could ensure that other companies don't repeat that mistake in future.

But:

Still, Halderman warned that too much litigation could cause companies to become excessively security-conscious. Software developers always face a trade-off between security and other priorities like cost and time to market. Forcing companies to devote too much effort to security can be as harmful as devoting too little. So policymakers shouldn't focus exclusively on liability, he said.

Actually, it's far worse. Figure out some problem, then go to a company and mention that this issue exists. The company will ignore you. Mention liability, and the company will immediately close ranks and deny-by-silence any potential liability. Here's a variation on the same theme, written up close by, concerning privacy laws:

...For everything else, the only rule for companies is just “don’t lie about what you’re doing with data.”

The Federal Trade Commission enforces this prohibition, and does a pretty good job with this limited authority, but risk-averse lawyers have figured out that the best way to not violate this rule is to not make explicit privacy promises at all. For this reason, corporate privacy policies tend to be legalistic and vague, reserving rights to use, sell, or share your information while not really describing the company’s practices. Consumers who want to find out what’s happening to their information often cannot, since current law actually incentivizes companies not to make concrete disclosures.

Likewise with liability: if the problem is known beforehand, it is far easier to slap on a claim of gross negligence, which in simple layman's terms means triple damages. Hence, companies have a powerful incentive to ignore liability completely. As above with privacy, where companies are incentivised not to make concrete promises, so it comes to pass with security in general.

Try it. Figure out some user-killer problem in some sector, and go talk to your favourite vendor. Mention damages, liability, etc., and up go the shutters. No word, no response, no acknowledgement. And so, the problem(s) will never get fixed. The fear of liability is greater than the fear of users, competitors, change, even fear itself.

Which pretty much guarantees a class-action lawsuit one day. And the problem still won't be fixed, as all thoughts are turned to denial.

So what to do? Halderman drifts in the same direction as my own comments:

Halderman argued that secure software tends to come from companies that have a culture of taking security seriously. But it's hard to mandate, or even to measure, "security consciousness" from outside a company. A regulatory agency can force a company to go through the motions of beefing up its security, but it's not likely to be effective unless management's heart is in it.

Mandating it is completely meaningless, which is the flaw behind the joke of audit. But measuring it is possible. Here's an attempt by yours truly.

What's not clear as yet is how to incentivise companies to pursue that lofty goal, even if we all agree it is good.

Posted by iang at August 17, 2011 11:21 AM | TrackBack
Comments

I think you fail to mention that public disclosure of security holes is also an option to force companies into action.

Posted by: Mark S at October 8, 2011 03:31 AM