January 28, 2010
the most magical question of all -- why are so many bright people fooling themselves about the science in information security?
It has been clear for a long time that information security is driven by perception more than is good for it, a notion I tried to turn into a theory in the market for silver bullets, building on some solid thinking by others on the economics of insufficient information. Here are some random snippets that anecdotally support the claim that security is dominated by perception.
Gunnar reports on Google, which was apparently subject to a cyber-attack by China. I didn't notice, probably because it doesn't pass the laugh test, but he collects all the security-blog-o-sphere stuff into a nice package:
Of course cyberattacks and the other issues raised by Google as rationale have been around for a long time, so why did they choose now as the time to threaten to pull out? ... First, we know that Google has been getting its butt kicked by Baidu.com. Baidu's search market share in 3Q09 was 77%. ... Google was in need of some positive PR to correct its worsening image (especially in Europe, where concerns about privacy are mounting on a daily basis). Google.cn is the goat that would be sacrificed ... It's no surprise that the NSA is getting interested in the story. One doesn't need to know much about US politics to realize that framing this as a national security issue is going to make Google's case for US government pressure on China much stronger ... No wonder Google has been hiring all those smart policy types with government experience ...
While Google is bandying around the phrase "national security" as a commercial weapon, Bruce Schneier is earning lots of airmiles by talking not about security but about what he calls *magical thinking*: TSA rules to make you safer from the last attack:
Of course not, the attacks are designed to get through whatever we're doing. The liquid bombers used liquid so now we screen liquids. This is a powder bomber using powders. They will look at what we do and do something different. There's sort of a bit of magical thinking about the last hour: it's not a more dangerous hour, it's the hour this guy happened to choose. I am not sure why the next guy can't choose the first hour or a different material or maybe even not an airplane. Focusing on the tactic might make us feel a little better but it's not going to make us any safer.
Or, what military types refer to as fighting the last war, or, building the Maginot Line. Which would support the notion that the real enemy that TSA is fighting is the home front, and perception is the weapon of choice.
Adam has a nice collection of the latest TSA madness, including this quote:
'It became necessary to destroy the town to save it,' a TSA major said today. He was talking about the decision by allied commanders to shock and awe the public regardless of civilian casualties, to rout al Qaeda.
I can't tell whether that is a spoof or not, but it seems on point. Here is more evidence of the perceptual nature of security: news that Microsoft's browser had a flaw has finally caused governments to sit up and do the unthinkable: warn people not to use a Microsoft product.
Nobody would ever notice if a government said "we don't use Linux because of security issues" or "we don't permit Apple because of ..." Microsoft's browbeating of the press and governments has been so successful that for two decades, nobody dared say "don't use Microsoft." Remember "Nobody ever got fired for buying IBM"?
Which unfortunately has been a great loss to Microsoft (as it was to IBM) because it hid the danger from them, too, until 1992. Now they are facing the long-term decline, shackled with their chains of past insecurity. Perception-wise, they will probably never be able to shake off the real public opinion, now that it has shifted, even with the great work listed at bottom.
Too late for their future shareholders, but maybe their past shareholders had the right idea? Markus Kuhn reports on a placebo bomb detector for the BBC, and discovers it is testably indistinguishable from any other random appliance purchased at the local Dixons (consumer electronics store):
There is no way in which this device could be programmed to distinguish the many different substances that the ADE651 manufacturer claimed it could, not to mention that any useful interaction with such an LC circuit would require a transmitter antenna, a power source, and lots of other components that the ADE651 appears to lack.
These things sell for around 40,000 pounds sterling each, in quantity, and the Iraqi government swears by them. OK, whatever. Compelling proof ... that the power of the placebo is essential to unlock the minds of the (human) bomb detectors who do the real job? You be the judge. What nobody has yet answered for me is why the TSA has not purchased them -- if they are America's department for magical thinking, why not buy such things?
The devices contain no power source ("powered by the user's static electricity", no battery), resemble very much a dowsing rod, and generally leave much to be desired regarding a plausible operating principle or performance in repeatable double-blind trials. There are several such military dowsing rods on the market.
And they won't contribute to global warming! So real security (where "real" means, we have evidence that this is how people think, act and purchase) is as much about placebo devices as anything else. Here's the most magical question of all: why is an entire generation of crypto/security/geeks fixated on the technical workings of a device? Insisting that it operate to lab specs? When all the evidence from the field indicates that it doesn't matter much if at all?
Here's another outstanding example: last month there was a series of crypto-break news in GSM phones. Here's a summary from emergentchaos's Mordaxus.
Orr Dunkelman, Nathan Keller, and Adi Shamir have released a paper showing that they've broken KASUMI, the cipher used in encrypting 3G GSM communications. KASUMI is also known as A5/3, which is confusing because it's only been a week since breaks on A5/1, a completely different cipher, were publicized. So if you're wondering if this is last week's news, it isn't. It's next week's news.
(Except it's last month's news.) OK, joking aside: so what? GSM phones use encryption to stop the paparazzi recording your love-chat, stop neighbours hearing your shopping list, and stop spoofers stealing GSM minutes. As long as they do that, why aren't we happy with a 40-bit crypto response to a 20-bit crypto threat?
(In 1994 numbers, etc, just add water for 16 years of crypto-flation.)
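The arithmetic behind that quip can be sketched quickly. This is a crude illustration, not a standard formula: assume attacker compute doubles every 18 months (Moore's law), and each doubling erodes one bit of brute-force resistance. The function name and figures are my own.

```python
def effective_bits(nominal_bits, years_elapsed, doubling_months=18):
    """Rough effective key strength after attacker compute grows.

    Each doubling of attacker compute removes one bit of brute-force
    resistance, i.e. strength shrinks by 12/doubling_months bits per year.
    """
    return nominal_bits - years_elapsed * 12.0 / doubling_months

# A "40 bit" design rated in 1994, measured 16 years later in 2010:
print(effective_bits(40, 16))  # about 29.3 bits remaining
```

On this crude model, 1994's 40-bit response is still worth roughly 29 bits today -- comfortably above a 20-bit threat, which is rather the point.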
It will be interesting to see the response from the GSM Association. They have the opportunity to show leadership. If they recognize that this is a real problem, reassure us that it's not a catastrophe, and show that they're taking it seriously, then this can be an all-around good thing for them and us.
We're all adults (well, okay, most of us are adults and act like adults some of the time), and if we know that there will be an upgrade in a few years, then that's great. We lived through the WEP issues. We are living through the SSL evil proxy issues. This is less acute than either of those. But we need to have some assurance that in a few years, we'll just get wireless devices with a safety net.
I don't mean to pick on mordaxus here, but this typifies an entire security industry: absolute obsession with an apparent security rating (measured in bits of crypto strength) and an almost willful blindness to the environment of choice. Let's list how safe we are because of GSM's fine security design:
- Every phone provides a complete and perfect location and relationship tracking device for all citizens [one, two, three, four], and we are told on good authority that we should be worried when they aren't so good at tracking, according to Kuhn's colleague Richard Clayton,
- The conversation is only encrypted over the airwaves to the nearest base station (which has minimal security in it, if those "buy your own base-station" adverts are correct),
- Phones are probably programmable over the air via various techniques (undocumented, elusive, insert your conspiracy theory here about advice to take out your battery when attending a secret meeting, etc etc), and
- The entire infrastructure doesn't really have a lot of security, and that's purposeful.
What is the "real problem" that Mordaxus expects them to spot? What catastrophe? It's not as if we need to speculate here, we actually have real evidence: We know that when they were broken 12 years ago by Lucky Green ... nothing happened. It didn't change our security situation one iota.
Their challenge is to have a response before this news metastasizes into a common perception that 3G crypto is worthless.
Right. If we have no security argument, we also are left arguing on perception.
There are some out there who think they can use psychology to assess our current security thinking. Perhaps they can answer the most magical question of all: why are the world's top security sellers so quick to damn a crypto algorithm that has lost a few bits, like MD5, when the world's top security buyers are happily purchasing placebo devices with 5km ratings? Or cell-phones with 40-bit crypto? And are apparently happy with their choice?
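For those keeping score, here is the rough arithmetic behind "lost a few bits", sketched with stated assumptions: by the generic birthday bound, an n-bit digest gives about n/2 bits of collision resistance, and published MD5 collision attacks have been reported at somewhere around 2^24 operations -- that figure varies by attack and is quoted only as an order of magnitude.

```python
def collision_bits(digest_bits):
    """Generic birthday-bound collision resistance, in bits: n/2."""
    return digest_bits / 2

# MD5 has a 128-bit digest, so 64 bits of generic collision resistance:
print(collision_bits(128))  # 64.0
# Published collision attacks are reported at roughly 2^24 operations,
# i.e. some 40 bits below the generic bound -- "a few bits" indeed.
```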
Let's face it: security thought of as a science has failed; it is all marketing, all perception, all religion. The good news is that this meme seems finally to be getting some traction in the scientific community: "So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users" by Cormac Herley, who works for, of all people, Microsoft Research. Finally, we have the paper that says what we all knew:
It is often suggested that users are hopelessly lazy and unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users' rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual threats, and fully 100% of certificate error warnings appear to be false positives.
Read that if you think there is a place for science in information security. On the other hand, if you think information security is something else, you are better off reading something on creative journalism, public relations, politics, marketing, ...
Posted by iang at January 28, 2010 02:34 PM
Nice idea forgoing the valid certificate for HTTPS. Are you using it as another way to make your point?
Probably not wanting to pay the "Certificate Authority" cartel, primarily operated by CIA/NSA-contractor spinoffs of the Verisign/SIAC type -- companies that almost surely have the ability to do man-in-the-middle viewing of "encrypted" traffic via the backbone providers, which are now well documented to allow nearly unlimited tapping requests?
Ha ha, it's not like the page needs to use https. It's a blog, for God's sake.
As far as the security community is concerned, it is simple denial. Consumers want to use the computer to perform tasks, not become part of daily maintenance rituals.
Since the security community doesn't know how to fix a failed security model, they rationalize point solutions in order to milk the cash cow. Better to promote bad science than no science for that noble cause.
Why denial? The real science is more likely found in the writings of Roger Schell and the paper linked below.
The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments
Add to your list things that everyone thinks work but haven't been validated ...
1) fingerprints - see Simon A Cole for details.
2) sniffer dogs for drugs/explosive (court case in London, pending)
Are we a science? I would suggest that if you follow Kuhn, we fit the definition of a proto-science very, very well.
As normal I'm late to the party on this one...
As you point out, we have a bit of an issue with "security" in its various guises.
And that is Problem #1: security means different things to different people.
First off, in English we have two words, "security" and "safety"; in French they have but one, "sécurité". This fundamental language difference affects the way people think about the issue, and ignoring it leads to deep-seated and fundamental problems.
With regard to "Information Security", we don't even know what "information" is (try explaining why it differs from data without using the self-recursive "meta" words).
Oh, and when you've done that, ask yourself why both data and information are not knowledge.
This is Problem #2: what is information, and what are its properties?
I would actually argue that the "tangible" physical world we know and love to touch is in fact a subset of the "intangible" information world.
Thus not only do we not know what it is, we cannot see much of it from the subset of it we exist in.
If you think I've been hitting the "wacky backy", no I haven't. Science moves forward a number of ways, but essentially it is the process of gathering, organising and verifying information in a form that we call knowledge.
You get the old line that "Newton discovered gravity". Well, no he did not; gravity appears to be an integral characteristic of our physical world.
What Newton did was observe, theorise, test and accept/reject each proto-theory until he had a mathematical model that appeared to fit with his observations and those of others.
We now know it is not accurate, but it suffices to get us around the solar system. It is why there is the truism about physics being "a series of more accurate lies, each closer to God's dice than the last".
Which brings me around to the point Alex refers to about "we fit the definition of a proto-science very, very well".
We are currently in Aristotle's version of science, not Newton's. The reason for this is simple,
Which is Problem #3: what are information's measurands?
The answer is: outside of the nebulous "Information Entropy", not much apart from the axiom of a bit.
Oh, and a researcher at IBM worked out the minimum energy required to store or move a bit of information in our "tangible" physical world. But people forget that that is not intangible information, just the physical image of a part of it, like a shadow, a ghost photograph, or a written description of a smell.
Thus we have no reliable metrics; we just mooch around in "best practice" and throw a few effectively meaningless statistics around.
Oh, and the joke of them all is the question "What is a random number?". We use expressions such as "non-deterministic" or words such as "probabilistic", or quips about "living in a state of sin".
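The one measurand we do have can at least be computed. A minimal sketch of Shannon entropy in bits per symbol; the function name and examples are illustrative only:

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Shannon entropy in bits per symbol: H = sum of p * log2(1/p)."""
    counts = Counter(data)
    total = len(data)
    return sum((c / total) * math.log2(total / c) for c in counts.values())

print(shannon_entropy("aaaa"))  # 0.0 -- fully predictable, zero bits
print(shannon_entropy("abab"))  # 1.0 -- one bit per symbol
print(shannon_entropy("abcd"))  # 2.0 -- two bits per symbol
```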
One little-talked-about aspect of science is "borrowing": an advance in one area of science gives rise to a new perspective in that area. "Your better class of thinker" (obligatory Douglas Adams reference ;) realises that the "new light" can be used to illuminate other "dark corners" and thus further enlightenment in other fields of endeavour.
Well, the nearest field of endeavour we have to information is Quantum Physics...
Which might account for some of the seemingly odd viewpoints of the likes of Seth Lloyd (the Universe is a computer) and Roger Penrose and others (quantum consciousness), who are starting to believe that our ability to think may be quantum in nature...
And we do know from the likes of photosynthesis (google bacteriochlorophyll, or BChl) and sight / smell that the biological part of our physical world has been happily using quantum effects for... well, longer than we can remember ;)
But getting back to the "security game": a number of the things you refer to are examples of the classic con.
Which is Problem #4: the security industry is currently a con game.
That is, you find a mark with resources you want and work out how to dress up nonsense in a way they want to believe (i.e. selling the finest of invisible clothes).
Through to inflating a need to create a market for a product you have (FUD marketing).
This works simply because there are no reliable metrics to test claims.
But it also, in some areas of security, allows the "operator", not the "system", to be blamed when a system fails.
That is, you can claim the operator did not have sufficient faith in the system...
Which is great for very dubious systems (dowsing for bombs). You can also throw in fringe research papers to lend credibility to your otherwise bogus claims.
However, there is another issue when it comes to "borrowing": it is not of necessity bi-directional.
Which is Problem #5: although lessons in intangible security can be applied to tangible security, the opposite is not true.
The reason for this is "non-locality" and "no-cost force multipliers" due to "no-cost duplication" for criminals and other ne'er-do-wells.
In the tangible world you have very real financial costs to duplicate an object, so uniqueness has some "value" meaning.
Likewise, tangible force multipliers (tools) are constrained by duplication costs and usage costs (power).
Finally, to make illicit use of a tangible object, you or a force multiplier have to be local to it (physically present), which adds other constraints.
None of these constraints apply to the ne'er-do-well in the intangible information world. From the other side of the world they send information, and the target machines duplicate it, process it, and send the results back. The ne'er-do-well pays only their Internet cost to launch the attack (say 1 USD, if that).
Which is one of the reasons why the usual actuarial processes (from the insurance industry) that we use for risk analysis just do not work in the information world...
Which is Problem #6: rather than seek out the properties of information that can be used for reliable and repeatable metrics, we choose to borrow models from the discredited world of economics.
We have to actually ask ourselves: are economic models, based on a tangible world with tangible goods and tangible constraints and only valid in a very limited way, actually applicable in an intangible world with intangible goods not subject to the tangible constraints?
For instance, examination of the Dot Com boom/bust of the Internet, and before it the telephone, telegraph and railways, shows us that "free market" economics do not apply.
Further, we know that the tangible-world cost/distance metric does not apply to the Internet; thus there are not really physically separate markets which allow for competitive growth.
As an example, take Google's problems in China. China is, due to a number of issues, effectively a separate market (for those inside). Google chose to be hamstrung as the price of being able to play in that market.
Google is now upset that it is losing out to a competitor inside that market that is not hamstrung...
Thus the simple "free market" rules do not apply, which is to Google's cost.
We have to ask ourselves the question,
Are we going to get sufficient insight from using inappropriate models borrowed from economics and insurance to turn intangible-world security from a proto-science into a science?
Or are we going to be better served using models that do not have underlying hidden assumptions from the tangible world?
I suggest the latter is going to be best in the long run.
The bottom line is that information is intangible; we know little or nothing about even the tiny subset we experience in our tangible world. This lack of understanding of its properties means we have no usable metrics. We cannot borrow from most other fields of endeavour, because they all have low-level implicit tangible assumptions and constraints. And for ne'er-do-wells, the lack of metrics and tangible constraints is significantly to their advantage.
As was once said about (non American) football "It's a funny old game", and for those of us playing we most certainly are living the curse of "interesting times".
Google is the vanguard of a failed capitalist state, now that their friends are interested in hoisting the China threat as a means of diverting the people away from the economic failures. The next step will be a war with Iran over some idiotic idea that we can tell them what to do with their destiny. If China wants to hack Google, let them have at it; if Google doesn't appreciate the Chinese regime, then get out of the marketplace. Real people don't care about the evil empire of Google, Microdick, the United States Government, or the Chinese Government; in fact the vast majority of time is spent getting around these fools or ignoring them. If Islamic terror is a real threat, then boycotting Saudi and Nigerian oil might be highly suggested; if Hugo and his Bolivarian desires bug you, then don't buy their products -- boycott them and ignore them. The threats we face as simple people have not been reflected in any of the political events, because if that were to happen, protests about the cost of good beer and vodka would be the headlines. Anti-government entities in China would not select Google for anything, so perhaps, as the cost of entry into the Chinese market, Google agreed like Microdick to surrender their source code and then backed out. That sounds more plausible.