October 06, 2006
Why security training is really important (and it ain't anything to do with security!)
Lynn mentioned in comments yesterday:
I guess I have to admit to being on a roll.
:-) Lynn grasped the nexus between the tea-room and the systems room yesterday:
One of the big issues is inadequate design and/or assumptions ... in part, failing to assume that the operating environment is an extremely hostile environment with an enormous number of bad things that can happen.
What I didn't stress was the reason why security training is so important -- more important than your average CSO realizes. Lynn spots it above: reliability.
The reason we benefit from teaching security (think Fight Club here, not American Football) is that it clearly teaches how to build reliable systems. The problem addressed here is that unreliable systems fall foul of statistical enemies, which are individually weak and few and far between. But when you get to big systems and lots of transactions, those enemies become significant, and systems without reliability die the death of a thousand cuts.
Security training solves this because it takes the statistical enemy up several notches and makes it apparent and dangerous even in small environments. And, once a mind is tuned to thinking of the attack of the aggressor, dealing with the statistical failure is easy: it is just an accidental case of what an aggressor could do deliberately.
I would even assert that the enormous amounts of money spent attempting to patch an inadequate implementation can be orders of magnitude larger than the cost of doing it right in the first place.
This is the conventional wisdom of the security industry -- and I disagree. Not because it doesn't make sense, nor because it isn't true (it makes sense! and it's true!) but because time and time again, we've tried it and it has failed.
The security industry is full of examples where we've spent huge amounts of money on up-front "adequate security," and it's been wasted. It is not full of examples where we've spent huge amounts of money up front, and it's paid off...
Partly, the conventional security industry wisdom fails because it is far too easy for us to hang it all out in the tea-room and make like we actually know what we are talking about in security. It's simply too easy to blather such received wisdom. In the market for silver bullets, we simply don't know, and we share that absence of knowledge with phrases and images that lose meaning through repetition. In such a market, we end up selling the wrong product for a big price -- payment up front, please!
We are better off -- I assert -- saving our money until the wrong product shows itself to be wrong. Sell the wrong product by all means, but sell it cheaply. Live life a little dangerously, and let a few frauds happen. Ride the GP curve up and learn from your attackers.
But of course, we don't really disagree, as Lynn immediately goes on to say:
Some of this is security proportional to risk ... where it is also fundamental that what may be at risk is correctly identified.
To close with reference to yesterday's post: Security talk also easily impresses the managerial class, and this is another reason why we need "hackers" to "hack", to use today's unfortunate lingo. A breach of security, rendered before our very servers, speaks for itself, in terms that cut through the sales talk of the silver bullet sellers. A breach of security is a hard fact that can be fed into the above risk analysis, in a world where Spencarian signals abound.
Posted by iang at October 6, 2006 02:35 PM
note that some of the security issues were extremely well understood in the 60s & 70s ... where systems designed for commercial timesharing operation had some fundamental integrity operations built into the basic operation ... and had little of the vulnerabilities seen in many of the more modern systems.
i've frequently contended that it involves many of the original design and environment assumptions. many of the more modern systems came from a genre of stand-alone, unconnected, personal systems sitting on somebody's kitchen table. there was little reason to design in countermeasures to network-based hostile attacks. many of these systems also developed a large base of applications that depended on taking over the complete system for operation (like the game market). later some of these systems were adapted to closed networks for group collaboration ... again not a basically hostile environment requiring any attack countermeasures.
it was when a large number of these systems started being attached to open, hostile networking environment that a lot of the problems started showing up ... since none of the original design assumptions included operating in such an environment.
i would contend that it is somewhat analogous to taking one of the original horseless carriages and placing it on the track in the middle of an indy 500 race.
as to the frequent failures of many of the upfront, designed-in security efforts ... one could claim that there was inadequate understanding of the threats and fundamental security principles. of course this can also be attributed to not getting any educational grounding in threats and security.
the counterexample is that many of the systems from the 60s and 70s designed for commercial timesharing (where there is an assumption that different users would attack each other, given the chance) ... have had a much, much lower rate of vulnerabilities.
I was involved in cp67 and vm370 ... which originated on the 4th floor of 545 tech sq and were used extensively by commercial timesharing service bureaus
and multics was done on the 5th floor of the same bldg.
multics security reference:
until recently buffer overflows were a dominant form of network attacks
which have somewhat given way to network attacks using various forms of automatic scripting vulnerabilities.
however, a paper from a couple years ago cited the lack of any buffer overflow vulnerabilities in multics
http://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from the Multics Security Evaluation
http://www.garlic.com/~lynn/2002l.html#45 Thirty Years Later: Lessons from the Multics Security Evaluation
2.2 Security as Standard Product Feature
2.3 No Buffer Overflows
2.4 Minimizing Complexity
This Research Report consists of two invited papers for the Classic Papers section of the 18th Annual Computer Security Applications Conference (ACSAC) to be held 9-13 December 2002 in Las Vegas, NV. The papers will be available on the web after the conference at
The first paper, Thirty Years Later: Lessons from the Multics Security Evaluation, is a commentary on the second paper, discussing the implications of the second paper's results on contemporary computer security issues. Copyright will be transferred on the first paper.
The second paper, Multics Security Evaluation: Vulnerability Analysis is a reprint of a US Air Force report, first published in 1974. It is a government document, approved for public release, distribution unlimited, and is not subject to copyright. This reprint does not include the original computer listings. They can be found at
... snip ...
I have recently participated in a small conference on industrial esp..., sorry, business intelligence, where a sysadmin-type guy gave a presentation about how important various security practices (from "conventional wisdom") are and how they all have failed him in practice (especially user training). Yes, you read it right. I couldn't resist giggling.
Now, when I told him about these things we have discussed so much: about learning from your adversaries instead of second-guessing them in the tea-room and about thinking about the motivations and rational behavior of participants in the security process (why do people stick passwords on their monitors?), he was profoundly surprised. He said I had shown him an entirely new facet of the problem of security.
So, I guess, there are still quite a few security professionals around, for whom security is just synonymous with excessive paranoia. And your post sheds some light on why there is a demand for them: because by fighting against Rational Adversary they manage to defeat Blind Chance.
Chalk one up for rational behavior that is not necessarily conscious. In other words, for people doing the right thing for all the wrong reasons... :-)
in combination with changing the paradigm for x9.59
http://www.garlic.com/~lynn/aadsm25.htm#41 Why security training is really important (and it ain't anything to do with security!)
we started looking at a generalized authentication mechanism that would replace pin/password ... which became the aads chip strawman in the 1998 timeframe
now if you had a hardware token that never divulged its private key for authentication ... it would be "something you have" authentication ... and it would have to be physically obtained in order to do some fraud. it wouldn't be susceptible to "yes card" type exploits because it doesn't use static data
and, in the case of x9.59 transactions ... the actual transaction was signed and authenticated ... so it wouldn't be vulnerable to any kind of mitm-attack that might still occur with cards doing dynamic data but authenticating separately from doing the transaction.
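The distinction above -- signing the transaction itself rather than authenticating a session alongside it -- can be sketched in a few lines. This is a hypothetical illustration only: the names are invented, and an HMAC over a per-token secret stands in for the asymmetric signature a real AADS-style token would produce (the point being that the key never leaves the token, and the signature covers the actual transaction contents).

```python
import hmac, hashlib, json

# Assumption: this secret is sealed inside the hardware token and is
# never divulged; here it is a module constant purely for illustration.
TOKEN_SECRET = b"never-leaves-the-token"

def token_sign(transaction: dict) -> str:
    """The token signs the actual transaction contents (x9.59 style)."""
    msg = json.dumps(transaction, sort_keys=True).encode()
    return hmac.new(TOKEN_SECRET, msg, hashlib.sha256).hexdigest()

def bank_verify(transaction: dict, signature: str) -> bool:
    """The verifier recomputes the signature over the transaction it sees."""
    return hmac.compare_digest(token_sign(transaction), signature)

txn = {"payee": "merchant-42", "amount": 100}
sig = token_sign(txn)
assert bank_verify(txn, sig)            # the honest transaction verifies

# A man-in-the-middle alters the amount but can only replay the old
# signature, because it cannot sign without the token's key:
tampered = {"payee": "merchant-42", "amount": 9999}
assert not bank_verify(tampered, sig)   # tampering is detected
```

A card that merely authenticates separately from the transaction would pass the first check but offer nothing that binds the signature to the amount, which is exactly the mitm gap described above.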
so from the 3-factor authentication model:
* something you have
* something you know
* something you are
so a card represents "something you have" authentication. now, normally, multi-factor authentication is assumed to be more secure when the different factors have independent vulnerabilities or failure modes. in the card case, pin/password is nominally a countermeasure to lost/stolen card. however, lack of careful design can result in the "yes card" exploit ... negating any assumption about independent failure modes.
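The "yes card" failure mode described above comes down to where the pin check happens. The sketch below is a hypothetical toy (all class and function names invented) showing the broken design in which the terminal asks the card whether the pin is correct -- so a counterfeit card built from skimmed static data simply always answers yes, and both factors fail together rather than independently.

```python
class HonestCard:
    """Legitimate card: the pin ("something you know") is checked on-card."""
    def __init__(self, pin: str):
        self._pin = pin

    def verify_pin(self, entered: str) -> bool:
        return entered == self._pin

class YesCard:
    """Counterfeit built from skimmed static data: answers yes to any pin,
    so the lost/stolen-card countermeasure is silently negated."""
    def verify_pin(self, entered: str) -> bool:
        return True

def terminal_accepts(card, entered_pin: str) -> bool:
    # The broken design: the terminal trusts the card's own answer.
    return card.verify_pin(entered_pin)

assert not terminal_accepts(HonestCard("1234"), "0000")  # wrong pin rejected
assert terminal_accepts(YesCard(), "0000")               # any pin accepted
```

With the factors coupled like this, "something you have" plus "something you know" collapses back to a single factor -- the counterfeit card -- which is the negated independent-failure-mode assumption described above.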
so there is the problem with people having to deal with scores (or possibly hundreds) of passwords, leading to the password post-it note scenario. in theory card authentication could replace all the pin/passwords. however, in the existing institutional-centric model, that eventually leads to each of the scores (or hundreds) of passwords for a person being replaced with a card. this, in itself, is an untenable solution ... but if each card then has a different pin/password ... then the person has to write the appropriate pin/password on each card (again negating any security assumptions related to multi-factor authentication).
so one of the efforts in the aads chip strawman was looking at what might be necessary to switch from an institutional-centric model to a person-centric model ... where a person might have a single (or extremely few) hardware tokens ... that they could register every place there was a requirement for authentication (potentially resulting in a person having only a single hardware token and a single pin to remember)
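The person-centric model above can be sketched as a registration flow: the person presents the same token's public key to every institution, rather than receiving a separate card from each one. This is a hypothetical illustration (names and the string "public key" representation are invented for the sketch, not part of the aads design).

```python
from dataclasses import dataclass, field

@dataclass
class Institution:
    """Each institution keeps its own registry of account -> public key."""
    name: str
    registry: dict = field(default_factory=dict)

    def register(self, account: str, public_key: str) -> None:
        # Person-centric: the person brings their own token's key.
        self.registry[account] = public_key

    def authenticates(self, account: str, public_key: str) -> bool:
        return self.registry.get(account) == public_key

# The one key held by the person's single hardware token:
my_public_key = "pk:alice-token-01"

banks = [Institution("bank-a"), Institution("bank-b"), Institution("bank-c")]
for bank in banks:
    bank.register("alice", my_public_key)   # same token registered everywhere

assert all(b.authenticates("alice", my_public_key) for b in banks)
assert not banks[0].authenticates("alice", "pk:some-other-key")
```

Registration only ever involves the public key, so adding the hundredth institution costs nothing extra: one token, one pin, any number of relying parties -- the inverse of the one-card-per-institution model criticized above.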
misc. past posts mentioning what enablers would be needed to transition to a person-centric authentication infrastructure
http://www.garlic.com/~lynn/aadsm12.htm#0 maximize best case, worst case, or average case? (TCPA)
http://www.garlic.com/~lynn/aadsm19.htm#14 To live in interesting times - open Identity systems
http://www.garlic.com/~lynn/aadsm19.htm#41 massive data theft at MasterCard processor
http://www.garlic.com/~lynn/aadsm19.htm#47 the limits of crypto and authentication
http://www.garlic.com/~lynn/aadsm20.htm#6 the limits of crypto and authentication
http://www.garlic.com/~lynn/aadsm20.htm#36 Another entry in the internet security hall of shame
http://www.garlic.com/~lynn/aadsm20.htm#41 Another entry in the internet security hall of shame
http://www.garlic.com/~lynn/aadsm21.htm#2 Another entry in the internet security hall of shame
http://www.garlic.com/~lynn/aadsm22.htm#12 thoughts on one time pads
http://www.garlic.com/~lynn/aadsm24.htm#49 Crypto to defend chip IP: snake oil or good idea?
http://www.garlic.com/~lynn/aadsm24.htm#52 Crypto to defend chip IP: snake oil or good idea?
http://www.garlic.com/~lynn/aadsm25.htm#7 Crypto to defend chip IP: snake oil or good idea?
http://www.garlic.com/~lynn/aadsm8.htm#softpki16 DNSSEC (RE: Software for PKI)
http://www.garlic.com/~lynn/2003e.html#22 MP cost effectiveness
http://www.garlic.com/~lynn/2003e.html#31 MP cost effectiveness
http://www.garlic.com/~lynn/2003o.html#9 Bank security question (newbie question)
http://www.garlic.com/~lynn/2004e.html#8 were dumb terminals actually so dumb???
http://www.garlic.com/~lynn/2004q.html#0 Single User: Password or Certificate
http://www.garlic.com/~lynn/2005g.html#8 On smartcards and card readers
http://www.garlic.com/~lynn/2005g.html#47 Maximum RAM and ROM for smartcards
http://www.garlic.com/~lynn/2005g.html#57 Security via hardware?
http://www.garlic.com/~lynn/2005m.html#37 public key authentication
http://www.garlic.com/~lynn/2005p.html#6 Innovative password security
http://www.garlic.com/~lynn/2005p.html#25 Hi-tech no panacea for ID theft woes
http://www.garlic.com/~lynn/2005r.html#25 PCI audit compliance
http://www.garlic.com/~lynn/2005r.html#31 Symbols vs letters as passphrase?
http://www.garlic.com/~lynn/2005t.html#28 RSA SecurID product
http://www.garlic.com/~lynn/2005u.html#26 RSA SecurID product
http://www.garlic.com/~lynn/2006d.html#41 Caller ID "spoofing"
http://www.garlic.com/~lynn/2006o.html#20 Gen 2 EPC Protocol Approved as ISO 18000-6C
http://www.garlic.com/~lynn/2006p.html#32 OT - hand-held security
http://www.garlic.com/~lynn/2006q.html#3 Device Authentication - The answer to attacks launched using stolen passwords?