July 29, 2010

The difference between 0 breaches and 0+delta breaches

Seen on the net, by Dan Geer:

The design goal for any security system is that the number of failures is small but non-zero, i.e., N>0. If the number of failures is zero, there is no way to disambiguate good luck from spending too much. Calibration requires differing outcomes.

I've been trying for years to figure out a nice way to describe the difference between 0 failures, and some small number N>0 like 1 or 2 or 10 in a population of a million.

Dan might have said it above: If the number of failures is zero, there is no way to disambiguate good luck from spending too much.

Has he nailed it? It's certainly a lot tighter than my long efforts ... Once we get that key piece of information down, we can move on. As he does:

Regulatory compliance, on the other hand, stipulates N==0 failures and is thus neither calibratable nor cost effective. Whether the cure is worse than the disease is an exercise for the reader.

An insight! For regulatory compliance, I'd substitute public compliance, which includes all the media attention and reputation attacks.
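
To make Dan's calibration point concrete, here is a minimal sketch (Python, with hypothetical numbers): if you observe zero failures, true failure rates spanning orders of magnitude, including exactly zero, are all quite consistent with what you saw, so the observation cannot tell you whether the spend was right or far too much. A handful of failures would pin the rate down immediately.

    # Hypothetical numbers: how plausible is "zero failures observed"
    # under several very different true failure rates?
    n = 1_000_000  # say, a million transactions with no breach observed

    for true_rate in (1e-6, 1e-7, 1e-8, 0.0):
        p_zero = (1.0 - true_rate) ** n   # chance of seeing zero failures
        print(f"true rate {true_rate:g}: P(zero failures) = {p_zero:.2f}")

    # All four rates, from one-in-a-million down to exactly zero, leave a
    # substantial chance of observing no failures at all, so zero observed
    # failures cannot distinguish adequate controls from over-spending.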

Posted by iang at July 29, 2010 12:29 AM | TrackBack
Comments

Hi Ian,

I'm not sure which "regulatory compliance" Dan refers to, but for CC, regardless of the assurance level at which an evaluation is performed, there is no requirement that the number of failures/faults/bugs must be equal to zero.

Posted by: kr, Twan at July 29, 2010 08:11 AM

Yup, someone else said the same thing. Which is why I modified his comment .. in the text. We *publicly require* everyone to be perfect, or at least to sweep their imperfections under the table ...

Unless we can somehow immunise the public into believing that a "percentage" of fraud is an OK thing, like credit cards.

Posted by: Iang at July 29, 2010 08:14 AM

Aha, that's interesting. Although evaluations will not guarantee a flawless product, a flawed product doesn't necessarily mean it's necessarily (yes, twice) susceptible to fraud!

Many known fraud cases are insider jobs.

What should be acknowledged, imnsho, is the fallibility of the systems in which these products are used as components.

And, if banks and/or ccc's deploy components under their control (a credit card with a chip is never really under my control, same argument as "against voting machines"), the systemic "legal" liability shift to customers/users/citizens is logically untenable.

Then again who said that legal matters are logically tenable ;-)

Posted by: kr, Twan at July 29, 2010 08:16 AM

Yes, I think what is happening is that the whole Internet security facade has been ripped apart by phishing, breaches, botnets and all the rest, in quick succession.

So, what is happening is a lot of navel gazing ... umming and ahhing about why it is we know so much, but we're so powerless to do the right thing.

For example, we're all agreed that evaluations and certifications are essential and necessary ... yet we're not all agreed on whether the result is perfect or flawed. Ask any insider and they will answer NO, ask any outsider and the answer is YES, to some approximation.

As a community, I think we have drunk too deeply from the crypto koolaid. In the mid-90s, we were besotted with the apparent invincibility of the basic crypto algorithms ... the apparently easy way to secure everything by using more bits ... 56-bit DES, 128-bit Blowfish, 168-bit T-DES, 256-bit AES ...

All of these were uncrackable, and somehow we led ourselves to believe that this was the standard for everything else. SSL had to be uncrackable, and mountains had to be moved to make this happen. Secure Browsing had to be perfect, and minds had to be manipulated to make this happen. And on and on.

This systemic fraud perpetrated on the Internet has its roots in something, and I think a lot of it is this desperate desire to generate a product so uncrackable, so unbreachable, that it can be marketed as perfect. A breach waiting to happen...

Posted by: Iang (Pareto-Secure) at July 29, 2010 08:28 AM

The problem of spending too little and getting hurt, or spending too much and wasting resources unprofitably, is older even than money.

It is the basic problem with all defensive behaviour. If you go back to the times of the "hunter gatherer", the gatherers had an issue (as does all prey): if you put all your resources into gathering, then you will not see the predator stalking you. If all gatherers spend their time looking for predators, then no gathering will occur and they will starve. Thus there is some trade-off, with an optimum number of lookouts for any given predator, terrain or group size of gatherers.

Interestingly, the optimum is usually less than four for all predators and group sizes that fit within a moderate shout range in open terrain. For larger groups it is usually the number of watchers needed to go around the edge of the group while remaining within moderate shout range in open terrain. In closed terrain it depends not on shout distance but on visual distance. Which is why you get very large groups (antelope etc.) in the open savannah, but much smaller groups (monkeys) in closed areas such as scrub and forest.

Now the important thing to notice is that the number of watchers goes up as only a very small fraction of the number of gatherers.
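
A quick illustrative sketch of that trade-off (Python; the detection rate, predation cost and group size are entirely made-up numbers, not field data): the expected loss from an undetected predator falls rapidly with each extra watcher, while the gathering foregone rises linearly, so the minimum total cost sits at a small number of watchers.

    # Made-up numbers: expected cost as a function of the number of watchers.
    PREDATION_COST = 100.0   # cost of an undetected attack (arbitrary units)
    ATTACK_P = 0.5           # chance an attack arrives unseen with no watchers
    DETECT = 0.6             # each watcher independently spots it 60% of the time
    GATHER_VALUE = 1.0       # food a watcher would otherwise have gathered

    def expected_cost(watchers):
        undetected = ATTACK_P * (1 - DETECT) ** watchers
        return undetected * PREDATION_COST + watchers * GATHER_VALUE

    best = min(range(20), key=expected_cost)
    print("optimum number of watchers:", best)   # 4 with these numbers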

All of which is why traditionally we have looked at perimeter defence. However, it has a "physical assumption" underlying it, which is "locality". In a network environment with 0-day attacks, everywhere that is connected is local, thus perimeter defence only works with visible attack vectors (ie those that are known, or that exhibit behaviour sufficiently different from the norm to be detected).

Thus there are three basic classes of attack vector:

1, Known (ie known knowns).
2, Visible (ie unknown knowns).
3, Unknown (unknown unknowns).

Within reason, the Known class can be correctly defended against with up-to-date anti-malware, without affecting the day-to-day activities of a host (within the network perimeter). A simple measurand for this class is the number of attacks stopped.

Again within reason, the Visible class may be mitigated using various probabilistic techniques. This however may well involve considerable delay (with respect to attack time, not human time) and require "isolation" or "quarantining" of hosts within the network perimeter, which will usually negatively impact the day-to-day activities of a host (within the perimeter). A simple measurand for this class is the number of events detected; a more difficult but more useful measurand distinguishes between the "positives" (ie those that are seen and proven to be attacks, those that are seen and assumed to be attacks, and those that are seen and proven to be false alarms).
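
To illustrate that distinction (a minimal sketch; the event labels are hypothetical): the useful measurand is not the raw detection count but the breakdown of those detections after investigation.

    # Hypothetical labels assigned to detected events after investigation.
    from collections import Counter

    detections = ["proven_attack", "assumed_attack", "false_alarm",
                  "assumed_attack", "proven_attack", "false_alarm",
                  "false_alarm"]

    counts = Counter(detections)
    print("events detected :", len(detections))          # the simple measurand
    print("proven attacks  :", counts["proven_attack"])  # the useful breakdown
    print("assumed attacks :", counts["assumed_attack"])
    print("false alarms    :", counts["false_alarm"])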

At first sight the Unknown class cannot be defended against, because there is "nothing to see" and thus nothing to detect. Therefore the only perimeter possible is a "perfect air gap", which in current times makes a significant impact on some day-to-day activities of the hosts on such networks. Because there is "nothing to see", it could be argued that there is no measurand.

The resource line should fall between the Visible and Unknown classes, but in most cases resource restrictions actually put it between the Known and Visible classes.

The question then arises: is the Unknown class really unknown?

The answer is probabilistic, or a "qualified no".

If an attack does not copy any host data, does not modify any host or its data, and does not impact a host's day-to-day activities, then its impact inside the perimeter is negligibly small at that point in time (it might, for argument's sake, use spare CPU cycles and memory to crack password files from another location).

Such activity might be very difficult, but not impossible, to spot. Currently, with monolithic executable files and current operating systems, it is effectively not possible to spot.

However, there is a way that this problem can be resolved, but it requires a different computing platform methodology, both in hardware and software.


Posted by: Clive Robinson at July 30, 2010 05:00 AM

... Peter Madsen, of Brigham Young University in Utah, and Vinit Desai, of the University of Colorado at Denver, ran into this problem while trying to investigate how organisations learn from both successful and failed ventures, and how that knowledge is retained over time. Their solution was to examine firms, private and public, that launch rockets designed to place satellites into orbit around the Earth. As the authors explain in a recently published paper in the Academy of Management Journal, when a satellite fails it is easily identifiable (either the rocket makes it into space, or it doesn't); costly; and “often very loud”. ...

Posted by: Learning from things that go Bang at August 18, 2010 09:28 PM