April 03, 2009

The Exquisite Torture of Best Practices

Best practices have always seemed to me a flaky idea, and it took me a long time to unravel why. The reason, at least in my view, is that if you adopt best practices, you are accepting, and proving, that you yourself are not competent in this area. In effect, you have no better strategy than to adopt whatever other people say.

The "competences" theory would have it that you adopt best practices in security if you are an online gardening shop, because your competences lie in the field of delivering gardening tools, plants and green thumbs advice. Not in security, and gosh, if someone steals a thousand plants then perhaps we should also throw in the shovel and some carbon credits to ease them into a productive life...

On the other hand, if you are dealing with, say, money, best practices in security is not good enough. You have entered a security field, through no fault of your own but because crooks really do always want to steal it. So your ability in defending against that must be elevated, above and beyond the level of "best practices," above and beyond the ordinary.

In the language of core competences, you must develop a competence in security. Now, Adam comes along and offers an alternate perspective:

Best practices are ideas which make intuitive sense: don't write down your passwords. Make backups. Educate your users. Shoot the guy in the kneecap and he'll tell you what you need to know.

I guess it is true that best practices do make some form of intuitive sense, as otherwise they would be too hard to propagate. More importantly:

The trouble is that none of these are subjected to testing. No one bothers to design experiments to see if users who write down their passwords get broken into more than those who don't. No one tests to see if user education works. (I did, once, and stopped advocating user education. Unfortunately, the tests were done under NDA.)

The other trouble is that once people get the idea that some idea is a best practice, they stop thinking about it critically. It might be because of the authority instinct that Milgram showed, or because they've invested effort and prestige in their solution, or because they believe the idea should work.

What Adam suggests is that best practices survive far longer than is useful, because they have no feedback loop. Best practices are not tested, so they are a belief, not a practice. Once a belief takes hold, we are into a downward spiral (as described in the Silver Bullets paper, which itself simply applies the asymmetric-goods literature to security). At its core, the spiral is due to the lack of a confirming test in the system, something to nudge the belief to keep pace with the times; if nothing nudges the idea towards relevancy, it meanders away from relevancy and eventually into wrongness.
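Even a crude confirming test is not hard to sketch. Purely as a hypothetical illustration (the group sizes and breach counts below are invented, not data), this is roughly what "design an experiment to see if users who write down their passwords get broken into more" could look like, using a simple two-proportion z-test:

    # Hypothetical experiment: do users who write passwords down get
    # compromised more often? All numbers below are invented.
    import math

    def two_proportion_ztest(breaches_a, n_a, breaches_b, n_b):
        """Return (z, two-sided p-value) for the difference in breach rates."""
        p_a, p_b = breaches_a / n_a, breaches_b / n_b
        pooled = (breaches_a + breaches_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
        return z, p_value

    # Group A writes passwords down, group B does not (invented counts).
    z, p = two_proportion_ztest(breaches_a=14, n_a=500, breaches_b=11, n_b=500)
    print(f"z = {z:.2f}, p = {p:.3f}")
    # A large p-value means the observed difference is indistinguishable
    # from noise, i.e. the "best practice" remains an untested belief.
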

But it is still a belief, so we still do it and smile wisely when others repeat it. For example, best practices has it that you don't write your passwords down. But, in the security field, we all agree now that this is wrong. "Best" is now bad; you are strongly encouraged to write your passwords down. Why do we call the bad idea "best practices"? Because there is nothing in the system of best practices that changes it to cope with the way we work today.

The next time someone suggests something because it's a best practice, ask yourself: is this going to work? Will it be worth the cost?

I would say -- using my reading of asymmetric goods and with a nod to the systems theory of feedback loops, as espoused by Boyd -- that the next time someone suggests that you use it because it is a best practice, you should ask yourself:

Do I need to be competent in this field?

If you sell seeds and shovels, don't be competent in online security. Outsource that, and instead think about soil acidity, worms, viruses and other natural phenomena. If you are in online banking, be competent in security. Don't outsource that, and don't lower yourself to the level of best practices.

Understand the practices, and test them. Modify them and be ready to junk them. Don't rest on belief, and dismiss others' attempts to have you conform to a belief they themselves hold but cannot explain.

(Then, because you are competent in the field, your very next question is easy. What exactly was the genesis of the "don't write passwords down" belief? Back in the dim dark mainframe days, we had one account and the threat was someone reading the post-it note on the side of the monitor. Now, we each have hundreds of accounts and passwords, and the desire to avoid dictionary attacks forces each password to be unmemorable. For those with the competence, again to use the language of core competences, the rest follows. "Write your passwords down, dear user.")
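To make the point concrete: any password strong enough to resist a dictionary attack is, by design, unmemorable, so it has to be recorded somewhere, on paper or in a password manager. A minimal sketch, in which the length and alphabet are arbitrary choices rather than a recommendation:

    # Minimal sketch: generate an unmemorable, dictionary-resistant password.
    # It will have to be written down or stored in a password manager.
    import secrets
    import string

    def generate_password(length=20):
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())  # not something you can memorise for hundreds of sites
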

Posted by iang at April 3, 2009 05:19 AM | TrackBack
Comments

Best practices exist for a couple of reasons:

The first and most important is "tort".

The second is that old problem of "metrics" or the lack thereof.

Tort is that wonderful area of law where the measure is "balance of probability" as seen by "the reasonable person" (not the criminal law measure of "beyond reasonable doubt").

Which boils down to safety in numbers, or herd protection. If you are doing what everybody else is doing then you must be doing "the reasonable thing", which means that on the balance of probability you are not going to be found wanting if you are potentially standing on the wrong end of a civil action...

So from the liability aspect, the lower the standard of "best practice" is, the easier it is to achieve...

However, apart from the liability aspect, is being one of the herd advantageous?

The answer, not unexpectedly, is most definitely not.

It is the opposite of hybrid vigor: it is a monoculture with identical strengths and, most importantly, identical weaknesses. If a virus exploits a common weakness then you all catch a cold or worse...

Which means "different strokes for different folks" really is a more efficient and (for the "commons") safer way to go, as the number of vectors required to achieve a significant effect is comparable to the number of different strategies deployed.

The second issue of "metrics" or the lack there of is why we realy should stop using terms like "computer science" and "software engineering" and be a little more honest and use something along the lines of "Security Artisan".

Isaac Newton invented many things (milling on coins, the cat flap, etc.) but his greatest claim to fame comes not from guessing why the apple fell on his head, but from devising a system to get at the "fundamental truth" of it.

The system is the iterative "scientific method" of observe, hypothesise and test.

Importantly, the first and third steps require a method by which you can make a meaningful and unbiased judgment or comparison.

Not just by saying X is less than or greater than Y, but by also being able to say by how much...

That is, metrics should be quantitative, not qualitative, in nature.

That is, you have to have a dependable system of measurement for the various aspects of the entities you wish to consider, to the degree required to make meaningful judgments.

This series of related measurement systems is called "metrics", and without them the scientific process cannot be undertaken.
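A toy illustration of the difference, with invented numbers: a qualitative claim says configuration A patches "faster" than configuration B; a quantitative metric says by how much, in units you can act on.

    # Invented numbers: time-to-patch, in days, for two configurations.
    patch_days_a = [3, 5, 4, 6, 2]
    patch_days_b = [9, 7, 12, 8, 10]

    mean_a = sum(patch_days_a) / len(patch_days_a)
    mean_b = sum(patch_days_b) / len(patch_days_b)

    print(f"A averages {mean_a:.1f} days, B averages {mean_b:.1f} days")
    print(f"difference: {mean_b - mean_a:.1f} days, ratio: {mean_b / mean_a:.1f}x")
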

More correctly, the science of measurement is known as "metrology" (not to be confused with guessing if it's going to rain ;)

Without reliable and meaningful metrics you cannot make valid observations. Therefore you cannot test any hypothesis you or others may have.

However there is a "gotcher" which is "measurment for measurments sake".

Measurements have to be meaningful within the context in which they are being used. Measurements from one context may have little or no meaning in a different context.

Therefore, in any given context, you first have to decide which aspect of an entity or system you wish to measure (and, importantly, why).

Then you have to decide how to express individual measurements with respect to each other in different contexts (e.g. volume and mass are related through density, and the ratio of densities is used to calculate displacement, which is how Brunel knew that, contrary to the conventional wisdom of the time, an iron-hulled ship would float).
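Sketched out purely for illustration, with made-up figures (not Brunel's), the displacement argument is just a comparison of the hull's average density with that of water:

    # Rough illustration of the displacement argument (all figures invented).
    WATER_DENSITY = 1000.0  # kg per cubic metre (fresh water)

    ship_mass = 3_000_000.0    # kg: iron hull, engines, cargo
    enclosed_volume = 5_000.0  # cubic metres below the waterline at full draught

    max_displaced_mass = WATER_DENSITY * enclosed_volume  # Archimedes' principle
    average_density = ship_mass / enclosed_volume

    print(f"average density {average_density:.0f} kg/m^3 vs water {WATER_DENSITY:.0f} kg/m^3")
    print("floats" if ship_mass < max_displaced_mass else "sinks")
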

In a factory or manufacturing plant there are many contexts:

The machine operator is only interested in "making to spec" as quickly as possible. However, the toolsetter is interested in minimising tool wear to maximise uptime. The shop floor manager is mainly interested in resource utilisation. The production manager in ensuring the right resources are at the right place at the right time with minimum "on hand" and storage times. The general manager in various aspects, but mainly in efficiency and cost minimisation. The finance manager in ROI on capex and cash flow. The MD in maximising shareholder value.

Each context has its own metrics, but each context can relate its metrics in a manner that is usable in related contexts.
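As a crude sketch of that relating across contexts (every figure here is made up), a shop-floor metric like downtime rolls up into lost units, and lost units into the money terms the boardroom context uses:

    # Made-up figures: rolling a shop-floor metric up into boardroom units.
    downtime_hours = 12.0    # machine operator / toolsetter context
    units_per_hour = 40.0    # shop floor manager context
    margin_per_unit = 15.0   # finance context, in currency units

    lost_units = downtime_hours * units_per_hour
    lost_margin = lost_units * margin_per_unit

    print(f"{downtime_hours:.0f} h downtime -> {lost_units:.0f} units -> {lost_margin:,.0f} lost margin")
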

However, the MD is not likely to want to see, or care about, the metrics used by the machine operator. The MD's interest is in how it relates to shareholder value, and he is only really going to listen when the metrics are expressed in the units of measure of his context.

A failure to realise this will mean that you will gather the wrong data in the wrong format and therefore make the wrong impression on the man who cuts your cheques at the end of the month...

Posted by: Clive Robinson at April 16, 2009 09:52 PM
