I find that most people with InfoSec backgrounds confuse the purpose of using probability theory in risk analysis (1).
Most security folks (and many in the financial industry) believe that risk analysis is a tool to *engineer* a future state, rather than a tool for understanding our ability to meet qualitative objectives. As such, when the state of nature changes (as it inevitably does), or when it's determined that the analyst screwed up in accounting for uncertainty or in measuring a variable - the whole process is demonized.
In reality, a good model for risk analysis can only help rational actors arrive at rational conclusions. It cannot and will not foresee a precise future state; rather, it serves to help remove bias and provide structure to what would otherwise be an ad-hoc decision-making process. It is with this in mind that I often ask the authors of these sorts of articles - "well, how then shall we live?" The best answer I get is "suggested practices" (2). The problem with this concept is that it is, in and of itself, a risk analysis model - just one done as a faith-based initiative rather than with any real rigor ("trust me, I'm the auditor, you need these controls").
With regards to your other points:
---------------------------------------
"The only business that does risk management as a core or essence is banking and insurance"
False on two counts. First, allow me to point you to the future earnings guidance statements made by public companies.
Second, I'd say that FinServ is just a market segment that applies analytical rigor to a product line with a significant degree of uncertainty. Chemical and aerospace engineering, food service, and many other industries I'm skipping over do perform rigorous risk analysis; it's just that the systems they operate in have much less uncertainty.
---------------------------------------
"risk management is... something...you ignore because you've got too much to do."
Nope; at worst it's just something you don't apply significant rigor to because that rigor isn't perceived as necessary. When you walk across the street, decide to hire or not to hire, or make just about any decision with the potential for negative consequences, you're forming a belief statement that is "go" or "no go". This is very much a risk analysis: in a Bayesian sense, you're creating a belief statement about what the most probable wise action is.
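To make that concrete, here's a minimal sketch in Python of that informal "go / no go" belief statement written down as an expected-loss comparison. Every probability and dollar figure below is hypothetical, invented purely to show the structure:

```python
# A toy "go / no go" hiring decision expressed as an expected-loss comparison.
# All probabilities and dollar figures are hypothetical, for illustration only.

p_bad_hire = 0.2              # belief: probability the candidate doesn't work out
loss_bad_hire = 50_000        # cost of a bad hire (severance, lost time, re-hiring)
loss_unfilled_role = 30_000   # cost of leaving the role unfilled

expected_loss_go = p_bad_hire * loss_bad_hire   # hire ("go")
expected_loss_no_go = loss_unfilled_role        # don't hire ("no go")

decision = "go" if expected_loss_go < expected_loss_no_go else "no go"
print(decision, expected_loss_go, expected_loss_no_go)
```

The point isn't the numbers; it's that the everyday decision already has the shape of a belief about frequency and magnitude, whether or not you bother to write it down.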
---------------------------------------
"ROI in infosec is GIGO"
I think you're confusing the quality of the inputs to a model with a statement about the quality of the model itself.
With regards to ROI in infosec, I find those who categorically state that it "can't be done" to be boorish purveyors of hyperbole. They seem to be obsessed with confidentiality and forget that availability is a significant aspect of the charter for most security departments. ROI for keeping production systems available most certainly can be calculated with some degree of suitability.
Now, that said, I don't believe that ROI is applicable when we're concerned with the probability of losses due to breaches of confidentiality and integrity, as those losses are not easily tied to incoming cash flow in a direct and obvious manner.
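As a hedged illustration of the availability point above - nothing more than a back-of-the-envelope sketch with invented figures - the arithmetic looks something like this:

```python
# Toy ROI calculation for an availability control (say, redundant hardware).
# Every figure is hypothetical and exists only to show the arithmetic.

downtime_hours_avoided = 8     # estimated outage hours the control prevents per year
revenue_per_hour = 25_000      # revenue attributable to the production system per hour
control_cost = 60_000          # annualized cost of the control

avoided_loss = downtime_hours_avoided * revenue_per_hour
roi = (avoided_loss - control_cost) / control_cost

print(f"Avoided loss: ${avoided_loss:,}  ROI: {roi:.0%}")
```

Whether those inputs are defensible is exactly the inputs-vs-model question above, but the calculation itself is perfectly tractable.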
---------------------------------------
"Risk management is just another word for NPV, so risk management doesn't work."
False premise, false conclusion. NPV necessitates some concept of cash flow: NPV = sum over t of Rt/(1+i)^t, where Rt is the cash inflow in period t and i is the discount rate. Risk analysis, in InfoSec/engineering at least, is currently based on the Dutch model: probable frequency of loss and probable magnitude of loss (note that ALE is a number of limited value, as risk is a derived number, like km/hr). Two totally different concepts.
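To show how different the two constructions are, here's a minimal side-by-side sketch with hypothetical inputs - one needs a cash-flow series and a discount rate, the other a frequency and a magnitude:

```python
# NPV vs. a frequency-times-magnitude risk statement: two different constructions.
# All inputs are hypothetical, for illustration only.

# NPV: requires a cash-flow series R_t and a discount rate i.
cash_flows = [-100_000, 40_000, 40_000, 40_000]   # R_t for t = 0..3
i = 0.10
npv = sum(r / (1 + i) ** t for t, r in enumerate(cash_flows))

# Risk: probable frequency of loss times probable magnitude of loss
# (the derived number referred to above as ALE).
loss_event_frequency = 0.5        # expected loss events per year
probable_loss_magnitude = 80_000  # expected loss per event
annualized_loss_exposure = loss_event_frequency * probable_loss_magnitude

print(f"NPV: {npv:,.0f}   Annualized loss exposure: {annualized_loss_exposure:,.0f}")
```

Neither quantity maps onto the other; conflating them is a category error.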
---------------------------------------
"a priori, risk management suffers GIGO"
Um, what? If you mean that, reasoning deductively, models of the world require useful inputs in order to produce useful outputs - OK then. All perceptions of reality have that same limitation. But I see no deduction on your part that gets you to a statement of "a priori".
---------------------------------------
"Consider the famous case of the car-lock. Car locks used to be notoriously weak. Why? Because a car stolen was a car sold. So, no matter what numbers were applied in the above risk management calculation, it always gave the wrong result; better locks made the position worse!"
You seem to be assuming an objective ethical position here and inferring that all actors would desire to achieve it. Rather, the car company most certainly did an analysis and came to the conclusion that its interests were different from the consumer's. It's a great example not because it "proves" risk analysis to be silly in some Popperian sense (3), but because it highlights the most interesting problem in risk management - the problem of multiple perspectives (an example would be where the risk manager's individual compensation is inconsistent with executive risk tolerance).
---------------------------------------
Finally, in response to your summary, I think you over-complicate the value the CISO/CSO/CRO has to the company. Their value boils down to only two things: aligning risk exposure to the tolerance of management, and creating operational efficiencies. All this other talk of "aligning to business and strategy" is, in my opinion, pure bunk.
===========================
(1): Note that the concept of risk management isn't necessarily what you're referring to here - risk management has as much to do with understanding capability as it does with arriving at a state of knowledge. Without that capability component, you'll never achieve a state of wisdom.
(2): Ironically, using the term "best/good practices" implies some sort of analysis and measurement.
(3): In fact, I'd say that the state has changed to the point where the opposite is true: cars probably have too much lock security built in. I wonder what the locksmithing industry would have to say about the '70s vs. now and their ability to retrieve our keys for us.
Posted by Alex at January 22, 2009 10:24 AM