Attention InfoSec Pros: measuring risk is in your future

Mike Rothman of Securosis stirs things up a bit with his “Risk Metrics are Crap” post. This type of exercise forces participants to make public commitments. In itself, that is not a huge deal, since many positions in our space are relatively well documented already. However, anyone familiar with Cialdini knows that commitment serves to reinforce positions, not to promote compromise or learning. Not surprisingly, nobody changed sides. In fact, nobody moved an inch (or maybe that’s a “teeny-tiny bit” for those quant-averse participants).

More importantly, nobody is budging because there is nothing new here. Mike simply took semi-random potshots at risk quantification, used a lot of potty language, and then sat back. Perhaps the most ironic part of his post was his claim that he was the one baiting trolls. Clearly, with all the wild assertions he makes, he is the troll… unless he didn’t do his homework. No infosec quant I know thinks “generating a number makes them right,” nor would any of them make a statement like “the risk of firing the admin is 92.”

In a lot of respects, the simple fact that a number of folks who favor the status quo (strange in itself) are attempting to discredit a nascent approach to infosec risk (though it isn’t nascent in many other risk management arenas) shows that the ideas are picking up some steam. One of the great things about a measurement approach is that it is transparent and therefore more easily attackable than a qualitative approach that hides behind its ambiguity. Perhaps the collective body of qualitative approaches can most easily be characterized as simply “not quant,” which doesn’t say much for them.

In some respects, Mike is right that risk metrics (better described as risk measures, IMO) are crap. But the only thing worse than quantifying risk is the status quo – existing techniques for qualitative risk assessment. So we are caught between a rock and a hard place here, and in the absence of advances in qualitative risk assessment (really, how would we know that advances were actually advances?), I opt for risk measurement.

This debate between qualitative and quantitative approaches really isn’t an either-or proposition. Mike seems to think that an ordinal scale of measurement (“We need to be able to categorize every decision into a number of risk buckets that can be used to compare the relative risk”) isn’t quantitative but it is, in a rudimentary way. And certainly we can do better than saying “the likelihood is greater than zero and less than 100%” for some particular negative outcome. If we can’t, we really shouldn’t complain about our companies not taking our advice… because it isn’t really advice, right? The true value we provide to our organizations is to whittle those 0-100 intervals down – in objective terms, smaller confidence intervals equate to higher levels of expertise.
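The point above can be made concrete with a small sketch (the bucket names and probability ranges here are my own illustration, not from the post): even a qualitative “risk bucket” is really an interval estimate of likelihood, which makes the ordinal scale quantitative in a rudimentary way – and expertise shows up as a narrower interval.

```python
# Illustrative sketch: qualitative buckets mapped to likelihood intervals.
# The specific ranges are hypothetical; the point is that every bucket is
# implicitly an interval on [0, 1], and narrower intervals carry more
# information than the trivial "greater than 0%, less than 100%."
buckets = {
    "low":    (0.00, 0.10),
    "medium": (0.10, 0.40),
    "high":   (0.40, 1.00),
    "trivial (0-100%)": (0.00, 1.00),
}

def interval_width(bounds):
    """Width of a likelihood interval; smaller = more informative estimate."""
    lo, hi = bounds
    return hi - lo

for name, bounds in buckets.items():
    print(f"{name}: width {interval_width(bounds):.2f}")
```

The “trivial” row is the 0–100% non-answer the post describes; whittling that width down is where the analyst adds value.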

Perhaps the most important point about the debate between qualitative and quantitative techniques is that in order to make risk management decisions – that is, to allocate our security resources – we MUST quantify risk, and in fact we ARE quantifying risk. We know that the resources we are allocating for security have both real costs and opportunity costs associated with them. Whether we are making a purchasing decision for $10 million in security products or simply creating a prioritized to-do list, we are expressing relative value. Economists call it “revealed preferences” when we demonstrate value through our resource consumption practices. And we reveal a lot.

At the very least, we can total up the costs of our security consumption behavior and create an “indifference curve” (I call this a control horizon) that plots a line for all pairs of likelihood and consequence at the point where we should be indifferent between applying the controls and bearing the expected losses. A million-dollar investment “breaks even” in circumstances where there is a 1% chance of losing $100 million or a 99% chance of losing roughly $1.01 million, and at all the points in between. Of course, we usually don’t do this explicitly, but when we decide an investment is “worth it,” this is the implication nonetheless. A good decision means that the actual risk lies on or above the line (on a risk matrix), and a bad decision is made when the risk actually lies below the line – meaning we are paying more in security measures than the expected loss for the risk being addressed.
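The break-even arithmetic above is just expected loss equal to control cost: for a given likelihood p, the indifference point is the loss magnitude L where p × L equals what we spend. A minimal sketch (the function name is mine, not terminology from the post):

```python
# Sketch of the "control horizon" / indifference curve described above:
# the set of (likelihood, loss) pairs whose expected loss equals the
# cost of the control. At each likelihood p, the break-even loss is
# simply cost / p.
def break_even_loss(control_cost, likelihood):
    """Loss magnitude at which expected loss (p * L) equals the control cost."""
    return control_cost / likelihood

cost = 1_000_000  # the $1M security investment from the example
for p in (0.01, 0.50, 0.99):
    print(f"p = {p:.2f}: break-even loss = ${break_even_loss(cost, p):,.0f}")
```

At p = 0.01 this reproduces the $100 million figure from the text, and at p = 0.99 it gives about $1.01 million; risks plotting above this curve justify the spend, risks below it do not.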

It is worth repeating that we are measuring risk already; these measures are simply buried under the covers of “qualitative risk assessment.” That is what I see as the true value of quantitative models – they are transparent. Contrary to Mike’s assertion that risk measurers think “generating a number makes them right,” I would assert that we know we aren’t right, but quantifying makes us more precise. That is, it sets the table for a legitimate discussion about risk-related information so that we can come to some conclusions about values and then determine how to address the risks.

I mentioned earlier that our infosec models are nascent, but that is not entirely accurate. The basic models themselves are fairly robust and have been vetted by economists and mathematicians in numerous other arenas; our challenge is in determining the inputs. Make no mistake, the quality of the inputs drives the accuracy of the model and that will always be a judgment call made with the help of experience and historical data (and thus Mike’s snark at the sub-prime mortgage debacle should be directed at the people who decided the inputs, not the models).

There is no need to malign the risk models and measurements; after all, they underlie our existing decisions. And trust me, we quants have been engaging with and responding to challenges like this for years. To whatever extent it makes people nervous to illuminate the set of assumptions behind security recommendations, it just may be for all the wrong reasons.

Want to get involved with the new generation of risk modelers? Check out how Alex Hutton, Jay Jacobs, Chris Hayes, and others are furthering the modeling work of Jack Jones’ FAIR over at the Society of Information Risk Analysts. It just may be your future.