[Note: I decided to write this to clarify my thoughts on how risk is calculated in response to a question I have about Wade Baker's pseudo-risk calculation in the Verizon Data Breach Investigations Report (DBIR). This may be useful to others still working through the details of risk.]
The first and only necessary component of risk is likelihood. Likelihood is driven by the uncertainty of which outcome, within a set of possible outcomes, will occur for any single event. Some of those outcomes are wanted, and some unwanted, by those involved in decision-making. (I use the word “unwanted” instead of “negative” to cover a broader set of outcomes and to acknowledge that there are varying opinions about what is unwanted.) If the mix of outcomes is not random or equally distributed (e.g. 2 possible outcomes each happening half the time, or 3 outcomes each happening 1/3 of the time), we use past frequencies of outcomes to inform our beliefs about future risks. The portion of unwanted outcomes out of the total population of outcomes is our likelihood number, which corresponds to risk when dealing with potential losses.
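The frequency-based estimate described above can be sketched in a few lines. The outcome counts here are made-up illustration data, not figures from any report:

```python
# A minimal sketch: likelihood as the portion of unwanted outcomes
# in the total population of past outcomes. Counts are hypothetical.
past_outcomes = {"no_incident": 940, "incident": 60}

total = sum(past_outcomes.values())                  # total population of outcomes
likelihood = past_outcomes["incident"] / total       # portion that were unwanted

print(f"Estimated likelihood of an unwanted outcome: {likelihood:.1%}")
```

The same ratio can then serve as the discount factor applied to a future population, as described below.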
The other component of risk involves consequences. I noted above that likelihood is the only necessary component of risk. That is because we often suggest that in order to quantify risk we must quantify our consequences as well, but this isn’t the case. Since we are identifying unwanted outcomes anyway, in many cases we implicitly understand the value or loss involved, even if we don’t quantify it in dollar (or other currency) terms. Not only that, but we can quantify consequences in whatever other units are available in the circumstances being evaluated. Whatever numbers we use, they constitute the total number of units in the population of outcomes. In IT security, that might be total endpoints, total number of records, or total value of the assets, for example.
When we do have a number to express consequences, we use the risk (estimated using previous frequencies as a starting point) or likelihood, expressed as a percentage of total outcomes, to discount the total population. This is simply an expected value calculation.
Take for example a sales pipeline. The value of a sales pipeline for a company is the likelihood of closing a deal (making a sale) multiplied by the total possible amount of the deal itself. This likelihood number is a discount factor used to reduce the amount in question based on the risk involved. So a $100,000 deal with a 70% likelihood of closing is worth $70,000 in the pipeline. Risk is like the inverse of the pipeline numbers, since the pipeline measures positive outcomes and risk measures negative (unwanted) ones.
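The pipeline arithmetic above is a plain expected-value calculation, and it can be written out in a couple of lines (integer math keeps the figures exact):

```python
# Expected-value sketch of the sales-pipeline example from the text:
# a $100,000 deal with a 70% likelihood of closing is carried at $70,000.
deal_value = 100_000        # total possible deal amount, in dollars
p_close_pct = 70            # likelihood of closing, as a percentage

pipeline_value = deal_value * p_close_pct // 100   # risk-discounted value: 70,000
amount_at_risk = deal_value - pipeline_value       # the mirror-image view: 30,000

print(pipeline_value, amount_at_risk)
```

The `amount_at_risk` line is the “inverse” mentioned above: the pipeline measures the wanted outcome, risk the unwanted one, and the two always sum to the total.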
The final outcome of a risk calculation may be
- the probability itself, qualified by what is meant in terms of consequences,
- the portion of the population that is expected to be affected by the unwanted outcome (using the likelihood as a discount factor for the total population), or
- the “value-at-risk” (VaR) expressed in monetary terms (our universal unit of costs or losses) that involves translating the number derived in the previous measure into currency units.
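The three forms above can be illustrated in one short sketch. All of the figures here (record count, likelihood, per-record cost) are assumptions chosen for illustration only:

```python
# Sketch of the three ways a risk result can be expressed, per the list above.
total_records = 1_000_000     # total population of outcomes (e.g. records held)
likelihood = 0.02             # estimated chance any one record is compromised
cost_per_record = 150         # assumed loss per affected record, in dollars

# 1. The probability itself, qualified by what consequence it refers to.
print(f"{likelihood:.0%} chance a given record is compromised")

# 2. The portion of the population expected to be affected.
expected_affected = int(likelihood * total_records)
print(f"{expected_affected} records expected to be affected")

# 3. Value-at-risk: the affected portion translated into currency units.
value_at_risk = expected_affected * cost_per_record
print(f"${value_at_risk:,} at risk")
```

Each form carries the same likelihood; they differ only in how far the consequence side has been translated into countable units and then into currency.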
Risk never has been and never will be anything other than likelihood*impact (or probability*consequences… or t*v*i… or similar). Even if you measure impact differently or just assume it’s going to be undesirably high, it’s still in the equation. A likelihood-only risk model isn’t a risk model; it’s a likelihood model. You never hear the weatherman talking about “the *risk* of rain tomorrow…” – their models produce “the *chance* of rain tomorrow…”. Sometimes they do convey impact in various ways: “A category 4 storm can blow down buildings…”
Growing up in Florida, I can remember our family taking all that likelihood and impact info in and deciding what our risk treatment strategy would be – stay put and ride it out or go to grandma’s house in Alabama. I was young and not often involved in the decision process but I can definitely tell you that we didn’t go to Alabama when the weatherman told us a category 1 hurricane was heading directly for us.
Hi, Wade -
I agree that impact is still in the equation, but I disagree that it is always quantified, at least not explicitly – the risk can be represented as a likelihood (%). These are the cases where the total population is not well-known or samples/estimates are used. I *do* hear weathermen use *risk* once in a while, but regardless, there are plenty of other places that use it:
- the risk of nuclear accidents
- the risk of heart attack (the medical world is full of this)
- the risk of a car accident
I have a Google alert set up to give me a weekly digest on the word “risk” for just this purpose, and BY FAR the likelihood is the number expressed. Last week’s digest includes salmonella risk, Somali piracy risk, Australian home buyer default risk, profit margin risk, Florida’s unemployment computer system risk, breast cancer risk, and a few others…
But the consequences/impact IS still in the equation – that is the subject matter.
I think what you are suggesting is that it is better to quantify the impact more specifically (after all, the likelihoods do this implicitly), and I agree with that. Check out the excellent book “Calculated Risks” by Gerd Gigerenzer, in which he makes the case for using “natural frequencies” in medicine.
What is your definition of Risk? In simple terms, it is exposure to loss. Thus, there has to be a frequency of loss as well as a loss (or consequence) – most commonly represented in dollars. It feels like the information security industry wants to come up with a new definition of risk – and this is perplexing. Likelihood only tells me how often I can expect a condition to occur in which I can expect a loss. You are only accounting for half the story using your logic.
Hi, Chris -
I am not clear whether you read my response to Wade in the second comment above, so at the risk of repeating myself…
I agree that there must be a consequence. What I challenge is the notion that the consequence must be quantified. I gave a handful of examples already, so I suspect you simply disagree every time someone quantifies the risk of heart attack, for example, and uses frequencies or percentages alone.
I actually think my definition of risk is broader than most infosec folks and conforms more closely with risk professionals outside of infosec.
Note that Slovic and others have highlighted the varying uses of the term “risk” in an even broader context.
I think Chris nailed it. Risk is exposure to loss.
I too am frustrated by people in InfoSec who have decided to try to define things in their own terms. There is a long history of defining and analyzing loss exposures in the insurance industry, and its methods most closely match what we need to accomplish with the management of information risk. The insurance industry has managed to create _very_ profitable business models by accepting risk for premiums. While it hasn’t figured it all out, we would do well to learn some lessons there rather than pontificating our thoughts on risk and all the things we’ve learned from Google.
Try taking an ARM or CPCU class. At the very least learn a few terms and basic definitions.
@Brooke -
Yours is certainly a reasonable opinion to have, but I wonder if you are a victim of your own experience. The literature on risk is much, much broader than simply its use in insurance, so why must everyone conform to your model?
I should note that I agree that quantifying losses in dollars (or other currency) is more beneficial, however, there are many skeptics in the infosec world that believe information asset value (and corresponding losses) cannot be quantified very well.
I would love to understand whether you think the medical community is using the term incorrectly when they talk about, for example, the risk of heart attack.
Neither you nor Chris have addressed this use case.
Thanks,
Pete
[Btw, what does profit have to do with this issue?]
@Pete
Perhaps here’s an answer to your question about the medical use-case: Not all heart attacks are fatal, not all cancer kills. Our industry’s tendency to avoid quantification of consequence, and often to assume worst-case, is a key contributor to our “chicken little” image.
As for how to arrive at a well-reasoned probable loss magnitude estimate, there are methods and principles that make it feasible — and not as difficult as we tend to make it out to be. For one thing, you need to get the appropriate SMEs involved (and they ain’t the infosec team). It also helps to use an effective taxonomy for loss. Last but not least, there’s more data available than we tend to recognize — you just need to know where to look (which the taxonomy helps with). Will the estimates be precise? No. Will they be accurate and useful? Yes.
Thanks,
Jack
@Jack -
You don’t really answer my question, but I agree with your allusion that there are varying magnitudes of consequences that are worth paying attention to.
I think we are much more likely to ignore the likelihood, not the consequence.
I guess I should continue to repeat myself that I think quantifying consequences is better than not doing it, but I think it is still legitimate not to do it.
Thanks,
Pete
@Pete
I guess I misunderstood your question. Sorry.
I can’t argue with the notion of not quantifying consequences — but the result isn’t then a risk statement. It’s a likelihood/frequency statement, and they aren’t the same thing. Without at least roughly quantified consequences, we can’t understand the true significance of an issue. We also sure as heck can’t compare the significance of one issue where consequences have been quantified with another where they haven’t.
Thanks,
Jack
BTW — quantification of consequences provides the “so what” within an analysis. Absent that, people will infer their own expectations/beliefs about consequence, which often will vary based upon their individual biases and levels of understanding.
@Jack -
I think we are getting ahead of ourselves here… Don’t forget that qualitative risk analysis is alive and well and employs risk statements all the time.
I don’t see how the info provided above and also books such as “Calculated Risks” by Gerd Gigerenzer and “Risk!” from the folks at the Harvard Center for Risk Analysis can all have it wrong. They rarely, if ever, quantify consequences.
@Pete
You’re right. A “qualified” consequence statement can be very useful. You said early in the post, however, that frequency was the only necessary component, which was what I was disagreeing with.
So, are we in agreement that some statement of consequence significance is necessary for a risk statement?
Thanks,
Jack
Yes, consequences are necessary to “frame” the likelihood statement and make it a risk statement. I believe we are in agreement.
Now, on to “pseudo-risk”…
Pete