Chandler Howell responds to my previous post with his own. He brings up a good point:
Picking freqency or likelihood of occurance [sic again] doesn't do us much good since, unfortunately, there are only two levels of likelihood in most people's mind when dealing with risk:
Definitely will happen (probability 1) – [omitted discussion]
Never will happen (probability 0) – [more omitted discussion]
This approach is oxymoronic, however, since risk is a function of uncertainty and these events, as described, are certain. (This sounds a lot like Donn Parker's perspective.) Still, there is always an implicit time element involved, and almost always a host of control measures that turn black and white into gray.
This is not to say that Howell is wrong. In fact, I worry about this problem all the time – particularly the security pro’s perspective that any possibility is a definite 1. It really destroys our ability to even use the word "risk" in risk assessments (for reasons mentioned above). But this is like saying that your chances of winning the lottery are either 1 (you win) or 0 (you don’t). [which reminds me of my current favorite bad joke: "What's the definition of stupid? A guy who drives to the store to buy a lottery ticket." Get it?]
And, of course, our chance of dying is 100% and yet we come up with plenty of ways to qualify risk.
This brings us back to units of risk measurement. Chandler also says, "To make matters worse, I completely agree with Pete that security practitioners have a nasty habit of playing loose with the fact that 'bad' can be anything we want it to be." My comment was not intended to mean that, but he is right anyway. Still, I would like to define "bad" more precisely. Again, back to units of risk measurement.
Risk is (in fact) the probability that something bad will happen. "Value at Risk" is commonly used in financial services and is a good way to refer to dollar losses, in my opinion. And the "bad" can be:
- unwanted email
- defaced website
- compromised system
- virus
- worm
- lost data event
And the units used are numbers of events. Now, in order to provide some color, we may need to clarify by saying "the risk of getting an unwanted email" is x%, where x is the number of unwanted emails over the total number of emails received (by the way, this number is really, really high). Voilà, we just quantified risk without breaking a sweat.
Risk of getting a worm infection? Total number of worms over total number of network flows.
Risk of getting a virus? Total number of viruses over total number of system operations.
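To make the arithmetic concrete, here is a minimal sketch of the event-ratio view. The counts are made-up placeholders, not real measurements:

```python
# Risk as an event ratio: bad events divided by the opportunities for them.
# All counts below are made-up placeholders, purely for illustration.

def event_risk(bad_events: int, opportunities: int) -> float:
    """Fraction of opportunities that turned out 'bad'."""
    return bad_events / opportunities

# Risk of getting an unwanted email: unwanted emails / total emails received
spam_risk = event_risk(bad_events=85_000, opportunities=100_000)

# Risk of a worm infection: worm events / total network flows
worm_risk = event_risk(bad_events=12, opportunities=5_000_000)

print(f"Unwanted email risk: {spam_risk:.1%}")   # 85.0%
print(f"Worm infection risk: {worm_risk:.5%}")   # 0.00024%
```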
Now, some of these numbers are hard to get, and when you do get them, people get scared because the risk may be 0.0001% (wow, that seems low) and yet still happen every day in a large-volume environment. So another way to look at it is through qualifiers like a total population (this is particularly useful with low-frequency events): 1 of every 100 servers is compromised every two years, so the risk that any given server is compromised in a year is 0.5%. (Over 200 years, you would expect every server to be compromised once, though some may be compromised more than once while others remain uncompromised.)
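The server arithmetic works out like this, as a sketch using the same 1-in-100-every-two-years figure from above:

```python
# Convert a population frequency ("1 of every 100 servers is compromised
# every two years") into an annual per-server probability.

compromised = 1       # servers compromised in the observation window
population = 100      # servers in the population
window_years = 2      # length of the observation window

annual_risk = compromised / population / window_years
print(f"Annual per-server risk: {annual_risk:.1%}")  # 0.5%

# At that rate, the expected number of compromises per server over 200 years
# is 1.0 -- on average every server gets hit once, though some would be hit
# more than once and others not at all.
print(f"Expected compromises per server over 200 years: {annual_risk * 200}")  # 1.0
```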
Think of it this way: what is your risk of dying in a plane crash? Well, it is either the total number of people who die in plane crashes over the total number of people who ride in planes, OR the total number of people who die in plane crashes over the total number of people who die (normally in a year). Either is useful, but they are different things. And fairly easy to qualify (though folks who quote these numbers rarely qualify them in this way).
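Here is the same idea as a sketch with invented placeholder figures (the real numbers would come from accident and mortality statistics); the numerator stays the same, only the denominator changes:

```python
# Two ways to state the risk of dying in a plane crash: same numerator,
# different denominators. All figures are invented placeholders.

crash_deaths = 500           # deaths in plane crashes in a year (placeholder)
passengers = 800_000_000     # people who ride in planes that year (placeholder)
all_deaths = 55_000_000      # people who die of any cause that year (placeholder)

risk_per_passenger = crash_deaths / passengers   # exposure-based: "if you fly"
share_of_mortality = crash_deaths / all_deaths   # population-based: "of all deaths"

print(f"Per-passenger risk:  {risk_per_passenger:.6%}")
print(f"Share of all deaths: {share_of_mortality:.4%}")
```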
This is why controls are so interesting: they directly impact your level of risk (if they don't, they aren't very good controls). So if you don't fly in a plane, your risk of dying in a plane crash is effectively 0. Similarly, if you don't use email, your risk of receiving unwanted email is 0 (oops, now I am guilty of oxymoronic language).
I have more on quantifying risk in this SearchSecurity article. It is not that we can't quantify risk; we simply don't want to.