Stephen Moore made a comment in Steve Riley’s blog about my ROSI posting:
I think your premise that "We know when a compromise occurs because it is self-defining" is flawed. Let’s say a user’s password is compromised. Certainly we can audit successful and unsuccessful login attempts. But how will we know if an unauthorized person logged in, unless they start doing other obvious damage? Will you be able to detect data theft?
Sure, pick the hardest example. As far as passwords go, you could verify suspicious logins with the users themselves. Of course, in my experience, users are both lax with their passwords and quick to assert that they didn’t log in at such-and-such a time. On the data side of the problem, you could log all access to all data, then wait and see whether a compromise "shows up." This is the big problem with data – oftentimes, many people (including thieves and impostors) have "legitimate" technical access to it (in the sense that the user ID and password pair is valid).
There are, of course, many ways to detect data theft. In fact, I have been moderating panels at SecureWorld Expo on this very topic (among a number of others). The vendors on my panels all have products to catch this. Assuming you are willing to implement strong usage policies and track data usage throughout your network, you can find the numbers there as well.
This kind of thing may seem like a monumental task, but if you really care, you can do it today, and it will be trivial for everyone by 2010.
And how do you proactively (please don’t flame me more for that, n0one) calculate probabilities of things that haven’t happened to you yet?
Another good point – I am not sure that ROSI is very useful with "high impact, low frequency" events – either your company cares or it doesn’t. But I do believe this is where benchmarking comes in. Even if it hasn’t happened to you, we hear about cases all the time that we assume are "high impact, low frequency" events. If we benchmark with other companies in the industry, we can better estimate the likelihood. This means we need to get started in order to collect good data.
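To make the benchmarking idea concrete, here is a minimal sketch in Python of how pooled industry data could turn a "hasn’t happened to us yet" event into a rough annual likelihood. Every peer name and count below is invented for illustration.

```python
# Hypothetical benchmark pool: (peer company, years of data shared, incidents seen).
benchmark = [
    ("Peer A", 5, 1),
    ("Peer B", 5, 0),
    ("Peer C", 3, 1),
    ("Peer D", 4, 0),
]

total_years = sum(years for _, years, _ in benchmark)
total_incidents = sum(count for _, _, count in benchmark)

# Pooled rate per company-year -- a rough stand-in for the probability that
# any one firm in the group sees the event in a given year.
annual_rate = total_incidents / total_years
print(f"Estimated likelihood per company-year: {annual_rate:.3f}")  # 2/17, about 0.118
```

The more companies that contribute history, the less daunting these "haven’t happened to us yet" estimates become.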
If you estimate something as 2 in a million and it’s really 1 in a million, it’s still the equivalent of the security vendor charging twice as much for their product. And a 100% or greater increase in cost makes it difficult to even rank competing security measures by ROSI, never mind figuring out with certainty what it is.
I am not exactly clear what the point is here, but I will attempt to address it succinctly. First, you really need a time element to perform the comparison. A "1 in a million" event (per transaction, say) could happen every 10 minutes in a Fortune 500 company, which is very countable (I think these questions of scale are a good reason NOT to merge information risk management with operational risk management, and also why you need to normalize benchmarking data). Second, you have to figure in the amount of the loss. Once you have the time period and losses figured, you can compare them to the cost of the preventive mechanism. Third, I think you’d be pretty crazy to look into a product over this minimal difference – the percentage framing is not what matters in these calculations; in absolute terms, you’d have to lose $100 billion for that extra 1-in-a-million to amount to $100,000. Fourth, I don’t think ROSI is incredibly useful when comparing two products in the same category – you’re better off measuring TCO.
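To ground the first and third points, here is a quick Python sketch of the arithmetic. The transaction volume is an assumed, illustrative figure; the $100 billion loss is the extreme case cited above.

```python
# (1) Scale: a "1 in a million" per-event probability at Fortune 500 volume.
p_per_transaction = 1e-6
transactions_per_minute = 100_000          # assumed volume, for illustration only
expected_per_minute = p_per_transaction * transactions_per_minute
print(f"One occurrence roughly every {1 / expected_per_minute:.0f} minutes")  # ~10

# (2) Sensitivity: estimating 2-in-a-million when the truth is 1-in-a-million.
loss_per_event = 100_000_000_000           # $100 billion single-event loss
delta_probability = 2e-6 - 1e-6
expected_loss_error = delta_probability * loss_per_event
print(f"Expected-loss error from the bad estimate: ${expected_loss_error:,.0f}")  # $100,000
```

Seen in absolute dollars, a factor-of-two error in a one-in-a-million probability only starts to matter when the loss itself is astronomical.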
I think we’ve got to get these probabilities as accurate as a weatherman predicting rain before ROSI can become a useful tool.
Up until this last comment, it sounded like you wanted a very precise tool (;-)). ROSI is not that tool. It is certainly more useful today in dealing with high-frequency events and/or extreme changes in likelihood of occurrence. (Incidentally, when it comes to uncertainty, people in general care much more about moving between certainty and uncertainty – from 100% protected to 95%, or vice versa – than about what goes on within the uncertainty spectrum, say from 40% protected to 45%.)
My recommendation to Stephen is … don’t use ROSI, but start collecting the data. Under your circumstances, you are right that it wouldn’t be useful today. However, if you want it to become useful, you’d better get started, so those 1-in-a-million episodes aren’t so daunting.
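If it helps, here is one possible shape for the records you could start keeping today so the frequency and loss figures exist when you need them. This is purely a hypothetical sketch, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class IncidentRecord:
    """One row in the incident history that benchmarking and ROSI both need."""
    occurred_at: datetime     # when the event happened (or was detected)
    category: str             # e.g. "password compromise", "data theft"
    detected_by: str          # the control or person that caught it
    direct_loss_usd: float    # measured or estimated loss
    exposure_count: int       # accounts, hosts, or records at risk
```

Even a handful of these per year, pooled with industry benchmarks, gives you the likelihood and loss numbers the calculation depends on.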