About three weeks ago, I posted my initial attempt at an objective measurement system that would (perhaps only incrementally) provide a better measure of the "security" of a system than existing approaches like raw vulnerability counts and Days of Risk.
Here is the SVR:

Vulnerability Rating (SVR) = sum(DOL) / (TD – DOB)
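For concreteness, here is a minimal Python sketch of how I read that formula, assuming DOL is each vulnerability's days of life (the GA-to-patch span discussed below), DOB is the product's general-availability date, and TD is today's date; the function and variable names are mine, purely for illustration:

from datetime import date

def svr(vuln_lifetimes_days, ga_date, today=None):
    """Vulnerability Rating: total vulnerability-days divided by product age in days.

    vuln_lifetimes_days -- list of DOL values, one per vulnerability
                           (days each vulnerability was "alive", e.g. GA to patch)
    ga_date             -- the product's general-availability date (DOB)
    today               -- evaluation date (TD); defaults to today's date
    """
    today = today or date.today()
    product_age_days = (today - ga_date).days
    return sum(vuln_lifetimes_days) / product_age_days

# Hypothetical numbers, purely for illustration
print(svr([120, 45, 300], ga_date=date(2007, 1, 30), today=date(2008, 1, 30)))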
A post by George Ou on ZDNet’s "Zero Day" blog provides a good example of why I believe my rating is better than raw number counts. Ou counted the vulnerabilities for Windows XP and Windows Vista and compared the total to the vulnerabilities for Mac OS X versions 10.4 and 10.5. It is not entirely clear what conclusions he draws from the data:
So this shows that Apple had more than 5 times the number of flaws per month than Windows XP and Vista in 2007, and most of these flaws are serious. Clearly this goes against conventional wisdom because the numbers show just the opposite and it isn’t even close.
I think he is suggesting that conventional wisdom would assert that Macs are more secure than Windows systems, and yet this analysis proves otherwise. That is, because Windows had far fewer vulnerabilities, it is more secure. And in fact, Ou makes the following assertion:
The more monthly flaws there are in the historical trend, the more likely it is that someone will find a hole to exploit in the future.
This is a fairly common mistake to make, and it is provably wrong. Intuitively, it had better not be true, or else we shouldn’t be seeking out new vulnerabilities at all (of course we should, but for threat-based reasons, not this one). Logically, it can’t be true: even if we found every vulnerability there is, this statement says many more would still be on the way.
Since there is a finite number of vulnerabilities in any software program at any given time, reducing the population of unknown vulnerabilities also reduces our chances of finding the remaining ones in that program at the same level of effort, simply because there are fewer left to find. (By the way, even though this holds true in an absolute sense, in practical terms the n-1 reduction is probably insignificant over the lifetime of a program; the pool of vulnerabilities is still large enough to provide ripe pickings. A quick sketch below puts toy numbers on this.)
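To put a toy number on that point (the figures are invented, not data): if discovery odds per unit of effort are proportional to how many vulnerabilities remain, removing part of the pool lowers the odds only modestly.

# Toy model, not data: a fixed pool of latent vulnerabilities, with the
# chance of finding one per unit of effort proportional to how many remain.
initial_pool = 1000        # hypothetical latent vulnerabilities at GA
found_and_patched = 100    # hypothetical vulnerabilities found (and patched) so far

remaining = initial_pool - found_and_patched
relative_odds = remaining / initial_pool

# Each find shrinks the pool, so the odds of the next find go down...
print(f"Relative odds of the next find: {relative_odds:.0%}")   # prints 90%
# ...but with 900 still unfound, the pickings remain ripe in practice.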
If we make two basic assumptions: 1) that finding (and patching) vulnerabilities makes our software stronger, and 2) that over time the trend for finding unknown, unpatched vulnerabilities in a program is a downward one, then the only thing this data might tell us is that Macs are more secure, not less secure, than Windows systems, assuming that "secure" equates to a lower number of remaining vulnerabilities since the software became generally available.
Of course, this entire analysis is not very useful to begin with — the numbers are more like votes in an ugly contest than something we can leverage for security.
So far, we’ve ignored the threat component of risk (and security) entirely. It isn’t unreasonable to think of the vulnerability numbers more in terms of threat than vulnerability, given the huge increase in threat-risk once the vulns are disclosed, but even that is pretty tricky; for example, it assumes that the number of interested attackers is the same for each OS.
The SVR is an attempt to address the weaknesses in these types of analyses. It factors in the entire lifetime of each vulnerability (from GA to patch) and normalizes by the age of the product, so the rating declines over time when no new vulnerabilities appear.
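Reusing the svr() sketch from above, with made-up numbers, shows the difference this makes: two products with the same raw vulnerability count can come out very differently once lifetimes and product age are factored in.

from datetime import date

# Assumes the svr() helper sketched earlier; all figures are invented for illustration.
product_a = dict(vuln_lifetimes_days=[10, 14, 7],     # three flaws, patched quickly
                 ga_date=date(2006, 1, 1))
product_b = dict(vuln_lifetimes_days=[200, 365, 90],  # three long-lived flaws
                 ga_date=date(2007, 6, 1))

as_of = date(2008, 2, 1)
for name, p in (("A", product_a), ("B", product_b)):
    rating = svr(p["vuln_lifetimes_days"], p["ga_date"], today=as_of)
    print(f"Product {name}: raw count = 3, SVR = {rating:.3f}")

Same raw count, but the product whose flaws lingered longer relative to its age scores much worse.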
I guess the next step is to collect the data and crunch the numbers. Maybe I’ll have time to do this over the next month.