In our profession, the number of vulnerabilities is often used as an indicator of the relative security of some particular product. For example, McAfee’s report about Apple security uses trends in vulnerabilities to support its position that Macs are becoming an easier target for attackers. It is not clear to me whether this vulnerability data is a leading or a lagging indicator, or how the information should be analyzed and used for strategic planning.
As a leading indicator, vulnerability information should be useful for predicting the relative security of a product; as a lagging indicator, it simply confirms that the product was insecure in the past. The latter (lagging) seems most appropriate for the nature of the information. What would be really interesting is for someone to do the work to determine whether the number of known vulnerabilities at any given point in time is useful as a predictor of future vulnerabilities. (Eric Rescorla looked at vulnerability trends to try to identify an eventual reduction, but found no identifiable downward trend. Andy Ozment tried something similar and suggested that there might be one.)
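To make the question concrete, here is a minimal sketch of the kind of test I have in mind. The quarterly counts are made-up placeholders, and the one-quarter lag and simple Pearson correlation are choices I made for illustration; a real analysis would pull per-product counts from a feed like NVD and try several lags and models.

```python
# Sketch: does the count of vulnerabilities already disclosed for a product
# help predict how many will be disclosed in the next period?
# The quarterly counts below are hypothetical placeholders; in practice they
# would come from a vulnerability feed, filtered to a single product.

from statistics import mean

quarterly_counts = [12, 9, 15, 11, 18, 14, 20, 17, 22, 19, 25, 21]  # hypothetical data

def pearson(xs, ys):
    """Plain Pearson correlation, no external dependencies."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Pair each quarter's count with the following quarter's count (lag of one).
past = quarterly_counts[:-1]
future = quarterly_counts[1:]

r = pearson(past, future)
print(f"lag-1 correlation between known and future vuln counts: {r:.2f}")
# A correlation near zero would support treating counts as a lagging
# indicator; a strong positive correlation would suggest some leading value.
```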
If the common use of vulnerability counts is legitimate (that is, more vulnerabilities equals less secure), then we really should be limiting the number we find, shouldn’t we? Yet we are told quite frequently that finding these vulnerabilities makes us more secure, not less. And adding to the complexity, any patch contributes its own relative strength or weakness to the program when it is applied (Dan Geer suggested in his monoculture paper that patches make software less secure).
Of course, all of this is pretty silly in the face of how the numbers are accrued to begin with: it is more likely an "ugly contest" with the bug finders as judges than any reasonable measure of security. What we really need is a better way to evaluate software security without resorting to these numbers.
Gunnar Peterson at 1 Raindrop mentions a Microsoft report that is extremely interesting, yet inconclusive, in its attempt to evaluate the "reliability" of software using various software complexity metrics. This is the work we need more of: work that produces a good predictor of future bugs (and of the security-bug subset) and can help us determine which software has the fewest. Then we can turn to our "return on bug investment" to determine whether it is worth it (most folks have already decided Microsoft is worth it).
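As a rough illustration of what such a predictor could look like, here is a small sketch that fits historical defect counts against per-module cyclomatic complexity. The module data, the choice of cyclomatic complexity as the lone metric, and the simple least-squares fit are all assumptions made for illustration; the Microsoft report uses its own metrics and methodology.

```python
# Sketch: complexity metrics as bug predictors.
# Per-module cyclomatic complexity and defect counts below are hypothetical;
# a real study would pull them from static analysis and a bug tracker.

from statistics import mean

modules = [
    # (cyclomatic complexity, defects found post-release)
    (5, 1), (12, 3), (30, 9), (8, 2), (22, 6), (41, 12), (15, 4),
]

xs = [c for c, _ in modules]
ys = [d for _, d in modules]
mx, my = mean(xs), mean(ys)

# Ordinary least squares for a single predictor: defects ~ a + b * complexity
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

print(f"estimated defects ~ {a:.2f} + {b:.2f} * complexity")
# If the fit is tight, complexity is doing useful predictive work; the same
# approach can be repeated for the subset of defects that are security bugs.
```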
> What we really need is a better way to evaluate software security without resorting to these numbers.
We already have one: http://www.commoncriteriaportal.org/. Assurance is a much better measurement than the lagging indicator of vulnerability counts.
Using vulnerability count data to measure how secure a product is reminds me of the villagers’ logic in Monty Python’s Holy Grail, where they attempt to determine whether the young lady is a witch. Re-read http://www.mwscomp.com/movies/grail/grail-05.htm and replace the test for being a witch with measuring insecure software. =)
Robert
@Robert -
Are you suggesting that Common Criteria uses a metric to determine security level, or simply that EAL4 (like one of the Windows variants has) or some other assurance level is useful in this regard?
@Pete
> Are you suggesting that Common Criteria uses a metric to determine security level, or simply that EAL4 (like one of the Windows variants has) or some other assurance level is useful in this regard?
Keep in mind that EAL4+ is the level of assurance you have that they met the goals outlined in their Security Target (http://niap.nist.gov/cc-scheme/st/ST_VID4025-ST.pdf). Those goals are quite low when compared with the goals of Trusted Solaris (http://www.cesg.gov.uk/site/iacs/itsec/media/sectarg/TSolaris8_Issue3.1.pdf).
You can’t compare products on the assurance metric alone without considering what it is providing assurance of. =)
Robert