Mozilla-mania and Security Metrics

It is truly fascinating to see the volume of media and Internet coverage given to Mozilla in its quest for security metrics. Contrast this with, say, NIST’s CCSS, a slightly different but related initiative that received almost no coverage…huh.

In any case, given this level of attention, Mozilla may be a good organization to push some useful metrics forward. A few comments are worth making:

  1. First, though this discussion is around “security metrics,” it is NOT oriented around “enterprise security metrics.” I would rather Mozilla call this “attack surface metrics,” but this is a common ambiguity used by many, so we can just move on after making the distinction.
  2. The functional value of this metric comes during the product selection phase, which is much less likely to be an ongoing exercise for most organizations. That means these numbers will be more useful in the public “which is more secure” wars that go on between Firefox and IE (or Windows, Linux, and Mac, for that matter). In fact, for any given software type (e.g. browser) this is unlikely to come up more than once every couple of years for most enterprises.
  3. This metric appears to be developing into a better mousetrap than Jeff Jones’ vulnerability counts, but it is likely to suffer from the same problems. See below for a discussion and some ideas and recommendations.

Designing an Attack Surface Metric

The majority of people who have studied the problem of designing an attack surface metric agree that vulnerability counts are imperfect in some important ways. More importantly, many also agree that there are various attack surface measures (most notably the Relative Attack Surface Quotient from the folks at Carnegie Mellon, but also possibly the work being done by Nachiappan Nagappan et al. at Microsoft) that hold more promise but are harder to implement.

Given all that, it is quite distressing that more promising options are available and yet nobody is willing to go there. It also suggests that we might say we care a lot even if we don’t really care A LOT. Know what I mean? (Although I applaud the effort, the impact is not likely to have much significance in the long run.)

That said, there are still useful metrics that can be derived from information we do have. We know that information includes bug counts and dates of creation, disclosure, and patch availability (and perhaps patch application, but that is a stretch). It would be even more useful if it included complexity metrics about the software itself (lines of code, cyclomatic complexity, arcs and blocks, etc.) and development effort (most importantly, time spent in development). I believe Mozilla will include some part of the former but probably can’t include the latter.
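To make this concrete, here is a minimal sketch (in Python, with entirely made-up vulnerability records and a hypothetical code-base size, since no actual data set is specified here) of how bug counts, creation and patch dates, and a simple size measure could be combined into derived numbers like vulnerability density and time-to-patch:

```python
# Minimal sketch, not Mozilla's actual schema: every record and figure below is hypothetical.
from datetime import date
from statistics import median

# Per-vulnerability records: when the flawed code shipped ("created") and
# when a fix became available ("patched").
vulns = [
    {"created": date(2008, 1, 15), "patched": date(2008, 3, 1)},
    {"created": date(2008, 1, 15), "patched": date(2008, 2, 10)},
    {"created": date(2008, 4, 2), "patched": date(2008, 5, 20)},
]

kloc = 2400  # hypothetical size of the code base, in thousands of lines of code

# Two simple derived metrics: vulnerability density (per KLOC) and the
# median number of days a flaw was live before a patch was available.
density = len(vulns) / kloc
median_days_to_patch = median((v["patched"] - v["created"]).days for v in vulns)

print(f"Vulnerability density: {density:.4f} per KLOC")
print(f"Median days from creation to patch: {median_days_to_patch}")
```

Lines of code is only one possible normalizer; the same structure would work with cyclomatic complexity or arcs and blocks if Mozilla publishes them.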

The Real Challenge w/ Vulnerability Numbers

In order to evaluate an attack surface metric using vulnerability counts, we must make a basic set of assumptions:

  1. We can’t truly know the date of first discovery. That is, we should always assume the bad guys already knew about any vulnerability that the good guys only recently discovered.
  2. We must separate threat from vulnerability – vulnerabilities can be controlled, while threats can only be controlled indirectly (through influence, for example).
  3. When a vulnerability is patched, the software becomes “more secure,” meaning that the vulnerable state of the software is reduced. There is certainly reason to consider an alternative here – that patches make software more complex and therefore more vulnerable – but I think it is reasonable to assert that a patch is much less likely to introduce more vulnerabilities into the software than it was created to address.
  4. Over time, software gets more secure and not less so. That is, if all other things were equal, software that has been available for two years should be considered more secure (less vulnerable) than software of the same functionality created two weeks ago. (This is even more debatable, but I believe it is secondary to the discussion at hand.)

You can probably tell from the assumptions above that this is going to get tricky from a metrics perspective. The conclusions you should draw are:

  1. Higher vulnerability counts in shorter time periods make software more secure (again, less vulnerable), because each vulnerability found and patched removes a flaw that was already present in the code.
  2. Date of disclosure impacts threat, not vulnerability, and should therefore be ignored for an attack surface metric. The appropriate date is the creation date — that is, the day the software went “live” with the vulnerability intact.
  3. A “vulnerability level” curve should be sloping downward and not upward; a toy sketch of such a curve follows below. (Good luck with this one ;-) ).
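As a toy illustration of conclusions 1–3, here is a sketch that assumes a hypothetical latent flaw count present at the software's creation date and subtracts each vulnerability as its patch becomes available. The latent estimate and all dates are made up for illustration, and this is not the SVR algorithm mentioned below:

```python
# Toy "vulnerability level" curve: assumes a hypothetical number of latent flaws
# at the creation date (the post does not say how to estimate this) and treats
# each available patch as removing one of them, so the curve slopes downward.
from datetime import date

latent_at_creation = 50  # assumed for illustration only

# Dates on which patches became available; disclosure dates are deliberately
# ignored, per conclusion 2 above.
patch_dates = [date(2008, 2, 10), date(2008, 3, 1), date(2008, 5, 20)]

def vulnerability_level(as_of: date) -> int:
    """Estimated vulnerabilities still present in the software as of a given date."""
    patched = sum(1 for d in patch_dates if d <= as_of)
    return latent_at_creation - patched

for check in (date(2008, 1, 1), date(2008, 3, 15), date(2008, 6, 1)):
    print(check, vulnerability_level(check))  # 50, then 48, then 47: sloping downward
```

The curve only slopes downward here because the model never adds flaws; as noted in assumption 3, patches can in principle introduce new vulnerabilities, which is exactly what would complicate this picture.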

In the past, I made an attempt to address these challenges when I created the Spire Vulnerability Rating (SVR) metric. I gladly contribute the name and the algorithm to the Mozilla security metrics project. (Related post here: http://spiresecurity.typepad.com/spire_security_viewpoint/2007/12/why-we-need-the.html).

Now, go forth, and multiply! (or is that divide? ;-) ).

2 comments for “Mozilla-mania and Security Metrics”

  1. Arthur
    July 11, 2008 at 11:13 am

    One of the things I really like about the Mozilla metrics project is that it includes classifications for the type of vulnerabilities and where they were found in the engineering process. This is great stuff, because finally there will be some real data on how well various parts of a major software development process actually work and where further work needs to be done.

  2. Pete
    July 11, 2008 at 11:25 am

    @Arthur -

    I agree that would be useful. That is the kind of info I am pushing Microsoft to publish regarding its SDL.
