Software Security Labels: Should we throw in the towel?

Eric Rescorla at Educated Guesswork has his say on software facts labels. He doesn’t appear to be very optimistic. I can appreciate his view on the challenges here, and I generally agree with his very detailed analysis, but I have to say that I WANT SOMETHING LIKE THIS SO BAD I CAN TASTE IT.

Here’s the problem: in the absence of objective data points that describe vulnerability potential, we are stuck with the silly model we have now of counting known vulns, which really only measures how much the bug finders dislike a particular vendor, plus the bug finders’ relative skill level. In addition, there are smart people recommending the horribly bad idea of software liability regulations, and something like a software facts label may be a reasonable alternative to that.

The worst part of Rescorla’s posting was simply that he didn’t appear to want in on the conversation except to point out its likely failure. (When important people suggest that something will fail, it usually does in the short term, not for the reasons they believe, but simply because they didn’t believe in it. Luckily, over time someone will prove them wrong and they will miss the boat. We all lose in the meantime because it takes twice as long to accomplish.)

I believe that the problem is one that demands a solution. And I am willing to endure failed attempts while we determine whether the correlation between whatever we count and vulnerabilities is actually there. If there ends up being no correlation, then things will get interesting with our root cause analysis.

So the question is: what is it about software that makes it more or less vulnerable? One easy hypothesis, made many times in the past, is that complexity drives insecurity. So I think that the number of "qualified" lines of code (non-comment lines) or function points may be useful. In addition, the concept of cyclomatic complexity has been around since 1976:

Cyclomatic complexity is the most widely used member of a class of static software metrics. Cyclomatic complexity may be considered a broad measure of soundness and confidence for a program. Introduced by Thomas McCabe in 1976, it measures the number of linearly-independent paths through a program module. This measure provides a single ordinal number that can be compared to the complexity of other programs. Cyclomatic complexity is often referred to simply as program complexity, or as McCabe’s complexity. It is often used in concert with other software metrics. As one of the more widely-accepted software metrics, it is intended to be independent of language and language format [McCabe 94].
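To make those two counts concrete, here is a minimal sketch (Python chosen purely for illustration; nothing here is prescribed by the label idea) that tallies non-comment lines of code and approximates McCabe’s measure as one plus the number of decision points in each function:

```python
# Rough illustration of two candidate complexity metrics for a Python source
# file: "qualified" (non-blank, non-comment) lines of code, and a McCabe-style
# cyclomatic complexity estimate per function.
import ast
import sys


def qualified_loc(source: str) -> int:
    """Count non-blank lines that are not pure comments."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )


def cyclomatic_complexity(func: ast.AST) -> int:
    """Approximate McCabe complexity as decision points + 1."""
    decisions = 0
    for node in ast.walk(func):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.IfExp,
                             ast.ExceptHandler, ast.Assert)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # each extra and/or operand adds another path
            decisions += len(node.values) - 1
    return decisions + 1


if __name__ == "__main__":
    source = open(sys.argv[1]).read()
    tree = ast.parse(source)
    print(f"qualified LOC: {qualified_loc(source)}")
    for func in ast.walk(tree):
        if isinstance(func, (ast.FunctionDef, ast.AsyncFunctionDef)):
            print(f"  {func.name}: complexity {cyclomatic_complexity(func)}")
```

Real complexity checkers do this more carefully, but even a rough count like this would be enough to start testing whether the numbers correlate with vulnerabilities.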

I also believe that the methodology laid out in "Threat Modeling" may be of use here. Their concept of Entry Points (aka Attack Points) may be a useful item to count. Entry points include any interface a program has with the outside world (I assume on the receiving side); I interpret this as human and system interfaces, so it would include all inputs and APIs.
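As a sketch of what counting entry points might look like in practice, here is a hypothetical inventory; the categories, field names, and example entries are my own illustration, not something taken from the book:

```python
# Hypothetical entry-point inventory in the "Threat Modeling" sense: every
# receiving interface the program exposes to the outside world. The categories
# and example entries are invented for illustration.
from dataclasses import dataclass
from enum import Enum


class Interface(Enum):
    NETWORK = "network socket"
    FILE = "file or data format parsed"
    API = "exported API / IPC endpoint"
    HUMAN = "human input"


@dataclass
class EntryPoint:
    name: str
    interface: Interface
    crosses_trust_boundary: bool  # reachable by untrusted callers?


# Example inventory for an imaginary web application.
entry_points = [
    EntryPoint("HTTPS listener on :443", Interface.NETWORK, True),
    EntryPoint("uploaded image parser", Interface.FILE, True),
    EntryPoint("plugin hook API", Interface.API, False),
    EntryPoint("admin console", Interface.HUMAN, False),
]

exposed = sum(1 for ep in entry_points if ep.crosses_trust_boundary)
print(f"{len(entry_points)} entry points, "
      f"{exposed} reachable from outside the trust boundary")
```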

It does make sense to consider the control infrastructure, i.e. whatever mitigating mechanisms are in place, like input validation techniques, but I think I am willing to forgo that for now. In addition, defect density estimation is pretty compelling, with an opportunity to use fault injection and/or independent testing groups to estimate how many defects may still be in the program. Again, I am willing to forgo this level of depth until we get our feet wet.
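For the defect density idea, one classical approach (my suggestion, not something prescribed above) is capture-recapture: have two independent testing groups hunt for defects and use the overlap in their findings to estimate how many remain, per the Lincoln-Petersen estimator:

```python
# Lincoln-Petersen capture-recapture estimate of total defects from two
# independent testing groups. The choice of this estimator is an assumption;
# the post only mentions fault injection / independent testing in general.

def estimate_total_defects(found_a: set, found_b: set) -> float:
    """Estimate total defects from two independent samples of findings."""
    overlap = len(found_a & found_b)
    if overlap == 0:
        raise ValueError("no overlap between samples; cannot estimate")
    return len(found_a) * len(found_b) / overlap


# Example: group A finds 20 defects, group B finds 15, and 6 are found by both.
group_a = {f"a{i}" for i in range(14)} | {f"shared{i}" for i in range(6)}
group_b = {f"b{i}" for i in range(9)} | {f"shared{i}" for i in range(6)}

total = estimate_total_defects(group_a, group_b)       # 20 * 15 / 6 = 50
remaining = total - len(group_a | group_b)             # 50 - 29 = 21
print(f"estimated total ~{total:.0f}, still undiscovered ~{remaining:.0f}")
```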

Notice I didn’t say anything about programmer skill, coding time, testing time, etc. I think these may be useful, but I am betting that my initial set is going to be a better indicator of vulnerability. (It seems too easy to report extended coding hours with no resulting improvement in quality.)

An alternative to the software facts label may be a pet project of mine, the "software safety data sheet" (see here, here, and here). Rather than counting things, the SSDS simply enumerates code paths and specifically identifies all entry points, in the same manner as an aftermarket host intrusion prevention product does. It is not metrics oriented, per se, but it is another way to secure programs.
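To give a feel for the SSDS idea, here is a very rough sketch of what such a sheet might contain; the format, field names, and entries are purely hypothetical:

```python
# Hypothetical "software safety data sheet": not metrics, but an explicit
# enumeration of a program's entry points and expected behavior, the way an
# aftermarket host intrusion prevention product profiles an application.
# All field names and entries here are invented for illustration.
import json

ssds = {
    "product": "example-daemon 1.2",
    "entry_points": [
        {"kind": "tcp-listen", "where": "0.0.0.0:8080"},
        {"kind": "file-read", "where": "/etc/example-daemon.conf"},
    ],
    "expected_behavior": [
        {"kind": "outbound-tcp", "where": "db.internal:5432"},
        {"kind": "file-write", "where": "/var/log/example-daemon/"},
    ],
}

print(json.dumps(ssds, indent=2))
```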

Whatever we do, I think it should be kept simple initially so we can determine whether the data bears out the hypothesis. Now we need some software architects to weigh in on which candidates are best.

1 comment for “Software Security Labels: Should we throw in the towel?”

  1. Steve Christey
    October 27, 2005 at 4:33 am

    I feel your pain regarding solid metrics for measuring software vulnerability. The bias in vulnerability research can be unintentional as well. There are approximately 300 different flavors of vulnerabilities by my count, yet most researchers only focus on 10 or 20. Every researcher has their own preference or skill area. The public record of vulnerabilities is nowhere near reliable, but it’s the best we have for now. The best clues are how complex the discovered vulnerabilities are (you rarely see a 100% obvious buffer overflow in a major software package these days – most overflows involve multiple bugs, complex attack scenarios, or previously unexplored functional areas such as file formats).

    The research community needs to establish reasonably formal analysis methods that are repeatable and comprehensive, then deploy those methods on multiple products, and publish their results. Only then can you possibly compare the security of products. At least it would be a start.

    Steve Christey
    CVE Editor
