Can your (unchanged) source code get more vulnerable over time?

Emergent Chaos has a good post today about how risk changes over time: Evolve or Die. In it, arthur makes the point:

"…As a result, the source code has naturally regressed and become more vulnerable over time, much like a piece of machinery wears out over time."

While I agree that the code can become "insecure" over time (i.e. risk increases), I disagree with the reason why. Since the source code has not changed, I believe it is impossible for the source code to "regress" and for its vulnerability level to increase. That is, any new vulnerabilities that may be found, even many years later, have always been there (again, assuming no other changes), and the "nefarious hacker" may already have been aware of them and possibly even exploited them.

What changes over time is the threat level. A vulnerability in code exists even if it isn’t identified; we can’t necessarily prove or quantify it (though it may be possible to estimate it). Once it is found, the risk increases because the threat becomes non-zero. New techniques for finding vulnerabilities (e.g. fuzzers) can increase that threat even more by lowering the effort required to find a vulnerability.

So, why does this matter? First, it is important to note that your software isn’t getting more vulnerable – you have been vulnerable all along and just didn’t know it. Second, it highlights the significant impact that discovery and disclosure have on risk. And third, it is a good way to demonstrate how risk changes over time.
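
To make that concrete, here is a minimal sketch in Python. It uses the familiar risk = threat * vulnerability * consequence form, and every number in it is a made-up, illustrative assumption rather than a measurement; the only point is that the vulnerability term stays flat while the threat term (and therefore the risk) jumps at discovery and disclosure.

    # Illustrative only: risk = threat * vulnerability * consequence, with made-up values.
    # The "vulnerability" term is constant because the source code never changes;
    # only the threat term moves as knowledge of the flaw spreads.

    def risk(threat: float, vulnerability: float, consequence: float) -> float:
        """Toy multiplicative risk score (all inputs on arbitrary 0-to-1 scales)."""
        return threat * vulnerability * consequence

    VULNERABILITY = 1.0   # the flaw has been in the code since it shipped
    CONSEQUENCE = 0.8     # assumed business impact if exploited (unchanged over time)

    # Hypothetical timeline: the threat term is near zero while nobody knows about
    # the flaw, then jumps at discovery/disclosure, and again when exploitation
    # gets cheaper (e.g. fuzzers or PoC code).
    timeline = [
        ("code ships (flaw present, unknown)",    0.01),
        ("researcher finds the flaw",             0.20),
        ("public disclosure",                     0.50),
        ("PoC / fuzzer makes exploitation cheap", 0.90),
    ]

    for event, threat in timeline:
        score = risk(threat, VULNERABILITY, CONSEQUENCE)
        print(f"{event:42s} threat={threat:.2f}  risk={score:.2f}")

Run it and the risk column climbs across the timeline even though nothing about the code, i.e. the vulnerability term, ever changes.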

4 comments for “Can your (unchanged) source code get more vulnerable over time?”

  1. Erik
    August 30, 2007 at 2:40 am

    I agree with you that the number of vulnerabilities in a piece of code can’t change over time.

    I don’t agree with you that what makes the code insecure over time is a raised threat level. The threat level might increase over time, but it could just as well decrease. What is more likely to change over time is the number of KNOWN vulnerabilities, i.e. more of the vulnerabilities in the code will be discovered over time. The threat can be high even though there are no known vulnerabilities in a piece of code.

    An interesting question that comes to mind now is which vulnerability level should be used in a risk calculation (risk = threat * vulnerability * consequence). Should we use the number of known vulnerabilities or an estimate of the total number of vulnerabilities (known + unknown)? I would prefer the latter.

  2. Pete
    August 30, 2007 at 8:55 am

    @Erik -

    I agree that the number of known vulns changes. But if we both agree that the number of known vulns does change, then what does it matter? It is the *known* part that matters. The more people who know about a vulnerability, the higher the probability that it will get exploited (all other things being equal).

    That increased knowledge of vulns is what increases the threat: more people know about the flaw, and the cost of exploitation often goes down (e.g. no cost to find the vuln, PoC code may be released, etc.).

  3. August 31, 2007 at 11:01 am

    After years of observing these circular discussions, I wonder why there is not more focus on technology that reduces the opportunity to act on vulnerabilities in the first place.

    In the usual risk equation:

    risk = threat * vulnerability * consequence

    vulnerabilities => threats => consequences

    and the usual focus has been on identifying and patching vulnerabilities.

    However, if you remove the ability to act on vulnerabilities, you reduce threats and consequences, and therefore risk, yet the vulnerabilities are still present.

    We do just that with Trustifier by separating the root user from the system, isolating running applications or services, and using just-in-time privilege escalation, such as making net_bind a one-shot deal for a web server (as an example).

  4. September 4, 2007 at 3:00 pm

    I question the usefulness of “Risk = threat X vuln X impact”. In fact, I question the usefulness of the modern definition of “vulnerability” for the reasons that Peter outlines above.

    What we’re really concerned with, if we’re going to make a probability statement around risk, is our ability to withstand the force applied by a threat agent (borrowed from FAIR – Threat Capability vs. Control Strength), the frequency with which we’ll see that threat act against us, and then the impact to us. The code itself is static. The only thing that changes is our (its) ability to withstand the force applied.

    Think of it this way: 56-bit DES is still as strong as it was in 1994. The only thing that has changed is our ability to apply force to it… The known vulnerabilities only increased because of the force the attackers were able to apply against it.
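
As a rough back-of-the-envelope on that DES comparison: the 56-bit keyspace has been fixed at 2^56 keys all along; only the key-search rate an attacker can afford has grown. The rates in the sketch below are order-of-magnitude assumptions for illustration, not figures from the comment.

    # Rough illustration: the DES keyspace is fixed; only the attacker's search rate grows.
    KEYSPACE = 2 ** 56  # ~7.2e16 possible 56-bit DES keys, unchanged since the standard

    # Assumed, order-of-magnitude key-search rates (keys/second) for different eras.
    rates = [
        ("mid-1990s software search (assumed)",     1e6),
        ("late-1990s dedicated hardware (assumed)", 9e10),
        ("modern GPU/FPGA cluster (assumed)",       1e12),
    ]

    for era, keys_per_second in rates:
        full_sweep_days = KEYSPACE / keys_per_second / 86_400
        print(f"{era:42s} full keyspace in ~{full_sweep_days:,.1f} days")

The cipher, like the unchanged source code in the post, stayed exactly the same; what collapsed was the cost of applying force against it.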
