It seems counterintuitive: how can making software “stronger” (as in reducing vulnerabilities) increase risk on the Internet (as in creating more incidents)? But it happens frequently. The trick to understanding this conundrum lies in thinking like an economist and not like an engineer.
Engineers are focused on quality, so when they hear about vulnerabilities in software, their immediate reaction is to want to fix them… all of them. Regardless of whose software it is. Regardless of where it’s deployed. In fact, some of them care so much that they go out seeking vulnerabilities simply to fix them. They are the type of people who are great at solving problems, but not at understanding the downstream implications of their actions.
Economists, on the other hand (get it?), look at cause and effect, actions and reactions, and, most importantly, outcomes. The root of the economic problem lies in the ultimate unwanted outcome – the breach. Economics-oriented security pros understand that everything we do is intended to thwart the breach. It is easy to lose track of unwanted outcomes amid compliance needs and operational activities, but even those activities are ultimately intended to minimize damage from attacks and exploits.
The engineer correctly believes that fixing vulnerabilities creates higher-quality (“stronger”) software. If a program starts with 300 vulnerabilities and you fix one, that obviously leaves 299 – one fewer than when it started. More importantly, if an enterprise has 1,000 systems that all share the same vulnerability and it patches 500 of them, it has decreased its attack surface by 500 vulnerable instances. From both perspectives, the level of vulnerability is, in fact, reduced.
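The engineer’s accounting can be written out in a few lines (the numbers are the hypothetical ones from the paragraph above):

```python
# The engineer's view: counting vulnerabilities before and after remediation.
# All figures are illustrative, matching the example in the text.

total_vulns = 300
after_one_fix = total_vulns - 1           # fix one flaw -> 299 remain

systems = 1000                            # systems sharing the same vulnerability
patched = 500                             # systems that received the patch
exposed_instances = systems - patched     # attack surface shrank by 500 instances

print(after_one_fix, exposed_instances)
```

By this arithmetic the engineer is right: both counts go down. The economist’s objection, developed below, is that these counts are not the quantity that determines risk.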
But the economist knows that fewer vulnerabilities is not the ultimate objective. The ultimate objective is to reduce the likelihood of an incident.
The economist understands that a key ingredient is missing from the engineer’s scenario – the intelligent adversary, aka the threat. And in pursuit of higher-quality software, the vulnerability details usually get published, lowering attack costs for the adversary. Given the scalability of technology, this typically leads to more attackers connecting to more targets, albeit within a (somewhat) smaller population of targets.
That is the key observation for this discussion – a breach requires both an attacker (threat) and a target (vulnerability), which manifests as a connection between source and destination. Even though the population of targets may be reduced (perhaps even significantly so), if the threat is sufficiently motivated, more connections can be made with the remaining vulnerable targets. The only way to guarantee reduced risk is to bring one of the populations (most likely the vulnerable targets) to zero. History shows us this is not likely with commercial software in enterprises. Interestingly, the increasingly common model of cloud-based software (e.g., Software-as-a-Service) may be able to do just that.
And there you have it – given that a breach requires both a threat and a vulnerability, reducing one doesn’t force an overall reduction in risk. And if the other population grows in the process, the marginal change in each must be evaluated to truly understand the impact. Historically, this has produced scenarios where vulnerability is reduced while risk is simultaneously increased.
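The tradeoff can be sketched as a toy model (all numbers are hypothetical, chosen only to illustrate the mechanism): treat expected incidents as attack attempts multiplied by the fraction of targets still vulnerable and a per-attempt success rate. If disclosure halves the vulnerable population but cheap, published exploit details multiply attack attempts tenfold, risk rises even though vulnerability fell.

```python
# Toy model of the vulnerability-vs-risk tradeoff. Illustrative numbers only;
# nothing here is calibrated to real incident data.

def expected_incidents(attempts, vulnerable, population, success_rate):
    """Expected breaches: attempts spread uniformly over the population,
    succeeding only against still-vulnerable targets."""
    return attempts * (vulnerable / population) * success_rate

# Before disclosure: every system is vulnerable, but few attackers know.
before = expected_incidents(attempts=100, vulnerable=1000,
                            population=1000, success_rate=0.5)

# After disclosure and patching: half the systems are fixed, but the
# published details cut attack costs, so attempts grow tenfold.
after = expected_incidents(attempts=1000, vulnerable=500,
                           population=1000, success_rate=0.5)

print(before, after)  # 50.0 vs 250.0 -> vulnerability halved, risk 5x higher
```

The point of the sketch is the marginal comparison the paragraph above describes: whether risk goes up or down depends on how the attempt count moves relative to the vulnerable population, not on the vulnerability count alone.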