Disclosing the Elephant in the Room of the Disclosure Debate

There has been a lot of discussion lately about vulnerability disclosure, with both Google and Microsoft weighing in with their latest opinions on the topic.

There is really nothing new here, as evidenced by the Google folks referencing a nine-year-old Bruce Schneier essay on the subject. I have written extensively about disclosure and the related issue of software liability over the years (some posts are highlighted below), and I get castigated quite a bit for pointing out some fairly obvious points. I believe these points are important and often ignored, so I will go ahead and point them out again: there is a big elephant in this room, and I think it is the real reason that folks are constantly at odds with one another.

So, here’s the elephant: vulnerability disclosure of any kind (full, responsible, irresponsible, coordinated, uncoordinated, whatever) is not working and hasn’t been working since Bill Gates’ Trustworthy Computing memo of 2002. If you think about it, the remarkable thing about that memo is that it effectively neutralized everything that was to follow in disclosure (including the debate as it stands today), because it was a major acknowledgement of the problem from a huge company. Ever since then, nobody has been able to articulate the long-term strategic benefit of vulnerability disclosure (and for good reason). Even worse, there is no evidence of benefits anywhere, other than to the bugfinders themselves (though certainly this can cut both ways). Let’s face it: the truth of the matter, and the reason for all the debate, revolves around respect, fame, and competitive advantage, not around bringing about a safer Internet. Please let me explain.

The reason something as simple as a memo could have such an effect is that bugfinders never really had a strategic mission to begin with. Let’s face it, the only thing a bugfinder wants is for the particular bug he or she happened to find to be fixed in what he or she believes is a timely manner. I don’t know where this fits in Maslow’s hierarchy of needs, but it ranks very low unless, of course, you factor in self-esteem (in the form of respect and fame). In any case, finding and fixing a single vulnerability is an extremely minor exercise (relatively speaking) with a huge downside relating to the scalability of the threat.

Perhaps the more interesting development in this arena is that we now have an independent researcher who works for a large company and therefore has heavy influence due to both his technical skill and his employment status. The circumstances where large companies target each other (and I assume everyone agrees with the Google Security statement that even if Tavis Ormandy was working independently, they fully supported his actions) are even more complicated. The most interesting problem relates back to this lack of strategic purpose for disclosure: if a company the size of Google is spending time finding vulnerabilities in its competitors’ products, it seems reasonable to me that it should first have found every single vulnerability in its own products. The principle of comparative advantage should be put to work here. In addition, large companies should have a better sense of their altruistic objectives, unless there aren’t any.

Although we know that the lack of cohesion among participants muddies the waters for strategy (and thus we are stuck spinning our wheels, dealing with the whims of bugfinders), the most obvious reason for finding vulnerabilities is to enhance software quality and increase security. Amid this noble goal, the second half of the statement often gets ignored. I think this is because people assume a correlation where there is none. That is, enhanced software quality with respect to vulnerability discovery and disclosure does NOT increase security, at least in the short term. The real objective is a derivative of an interest in reducing the number of compromises.

So, how can disclosing a vulnerability (followed, presumably, by the availability of a patch) reduce security? Simple: although the opportunity exists for individuals to reduce their vulnerable state, so many people can’t or don’t that the increase in threat significantly multiplies the number of incidents that occur. That is, vulnerability disclosure completely ignores the threat component of the risk equation in the short run.
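To make that concrete, here is a minimal sketch using the common risk formulation (expected incidents ≈ threat × number of still-vulnerable systems). All of the numbers and the `attack_rate`/`patch_rate` parameters are hypothetical, chosen only to illustrate how a post-disclosure jump in threat can swamp the benefit of partial patching; this is not a claim about any real product or dataset.

```python
# Hypothetical illustration: expected incidents before and after disclosure.
# Risk is modeled simply as attack_rate (threat) * still-vulnerable systems.

def expected_incidents(attack_rate: float, systems: int, patch_rate: float) -> float:
    """Expected incidents = threat * number of systems that remain unpatched."""
    return attack_rate * systems * (1.0 - patch_rate)

SYSTEMS = 1_000_000

# Before disclosure: few attackers know about the bug, and nobody has patched.
before = expected_incidents(attack_rate=0.0001, systems=SYSTEMS, patch_rate=0.0)

# After disclosure: a patch exists and a majority apply it, but the bug is
# now public, so the attack rate jumps by two orders of magnitude.
after = expected_incidents(attack_rate=0.01, systems=SYSTEMS, patch_rate=0.6)

print(f"before disclosure: {before:.0f} expected incidents")
print(f"after disclosure:  {after:.0f} expected incidents")
```

Even with a 60% patch rate, the modeled incident count rises from roughly 100 to roughly 4,000, which is the short-run effect the paragraph above describes: the drop in the vulnerable population is overwhelmed by the rise in threat.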

This short-run focus on vulnerabilities rather than threat might seem acceptable because we have much less control over that aspect of risk, but we do have significant indirect influence. Clearly, the risk to unpatched (or otherwise unprotected) systems goes way up, because disclosure significantly reduces the “costs” to any attacker, and we know from history that incidents increase dramatically after disclosure. One quick aside: with the evolution of today’s technical architectures toward SaaS and other cloud-based applications, it is worth pointing out that these circumstances of increased risk do not apply to environments where one entity can guarantee that every instance of a software program has been properly patched.

As for the long run, there are many more developers than there are bugfinders, and every day we create many more vulnerabilities than we find. It does not appear that developers are creating fewer vulnerabilities as a result of the disclosure effort, nor does the world have fewer vulnerabilities. One approach is to assert that we should significantly increase our bugfinding efforts… except that we have done that as well, with the introduction over the years of newer and better automated solutions. No, the real way to address these problems is to think outside the box for a solution: all manner of trusted computing and its derivatives, for example.

Perhaps the biggest failing of vulnerability disclosure is that we completely ignore the externalities in this situation: the billion or so users of these various products. This spiteful approach is often justified with wolves-and-sheep reasoning that is quickly brought to its knees by considering all the good people in our own networks of friends and neighbors who shouldn’t need to be software engineers just to surf the Internet. These users are frequently victims of the increased risk we are artificially creating in their environments, unbeknownst to them.

One thought exercise that might be interesting here is to imagine what would happen if, all of a sudden, nobody disclosed any vulnerabilities. In fact, nobody (at least none of the good guys) even looked for vulnerabilities. The typical response is to suggest that software would get even shoddier and the bad guys would have their way with us without our ever knowing about it. I would suggest to you that this is complete and utter rubbish.

My version of this thought exercise is that people would work harder to further the goals of trusted computing, because the stakes would be higher and more funds would be available. They would develop better monitoring tools to catch even more undercover exploits than are already being caught. They would put even MORE pressure on software manufacturers when compromises were discovered. Even now, we discover and respond to “undercover exploits” more quickly than we do publicly disclosed vulnerabilities, and I think we can get even better at it. Make no mistake: given that we are only finding a small fraction of existing vulnerabilities, there is nothing keeping the bad guys from finding and exploiting unknown vulnerabilities today, so it isn’t as though our current process is helping there.

I have the utmost respect for many bugfinders, and I believe many of them have great intentions. But they are attempting to haphazardly run across the battlefield while the bad guys pick them off from sniper posts, infiltrate their ranks, or simply choose another battlefield that is unoccupied. There is no chance at victory fighting the battle this way.

[Here is a list of previous posts, essays, and articles I have written about vulnerability disclosure. It is worth mentioning that though I stand by my facts and opinions, I am not always proud of the emotional pieces - I hold the utmost respect for a number of the folks I took shots at. I still disagree with their opinions, though ;-) ]

10/11/04: The Folly of Vulnerability Seeking

11/13/04: The Folly of Vulnerability Seeking (follow-up to my searchsecurity article of the same name)

4/1/05: The Dead Horse Lives

8/8/05: More, more, more (Vuln Research)

8/17/05: The Long-Term Impact of Vulnerability Research: Public Welfare

10/30/05: I’ll bite: Feel free not to be so helpful

11/2/05: To sue is human, to err denied (one of my favorite titles ;-) )

11/7/05: A New Litmus Test for Security Companies

2/20/06: I waffle slightly (I think)

3/7/06: More Turtles!

3/24/06: Why Bugfinding is Irresponsible and Increases Risk

3/31/06: More on Bugfinding

8/3/06: How Microsoft Reduces Risk (where I introduce a new risk equation)

9/6/06: It Ain’t Over ’til it’s Over

9/6/06: Now it’s Over (For Now)

5/16/07: More Sex is Safer Sex

9/24/08: On Vulnerability Rediscovery

2/25/09: The Disclosure Race Condition

3/4/09: The Other Side of Full Disclosure

7/13/09: Exploiting Undercover Vulnerabilities