The security profession has been debating vulnerability disclosure policies for years. The debate has heated up again with the latest Adobe "zero-day" (a true undercover vulnerability, I believe) resulting in specifics being published on Sourcefire's VRT blog, some concerned comments, and a blog post on Metasploit.
The arguments for disclosure first tug at the heartstrings with simplistic platitudes like "it is better to know than not to know," but then ground themselves a bit with the following logic:
- The bad guys already have this information
- The good guys need the information to protect themselves
There you have it – a classic fight between good and evil. But even this is incredibly simplistic. It treats the two groups as monolithic entities rather than dynamic populations, and that matters a lot when evaluating risk.
Essentially, folks who support higher levels of quicker disclosure are betting that good guys can and will respond faster and more completely than the bad guys can attack. With discrete groups this may be true, but with dynamic populations I am not so sure.
Risk is a function of threats, vulnerabilities, and consequences. The variance in these elements is constrained by scarce resources on both the attacker side and the defender side.
The attacker makes his decisions based on a cost-benefit analysis that compares costs – skill, effort, and equipment – to the expected benefit discounted by potential penalties (the attacker's risk equation). The higher the result of this equation, the higher the risk to an organization (because threat is higher).
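The attacker's calculus described above can be sketched as a simple expected-value calculation. This is only an illustration of the idea, not a model from the post; the function name, parameters, and all figures are hypothetical assumptions.

```python
# Hypothetical sketch of the attacker's cost-benefit calculus.
# All names and numbers are illustrative assumptions, not real data.

def attacker_payoff(benefit, p_success, cost, penalty, p_caught):
    """Expected value of an attack: the benefit discounted by the odds
    of success, minus the attacker's costs (skill, effort, equipment)
    and the expected penalty if caught."""
    return benefit * p_success - cost - penalty * p_caught

# Disclosure that includes exploit details lowers the attacker's cost
# and raises the chance of success, increasing the threat to defenders.
before = attacker_payoff(benefit=10_000, p_success=0.2,
                         cost=5_000, penalty=50_000, p_caught=0.01)
after = attacker_payoff(benefit=10_000, p_success=0.6,
                        cost=500, penalty=50_000, p_caught=0.01)
assert after > before  # the same attack becomes more attractive
```

The point of the sketch is only that the result of this equation, and hence the threat level, moves when disclosure changes the cost and success terms.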
The defender makes a ROSI (return on security investment) assessment (typically ad-hoc) to determine her overall risk. The lower the cost of protection, the more likely that investment is a good one.
Finally, we shouldn't forget opportunity cost which compares these results to anything else the attacker or defender might want to do.
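The defender's side can be sketched the same way, using one common ROSI formulation (risk reduction net of the control's cost, relative to that cost). Again, the function and all figures below are invented for illustration, not taken from any real assessment.

```python
# Hypothetical sketch of a ROSI (return on security investment) check.
# All figures are invented for illustration.

def rosi(ale_before, ale_after, cost_of_control):
    """One common ROSI formulation: the reduction in annualized loss
    expectancy achieved by a control, net of the control's cost,
    relative to that cost."""
    risk_reduction = ale_before - ale_after
    return (risk_reduction - cost_of_control) / cost_of_control

# Invented figures: a control cuts expected annual loss from $100k to
# $20k and costs $30k to deploy.
r = rosi(ale_before=100_000, ale_after=20_000, cost_of_control=30_000)
# r > 0 means the investment beats doing nothing; comparing it against
# the ROSI of alternative spends captures the opportunity cost.
```

The lower the cost of the control relative to the risk it removes, the better this number looks, which is the intuition behind "the lower the cost of protection, the more likely that investment is a good one."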
Looking again at the disclosure reasoning, the question is whether releasing more information helps the good guys or the bad guys more. The "bad guys already have this information" argument neglects the acquisition cost of this information and the skill level required to execute.
A basic illustration of the cost associated with "effort": some of you were no doubt a bit annoyed as you read my first paragraph above, because you wanted to see the source material in question and it didn't include links to the pertinent Sourcefire and Metasploit blog posts. Of course, you "already had" this information, in the sense that you could use a search engine to find it. Links are part of Web culture because we recognize that time is money, and making things a bit easier for the reader lowers his/her costs.
So the short point is that distribution of information, and its corresponding ease of access, matters.
The "good guys need this information for protection" argument is perhaps trickier. The vast majority of Internet users have no use for the information because they have no capacity to leverage it for protection. (Guys like HD Moore can do wonders with it, of course.) Those users rely on the makers of products, who DO need this information, to provide protection.
It is clear from this case that many large security companies already had the information (they already had samples), so the added benefit to the "good guy" community must be adjusted with that information in mind.
In the end, I think it is less likely that good guys used this information for protection than that bad guys used it to compromise users. I believe this is almost always the case, and my evidence is the aggregate number of exploits that occur after disclosure compared with the number of exploits of undercover vulnerabilities.