The Disclosure Race Condition

The security profession has been debating vulnerability disclosure policies for years. The debate has heated up again with the latest Adobe "zero-day" (a true undercover vulnerability, I believe), which resulted in specifics being published on Sourcefire's VRT blog, some concerned comments, and a post on the Metasploit blog.

The arguments for disclosure first tug at the heartstrings with simplistic platitudes like "it is better to know than not to know" but then ground themselves a bit with the following logic:

  1. The bad guys already have this information
  2. The good guys need the information to protect themselves

There you have it: a classic fight between good and evil. But even this is incredibly simplistic. It treats the two groups as monolithic entities rather than dynamic populations, and that matters a lot when evaluating risk.

Essentially, folks who support quicker and broader disclosure are betting that the good guys can and will respond faster and more completely than the bad guys can attack. With static, well-defined groups this may be true, but with dynamic populations I am not so sure.

Risk is a function of threats, vulnerabilities, and consequences. The variance in these elements is constrained by scarce resources on both the attacker side and the defender side.

The attacker makes his decisions based on a cost-benefit analysis that compares costs – skill, effort, and equipment – to the expected benefit discounted by potential penalties (the attacker's risk equation). The higher the result of this equation, the higher the risk to an organization (because threat is higher).

The defender makes a ROSI (return on security investment) assessment (typically ad-hoc) to determine her overall risk. The lower the cost of protection, the more likely that investment is a good one.

Finally, we shouldn’t forget opportunity cost, which compares these results to anything else the attacker or defender might want to do.
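
To make that tradeoff a little more concrete, here is a rough back-of-the-envelope sketch in Python. The numbers and function names are entirely made up for illustration (this is a toy sketch, not a formal risk model), but it shows how cheaper acquisition of exploit details tilts the attacker's equation, and how the cost of protection drives the defender's ROSI:

    # Toy illustration of the attacker/defender economics above.
    # All values and names are invented for the example; this is not a real risk model.

    def attacker_value(expected_benefit, p_penalty, penalty,
                       skill_cost, effort_cost, equipment_cost):
        """Expected benefit, discounted by potential penalties, minus the attacker's costs."""
        return expected_benefit - (p_penalty * penalty) - (skill_cost + effort_cost + equipment_cost)

    def rosi(expected_loss_avoided, cost_of_protection):
        """Rough return on security investment: loss avoided relative to what protection costs."""
        return (expected_loss_avoided - cost_of_protection) / cost_of_protection

    # Published exploit details lower the attacker's skill and effort costs...
    print(attacker_value(10000, 0.05, 50000, skill_cost=500, effort_cost=200, equipment_cost=100))    # 6700
    # ...versus having to rediscover and weaponize the vulnerability himself.
    print(attacker_value(10000, 0.05, 50000, skill_cost=5000, effort_cost=3000, equipment_cost=100))  # -600

    # Defender side: the cheaper the protection, the better the investment looks.
    print(rosi(expected_loss_avoided=20000, cost_of_protection=2000))   # 9.0
    print(rosi(expected_loss_avoided=20000, cost_of_protection=25000))  # -0.2

Obviously the real inputs are fuzzy and contested; the point is only that both sides are solving a resource-allocation problem under scarcity.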

Looking again at the disclosure reasoning, the question is whether releasing more information helps the good guys or the bad guys more. The "bad guys already have this information" argument neglects the acquisition cost of this information and the skill level required to execute.

A basic illustration of the cost associated with "effort": some of you were no doubt a bit annoyed as you read my first paragraph above because you wanted to see the source material in question, and it didn't include links to the pertinent Sourcefire and Metasploit posts. Of course, you "already had" this information in the form of your ability to use search engines to find it. Links are a part of Web culture because we recognize that time is money, and making things a bit easier for the reader lowers his/her costs.

So the short point is that distribution of information, and its corresponding ease of access, matters.

The "good guys need this information for protection" is perhaps a trickier. The huge majority of Internet users do not need the information provided because they have no capacity to leverage it for protection. (Guys like HD Moore can do wonders with it, of course). The users rely on the makers of products who DO need this information to provide protection.

It is clear in this case that many large security companies already had the information (they had samples), so the added benefit to the "good guy" community must be adjusted accordingly.

In the end, I think it is less likely that the good guys used this information for protection than that the bad guys used it to compromise some user. I believe this is almost always the case, and my evidence is the aggregate number of exploits that occur after disclosure compared with the number of exploits of undercover vulnerabilities.

4 comments for “The Disclosure Race Condition”

  1. Jon
    February 25, 2009 at 10:59 am

    Against my better judgment…

    Your argument does not take into account detecting successful attacks against assets. It seems to be aimed at preventative controls, such as a vendor-supplied patch. In lieu of a patch, I think an organization would like to know if an attacker was actively exploiting this vulnerability against their assets.

    I’m inferring from the Sourcefire VRT blog postings that Sourcefire was not privy to the information as a “good guy”, so while many “large security companies” had access to the information, maybe not all of them. And, a vendor that provides information to an open source project will just inevitably leak the information anyway…

  2. Pete
    February 25, 2009 at 11:13 am

    @Jon -

    I don’t intend to leave out third party detection/prevention at all. According to reports, Symantec, McAfee, Trend, etc. all had samples. And of course Sourcefire had the info or they wouldn’t have been able to blog what they did.

    Good point about open source, but once again this boils down to distribution: how many places with the applicable information are available to an attacker.

  3. Jon
    February 26, 2009 at 8:52 am

    @Pete

    “And of course Sourcefire had the info or they wouldn’t have been able to blog what they did.”

    Orly?

    http://twitter.com/mroesch/status/1253491039

    You shouldn’t assume with such certainty, Pete.

    It seems to me that some of the good guys and bad guys were near parity on acquisition costs. So, we have a case where vendors are selective about whom they distribute information to, which to me is a very early ’90s mentality. Sourcefire worked hard and smart to protect their customers at least at the same level that their competitors offered. And, if you don’t think “bad guys” are monitoring open source projects and their repositories for security bug fixes or detection code, then you’re being silly. As you know, the acquisition cost is basically free in this case. Seems like a good thesis paper or something…

    Anywho, to restate #1 and #2 above:

    1. The bad guys already have this information
    2. The good guys need the information to protect themselves

    Yep, sounds right to me…

  4. Pete
    February 26, 2009 at 9:27 am

    @Jon -

    I am absolutely certain that Sourcefire had the information they published in their blog post prior to making the post. Although with enough time a bunch of monkeys hacking away at keyboards could theoretically come up with the post with no information at all, I guarantee that Sourcefire is a sharp company and the poster knew exactly what he was writing before he hit “post”.

    Your reassertion of my strawman argument highlights the power of the “insider view”. The time spent monitoring anything is not free from an economic perspective.

    Think of things this way — if there are a thousand places where this information might be, there is a much higher likelihood that an attacker will come across it sooner if it is published in 500 places rather than if it is published in 5 places.

    This is even more apparent if you agree with me that new “bad guys” are being added to the Internet population all the time.
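
    Here is a quick toy calculation of what I mean (the numbers and the little p_found function are made up on the spot for illustration, not real data): if an attacker pokes around venues at random and the details sit in k of n possible places, each look finds them with probability k/n, and more attackers over time simply means more looks.

        # Toy model of the distribution argument; invented numbers, not real data.
        def p_found(n_places, n_published, n_checks):
            """Chance that at least one of n_checks random looks hits a venue carrying the info."""
            miss = 1 - (n_published / n_places)
            return 1 - miss ** n_checks

        print(p_found(1000, 5, 50))    # ~0.22: sparsely published, one attacker's worth of looking
        print(p_found(1000, 500, 50))  # ~1.00: widely republished, same amount of looking
        print(p_found(1000, 5, 500))   # ~0.92: more "bad guys" joining over time means more looks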

    It would be interesting to know how you leveraged the information in Sourcefire’s post personally, but also consider whether your customers, friends, and family could have done anything with it.

    Jon – I am not saying the effect is huge, I am asserting that the ratio of exploitation:protection in this situation is higher than it would have been without the post, although perhaps only slightly.
