It Ain’t Over ’til it’s Over

Thomas Ptacek rehashes what he has said before, as wrong now as it was then. Mike Rothman chimes in with a preemptive strike a la Eminem in 8 Mile to try to control the conversation, but it didn’t work. The most interesting thing about Ptacek’s points is that they are all opinions – there is no evidence at all to support his conclusions. He totally DIDN’T say that!

I happen to agree that the whole disclosure (and discovery*) story is old, but I sure hope I will always be there to correct the mistakes every time they come up, because they seem to recur (pssst – I know why, too). Regardless of how monotonous the topic has become for some, repeating what you have said before doesn’t make it any truer than the first time you said it. Of course, anyone who has read chapter 3 of Cialdini’s Influence knows that the repeaters come to believe what they’ve said even more (and still more once it is written down) in order to maintain their Commitment and Consistency, much like the cult that thought Y2k would be the end of the world actually believed it more strongly after Y2k came and went, despite the obvious evidence to the contrary.

The interesting thing, though, is not what they’ve said before, but what they haven’t said, ever. They can’t prove that anything has gotten better. They just want to believe it with all their heart and soul.

To really end the discussion, all they have to do is answer some simple questions:

  1. When will they be done? Okay, I’ll make this easier – when will all the vulnerabilities be found?
  2. When will the rate of vulnerability discovery and disclosure surpass the rate of vulnerability creation?
  3. What evidence is there that the Russian Mafia is going to find the exact same set of vulnerabilities that they find?
  4. If the Russian Mafia has found the same bugs, and they believe disclosure is necessary for protection, how can they wait so long to disclose?
  5. What are they doing about the other vulnerabilities that the Russian Mafia has found?
  6. What software product is now "more secure" with all of its required patches?
  7. Why does every security agency in the world raise its risk rating when vulnerabilities are published?
  8. Why do they believe that without disclosure nobody would ever find any vulnerabilities?

Truth be told, the answer to the question of why bugfinders believe their actions are justified is simply that they like doing what they are doing. It makes them feel good. I would respect that kind of disclosure most of all.

Btw, I have the evidence on my side: 1) risk warnings never go down with the latest disclosure of vulns; 2) any single infection of a disclosed vuln is evidence that risk went up; 3) bugfinding doesn’t make software more secure; 4) there may be about 7% overlap in bugfinding; 5) bugfinding may make software more secure after 7 years (i.e. about 2-4 years after its useful lifetime).

* My biggest beef with this whole charade is really on the discovery side of things. For some reason, Ptacek focuses on something called "full disclosure" which is defined a thousand different ways by a thousand different people.

6 comments for “It Ain’t Over ’til it’s Over”

  1. September 6, 2006 at 1:25 am

    1. Never.

    2. When vulnerability disclosure actually creates new vulnerabilities.

    3. The fact that clientside vulnerabilities in IE are worth more money than serverside vulnerabilities in .

    4. We wait the minimal amount of time possible. Others wait even less time.

    5. Nothing.

    6. Windows XP.

    7. Because their risk ratings are arbitrary and subjective.

    8. Because from 1988 to 1995 there wasn’t a single buffer overflow advisory, despite the fact that the Morris worm exploited one.

  2. PaulM
    September 6, 2006 at 1:36 pm

    > The most interesting thing about Ptacek’s points is that they are all opinions – there is no evidence at all to support his conclusions. He totally DIDN’T say that!

    All of the bullet points from Thomas’ post are quantifiable, at least anecdotally. You can dismiss them as purely opinion, but you’d be wrong.

    > Btw, I have the evidence on my side:
    > 1) risk warnings never go down with the latest disclosure of vulns;

    No, they go down with the latest release of a patch. That patch is likely the result of a found and disclosed vuln.

    > 2) any single infection of a disclosed vuln is evidence that risk went up;

    OK, but if we’re quantifying risk, is the risk to a target greater if a vuln is known to attacker and target and, if not mitigated, at least detected? Or is the risk greater if the vuln is known only to the attacker and the target cannot mitigate or detect the attack and therefore cannot respond to it? This one’s just common sense.

    > 3) bugfinding doesn’t make software more secure;

    In an atomic instance (OpenSSL v0.9.6 vs. v0.9.7) there’s a clear impact from patching the bug – a remotely exploitable buffer overflow is eliminated. Over time, maybe it doesn’t have a direct impact.

    But you cannot discount the indirect impact that disclosure has on software. Look at the evolution of NT 4.0 -> 2003 Server. Had it not been for some global incidents and a PR nightmare for Microsoft, 2003 Server might not have features like DEP or host firewalls that can be configured via Group Policy. Had there been no research and disclosure, this process might have taken longer with different (likely worse, but that’s my opinion) results.

    > 4) there may be about 7% overlap in bugfinding;

    So is your conclusion that, since there is a 7% rediscovery rate of the same vulnerability within the same community and industry, there is only 7% overlap between the bugs found by infosec researchers working openly and those working privately (for the Russian Mob)? I can’t even begin to list all of the flaws in your assertion. Clearly Ozment’s study represents nothing of the kind.

    > 5) bugfinding may make software more secure after 7 years (i.e. about 2-4 years after its useful lifetime).

    Yes, Microsoft Windows server software has run its course. The technology is almost 15 years old. Nobody uses it anymore.

  3. andi
    September 7, 2006 at 10:02 am

    Come on. Should I close my eyes if a train is moving towards me at full speed? Would the danger be any smaller? Common sense is all I need to support Ptacek’s arguments.

  4. Pete
    September 7, 2006 at 10:52 am

    @andi – if 2000 trains are coming toward you at full speed, and somebody tells you about three of them, and you focus all of your efforts on those three, are you still dead? You DO close your eyes and you don’t know it. (Btw, do you know how often “common sense” and “conventional wisdom” have been wrong in the history of mankind?)

    @Paul -

    “Anecdotal” quantification? Get real. For every instance of short-term “good” that’s been forced upon the unsuspecting Internet user by bugfinders, there have been tens of thousands of “bads”.

    Good points on my comments. I will elaborate:

    1) Patches don’t make risk go down unless every single vulnerable system everywhere in the world is patched.

    2) Detection only affects risk if you can prevent it. Since there is a finite number of vulns on a system at any given time, an increase in the number of attackers who know about it also increases risk. If all systems everywhere get patched, you decrease risk. You simply choose to ignore all of the other risk associated with that target. It is not impossible to protect yourself without knowing about specific vulns.

    3) You assume that people will patch. They don’t, and more attackers will know about the bug. Increased risk. I don’t disagree that software is being developed more securely; I do disagree if anyone asserts this was the only way. Neither of us can prove/disprove each other.

    4) I am saying that if there is no relationship between good bugfinders and bad bugfinders, given the total number of vulns in the world, it is highly unlikely that there will be many collisions (see the back-of-envelope sketch at the end of this comment). I use Ozment’s paper as representative – you are right that it leaves a lot to be desired in this context, but I think it is also likely to be a best case scenario. You would need 100% overlap to succeed.

    5) Are you suggesting that Windows 3.1 and XP (and every other flavor of Windows) have the same exact code base?
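
    For what it’s worth, here is a back-of-envelope simulation of the collision argument in 4). The numbers are purely hypothetical – the point is only that if good and bad bugfinders sample independently from a large pool of latent vulns, the expected overlap is small:

        import random

        POOL = 2000          # assumed number of latent vulns in a product (hypothetical)
        GOOD_FINDS = 100     # vulns found and disclosed by researchers (hypothetical)
        BAD_FINDS = 100      # vulns found privately by the bad guys (hypothetical)
        TRIALS = 10000

        total_overlap = 0
        for _ in range(TRIALS):
            good = set(random.sample(range(POOL), GOOD_FINDS))
            bad = set(random.sample(range(POOL), BAD_FINDS))
            total_overlap += len(good & bad)

        # Expected overlap is GOOD_FINDS * BAD_FINDS / POOL = 5 vulns, i.e. disclosure
        # "pre-empts" only about 5% of what the bad guys are holding.
        print(total_overlap / TRIALS)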

  5. PaulM
    September 7, 2006 at 2:51 pm

    OK, I am starting to feel guilty posting these novellas to your blog, Pete, but I am enjoying the discussion.

    > 1) Patches don’t make risk go down unless every single vulnerable system everywhere in the world is patched.

    Calculating risk for “the whole world” is a pointless endeavor. However, calculating risk for, say, a client’s network is a legitimate exercise.

    Case in point: SQL Slammer. Anybody with an IDS listening to unfiltered Internet traffic is still seeing this worm. However, if they have patched and/or blocked SQL across their border points, then the risk is negligibly low. Patches make risk go down. Period.

    > 2) Detection only affects risk if you can prevent it.

    Untrue. You can’t look at risk only in terms of whether or not a host has been compromised; you also have to look at how long it was compromised, since business risk isn’t simply whether or not you’ve lost exclusive control of a machine on your network, but what information has been stolen or altered and what services have been disrupted. Time to detect ~= time to respond ~= length of compromise. The smaller those numbers are, the less risk the compromise represents to the business.
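
    As a toy illustration of that last point (purely hypothetical rates and costs, just to show the shape of the argument):

        # Expected business loss scales with how long the attacker goes undetected
        # and unremediated, not just with whether a compromise occurred.
        RECORDS_PER_HOUR = 500   # assumed exfiltration rate (hypothetical)
        COST_PER_RECORD = 10.0   # assumed cost per lost record, in dollars (hypothetical)

        def expected_loss(hours_to_detect, hours_to_respond):
            exposure = hours_to_detect + hours_to_respond   # rough length of compromise
            return exposure * RECORDS_PER_HOUR * COST_PER_RECORD

        print(expected_loss(72, 24))   # slow detection and response: 480000.0
        print(expected_loss(2, 4))     # fast detection and response: 30000.0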

    > Since there is a finite number
    > of vulns on a system at any given time, an increase in the number of attackers who
    > know about it also increases risk.

    Agreed, though since spam/phishing/worms are all highly efficient means of delivering an exploit, I have to wonder how significant the increase is in some cases. A handful of bot-herders with a local 0day is potentially more dangerous than 100K script kids with a published remote exploit.

    > 3) You assume that people will patch. They don’t, and more attackers will know about
    > the bug. Increased risk. I don’t disagree that software is being developed more securely;
    > I do disagree if anyone asserts this was the only way. Neither of us can prove/disprove each
    > other.

    Funny. That study you cited showed that nearly 40% of systems were patched within 30 days, even before a worm that exploited that bug appeared.

    > 4) I am saying that if there is no relationship between good bugfinders and bad
    > bugfinders, given the total number of vulns in the world, it is highly unlikely
    > that there will be many collisions. I use Ozment’s paper as representative – you
    > are right that it leaves a lot to be desired in this context, but I think it is
    > also likely to be a best case scenario. You would need 100% overlap to succeed.

    The problem is that this is very much a chicken-and-egg question, as you pointed out earlier. We can correlate the overlap between researchers and criminals, but identifying whether or not disclosure causes Russian spamsploits or vice versa is impossible.

    > 5) Are you suggesting that Windows 3.1 and XP (and every other flavor of Windows)
    > have the same exact code base?

    Of course not. But look at the WMF exploits from New Year’s – pieces of code from the 3.0 days still live on in XP. My point, though, was that Microsoft has learned from its experience with NT 4.0. We all benefit from that, and it’s not at all far-fetched to say that disclosure of vulnerabilities played a part in that.

    Perhaps it’s a little Ayn-Randian, but network security really is an objective and selfish exercise. If disclosure helps me but hurts you because I read Bugtraq while you read Penny Arcade, then maybe that’s just the nature of things and maybe you’ll “get it” eventually, probably after you experience some pain.

    I believe that it is more important for an individual organization to be prepared to handle risks, even if that means that the availability of the information used by an organization to protect itself can also be used to the detriment of others. The continuity of my sphere of influence and responsibility has to come first – that’s what they pay me for. They pay me to make sure that they’re part of the 40% that aren’t impacted by the next worm.

  6. September 7, 2006 at 4:24 pm

    > 3) bugfinding doesn’t make software more secure

    But it does: http://www.eecs.harvard.edu/~stuart/papers/usenix06.pdf
