More on Bugfinding

“Another Pete” contributes his position in the comments to my most recent post on bugfinding. Inevitably, he targets the software companies without setting a measurable standard for “good enough” security. I am not sure why I value bugfinder intelligence more than they do themselves (they seem to think that any average Joe can find the vulnerabilities they find; I think they are pretty smart and often find vulnerabilities that attackers never would).

I would also like to point out that I think talking about disclosure is sort of interesting, but I am much more interested in the process of discovery. Actively seeking out ways to break software, when there are already many existing ways to break the same software, has no incremental value (one possible exception: truly unique exploit techniques). It’s like putting bulletproof glass in one window of a house with many windows, or, more likely, a skyscraper with thousands of windows.

His position is in italics, my comments are inserted below:

If the software vendors become the authority over disclosure, then will they do the right thing and promptly fix the problem, recall the software, or notify consumers in a grand and public way? History shows us that they do not. A certain percentage of bug hunters (I don’t know of any studies, but I know some who are) are in it for the gold and the glory. They do it for the same business reasons that the software vendors do NOT disclose. The current model doesn’t have to be this way. It is an accumulation of frustration, since the vendors consistently do NOT do the right thing.

PEL: Think outside the box for a millisecond here and consider the possibility that disclosure is not the “right” thing. It’s like giving in to a blackmailer’s demands to save a child, only to be subject to a deluge of future blackmail incidents. Play a little chess instead of checkers, please. And feel free not to vent your frustration on the rest of us.

Bugfinders are ultimately like the street people who want to clean your windshield at a stoplight, do it with muddy water, then expect to be paid for it with gratitude and money. It’s not (necessarily) to say their heart isn’t in the right place, but forced charity that is destructive is pretty hard to be thankful for no matter how much the giver wants to believe it is helpful.

Furthermore, there now exists a market to sell disclosure to those who can pay, regardless of intentions. Like any criminal activity, this is a natural process. Pete refers to a natural process, from development to discovered vulnerability, that researchers are disturbing. It never was a natural process. Someone has to actively look for most bugs and then consider implications/payload/process before it becomes a verifiable problem. So whether it be the misguided evolution of penetration testing that forces researchers to add 0-days to their bag of tricks (to earn more money), or security researchers looking for bugs for money or attention, the process has always been forced.

PEL: I agree, for the most part. My point was simply that the “natural” part of this process should be left to the bad guys. There will still be plenty for us to do, and it will actually matter.

As the issue now stands, it is unfixable through “responsible disclosure”. The market has been made. Anyone with a debugger and a fuzzer stands to make something of themselves one way or another. Even third-party “IDS/IPS/FW/Anti-Virus/Bogus Powdered Spit Protection” vendors want a share of the cash. Some of them are the same vendors who sell the buggy software to start with. The capitalistic forces in this market segment are overwhelming, and the disclosure debate is no more than a form of idleness. There can only be one way forward here, and that is to close the loop: vendors need to stop releasing bad software WITHOUT warranty/responsibility/reliability.

PEL: I would not be opposed to this, except for the fact that it is completely unreasonable ;-) . To suggest that disclosure is idleness is simply to misunderstand or ignore the power of the threat when it is coupled with the scalability we get from networked computers. I think folks should disclose as little as possible and wait as long as possible while still remaining secure from any exploit of the vulns in question. Since today you can protect yourself from all software code defects (not all vulns, mind you), known and unknown, disclosed or not, I think we should disclose nothing, ever (that ought to get a rise out of some of you ;-) ). In fact, we shouldn’t even bother to discover any new vulns in the haphazard way we do today. The only folks who have any business looking for the vulns are the software companies themselves. The other folks who will want to do this are the bad guys: WE SHOULD FOCUS OUR ENTIRE SECURITY EFFORTS ON PROTECTING OURSELVES FROM THE BAD GUYS.
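For what it’s worth, the “debugger and a fuzzer” entry bar mentioned above can be made concrete: a minimal mutation fuzzer really is only a few lines. This is an illustrative sketch only (the `mutate`/`fuzz` names and the toy `parser` callback are hypothetical, not any tool discussed in this post):

```python
import random

def mutate(data: bytes, n_flips: int = 8) -> bytes:
    """Return a copy of `data` with a few bytes replaced at random."""
    buf = bytearray(data)
    for _ in range(min(n_flips, len(buf))):
        i = random.randrange(len(buf))
        buf[i] = random.randrange(256)
    return bytes(buf)

def fuzz(parser, seed: bytes, iterations: int = 1000):
    """Feed mutated copies of `seed` to `parser`; keep inputs that crash it."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parser(candidate)
        except Exception:
            # Any uncaught exception counts as a "crash" worth investigating.
            crashes.append(candidate)
    return crashes
```

In practice one would run something like `fuzz(toy_parser, b"A" * 16)` against a target and then triage the crashing inputs in a debugger; the point is only that the mechanical part of the game is cheap.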

How do we get there? Disclosure. It’s a force that has existed as long as humans have been social creatures living in communities. We share the scary stuff with each other through song, dance, or recently, words, and try to help our neighbors by making them aware so they can help watch out with us. Secrets of pain are only hurtful to ourselves in the long run.

PEL: I don’t really understand what your point is here. It sounds like you are supporting disclosure even though you dissed it in your previous paragraph. And the rest of the paragraph sounds a lot like voodoo to me.

2 comments for “More on Bugfinding”

  1. April 1, 2006 at 9:10 pm

    > I am not sure why I value bugfinder intelligence more than they do themselves (they seem to think that any average Joe can find the vulnerabilities they find; I think they are pretty smart and often find vulnerabilities that attackers never would).

    This is a perception you have because you aren’t actively trying to understand the technology deeply enough to look for bugs. You likely have the intellectual capability to do so, but because you have not exercised it, the bug-finding game seems like magic. It’s not. It’s quite easy. I’ve seen 12 year olds do it.

    Your view is also skewed because you attempt to quantify things you can’t measure. I know for a fact there have been more than 10 “undercover” exploits in the past ten years. I’d estimate the true number is likely in the thousands.

    > Bugfinders are ultimately like the street people who want to clean your windshield at a stoplight, do it with muddy water, then expect to be paid for it with gratitude and money. It’s not (necessarily) to say their heart isn’t in the right place, but forced charity that is destructive is pretty hard to be thankful for no matter how much the giver wants to believe it is helpful.

    This quote is comedy gold. :) I may end up using it on a T-Shirt this year in Vegas.

    > I think folks should disclose as little as possible and wait as long as possible while still remaining secure from any exploit of the vulns in question.

    You realize, though, that this doesn’t work without a trusted computing base. Look at the recent Sendmail bug as a case in point. Based on very limited, and initially somewhat misleading, information in the form of an advisory/patch release, in less than 4 hours’ time we were able to figure out what the vulnerability was and the initial attack vector required to exploit it. Within 2 days’ time it was possible to get our PoC to a semi-reliable level. There are attackers out there who are a lot more gifted at the exploit-writing game than we are. If you release the patches, you are effectively releasing the vulnerability details. So what’s your solution now? To remain in the world of “responsible disclosure”, should we shame software companies into never admitting security weaknesses and never releasing patches, because that provides the details of the problem?

    Your advocated stance on disclosure, if espoused by lawmakers, would take the currently freely exchanged information underground. I can’t say for sure what the outcome would be, but the measurable security for organizations would most likely be a lot worse than it is now, partially because you cannot quantify that which cannot be measured, and partially because people would be lulled into a false sense of security.

    Your angst is somewhat understandable, but unfortunately off target.

    Robert

  2. Pete
    April 1, 2006 at 9:47 pm

    @Robert -

    1. If a 12 yr old can find vulns, it really wouldn’t be exercising my intellectual capabilities to do so, right? So why don’t you want to exercise yours? Why don’t you hire a bunch of 12 yr olds to do this work?

    2. If you can’t quantify these things, I suggest exercising a little more intellectual capability. It’s really quite simple. A 12 yr old could do it.

    3. Why do you equate exploits with incidents? One exploit can be the cause of many incidents, right? So if there have been many more undercover exploits, then name 3, please. Thanks. (Btw, if you know this for a fact, then wouldn’t it be the responsible thing to disclose this information?)

    4. Thanks for making my point for me with your sendmail bug example. Why on earth would we want that information discovered and disclosed by the good guys given that we know how quickly the bad guys will react? We’ve gone x years (how far back does this vuln go?) without a single bad guy exploiting that vuln and yet now you are suggesting it is a problem? What about all of those previous years? What are you doing today about the *next* Sendmail vulnerability?

    5. I happen to think the important information already *is* underground and you are providing enterprises with comfort food of patching to make their problems go away when they actually aren’t going away. Talk about lulling.

    6. It is always amusing that bugfinders think that if they didn’t find the vulnerabilities, then nothing would actually happen – no exploits, no incidents, no nothing. And if you think stuff would happen, then who would be lulled into a false sense of security? Wouldn’t the normal reaction to a (presumably) unexpected exploit be something quite a bit more than a lull?
