Why Bugfinding is Irresponsible and Increases Risk

Robert E. Lee of Dyad Security has left some interesting comments on my previous post about being “Good” that are worth addressing. It seems obvious that he believes that what he is doing is the right thing (many bugfinders do). Unfortunately, really large numbers work against him (and them). I actually feel bad for bugfinders because of this – I understand they are fighting the good fight, but at this stage of the Internet, the fight is seriously misguided. The basic reasons are as follows:

  1. We’ll never find all the vulnerabilities in existence. This means we can either focus on the select few (numerous as they are) found by the good guys, or come up with a new approach that appropriately addresses all vulnerabilities equally, instead of just the ones a particular bugfinder would like to advance as important simply because s/he found them.
  2. Our current state of security is getting worse, not better – the world is creating more vulnerabilities every day than bugfinders are finding. One need only look at vulnerability statistics to see that there is no end in sight. Compare those statistics with the number of software developers at work and the total lines of code being written every day, and it is a losing proposition. A valiant effort, perhaps, but a losing one nonetheless.

Of course, the common challenge (as Robert issues below) is that we shouldn’t stick our heads in the sand. There is a paradox between bugfinders’ perception that they are reducing overall risk and their belief that vulnerabilities must be discovered and disclosed to become important. Simply paying attention to these specific vulnerabilities increases the risk: it creates new value (for the bad guys) in some vulnerabilities while leaving all the others up for grabs as they’ve always been, selectively ignoring them.

I say, equal rights for all vulnerabilities, known and unknown! ;-)

Contrary to bugfinders, my approach is to elevate our understanding of systems to a new level – to intuit the existence of vulnerabilities even without specific new evidence; there is plenty of old evidence to go around. Our goal should be to protect ourselves not from individual vulns as they are found, but to protect ourselves from all vulnerabilities all of the time, by taking an architectural view.

Here are some replies to his comments:

REL: The intent is to share it with everyone equally. However, I believe that public disclosure helps those who wish to protect more than it does those who wish to cause malice. At least that is our intent when we do disclose bug information.

The benefit applies to organizations (and software vendors) who are actively keeping up to date with disclosed information. The others will likely be at greater risk, but the researcher has no control over that.

I don’t fault the intent (well, I do slightly because of its “overbearing mother” approach to protecting people who aren’t even in the family) but this is a classic case where good intentions have gone bad. Your previous assertion (in the comments to my previous post) that the bugfinder has the only opinion that matters was correct – so now you can’t suggest that you have no control over increasing risk. In fact, you have taken complete control over it by inserting yourself into a natural process and manufacturing the risk yourself.

REL: Perhaps a better explanation of my opinion is that there are consumers of products who very much want vulnerability information details. Should their desire to be informed be any less valid than those consumers who wish to stick their heads in the sand and pretend away the existence of vulnerabilities?

First of all, you can’t save the world. Second, both constituencies have a right to their opinions (albeit not in your world), though you have mischaracterized the second set. In fact, your whole philosophy rides on the notion that it is impossible to pretend away the existence of vulnerabilities, so why do you think they could do it? Vulnerabilities would still be found and we would still provide protection, but the process would be different.

In any case, the first set of folks are being fed a diet of McDonald’s when healthier food would do a better job of prolonging their (Internet) life. They could easily be given methods for providing that extra protection without ruining it for the second set like we do today. And that is the clincher for me – the increase of risk with the current process. So I suppose my answer is yes, with the stated caveats.

REL: In most industries, when a really undesired side effect of using a product is discovered, say a child’s toy that can lead to death by choking, public disclosure is looked upon as a good thing. Parents who are paying attention will now know not to allow their child to play with the faulty toy. To withhold the information from parents in that situation would be considered unethical. I see software vulnerability information similarly, though I know (from speaking with you at different conferences over the years) that you are no fan of analogies from other industries.

When bugs are found, with or without a workaround (patch), a competent organization can at least be on the ball enough to provide extra care in monitoring their devices, or perhaps even remove public access to them.

I am all for disclosing vulnerabilities that are discovered due to in-the-wild exploits. In fact, there have been ten (that I am aware of) in the past ten years. I think we should protect ourselves from those ten and any others that come up in that way. And they will come up. Note that nothing we did then, or do now, protects us from those undercover exploits. We should really be worrying about them more.

A competent organization should be protecting themselves from today’s, tomorrow’s, and next year’s vulnerabilities. Why don’t you want them to do that?

REL: Vendors have not done a good job providing a “fitness for use” guarantee of the software provided. Because of this I do think things will get worse before they get better, but eventually the consumers of insecure software will demand more formally evaluated (à la Common Criteria) assurances of what they are receiving than they do now.

Windows is Common Criteria certified. Vulnerabilities are inevitable. Yet nobody will say anything other than that if a vulnerability is found, the software is unfit – even in the face of every single piece of software having had vulnerabilities found in it. At some point, it boils down to some level of risk tolerance – a level nobody will define.

REL: One of my favorite talks I’ve seen on that subject is archived here: rtsp://media-1.datamerica.com/blackhat/bh-usa-00/video/2000_Black_Hat_Vegas_VK3-Brian_Snow-We_Need_Assurance-video.rm

Not much has changed since this talk was delivered. Vendors are still providing an “attractive nuisance” to malicious attackers.

I support full disclosure in most situations because A) it provides organizations with the information they need to test whether they are affected by a reported bug; but perhaps more importantly, B) it forces one to realize that we need better security controls. Technologies like SE Linux, Trusted Solaris, etc. are definitely steps in the right direction. The “find a bug”, “disclose a bug”, “patch a bug” game may be ultimately fruitless without part B kicking into effect.

The test for whether we need better security controls should be a real test, not a manufactured one. As it is, bugfinders are at the stage where they are prolonging the risk and weaknesses of our environment by providing this comfort food for folks. Make no mistake, what you are doing does nothing to protect us from the real threats that are out there – it simply distracts us from doing so.

I believe in the Good Guys. I believe if we can get past the hurdle of spite and distrust between the Good Guys and software vendors, we could all make the Internet much safer. I believe we can protect ourselves from more vulnerabilities using alternative techniques than we can with today’s Pyrrhic victories. I believe the children are our future… (oh, sorry, got caught in an 80’s time warp there ;-) )

6 comments for “Why Bugfinding is Irresponsible and Increases Risk”

  1. March 24, 2006 at 10:03 pm

    > 1. We’ll never find all the vulnerabilities in existence. This means we can either focus on the select few (numerous as they are) found by the good guys, or come up with a new approach that appropriately addresses all vulnerabilities equally, instead of just the ones a particular bugfinder would like to advance as important simply because s/he found them.

    Most of the bugs we find are found while we’re testing customers for other bugs. I think our positions partly overlap in that we both view the practice of actively looking for new bugs for the sole sake of finding new bugs as a distraction from more important things. I support the development of technologies that take advantage of security mechanisms that have been proven effective.

    I’m not sure what you mean by all treated equally. The criticality of a bug should be something the end user determines, not the researcher. As researchers, all we care to share is the category of problem, what it’s likely to affect (confidentiality, integrity, availability), etc. High/Medium/Low depends on the organization’s pain thresholds for being affected.

    > Our goal should be to protect ourselves not from individual vulns as they are found, but to protect ourselves from all vulnerabilities all of the time, by taking an architectural view.

    Ok, on this particular point we are definitely in agreement; I just see it as an “and”, not an “or”, in the disclosure discussion.

    > Vulnerabilities would still be found and we would still provide protection, but the process would be different.

    I think I said it in my last post. The “find a bug”, “disclose a bug”, “patch a bug” game is ultimately fruitless unless it increases the number of consumers who demand technology that will let them “protect [them]selves from [entire classes of] vulnerabilities all of the time, by taking an architectural view”

    > A competent organization should be protecting themselves from today’s, tomorrow’s, and next year’s vulnerabilities. Why don’t you want them to do that?

    That is our goal, but customers using technology on the Internet that was only intended for cooperative, non-hostile environments are going to have problems. No firewall/IPS/anti-virus is going to change that.

    > Windows is Common Criteria certified. Vulnerabilities are inevitable.

    It is, you are correct, but look at the Protection Profile selected (CAPP). The Controlled Access Protection Profile was intended for non-hostile, cooperative environments. Basically, the assurance level would matter more if they had tried for stronger security controls. When Windows achieves CC evaluation against LSPP, it will be more interesting. If you read the Security Target for Windows, you’ll see that the security mechanisms weren’t meant to protect you from bad people on the Internet.

    Windows 2003 Security Target: http://niap.nist.gov/cc-scheme/st/ST_VID4025-ST.pdf
    Windows 2003 Evaluation Report: http://niap.nist.gov/cc-scheme/st/ST_VID4025-VR.pdf

    Trusted Solaris 8 Security Target: http://www.commoncriteriaportal.org/public/files/epfiles/TSolaris8_Issue3.1.pdf
    Trusted Solaris 8 Security Evaluation Report: http://www.commoncriteriaportal.org/public/files/epfiles/CRP170v3.pdf

    To summarize the Strength of Environment for the two…

    Windows: The evaluation of Windows 2003/XP provides a moderate level of independently assured security in a conventional TOE and is suitable for the environment specification in this ST. Translated – this will work pretty well in a cooperative, non-hostile environment.

    TSOL: Trusted Solaris 8 4/01 is intended for use in organisations who need to safeguard sensitive information (e.g., organisations concerned with processing commercially sensitive or classified information) and who require security features unavailable in standard commercial operating environments.

    Neither can make a “bug free” claim, but the second one tries to deliver technology that lets a consumer “protect [them]selves from [entire classes of] vulnerabilities all of the time, by taking an architectural view”.

    > Make no mistake, what you are doing does nothing to protect us from the real threats that are out there – it simply distracts us from doing so.

    We’re more similar than you think. I have the same opinion about IDS/IPS/FW/Anti-Virus/Bogus Powdered Spit Protection. They do nothing to protect us from the real threats that are out there – they simply distract us from doing so. I’m not arguing the merits of active vs. passive bug finding. Rather, my initial comments were directed at the right to disclose or not to disclose.

    Robert

  2. Pete
    March 24, 2006 at 10:30 pm

    > REL: We’re more similar than you think. I have the same opinion about IDS/IPS/FW/Anti-Virus/Bogus Powdered Spit Protection. They do nothing to protect us from the real threats that are out there – they simply distract us from doing so. I’m not arguing the merits of active vs. passive bug finding. Rather, my initial comments were directed at the right to disclose or not to disclose.

    Whoa, I neither said anything like that nor believe anything like that, so we must be much different than you think. Inline security solutions are the only ones that can quickly protect us from the logical extension of the damage you are doing. We’d be even more helpless without them.

  3. March 25, 2006 at 2:22 pm

    The Windows certification was basically a joke, useful only for marketing and publicity ploys, seeing how the system tested was not connected to the Internet, was to have no floppy drive, and was to run only limited applications, if any (the way I heard it, anyway).

    You both make the case that the effort should be on protecting systems and data from all vulnerabilities at either the OS or app level. When that happens, the glory for successfully writing a worm or virus (from the cracker side) or for finding a vulnerability (from the bugfinder side) will fade to zero, right?

    Since this level of protection can only occur on the host, does this lead to a natural conclusion that we should be looking at trusted operating systems?

    If this were achievable, the vulnerabilities would still exist; they would just not be usable to escalate privileges. Does this mean that the risk and threat models would change, since no vulnerability could be exploited to become a threat, and risk would then drop to zero?

  4. March 25, 2006 at 4:17 pm

    @Anonymous

    > You both make the case that the effort should be on protecting systems and data from all vulnerabilities at either o/s or app level.

    I took care to change that claim from all vulnerabilities to entire classes of vulnerabilities. Anyone who is claiming to protect against all known and unknown problems is obviously selling you something.

    A trusted base is the starting point, but it covers much more than the OS itself. You have to start with the hardware and build up the trust of your base from there.

    Even with SE Linux installed with a well-thought-out policy, application-level flaws could still lead to confidentiality and integrity failures, depending on how well the application code was written. E.g., SQL injection would still be possible, but you wouldn’t be able to “pop a shell”. A sketch of what I mean follows.
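    To make that concrete, here is a minimal sketch (hypothetical code – an in-memory SQLite database and invented names, purely for illustration). The injected input subverts the query while staying entirely within behavior an OS-level policy would permit, which is why the fix has to live in the application code itself:

    ```python
    import sqlite3

    # Illustration only: from the OS's point of view, the malicious query
    # below is a perfectly legitimate read of the database, so mandatory
    # access controls like SE Linux have nothing to deny.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    def lookup_vulnerable(name):
        # String concatenation lets attacker-controlled input rewrite the query.
        query = "SELECT secret FROM users WHERE name = '%s'" % name
        return conn.execute(query).fetchall()

    def lookup_safe(name):
        # Parameterized query: the driver treats the input strictly as data.
        return conn.execute(
            "SELECT secret FROM users WHERE name = ?", (name,)
        ).fetchall()

    # An injection payload dumps every row through the vulnerable path...
    print(lookup_vulnerable("x' OR '1'='1"))  # [('s3cret',)]
    # ...but matches no rows through the parameterized one.
    print(lookup_safe("x' OR '1'='1"))        # []
    ```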

    As we both more or less said previously, moving towards a trusted model we’d still need to find, track, and fix vulnerabilities, but the game would be different: most of them would be mitigated by our policies, instead of our having to frantically install patches.

    If you’re really interested in this stuff, join #selinux on the EFnet IRC network. We can talk in detail about trusted computing there.

    Robert

  5. March 31, 2006 at 9:19 am

    If the software vendors become the authority over disclosure, will they then do the right thing and promptly fix the problem, recall the software, or notify consumers in a grand and public way? History shows us that they do not. A certain percentage of bug hunters (I don’t know of any studies, but I know some who are) are in it for the gold and the glory. They do it for the same business reasons that the software vendors do NOT disclose. The current model doesn’t have to be this way. It is an accumulation of frustration, since the vendors consistently do NOT do the right thing.

    Furthermore, there now exists a market to sell disclosure to those who can pay, regardless of intentions. Like any criminal activity, this is a natural process. Pete refers to a natural process – from development to discovered vulnerability – that researchers are disturbing. It never was a natural process. Someone has to actively look for most bugs and then consider implications/payload/process before a bug becomes a verifiable problem. So whether it be the misguided evolution of penetration testing forcing researchers to add 0-days to their bag of tricks (to earn more money), or security researchers looking for bugs for money or attention, the process has always been forced.

    As the issue now stands, it is unfixable through “responsible disclosure”. The market has been made. Anyone with a debugger and a fuzzer stands to make something of themselves one way or another. Even third-party “IDS/IPS/FW/Anti-Virus/Bogus Powdered Spit Protection” vendors want a share of the cash. Some of them are the same vendors who sell the buggy software to start with. The capitalistic forces in this market segment are overwhelming, and the disclosure debate is no more than a form of idleness. There can only be one way forward here, and that’s to close the loop: vendors need to stop releasing bad software WITHOUT warranty/responsibility/reliability.

    How do we get there? Disclosure. It’s a force that has existed as long as humans have been social creatures living in communities. We share the scary stuff with each other through song, dance, or, more recently, words, and try to help our neighbors by making them aware so they can watch out with us. Secrets of pain are only hurtful to ourselves in the long run.

  6. June 13, 2006 at 10:11 am

    Rethinking full-disclosure…

    This morning I noticed that McAfee announced yesterday that they fully intend to once again enter the game of public vulnerability disclosure. Now, as you may or may not know, I’m a huge fan of full-disclosure; given my belief that full-disclosure is a …
