Robert E. Lee of Dyad Security left some interesting comments on my previous post on being “Good” that are worth addressing. He clearly believes that what he is doing is the right thing (many bugfinders do). Unfortunately, really large numbers work against him (and them). I actually feel bad for bugfinders because of this – I understand they are fighting the good fight, but at this stage of the Internet, it is seriously misguided. The basic reasons why are as follows:
- We’ll never find all the vulnerabilities in existence. That means we can either focus on the select few (numerous as they are) found by the good guys, or come up with a new approach that addresses all vulnerabilities equally, instead of just the ones a particular bugfinder would like to advance as important simply because s/he found them.
- Our current state of security is getting worse, not better – the world creates more vulnerabilities every day than bugfinders find. One need only look at vulnerability statistics to see that there is no end in sight. Compare those figures with the number of software developers at work and the total lines of code written every day, and it is a losing proposition (a rough sketch of the arithmetic follows this list). A valiant effort, perhaps, but a losing one nonetheless.
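To make the scale concrete, here is a back-of-envelope sketch in Python. Every number in it – developer headcount, lines of code per day, defect density, disclosure rate – is an illustrative assumption I am making up for the example, not measured data; the point is only that plausible inputs put vulnerability creation orders of magnitude ahead of vulnerability disclosure.

```python
# Back-of-envelope sketch of the "losing proposition" arithmetic.
# Every number below is an illustrative assumption, not measured data.

developers = 10_000_000        # assumed: software developers working worldwide
loc_per_dev_per_day = 50       # assumed: net new lines of code per developer per day
vulns_per_kloc = 0.1           # assumed: exploitable vulnerabilities per 1,000 lines

introduced_per_day = developers * loc_per_dev_per_day * vulns_per_kloc / 1000
disclosed_per_day = 20         # assumed: public disclosures per day, order of magnitude

print(f"introduced per day (est.): {introduced_per_day:,.0f}")   # ~50,000
print(f"disclosed per day (est.):  {disclosed_per_day}")
print(f"ratio: roughly {introduced_per_day / disclosed_per_day:,.0f} to 1")
```

Quarrel with any single assumption by a factor of ten and the gap still does not close – which is the point.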
Of course, the common challenge (as Robert issues below) is that we shouldn’t stick our heads in the sand. There is a paradox between the bugfinders’ perception that they are reducing risk overall and their belief that one must discover/disclose vulnerabilities for them to become important. Simply paying attention to these specific vulnerabilities increases the risk: it creates new value (for the bad guys) in some vulnerabilities while selectively ignoring all the others, which remain up for grabs as they always have been.
I say, equal rights for all vulnerabilities, known and unknown!
Contrary to bugfinders, my approach is to elevate our understanding of systems to a new level – to intuit the existence of vulnerabilities even without specific new evidence; there is plenty of old evidence to go around. Our goal should be to protect ourselves not from individual vulns as they are found, but from all vulnerabilities all of the time, by taking an architectural view.
Here are some replies to his comments:
REL: The intent is to share it with everyone equally. However, I believe that public disclosure helps those who wish to protect more than it does those who wish to cause malice. At least that is our intent when we do disclose bug information.
The benefit applies to organizations (and software vendors) who are actively keeping up to date with disclosed information. The others will likely be at greater risk, but the researcher has no control over that.
I don’t fault the intent (well, I do slightly, because of its “overbearing mother” approach to protecting people who aren’t even in the family), but this is a classic case of good intentions gone bad. Your previous assertion (in the comments to my previous post) that the bugfinder’s is the only opinion that matters was correct – so you can’t now suggest that you have no control over increasing risk. In fact, you have taken complete control of it by inserting yourself into a natural process and manufacturing the risk yourself.
REL: Perhaps a better explanation of my opinion is that there are consumers of products who very much want vulnerability information details. Should their desire to be informed be any less valid than that of consumers who wish to stick their heads in the sand and pretend away the existence of vulnerabilities?
First of all, you can’t save the world. Second, both constituencies have a right to their opinions (albeit not in your world), though you have mischaracterized the second set. In fact, your whole philosophy rides on the notion that it is impossible to pretend away the existence of vulnerabilities, so why do you think they could do it? Vulnerabilities would still be found and we would still provide protection, but the process would be different.
In any case, the first set of folks is being fed a diet of McDonald’s when healthier food would do a better job of prolonging their (Internet) lives. They could easily be given methods for that extra protection without ruining things for the second set, as we do today. And that is the clincher for me – the increase in risk under the current process. So I suppose my answer is yes, with the stated caveats.
REL: In most industries, when a really undesired side effect of using a product is discovered – say, a child’s toy that can lead to death by choking – public disclosure is looked upon as a good thing. Parents who are paying attention will now know not to allow their child to play with the faulty toy. To withhold the information from parents in that situation would be considered unethical. I see software vulnerability information similarly, though I know (from speaking with you at different conferences over the years) that you are no fan of analogies from other industries.
When bugs are found, with or without a workaround (patch), a competent organization can at least be on the ball enough to monitor the affected devices with extra care, or perhaps even remove public access to them.
I am all for disclosing vulnerabilities that are discovered through in-the-wild exploits. In fact, there have been ten (that I am aware of) in the past ten years. I think we should protect ourselves from those ten and any others that come up the same way. And they will come up. Note that nothing we did then, or do now, protects us from those undercover exploits. We should really be worrying about them more.
A competent organization should be protecting themselves from today’s, tomorrow’s, and next year’s vulnerabilities. Why don’t you want them to do that?
REL: Vendors have not done a good job of providing a “fitness for use” guarantee for the software they ship. Because of this I do think things will get worse before they get better, but eventually the consumers of insecure software will demand more formally evaluated (à la Common Criteria) assurances of what they are receiving than they do now.
Windows is Common Criteria certified. Vulnerabilities are inevitable. Yet nobody has said anything other than to suggest that if a vulnerability is found, the software is unfit – even though vulnerabilities have been found in every single piece of software. At some point, it boils down to some level of risk tolerance – a level nobody will define.
REL: One of my favorite talks I’ve seen on that subject is archived here: rtsp://media-1.datamerica.com/blackhat/bh-usa-00/video/2000_Black_Hat_Vegas_VK3-Brian_Snow-We_Need_Assurance-video.rm
Not much has changed since this talk was delivered. Vendors are still providing an “attractive nuisance” to malicious attackers.
I support full disclosure in most situations because A) it provides organizations the information they need to test whether they are affected by the reported bug; but perhaps more importantly, B) it forces one to realize that we need better security controls. Technologies like SE Linux, Trusted Solaris, etc. are definitely steps in the right direction. The “find a bug”, “disclose a bug”, “patch a bug” game may ultimately be fruitless without part B kicking into effect.
The test of whether we need better security controls should be a real test, not a manufactured one. As it is, bugfinders have reached the stage where they are prolonging the risks and weaknesses of our environment by feeding folks this comfort food. Make no mistake: what you are doing does nothing to protect us from the real threats out there – it simply distracts us from doing so.
I believe in the Good Guys. I believe that if we can get past the hurdle of spite and distrust between the Good Guys and software vendors, we can all make the Internet much safer. I believe we can protect ourselves from more vulnerabilities using alternative techniques than we can with today’s Pyrrhic victories. I believe the children are our future… (oh, sorry, got caught in an ’80s time warp there).