Someone whom I assume is actually Chris Wysopal has posted a comment on Dark Reading’s message board. I will take the time to respond to a few points, since it is on my not-so-favorite, much-belabored topic:
What is this fixation on the motivations of researchers that some, like Marcus Raynum [sic], have? Who cares if the researcher who makes a medical breakthrough was doing it for the money or not? Who cares if the professor who publishes a paper which advances the state of the art in his field did it to satisfy a publishing requirement? Must all progress be altruistic?
Chris is right that motivation shouldn’t matter for significant breakthroughs – unfortunately, that doesn’t apply here. Motivations do matter when harm is intentional or foreseeable. We differentiate motives all the time when it comes to activities that have a significant negative impact on others.
Then there is the fixation on the negatives involved in progress as if the positives don’t outweigh them. Did the invention of the automobile have any downsides? What about morphine?
The positives don’t even come close to outweighing the negatives. I agree that the key is the cost-benefit equation, and in the case of just-another-buffer-overflow it is clear that the costs far exceed the benefits. In fact, I suspect that the net effect (benefits minus costs) is more negative, in economic terms, than that of any other non-fatal activity in society today. To somehow suggest that every single vuln disclosed should be considered "progress" or a "significant breakthrough" is ludicrous.
There is a real value to independent outsiders keeping companies in check and informing the public. It works in other spheres of society and commerce, why not software?
Well, whether there is value in some sort of arbitrary vigilante justice is debatable, I suppose, but it is plain-as-day wrong in my book. It is hard for me to fathom that a guy as smart as Chris Wysopal doesn’t understand the difference between the scalability and interconnectedness of software and the attributes of physical goods.
It is really far-fetched to compare bugfinding to the creation of something new and interesting, unless the bugfinders have actually created something new and interesting. Completely new attack techniques may meet some of these requirements.
To assume that self-appointed authority to "keep companies in check" is somehow beneficial to society is wrong in so many ways.
(Somewhat) Equal Time
Chris was kind enough to comment (below) so I thought I would elevate his points and my counterpoints here:
Your ideas of requiring a positive cost-benefit equation or a significance requirement are nice in theory but not in the real world. Who would enforce this?
I don’t see anything in either of our initial comments about requirements and enforcement. This is about beneficial vs. not-so-beneficial and everyone is allowed to do whatever they want. That said, cost-benefit and significance are really very simple in this context: For any specific vulnerability that is exploited using a known technique (buffer overflow, integer overflow, format string error, etc.), the cost is demonstrably higher than the benefit. Significant breakthroughs consist of entirely new exploit techniques. For example, I think Litchfield’s new Cursor Injection may fit the bill. (Btw, I am not 100% sold on this, but am stretching a bit in the name of "academic" research. At least it is unique.)
How would significance be defined? You have to allow for insignificant disclosures or you have to ban all disclosure. To not be able to describe any vulnerabilities or any attack techniques keeps the field of security from advancing.
I am fine with banning all discovery and disclosure activity not endorsed by the software manufacturer, either overtly or through the license agreement. I am not sure why you think the second part follows – every security professional knows there are bad guys out there, so there would still be plenty of attack techniques and vulnerabilities to choose from. Only in this case, they would be real. The cool thing is that this would actually do a much better job of improving the security of software, precisely because it is real. Don’t you think you should be paying attention to the new attack techniques being developed by the bad guys rather than the "good guy sideshow"?
I’m all for delaying the release of detailed vulnerability information until people have a chance to patch, because a timeline can be defined. Cost-benefit equations and significance cannot. If you have ideas how, I would like to hear them.
Speaking of enforcement, the suggestion that "delaying the release, etc." is controllable doesn’t hold up. Sure, a timeline can be defined, but it is arbitrary and not in the best interests of everyone involved.
I don’t know why you are turning a "security fixations" post into a call for enforcement, but the ban idea works perfectly fine for me.
Pete,
Your ideas of requiring a positive cost-benefit equation or a significance requirement are nice in theory but not in the real world. Who would enforce this? How would significance be defined? You have to allow for insignificant disclosures or you have to ban all disclosure. To not be able to describe any vulnerabilities or any attack techniques keeps the field of security from advancing.
I’m all for delaying the release of detailed vulnerability information until people have a chance to patch, because a timeline can be defined. Cost-benefit equations and significance cannot. If you have ideas how, I would like to hear them.
-Chris
I’m coming very late to this debate. Sorry about that.
As should be clear – security vulnerabilities are different from other product defects. A faulty brake system on a truck is a vulnerability, but it isn’t something that people will generally try to exploit for their own gain.
The same cannot be said for other technologies.
Were Pinto owners angry when people tried to show how easily their cars blew up? Yes, but it was because of a loss of resale value and fear for their own safety in regular circumstances, not because they feared an intentional rear-ending that would cause them to burst into flames.
The same isn’t true for security vulnerabilities. People do attempt to exploit them, with a notable increase in incidents as a result of disclosure.