Here are two good examples of why “responsible disclosure” can never work: people simply don’t have to “follow the rules.” Of course, we know that, but we always hope that they will. And we can’t even hold war crime tribunals afterwards.
I would just as soon focus all of my efforts on problems like these and undercover exploits (you know, the ones we really can’t control) as deal with all of our self-generated noise. Wouldn’t you?
The problem we have is that vendors are not in a position of power to dictate “the rules”. The only “responsible” thing a researcher can do is share the information publicly. I actually think your examples show why full and immediate disclosure is more responsible. If the researchers had only contacted the vendor about the problem, we wouldn’t be reading about it now.
On a side note, it is really funny how much commentary on this issue comes from people who are more or less passive observers. Those who do not perform a security research function have no authority to dictate “rules” about information disclosure. The only opinions that matter in this game are those of the researchers and the consumers. Everyone else is more or less irrelevant.
Robert
@Robert -
Within the scope of our current expectations of how “good” bugfinders should operate, I don’t see how this is responsible. If what you mean is that since you were smart enough to find a bug you should be allowed to tell the world about it (in the same way my son will proudly tell everyone today is his birthday), then you certainly have that right (and the attention is just as fleeting). But it’s not responsible. The act of disclosure itself, and the fanfare that often comes with it, is indication enough that it takes a lot of skill to find vulnerabilities, right? So sharing the information with folks who want to use it maliciously and probably wouldn’t have found it to begin with is irresponsible.
Regardless of the responsibility part of the equation, I actually appreciate these situations because they make my case much better than I apparently do: discovery and disclosure are outside of our “authority”, and we should act based on the worst-case scenario (this one along with undercover exploits), not the best case.
Your second paragraph is quite the power statement. I can imagine that as a security researcher you sometimes feel like God. Let’s be fair, though, and keep the consumer peon opinion out of this (after all, I am a consumer and you called me irrelevant when I tried to voice my opinion). You are absolutely right that it is within your power to increase risk or decrease it on the Internet. To date, you (or bugfinders like you) have decided to increase it. It is not clear to me why you want to increase your personal stake to the detriment of the Internet, but that is your right (apparently).
> So sharing the information with folks who want to use it maliciously and probably wouldn’t have found it to begin with is irresponsible.
The intent is to share it with everyone equally. However, I believe that public disclosure helps those who wish to defend systems more than it helps those who wish to do harm. At least that is our intent when we do disclose bug information.
The benefit applies to organizations (and software vendors) that are actively keeping up to date with disclosed information. The others will likely be at greater risk, but the researcher has no control over that.
> Let’s be fair, though, and keep the consumer peon opinion out of this (after all, I am a consumer and you called me irrelevant when I tried to voice my opinion).
Perhaps a better explanation of my opinion is that there are consumers of products who very much want vulnerability details. Should their desire to be informed be any less valid than that of the consumers who wish to stick their heads in the sand and pretend away the existence of vulnerabilities?
> You are absolutely right that it is within your power to increase risk or decrease it on the Internet.
In most industries, when a really undesired side effect of using a product is discovered, say a child’s toy that can lead to death by choking, public disclosure is looked upon as a good thing. Parents who are paying attention will now know not to allow their child to play with the faulty toy. To withhold the information from parents in that situation would be considered unethical. I see software vulnerability information similarly, though I know (from speaking with you at different conferences over the years) that you are no fan of analogies from other industries.
When bugs are found, with or without a workaround (patch), a competent organization can at least be on the ball enough to monitor the affected devices more carefully, or perhaps even remove public access to them.
> To date, you (or bugfinders like you) have decided to increase it. It is not clear to me why you want to increase your personal stake to the detriment of the Internet, but that is your right (apparently).
Vendors have not done a good job of providing a “fitness for use” guarantee for the software they ship. Because of this I do think things will get worse before they get better, but eventually the consumers of insecure software will demand more formally evaluated assurances (a la Common Criteria) of what they are receiving than they get now.
One of my favorite talks I’ve seen on that subject is archived here: rtsp://media-1.datamerica.com/blackhat/bh-usa-00/video/2000_Black_Hat_Vegas_VK3-Brian_Snow-We_Need_Assurance-video.rm
Not much has changed since this talk was delivered. Vendors are still providing an “attractive nuisance” to malicious attackers.
I support full disclosure in most situations because A) it gives organizations the information they need to test whether they are affected by the reported bug, but perhaps more importantly B) it forces us to realize that we need better security controls. Technologies like SELinux, Trusted Solaris, etc. are definitely steps in the right direction. The “find a bug”, “disclose a bug”, “patch a bug” game may ultimately be fruitless without part B kicking in.
Robert