Robert Graham at Errata Security has yet another thoughtful post – this one on the “rudeness” of vulnerability disclosure. His key point:
“However, vuln disclosure isn’t friendly. It is an inherently rude act.”
It is an interesting post, primarily focused on the psychological relationship between bugfinders and vendors, but the thing I find the most puzzling is that final phrase in the final sentence: “unfettered security research serves the greater good.”
I guess my big question is how Rob defines “the greater good.” I infer from his post that he thinks in terms of software defects. That is, the existing vulnerability discovery and disclosure cycle has led to fewer vulnerabilities than there would have been had this process not existed. This seems like a fairly reasonable assertion. My only question in this regard is whether the exorbitant cost was worth it.
But there is another, more important aspect to security research that gets ignored quite frequently – risk. I believe that almost all, if not all, “whitehat” security researchers focus on the vulnerability part of the risk equation in their attempts to reduce risk. But the ultimate consequences, in the form of compromises, are largely overlooked. So the pertinent question about whether vulnerability discovery and disclosure “works for the greater good” is not whether vulnerabilities are reduced; it is whether incidents, and the likelihood of future incidents, are reduced. That is not at all clear to me. [Note that there is an even more granular notion of cost here that probably isn't worth getting into at this point.]
I believe that if you asked 100 security professionals with at least 15 years in the field whether the existing vuln discovery/disclosure process has led to more or fewer compromises since 2000 than would have occurred without it, 95 or more of them would assert that the process has likely led to more compromises. And even if many disagreed, the evidence seems clear enough in this regard.
This is a challenging point to make, because the past happened as it happened; obviously we can’t change it (unless you explode a nuclear bomb at the key location of significant electromagnetic energy, of course). But presumably bugfinders considered what the likely outcome would be if they didn’t go about discovering and disclosing all those vulnerabilities.
Ultimately, my position is that the path of public and semi-private discovery and disclosure we went down over the past 10 years increased our risk, resulted in more incidents, and made security more expensive than it would have been otherwise. Further, I believe the supply of vulnerabilities is so vast that, although patching vulnerabilities does lead to a smaller attack surface, the attack surface remains so large that the reduction is inconsequential to the net impact on risk. That is, the reduction in attack surface does not outweigh the increase in threat arising from this discovery and disclosure process.