I wrote a more in-depth risk assessment for the DNS flaw on the Burton Group SRMS blog.
Btw, I have completely ignored the costs-of-patch-failure aspect of things, though it seems to be causing more of a problem lately.
I’ll repost my comment from your blog here in case you want to respond, but not on your corporate blog:
Vulnerability disclosure in this case is different than normal.
This is a design-level flaw, and it requires design-level fixes. It's less like your average buffer overflow and more like the disclosure of the existence of the first buffer overflow. The fix is not a patch to one buffer overflow; it's more like the addition of ASLR and NX protection.
It's a sad thing that many parts of our critical infrastructure are still in such a state that new attack classes are still being discovered and exploited, but the exploits are valuable in helping us design better protections into the infrastructure. For example, a DNS hardening RFC that was in progress is now being rewritten because some of its assumptions were proven incorrect by this new class of attack.
Anyone who knows about DNS knows that the state of the art is always right on the edge of brokenness, and that the major players patch just enough to fix today's vulnerability. DJB was one of the few exceptions: he designed all the strength he could think of into his implementation, and his implementation had the fix that is being rolled out today many years ago.

To me, that's the real story here: we should be spending our effort both on building things to be as secure as we reasonably can and on researching new attack classes and defenses that can be applied in a generic manner, rather than chasing the latest buffer overflow or XSS bug. Research in static analysis, strong protections in frameworks and libraries available to everyone, and strong, well-designed, reusable protocols like SSL and DNSSEC are the things that really push security forward. Vulnerability disclosure is just a useful tool for pushing those goals along.
Here's my question for you. DNS source port randomization has always been a good idea; it has been well known, and even implemented in some software, for many years now.
Why is it that people will only roll it out after a vulnerability disclosure and media circus?
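To make the mechanics concrete for anyone following along, here is a rough Python sketch of the difference. It is only an illustration, not code from any real resolver: the 192.0.2.53 address is a placeholder you would point at a resolver you control, and the query builder is the bare minimum needed to show the idea. With a fixed source port, an off-path attacker only has to guess the 16-bit transaction ID to forge a reply; picking a fresh random port per query adds roughly 16 more bits he has to guess blind.

```python
import random
import socket
import struct

RESOLVER = ("192.0.2.53", 53)  # placeholder (TEST-NET) address; use your own resolver

def build_query(name, txid):
    """Build a bare-bones DNS query for an A record."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # RD bit set, one question
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def query_fixed_port(name, port=5353):
    # Vulnerable pattern: every query leaves from the same source port, so an
    # off-path attacker only needs to guess the 16-bit transaction ID.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.sendto(build_query(name, random.getrandbits(16)), RESOLVER)
    return sock

def query_random_port(name):
    # Hardened pattern: a fresh, randomly chosen source port for each query,
    # on top of a random transaction ID.
    rng = random.SystemRandom()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        try:
            sock.bind(("", rng.randint(1024, 65535)))
            break
        except OSError:
            continue  # port already in use; pick another
    sock.sendto(build_query(name, random.getrandbits(16)), RESOLVER)
    return sock
```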
In a perfect world, if the industry as a whole really cared about security, proactively strengthened its software development life cycle, and deployed the best defenses we knew of, then vulnerability disclosure might be a bad thing. We don't live in that world; we live in a world where the cheapest thing that just barely works gets thrown on the net.
I would like to live in that world, and spend my time and energy pushing for systemic fixes as much as possible. Until the world is perfect and we sit around making daisy chains, we need vulnerability disclosure.
A side note is that people who care could have protected themselves on day 1 of vulnerability disclosure, as I did. Larger deployments of course take more testing, but 2-3 weeks is not an unreasonable time period.
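For what it's worth, here is the kind of quick-and-dirty check I mean, sketched in Python. Capture your resolver's outbound queries with something like `tcpdump -l -n udp dst port 53`, pipe the text into the script, and see whether the source ports actually vary. The regex assumes tcpdump's default IPv4 one-line output, so treat this as a starting point rather than a tool.

```python
import re
import sys

# Matches the source port in tcpdump lines like:
#   12:34:56.789 IP 10.0.0.2.43121 > 192.0.2.53.53: 12345+ A? example.com. (29)
SRC_PORT = re.compile(r"IP \S+\.(\d+) > \S+\.53:")

ports = set()
queries = 0
for line in sys.stdin:
    match = SRC_PORT.search(line)
    if match:
        ports.add(int(match.group(1)))
        queries += 1

print(f"{queries} queries seen, {len(ports)} distinct source ports")
if queries and len(ports) <= max(1, queries // 10):
    print("Source ports look static or nearly so; the resolver may be unpatched.")
```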
We as an industry can only help those who care. Is there collateral damage? Yes. Would there be more or less damage without vulnerability disclosure, if security were perpetually stuck where it was 10 years ago, or even frozen where we are today? Where should we freeze the research clock?
Unfortunately in most areas of life, you must be running to stand still, and security is no different. If you don’t care, you will get run over. We can only help those who care. It’s an imperfect universe.