Since TQBF is nice enough to read my posts and intelligent enough to respond to my arguments with alternative, reasonable (though ultimately wrong) ones, I am glad he took my obvious goading to expand on his argument. A couple of key points/counterpoints:
TQBF: It was simply not possible to run a secure mail server in 1996. It is possible now. The reason it’s possible is because security "researchers" beat the living hell out of software in the late 90′s.
I don’t get:
1) why he thinks mail servers are "secure" today. I sure as heck hope he hasn't upgraded since 1996; otherwise, believing they have been magically cured is just the sort of weakness I would look for in my next victim (were I a black hat); or
2) why he thinks that security researchers beating the living hell out of a mail server is the ONLY way to make it more secure. It happens to have been the way that was used, but this isn’t by necessity (note that I still don’t stipulate that mail servers have been cured). Humans are incredibly industrious and creative when necessary.
TQBF: Back in 1996, there were just as many people complaining about "disclosure management" issues as there are now. Donn Parker compared the release of SATAN, a tool that finds open file shares, to "distributing high-powered rocket launchers throughout the world, free of charge". The same arguments Lindstrom uses now: that disclosure serves no purpose, that the evildoers will use vulnerabilities anyways, all applied and were used back then.
The point is, when you advanced those arguments in 1996, you were dead wrong. Why are you any more correct now?
There I go, getting myself lumped in with others again. I don’t know the particulars of Parker’s argument, though I do recall the hubbub around SATAN, but I also can’t state categorically that he (and apparently by association, I) was wrong. I don’t get what evidence is proof of being wrong, either. The key to being "secure" is minimizing incidents, since they are the unwanted outcome. Answering the question, "Did we have more or fewer incidents because of this practice?" is nontrivial. One of the key reasons is that researchers change the future so there is nothing to compare it to. In that spirit, I say THEY were wrong; we would have had fewer incidents without the researchers’ work (wow, that feels empowering! Unsubstantiated statements of fact – now I know why everyone in our profession does it!).
TQBF: Lindstrom says, "things change". I guess so. Can you support that statement with evidence?
I am guessing TQBF is kidding here. I have lots of evidence that things change, from countries of the world to my weight and hair color to the distributed nature and component architecture of our computing environments.
TQBF: "…I’m happier to know that they’ve got library and OS-level protection against stack and heap overflows now, and confident that Microsoft customers wouldn’t have had those protections without disclosure work."
How silly is that? We’ve had trusted computing since (well, I don’t actually know, but at least the mid-80′s). And here’s another secret (sssshhh): we will come up with a solution to any problem worth solving – we’ve been proving that for the past two thousand years.
How come most people live in houses of wood and glass? A big bad wolf could get in anytime he wanted. Microsoft customers may not have needed those protections if it weren’t for that work. And I GUARANTEE you, that if they needed the protection, they would have gotten it regardless.
TQBF: In the "risk reduction game of small numbers", finding a vulnerability before the Russian Mafia does provides a measurable reduction in the number of machines that can be compromised.
This is (probably provably) not true unless the Russian Mafia only has the one vulnerability or you find all the vulnerabilities they have. If I want to steal your car and I have six secret ways to do it and you found out five of them, your car is mine. The number of cars, or machines, is not reduced. Add to that the number of machines that DO get compromised when a vulnerability is disclosed and you have a no-win situation.
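The car example above can be reduced to a toy model (my own illustrative sketch, not from either post; the function name and numbers are made up): patching some of an attacker's vulnerabilities leaves a machine exactly as compromisable as before, unless you patch every last one.

```python
# Toy model of the overlap argument: an attacker knows some number of
# vulnerabilities; defenders discover and patch a subset of them.
# A machine remains compromisable so long as at least one of the
# attacker's vulnerabilities is still unpatched.

def still_compromisable(attacker_vulns: int, patched: int) -> bool:
    """True if the attacker retains at least one working vulnerability."""
    return patched < attacker_vulns

# The car example from the text: six secret ways in, five of them found.
print(still_compromisable(6, 5))  # True  -- the car is still stolen
print(still_compromisable(6, 6))  # False -- only total overlap helps
```

The point the model makes concrete: the count of compromisable machines only drops in the all-or-nothing case, which is why partial discovery buys no measurable reduction under this argument.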
Btw, I am going to go out on a limb and suggest that over the next five years, we will find at least two thousand new vulnerabilities in software that exists today! Are you really suggesting that we are so in sync with the (pick your bad guy) that we will find all of them "before [they] do"? I say you're crazy.
(TQBF has lots more commentary in his post that relies on the near-impossibility of total overlap and the inherent stupidity of humans, both of which I believe to be quite wrong.)
TQBF also mentions source code auditing, something I agree is very worthwhile and often leave out of my arguments. Not sure why, since I think source code auditing is useful now and holds great promise for the future.