What I learned on Twitter yesterday:
- There are lots of security folks with no lint in their navels.
- HZON LIKES TO YELL.
- People with handles don't know how to use Google.
- Mogull is the spokesperson for a new analyst firm called "Analysts in General."
- Folks never tire of hearing the same old diatribe when they agree with it, but get extremely frustrated when someone provides the same old response.
- There aren't many substantive perspectives on bugfinding – people do it because it feels good.
Thanks for playing!
Hey Pete,
It’s John, or hzon on twitter. My Robert Anton Wilson-style thought experiment was sincere, as I think there is an element of truth to what you’re saying, and I’ve wrestled with these issues my entire career. Someone took it and ran with it, and you have to admit that it was pretty funny. :>
(I think we all develop a warped sense of humor from staring at code way too long.)
The problem I had was that it’s hard to take your entire view seriously because a couple things clashed considerably with my experiences reviewing software professionally over the last decade or so. Basically, in my experience, there is a finite (and relatively small) number of security vulnerabilities in any given program, and removing a vulnerability does materially increase the security of that piece of software. I went through a phase of thinking everything is hopeless, but my work has taught me that’s not really the case.
Sure, there might be more vulnerabilities that you missed, but the thing is that the state of the art in vulnerability research/exploitation evolves over time. As long as you make a good effort at checking against the current collective world knowledge of what constitutes a vulnerability, you can do pretty good. In a massively complex world, pretty good isn’t perfect, but it’s still pretty good. That’s worth something.
This is really tangential to the disclosure debate, as most “bugfinding” is done by security consultants under NDAs, contracted by the companies that develop the software. This work doesn’t result in any public security advisories, as no one would pay for you to publicly draw attention to their failings. It’s actual, real, hard, adult, professional, computer science work, and, while it can be fun at times, I assure you that, for the most part, it’s not.
At any rate, I know you enjoy this debate. I assure you that I do not, but it’s only fair to respond as I raised a criticism of your position. I’ll buy you a beer sometime and we can argue about how long my prison sentence should be. :>
Take care,
John
@John -
“you have to admit that it was pretty funny”
Funny in a one joke, Adam Sandler sort of way, yes. Ridicule is a pretty standard tactic when there are no substantive arguments to be made.
“it’s hard to take your entire view seriously…”
I am not sure you know my entire view – you’d have to do a lot of reading over a number of years to get a sense for that.
“there is a finite (and relatively small) number of security vulnerabilities in any given program, and removing a vulnerability does materially increase the security of that piece of software.”
I agree completely with this and with your further points about finding vulns for specific companies under specific scenarios – that is why QA is useful. Clearly, finding and fixing vulnerabilities within your own environment reduces your risk. The problem comes in the aggregate, when you factor in every vulnerability, Internet user, and attacker in the world.
I don’t think “everything is hopeless” but I do think we need to come up with better ways to succeed.
“I know you enjoy this debate.”
Don’t know why you think I enjoy being ridiculed. I most definitely do not. Unfortunately, I am compelled to try to point out the effect on risk and to maintain a point of view that I believe is correct even in the face of pretty much everyone else in the security profession.
“I’ll buy you a beer sometime and we can argue about how long my prison sentence should be. :>”
I accept on the beer part. I don’t really recall saying anything about jail in the past – I think you’re confusing that with points I made in support of civil action by software companies.
Pete,
Ah, I misread your 6 points as being a summary of your position, as opposed to a specific reaction to that article.
I can’t really offer you substantive argument, though, as you will soon witness, I am a world-class rambler. :> Furthermore, I don’t find that I strongly disagree with any of your points, though I tend to see some of the issues as more a matter of degree and balance. I too think we tend to drink our own Kool-Aid a bit, so I don’t find your thoughts to be offensive or patently false.
Real quick, I’d probably agree that reflexive full-disclosure does increase risk; that obscurity/secrecy is a legitimate security mechanism; and that we definitely need to work towards quantifying what constitutes acceptable degrees of vulnerability in software. In my experience, binary notions of platonic technical perfection and secure vs. insecure are pretty useless when you’re trying to secure any sufficiently complicated real-world system.
To your second point, as a consultant, I found it slightly amusing that vendors would very often direct their security spending toward avoiding embarrassment and stigma, as opposed to achieving some sort of pure technical perfection. This seemed blasphemous to me at first, but when I sat down and thought it through, I couldn’t really fault them.
It occurs to me that this is actually probably a decent case for justifying disclosure. In essence, it provides a level of social pressure (through fear of harm) that compels vendors to police themselves. I think this probably benefits everyone in the long run, as you only need to grievously victimize a few software vendors and their customers before they all start to worry about being the next target. This seems to me to be market forces at their finest, though I make no pretense that it’s a friendly or altruistic sort of activity. :>
As far as your points 3 & 4, I think if you talk to vulnerability researchers, you’ll find that the collision rate among discoveries is a bit higher than one would expect. This isn’t really something that you’ll find anyone rushing to concede, either, as it’s not a particularly great feeling when it happens. I’d also assert that it’s a little bit harder to find vulnerabilities than your summary indicates.
Actually, I think the subtext of your points 3 and 4 implies that people who find vulnerability research to be challenging are effectively incompetent. That’s where you’re likely to get an emotional reaction, as that’s a far worse insult than being called naive or immoral. :>
I’ll stop myself before I try to write a mission statement for vulnerability research and disclosure, but I’ll just say quickly that in essence, I think you’re right to question the myth that bugfinding is inherently noble work that is paternalistically concerned with minimizing the risk to every computer user in the world.
That’s the sort of thing that sounds good on paper but isn’t particularly realistic. In practice, most everyone’s computer probably already has six kinds of spyware on it, and we don’t really have time to worry about them as we have a lot of important work to get through before the robot war.
(In seriousness, there are thoughtful, ethical, and “substantive” people on our side of the fence. They quit talking to me years ago due to obvious reasons, but I do distinctly remember meeting them once.)
At any rate, let me quit while I’m massively behind and apologize for coming off like a jerk. I try to keep my humor light, self-deprecating, and largely centered around the inherent funniness of heavy metal toxicity and lemurs, but sometimes I fall short of the mark.
Take care,
John