Microsoft had its Blue Hat briefing last week. The New York Times reports on it. There is nothing really new here, but I do want to comment on one quote, attributed to David Maynor of ISS:
Microsoft had also erred in public assertions about the security of its coming Xbox 360 game console, he said, adding, "You’re a huge target, and when you challenge people, they will prove you wrong."
In today’s climate, this is exactly wrong. Microsoft should make itself a target – we know from experience that it already is one, so getting more bugs found more quickly is a better outcome than random public discovery.
A quick step back: in a perfect world, I am an advocate of nobody looking for any vulnerabilities. The only possible exception there is to be looking for completely new techniques (which happen once in a blue moon). But this buffer overflow stuff is completely destructive; there is no redeeming value.
Of course, we don’t live in that perfect world. We live in a world where random people look for random vulnerabilities in random applications. Randomness is killing us. In this world, the best thing to do is to create a set period of time and have a contest, real or implied. Get everyone focused on one platform, perhaps by offering a reward (from the manufacturer, not this ridiculous stuff by third parties).
So making provocative statements in public is a great way to get as many bugs as possible found within a set period of time. In addition, the provocation and any possible reward give finders more reason to make their discoveries public.
What Dave actually said was “Did you really mean to poke the bear? You’re a huge target…”
Adam – thanks for the clarification (I am looking forward to reading your summary as well). The actual quote sounds a bit more noncommittal, so I guess I would encourage him to assert that in today’s security climate, they SHOULD poke the bear.
Blue Hat Report
The other thing I did at Microsoft last week was I participated in Blue Hat. Microsoft invites a selection of interesting researchers to come to Redmond and present a talk to a variety of people within the company. Blue…
There are … strange economic realities in the security world. For example, vulnerabilities cost customers much more than they cost vendors, since the vendor has already received payment while the customer is faced with a degrading product. The money paid cannot be exploited but the value received can be.
So yes, there’s a desync in importance between vendors and customers. It’s well known.
You know, randomness and progress are kind of linked. The alternative is single sourced innovation.
@Dan -
Thanks for the comment. It sounds like you are looking at face value $$$ for vendors but patch costs for customers. I am not sure I agree with that assertion. The costs to a vendor for patching a system can be significant (developer time, customer support time, opportunity cost for Vista). Of course, the same can be said for customers, but it is not clear to me that there is “much more” of a difference.
re: Strange economic realities – I definitely agree with you there. The economic reality is that very little we do in security makes economic sense.
I am not sure what you mean by your last point. If you are suggesting that random vulnerability seeking leads to “progress” that would only have single sourced innovation as an alternative, then I am happy to heartily disagree.
The vulnerability discovery we do today has real benefit, and I could prove it if I only had the math skills.
Vulnerability development has destroyed the concept that “nobody would ever find this”. This has caused changes across the board — neither Microsoft nor Linux developers have any doubt that if they use obviously exploitable mechanisms, their products will eventually fail.
Vendors lose the value of future purchases. Customers lose the value of present assets. Since money now is worth more than potential money later, customers have much more to lose from a given security risk than a vendor.
MS is about to drop a mad amount of economic data on security / anti-malware / SP2. I’ve gotten some early previews; it’s *beautiful*.
Re: Randomness — systems can be designed to do one thing and one thing only, or they can be designed to recombine in new and interesting ways. There is little hope of progress in the former model, but security is orders of magnitude more difficult in the latter. This is the realm where we have trouble.
I would agree with you if you could guarantee that every person who finds bugs would be altruistic with their research. As a researcher myself, I wish this were the case, but it’s not. BTW, my comment had far more to do with PR than with discouraging researchers. The actual context was that people were going to find a way to break it, and boasting about how secure it is just makes you look bad (like Oracle with their “unbreakable” campaign).
@David -
Not sure exactly what you mean with your altruism condition. If you believe that “altruistic” research is beneficial, you should want as many vulns found as quickly as possible. If you believe it is unlikely that the good guys will find the same bugs that the bad guys do, then timing doesn’t matter (though you would be correct in your belief).
I think boasting is the best way to get vulns found quickly and at least have a chance of “bad guys” making their finds public in order to embarrass Microsoft. I am not sure where PR fits into secure software, but MS is in a no-win situation anyway. I’d rather the “altruistic” type do whatever damage they are going to do sooner not later, so we can move on.
The entire point of my comment is that every security system has bugs; the more you boast about how great your security is, the worse you look when it’s broken.