Folly of Vulnerability Seeking

I have written about the idea that vulnerability seeking is ultimately more harmful than beneficial to the community in a number of places, most recently here. In a nutshell, my argument is that there are likely many more vulnerabilities being created every day than are being discovered, and that the costs of every newly-discovered bug are escalating so rapidly that it doesn’t make sense to continue, especially since there has been almost no evidence of "zero-day" exploits (which I define as exploits in the wild that target a vulnerability not generally known to the security community).

Adam Shostack provides some feedback on the problem, which I will respond to here:

Pete Lindstrom has argued that we need to end the bug-hunt:

Once evaluated, neither reason provides a good foundation for continuing the practice of vulnerability seeking, but it gets much worse when we consider the consequences.

There is a rarely mentioned upside to all this bugfinding, which is that researchers use the exploit code to test defensive mechanisms. Companies like Immunix, PivX, or Sana could not accurately test their tools without exploit code. That’s not an argument for immediate release. But those zoos of exploit code are very useful.

I don’t see any reason that exploit code would cease to exist; the volume and proliferation would just slow down. Of course, I certainly wouldn’t lose sleep if there were no exploits anymore. Ultimately, the existence of these host intrusion prevention products is what strengthens my position, because it shows there are solutions that don’t rely on signatures of known attacks. In addition, since some of these solutions work by identifying all legitimate activities and blocking anything not defined, testing is as simple as attempting to perform other activities (and if it isn’t simple, that is fine).
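To make that testing point concrete, here is a minimal sketch (in Python, with made-up process names and paths) of the default-deny model these products use: enumerate the legitimate activities, block everything else, and a "test" is simply any activity outside the declared set. It is not meant to represent any particular vendor's engine.

# Minimal sketch of a default-deny activity policy. The process names,
# actions, and paths are invented for illustration only.
ALLOWED_ACTIVITIES = {
    ("webserver.exe", "read",  "/var/www/html"),
    ("webserver.exe", "write", "/var/log/httpd"),
    ("webserver.exe", "exec",  "/usr/bin/php"),
}

def is_permitted(process: str, action: str, target: str) -> bool:
    """Allow only (process, action, target) triples that were declared up front."""
    return (process, action, target) in ALLOWED_ACTIVITIES

# Testing is just attempting activities outside the declared set:
assert is_permitted("webserver.exe", "read", "/var/www/html")
assert not is_permitted("webserver.exe", "write", "/etc/passwd")  # blocked by default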

More importantly, Lindstrom says what we should not do. He’s clearly been talking to too many security experts. I’d like to hear what we should do. More laws like the DMCA? Privately paid bug bounties? Public beheadings?

Adam, thanks for being a perfect straight man. Here is what we should do (I am going to be lazy and cut and paste from an old email message):

1) We are still stuck with trying to convince a large part of the security population not to participate in group hugs around discovery, and to report what they find to their vendors instead. The only alternative I can think of is to regulate… although it may be that this discovery stuff is illegal already, right? Hmmm, maybe we should just enforce the law…

2) Have the BSA/SPA create a program whereby, once a year (say, January), anyone who has found a vulnerability has the opportunity to get it off their chest and get paid for it. This is intended to address the accidental or circumstantial discovery that will still occur, and it drives PREDICTABLE DISCOVERY AND DISCLOSURE. Then, in April or May or whenever, every vendor issues its patches. We would have to address the sheer volume and give vendors a chance to come up with a patch… Then vendors would contribute to the kitty based on the number of vulnerabilities in their products.

3) Number 2 would probably be crazy unless we have a mitigation strategy: Software Safety Data Sheets. SSDSes would act like Material Safety Data Sheets in the chemical world, which identify interactions and other ‘bad stuff’ that can happen with chemicals and ship with the chemicals. The SSDS ships with software and identifies (help me out here) the file/directory access rights required, the APIs called, the shared libraries used, and other key "touch points" of any software program. Software companies ship this SSDS as an XML file that can be imported on the receiving end into a host IPS solution like Okena. There are plenty of solutions out there now that have to "autolearn" this exact same stuff, so why shouldn’t the vendor know about it anyway? (A rough sketch of what such a file might look like follows this list.) I actually would support a mandate for SSDSes as a hedge against some sort of crazy software liability legislation.

4) More honeypots. Well, if the government ran these, there would probably be cries of "echelon" out there, but the bottom line is that we would need to protect against the real bad guys. That should be easier once the good guys get out of the way, but it will still be difficult and will therefore require more monitoring. I don’t recommend honeypots within a typical enterprise environment, but I think there is plenty of opportunity to use more deception to learn about attackers than is in use today.
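As promised in item 3, here is a rough sketch of what an SSDS might look like and how a host IPS could import it. No SSDS schema actually exists, so the element names and the ExampleApp entries are invented for illustration; the only point is that the vendor declares the touch points up front and the receiving end turns them into allow rules instead of having to autolearn them.

# Hypothetical SSDS import sketch. The XML format below is made up; it simply
# lists the "touch points" a vendor would declare for its product.
import xml.etree.ElementTree as ET

SAMPLE_SSDS = """
<ssds product="ExampleApp" version="1.0">
  <filesystem path="C:\\Program Files\\ExampleApp" access="read"/>
  <filesystem path="C:\\ProgramData\\ExampleApp\\logs" access="write"/>
  <library name="libexample.dll"/>
  <api name="CreateFileW"/>
</ssds>
"""

def import_ssds(xml_text):
    """Turn a vendor-supplied SSDS into allow rules a host IPS could enforce."""
    root = ET.fromstring(xml_text)
    rules = []
    for fs in root.findall("filesystem"):
        rules.append(("file", fs.get("access"), fs.get("path")))
    for lib in root.findall("library"):
        rules.append(("library", "load", lib.get("name")))
    for api in root.findall("api"):
        rules.append(("api", "call", api.get("name")))
    return rules

for rule in import_ssds(SAMPLE_SSDS):
    print(rule)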

I think that Lindstrom and I are in full agreement: The current system is bad, and we’d all like to do better. I don’t think attacking the bugfinders is the right approach. We need to stem the problem where it starts. The problem starts with development languages that are unsafe at any speed. Developers aren’t trained in their use. Projects are driven to ship quickly, without good QA.

In his personal reply, Shostack also states that "We agree the situation stinks." Of course, Democrats and Republicans have "agreed the situation stinks" for some time now and still can’t come to terms with how to resolve it. Regardless of whether it actually does stink or not – in some respects, this can be as human a problem as tpyos that never go away – I don’t believe the ends justify the means. It’s like killing innocent bystanders in an attempt to capture the criminal.

There are better ways to develop. The eXtreme Programming folks call for better test harnesses. Better modularity allows you to develop and test patches faster. Better patch management, including bullet-proof rollback, allows your customers to deploy patches at lower risk. More use of things like Stackguard automatically closes off avenues of attack.

I recite these things because there are better ways to do things. Those better ways make sense, and smart companies are adapting them. I’d love to see an analogous way to improve bug-hunting.

Hopefully, some of my ideas above will strike a chord with folks.