Ryan Naraine's Zero Day blog is covering the current IE -> Firefox vulnerability and the question of responsibility. This may be a good example of the issues associated with liability. While Ryan seems to indicate that Microsoft is more to blame, it seems equally clear to others (and to me) that any "interference" by Microsoft in qualifying inputs could backfire on them.
Meanwhile, Jeremiah Grossman is writing about the recently launched vulnerability auction site. I plan to write more on this later, but one point he makes is pertinent to this discussion as well:
"My question for them is, despite how they feel, do they have a responsibility to at least to attempt to bid for a vulnerability to defend their customers? I think so. That and invest more into their SDLC so there is less to bid on."
It is much too easy to fall into the "insecure by exception" trap, and Jeremiah has fallen into it here. The notion that the existence of any vulnerability means one should invest more in the SDLC assumes that one can write perfect software. To the extent that one can't, investing more may or may not help. We don't know how many vulnerabilities are "reasonable," yet we always assume that if we find one, that is unreasonable… and few people suggest that perfect, invulnerable software is attainable (except in trivial cases). In addition, the suggestion that bidding on ad hoc vulnerabilities is "defending" customers is specious at best.
Software liability is a bad idea for a number of reasons, and these are just two of them.
These auctions are a bit like the guns-for-cash programs set up in some cities, aren't they? "Bring in your gun, and no questions asked we'll give you money to take it off your hands" becomes "Bring in your vulnerabilities, and no questions asked we'll offer money to keep you from using them against us." The questions in both cases are essentially the same:

1. Is this a reward for bad behavior? Or, phrased in starker terms: is this blackmail?
2. What happens if the government (or, in this case, the affected programmer) is outbid? Are we creating a situation where those who want to do harm can easily identify where the tools to do that harm are located?

But the larger question underlying all of this, a question that Grossman obviously answers in the affirmative, is whether we accept that the free market should be pure in all cases, even when it comes to matters of security. And, as you suggest here, is this a fair and reasonable state of affairs if we accept that no code is ever going to be produced without vulnerabilities?