In a post about Oracle password weaknesses, Thomas asked:
So, Peter Lindstrom, is this the kind of disclosure you believe should be outlawed?
I replied in the comments:
No. It isn’t a bug, it is simply a weak implementation and passwords are suspect to begin with. The configuration recommendations appeared reasonable to me at first glance.
In an attempt to ridicule me, Thomas updated his post with:
I just want to thank Peter Lindstrom for his comment, which adds the phrase "it’s not a bug, just a weak implementation" to the lexicon.
I responded with a second post which appears lost in limbo:
Well, I was going to say "it’s not a bug, it’s a feature," but that is too much of a cliché. I assume you get my point regardless. If you don’t, then APPARENTLY you don’t have enough experience to figure it out.
And Chris W added, again in Limboland (these comments don’t show up on the blog page, but do on the Blogger comments page):
To Pete: So I guess design flaws are OK to out. If the vendor meant it to be that way it is fair game to say, "this is how the vendor built it" and describe its weaknesses. Well I would agree that it is a good thing to out design flaws. That is what we did at L0pht with the LANMAN hashing weaknesses and Microsoft added password strength filtering. Then years later they even let you turn off storage of the LANMAN hash. They were quicker when we pointed out that NTLM challenge response over the wire was subject to a dictionary attack. We then got NTLMv2 which is MUCH better. Just when was Microsoft going to fix these weaknesses if their customers didn’t know about it? These kind of design fixes cost millions of dollars.
I assume that Thomas, and in some respects Chris W, are pulling my chain, trying to find a chink in my armor wrt my position on the destructiveness of vulnerability seeking. Let me elaborate.
First, there is a difference between coding mistakes that can be exploited by breaking a program’s functional use and coding decisions that correctly reflect the design. Tom and Chris apparently believe that if a system can be compromised, it has a "flaw". They are wrong. (And I would love to know how they secure their houses, cars, and persons against all attacks everywhere.)
Second, I can only surmise that Thomas and Chris haven’t spent time in large organizations, or they would understand the distinction a bit better. You see, there is this thing called "risk" which means that as soon as you deploy a system, it may be attacked. There are lots of ways to compromise these systems. Our goal, as security professionals, is to assess this risk to determine the ways in which a system can be compromised. So we come up with a number of threat scenarios and evaluate the likelihood that they will be exploited.
Anyone who has worked in a large company knows that the real risks associated with passwords are social engineering and attacks against the account owner. The whole "password hash" thing is a minimal problem, particularly in switched environments with decent controls at the O.S. level, because it is much easier for an attacker to simply call someone and ask for their password.