I know in the past, I have deferred my opinion on how cryptanalysis compares with bughunting in terms of impact. My current opinion is that it ultimately acts in the same way, except that it has a much milder effect. That is, the imminent threat from bughunting significantly increases the risk on the Internet immediately, while cryptanalysis is likely to take years to actually come to fruition.
There are at least three reasons why the impact of cryptanalysis is lower than that of bughunting:
- The availability of proof-of-concept code and do-it-yourself exploits for ordinary bugs, which cryptanalytic results rarely have.
- The knowledge required to properly execute a cryptanalytic attack against some encrypted/hashed data.
- The nature of the attack typically requires some other compromise first – i.e. you have to have something to crack before you can attack it.
In addition to those three reasons for lower impact, there are two other differences that work in favor of cryptanalysis:
- Cryptanalysis deals with math, not code (when bugs are found in crypto products, they have the same problems as any other bugs, ceteris paribus).
- New information about cracking codes normally provides new insights and techniques that further advance the knowledge base of cryptographers. While this may be true with bughunting, it is pretty rare. (Cryptographers out there – feel free to correct me if this one isn’t accurate and the techniques/insights are generally not new.)
My assessment of the risk suggests that the imminent threat and corresponding risk from successful cryptanalysis is much lower than with bughunting, so I am not very concerned about it… yet.
“My assessment of the risk suggests that the imminent threat and corresponding risk from successful cryptanalysis is much lower than with bughunting”
In general, I agree with this statement, but probably in a different way than the post implies. There seems to be some implication that this type of research should not be published (or even conducted?), and that seems naive. How can one attempt to build things without first understanding how to break them? And how can one fix something one does not know is broken in the first place?
As far as imminent threats go, cryptanalytic results are often academic in nature, while “bughunting” results are often practical in nature. A cryptographic mechanism can be broken in theory while the break remains totally useless in the real world due to, say, its computational requirements, or because weaker security notions are enough for the uses of the mechanism. A bug in a particular implementation, by contrast, exposes some flaw or problem in that immediate implementation, and these issues can often be leveraged for practical attacks. So the impact of cryptanalytic results may be big in terms of future crypto designs but small in terms of current real-world attacks on implementations, while the impact of “bughunting” may be big in terms of current real-world attacks on implementations but small to medium in terms of influencing the design and development of future implementations (it does, however, result in fixes or workarounds for the buggy implementation and its direct relatives, as long as the research is disclosed). I think part of the variation in impact stems from the fact that the crypto community is small, open, and homogeneous in a way that lets it use cryptanalytic results to eliminate weaker crypto from the ballgame quite early on, before widespread adoption and use, while “bughunting” covers a large, heterogeneous community of both closed and open implementations and happens, well, after implementation.
(Also, this does not mean that practical attacks from cryptanalytic results never occur, but those are diluted across the vast number of research breaks, which can be years away from being extended to the real world, if ever. And even when cryptanalytic results make practical attacks on current systems possible, the attacks themselves are often much harder to turn into useful results on their own than attacks stemming from “bughunting.” As we have all heard before, crypto is often not the weakest link in the security chain of real-world systems.)
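To make the computational-requirements point concrete, here is a rough back-of-the-envelope sketch in Python. The attacker budget and work factors are illustrative assumptions, not figures from any particular published attack: even an academic “break” that shaves a 128-bit exhaustive search down to, say, 2^100 work stays hopelessly out of reach.

```python
# Rough feasibility arithmetic (illustrative numbers only, not tied to
# any specific published attack).
ops_per_second = 10**15          # assume a very generous attacker budget
seconds_per_year = 365 * 24 * 3600

def years_to_finish(work_bits):
    """Years needed to perform 2**work_bits operations at the assumed rate."""
    return 2**work_bits / (ops_per_second * seconds_per_year)

# A "break" reducing 2^128 work to 2^100 is a huge theoretical result,
# yet both remain far beyond practical reach (tens of millions of years
# even for the reduced figure, at this assumed rate).
print(f"2^128 work: {years_to_finish(128):.2e} years")
print(f"2^100 work: {years_to_finish(100):.2e} years")
```

This is why a result can be a genuine advance for cryptographers while posing no immediate real-world risk.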
However, things like poor implementation and improper usage of crypto blur the line between cryptanalysis and “bughunting.” These two arenas are by no means mutually exclusive in my view, and combinations of the two can be used to devastating effect. Recent examples that have come up in discussions around me include watermarking attacks on older versions of loop-aes (depending on your threat model for disk encryption), the total crushing of WEP (according to its own design goals), and AES cache timing analysis (depending on your predictions for next year).
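As a toy illustration of the “improper usage” side of that blur (a simplified sketch, not actual WEP or RC4 code): WEP's small IV space forces keystream reuse in practice, and reusing a stream-cipher keystream lets an eavesdropper cancel it out entirely — a usage flaw, not a flaw in the underlying cipher's math.

```python
# Toy demonstration of stream-cipher keystream reuse (the flaw WEP's
# 24-bit IV space makes inevitable). Random bytes stand in for the
# RC4 keystream produced under a repeated IV.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)           # same keystream used twice
p1 = b"attack at dawn!!"
p2 = b"retreat at nine!"
c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# The keystream cancels out: an eavesdropper who sees only c1 and c2
# learns the XOR of the two plaintexts, a serious leak in practice.
assert xor(c1, c2) == xor(p1, p2)
```

Attacks like this require traffic capture first (the “you have to have something to crack” point above), but once the usage flaw exists, the break is practical rather than academic.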
In all of these cases (“bughunting,” cryptanalysis, or the blur between them), publishing security research leads to the ability to improve what is out there, whether by designing and building better primitives, protocols, and systems, or just by fixing flaws in current deployments. And, in general, third-party audits often play an important part in the review process.