Should Verisign sue Sotirov / Appelbaum?

[I am not a lawyer.]

With many (or at least some) security researchers supporting software liability, it is useful to review the alternative legal actions that may be taken throughout the discovery and disclosure process. I think the recent RapidSSL hack by Sotirov and Appelbaum is an interesting case.

In case you missed it, they demonstrated a hack where they could essentially spoof a Certificate Authority by taking advantage of known flaws in MD5 and (now exposed) weaknesses in RapidSSL's certificate issuance process (its predictable serial numbers).
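To see why an MD5 collision matters here, a minimal sketch (using the well-known colliding 128-byte message pair published by Wang et al. in 2004) shows two different inputs producing the identical MD5 digest — the same underlying property the researchers exploited, at much larger scale, to make a rogue certificate hash to the same value as a legitimately signed one:

```python
import hashlib

# Two distinct 128-byte messages from the well-known 2004 MD5 collision
# (Wang et al.); they differ in six bytes yet share the same digest.
m1 = bytes.fromhex(
    "d131dd02c5e6eec4693d9a0698aff95c2fcab58712467eab4004583eb8fb7f89"
    "55ad340609f4b30283e488832571415a085125e8f7cdc99fd91dbdf280373c5b"
    "d8823e3156348f5bae6dacd436c919c6dd53e2b487da03fd02396306d248cda0"
    "e99f33420f577ee8ce54b67080a80d1ec69821bcb6a8839396f9652b6ff72a70"
)
m2 = bytes.fromhex(
    "d131dd02c5e6eec4693d9a0698aff95c2fcab50712467eab4004583eb8fb7f89"
    "55ad340609f4b30283e4888325f1415a085125e8f7cdc99fd91dbd7280373c5b"
    "d8823e3156348f5bae6dacd436c919c6dd53e23487da03fd02396306d248cda0"
    "e99f33420f577ee8ce54b67080280d1ec69821bcb6a8839396f965ab6ff72a70"
)

assert m1 != m2
assert hashlib.md5(m1).hexdigest() == hashlib.md5(m2).hexdigest()
print("distinct messages, identical MD5 digest")
```

The certificate attack used chosen-prefix collisions, which are far more flexible than this fixed published pair, but the failure of MD5's collision resistance is the same.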

This is interesting primarily because it is uncommon. When Adobe and Cisco (and others) attempted legal action against security researchers, the researchers were demonstrating vulnerabilities in software that, if compromised, would impact their customers. This obviously is a concern, but in the end it is very difficult to show damages. That is generally true with most vulnerabilities.

Things change with cases like this (and potentially Software as a Service) where damages are borne by the vendor itself.

This vulnerability comes pretty close to demonstrating business fraud as far as I can tell. I don't know how far they went in their conference session, but if they actually issued a certificate, it seems to me that Verisign/RapidSSL could claim lost revenue (as minimal as it might be at this stage).

I am not sure what to make of their intentional secrecy to protect against legal action. I don't think it was a very smart move to tell people that you purposely didn't tell a vendor because they might have a legal right to keep you from presenting.

In any case, the approach by these folks completely disregards responsible disclosure (as anyone is free to do at this stage), and the whole issue is couched in reasoning that could be applied to any vulnerability discovered — anyone can withhold information from vendors under the assertion that legal action may be taken.
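As an aside, the issuance weakness mentioned earlier (predictable serial numbers) is easy to illustrate. A hypothetical sketch (the function names are illustrative, not RapidSSL's actual implementation): if a CA assigns serials sequentially, an attacker who buys a certificate and observes its serial can predict the serial of a future certificate, which is what makes precomputing a chosen-prefix collision against a not-yet-issued certificate feasible. Adding fresh randomness to the serial removes that predictability:

```python
import os

def next_sequential_serial(last_seen: int) -> int:
    # Hypothetical issuance policy: serials increment by one, so an
    # attacker who sees serial N can predict serial N + k exactly.
    return last_seen + 1

def next_random_serial() -> int:
    # Remediation sketch: 64 bits of fresh randomness per serial make
    # the contents of a future certificate unpredictable.
    return int.from_bytes(os.urandom(8), "big")

# The attacker's "prediction" succeeds trivially in the sequential case.
assert next_sequential_serial(41) == 42

# With random serials, two draws virtually never coincide, and the
# attacker cannot precompute a collision against an unknown serial.
assert next_random_serial() != next_random_serial()
```

This is the same work-around the 2007 paper recommended and that some CAs already used at the time.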

11 comments for “Should Verisign sue Sotirov / Appelbaum?”

  1. January 3, 2009 at 10:33 pm

    Are you kidding me?

    These guys did the community a favor.

    - ferg

  2. Pete
    January 3, 2009 at 11:37 pm

    @ferg -

    No, I am not kidding. What kind of favor was done and who is the community you are talking about?



  3. Jon
    January 4, 2009 at 10:28 am


    Shouldn’t the CA also be liable to the customers (and non-customers, because of the intrinsic risk) because they knowingly withheld a security update to their service that was exposed in academia almost two years prior? [1]. Doesn’t a CA have a higher level of responsibility precisely because one weak CA weakens all CAs, including the responsible ones that at a minimum stopped using MD5 for signing? Or, to make your argument more logically consistent, wouldn’t the authors of [1] be more liable than the group that showed the flaw, since the cat was out of the bag in the paper? (Although there’s overlap between the authors and members of the two groups.) Or, do you differentiate between theoretical risk and empirical risk, and if so, why? Or, heck, go back to 2004 and sue the authors of the first MD5 paper on collisions [2]. They showed that MD5 was weak to begin with.

    Before any researchers are even considered in litigation, I would like to see the CA be penalized. At a minimum, their CA cert should have been (or can still be) revoked from all browsers. We, as in the Internet community, have no way to trust them when they say their CA was not compromised by any prior attacks, even though the information was in the public record since 2007 and weaknesses were known since 2004. This demonstration should not even have been able to occur, in my opinion, if the CA had acted more responsibly and heeded the warning in the 2007 paper: “Therefore we repeat, with more urgency, our recommendation that MD5 is no longer used in new X.509 certificates.” Or, the CA could have even implemented a work-around: “Obviously, the attack becomes effectively impossible if the CA adds a sufficient amount of fresh randomness to the certificate fields before the public key, such as in the serial number (as some already do, though probably for different reasons).”

    When do vendors become responsible for not responding to publicly-known flaws that can impact any Internet user and not just the vendor’s customer base?

    [1] “Chosen-prefix Collisions for MD5 and Colliding X.509 Certificates for Different Identities”, Feb. 2007
    [2] “Collisions for Hash Functions MD4, MD5, HAVAL-128 and RIPEMD”, August 2004

  4. January 4, 2009 at 12:19 pm

    I’m inclined to support Jon in his response. Why would the researchers be sued? They demonstrated a proof-of-concept of a theoretical attack that was published in 2007. With clock cycles for hire (cloud computing), they warned us that the whole PKI infrastructure is in great danger because no one acted upon this ‘old’ research. Time to move on from MD5.

    They did contact some vendors like Mozilla and Microsoft about the issue weeks (or even months) prior to the presentation.

    I saw their presentation live from the first row in Berlin. They did make a certificate and posted it online as proof of concept but they had it expire in 2004 to avoid issues and had it put on the certificate revocation list. I think they acted ‘quite’ responsibly.

    Set your PC clock back and test it on (signed by MD5 collisions inc.)

  5. Pete
    January 4, 2009 at 9:24 pm

    @Jon -

    1) It certainly would be interesting for the researchers or customers or anyone you suggest has suffered damages to take RapidSSL to court. The damages are not as clear to me, though that would change if you know of a case or cases when this weakness was exploited in the wild.

    2) I think you are assigning a level of trust to CAs that most Internet users do not. It is pretty easy to get a certificate signed (remember the Microsoft one?), and this isn’t even the easiest. In any case, nobody cares about authenticity, they just want the encrypted comm.

    3) Don’t forget about the intelligent adversary. It is remarkable how easily the real ‘bad guy’ gets left out of the equation.

    4) All non-trivial software has vulnerabilities. The world lives with it the same way the world lives with things like glass and plywood and easily picked locks on houses without suggesting the builders or the manufacturers are liable if someone breaks in.

    @Security4all -

    The reason researchers could get sued is that they have compromised the business process and (potentially) acted in a fraudulent manner. This is an exercise anyway, as Verisign appears generally supportive of their efforts.

    Thanks for the thoughtful comments!

  6. CG
    January 6, 2009 at 12:36 pm

    I think you should go watch the presentation before, well before you respond to any other comments.

  7. Pete
    January 6, 2009 at 1:00 pm

    @CG -

    Does this count? ;-) If there are factual inaccuracies in my post, please point them out. If you disagree with my opinion, I’d enjoy hearing the counterargument. Thanks.

  8. January 6, 2009 at 4:22 pm

    The (online) discussion continued, you should read this latest article by Alex:

    They never issued certificates to anyone. I still don’t see how they acted in a fraudulent way and disregarded responsible disclosure to the vendor.

  9. Jon
    January 6, 2009 at 5:29 pm

    Back to you Pete:

    My argument, relative to yours, was that the CA should be liable to the end user and/or Internet community prior to any litigation against the authors. A better argument against your position (IANAL either) would be my hope that, since the CA did not act on a two-year-old paper, their lawsuit would not hold up. But my cynicism about technical reasoning affecting legal proceedings is very deep.

    #1: I don’t recall anything about the CA being sued, so I see this as a non sequitur to my comment. I see the CA being penalized by the community to which it supposedly offers trust.

    #2, #3, #4 I’m lost as to how these relate to my comments. No offense, but they seem like straw man arguments. For #2, step back and then ask yourself what is the purpose of a CA. #3, I don’t think I included or excluded any adversary in the discussion. #4, um, yeah, not even touching this.

    Btw, I have disabled the RapidSSL CA [1] in my settings. The whole point of a CA is trust.

    [1] C=US, O=Equifax Secure Inc., CN=Equifax Secure Global eBusiness CA-1

  10. Pete
    January 6, 2009 at 6:52 pm

    @Security4All -

    I agree that it might not be fraud, I just think that this kind of situation is grayer than ‘standard’ vuln discovery/disclosure as discussed in my original post.

    Responsible disclosure incorporates notification of the affected vendor with some grace period prior to public discussion. They didn’t do that. The reason they used is a reason that anyone could use in this arena and therefore means that nobody needs to follow responsible disclosure (which they don’t anyway).

  11. Pete
    January 6, 2009 at 7:15 pm

    @Jon -

    I certainly don’t mean them to be straw men, but I do find your argument fairly generic as well, so maybe I am taking liberties. Here are some clarifications:

    - It is up to the damaged party to assert liability and to sue somebody. That means someone who feels damaged should attempt it. To suggest somebody is liable also assumes that the injured party will sue for damages. I think it would be interesting (and fruitless) for someone to try to sue the CA.

    - The action above does not preclude RapidSSL from seeking damages from the researchers for reasons originally described. I don’t know if this would work or not. Clearly, the researchers increased the risk by a huge amount (though I don’t believe increased risk is enough to assert damages).

    - These two theoretical civil suits are not mutually exclusive nor are they synchronized in any way, so I don’t understand why you think someone should be sued “before” or “prior” to anyone else.

    - My points with 3 and 4 relate to the assumption that people always want to assign liability based on individual vulnerabilities (how many times has the Internet died again?), yet there is no objective litmus test to determine this with software (actually, there is, but nobody uses it). Affect and availability heuristics prevail in cases like these, which is probably the reason you don’t want to “touch this”.

    - If you are suggesting that every website that can set up an SSL tunnel should be trusted, I think you are crazy. No, the point of the generic CA is definitely not trust. It simply satisfies a process to set up an encrypted channel between two untrusted points. (This is different from true CAs in meaningful PKIs).

Comments are closed.