Debian OpenSSL Vulnerability Still Pains Two Years On

In April 2006, someone filed a bug report with Debian complaining that OpenSSL (an open source SSL/TLS implementation) read data from an uninitialised buffer. This was causing Valgrind (a brilliant debugging tool) to report uninitialised-memory warnings wherever the OpenSSL random number generator was used. This was actually the behaviour intended by the OpenSSL developers, as the uninitialised buffer was being fed into an entropy pool used by the random number generator.

A fix was proposed that zeroed the uninitialised buffer before first use. This would have had minimal security implications, as OpenSSL used multiple reliable entropy sources in addition to the buffer. However, the initial patch didn't silence all the Valgrind warnings. Somehow the proposed fix mutated into one that caused almost no entropy to be added to the pool other than the process ID, rendering the random number generator almost entirely predictable. This in turn meant OpenSSL generated extremely weak SSL certificates, SSH keys and a bunch of other things. The broken OpenSSL version made it into Debian in September 2006. It later propagated into Ubuntu.
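To see why a PID-only seed is catastrophic, here's a toy sketch (this is not OpenSSL's actual code; the key derivation is a hypothetical stand-in). Linux process IDs default to a range of 32768 values, so a generator seeded only by the PID can ever produce at most 32768 distinct keys:

```python
import hashlib
import math

# Assumption for this sketch: with the broken patch, the PRNG seed
# depended only on the process ID. Linux PIDs default to [0, 32768).
PID_MAX = 32768

def toy_keygen(pid: int) -> str:
    # Hypothetical stand-in for key generation: output derived
    # deterministically from the PID and nothing else.
    return hashlib.sha256(pid.to_bytes(4, "big")).hexdigest()

# Every key the broken generator could ever produce, across all PIDs:
all_keys = {toy_keygen(pid) for pid in range(PID_MAX)}
print(len(all_keys))             # 32768 possible keys in total
print(math.log2(len(all_keys)))  # 15.0 -- i.e. ~15 bits of "security"
```

An attacker can simply pre-generate all 32768 candidate keys per key type and architecture and test each one, which is exactly what the published blacklists and scanners did.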

Incredibly, no one noticed the vulnerability until 2008. By then, a massive number of packages and users had been affected. Cue scrambling to fix the bug in Debian and Ubuntu, creation of blacklists and scanners for vulnerable OpenSSL and OpenSSH keys, arguments about who was to blame and how open source practices could have failed so badly. Also, cartoons.

Yes, I did feel the need to reiterate the entire bug's history. Reading the bug report is like watching a slow-motion car crash. Even the "fix" for the initial bug was applied incorrectly, which meant it first appeared in Debian's OpenSSL version 0.9.8c-1 (released 17 September) instead of 0.9.8b-1 (released 4 May). I suppose that was a good thing.

Even now, over two years after the bug was discovered and almost four years after it was originally introduced, the damage it caused is still coming to light. The Electronic Frontier Foundation has had to delay the launch of a project surveying all publicly used SSL certificates because it is disclosing vulnerabilities to websites it found using weak private keys generated by the broken Debian/Ubuntu OpenSSL versions.

Since the EFF SSL Observatory page will presumably be updated at some point, here is what it currently reads:

This project is not fully launched yet, because we are currently engaging in vulnerability disclosure for around 28,000 websites that we observed to be using extremely weak private keys generated by a buggy version of OpenSSL.

Further information can be found in the SSL Observatory DEFCON 18 slides. Of the 28K vulnerable certificates seen, the 530 that validate are the most interesting; the others were either invalid or among the 12K issued by private certificate authorities. Only 73 of the 530 valid certificates had been revoked. In particular, none of the 140 valid certificates issued by Equifax had been revoked, and only 4 of the 125 issued by Cybertrust.

In conclusion, this all seems rather depressing. A bug in a patch to an important open source security library went unnoticed for two years, despite reducing the effective security of the keys it generated to almost nothing (15 bits). Furthermore, a further two years after the bug was found, a bunch of private "trusted" companies still haven't taken measures to ensure the SSL certificates they issued for keys from this library are marked invalid.

On a positive note, I'm sure Debian has learnt from its mistakes by now, but it would still be nice to find some policy document showing what has changed (I failed to find one). Rather unsettlingly, it seems like this was exactly the type of bug the Debian Security Audit Project was set up to spot.

Unfortunately, nothing so positive can be said about the state of Certificate Authorities. Until one is sued for failing to take proper security precautions, their behaviour is unlikely to change. As for security, there's nothing to stop a CA from collaborating with a government or other entity that wants to eavesdrop on communications (as the EFF warns). Also, no amount of money spent on an SSL certificate from even the most trustworthy CA will protect against a rogue certificate created (or successfully forged) by a different CA that the web browser also trusts. In other words, SSL is nowhere near as trustworthy as you think.

MD5 collision used to create rogue certificate authority

This really is quite nice. Researchers used collisions in the MD5 hash algorithm to create a rogue CA (Certification Authority) certificate signed by RapidSSL. RapidSSL is apparently trusted by the majority of web browsers.

It was demonstrated years ago that MD5 collisions could be generated relatively easily, but that hasn't stopped MD5 being used in a number of contexts where a cryptographically secure hash function is required. The attack involves getting a CA that uses MD5 in its signatures to sign a legitimate website certificate. The attacker crafts the certificate request so that a rogue CA certificate of their own construction has the same MD5 hash as the legitimate certificate the CA signs. Because the signature covers only the hash, the CA's signature on the legitimate certificate is equally valid for the rogue one. The attacker can now issue SSL certificates for arbitrary websites, signed by their rogue CA certificate, and browsers that trust the original CA will accept the fraudulent sites' identities.
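The signature-transfer step can be sketched with a toy model. This is not real X.509 verification: a real CA signs the digest with RSA, but since the digest is the only thing binding the signature to the certificate's contents, the bare MD5 digest stands in for the signature here, and the certificate "bodies" are made-up strings:

```python
import hashlib

def ca_sign(cert_body: bytes) -> bytes:
    # Toy "signature": just the MD5 digest of the certificate body.
    # (A real CA RSA-signs this digest; the digest is all that binds
    # the signature to the content.)
    return hashlib.md5(cert_body).digest()

def browser_verify(cert_body: bytes, signature: bytes) -> bool:
    # The browser recomputes the digest and checks it against the
    # digest the CA signed.
    return hashlib.md5(cert_body).digest() == signature

legit = b"CN=www.example.com, CA:FALSE"  # what the CA agreed to sign
sig = ca_sign(legit)
print(browser_verify(legit, sig))        # True

rogue = b"CN=attacker, CA:TRUE"          # hypothetical rogue CA cert
print(browser_verify(rogue, sig))        # False -- these toy strings
# don't actually collide. The real attack crafts `rogue` so that
# md5(rogue) == md5(legit), at which point the CA's signature on
# `legit` verifies `rogue` too, with no forging of the signature.
```

The researchers' contribution was pulling off exactly that crafting step (a chosen-prefix collision) against a live CA's signing process, predicting the serial number and timestamp fields the CA would insert.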

What I really like about this work is that the authors have managed to bridge the gap between a result primarily of interest to security researchers and an issue which could affect the average web user. Quite often, when cryptographic research results get publicity, the implications are so obscure to anyone without a knowledge of cryptography that the reporting soon becomes badly distorted. Hopefully, this example of how MD5's unsuitability as a cryptographic hash can lead to such an easily comprehensible real-world vulnerability might make people take notice of the danger in using broken hashing and encryption algorithms. Well, one can hope.

As Bruce Schneier points out, the plethora of valid sites with broken certificates has trained users to ignore SSL warnings, so the ability to spoof SSL certificates doesn't really add that much. Having said that, Firefox 3 goes out of its way to make it difficult to access websites with invalid SSL certificates. Still, given the multitude of far simpler methods criminals have for acquiring sensitive information, it's unlikely this attack will ever be seen in the wild.