Ben Laurie blathering

Vendors Are Bad For Security

I’ve ranted about this at length before, I’m sure – even in print, in O’Reilly’s Open Sources 2. But now Debian have proved me right (again) beyond my wildest expectations. Two years ago, they “fixed” a “problem” in OpenSSL reported by valgrind[1] by removing any possibility of adding any entropy to OpenSSL’s pool of randomness[2].

The result of this is that for the last two years (from Debian’s “Etch” release until now), anyone doing pretty much any crypto on Debian (and hence Ubuntu) has been using easily guessable keys. This includes SSH keys, SSL keys and OpenVPN keys.

What can we learn from this? Firstly, vendors should not be fixing problems (or, really, anything) in open source packages by patching them locally – they should contribute their patches upstream to the package maintainers. Had Debian done this in this case, we (the OpenSSL Team) would have fallen about laughing, and once we had got our breath back, told them what a terrible idea this was. But no, it seems that every vendor wants to “add value” by getting in between the user of the software and its author.

Secondly, if you are going to fix bugs, then you should install this maxim of mine firmly in your head: never fix a bug you don’t understand. I’m not sure I’ve ever put that in writing before, but anyone who’s worked with me will have heard me say it multiple times.

Incidentally, while I am talking about vendors who are bad for security, it saddens me to have to report that FreeBSD, my favourite open source operating system, are also guilty. Not only do they have local patches in their ports system that should clearly be sent upstream, but they also install packages without running the self-tests. This has bitten me twice by installing broken crypto, most recently in the py-openssl package.

[1] Valgrind is a wonderful tool, I recommend it highly.

[2] Valgrind tracks the use of uninitialised memory. Usually it is bad to have any kind of dependency on uninitialised memory, but OpenSSL happens to include a rare case where it’s OK, or even a good idea: its randomness pool. Adding uninitialised memory to it can do no harm and might do some good, which is why we do it. It does cause irritating errors from some kinds of debugging tools, though, including valgrind and Purify. For that reason, we have a flag (PURIFY) that removes the offending code. However, the Debian maintainers, instead of tracking down the source of the uninitialised memory, chose to remove any possibility of adding memory to the pool at all. Clearly they had not understood the bug before fixing it.

P.S. I’d link to the offending patch in Debian’s source repository. If I could find a source repository. But I can’t.


Thanks to Cat Okita, I have now found the repo. Here’s the offending patch. But I have to admit to being astonished again by the fix, which was committed five days before the advisory! Do these guys have no clue whatsoever?


  1. […] exactly what is at fault here is hard to say (much like the Debian vs. OpenSSL fiasco the other day), but the combination of all of this is unusability […]

    Pingback by Free Software Stuff » Blog Archive » Workaround za Firefox3b5 probleme u Linuxu — 16 May 2008 @ 7:18

  2. I’m not against downstream devs writing their own patches. It’s my personal preference that they refrain from doing so as MUCH as possible, but there exist cases where it is justifiable that the dev patch something personally.

    This situation, however, was not one of the acceptable situations. Consider how the events unfolded:

    1) Debian dev finds code he doesn’t understand. His code-checker complains, he thinks the code might not be necessary. It is not a bug, but simply code that, if indeed unnecessary, makes the program less “clean”.
    2) Debian dev devises a patch. But he does not know whether the patch is safe or not.
    3) Debian dev posts proposed patch to the openssl-dev mailing list, stating that he does not know whether or not the patch is safe to use.
    4) The OpenSSL devs fail to review the Debian dev’s proposed patch.
    5) The Debian dev goes ahead and implements the patch anyway, despite the fact that he does not know whether or not it is a secure patch, and despite the fact that the patch doesn’t, technically, actually even fix any problems.
    6) It turned out that the patch was not secure, and thus Debian and Debian-derived distributions (which compose a healthy percentage of total Linux users) were left vulnerable.

    The Debian dev had nothing (real) to gain and everything to lose. Bad move on his part. When making a mistake has consequences *that* dire, you don’t mess around making things look pretty. Especially not in crypto libraries. Period.

    Comment by B-Con — 16 May 2008 @ 8:47

  3. […] the developers’ response […]

    Pingback by סטארטר · את מי מעניין הבאג בOpenSSL בדביאן? — 16 May 2008 @ 9:15

  4. “Not only do they have local patches in their ports system that should clearly be sent upstream, but they also install packages without running the self-tests.”

    1) I’d like to point out that FreeBSD patches have been sent upstream, and some are still unresolved.
    e.g. Wed 01 Oct 2003,

    2) Also, the regression tests are available by typing “make test” in the ports dir.

    3) The package cluster builds packages and runs the self-tests:
    Please check e.g.:

    Comment by Dinoex — 16 May 2008 @ 11:30

  5. Initially, while reading the bug report, I thought that it was completely the Debian developers’ fault, but I am not convinced of that anymore. Certainly, the Debian developer should have submitted the patch upstream, but at least he discussed the proposed change with the OpenSSL Team (or, more precisely, with those members of the team who read the openssl-dev list), and they did not fall about laughing at it as Ben Laurie suggested they would.

    What is _really_ upsetting about the current situation is the following statement from Ben: “Openssl-dev is a list for people developing OpenSSL based software, not a list for discussing the development of OpenSSL itself. I don’t have the bandwidth to read it myself. If you want to communicate with the OpenSSL developers you need to use”

    If openssl-dev is the wrong place, then why is it announced on the OpenSSL site as the ML for “Discussions on development of the OpenSSL library. Not for application development questions!”? And in the README, in the “HOW TO CONTRIBUTE TO OpenSSL” section, we can read: “Development is coordinated on the openssl-dev mailing list (see for information on subscribing). If you would like to submit a patch, send it to with the string “[PATCH]” in the subject.” No official documentation mentions

    That makes me wonder if the OpenSSL team really wants to receive patches, or only wants to bash Debian developers. Had the Debian developer submitted the patch, wouldn’t it simply have been ignored, since many OpenSSL developers do not bother to read the ML that they announced as the place for submitting patches?

    Comment by Dmitry — 16 May 2008 @ 14:48

  6. It’s not a good security practice to add data from an uninitialized buffer to a random pool, for at least two reasons:

    1. It could contain information of a GRATER security level than the pseudo-random bytes being generated, leaking that information, e.g. a private key.

    2. It could contain previous pseudo-random bytes that could potentially CANCEL the mewly generated random bytes, if the mixing function is a simple XOR. Even if the mixing function is not a XOR, it could still degrade the security of the cryptographic primitives, because they were not designed to be mixed with a previous hash-function state.

    I think 50% of the responsibility lies on the original openssl developers.

    Comment by Sergio Demian Lerner — 16 May 2008 @ 16:47

  7. Sergio (#156), if you’re so smart why don’t you read the code in question? The data is not added to a random pool, it is added to a cryptographic hash function. You can add as much deterministic, predictable data as you like, it won’t make the result more predictable. The uninitialized data is only a small part of (hopefully random) data fed to the hash function to generate a seed, anyways.

    Comment by Chris — 16 May 2008 @ 17:39

  8. B-Con (#152), points 3 and 4 are incorrect. The maintainer of the Debian package never provided any patch to the OpenSSL developers. He didn’t submit a bug report either. All he did was ask about some debug issue involving valgrind. This was simply completely unprofessional.

    Comment by Chris — 16 May 2008 @ 17:46

  9. Sergio (#156) – Where I can I download this GRATER security level? Will it slice and dice those attempting to crack my systems? I am also interested in the kitten-powered RNG you mention in reason #2.

    Comment by brad — 16 May 2008 @ 19:34

  10. (in all seriousness though, I think several improvements could be made by both “sides,” and all the flaming is unfortunate – the hyperbolic criticisms of both have some merit, and it’d be nice to see both teams apologize and improve! i.e. patch vetting/transparency on the debian side, and code commenting/dev team contact info on the openssl side)

    Comment by brad — 16 May 2008 @ 19:39

  11. I disagree with Chris. I’m not smarter than anyone here. This is what cryptographers (which I’m not) recommend: just do what the crypto primitives were designed to do. Don’t try to do clever tricks.

    You cannot feed just ANYTHING to a crypto primitive: it doesn’t matter if the data is deterministic or not. There may be special cases where feeding a crypto function with data related to the internal state of a primitive may actually DECREASE its security.
    To make my point clear: (theoretically) the output entropy could DECREASE if you manage to cancel the existing entropy bits. Internal states are meant to be internal. Uninitialized data could contain bits of internal states from previous function calls.

    I’m not saying that in this particular case that’s what happens. But as far as I know, it’s not recommended to feed a crypto primitive with uninitialized data.


    Comment by Sergio Demian Lerner — 17 May 2008 @ 0:29

  12. I don’t agree with Brad.
    Let’s say you use a very important and secure 15360-bit private key for message signing, which happened to be left in uninitialized memory (a realistic assumption).
    Afterwards you use that memory to feed an old and vulnerable PRNG like MD2 (a fact). Some time later a clever mathematician discovers a severe weakness in the PRNG, or some guy removes a critical line of code, and suddenly someone manages to recover the seed of a pseudo-random stream. Then the private key will also be compromised.

    To minimize the damage in case of a bug, you should not unnecessarily expose sensitive information. Uninitialized memory could contain sensitive information.

    It’s about damage control.

    Comment by Sergio Demian Lerner — 17 May 2008 @ 1:09

  13. […] Vendors Are Bad For Security […]

    Pingback by Linux - Graves problemas en el algoritmo que genera los números aleatorios en debian. ( OpenSSL) | — 17 May 2008 @ 20:17

  14. @Chris: Can you quote the exact parts of the C99 standard that show that this behavior is not undefined? I can’t find anything about that in section of the C99 standard. However, the standard definitely says that “If an object that has automatic storage duration is not initialized explicitly, its value is indeterminate.” (6.7.8,#10 in N869, the latest public draft of the C99 standard, I don’t have the final version here right now, but would be surprised if it said anything else). Use of an object with indeterminate value results in undefined behavior, 3.18.

    Even with trap representations, from the section that you quoted, undefined behavior results:
    “Thus, an automatic variable can be initialized to a trap |
    representation without causing undefined behavior, but
    the value of the variable cannot be used until a proper
    value is stored in it.”

    Comment by hb — 19 May 2008 @ 9:34

  15. […] Ben Laurie explains more about the problems: I’ve ranted about this at length before, I’m sure. But now Debian have proved me right (again) beyond my wildest expectations. Two years ago, they “fixed” a “problem” in OpenSSL. The result of this is that for the last two years anyone doing pretty much any crypto on Debian (and hence Ubuntu) has been using easily guessable keys. This includes SSH keys, SSL keys and OpenVPN keys. […]

    Pingback by - blog » Open Source Software - a Chink in the Armour — 19 May 2008 @ 17:09

  16. lol normally i love a good rant, but only when it’s justified. laurie, remove your head from your buttox – the proposed change was posted to openssl-dev for comment. [ref: @SWP above]

    if anyone is going to use archaic code and rely on admittedly atypical use, quoting “usually it is bad to have any kind of dependency on uninitialised [sic] memory”, then the author must document it. plain and simple. ref: [@Steve Friedl above]

    as referenced by the openssl-dev link, even the supposed authority didn’t have a clue. if they don’t know how the code works, how is anyone else supposed to?

    this is software dev 101: document your shit.

    Comment by just me — 19 May 2008 @ 19:05

  17. […] not the first commenter to say this. I just feel really strongly about writing maintainable […]

    Pingback by This Blog Needs No Name | May | 2008 — 19 May 2008 @ 22:06

  18. hb (#164), I replied here: (In a nutshell, it seems I was wrong, and this particular use of uninitialized memory does indeed cause undefined behaviour.)

    To those who say “you must document all non-obvious code”, could you maybe show us some non-trivial real-life code that you consider properly documented? I actually think that documentation can be bad. Namely, I think it should be limited to documenting interfaces (of functions and modules) – most of the time, as a rule of thumb. Comments can quickly get out of sync, and often they are worded badly, so that they are misleading or harder to understand than the code. It can also happen that people read the comments (which are wrong) rather than the code, and therefore make incorrect assumptions. If some code is hard to understand, I think it’s better to force a reader to read the whole context than to add some comments which are unmaintained and never checked for correctness by either humans or a compiler.

    Also specialized code very often requires a lot of expert knowledge on a topic which you simply cannot explain in a couple of comments. Even if this doesn’t apply to this particular example of using uninitialized memory, I find it horribly naive to assume that you only have to add comments and everything will be fine and easy to understand.

    Comment by Chris — 20 May 2008 @ 13:08

  19. […] happened; for those who are interested and are of a technical bent, some good articles are here and here (and here, […]

    Pingback by No Comment « James Viscosi’s Scribblings — 22 May 2008 @ 4:13

  20. I want to know that there aren’t other morons like this with the ability to commit to crypto packages in my distro.

    This is really, really sad. This guy should never touch software again.

    Comment by circuit_breaker — 22 May 2008 @ 6:31

  21. […] the OpenSSL programmers themselves, but of the Debian team, known for their security expertise. OpenSSL developer Ben Laurie raged, "Never fix a bug you don’t understand! Had Debian [sent the bug to us] in this case, we (the […]

    Pingback by Debian team opens linux to hackers - HardwareLogic — 23 May 2008 @ 20:53

  22. Install Windows, problem solved!

    Comment by Jerry — 24 May 2008 @ 7:19

  23. […] OpenSSL developer Ben Laurie raged, “Never fix a bug you don’t understand!  Had Debian [sent the bug to us] in this case, we (the OpenSSL Team) would have fallen about laughing, and once we had got our breath back, told them what a terrible idea this was. But no, it seems that every vendor wants to ‘add value’ by getting in between the user of the software and its author.” […]

    Pingback by Huge Hole in Open Source Software Found, Leaves Millions Vulnerable — 25 May 2008 @ 4:41

  24. “Guilt” might be on both sides, but I definitely only see unfounded arrogance – and lies, sorry – from the side represented by this blog’s author.

    LOOK at what was said on the mailing list:

    “But I have no idea what effect this
    really has on the RNG. The only effect I see is that the pool
    might receive less entropy. But on the other hand, I’m not even
    sure how much entropy some unitialised data has.
    What do you people think about removing those 2 lines of code?”

    So, he pretty clearly ASKED if there could be any horrible side effects the change might provoke. That’s precisely what he was asking.

    In subsequent posting, he made it CRYSTAL CLEAR that he intended to remove BOTH lines, because when Marco Roeland said:

    “So yes I think not using the uninitialized memory (it’s only a single
    line, the other occurrence is already commented out) helps valgrind.”

    He re-stated (just as he had on the very first posting, very clearly even there) that both lines came into the equation:

    “Afaik, both are actually being used, the one without the #ifndef
    PURIFY just doesn’t seem to be used that much.”

    So… an unfortunate misunderstanding – maybe. But you, on this blog, are coming across as the one to blame for that arrogance that helps cause such misunderstandings!
    As has already been said countless times now: 1) that mailing list was definitely the right place to publish the stuff, 2) it’s the vendors’ job to look at problems and fix them, 3) they did notify upstream and ask for their opinion on the ML, 4) team members did reply, 5) the question was stated pretty clearly, 6) it was not a FAQ at the time as testifies.

    And I’ll add that it’s, clearly, utterly preposterous to say that the respondents to the ML inquiry assumed that the lines would only be removed for debugging and then restored. Ridiculous. If someone says “But I have no idea what effect this really has on the RNG”, it obviously means they CARE about any adverse effects on the RNG! Why would you care, if you only wanted to debug?

    Right: he didn’t mark his posting as a patch, and he didn’t state he was a Debian maintainer.
    So, thinking he wasn’t a Debian maintainer, but rather “it’s only this poor fool, at worst he’ll break the security of his own machine” (which has been a line of defense from some OpenSSL people on this blog, it seems to me) is, uhm, a good defense?! I’ll let that speak for itself.

    Comment by LjL — 25 May 2008 @ 15:26

  25. […] are a lot of sites around the web that inform us about the “OpenSSL debacle” in the Debian based Linux systems. A piece of code that was committed “accidentally” […]

    Pingback by tOMPSON’s blog » Blog Archive » OpenSSL Debacle — 29 May 2008 @ 8:25

  26. […] #14: Commenting – sometimes it’s crucial Recently, some controversy (see, for example, here) erupted around a mistake made in the OpenSSL library used by the Debian project. The mistake was […]

    Pingback by Jotting #14: Commenting - sometimes it’s crucial « Jottings on Software — 1 Jun 2008 @ 23:31

  27. […] flagged by static analysis software. (For other takes on the problem: my colleague Ben Laurie has taken the Debian maintainers to task and added some clarifications about the response, XKCD has neatly summed up the issue with a comic […]

    Pingback by Debian/OpenSSL vulnerability: subtle and fatal (1/2) « Random Oracle — 7 Jun 2008 @ 20:39

  28. […] the 7th of May, 2008, Debian fixed the now famous OpenSSL Weak PRNG bug. So, I’m pretty stunned to read, over 9 months later, Verisign’s newsletter saying […]

    Pingback by Links » Verisign Demonstrate Their Lack of Integrity — 14 Jan 2009 @ 14:29
