Ben Laurie blathering

Bad Science at Microsoft

It’s just been drawn to my attention that I have been quoted on Microsoft’s Red Hat bashing page:

“Although it’s still often used as an argument, it seems quite clear to me that the “many eyes” argument, when applied to security, is not true.”
— Ben Laurie, Director of Security, Apache Foundation

You’d think Microsoft could afford to do just a teensy bit of checking: it’s the Apache Software Foundation, and I was never Director of Security (it has never had one) – I was the chair of the security team for several years, though.

That aside, I’m sure I’m going to get asked a lot what I meant by this. I believe I said this in the wake of my security review of OpenSSL, which found several serious security flaws. One of these gave rise to the Slapper worm, which spread like wildfire. The whole experience led me to make some interesting conclusions.

The first was that people don’t actually read code very much. In practice, in my view, code only gets read when someone wants to change it, either to fix a bug or add new functionality. Since security bugs are rarely seen except when someone is trying to exploit them, most people have no real incentive to find and fix them. This was especially driven home to me because in order to do the security review I had to read almost all of OpenSSL’s code. It was amazingly time-consuming and even more amazingly boring. No-one is going to do this after the fact unless they are paid to do it.

None of the bugs I found were particularly new. They had all been around for anyone to discover for years. Thus, I conclude, no-one was looking for them (at least, no-one who was prepared to report them). The “many eyes” were simply not there.

The second thing I observed was that many people did not patch security holes. The Slapper worm came into existence almost a month after I announced the fixed version of OpenSSL, and yet surveys showed that over 50% of Apache servers were still vulnerable at that time. Of course, this is now old news; many an academic study has since measured these delays, but at the time it was only known empirically.

So, where’s the bad science? Firstly, focusing on the “many eyes” fallacy fails to capture an important difference between open and closed source: namely that if I want to do a security review of an open source product, I can. For Microsoft’s products I would have to (potentially illegally) reverse engineer them before I could even start.

Secondly, the fact that more bugs are found in an open source product than a closed source one is not, in itself, an indicator that more bugs exist – or even are known. It is equally plausible that the availability of the source encourages a more collaborative approach to security, so that those few who do search for bugs are more inclined to report them than to exploit them. It is also the case that, since open source products cannot conceal their security fixes, they are more inclined to make them public, even if they had no need to. I know, for example, that both OpenSSL and Apache always assign CVE numbers for security fixes. Do Microsoft assign CVE numbers for bugs that are not disclosed? Certainly they are often vague about what security updates actually fix, which would strongly suggest that the details are not public.

Thirdly, the study on which they rest their conclusion is comparing apples and oranges. From the report:

For each operating system, Secunia tracks all vulnerabilities that affect a full installation of all components and packages included in the current release.

A full release of Windows is far less functional than a full release of Red Hat. Windows will only include the base operating system, whereas RH will include pretty much every open source project you’ve ever heard of. So, simply counting vulnerabilities in a full install is highly biased. A fairer comparison would be to look at an install of RH with functionality equivalent to Windows. Presumably that doesn’t cast Windows in such a favourable light, or they would have done it.

Finally, their study shows that Windows actually had more bugs classified as “highly critical” than RH: 5 for Windows versus 2 for RHES 4 and 1 for RHES 3. I would say this makes the conclusion of even this biased study more than a little suspect.

Incidentally, looking at their graph of vulnerabilities over time, both RH systems appear to be showing signs of deceleration (that is, fewer bugs found per time period), whereas Windows is at best flat, or perhaps even accelerating.


  1. More eyes is definitely a flawed concept in terms of security.

    > Secondly, the fact that more bugs are found in an open source product than a closed source one is not, in itself, an indicator that more bugs exist – or even are known.

    Open source does expose more bugs than closed source products, because of the security-by-obscurity effect of closed source projects. While hidden bugs aren’t good for any project, obscurity ensures that fewer people are affected cumulatively over time, as the bugs trickle out slowly. Whereas for an open source project it becomes relatively easy for crackers to explore the source code looking for defects.

    I am not so sure open-sourciness actually benefits security-conscious applications.

    Comment by Angsuman Chakraborty — 30 Aug 2007 @ 15:24

  2. I don’t believe that the “more eyes” concept is flawed; as a theory it is probably the best concept for this subject. It is of course in practice that it all falls apart.

    The concept relies on the “good nature” of whoever finds a bug or exploit to report it and it also relies on the alertness of the thousands of users to keep their software up to date.

    Neither of these two dependencies can ever hope to be met, but that does not change the fact that the theoretical concept is pristine.

    Comment by Douglas Willcocks — 29 Sep 2007 @ 14:13
