Links

Ben Laurie blathering

26 Aug 2007

Mapping Crypto to Capabilities

Filed under: Crypto,Distributed stuff,Security — Ben @ 7:00

I’ve been thinking.

Let me preface this by suggesting: if there were a globally trusted capability-secure computing fabric, we would have no need of (some kinds of) crypto.

Why? What do we do with crypto? We sign things, and we encrypt things. How do we do this with our GTCSCF? Easy. To sign something, I hand that something to the relying party. He then exercises his capability to me that checks the authenticity of somethings, and I respond that it is authentic.

Encryption is even easier – I just send the something down a capability leading to the intended recipient.
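
To make the mapping concrete, here is a toy sketch in Python. The fabric, the class names and the methods are all invented for illustration – the point is simply that once capabilities are unforgeable references, “signing” and “encrypting” need no cryptography at all:

```python
# A toy GTCSCF: capabilities are plain object references, which Python
# makes unforgeable within a single process. All names are invented.

class Author:
    """Holders of a capability to me can ask whether a something is mine."""
    def __init__(self):
        self._produced = set()

    def produce(self, something):
        self._produced.add(something)
        return something

    def is_authentic(self, something):
        # "Signature verification": the relying party exercises this
        # capability and I simply answer. No signature is ever computed.
        return something in self._produced

class Recipient:
    """Holders of a capability to me can deliver a something to me alone."""
    def __init__(self):
        self.inbox = []

    def deliver(self, something):
        # "Encryption": the something travels down a capability leading
        # only to me, so no-one else ever sees it.
        self.inbox.append(something)

alice, bob = Author(), Recipient()
doc = alice.produce("the something")
assert alice.is_authentic(doc)  # signing, capability-style
bob.deliver(doc)                # encryption, capability-style
```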

So, I claim, there is effectively a mapping between crypto (at least for signing and encrypting) and a great capability machine in the sky (i.e. the GTCSCF).

Considering this idea further, it seems to me that this is essentially the core idea behind Universal Composability. If I can show that my crypto system does indeed map to a GTCSCF, then I have a crypto system that can clearly be composed with other crypto systems, with only the consequences we would expect from a capability-secure system that implemented the same functionality.

What would it mean to make such a proof? My, perhaps amateur, understanding is that you would have to show that the corresponding capabilities have the properties we expect of them: that they are opaque, unforgeable, and only obtainable by being handed them in some way.
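
For concreteness, here is one shape the crypto side of such a mapping might take – a capability realised as an unguessable token, with HMAC standing in for whichever hard problem the real proof would lean on. This is a sketch under invented names, not a proposal:

```python
# Sketch: a capability as an unguessable, MAC-protected token.
# Unforgeability rests on HMAC being a secure MAC (standing in for
# discrete-log-style assumptions); the names and format are invented.
import hashlib
import hmac
import os
import secrets

SERVER_KEY = os.urandom(32)  # known only to the issuing service

def mint_capability(resource: str) -> str:
    nonce = secrets.token_hex(16)
    tag = hmac.new(SERVER_KEY, f"{resource}|{nonce}".encode(),
                   hashlib.sha256).hexdigest()
    # Only obtainable by being handed it: guessing a valid token is as
    # hard as forging the MAC. (A fully opaque version would encrypt
    # the resource name as well.)
    return f"{resource}|{nonce}|{tag}"

def exercise(token: str) -> str:
    resource, nonce, tag = token.split("|")
    expected = hmac.new(SERVER_KEY, f"{resource}|{nonce}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise PermissionError("forged capability")
    return resource

cap = mint_capability("photo-42")
assert exercise(cap) == "photo-42"
```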

This sounds doable to me, modulo assumptions about the hardness of the discrete log problem, and the like.

25 Aug 2007

Bad Science at Microsoft

Filed under: Open Source,Security — Ben @ 22:03

It’s just been drawn to my attention that I have been quoted on Microsoft’s Red Hat bashing page:

“Although it’s still often used as an argument, it seems quite clear to me that the “many eyes” argument, when applied to security, is not true.”
— Ben Laurie, Director of Security, Apache Foundation

You’d think Microsoft could afford to do just a teensy bit of checking: it’s the Apache Software Foundation, and I was never Director of Security (it has never had one) – I was the chair of the security team for several years, though.

That aside, I’m sure I’m going to get asked a lot what I meant by this. I believe I said this in the wake of my security review of OpenSSL, which found several serious security flaws. One of these gave rise to the Slapper worm, which spread like wildfire. The whole experience led me to make some interesting conclusions.

The first was that people don’t actually read code very much. In practice, in my view, code only gets read when someone wants to change it, either to fix a bug or add new functionality. Since security bugs are rarely seen except when someone is trying to exploit them, most people have no real incentive to find and fix them. This was especially driven home to me because in order to do the security review I had to read almost all of OpenSSL’s code. It was amazingly time-consuming and even more amazingly boring. No-one is going to do this after the fact unless they are paid to do it.

None of the bugs I found were particularly new. They had all been around for anyone to discover for years. Thus, I conclude, no-one was looking for them (at least, no-one who was prepared to report them). The “many eyes” were simply not there.

The second thing I observed was that many people did not patch security holes. The Slapper worm came into existence almost a month after I announced the fixed version of OpenSSL, and yet surveys showed that over 50% of Apache servers were still vulnerable at that time. Of course, this is now old news – many an academic study has since measured these delays – but at the time it was known only empirically.

So, where’s the bad science? Firstly, focusing on the “many eyes” fallacy fails to capture an important difference between open and closed source: namely that if I want to do a security review of an open source product, I can. For Microsoft’s products I would have to (potentially illegally) reverse engineer them before I could even start.

Secondly, the fact that more bugs are found in an open source product than a closed source one is not, in itself, an indicator that more bugs exist – or even are known. It is equally plausible that the availability of the source encourages a more collaborative approach to security, so that those few who do search for bugs are more inclined to report them than to exploit them. It is also the case that, since open source products cannot conceal their security fixes, they are more inclined to make them public, even if they had no need to. I know, for example, that both OpenSSL and Apache always assign CVE numbers for security fixes. Do Microsoft assign CVE numbers for bugs that are not disclosed? Certainly they are often vague about what security updates actually fix, which would strongly suggest that the details are not public.

Thirdly, the study on which they rest their conclusion is comparing apples and oranges. From the report:

For each operating system, Secunia tracks all vulnerabilities that affect a full installation of all components and packages included in the current release.

A full release of Windows is far less functional than a full release of Red Hat. Windows will only include the base operating system, whereas RH will include pretty much every open source project you’ve ever heard of. So, simply counting vulnerabilities in a full install is highly biased. A fairer comparison would be to look at an install of RH with equivalent functionality. Presumably that doesn’t cast Windows in such a favourable light, or they would have done it.

Finally, their study shows that Windows actually had more bugs classified as “highly critical” than RH: 5 for Windows versus 2 for RHES 4 and 1 for RHES 3. I would say this makes the conclusion of even this biased study more than a little suspect.

Incidentally, looking at their graph of vulnerabilities over time, both RH systems appear to be showing signs of deceleration (that is, fewer bugs found per time period), whereas Windows is at best flat, or perhaps even accelerating.

19 Aug 2007

Brad Fitzpatrick on the Social Graph

Filed under: Anonymity/Privacy,Identity Management,Security — Ben @ 0:22

Brad Fitzpatrick writes about a problem that is essentially the same as my motivating example. His proposal avoids what I consider the interesting problems by dealing only with public data, though I would dispute the suggestion that by doing so he solves 90% of the problem.

I also worry about whose perception of public is the correct one. If I have, say, a Facebook and a Flickr account, and a friend who knows what they both are, will I be happy if that friend broadcasts the fact that they’re both me? Possibly not.

In any case, interesting reading.

18 Aug 2007

UninformIT

Filed under: Anonymity/Privacy,Identity Management — Ben @ 5:56

Dick Hardt draws my attention to an article about the dangers of user-centric identity in something called informIT. As Dick says, the article tells us that, duh, if we screw up our websites then we screw up our users, too.

But it seems to me that there’s an even more fundamental issue. If, as the author correctly (if somewhat ungrammatically) claims, “the average users usually reuse the same username/password pairs for most of their accounts”, why, exactly, is it worse if the user types this same username and password into the same place every time (and probably far less often) than if the user is obliged to type it whenever he sees a login page?

It seems to me that the user stands a far better chance of being sure that he is typing his password in the correct place if there is only one correct place instead of several hundred.

14 Aug 2007

A Motivating Example

My friend, Carrie Gates (of CA Labs), posed me the following problem.

Let us imagine two services. The first we’ll call Facebook. Facebook is yet another of those obnoxious social networking services. The second we’ll call Flickr. Flickr lets me upload pictures and also acts as yet another, perhaps slightly less obnoxious, social network.

Flickr, being a kind, generous and forward-thinking sort of service, is happy to allow other services to build on top of it. It will let them link accounts for their users to Flickr accounts and show their users Flickr photos from those accounts. Flickr also allows me to choose who can see my photos. I can let just anyone see them, I can restrict access to my friends or I can make my pictures entirely private, so that only I can see them.

Facebook doesn’t let me upload pictures. But they’re smart – they’ve offloaded that bit of tedium to Flickr. You can tell Facebook what your Flickr account is, and then Facebook will display your Flickr pictures as if they were Facebook’s very own. Whether this is cheap, cunning or just good for the user I leave open to debate, but this is how these services work.

The interesting question arises when a friend wants to see my Flickr pictures on my Facebook pages (again, whether this is a good or bad idea I leave aside, but let’s just agree that people want to do this).

Now we have an interesting quandary. In fact, two interesting quandaries. Or maybe even three. The first arises if my friend is a Flickr friend. That is, I have told Flickr that his Flickr account is allowed to see my “friends only” pictures. The second if my friend is a Facebook friend. That is, I have told Facebook that his Facebook account is allowed to see my “friends only” pictures. The third arises when I trust Flickr more than Facebook, but this one I will have to explain later.

In the first case, Facebook is not itself aware that my friend is allowed to see these pictures. OK, you say, that’s pretty easy – Flickr knows, so all Facebook has to do is tell Flickr which Flickr account is trying to view my pictures, and hey presto! my friend can see my “friends only” pictures. But what if my friend has not told Facebook what his Flickr account is? And why, indeed, should he? Then, of course, he can’t see my pictures (or perhaps he can, see the third quandary).

In the second case, Facebook knows he is my friend, but how does it tell this to Flickr? Flickr doesn’t expose APIs for saying who is a friend – Flickr takes the view that this would probably be insecure and certainly be quite confusing. Of course, Facebook has access to my Flickr account (obviously it is to my benefit to be able to manage my Flickr photos without leaving Facebook), so it could take matters into its own hands and show him my pictures anyway. Unfortunately, this would also give access to my completely private pictures, which I think I would take a dim view of.
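
To see why the second quandary bites, consider what Facebook actually holds. The API below is invented for illustration (it is not Flickr’s real API); the point is that the credential is all-or-nothing:

```python
# Sketch of quandary two: the only credential Facebook holds for my
# Flickr account is all-or-nothing. This API is invented; it is not
# Flickr's real one.

class FlickrAccount:
    def __init__(self):
        self.photos = {"public": [], "friends": [], "private": []}

def facebook_shows_photos(account: FlickrAccount, viewer_is_friend: bool):
    # Facebook knows whether the viewer is my Facebook friend, but the
    # credential it holds doesn't express "friends-only": nothing in
    # the API stops the next line reading account.photos["private"]
    # instead.
    return account.photos["friends" if viewer_is_friend else "public"]
```

Facebook behaving itself is the only protection here.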

And this leads to the third quandary. If I trust Flickr more than I trust Facebook, then by even indulging in this whole game I have reduced my security, as illustrated above.

OK, so now that I have set the scene, and, I hope, filled you with fear for the poor victims (err, I mean, “users”) of these services, the question arises: is there a way to do this properly? Can we achieve everything we desire and still leave everyone secure and with privacy intact?

One answer is to demand that every Facebook user give their Flickr account to Facebook. Good luck with that. Clearly this sucks for all sorts of reasons, not least of which is that it totally fails to scale to the case of hundreds of Flickrs and Facebooks. It is also a disaster waiting to happen from a security and privacy point of view.

Obviously there must be better answers. I have some thoughts on this, but before I write them up I’m interested to hear what the blogosphere can come up with.

Feynman once said that if you could understand the two-slit experiment, then you would understand the whole of quantum mechanics. This example is probably not quite as fundamental, but it seems to me to be, in some way, the two-slit experiment of identity.

BTW, all services in this blog post are fictional and any resemblance between them and real services is entirely coincidental.

2 Aug 2007

Side-Channel Attacks and Security Theatre

Filed under: Crypto,Rants,Security — Ben @ 10:00

OpenSSL fixed yet another side-channel attack recently. For those of you not in the know, a side-channel attack is one where process A figures out some aspect of what process B is doing by observing some change in the behaviour of process A. A trivial example of this would be to guess whether process B is running or idle by checking what percentage of the CPU process A is getting.
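
That trivial example is easy to sketch. Everything below – the workload, the threshold – is illustrative only:

```python
# Toy version of the trivial side channel: process A times a fixed
# chunk of its own work; if it runs slowly, something else (process B?)
# is eating the CPU. Workload and threshold are arbitrary.
import time

def busy_work(n=2_000_000):
    total = 0
    for i in range(n):
        total += i
    return total

def probe() -> float:
    start = time.perf_counter()
    busy_work()
    return time.perf_counter() - start

baseline = probe()  # calibrate while the machine is otherwise idle
if probe() > 1.5 * baseline:
    print("process B is probably running")
```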

More advanced versions exploit the various tricks CPUs use to make things go faster, such as caches and branch prediction. Somewhat surprisingly, these attacks can provide enough information to leak secrets such as RSA keys. This, of course, causes people who are trying to make a name for themselves to get quite excited – if they can claim to be able to steal secret keys, then that is news.
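
The root cause is easy to show in miniature: naive square-and-multiply does an extra multiply for every set bit in the exponent, so how long a private-key operation takes depends on the key. A toy demonstration follows – real attacks on real implementations are far more refined than this:

```python
# Why timing leaks key bits: square-and-multiply performs one extra
# multiply per 1 bit in the (secret) exponent, so running time depends
# on the key. Toy numbers, toy measurement.
import time

def modexp(base, exp, mod):
    result = 1
    while exp:
        if exp & 1:                        # extra work per set bit:
            result = result * base % mod   # this is the leak
        base = base * base % mod
        exp >>= 1
    return result

def timed(exp):
    start = time.perf_counter()
    for _ in range(100):
        modexp(0xC0FFEE, exp, (1 << 2048) - 159)
    return time.perf_counter() - start

sparse = 1 << 2047        # one set bit: few multiplies
dense = (1 << 2048) - 1   # 2048 set bits: many multiplies
print(timed(sparse), "<", timed(dense))   # dense runs measurably slower
```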

However, this all seems rather silly to me. In order to mount most of these attacks the attacker must be local – that is, they have to be able to run code on the same machine as the machine using the secret key. Now, every good security person knows that if your attacker has the ability to run stuff on your machine, it is game over, so why are we even caring about these attacks? This is security theatre of exactly the type that we geeks like to accuse the TSA of on a regular basis – isn’t it time we started making fun of ourselves, too?

Why don’t we? Presumably for exactly the reasons that governments like security theatre. It’s good for business. We make people feel loved and protected. We keep people like CERT in jobs. Security companies can issue updates to products. Staff can spend lots of lovely overtime hours doing QA for the emergency rollout of a security update. The economy benefits!

Isn’t it time we stopped fixing these attacks? It isn’t as if the fixes come for free – they almost always make the crypto slower. And, as I said above, until we have platforms that are actually robust in the face of hostile users that can run code on them, there is absolutely no point in avoiding these attacks.

By the way, OpenSSL is far from being the only crypto library that’s vulnerable to this attack, but the advisory will only be about OpenSSL. Why? Diminishing returns, that’s why – OpenSSL is the most widely used crypto library. Once you’ve broken that, the theatrical value of the others is minimal, so why bother? Because you care about security, you say? I rest my case.

1 Aug 2007

Old School Journalism

Filed under: General,Rants — Ben @ 13:21

I was planning to write about the Professional Association of Teachers (PAT) calling for YouTube to be closed down in order to combat bullying, but there seems little point, since in the same article Emma-Jane Cross of BeatBullying hit the nail on the head:

“Calls for social networking sites like YouTube to be closed because of cyberbullying are as intelligent as calls for schools to be closed because of bullying.”

You’ll notice that in the above, I do not link to PAT, nor do I link to YouTube, Emma-Jane Cross or BeatBullying. Normally I would, but as I was about to embark on a session of Googling, I thought “Why do I have to do this? If the BBC had got with the programme there would be links in their article that I could follow.”

Which leads me on to the thought that old media should stop whining about how they are the real journalists and we losers with blogs are just some pale imitation, and should instead start providing a service that is as good as the average blog, rather than a mere transposition of their print columns onto web pages.

The whole point about the web is it allows you to link to your sources, to tangents of interest and to full versions of documents mentioned. But the old media does none of this: they think the web is like paper. If they don’t want to go the way of the dinosaurs they need to drag themselves into the 20th century and start linking.
