Links

Ben Laurie blathering

16 Nov 2010

Apache vs. Oracle

Filed under: Open Source,Open Standards,Rants — Ben @ 11:49

As a founder of the Apache Software Foundation I have long been frustrated by the ASF’s reluctance to confront Sun and now Oracle head-on over their continued refusal to allow Apache Harmony to proceed as an open source Java implementation.

So I am very pleased to see the board finally draw a line in the sand, basically saying “honour the agreements or we block the next version of Java”. Oracle’s response is just ridiculous, including this blatant claim:

Oracle provides TCK licenses under fair, reasonable, and non-discriminatory terms consistent with its obligations under the JSPA.

Why even bother to say that in response to the ASF’s charges? Do they think the ASF are suddenly going to say “oops, we see the light now, how wrong we were to expect a TCK licence under fair, reasonable and non-discriminatory terms consistent with your obligations under the JSPA, we see it all clearly now and humbly withdraw our unwarranted request”?

Well, whatever Oracle expected, the ASF’s response was short and sweet

Oracle statement regarding Java: “Now is the time for positive action (and) to move Java forward.”

The ball is in your court. Honor the agreement.

How this will play out is very uncertain, at least to me, but one thing is sure: delay and vacillation are over, at least from the ASF. Expect plenty of delay from Oracle, though.

23 May 2010

Nigori: Protocol Details

As promised, here are the details of the Nigori protocol (text version). I intend to publish libraries in (at least) C and Python. At some point, I’ll do a Stupid version, too.

Comments welcome, of course, and I should note that some details are likely to change as we get experience with implementation.

13 Jan 2010

Is SSL Enough?

Filed under: Crypto,Open Standards,Security — Ben @ 21:16

In response to my post on OAuth WRAP, John Panzer asks

[A]re you arguing that we shouldn’t rely on SSL? OAuth WRAP (and for that matter, OAuth 1.0 PLAINTEXT) rely on SSL to mitigate the attacks mentioned. Ben Adida’s argument is that SSL libraries won’t save you because people can misconfigure and misuse the libraries. But OAuth libraries will save you; apparently they can’t be misconfigured. There seems to be a small contradiction here. Especially since OAuth is much less mature than SSL.

I am not saying we shouldn’t rely on SSL, and I am not arguing that SSL libraries won’t save you (though it’s pretty clear that they are often misused; in particular, failure to check that the certificate presented corresponds to the server you were trying to connect to seems to be a fantastically common error, which means SSL is often used in a mode that gives no protection against a man-in-the-middle). What I am saying is that when you design a security protocol, you should design something that addresses the appropriate threat model. Now, I am not aware of a published threat model for OAuth WRAP, so I instead apply the one I have in my head for authentication protocols, since that’s what OAuth WRAP is. In my off-the-top-of-my-head model of authentication protocols, there are various properties I want:

  • No replays: if someone gets hold of a request, they should not be able to replay it.
  • Not malleable: if someone sees one request, they should not be able to create another correct one.
  • No credential equivalent: the server should not be able to create a request that looks like it came from the client.

And so forth. I will not create a complete set of requirements, because that’s a tough job, and it’s nearly time for supper. However, you can easily see that OAuth WRAP does not satisfy any of these requirements. Nor, incidentally, do username/password logins.
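To make the first two properties concrete, here is a minimal sketch in Python of a signed-request scheme. It is purely illustrative (it is not OAuth, OAuth WRAP, or any real library’s wire format); the nonce and timestamp address replay, and the HMAC over the whole request addresses malleability.

    # Minimal sketch: nonce + timestamp prevent replay, an HMAC over the whole
    # request prevents tampering. Illustrative only; not any real protocol.
    import hashlib
    import hmac
    import os
    import time

    SECRET = b"shared-secret-between-client-and-server"  # illustrative only
    SEEN_NONCES = set()  # in practice, a store that expires entries with the time window
    MAX_SKEW = 300       # seconds

    def canonical(method, url, body, nonce, timestamp):
        return b"\n".join([method.encode(), url.encode(), body,
                           nonce.encode(), timestamp.encode()])

    def sign_request(method, url, body):
        nonce = os.urandom(16).hex()
        timestamp = str(int(time.time()))
        mac = hmac.new(SECRET, canonical(method, url, body, nonce, timestamp),
                       hashlib.sha256).hexdigest()
        return {"nonce": nonce, "timestamp": timestamp, "mac": mac}

    def verify_request(method, url, body, hdr):
        if abs(time.time() - int(hdr["timestamp"])) > MAX_SKEW:
            return False  # too old (or too new): replay outside the window
        if hdr["nonce"] in SEEN_NONCES:
            return False  # replay inside the window
        expected = hmac.new(SECRET,
                            canonical(method, url, body,
                                      hdr["nonce"], hdr["timestamp"]),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, hdr["mac"]):
            return False  # tampered with, or signed with the wrong secret
        SEEN_NONCES.add(hdr["nonce"])
        return True

Note that even this still fails the third property: because the HMAC key is shared, the server can produce exactly the same MAC the client can. Fixing that means the client signing with a key only it holds.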

Now, you can argue that the use of SSL makes the requirements redundant, and I have some sympathy for that argument, but, as we have seen, SSL can have flaws of its own. In fact, the current flaw (the TLS renegotiation bug) is perfect for attacking OAuth WRAP: I could inject a request in front of your WRAP request that causes your credential to be sent to me, and now, bingo, I can do anything at all that you can do. A well-designed protocol would not suffer from this issue.

But even if we ignore the weakness in SSL, there are other requirements that are not met – in particular, the “no credential equivalent” requirement is not addressed at all by SSL. The server can easily fabricate a request and claim I made it. This is a terrible property for a protocol that is supposed to be used to protect my assets.

So, in short, I agree that you can use SSL to make a crappy protocol less crappy. But the right thing to do is to figure out what your requirements are (really, not fudge them so they fit your protocol, as I rather suspect will happen here) and then design a protocol that satisfies them. If that protocol happens to be “password over SSL” then great, you’re home and dry. But I do not see how any modern, well-designed authentication protocol could be that way.

8 Jan 2010

TLS Renegotiation Fix: Nearly There

Filed under: Crypto,General,Open Source,Open Standards,Security — Ben @ 13:19

Finally, after a lot of discussion, the IESG have approved the latest draft of the TLS renegotiation fix. It is possible it’ll still change before an RFC number is assigned, but it seems unlikely to me.

But that doesn’t mean there isn’t plenty of work left to do. Now everyone has to implement it (in fact, many have already done so, including tracking the various changes as the I-D was updated), interop test with each other and roll out to clients and servers.

And even then it isn’t over, since until clients are (mostly) universally updated, servers will have to allow old clients to connect and clients may have to be prepared to connect to old servers. In the case of a new server and an old client, it doesn’t hugely matter that the client has not been updated, because it is defended by the server, which should not allow a renegotiation to occur if the client is old. However, in the case of an old server and a new client, or an old server and an old client, there’s a problem: the client could be attacked. Obviously a new client can detect it is talking to an old server and decline to play, but for some transitional period it seems likely that clients will have to tolerate this, perhaps warning their user.

We could summarise the situation like this:

  • Old server, old client: vulnerable.
  • Old server, new client: vulnerable, but the client is aware and should decline to connect, or at least warn.
  • New server, old client: not vulnerable if the server forbids renegotiation; the client is unaware.
  • New server, new client: not vulnerable; everyone is aware.

5 Jan 2010

Security Is Hard: Live With It

Filed under: Open Source,Open Standards,Programming,Rants,Security — Ben @ 17:59

I’ve been meaning to summon the energy to write about OAuth WRAP. It’s hard to do, because like OpenID, OAuth WRAP is just so obviously a bad idea, it’s difficult to know where to start. So I was pleased to see that Ben Adida saved me the trouble.

I understand. Security is hard. Getting those timestamps and nonces right, making sure you’ve got the right HMAC algorithm… it’s non-trivial, and it slows down development. But those things are there for a reason. The timestamp and nonce prevent replay attacks. The signature prevents repurposing the request for something else entirely. That we would introduce a token-as-password web security protocol in 2010 is somewhat mind-boggling.

Exactly. The idea that security protocols should be so simple that anyone can implement them is attractive but, as we’ve seen, wrong. But does the difficulty of implementing them mean they can’t be used? Of course not: SSL is fantastically hard to implement, but it is also fantastically widely deployed. Why? Because there are several open source libraries that do everything for you. Likewise, every crypto algorithm under the sun is hard to implement, but there’s no shortage of libraries for them, either.

Clearly the way forward for OAuth is not to dumb it down to the point where any moron can implement it; the way forward is to write libraries that implement a properly secure version, and to have everyone use them.
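As a rough illustration of what “use a library” looks like from the application side, here is a sketch using the oauthlib Python library and standard OAuth 1.0 HMAC-SHA1 signing. The URL and credentials are made up, and this is offered purely as an example of the division of labour, not as an endorsement of any particular library.

    # Sketch: the application supplies its credentials and the request; the
    # library generates the timestamp, nonce and HMAC-SHA1 signature.
    # The URL and keys below are made up for illustration.
    from oauthlib.oauth1 import Client

    client = Client("consumer-key", client_secret="consumer-secret",
                    resource_owner_key="access-token",
                    resource_owner_secret="token-secret")

    uri, headers, body = client.sign(
        "https://api.example.com/contacts?page=1", http_method="GET")

    # `headers` now carries the Authorization header with oauth_timestamp,
    # oauth_nonce and oauth_signature filled in; the application never touches
    # the crypto directly.
    print(headers["Authorization"])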

If the amount of effort that has been wasted on OAuth WRAP (and OpenID) had instead been put into writing code for the various platforms, then we would probably now have pretty much universal support for OAuth and no-one would be whining that it’s too hard to implement.

Instead, we will spend the next decade or two clearing up the mess that we seem to be intent on creating. It makes me tired.

1 Sep 2009

Kim Cameron Explains Why Hoarding Is Not Hoarding

Filed under: Crypto,Open Source,Open Standards,Privacy — Ben @ 14:13

I’ve been meaning for some time to point out that it’s been well over a year since Microsoft bought Credentica and still no sign of any chance for anyone to use it. Kim Cameron has just provided me with an appropriate opportunity.

Apparently the lack of action is because Microsoft need to get a head start on implementation. Because if they haven’t got it all implemented, they can’t figure out the appropriate weaseling on the licence to make sure they keep a hold of it while appearing to be open.

if you don’t know what your standards and implementations might look like, you can’t define the intellectual property requirements.

Surely the requirements are pretty simple, if your goal is to not hoard? You just let everyone use it however they want. But clearly this is not what Microsoft have in mind. They want it “freely” used on their terms. Not yours.

30 May 2009

Wave Trust Patterns

Filed under: Crypto,Open Source,Open Standards,Privacy,Security — Ben @ 6:04

Ben Adida says nice things about Google Wave. But I have to differ with

… follows the same trust patterns as email …

Wave most definitely does not follow the same trust patterns as email; that is something we have explicitly tried to improve upon. In particular, the crypto we use in the federation protocol ensures that the origin of all content is known and that the relaying server did not cheat by omitting or re-ordering messages.
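As a rough illustration of the kind of guarantee I mean, here is a sketch in Python of the general technique: content is signed by its originating server and hash-chained to its predecessor, so a relay that drops or reorders anything breaks the chain. This is only an illustration of the idea, not the Wave federation protocol’s actual wire format, and it uses the cryptography package’s Ed25519 keys purely for convenience.

    # Each item is hash-chained to its predecessor and the chained digest is
    # signed by the originating server; a relay that omits or re-orders items
    # cannot produce a consistent chain. Illustration only, not Wave's format.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric import ed25519

    GENESIS = b"\x00" * 32  # agreed starting point for the chain

    def append(private_key, prev_digest, content):
        digest = hashlib.sha256(prev_digest + content).digest()
        return content, digest, private_key.sign(digest)

    def verify_chain(public_key, items):
        prev = GENESIS
        for content, digest, signature in items:
            if hashlib.sha256(prev + content).digest() != digest:
                return False  # something was omitted, re-ordered or altered
            public_key.verify(signature, digest)  # raises if the origin is wrong
            prev = digest
        return True

    # The originating server signs a couple of items; anyone with its public
    # key can check both origin and ordering.
    server_key = ed25519.Ed25519PrivateKey.generate()
    items, prev = [], GENESIS
    for content in [b"first edit", b"second edit"]:
        item = append(server_key, prev, content)
        items.append(item)
        prev = item[1]

    assert verify_chain(server_key.public_key(), items)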

I should note, before anyone gets excited about privacy, that the protocol is a server-to-server protocol and so does not identify you any more than your email address does. You have to trust your server not to lie to you, though – and that is similar to email. I run my own mail server. Just saying.

I should also note that, as always, this is my personal blog, not Google’s.

20 May 2009

ECMAScript 5

Filed under: Open Standards,Programming,Security — Ben @ 4:35

When I started working on Caja I had not really plumbed the depths of Javascript (or, as it is more correctly called, ECMAScript 3) and I was very surprised to learn how powerful it actually is. I was also pretty startled by some of the nasty gotchas lurking for the unwary (or even wary) programmer (had I known, perhaps I would never have tried to get Caja off the ground!).

For some time now, the ECMAScript committee has been working on a new version of Javascript which fixes many of these problems without breaking all the existing Javascript that is out there. This seems to me a remarkable achievement; Mark Miller, Mike Samuel (both members of the Caja team) and Waldemar Horwat gave a very interesting talk about these gotchas and how the ES5 spec manages to wriggle around them. I recommend it highly. Slides are available for those who don’t want to sit through the presentation, though I would say it is worth the effort.

20 Dec 2008

IETF Shoots Itself in the Foot

Filed under: Open Standards — Ben @ 20:22

The IETF recently introduced new rules for all Internet Drafts and RFCs that require contributors to grant copyright and trademark licences. This is a perfectly sensible thing to do, of course, but it seems to have introduced an unforeseen problem: it may no longer be possible to produce revised versions of old RFCs, particularly if the original authors are dead, as is the case for SMTP, for example.

Oops! This is a great example of why you really need to get your intellectual property rights in order from the start, or you’ve got one hell of a mess to clear up.

20 Nov 2008

You Need Delegation, Too

Kim wants to save the world from itself. In short, he is talking about yet another incident where some service asks for your username and password for some other service, in order to glean information from your account to do something cool. Usually this turns out to be “harvest my contacts so I don’t have to find all my friends yet again on the social network of the month”, but in this case it was to calculate your “Twitterank”. Whatever that is. Kim tells us

The only safe solution for the broad spectrum of computer users is one in which they cannot give away their secrets. In other words: Information Cards (the advantage being they don’t necessarily require hardware) or Smart Cards. Can there be a better teacher than reality?

Well, no. There’s a safer way that’s just as useful: turn off your computer. Since what Kim proposes means that I simply can’t get my Twitterank at all (oh, the humanity!), why even bother with Infocards or any other kind of authentication I can’t give away? I may as well just watch TV instead.

Now, the emerging answer to this problem is OAuth, which protects your passwords, if you authenticate that way. Of course, OAuth is perfectly compatible with the notion of signing in at your service provider with an Infocard, just as it is with signing in with a password. But where is the advantage of Infocards? Once you have deployed OAuth, you have removed the need for users to reveal their passwords, so now the value add for Infocards seems quite small.

But if Infocards (or any other kind of signature-based credential) supported delegation, this would be much cooler. Then the user could sign a statement saying, in effect, “give the holder of key X access to my contacts” (or whatever it is they want to give access to) using the private key of the credential they use for logging in. Then they give Twitterank a copy of their certificate and a copy of the new signed delegation certificate. Twitterank presents the chained certificates and proves they have private key X. Twitter checks the signature on the chained delegation certificate and that the user certificate is the one corresponding to the account Twitterank wants access to, and then gives access to just the data specified in the delegation certificate.
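To sketch the shape of that flow: the code below is only an illustration of the chain of signatures. Real Information Card or X.509 machinery, scoping and expiry are all elided, and the cryptography package’s Ed25519 keys stand in for whatever credential format would actually be used.

    # Sketch of signature-based delegation: the user signs a statement that
    # delegates limited access to Twitterank's key ("key X"), Twitterank proves
    # possession of that key, and Twitter checks the whole chain.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    user_key = ed25519.Ed25519PrivateKey.generate()        # the login credential
    twitterank_key = ed25519.Ed25519PrivateKey.generate()  # "key X"

    # 1. The user signs a delegation statement naming key X and the permission.
    key_x_bytes = twitterank_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw)
    statement = b"holder of this key may read my contacts:" + key_x_bytes
    delegation_sig = user_key.sign(statement)

    # 2. Twitterank proves it holds key X by signing a challenge from Twitter.
    challenge = b"fresh-random-challenge-from-twitter"
    proof = twitterank_key.sign(challenge)

    # 3. Twitter verifies the chain: the delegation is signed by the account's
    #    key, and the presenter really holds the key named in the delegation.
    try:
        user_key.public_key().verify(delegation_sig, statement)
        ed25519.Ed25519PublicKey.from_public_bytes(key_x_bytes).verify(
            proof, challenge)
        print("grant access to exactly what the statement allows")
    except InvalidSignature:
        print("reject")

Sub-delegation then just means the holder of key X signing a further statement over someone else’s key, with the verifier walking the whole chain back to the account’s credential.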

The beauty of this is that it can be sub-delegated, a facility that is entirely missing from OAuth, and one that I confidently expect to be the next problem in this space (but apparently predicting such things is of little value: no-one listens until they hit the brick wall that the lack of the facility puts in their way).

27 Jul 2008

Why Not W3C or IETF?

Filed under: Open Standards — Ben @ 12:46

Ralf Bendrath asks what’s wrong with the W3C and the IETF that the OWF is trying to solve. So, to be very brief…

The W3C is a pay-to-play cartel that increasingly gets nothing done. Open source developers can’t even participate, as a rule. It also has an IPR policy that’s just as crap as everything else we’re trying not to emulate. So, not a realistic alternative.

The IETF is much better, but its main problem is that it has no IPR policy at all, other than “tell us what you know”. In practice this often works out OK, but there have been some notable instances where the outcome was pretty amazingly ungood, such as RSA’s stranglehold over SSL and TLS for years – a position Certicom are now trying to emulate with ECC, also via the IETF.

A more minor objection to the IETF that I hope the OWF will solve similarly to the ASF is that it is actually too inclusive. Anyone is allowed to join a working group and have as much say as anyone else. This means that any fool with time on their hands can completely derail the process for as long as they feel like. In my view, a functional specification working group should give more weight to those that are actually going to implement the specification and those who have a track record of actually being useful, much as the ASF pays more attention to contributors, committers and members, in that order.

24 Jul 2008

Open Web Foundation

Filed under: Open Source,Open Standards — Ben @ 18:41

I’m very pleased that we’ve launched the Open Web Foundation today. As Scott Kveton says

The OWF is an organization modeled after the Apache Software Foundation; we wanted to use a model that has been working and has stood the test of time.

When we started the ASF, we wanted to create the best possible place for open source developers to come and share their work. As time went by, it became apparent that the code wasn’t the only problem – standards were, too. The ASF board (and members, I’m sure) debated the subject several times whilst I was serving on it, and no doubt still does, but we always decided that we should focus on a problem we knew we could solve.

So, I’m extra-happy that finally a group of community-minded volunteers have come together to try to do the same thing for standards.
