Ben Laurie blathering

8 Jan 2010

TLS Renegotiation Fix: Nearly There

Filed under: Crypto,General,Open Source,Open Standards,Security — Ben @ 13:19

Finally, after a lot of discussion, the IESG have approved the latest draft of the TLS renegotiation fix. It is possible it’ll still change before an RFC number is assigned, but it seems unlikely to me.

But that doesn’t mean there isn’t plenty of work left to do. Now everyone has to implement it (in fact, many have already done so, including tracking the various changes as the I-D was updated), interop test with each other and roll out to clients and servers. And even then it isn’t over, since until clients are (mostly) universally updated, servers will have to allow old clients to connect and clients may have to be prepared to connect to old servers. In the case of a new server and an old client, it doesn’t hugely matter that the client has not been updated because it is defended by the server, which should not allow a renegotiation to occur if the client is old. However, in the case of an old server and a new client, or an old server and an old client, then there’s a problem – the client could be attacked. Obviously a new client can detect it is talking to an old server, and decline to play, but for some transitional period, it seems likely that clients will have to tolerate this, perhaps warning their user.

We could summarise the situation like this:

                 Old client                               New client
Old server       vulnerable                               vulnerable, but client is aware;
                                                          client should decline or at least warn
New server       not vulnerable if renegotiation          not vulnerable; everyone is aware
                 is forbidden, but client is unaware

10 Nov 2009

Turn-based Protocols Somewhat Safe

Filed under: Crypto,Security — Ben @ 22:08

Wietse Venema has a nice analysis showing how an attack on Postfix doesn’t work. The core point here is that in turn-based protocols the common implementation is such that the server (or client – let’s call it an agent) will, in effect, consume input from the OpenSSL layer character by character. This means that OpenSSL in turn will only consume a single SSL packet at a time, in order to provide input to the agent. The agent will then send output before reading further input. That output will be encrypted to the man-in-the-middle, because the OpenSSL layer has not yet processed the packets containing the renegotiation. The victim will be unable to read the output, so the attacker has no option but to consume it. This means the attacker can execute arbitrary commands before handing over to the victim, but he can only hijack the very first command the victim sends – by sending an incomplete command himself, followed by the renegotiation. This makes it hard to kid the victim into doing anything interesting before something goes awry with the protocol and one end or the other gives up.

This defence only works if the application layer does not greedily consume from the OpenSSL layer – if you read everything OpenSSL has got, then the renegotiate will occur before you start sending responses.

Wietse thinks this means that Postfix can’t be attacked in any interesting way. My experience of making such claims has been unfortunate, so I’ll reserve judgement. My advice is still to upgrade to 0.9.8l if you can (or whatever fix your vendor is offering for this issue). If you need 0.9.8m in a hurry because you need renegotiation and can update both ends, let me know.

However, I rather suspect (but definitely don’t know) this will save many turn-based protocols from severe attacks, though I’m prepared to bet there are interesting corner cases.

Incidentally, Adam Langley pointed out to me that if protocols had included a sentinel character – i.e. a character that could only ever appear at the start of a command – then some attacks against HTTP would fail, and even the hijacking of the victim’s first command would not be possible in SMTP, though arbitrary prefixing would still work. Simply numbering the commands sent would fix that, though – and defeat some injection attacks (not that these are currently possible in SSL). These kinds of measures would be interesting defences in depth for future protocols. Keeping a running hash sounds like a better idea to me, though.
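
To illustrate the running-hash idea (this is my own toy sketch, not a worked-out proposal – the key and commands are made up): each command carries a MAC over the command plus a hash of the whole transcript so far, so a command spliced onto a transcript containing injected bytes fails to verify.

```python
import hashlib
import hmac

def send(key: bytes, transcript: bytes, command: bytes):
    """MAC the command together with a hash of the transcript so far."""
    new_transcript = transcript + command
    digest = hashlib.sha256(new_transcript).digest()
    tag = hmac.new(key, command + digest, hashlib.sha256).digest()
    return (command, tag), new_transcript

def verify(key: bytes, transcript: bytes, message) -> bytes:
    command, tag = message
    new_transcript = transcript + command
    digest = hashlib.sha256(new_transcript).digest()
    expected = hmac.new(key, command + digest, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("transcript mismatch: possible injected prefix")
    return new_transcript

key = b"shared-key"
# Honest run: both sides agree on the transcript.
msg, client_t = send(key, b"", b"MAIL FROM:<alice>\r\n")
server_t = verify(key, b"", msg)

# Attack: the server's view of the transcript contains attacker-injected
# bytes the client never saw, so the client's next command fails to verify.
msg2, client_t = send(key, client_t, b"RCPT TO:<bob>\r\n")
poisoned = b"EHLO evil\r\n" + server_t
try:
    verify(key, poisoned, msg2)
    detected = False
except ValueError:
    detected = True
print(detected)  # True
```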

8 Nov 2009

SSL MitM, Day 4

Filed under: Crypto,Security — Ben @ 18:12

Are we having fun yet? First, thanks to Benson, the only person so far to have expressed any kind of appreciation for the work we volunteers do.

Now to Q&A.

  • Several people have pointed out that Adam Langley is unhappy that I (and others) have maligned TLS. Apparently

    …it’s not a flaw in TLS. The TLS security properties are exactly what was intended.

    Isn’t that wonderful? Notice how much more secure the world became now we’ve got that cleared up.

    Also notice how this “intended” property was so carefully explained, and how everyone involved immediately noticed that every single protocol that is layered on top of TLS got this wrong and had them fix it. Not.

    I’m not particularly interested in the blame game. I don’t really care who’s at fault here. What I care about is fixing the problems we have. TLS is broken. TLS is not broken. Whatever. I still seem to be patching code, either way.

    By the way, Adam is incorrect in supposing this is tied to client certificates – this seems to be a common confusion. Any TLS session can be hijacked, so long as the server allows renegotiation. Which pretty much anything based on OpenSSL does.

  • Many people seem to think that fixing this (or working around it, as in OpenSSL 0.9.8l) will break session resumption. I’m not sure where this comes from, but it won’t.
  • On a related note: can a resumed session be attacked? I don’t quite have the energy to test this, but I assume it can – although it would be a weird thing to do, I believe a client is allowed to resume a session during a renegotiation. So, to the server, this would look like the client connected, negotiated a new session, said some stuff, then renegotiated an old session and continued.
  • On why this isn’t the same as XSRF, when considering HTTPS – I forgot to mention that the attacker can use methods other than GET or POST, which is all you can do with XSRF.
  • Again on XSRF: those who think no clicks are necessary for XSRF are wrong – the victim must first follow a link to your evil page.
  • Is this really a MitM attack? Eric Rescorla correctly points out it is rather more limited than a classic MitM attack, but I don’t like his term either. When I first heard of it I called it a blind prefix injection attack. Which is still my preferred term.

So, what next? Eric Rescorla et al have proposed a TLS extension which, when implemented by both clients and servers, fixes this problem by cryptographically binding the two sessions (before and after renegotiation) together.
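
The core idea of the extension, as I understand it, is that each new handshake is bound to the verify_data of the handshake that preceded it on the same connection (a fresh connection binds to an empty value). A toy model – not the real TLS computation – of why that catches the splice:

```python
import hashlib

def handshake(prev_verify_data: bytes, client_hello: bytes,
              server_hello: bytes) -> bytes:
    # Both sides mix the previous handshake's verify_data into the new
    # handshake transcript; this hash stands in for the new verify_data.
    transcript = prev_verify_data + client_hello + server_hello
    return hashlib.sha256(transcript).digest()

# The attacker sets up his own session with the server first:
attacker_session = handshake(b"", b"attacker-hello", b"server-hello")

# The victim then "renegotiates", but knows nothing of the attacker's
# session, so it binds to the empty value; the server binds to the
# attacker's session. The two ends compute different verify_data and
# the handshake fails.
victim_view = handshake(b"", b"victim-hello", b"server-hello-2")
server_view = handshake(attacker_session, b"victim-hello", b"server-hello-2")
print(victim_view == server_view)  # False: the splice is detected
```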

I have today committed code (mostly written by Eric Rescorla) implementing this extension to the OpenSSL tree, in the OpenSSL_0_9_8-stable branch. This is to allow review, of course, but also interop testing. An earlier version of this, which was based on 0.9.8l, was tested against a completely independent implementation by Nasko Oskov. Unfortunately we (the OpenSSL team) later decided that 0.9.8m should be based on the head of the 0.9.8 branch so that it would include various other bug and security fixes, so this version is not exactly the same as the one I tested with Nasko. I will be re-testing at the earliest opportunity.

Implementing and testing this fix has raised a problem, though. One of the nice features of the extension is that it is backward compatible with old clients, so long as they don’t try to renegotiate. However, there is no corresponding mechanism for backward compatibility with old servers. A client connecting to an old server has no way to know whether an attack occurred or not – only the server can detect that (it sees a renegotiation) – but since the server is old, it won’t know this is bad. I don’t have a solution to this problem at this time. Perhaps we shouldn’t try to solve it, and just require servers to upgrade.

6 Nov 2009

SSL MitM Attack, Part 2

Filed under: Crypto,Security — Ben @ 12:46

A lot can happen in a day. Yesterday the news broke that SSL was compromised. We immediately (OK, it took about 10 hours) released a new version of OpenSSL, 0.9.8l, which mitigates the problem by completely disabling renegotiation. Obviously this will break some sites, and so is not a full fix, so the next step is to implement Eric Rescorla’s TLS extension. However, before I get on with that, it seems I have a few questions to answer.

Firstly, I must thank the anonymous poster who said “OpenSSL is written by monkeys”. But dude, you should’ve included the link. I’ve been meaning to link to that for ages. Well, days.

Secondly, as Marsh said, there is a better answer for people who need renegotiation. This is the extension mentioned above. It won’t work unless clients also implement that, but we are working on that, too (and clearly any client that uses OpenSSL will get it for free as soon as I get the next version out).

To the bloke who asked about ISA and OWA: I have no idea what either of those are.

Does this affect SGC (Server-Gated Cryptography)? I don’t actually know. I think it does, because I think SGC uses renegotiation, but I am not sure. If anyone knows, comment!

To the “but this is just XSRF” (Cross-site request forgery) guy:

  • XSRF does not give the attacker control over headers.
  • Your attack didn’t work on me: I didn’t click the link.
  • HTTP is not the only protocol that uses SSL.

Though the fact that this attack doesn’t actually make HTTP much worse is a pretty damning indictment of HTTP (and HTML)!

Will this patch break session resumption? No – and nor will the 0.9.8l release, which does the same thing more elaborately and correctly.

Finally, even once we’ve implemented the extension, it seems to me this is not really the true fix – applications should be aware of renegotiations and not carry trust across their boundaries. But more on that later, I’ve got code to write.

5 Nov 2009

Another Protocol Bites The Dust

Filed under: Crypto,Open Source,Security — Ben @ 8:03

For the last 6 weeks or so, a bunch of us have been working on a really serious issue in SSL. In short, a man-in-the-middle can use SSL renegotiation to inject an arbitrary prefix into any SSL session, undetected by either end.

To make matters even worse, through a piece of (in retrospect) incredibly bad design, HTTP servers will, under some circumstances, replay that arbitrary prefix in a new authentication context. For example, this is what happens if you configure Apache to require client certificates for one directory but not another. Once it emerges that your request is for a protected directory, a renegotiation will occur to obtain the appropriate client certificate, and then the original request (i.e. the stuff from the bad guy) gets replayed as if it had been authenticated by the client certificate. But it hasn’t.

Not that the picture is all rosy even when client certificates are not involved. Consider the attacker sending an HTTP request of his choosing, ending with the unterminated line “X-Swallow-This: “. That header will then swallow the real request sent by the real user, and will cause any headers from the real user (including, say, authentication cookies) to be appended to the evil request.
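
To make the splice concrete, here’s a toy sketch (hypothetical paths and cookie values, obviously) of the byte stream the server’s HTTP parser ends up seeing – the victim’s request line vanishes into the attacker’s unterminated header, while the victim’s cookie survives, attached to the attacker’s request:

```python
# The attacker's prefix ends with an unterminated header line:
attacker_prefix = (
    b"GET /evil/transfer?to=attacker HTTP/1.1\r\n"
    b"Host: bank.example\r\n"
    b"X-Swallow-This: "
)
# The real user's request then arrives on the same connection:
victim_request = (
    b"GET /account HTTP/1.1\r\n"
    b"Host: bank.example\r\n"
    b"Cookie: session=s3cret\r\n"
    b"\r\n"
)
spliced = attacker_prefix + victim_request
lines = spliced.split(b"\r\n")
print(lines[0])  # the attacker's request line
print(lines[2])  # b'X-Swallow-This: GET /account HTTP/1.1'
```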

It’s obviously going to take a little while for the world to patch this – and since the news is spreading like wildfire I’ve put up a patch to OpenSSL that bans all renegotiation. I’m sure an official release will follow very shortly.

Note that the patch is against the head of the OpenSSL 0.9.8 development tree (that is, it is against 0.9.8l-dev). You may have to do a little work to patch against other versions. And if you intend to deploy this patch permanently, please change at least the textual version of the version number, which you can find in crypto/opensslv.h. Also note that if you need renegotiation for your site to work, I have no solution for you, other than you redesign your site. Sorry.

23 Sep 2009

AES Explained

Filed under: Crypto,Open Source — Ben @ 10:27

AES in cartoon form – a really nice explanation. Example code to go with it, too.

1 Sep 2009

Kim Cameron Explains Why Hoarding Is Not Hoarding

Filed under: Crypto,Open Source,Open Standards,Privacy — Ben @ 14:13

I’ve been meaning for some time to point out that it’s been well over a year since Microsoft bought Credentica and still no sign of any chance for anyone to use it. Kim Cameron has just provided me with an appropriate opportunity.

Apparently the lack of action is because Microsoft need to get a head start on implementation. Because if they haven’t got it all implemented, they can’t figure out the appropriate weaseling on the licence to make sure they keep a hold of it while appearing to be open.

if you don’t know what your standards and implementations might look like, you can’t define the intellectual property requirements.

Surely the requirements are pretty simple, if your goal is to not hoard? You just let everyone use it however they want. But clearly this is not what Microsoft have in mind. They want it “freely” used on their terms. Not yours.

30 May 2009

Wave Trust Patterns

Filed under: Crypto,Open Source,Open Standards,Privacy,Security — Ben @ 6:04

Ben Adida says nice things about Google Wave. But I have to differ with

… follows the same trust patterns as email …

Wave most definitely does not follow the same trust patterns as email; that is something we have explicitly tried to improve upon. In particular, the crypto we use in the federation protocol ensures that the origin of all content is known and that the relaying server did not cheat by omitting or re-ordering messages.

I should note, before anyone gets excited about privacy, that the protocol is a server-to-server protocol and so does not identify you any more than your email address does. You have to trust your server not to lie to you, though – and that is similar to email. I run my own mail server. Just saying.

I should also note that, as always, this is my personal blog, not Google’s.

29 May 2009

Google Wave Federation

Filed under: Crypto,Security — Ben @ 0:26

Today Google announced Google Wave. I’m not going to talk about Wave itself, just search for it and get a ton of articles. Suffice it to say that it is awesome.

What I want to mention is the Wave Federation Protocol, and in particular, General Verifiable Federation, which is the part my talented colleague Lea Kissner and I worked on. I know I’m a crypto geek, but I think this protocol is pretty interesting, with applications wider than just Google Wave, since it creates a platform for building federated messaging systems in which you do not trust intermediaries.

Lea and I welcome feedback on the protocol, which we are sure is full of mistakes right now, as we were in a bit of a rush to hit today’s deadline…

(And for those friends who are probably wondering now if this is why I went to Australia earlier this year, the answer is, unsurprisingly: yes).

22 Feb 2009

What Is DNSSEC Good For?

Filed under: Crypto,DNSSEC,Security — Ben @ 18:24

A lot of solutions to all our problems begin with “first find a public key for the server”, for example, signing XRD files. But where can we get a public key for a server? Currently the only even slightly sane way is by using an X.509 certificate for the server. However, there are some problems with this approach:

  1. If you are going to trust the key, then the certificate must come from a trusted CA, and hence costs money.
  2. Because the certificate is a standard X.509 certificate, it can be used (with the corresponding private key, of course) to validate an HTTPS server – but you may not want to trust the server with that power.
  3. The more we (ab)use X.509 certificates for this purpose, the more services anyone with a certificate can masquerade as (for the certified domain, of course).

One obvious way to fix these is to add extensions to the certificates that prevent their use for inappropriate services. Of course, then we would have to get the CAs to support these extensions and figure out how to validate certificate requests that used them.

But I have to wonder why we’re involving CAs in this process at all. All the CA does is establish that the person requesting the certificate is the owner of the corresponding domain. But why do we need that service? Why could the owner of the domain not simply include the certificate in the DNS – after all, only the owner of the domain can do that, so what further proof is required?

Obviously the answer is: DNS is not secure! This would allow anyone to easily spoof certificates for any domain. Well, yes – that’s why you need DNSSEC. Forgetting the details of DNSSEC, the interesting feature is that the owner of a domain also owns a private key that can sign entries in that domain (and no-one else does, if the owner is diligent). So, the domain owner can include any data they want in their zone and the consumer of the data can be sure, using DNSSEC, that the data is valid.

So, when the question “what is the public key for service X on server Y?” arises, the answer should be “look it up in the DNS with DNSSEC enabled”. The answer is every bit as secure as current CA-based certificates, and, what’s more, once the domain owner has set up his domain, there is no further cost to him – any new keys he needs he can just add to his zone and he’s done.
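
As a sketch of what the client-side check might look like – assuming a hypothetical DNS record that publishes a hash of the service’s public key (the record format here is made up, not any actual standard):

```python
import hashlib

def published_record(pubkey: bytes) -> str:
    # What the zone owner puts in the (DNSSEC-signed) record:
    # the SHA-256 of the service's public key.
    return hashlib.sha256(pubkey).hexdigest()

def check_key(record: str, presented_key: bytes) -> bool:
    # The client compares the key the server presents against the
    # record it fetched via DNSSEC.
    return hashlib.sha256(presented_key).hexdigest() == record

genuine = b"-----BEGIN PUBLIC KEY----- ..."   # placeholder key bytes
record = published_record(genuine)            # fetched via DNSSEC in real life
print(check_key(record, genuine))             # True
print(check_key(record, b"attacker key"))     # False
```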

Does DNSSEC have any other uses? OK, it would be nice to know that the A record you just got back corresponds to the server you were looking for, but if you trust a connection just on the basis that you used the right address, you are dead meat – you’ll need some key checking on top of it (for example, by using TLS) to avoid attacks by evil proxies (such as rogue wifi hotspots) or routing attacks and so forth. For me, the real value in DNSSEC is cryptographic key distribution.

11 Feb 2009

Crypto Craft Knowledge

Filed under: Crypto,Programming,Rants,Security — Ben @ 17:50

From time to time I bemoan the fact that much of good practice in cryptography is craft knowledge that is not written down anywhere, so it was with interest that I read a post by Michael Roe about hidden assumptions in crypto. Of particular interest is this

When we specify abstract protocols, it’s generally understood that the concrete encoding that gets signed or MAC’d contains enough information to unambiguously identify the field boundaries: it contains length fields, a closing XML tag, or whatever. A signed message {Payee, Amount} K_A should not allow a payment of $3 to Bob12 to be mutated by the attacker into a payment of $23 to Bob1. But ISO 9798 (and a bunch of others) don’t say that. There’s nothing that says a conforming implementation can’t send the length field without authentication.

No, of course an implementor probably wouldn’t do that. But they might.

Actually, in my extensive experience of reviewing security-critical code, this particular error is extremely common. Why does Michael assume that they probably wouldn’t? Because he is steeped in the craft knowledge around crypto. But most developers aren’t. Most developers don’t even have the right mindset for secure coding, let alone correct cryptographic coding. So, why on Earth do we expect them to follow our unwritten rules, many of which are far from obvious even if you understand the crypto?
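
Michael’s Bob12/Bob1 example is easy to demonstrate: with no authenticated field boundaries, two different messages encode to the same bytes, so they MAC identically; length-prefixing the fields fixes it. A quick sketch (made-up key, obviously):

```python
import hashlib
import hmac

key = b"k"

def naive_mac(payee: str, amount: str) -> bytes:
    # No field boundaries in the concrete encoding: ambiguous.
    return hmac.new(key, (payee + amount).encode(), hashlib.sha256).digest()

def framed_mac(payee: str, amount: str) -> bytes:
    # Length-prefix each field so the boundaries are authenticated too.
    data = b"".join(len(f).to_bytes(4, "big") + f
                    for f in (payee.encode(), amount.encode()))
    return hmac.new(key, data, hashlib.sha256).digest()

# "Bob12" + "3" and "Bob1" + "23" are the same bytes, so the naive MAC
# cannot tell a $3 payment to Bob12 from a $23 payment to Bob1:
print(naive_mac("Bob12", "3") == naive_mac("Bob1", "23"))   # True: forgery
print(framed_mac("Bob12", "3") == framed_mac("Bob1", "23")) # False
```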

8 Jan 2009

OpenPGP:SDK V0.9 Released

Filed under: Crypto,Open Source — Ben @ 23:14

A long time ago my sometimes collaborator, Rachel Willmer, and I started work on a BSD-licensed OpenPGP library, sponsored by Nominet.

Things slowed down a bit when, shortly after getting the initial code done, I got hired by Google – that was a bit distracting. But Rachel has been plugging away ever since, with occasional interference from me. So, I’m pleased to announce that we’ve reached the point of a somewhat feature complete release, for some value of “feature complete”, OpenPGP:SDK V0.9.

Of course, it’s an open source project, so contributions and bug reports are welcome. More to the point, if anyone would rather use a library than shell out to gpg, I’d be happy to help them figure out how.

7 Jan 2009

Yet Another Serious Bug That’s Been Around Forever

Filed under: Crypto,Open Source,Programming,Rants,Security — Ben @ 17:13

Late last year the Google Security Team found a bug in OpenSSL that’s been there, well, forever. That is, nearly 10 years in OpenSSL and, I should think, for as long as SSLeay existed, too. This bug means that anyone can trivially fake DSA and ECDSA signatures, which is pretty damn serious. What’s even worse, numerous other packages copied (or independently invented) the same bug.

I’m not sure what to say about this, except to reiterate that it seems people just aren’t very good at writing or reviewing security-sensitive code. It seems to me that we need better static analysis tools to catch this kind of obvious error – and so I should bemoan, once more, that there’s really no-one working seriously on static analysis in the open source world, which is a great shame. I’ve even offered to pay real money to anyone (credible) that wants to work in this area, and still, nothing. The closed source tools aren’t that great, either – OpenSSL is using Coverity’s free-for-open-source service, and it gets a lot of false positives. And it didn’t find this rather obvious (and obviously statically analysable) bug.
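
I won’t reproduce the actual code here, but the general shape of this class of bug (as I understand it) is a three-valued return – 1 for a good signature, 0 for a bad one, -1 for an internal error – checked as if it were a boolean, so an attacker who can force the error path gets treated as verified. A toy sketch:

```python
def verify_signature(valid: bool, error: bool = False) -> int:
    # Mimics the common C convention: 1 = good, 0 = bad, -1 = error.
    if error:
        return -1
    return 1 if valid else 0

def buggy_check(ret: int) -> bool:
    # Bug: any non-zero return, including -1, counts as success.
    return bool(ret)

def careful_check(ret: int) -> bool:
    # Correct: only 1 is success.
    return ret == 1

bad = verify_signature(valid=False, error=True)  # attacker-induced error
print(buggy_check(bad))    # True: "signature" accepted
print(careful_check(bad))  # False
```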

Oh, I should also say that we (that is, the OpenSSL Team) worked with oCERT for the first time on coordinating a response with other affected packages. It was a very easy and pleasant experience, I recommend them highly.

4 Dec 2008

What Does “Unphishable” Mean?

Filed under: Crypto,General,Identity Management,Security — Ben @ 7:36

About a week ago, I posted about password usability. Somewhere in there I claimed that if passwords were unphishable, then you could use the same password everywhere.

Since then, I have had a steady stream of people saying I am obviously wrong. So, let’s take them one at a time…

…as long as the password I type in there is send over (encrypted of course) to the backend and recoverable there as plaintext password, you have to trust it is stored/used securely there.

This does assume that everywhere you use it actually secures your password, and doesn’t just store it as plain text.

…there are many attacks to finding your password — an administrator at Facebook could look it up in the password database…

OK, OK, that’s three, but they say the same thing. This one is easily dismissed – obviously if we are using an unphishable protocol the password is not sent at all and it is not kept in Facebook’s database. If it were, then clearly a phisher would easily be able to get your password once he tricked you into typing it in on his site.

Even with perfect or near-perfect hardware, somebody will always find a way to game the system via social engineering.

Don’t forget that we are in a utopia here where users only ever type their passwords into the unphishable password gadget. I think it’s pretty reasonable to assume that if we’ve trained users to do that, we have also trained them to never reveal their password at all anywhere else, including in person, over the phone, via video-conference or during a teledildonics session. Yes, this does mean changing the world, but … utopia, remember?

Mythical crypto-gadgets simply won’t save the day. All somebody has to do is replace your crypto-gadget with an identical-looking crypto-gadget of their own making and now it becomes the new “password” input field that is so phishable

This seems to be more a criticism of the idea that we can ever get to the password utopia, which is a fair comment, but doesn’t make my argument incorrect. I will offer, though, hardware devices (such as the one I wrote about recently) as an answer. Clearly much harder to replace with “an identical-looking crypto-gadget of their own making” than software.

There is also the notion of the “trusted path” which, if anyone ever figures out how to implement it in software, would make such a replacement equally difficult even if we don’t use hardware. However, if you read the Red Pill/Blue Pill paper, you’ll see I don’t hold out much hope for this.

you could have a weak password that the hacker could attack via brute force

This one is actually correct! Yes, it’s true that an unphishable password must be strong. Clearly no system relying solely on a password can defend against an attacker guessing the password and seeing if it works. The only defence against this is to make it infeasible for the attacker to guess it in reasonable time. So, yes, you must use a strong password. Sorry about that.

The primary reason one should not use the same password everywhere is that once that password is discovered at one location, then it can be reused at other locations

I feel that we’re veering off into philosophy slightly with this one, particularly since, in the same post, Conor says

I also look forward to being able to login once at the start of my day and maintain that state in a reasonably secure fashion for the entire day without having to re-authenticate every few minutes

which is an interesting piece of doublethink – surely if whatever provides this miraculous experience (one I also look forward to) is compromised then you are just as screwed – so wouldn’t the argument be that I should have a large number of these things, which I have to log into separately?

Nevertheless, I will have a go at it. In our utopia, remember, our password is only ever revealed to trusted widgets (whether hardware, software or something else is immaterial). This means, of course, that the password can’t be “discovered at one location” – this is the nature of unphishability! Therefore, I claim that the criticism is a priori invalid. Isn’t logic wonderful?

I don’t follow.

Because I can’t be fooled into divulging some credential where I shouldn’t means that it is appropriate that I use it everywhere? Are there not other attack vectors that would drool at the thought?

I include this for completeness. Clearly, this is a rhetorical device. When Paul comes up with an actual attack, rather than suggesting that there surely must be one, I shall respond.

Conversely, that the fact that I can use the same credential everywhere is somehow a necessary aspect of ‘unphishability’?

Indeed it is. If it were unsafe to use the same credential everywhere, then the protocol would have to reveal something to the other side that could be used to impersonate you (generally known as a “password equivalent” – for example, HTTP Digest Auth enrollment reveals a password equivalent that is not your password). This would make the protocol phishable. Therefore, it is a necessary requirement that an unphishable protocol allows you to use the same password everywhere.
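
To make the Digest example concrete: the server stores HA1 = MD5(username:realm:password), and that value alone (no password needed) is enough to compute valid responses – which is what makes it a password equivalent. A sketch of the RFC 2617 computation (without qop), with made-up credentials:

```python
import hashlib

def md5hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

user, realm, password = "alice", "example", "hunter2"
ha1 = md5hex(f"{user}:{realm}:{password}")  # what the server stores

def digest_response(ha1: str, method: str, uri: str, nonce: str) -> str:
    ha2 = md5hex(f"{method}:{uri}")
    return md5hex(f"{ha1}:{nonce}:{ha2}")

# Anyone who steals HA1 from the server's database can authenticate
# without ever learning the password itself:
r1 = digest_response(ha1, "GET", "/", "nonce123")
stolen_ha1 = ha1
r2 = digest_response(stolen_ha1, "GET", "/", "nonce123")
print(r1 == r2)  # True: HA1 is a password equivalent
```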

Even more finally, for those whose heads exploded at the notion that I can log in with a password without ever revealing the password or a password equivalent, I offer you SRP.
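
For the curious, here’s a toy sketch of the SRP-6a message flow. The parameters are laughably small and there is no padding or validity checking – do not use this for anything real – but it shows the shape: the server stores only a verifier, the password never crosses the wire, and both ends still derive the same session key.

```python
import hashlib
import secrets

N = 2**127 - 1   # toy modulus; real SRP uses a large safe-prime group
g = 3

def H(*parts: bytes) -> int:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return int.from_bytes(h.digest(), "big")

def b2(n: int) -> bytes:
    return n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")

k = H(b2(N), b2(g))

# Enrollment: the client sends salt and verifier v; the server stores
# only those, never the password.
password = b"correct horse"
salt = secrets.token_bytes(16)
x = H(salt, password)
v = pow(g, x, N)

# Login: client sends A, server sends B.
a = secrets.randbelow(N - 2) + 1
A = pow(g, a, N)
b = secrets.randbelow(N - 2) + 1
B = (k * v + pow(g, b, N)) % N

u = H(b2(A), b2(B))
S_client = pow((B - k * v) % N, a + u * x, N)   # client side
S_server = pow(A * pow(v, u, N), b, N)          # server side
print(S_client == S_server)  # True: same key, password never sent
```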

2 Dec 2008

Multi-layered Authentication

Filed under: Crypto,Identity Management,Security — Ben @ 18:28

Although I don’t usually represent Google when I post on this blog (it being my personal blog), I figured readers might be interested in a blog post and an article I wrote with my colleague, Eric Sachs, about combining different authentication mechanisms in order to improve the user experience of strong authentication without having to change all the software in the world.

Of course, there are almost certainly better ways to do all this if you can change the software – that’s our next mission!

26 Nov 2008

Do Passwords Scale?

Filed under: Crypto,Identity Management,Security — Ben @ 11:40

Today I spent an hour with a bunch of academics. Each of the panellists had to talk for a few minutes to set the scene. I decided to talk about the worst usability disaster ever, namely passwords.

The problem with passwords is that we pin all our security on them. Although I can imagine a future world where we pin our security on other things, like strong credentials, I still wonder how that world really looks?

In particular, when I buy the latest Mac, how do I get it all signed up with all these credentials? It seems to me that the only usable answer to that question is that I fetch them from the cloud. How do I do that? Ultimately, with a password of some kind. Yes, you can back it up with a dongle or something, but when I lose that, how do I get a new one? I call you and give you … a password (of course I include my mother’s maiden name, my postcode, my date of birth, the name of my first and most beloved hamster and all that other nonsense as passwords). And if I only ever do this every couple of years, how easy is it to persuade me to do it wrong? Pretty damn easy, if you ask me. And you did.

So, where does this leave us? Users must have passwords, so why fight it? Why not admit that it’s where we have to be and make it a familiar (but secure) process, so that users can actually safely use passwords, phishing-free?

The answer to this is deeply sad. It is because we have done a fantastic job on usability of passwords. They’re so usable that anyone will type their password anywhere they see the word “password” with a box next to it. Phishing is utterly trivial because we have trained the world to expect to be phished every time they see a new website.

Of course, we can fix this cryptographically – that’s easy. But let’s say we did that. How do we stop the user from ever typing their password into a phishable box from this day forward? So long as they only ever type the password into the crypto gadget that does the unphishable protocol, they are safe, no matter who asks them to log in. But as soon as they type it into a text box on a web page, they’re screwed.

So, this is why passwords are the worst usability disaster ever.

Anyway, now to the title. Suppose we get to this utopia, an academic suggested, we’d still be screwed because passwords don’t scale. Because, of course, we need a different password for each site we log in to.

Well, no. If your password is unphishable, then it is obviously the case that it can be the same everywhere. Or it wouldn’t be unphishable. The only reason you need a password for each site is because we’re too lame to fix the real problem. Passwords scale just fine. If it wasn’t for those pesky users (that we trained to do the wrong thing), that is.

20 Nov 2008

You Need Delegation, Too

Kim wants to save the world from itself. In short, he is talking about yet another incident where some service asks for username and password to some other service, in order to glean information from your account to do something cool. Usually this turns out to be “harvest my contacts so I don’t have to find all my friends yet again on the social network of the month”, but in this case it was to calculate your “Twitterank”. Whatever that is. Kim tells us

The only safe solution for the broad spectrum of computer users is one in which they cannot give away their secrets. In other words: Information Cards (the advantage being they don’t necessarily require hardware) or Smart Cards. Can there be a better teacher than reality?

Well, no. There’s a safer way that’s just as useful: turn off your computer. Since what Kim proposes means that I simply can’t get my Twitterank at all (oh, the humanity!), why even bother with Infocards or any other kind of authentication I can’t give away? I may as well just watch TV instead.

Now, the emerging answer to this problem is OAuth, which protects your passwords, if you authenticate that way. Of course, OAuth is perfectly compatible with the notion of signing in at your service provider with an Infocard, just as it is with signing in with a password. But where is the advantage of Infocards? Once you have deployed OAuth, you have removed the need for users to reveal their passwords, so now the value add for Infocards seems quite small.

But if Infocards (or any other kind of signature-based credential) supported delegation, this would be much cooler. Then the user could sign a statement saying, in effect, “give the holder of key X access to my contacts” (or whatever it is they want to give access to) using the private key of the credential they use for logging in. Then they give Twitterank a copy of their certificate and a copy of the new signed delegation certificate. Twitterank presents the chained certificates and proves it holds private key X. Twitter checks the signature on the chained delegation certificate and that the user certificate is the one corresponding to the account Twitterank wants access to, and then gives access to just the data specified in the delegation certificate.
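The chain of checks can be sketched as follows. This is a toy model with all names hypothetical: HMAC under the signer's key stands in for a real public-key signature, so the verifier here needs the keys themselves, where a real deployment would use asymmetric signatures and only public halves. The point is the structure of the delegation chain, not the primitives.

```python
import hashlib
import hmac

def sign(key: bytes, msg: bytes) -> bytes:
    """Toy stand-in for a public-key signature (HMAC under the
    signer's key); a real design would use asymmetric signatures."""
    return hmac.new(key, msg, "sha256").digest()

def verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, msg), sig)

# Hypothetical principals: the user's credential key, and the
# delegate's key "X" from the post.
user_key = b"user credential key"
key_x = b"twitterank key X"

# The user signs a scoped delegation statement naming key X.
statement = b"holder of " + hashlib.sha256(key_x).digest() + b" may read contacts"
delegation_sig = sign(user_key, statement)

# Twitterank proves it holds key X by signing a fresh challenge.
challenge = b"fresh server challenge"
proof = sign(key_x, challenge)

# Twitter checks the chain: the delegation really came from the
# user, and the presenter really holds the delegated key X; access
# is then limited to the scope named in the statement.
assert verify(user_key, statement, delegation_sig)
assert verify(key_x, challenge, proof)
```

Sub-delegation is then just another link: the holder of key X signs a further statement naming key Y, and the verifier walks the chain back to the user's credential.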

The beauty of this is that it can be sub-delegated, a facility that is entirely missing from OAuth, and one that I confidently expect to be the next problem in this space (but apparently predicting such things is of little value – no-one listens until they hit the brick wall that the lack of the facility puts in their way).

19 Oct 2008

J-PAKE Demo

Filed under: Crypto,Open Source,Programming,Security — Ben @ 19:12

Just for fun, I wrote a demo implementation of J-PAKE in C, using OpenSSL for the crypto, of course. I’ve pushed it into the OpenSSL CVS tree; you can find it in demos/jpake. For your convenience, there’s also a copy here.

I’ve tried to write the code so the data structures reflect the way a real implementation would work, so there’s a structure representing what each end of the connection knows (JPakeUser), one for the zero-knowledge proofs (JPakeZKP) and one for each step of the protocol (JPakeStep1 and JPakeStep2). Normally there should be a third step, where each end proves knowledge of the shared key (for example, by Alice sending Bob H(H(K)) and Bob sending Alice H(K)), since differing secrets do not break any of the earlier steps, but because both ends are in the same code I just compare the resulting keys.
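The third step described above (Alice sending Bob H(H(K)) and Bob sending Alice H(K)) can be sketched like this, using SHA-256 as a stand-in for whatever hash a real implementation would pick:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def alice_confirm(key: bytes) -> bytes:
    # Alice sends H(H(K)) first: it proves she holds K without
    # handing Bob anything he could simply echo back.
    return h(h(key))

def bob_confirm(key: bytes) -> bytes:
    # Bob replies with H(K), which Alice checks against her own K.
    return h(key)

alice_key = b"derived shared key"
bob_key = b"derived shared key"

# Bob checks Alice's confirmation against his own key:
assert alice_confirm(alice_key) == h(h(bob_key))
# Alice checks Bob's reply against hers:
assert bob_confirm(bob_key) == h(alice_key)
# Mismatched secrets fail confirmation instead of silently diverging.
assert alice_confirm(b"wrong key") != h(h(bob_key))
```

The asymmetry (H(H(K)) one way, H(K) the other) stops a party from replaying the other side's confirmation as its own.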

The code also implements the protocol steps in a modular way, except that communications happen by magic. This will get cleaned up when I implement J-PAKE as a proper OpenSSL library component.

The cryptographic implementation differs from the Java demo (which I used for inspiration) in a few ways. I think only one of them really matters: the calculation of the hash for the Schnorr signature used in the zero-knowledge proofs – the Java implementation simply concatenates a byte representation of the various parameters. This is a security flaw, as it can be subjected to a “moving goalposts” attack. That is, the attacker could use parameters that gave the same byte representation, but with different boundaries between the parameters. I avoid this attack by including a length before each parameter. Note that I do not claim this attack is feasible, but why gamble? It worked on PGP, after all.
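The “moving goalposts” problem and the length-prefix fix are easy to demonstrate. The sketch below shows two distinct parameter lists that collide under naive concatenation but not once each parameter is preceded by its length:

```python
import hashlib

def naive_hash(params):
    # Concatenating byte representations loses the boundaries
    # between parameters.
    h = hashlib.sha256()
    for p in params:
        h.update(p)
    return h.digest()

def safe_hash(params):
    # Length-prefix each parameter so no two distinct parameter
    # lists can serialise to the same byte stream.
    h = hashlib.sha256()
    for p in params:
        h.update(len(p).to_bytes(4, "big"))
        h.update(p)
    return h.digest()

# "Moving goalposts": different parameters, same concatenation.
assert naive_hash([b"abc", b"def"]) == naive_hash([b"ab", b"cdef"])
# With length prefixes the two lists hash differently.
assert safe_hash([b"abc", b"def"]) != safe_hash([b"ab", b"cdef"])
```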

The code and data structures are completely different, though. Also, because of the cryptographic difference, the two implementations would not interoperate.

1 Sep 2008

Crypto Everywhere

Filed under: Crypto,Security — Ben @ 21:00

Recent events, and a post to the OpenID list got me thinking.

Apparently rfc2817 allows an http url to be used for https security.

Given that Apache seems to have that implemented [1] and that the
openid url is mostly used for server to server communication, would
this be a way out of the http/https problem?

I know that none of the browsers support it, but I suppose that if the
client does not support this protocol, the server can redirect to the
https url? This seems like it could be easier to implement than XRI.

Disclaimer: I don’t know much about rfc2817



The core issue is that HTTPS is used to establish end-to-end security, meaning, in particular, authentication and secrecy. If the MitM can disable the upgrade to HTTPS then he defeats this aim. The fact that the server declines to serve an HTTP page is irrelevant: it is the phisher that will be serving the HTTP page, and he will have no such compunction.

The traditional fix is to have the client require HTTPS, which the MitM is powerless to interfere with. Upgrades would work fine if the HTTPS protocol said “connect on port 80, ask for an upgrade, and if you don’t get it, fail”; as it is, however, upgrades happen at the behest of the server, and therefore don’t work.

Of course, the client “requires” HTTPS because there was a link that had a scheme of “https”. But why was that link followed? Because there was an earlier page with a trusted link (we hope) that was followed. (Note that this argument applies both to users clicking links and to OpenID servers following metadata.)

If that page was served over HTTP, then we are screwed, obviously (bearing in mind DNS cache attacks and weak PRNGs).

This leads to the inescapable conclusion that we should serve everything over HTTPS (or other secure channels).

Why don’t we? Cost. It takes far more tin to serve HTTPS than HTTP. Even really serious modern processors can only handle a few thousand new SSL sessions per second. New plaintext sessions can be dealt with in their tens of thousands.

Perhaps we should focus on this problem: we need cheap end-to-end encryption. HTTPS solves this problem partially through session caching, but it can’t easily be shared across protocols, and sessions typically last on the order of five minutes, an insanely conservative figure.

What we need is something like HTTPS, shareable across protocols, with caches that last at least hours, maybe days. And, for sites we have a particular affinity with, an SSH-like pairing protocol (with less public key crypto – i.e. more session sharing).

Having rehearsed this discussion many times, I know the next objection will be DoS on the servers: a bad guy can require the server to spend its life doing PK operations by pretending he has never connected before. Fine, relegate PK operations to the slow queue. Regular users will not be inconvenienced: they already have a session key. Legitimate new users will have to wait a little longer for initial load. Oh well.
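A minimal sketch of that design (all names hypothetical): a cross-protocol session cache with an hours-long lifetime, where a peer with no cached key is put on a slow queue for the expensive public-key handshake rather than served immediately, so an attacker pretending never to have connected cannot force unbounded PK work.

```python
import time
from collections import deque

SESSION_TTL = 24 * 3600  # seconds: "at least hours, maybe days"

class SessionCache:
    def __init__(self):
        self._keys = {}            # peer -> (session_key, expiry)
        self.slow_queue = deque()  # peers awaiting a PK handshake

    def lookup(self, peer: str):
        """Fast path: return the cached session key, no PK operation.
        Unknown or expired peers are queued for the slow path."""
        entry = self._keys.get(peer)
        if entry and entry[1] > time.time():
            return entry[0]
        self.slow_queue.append(peer)
        return None

    def store(self, peer: str, session_key: bytes):
        """Record the key negotiated by the (slow) PK handshake."""
        self._keys[peer] = (session_key, time.time() + SESSION_TTL)

cache = SessionCache()
assert cache.lookup("mail.example.com") is None        # new peer: queued
cache.store("mail.example.com", b"negotiated key")
assert cache.lookup("mail.example.com") == b"negotiated key"
assert list(cache.slow_queue) == ["mail.example.com"]
```

Because the cache is keyed by peer rather than by protocol, the same session key could in principle serve HTTP, SMTP and anything else the two endpoints speak.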

13 Aug 2008

Keyczar

Filed under: Crypto,Open Source — Ben @ 11:21

When I joined Google over two years ago I was asked to find a small project to get used to the way development is done there. The project I chose was one that some colleagues had been thinking about, a key management library. I soon realised that unless the library also handled the crypto it was punting on the hard problem, so I extended it to do crypto and to handle key rotation and algorithm changes transparently to the user of the library.

About nine months later I handed over my “starter project” to Steve Weis, who has worked on it ever since. For a long time we’ve talked about releasing an open source version, and I’m pleased to say that Steve and intern Arkajit Dey did just that, earlier this week: Keyczar.

Keyczar is an open source cryptographic toolkit designed to make it easier and safer for developers to use cryptography in their applications. Keyczar supports authentication and encryption with both symmetric and asymmetric keys. Some features of Keyczar include:

  • A simple API
  • Key rotation and versioning
  • Safe default algorithms, modes, and key lengths
  • Automated generation of initialization vectors and ciphertext signatures

When we say simple, by the way, the code for loading a keyset and encrypting some plaintext is just two lines. Likewise for decryption. And the user doesn’t need to know anything about algorithms or modes.
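The key rotation and versioning idea can be sketched as below. To be clear, this is a toy illustration of the concept, not Keyczar's actual API or wire format: each MAC is tagged with the version of the key that produced it, so keys can be rotated without breaking verification of older output.

```python
import hmac

# Hypothetical keyset: version -> key material.
keyset = {1: b"retired key", 2: b"current key"}
CURRENT = 2

def sign(msg: bytes) -> bytes:
    """Sign with the current key, tagging the output with its version."""
    tag = hmac.new(keyset[CURRENT], msg, "sha256").digest()
    return bytes([CURRENT]) + tag

def verify(msg: bytes, sig: bytes) -> bool:
    """Look up the key by the version tag, so old output still verifies."""
    version, tag = sig[0], sig[1:]
    key = keyset.get(version)
    if key is None:
        return False
    return hmac.compare_digest(hmac.new(key, msg, "sha256").digest(), tag)

sig = sign(b"hello")
assert verify(b"hello", sig)
# Output signed under the old key still verifies after rotation.
old = bytes([1]) + hmac.new(keyset[1], b"hello", "sha256").digest()
assert verify(b"hello", old)
```

Retiring a key is then just deleting its entry from the keyset, at which point anything it signed stops verifying.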

Great work, guys! I look forward to the “real” version (C++, of course!).
