Links

Ben Laurie blathering

8 May 2011

Checking SSL Certificates

Filed under: Security — Ben @ 12:31

I mentioned my work on the Google Certificate Catalog recently. One thing I forgot to mention is a command-line utility I wrote to perform the check for you automatically.

You can find it here.
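
For the curious, the check is simple enough to sketch. Here's a rough Python illustration of the kind of lookup involved, assuming the DNS interface described in the catalog announcement (the hex SHA-1 hash of the DER certificate, prefixed to certs.googlednstest.com) – the real utility is the place to look for the details. This needs the third-party dnspython package.

import hashlib
import ssl

import dns.resolver  # third-party: dnspython

def catalog_lookup(host, port=443):
    # Fetch the server's certificate and convert it to DER form.
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    # The catalog is indexed by the hex SHA-1 hash of the DER certificate.
    name = hashlib.sha1(der).hexdigest() + ".certs.googlednstest.com"
    try:
        answer = dns.resolver.resolve(name, "TXT")
    except dns.resolver.NXDOMAIN:
        return None  # the catalog has never seen this certificate
    # The TXT record holds day numbers (days since 1 Jan 1970): first seen,
    # most recently seen, and number of days seen.
    return answer[0].strings

print(catalog_lookup("www.google.com"))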

3 Apr 2011

Improving SSL Certificate Security

Filed under: Crypto,DNSSEC,Security — Ben @ 19:47

Given how often I say on this blog that I am not speaking for my employer, I am amused to be able to say for once that I am. Over here.

9 Mar 2011

Capsicum Wins Cambridge Ring Award

Filed under: Capabilities,Security — Ben @ 19:28

Of course, I know that capabilities are really important, and that the work we (I say we as if I did much – the hard graft is down to Robert Watson and Jon Anderson) have done on adding capabilities to FreeBSD is particularly awesome. But I continue to be amazed at the community reaction to it.

The latest accolade is the rather unwieldy Cambridge Ring Hall of Fame Award for Best Publication of the Year.

You know, I’m beginning to think we might actually make some serious progress with capabilities in the next year or two. Watch this space, there’s a lot going on in this field!

24 Feb 2011

Who the Hell are 2o7?

Filed under: Privacy,Security — Ben @ 13:38

My friend Adriana pointed me to this cool track-blocking extension for Chrome.

Back in the day, I used to do this kind of blocking “by hand” – i.e. by manually deciding which cookies to block and which to allow. This is far from an exact science – it’s fairly easy to block some sites into uselessness – so I’m pleased to see an automated alternative.

In any case, it all came to an end when Chrome decided (without any explanation I ever saw) to drop the ability to control cookies, so extensions are probably the only way now.

Anyway, it reminded me of something I kept meaning to look into but never got very far with: 2o7.net. This domain crops up all the time if you start monitoring cookies, and is clearly some massive tracking operation. But I’ve never heard of it, and nor has anyone else I know.

So … who the hell are 2o7? (And yes, I can do whois, which leads me to Omniture. Not much the wiser. Except they now seem to be owned by Adobe – mmm – looking forward to mixing all that tracking data with Adobe’s careful attention to security).

Note, btw, that the cool track-blocking extension doesn’t appear to have heard of 2o7 either. In my experience you can just block all their cookies without harm.

16 Feb 2011

Two Cool Caja Things

Filed under: Caja — Ben @ 14:01

Firstly, Paypal are using Caja to protect their customers from errors or evilness in gadgets. There are also some performance hints here.

Secondly, my esteemed colleague, Jasvir Nagra, has put together a really nice playground for Caja. Have a go, it’s pretty.

That is all.

21 Dec 2010

Is Openleaks The Next Haystack?

As everyone who’s even half-awake knows by now, a bunch of people who used to work on Wikileaks have got together to work on Openleaks. From what I hear, Openleaks is going to be so much better than Wikileaks – it will have no editorial role, it will strongly protect people who submit leaks, it’s not about the people who run it, it’ll be distributed and encrypted.

But where’s the design to back up this rhetoric? Where are the security reviews from well-known authorities? They seem to be missing. Instead we have excited articles in mainstream media about how wonderful it is going to be, and how many hours the main man has spent on it.

This sounds very familiar indeed. And we all know what happened last time round.

Of course, Openleaks may be fine, but I strongly suggest that those who are working on it publish their plan and subject it to scrutiny before they put contributors at risk.

As always, I offer my services in this regard. I am sure I am not alone.

18 Dec 2010

ƃuıʇsılʞɔɐlq uʍop-ǝpısd∩

Filed under: Anonymity,Crypto,Identity Management,Lazyweb,Privacy — Ben @ 14:54

A well-known problem with anonymity is that it allows trolls to ply their unwelcome trade. So, pretty much every decent cryptographic anonymity scheme proposed has some mechanism for blacklisting. Basically these work by some kind of zero-knowledge proof that you aren’t on the blacklist – and once you’ve done that you can proceed.

However, this scheme suffers from the usual problem with trolls: as soon as they’re blacklisted, they create a new account and carry on. Solving this problem ultimately leads to a need for strong identification for everyone so you can block the underlying identity. Obviously this isn’t going to happen any time soon, and ideally never, so blacklists appear to be fundamentally and fatally flawed, except perhaps in closed user groups (where you can, presumably, find a way to do strong-enough identification, at least sometimes) – for example, members of a club, or employees of a company.

So lately I’ve been thinking about using “blacklists” for reputation. That is, rather than complain about someone’s behaviour and get them blacklisted, instead when you see someone do something you like, add them to a “good behaviour blacklist”. Membership of the “blacklist” then proves the (anonymous) user has a good reputation, which could then be used, for example, to allow them to moderate posts, or could be shown to other users of the system (e.g. “the poster has a +1 reputation”), or all sorts of other things, depending on what the system in question does.

The advantage of doing it this way is that misbehaviour can then be used to remove reputation, and the traditional fallback of trolls no longer works: a new account is just as useless as the one they already have.
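
To make the incentive structure concrete, here is a toy sketch. It deliberately elides all the cryptography – in a real system membership of the “good behaviour blacklist” would be proved in zero knowledge, Nymble-style, so the server never learns which member it is talking to – but it shows why opening a fresh account no longer helps a troll.

good_list = set()  # credentials that have earned reputation

def reward(credential):
    good_list.add(credential)

def punish(credential):
    # Misbehaviour removes reputation rather than adding to a blacklist.
    good_list.discard(credential)

def may_moderate(credential):
    # Stand-in for a zero-knowledge proof of membership.
    return credential in good_list

# The troll's traditional escape hatch is now useless: a brand-new
# account has no reputation, so it is no better than the punished one.
assert not may_moderate("fresh-account-42")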

There is one snag that I can see, though, which is that at least some anonymity systems with blacklisting (e.g. Nymble, which I’ve somehow only recently become aware of) have the side-effect of making every login by a blacklisted person linkable. This is not good, of course. I wonder if there are systems immune to this problem?

Given that Jan Camenisch et al have a presentation on upside-down blacklisting (predating my thinking by quite a long way – one day I’ll get there first!), I assume there are – however, according to Henry, Henry and Goldberg, Camenisch’s scheme is not very efficient compared to Nymble or Nymbler.

2 Dec 2010

P2P DNS

Filed under: DNSSEC — Ben @ 14:10

Apparently the Pirate Bay are tired of ICANN and want to start their own peer-to-peer DNS. I think their chances of wide adoption are pretty near zero, but it’s an interesting area that’s needed serious exploration for quite some time. Obviously if you’re doing P2P DNS you need to use DNSSEC or attacks become trivial. Since they also want to have multiple registrars who can nominate themselves, it seems a proposal I made to the DNS working group many years ago could be handy. Basically, the idea is to distribute keys for “islands of security” by having bilateral agreements between them, so each island signs some set of other islands’ keys, if they want to. The user then bootstraps their set of keys by starting from an island or islands they trust.
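
To illustrate the bootstrapping idea (a sketch of the concept, not of the draft itself – signature checking is stubbed out): treat the cross-signatures as edges in a graph, and accept any island key reachable from your trust anchors via valid signatures.

from collections import deque

def verify(signer_key, subject_key, signature):
    # Placeholder: really a DNSSEC-style signature check over the key.
    return True

def bootstrap(anchors, cross_sigs):
    # anchors: {island: key} trusted a priori by the user.
    # cross_sigs: (signer_island, subject_island, subject_key, signature).
    trusted = dict(anchors)
    queue = deque(anchors)
    while queue:
        signer = queue.popleft()
        for s, subject, key, sig in cross_sigs:
            if s == signer and subject not in trusted:
                if verify(trusted[signer], key, sig):
                    trusted[subject] = key
                    queue.append(subject)
    return trusted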

When ferreting this out I found that the -01 version is already on my server, and I just uploaded -02 – not sure what the differences are, when I have some time I’ll make a diff. Probably.

27 Nov 2010

Why Identity Is Still Just Login

Filed under: Identity Management,Privacy — Ben @ 15:14

It seems everyone now agrees that Internet identity is about a collection of assertions relating to a person (or thing, if you want to go there). Some of these assertions are ones you make about yourself, for example “I like brussels sprouts”, and some are assertions others make about you, for instance “Ben is a UK citizen, signed the UK Government”. These assertions are essentially invariant, or slowly varying, for the most part. So what makes an identity, we agree, is some collection of these assertions.

But we also agree that people need to assert different identities: there’s Ben at work, Ben as a member of the OpenSSL team, Ben at home and so on. Each of these identities, we like to think, corresponds to a different collection of assertions.

All we need to do, we think, is map these identities onto collections of electronic assertions, and we’ll have solved The Internet Identity Problem. People will no longer be required to type in their credit card number five times a day, endlessly write down their address (and correct it when it changes) and so on. The ‘net will become one lovely, seamless experience of auto-communicated factoids about me that are just right for every circumstance and I’ll never fill in a form again.

You can probably see where I’m going. The more I think about it, the more I realise that every situation is different. My “identity” is contextual, and different for each context. We know this from endless studies of human behaviour.

So, what was the point of doing what every electronic identity system wants me to do, namely aggregating various assertions about me into various identities, and then choosing the right identity to reveal? To match this to what I do in the real world, I will need a different identity for each context.

So, what was the point of even talking about identities? Why not simply talk about assertions, and find better ways for me to quickly make the assertions I want to make? Cut out the middleman and do away with the notion of identity.

In practice, of course, this is exactly what has happened. The only vestige of this electronic identity that makes any sense is the ability to log in as some particular “person”. After that all my decisions are entirely contextual, and “identity” doesn’t help me at all. And so what we see is that “identity” has simply become “login”. And will always be so.

In a sense this is exactly what my Belay research project is all about – allowing me to decide exactly what I reveal in each individual context. In Belay, the ability to log in to a particular site will become the same kind of thing as any other assertion – a fine-grained permission I can grant or not grant.

Note: I am not against bundles of assertions – for example, I think lines one and two of my address clearly belong together (though for some purposes the first half of my postcode, or just my country, should suffice) or, less facetiously, when I use my credit card I almost always need to send a bunch of other stuff along with the credit card number. What I doubt is that the notion of a bundle that corresponds to an “identity” is a useful one. This implies that UIs where you pick “an identity” are doomed to failure – some other approach is needed.
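
If it helps, here is the difference in data-model terms, as a toy sketch (names and fields invented for illustration): assertions are individually grantable items, and a context simply accumulates grants – no bundle called an “identity” ever appears.

# Each assertion stands alone; none is bundled into an "identity".
assertions = {
    "email-home": ("email=ben@example.org", "self"),
    "uk-citizen": ("citizen=UK", "UK Government"),
    "cc-number": ("card=4111...", "self"),
    "cc-billing": ("billing address lines 1 and 2", "self"),
}

grants = {}  # context -> set of assertion ids that context may see

def grant(context, assertion_id):
    grants.setdefault(context, set()).add(assertion_id)

# Paying a merchant needs the card details and nothing else; which
# other contexts know my email address is irrelevant to that decision.
grant("merchant.example", "cc-number")
grant("merchant.example", "cc-billing")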

25 Oct 2010

Firesheep: Session Hijacking for Morons

Filed under: Crypto,Privacy,Security — Ben @ 13:35

OK, we’ve all known forever that using any kind of credential over an unencrypted connection is a Bad Idea(tm). However, we also know that pretty much every website does an Obi-Wan over session cookies, which typically travel over HTTP. “These are not the credentials you are looking for”, they tell us.

Firesheep proves that comprehensively wrong. Surf your favourite login-requiring site on an open network, and *BANG*, you’re pwned. Awesome piece of work. Eric Butler, the author, says:

Websites have a responsibility to protect the people who depend on their services. They’ve been ignoring this responsibility for too long, and it’s time for everyone to demand a more secure web. My hope is that Firesheep will help the users win.
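
For the website operators in the audience, the fix is not mysterious: serve the whole logged-in session over HTTPS, and mark the session cookie so it never travels in the clear. A minimal sketch using Python’s standard library (the cookie name and value are placeholders):

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-random-token"
cookie["session"]["secure"] = True    # only ever sent over TLS
cookie["session"]["httponly"] = True  # not readable from page script
print(cookie.output())
# Set-Cookie: session=opaque-random-token; HttpOnly; Secure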

19 Oct 2010

Phished by Visa: The Aftermath

Filed under: Security — Ben @ 12:18

Well over a year ago I wrote about how stupid the Verified by Visa program is. Apparently the mainstream press have now caught up as fraudsters gear up to exploit this fantastic piece of security design. I particularly like the claim from a UK Cards Association representative that VbV reduces fraud (at around 2:30) – immediately after a victim explains that her bank refused to even investigate the possibility of fraud.

This is, of course, in line with the modern banking strategy for fraud: shift all blame to the customer.

2 Oct 2010

Aims not Mechanisms

Filed under: Privacy,Rants — Ben @ 22:18

I’m a big fan of the EFF, so it comes as a bit of a surprise when I see them say things that don’t make any sense.

A while back the EFF posted a bill of privacy rights for social network users. Whilst I totally sympathise with what the EFF is trying to say here, I’m disappointed that they go the way of policymakers by ignoring inconvenient technical reality and proposing absurd policies.

In particular, I refer you to this sentence:

The right to control includes users’ right to decide whether their friends may authorize the service to disclose their personal information to third-party websites and applications.

In other words, if I post something to a “social network” (whatever that is: yes, I have an informal notion of what it means, and I’m sure you do, too, but is, say, my blog part of a “social network”? Email?) then I should be able to control whether you, a reader of the stuff I post, can do so via a “third-party application”. For starters, as stated, this is equivalent to determining whether you can read my post at all in most cases, since you do so via a browser, which is a “third-party application”. If I say “no” to my friends using “third-party applications” then I am saying “no” to my friends reading my posts at all.

Perhaps, then, they mean specific third-party applications? So I should be able to say, for example, “my friends can read this with a browser, but not with evil-rebroadcaster-app, which not only reads the posts but sends them to their completely public blog”? Well, perhaps, but how is the social network supposed to control that? This is only possible in the fantasy world of DRM and remote attestation.

Do the EFF really want DRM? Really? I assume not. So they need to find a better way to say what they want. In particular, they should talk about the outcome and not the mechanism. Talking about mechanisms is exactly why most technology policy turns out to be nonsense: mechanisms change and there are far more mechanisms available than any one of us knows about, even those of us whose job it is to know about them. Policy should not talk about the means employed to achieve an aim, it should talk about the aim.

The aim is that users should have control over where their data goes, it seems. Phrased like that, this is clearly not possible, nor even desirable. Substitute “Disney” for “the users” and you can immediately see why. If you solve this problem, then you solve the DRM “problem”. No right thinking person wants that.

So, it seems like EFF should rethink their aims, as well as how they express them.

26 Sep 2010

The Tragedy of the Uncommons

Filed under: Rants,Security — Ben @ 3:46

An interesting phenomenon seems to be emerging: ultra-hyped projects are turning out to be crap. I am, of course, speaking of Haystack and Diaspora (you should follow these links, I am not going to go over the ground they cover, much).

The pattern here is that some good self-promoters come up with a cool idea, hype it up to journalists, who cannot distinguish it from the other incomprehensible cool stuff we throw at them daily, who duly write about how it’ll save the world. The interesting thing is what happens next. The self-promoters now have to deliver the goods. But, for some reason, rather than enlisting the help of experts to assist them, they seem to be convinced that because they can persuade the non-experts with their hype they can therefore build this system they have been hyping. My instatheory[1] is that it’d dilute their fame if they shared the actual design and implementation. They’ve got to save the world, after all. Or we could be more charitable and follow Cialdini: it seems humans have a strong drive to be consistent with their past actions. Our heroes have said, very publicly, that they’re going to build this thing, so now they have a natural tendency to do exactly what they said[2].

But the end result, in my sample of two, is disastrous. Haystack has completely unravelled as fundamentally flawed. Diaspora seems to be deeply rooted in totally insecure design. I hope I am preaching to the choir when I say that security is not something that should be bolted on later, and that the best way to do security design is to have the design reviewed as widely as possible. In both Haystack and Diaspora’s cases that could, and should, have been a full public review. There is no excuse for this; it wastes a vast amount of enthusiasm and energy (and money) on ultimately destructive goals.

I don’t have any great ideas on how to fix this, though. Yes, reporters getting expert assistance will help: many of the experts in the security field are quite outspoken, and it isn’t hard to track them down. In Diaspora’s case, perhaps one could have expected that Kickstarter would take a more active role in guidance and mentoring. Or if they already do, get it right.

Natural selection gets you every time.

BTW, if any journalists are reading this, I am absolutely happy to take a call to explain, in English, technological issues.

[1] I love this word. Ben Hyde introduced me to it.

[2] This is known as “consistency” in the compliance trade.

14 Sep 2010

Experimenting With Client Certificates

Filed under: Crypto,Identity Management — Ben @ 16:30

I was recently contacted about yet another attempt to use client certificates for authentication. As anyone paying attention knows, this has some attractions but is pretty much unusable in browsers because of their diabolical UIs. So, I was fascinated to learn that this particular demo completely avoids that issue by implementing TLS entirely in Javascript! This strikes me as a hugely promising approach: now we have complete freedom to experiment with UI, whilst the server side can continue to use off-the-shelf software and standard configurations.
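
To illustrate what “off-the-shelf” means on the server side, here is a minimal sketch (file names are placeholders) using Python’s standard ssl module to demand a client certificate – nothing about the Javascript experiment requires anything fancier than this:

import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server-cert.pem", "server-key.pem")
ctx.load_verify_locations("trusted-client-ca.pem")
ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a client cert

with socket.create_server(("", 8443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls:
        conn, addr = tls.accept()
        print(conn.getpeercert())  # the authenticated client's certificate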

Once a UI has been found that works well, I would hope that it would migrate to become part of the browser; it seems pretty clear that doing this on the webpage is not likely to lead to a secure solution in the long run. But in the meantime, anyone can have a crack at their own UI, and all they need is Javascript (OK, for non-coders that might sound like a problem, but believe me, the learning curve is way shallower than for any browser I’ve played with).

Anyway, pretty much end-of-message, except for some pointers.

I am very interested in finding competent JS/UI people who would be interested in banging harder on this problem – I can do all the crypto stuff, but I confess UI is not my forte! Anyone out there?

Note, by the way, that the focus on browsers as the “home of authentication” is also a barrier to change – applications also need to authenticate. This is why “zero install” solutions that rely on browsers (e.g. OpenID) are likely doomed to ultimate failure – by the time you’ve built all that into an application (which is obviously not “zero install”), you might as well have just switched it to using TLS and a client certificate…

16 Aug 2010

It’s All About Blame

Filed under: Anonymity,Crypto,Privacy,Security — Ben @ 17:57

I do not represent my employer in this post.

Eric Schmidt allegedly said

“The only way to manage this is true transparency and no anonymity. In a world of asynchronous threats, it is too dangerous for there not to be some way to identify you. We need a [verified] name service for people. Governments will demand it.”

I don’t care whether he actually said it, but it neatly illustrates my point. The trouble with allowing policy makers, CEOs and journalists define technical solutions is that their ability to do so is constrained by their limited understanding of the available technologies. At Google (who I emphatically do not represent in this post), we have this idea that engineers should design the systems they work on. I approve of this idea, so, speaking as a practising engineer in the field of blame (also known as security), I contend that what Eric really should have allegedly said was that the only way to manage this is true ability to blame. When something goes wrong, we should be able to track down the culprit. Governments will demand it.

Imagine if, the next time you got on a plane, instead of showing your passport, you instead handed over an envelope with a fancy seal on it, containing your ID, with windows showing just enough to get you on the plane (e.g. your ticket number and photo). The envelope could be opened on the order of a competent court, should it turn out you did something naughty whilst travelling, but otherwise you would remain unidentified. Would this not achieve the true aim that Eric allegedly thinks should be solved by universal identification? And is it not, when spread to everything, a better answer?

Of course, in the physical world this is actually quite hard to pull off, tamper-proof and -evident seals being what they are (i.e. crap), but in the electronic world we can actually do it. We have the crypto.
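
For the flavour of it, a sketch of the electronic envelope (using the pyca/cryptography library; in a real design the envelope would also be signed and the escrow key split among several parties): the “windows” travel in the clear, while the full identity is encrypted to a key held in escrow for the court.

import json
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

court_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def seal(full_identity, windows):
    # Hybrid encryption: a fresh AES key seals the identity, and only
    # the court's RSA key can unwrap that AES key.
    data_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    sealed = AESGCM(data_key).encrypt(
        nonce, json.dumps(full_identity).encode(), None)
    wrapped = court_key.public_key().encrypt(
        data_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return {"windows": windows, "nonce": nonce,
            "sealed": sealed, "wrapped_key": wrapped}

# The airline sees only the windows; the court can recover the rest.
envelope = seal({"name": "A. Traveller", "passport": "123456789"},
                {"ticket": "BA0117"})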

Just sayin’.

14 Aug 2010

FreeBSD Capsicum

Filed under: Capabilities,Security — Ben @ 12:34

I mentioned FreeBSD Capsicum in my roundup of capability OSes earlier this year without mentioning that I am involved in the project. Since then we’ve managed to port and sandbox Chromium, using less code than any other Chromium sandbox (100 lines), as well as a number of other applications. Also impressive, I think, is the fact that Robert Watson managed to write this sandbox in just two days, having never seen the Chromium codebase before – this is as much a testament to Robert’s coding skills and the clean Chromium codebase as it is to Capsicum, but nevertheless worth a mention.

Anyway, at USENIX Security this week, we won Best Student Paper. A PC member described the paper to me as “excellent” and “very important”. Robert has also blogged about it rather more eloquently than I can manage at this time in the morning.

You can read the paper, too, if you want.

Even more exciting, FreeBSD 9 will include the Capsicum capability framework, allowing the peaceful coexistence of capability and POSIX programs. Although this has been attempted before, as far as I am aware all previous versions have put a POSIX emulation layer on top of a capability system, rather than grafting capabilities onto POSIX. Since Capsicum is highly efficient and FreeBSD is a perfectly sound and portable system (and my server OS of choice), this opens up the possibility of a gradual migration to capabilities, something that has been a problem up to now.

Robert and I (and a host of others) are continuing our research into practical capability systems, Robert at Cambridge and me at Google. Work is also in progress to port Capsicum to Linux.

26 Jun 2010

Nigori Update

Filed under: Nigori — Ben @ 15:33

It’s been a while (I’ve been busy on another project, more on that soon, I hope), but finally…

I’ve updated the protocol slightly to correct a subtle bug in the secret splitting specification. You can find the latest versions and diffs here.

I’ve also finally got around to tidying the code up a bit (though there’s still plenty more to do); you can find an appspot server, a command-line client and various libraries, all in Python, at nigori.googlecode.com. As always, patches are welcome!

The code does not fully reflect the draft protocol yet – in particular, it still uses a Schnorr signature where the draft calls for DSA.

If you want to play with the command-line client, I already have a server running on appspot. Here’s how … from the client directory, run

$ ./client.sh nigori-server.appspot.com 80 register name password
200 OK

$ ./client.sh nigori-server.appspot.com 80 authenticate name password
200 OK

Replaying: this should fail
401 Unauthorized

$ ./client.sh nigori-server.appspot.com 80 add user password name secret
/usr/local/lib/python2.6/site-packages/Crypto/Util/randpool.py:40: RandomPool_DeprecationWarning: This application uses RandomPool, which is BROKEN in older releases.  See http://www.pycrypto.org/randpool-broken
  RandomPool_DeprecationWarning)
200 OK
Status: 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: no-cache
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Content-Length: 0


$ ./client.sh nigori-server.appspot.com 80 get user password name 
0 at 1277559350.600000: secret

Not the most elegant interface in the world. Note that the server is experimental, I may break it, delete all the data, etc. Of course, you can run your own.

Note also that the whole protocol is experimental at this point, I wouldn’t rely on it to store your vital passwords just yet!

9 Jun 2010

TLS Renegotiation, 7 Months On

Filed under: General,Security — Ben @ 9:18

It’s been 7 months since the TLS renegotiation problem went public, and Opera’s security group have a couple of interesting articles about it. The first is about adoption of patched versions, and the verdict is not good, as this graph shows…

Only 12% of servers are patched.

At this rate it will be two years before the fix is widely adopted!

The second is about version intolerance – scarily, nearly 90% of patched servers will not work when a future version of TLS bumps the major version number to 4 (it is currently 3). This is pretty astonishingly crap, and is likely to cause us problems in the future, so I’m glad the Opera guys are working hard to track down the culprits.

By the way, at least according to Opera, OpenSSL does not have this problem.
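
If you want to test a server of your own, the probe is easy enough to hand-roll: send a ClientHello claiming version 4.0 and see whether you get a ServerHello back (a tolerant server simply negotiates down to 3.x). A rough sketch – the handshake bytes are handcrafted for illustration:

import os
import socket
import struct

def probe(host, port=443):
    suites = b"\x00\x2f\x00\x35"  # two common AES cipher suites
    body = (b"\x04\x00"           # client_version: a notional TLS "4.0"
            + os.urandom(32)      # client random
            + b"\x00"             # empty session id
            + struct.pack("!H", len(suites)) + suites
            + b"\x01\x00")        # null compression only
    hello = b"\x01" + struct.pack("!I", len(body))[1:] + body
    record = b"\x16\x03\x01" + struct.pack("!H", len(hello)) + hello
    s = socket.create_connection((host, port), timeout=10)
    s.sendall(record)
    reply = s.recv(5)  # first byte: 0x16 = handshake, 0x15 = alert
    s.close()
    return reply

print(probe("www.example.com"))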

8 Jun 2010

XAuth: Who Should Know What?

Filed under: Anonymity,Privacy,Security — Ben @ 11:26

Note that I am not speaking for my employer in this post.

I’ve been following the debate around XAuth with interest. Whilst the debate about whether centralisation is an acceptable stepping stone to an in-browser service is interesting, I am concerned about the functionality of either solution.

As it stands, XAuth reveals to each relying party all of my identity providers, so that it can then present UI to allow me to choose one of them to authenticate to the RP. Why? What business of the RP is it where I have accounts? All that should be revealed is the IdP I choose to reveal (if any). This seems easy enough to accomplish, even in the existing centralised version: all that has to happen is for the script that xauth.org serves to include the UI for IdP choice.

This is not just privacy religion (or theatre): as the EFF vividly illustrated with their Panopticlick experiment, it is surprisingly easy to uniquely identify people from signals you would have thought were not at all identifying, such as browser version and configuration information. Indeed, a mere 33 IdPs would provide enough information (if evenly distributed) to uniquely identify every person in the world: presence or absence at each IdP is one bit, and 2^33 is around 8.6 billion, more than the world’s population. Meebo had no difficulty at all coming up with 15 of them for page one of many in their introductory blog post.

15 IdPs on page 1 of many

23 May 2010

Nigori: Protocol Details

As promised, here are the details of the Nigori protocol (text version). I intend to publish libraries in (at least) C and Python. At some point, I’ll do a Stupid version, too.

Comments welcome, of course, and I should note that some details are likely to change as we get experience with implementation.
