Links

Ben Laurie blathering

19 Oct 2010

Phished by Visa: The Aftermath

Filed under: Security — Ben @ 12:18

Well over a year ago I wrote about how stupid the Verified by Visa program is. Apparently the mainstream press have now caught up as fraudsters gear up to exploit this fantastic piece of security design. I particularly like the claim from a UK Cards Association representative that VbV reduces fraud (at around 2:30) – immediately after a victim explains that her bank refused to even investigate the possibility of fraud.

This is, of course, in line with the modern banking strategy for fraud: shift all blame to the customer.

2 Oct 2010

Aims not Mechanisms

Filed under: Privacy,Rants — Ben @ 22:18

I’m a big fan of the EFF, so it comes as a bit of a surprise when I see them say things that don’t make any sense.

A while back the EFF posted a bill of privacy rights for social network users. Whilst I totally sympathise with what the EFF is trying to say here, I’m disappointed that they head the way of policymakers by ignoring inconvenient technical reality and proposing absurd policies.

In particular, I refer you to this sentence:

The right to control includes users’ right to decide whether their friends may authorize the service to disclose their personal information to third-party websites and applications.

In other words, if I post something to a “social network” (whatever that is: yes, I have an informal notion of what it means, and I’m sure you do, too, but is, say, my blog part of a “social network”? Email?) then I should be able to control whether you, a reader of the stuff I post, can do so via a “third-party application”. For starters, as stated, this is equivalent to determining whether you can read my post at all in most cases, since you do so via a browser, which is a “third-party application”. If I say “no” to my friends using “third-party applications” then I am saying “no” to my friends reading my posts at all.

Perhaps, then, they mean specific third-party applications? So I should be able to say, for example, “my friends can read this with a browser, but not with evil-rebroadcaster-app, which not only reads the posts but sends them to their completely public blog”? Well, perhaps, but how is the social network supposed to control that? This is only possible in the fantasy world of DRM and remote attestation.

Do the EFF really want DRM? Really? I assume not. So they need to find a better way to say what they want. In particular, they should talk about the outcome and not the mechanism. Talking about mechanisms is exactly why most technology policy turns out to be nonsense: mechanisms change and there are far more mechanisms available than any one of us knows about, even those of us whose job it is to know about them. Policy should not talk about the means employed to achieve an aim, it should talk about the aim.

The aim is that users should have control over where their data goes, it seems. Phrased like that, this is clearly not possible, nor even desirable. Substitute “Disney” for “the users” and you can immediately see why. If you solve this problem, then you solve the DRM “problem”. No right-thinking person wants that.

So, it seems like EFF should rethink their aims, as well as how they express them.

26 Sep 2010

The Tragedy of the Uncommons

Filed under: Rants,Security — Ben @ 3:46

An interesting phenomenon seems to be emerging: ultra-hyped projects are turning out to be crap. I am, of course, speaking of Haystack and Diaspora (you should follow these links, I am not going to go over the ground they cover, much).

The pattern here is that some good self-promoters come up with a cool idea, hype it up to journalists, who cannot distinguish it from the other incomprehensible cool stuff we throw at them daily, who duly write about how it’ll save the world. The interesting thing is what happens next. The self-promoters now have to deliver the goods. But, for some reason, rather than enlisting the help of experts to assist them, they seem to be convinced that because they can persuade the non-experts with their hype they can therefore build this system they have been hyping. My instatheory[1] is that it’d dilute their fame if they shared the actual design and implementation. They’ve got to save the world, after all. Or we could be more charitable and follow Cialdini: it seems humans have a strong drive to be consistent with their past actions. Our heroes have said, very publicly, that they’re going to build this thing, so now they have a natural tendency to do exactly what they said[2].

But the end result, in my sample of two, is disastrous. Haystack has completely unravelled as fundamentally flawed. Diaspora seems to be deeply rooted in totally insecure design. I hope I am preaching to the choir when I say that security is not something that should be bolted on later, and that the best way to do security design is to have the design reviewed as widely as possible. In both Haystack and Diaspora’s cases that could, and should, have been a full public review. There is no excuse for this: it wastes a vast amount of enthusiasm and energy (and money) on ultimately destructive goals.

I don’t have any great ideas on how to fix this, though. Yes, reporters getting expert assistance will help. Many of the experts in the security field are quite outspoken, it isn’t hard to track them down. In Diaspora’s case, perhaps one could have expected that Kickstarter would take a more active role in guidance and mentoring. Or if they already do, get it right.

Natural selection gets you every time.

BTW, if any journalists are reading this, I am absolutely happy to take a call to explain, in English, technological issues.

[1] I love this word. Ben Hyde introduced me to it.

[2] This is known as “consistency” in the compliance trade.

14 Sep 2010

Experimenting With Client Certificates

Filed under: Crypto,Identity Management — Ben @ 16:30

I was recently contacted about yet another attempt to use client certificates for authentication. As anyone paying attention knows, this has some attractions but is pretty much unusable in browsers because of their diabolical UIs. So, I was fascinated to learn that this particular demo completely avoids that issue by implementing TLS entirely in Javascript! This strikes me as a hugely promising approach: now we have complete freedom to experiment with UI, whilst the server side can continue to use off-the-shelf software and standard configurations.

Once a UI has been found that works well, I would hope that it would migrate to be part of the browser; it seems pretty clear that doing this on the webpage is not likely to lead to a secure solution in the long run. But in the meantime, anyone can have a crack at their own UI, and all they need is Javascript (OK, for non-coders that might sound like a problem, but believe me, the learning curve is way shallower than any browser I’ve played with).

Anyway, pretty much end-of-message, except for some pointers.

I am very interested in finding competent JS/UI people who would be interested in banging harder on this problem – I can do all the crypto stuff, but I confess UI is not my forte! Anyone out there?

Note, by the way, that the focus on browsers as the “home of authentication” is also a barrier to change – applications also need to authenticate. This is why “zero install” solutions that rely on browsers (e.g. OpenID) are likely doomed to ultimate failure – by the time you’ve built all that into an application (which is obviously not “zero install”), you might as well have just switched it to using TLS and a client certificate…

13 Sep 2010

Wasting Public Money: Birth, Marriage and Death Digitisation

Filed under: Open Data — Ben @ 14:10

In 1998 a group of us started FreeBMD, a project to transcribe and make freely available the Birth, Marriage and Death records for England and Wales. The project has been wildly successful and 12 years on we have 250 million records in our database.

In the meantime the government has twice decided to spend a vast amount of taxpayers’ money duplicating our work. The first project, DoVE, was started in 2005. Three years and £8.5 million later, the project had transcribed 130 million records and was closed down. At no point in the process was FreeBMD contacted – not even to inform us that there was a tender open to do what we clearly were highly qualified to do. Nor were the transcribed records made freely available to those who had paid for them. Oh no, that wouldn’t be the thing to do at all – they were instead given to the GRO to sell.

Fast forward a few years and Big Brother is upon us. And I don’t mean the TV programme. In 2009 the Identity and Passport Service decided to try again. I’ll quote it here,

The D&I project is currently in a pause status as IPS awaits the outcome of the government’s Comprehensive Spending Review (CSR). It is possible that the outcome of the CSR will impact the overall scope of the project, as well as timescales and procurement activity.

since history shows that the government are not very good at preserving records[1]. Anyway, you’ll notice that it’s been suspended again, at what cost to the taxpayer I don’t know, perhaps someone out there does. Since the new government has decided to scrap identity cards, which were the driving force for this project (note: no public access to the transcription was planned) I am quietly confident that the outcome of the CSR will be to scrap the project. Again. Of course, they will call it “stalled” or “delayed” so when they next decide to waste our money on it they can revive it.

Anyway, let me go on record now and say this: FreeBMD will complete this transcription, without cost to the taxpayer, given access to the source records. There’s just one condition: we have to be able to publish the complete transcription, free of charge, on the Internet. Of course, it’ll go a bit faster if we do get some money, so I won’t say we wouldn’t accept if it were offered!

Of course, we’ve always been prepared to do this, but why would civil servants shaft their cronies by saving money in that way?

[1] All references to DoVE[2] seem to have been conveniently obliterated by a “move” of the GRO’s website, even though some of it is still hosted on the same website!

[2] Well, at least all references on this rather nice timeline I discovered while researching this post.

31 Aug 2010

Cod Chowder

Filed under: Food,Recipes — Ben @ 5:05

Chowder isn’t exactly rocket science, but this went pretty well, so documenting it here…

I actually made this almost entirely from frozen ingredients and it was just fine. Fresh might be better.

Finely chopped leek
Smoked bacon, sliced (I used some lardons I had in the freezer)
Cubed potatoes
Chicken stock (maybe fish stock would be better, I didn’t have any) or water
Milk (about half as much as stock)
Pepper
Mace
Cod
King prawns
Sweetcorn
Cream

Fry the leeks and bacon in a little butter/olive oil (I used both) until pretty soft – I didn’t crisp the bacon for a change. I think it is better for chowder not to. Add cubed potatoes and fry for a bit longer, then add chicken stock (or water or fish stock) and bring to the boil. Simmer until the potatoes have softened, then zap half the mixture with a blender (I just did this in situ). Season (I didn’t need salt, there was enough in the bacon). Add milk, fish, prawns and bring back up to a simmer, cook for a few minutes, making sure the fish falls apart. Add cooked sweetcorn and bring back up to temperature. Finally, add some cream.

Quantities should be chosen so that the final result is good and thick.

Serve with warm, crusty bread and butter. Works as a whole meal.

16 Aug 2010

It’s All About Blame

Filed under: Anonymity,Crypto,Privacy,Security — Ben @ 17:57

I do not represent my employer in this post.

Eric Schmidt allegedly said

“The only way to manage this is true transparency and no anonymity. In a world of asynchronous threats, it is too dangerous for there not to be some way to identify you. We need a [verified] name service for people. Governments will demand it.”

I don’t care whether he actually said it, but it neatly illustrates my point. The trouble with allowing policy makers, CEOs and journalists to define technical solutions is that their ability to do so is constrained by their limited understanding of the available technologies. At Google (who I emphatically do not represent in this post), we have this idea that engineers should design the systems they work on. I approve of this idea, so, speaking as a practising engineer in the field of blame (also known as security), I contend that what Eric really should have allegedly said was that the only way to manage this is true ability to blame. When something goes wrong, we should be able to track down the culprit. Governments will demand it.

Imagine if, the next time you got on a plane, instead of showing your passport you handed over an envelope with a fancy seal on it, containing your ID, with windows showing just enough to get you on the plane (e.g. your ticket number and photo). The envelope could be opened on the order of a competent court, should it turn out you did something naughty whilst travelling, but otherwise you would remain unidentified. Would this not achieve the true aim that Eric allegedly thinks should be solved by universal identification? And is it not, when spread to everything, a better answer?

Of course, in the physical world this is actually quite hard to pull off, tamper-proof and -evident seals being what they are (i.e. crap), but in the electronic world we can actually do it. We have the crypto.
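To make the “sealed envelope” concrete, here is a toy sketch in Python. It uses a plain salted hash commitment plus a “window” of revealed fields; a real design would encrypt the identity to a court’s key and prove consistency between window and envelope cryptographically, so take the function names and structure here as illustration only, not as a proposed protocol.

```python
import hashlib
import json
import os

def seal_identity(identity: dict, window_fields: set):
    """Commit to the full identity, reveal only a 'window' of fields.
    The salt blinds the commitment against brute-force guessing of the
    hidden fields; salt + identity would be held in escrow."""
    salt = os.urandom(16)
    blob = json.dumps(identity, sort_keys=True).encode()
    commitment = hashlib.sha256(salt + blob).hexdigest()
    window = {k: identity[k] for k in window_fields}
    return commitment, window, salt

def open_on_court_order(commitment: str, identity: dict, salt: bytes) -> bool:
    """On a court order, the escrowed material re-opens the seal: anyone
    can check it matches the commitment the traveller presented."""
    blob = json.dumps(identity, sort_keys=True).encode()
    return hashlib.sha256(salt + blob).hexdigest() == commitment

traveller = {"name": "Alice", "passport": "<elided>",
             "ticket": "XY123", "photo": "<hash of photo>"}
commitment, window, salt = seal_identity(traveller, {"ticket", "photo"})
assert "name" not in window            # the airline sees only the window
assert open_on_court_order(commitment, traveller, salt)  # the court can reopen
```

The point of the sketch is the asymmetry: day-to-day, only the window is visible; identification requires the escrowed opening, which is exactly where the court order fits.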

Just sayin’.

14 Aug 2010

FreeBSD Capsicum

Filed under: Capabilities,Security — Ben @ 12:34

I mentioned FreeBSD Capsicum in my roundup of capability OSes earlier this year without mentioning that I am involved in the project. Since then we’ve managed to port and sandbox Chromium, using less code than any other Chromium sandbox (100 lines), as well as a number of other applications. Also impressive, I think, is the fact that Robert Watson managed to write this sandbox in just two days, having never seen the Chromium codebase before – this is as much a testament to Robert’s coding skills and the clean Chromium codebase as it is to Capsicum, but nevertheless worth a mention.

Anyway, at USENIX Security this week, we won Best Student Paper. A PC member described the paper to me as “excellent” and “very important”. Robert has also blogged about it rather more eloquently than I can manage at this time in the morning.

You can read the paper, too, if you want.

Even more exciting, FreeBSD 9 will include the Capsicum capability framework, allowing the peaceful coexistence of capability and POSIX programs. Although this has been attempted before, as far as I am aware all previous versions have put a POSIX emulation layer on top of a capability system, rather than grafting capabilities onto POSIX. Since Capsicum is highly efficient and FreeBSD is a perfectly sound and portable system (and my server OS of choice), this opens up the possibility of a gradual migration to capabilities, something that has been a problem up to now.

Robert and I (and a host of others) are continuing our research into practical capability systems, Robert at Cambridge and me at Google. Work is also in progress to port Capsicum to Linux.

15 Jul 2010

Alternatives to Adium?

Filed under: Lazyweb — Ben @ 16:04

When I’m at home, I tend to use Pidgin for IM. When travelling, I generally use Adium. But Adium is driving me nuts: basically it is fantastically unstable. Empirically this appears to be related to the number of contacts, of which I have many (i.e. reducing the number makes it less crashy).

So … what can I use on MacOS that’s less crap than Adium but still supports OTR?

12 Jul 2010

Cabbage and Peas

Filed under: Recipes — Ben @ 11:15

I have a vague recollection of being served this somewhere, but I can’t remember where.

Smoked bacon
Cabbage (we used sweetheart, but I don’t think it is critical, savoy would probably be even nicer)
Frozen peas
Double cream

Slice the bacon thinly, fry in a little oil until crispy (at least, that’s what I’d do, my sous-chef decided to stop sooner than that and it was fine), chop cabbage into strips, add to the bacon+fat and braise (I found I didn’t need a lid at all, but you may – and even may need to add a little water, depending on the cabbage). When the cabbage is nearly done, add the frozen peas. As soon as they defrost, a generous gloop of double cream. Add salt at some point if the bacon isn’t too salty and pepper in any case.

We ate this with roast pork belly and roast potatoes. Yummy.

26 Jun 2010

Nigori Update

Filed under: Nigori — Ben @ 15:33

It’s been a while (I’ve been busy on another project, more on that soon, I hope), but finally…

I’ve updated the protocol slightly to correct a subtle bug in the secret splitting specification. You can find the latest versions and diffs here.

I’ve also finally got around to tidying the code up some (though there’s still plenty more to do), you can find an appspot server, a command line client and various libraries, all in Python, at nigori.googlecode.com. As always, patches are welcome!

The code does not fully reflect the draft protocol yet – in particular, it still uses a Schnorr signature where the draft calls for DSA.

If you want to play with the command-line client, I already have a server running on appspot. Here’s how … from the client directory, run

$ ./client.sh nigori-server.appspot.com 80 register name password
200 OK

$ ./client.sh nigori-server.appspot.com 80 authenticate name password
200 OK

Replaying: this should fail
401 Unauthorized

$ ./client.sh nigori-server.appspot.com 80 add user password name secret
/usr/local/lib/python2.6/site-packages/Crypto/Util/randpool.py:40: RandomPool_DeprecationWarning: This application uses RandomPool, which is BROKEN in older releases.  See http://www.pycrypto.org/randpool-broken
  RandomPool_DeprecationWarning)
200 OK
Status: 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: no-cache
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Content-Length: 0


$ ./client.sh nigori-server.appspot.com 80 get user password name 
0 at 1277559350.600000: secret

Not the most elegant interface in the world. Note that the server is experimental, I may break it, delete all the data, etc. Of course, you can run your own.

Note also that the whole protocol is experimental at this point, I wouldn’t rely on it to store your vital passwords just yet!

13 Jun 2010

FreeBMD Seeks An Executive Director

Filed under: General — Ben @ 15:35

I don’t often mention the FreeBMD project on this blog, perhaps I should. Anyway, we (the trustees of FreeBMD) have decided that it’s time to hire an Executive Director. It occurred to me that some of my readers might be interested, or might know someone who is.

9 Jun 2010

TLS Renegotiation, 7 Months On

Filed under: General,Security — Ben @ 9:18

It’s been 7 months since the TLS renegotiation problem went public and Opera’s security group have a couple of interesting articles about it. The first is about adoption of patched versions and the verdict is not good, as this graph shows…

Only 12% of servers are patched.

At this rate it will be two years before the fix is widely adopted!

The second is about version intolerance – scarily, nearly 90% of patched servers will not work when a future version of TLS bumps the major version number to 4 (it is currently 3). This is pretty astonishingly crap, and is likely to cause us problems in the future, so I’m glad the Opera guys are working hard to track down the culprits.
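Version intolerance is easy to see at the byte level: every TLS record leads with a (major, minor) version pair, and a server offered a version higher than it knows is supposed to negotiate down, not abort. A minimal sketch of the header layout (the constants are from the TLS specification; the helper name is mine):

```python
import struct

# TLS record header: content type (1 byte), protocol version
# (major, minor; 1 byte each), payload length (2 bytes, big-endian).
# SSL 3.0 = (3,0), TLS 1.0 = (3,1), TLS 1.1 = (3,2), TLS 1.2 = (3,3).
# A version-intolerant server aborts on, say, (4,0) instead of
# answering with the highest version it does support.
def record_header(major: int, minor: int, length: int,
                  content_type: int = 22) -> bytes:  # 22 = handshake
    return struct.pack("!BBBH", content_type, major, minor, length)

print(record_header(3, 1, 0).hex())  # 1603010000 — a TLS 1.0 handshake record
```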

By the way, at least according to Opera, OpenSSL does not have this problem.

8 Jun 2010

XAuth: Who Should Know What?

Filed under: Anonymity,Privacy,Security — Ben @ 11:26

Note that I am not speaking for my employer in this post.

I’ve been following the debate around XAuth with interest. Whilst the debate about whether centralisation is an acceptable stepping stone to an in-browser service is interesting, I am concerned about the functionality of either solution.

As it stands, XAuth reveals to each relying party all of my identity providers, so that it can then present UI to allow me to choose one of them to authenticate to the RP. Why? What business of the RP is it where I have accounts? All that should be revealed is the IdP I choose to reveal (if any). This seems easy enough to accomplish, even in the existing centralised version: all that has to happen is for the script that xauth.org serves to include the UI for IdP choice.

This is not just privacy religion (or theatre): as the EFF vividly illustrated with their Panopticlick experiment, it is surprisingly easy to uniquely identify people from signals you would have thought were not at all identifying, such as browser version and configuration information. Indeed, a mere 33 IdPs would provide enough information (if evenly distributed) to uniquely identify every person in the world. Meebo had no difficulty at all coming up with 15 of them for page one of many in their introductory blog post.

15 IdPs on page 1 of many
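The arithmetic behind the 33-IdP figure is just counting bits: each evenly distributed yes/no signal (“has an account at IdP X?”) halves the candidate population, so n such signals distinguish 2^n people. A quick check, using the rough 2010 world population:

```python
import math

population = 6_900_000_000  # rough world population in 2010

# If half the world has an account at a given IdP, knowing whether you
# do contributes one full bit of identifying information.
bits_needed = math.log2(population)
print(round(bits_needed, 1))  # 32.7

# So 33 evenly distributed yes/no signals are already enough:
print(2 ** 33 > population)   # True: 8,589,934,592 > 6,900,000,000
```

Real IdP membership is far from evenly distributed, so each signal carries less than a full bit, but the Panopticlick lesson is that a modest pile of such weak signals still narrows the field alarmingly fast.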

23 May 2010

Nigori: Protocol Details

As promised, here are the details of the Nigori protocol (text version). I intend to publish libraries in (at least) C and Python. At some point, I’ll do a Stupid version, too.

Comments welcome, of course, and I should note that some details are likely to change as we get experience with implementation.

18 May 2010

Nigori: Storing Secrets in the Cloud

Filed under: Crypto,Nigori,Security — Ben @ 18:27

Lately, I’ve been thinking about phishing. Again. If we want users to take our sensible advice and use different passwords everywhere, then they’ve got to be able to remember those passwords and move them from machine to machine. In order to do that with any ease, we’ve got to store them in the cloud. But the question is, how to do that securely?

So, that’s what I’ve been working on for a while, and the result is Nigori, a protocol and open source implementation for storing secrets in the cloud. It doesn’t require you to trust anyone (other than your completely insecure client, of course … I’m working on that, too). The storage server(s) are incapable of getting hold of the keying material, and if you want you can use splits to ensure that individual servers can’t even attack the encrypted secrets.
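The splitting idea can be as simple as an n-of-n XOR split: each share on its own is indistinguishable from random noise, so no single server learns anything about the secret. This is a generic sketch of the technique, not Nigori’s actual wire format:

```python
import os

def split_secret(secret: bytes, n: int) -> list:
    """n-of-n split: XOR the secret with n-1 one-time random pads.
    Every share is uniformly random in isolation; all n are needed
    to reconstruct."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    return shares + [last]

def combine_shares(shares: list) -> bytes:
    """XOR is its own inverse, so XORing all shares recovers the secret."""
    out = shares[0]
    for share in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, share))
    return out

shares = split_secret(b"keying material", 3)  # one share per storage server
assert combine_shares(shares) == b"keying material"
```

The price of n-of-n is availability: lose one server and the secret is gone, which is why threshold schemes (k-of-n) exist, at the cost of more machinery.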

Of course, Nigori isn’t just for passwords, you could also use it to store private keys and the like. For example, Salmon can use it to store signing keys.

The source is in a bit of a state right now, following some hack’n’slay related to appspot’s crypto … oddities, but I’ll post about that soon. For now, in case you missed it above, here’s an overview document.

10 May 2010

Programming Languages

Filed under: Programming — Ben @ 19:00

I don’t often go in for the reblogging thing, but this made me laugh out loud in several places. For example:

1986 – Brad Cox and Tom Love create Objective-C, announcing “this language has all the memory safety of C combined with all the blazing speed of Smalltalk.” Modern historians suspect the two were dyslexic.

25 Apr 2010

Wikileaks: The Facts

Filed under: Anonymity,Civil Liberties — Ben @ 18:17

Apparently some reporters think it’s useful to make stupid claims about Wikileaks. I won’t bother to link, but just in case you mistook them for journalism: for the record, I am a member of Wikileaks’ advisory board and I am honoured to be. I don’t think Julian Assange is crazy, I think he’s a very talented guy. Yeah, he’s a little unusual, but that just adds to the fun. It is true, however, that I don’t know anything about how Wikileaks operates in detail and it is also true that I think that’s a good idea.

If you don’t know what I’m talking about, I hear there’s a search engine that might help. Or you could do something useful with your time.

9 Apr 2010

Stir-fried Trout and Almond

Filed under: Recipes — Ben @ 11:15

My son complained that he didn’t like fish and to compensate he demanded we stir-fry it. Here’s what I invented…

Trout fillets, with skin (I think we used river trout)
Ginger
Spring onions
Dark soy
Sugar
Cornflour
Almonds

Cut the trout fillets into strips about 1/2″ wide. Marinate in dark soy, finely chopped ginger and spring onions and a little sugar. While it is marinating, stir-fry the almonds (a little oil and a very low heat works best, don’t pause or they burn, and burnt nut is very bitter). Remove the almonds, add some cornflour to the marinade, then get a little oil smoking hot. Cook the strips of trout, salvaged one at a time from the marinade, for a couple of minutes a side. Do them in batches so they crisp up nicely (especially the skin side). Once they’re all done, quickly stir-fry them with the almonds and a little more soy. I resisted the temptation to throw the extra marinade into the dish at this point. Turn off the heat, mix in yet more finely chopped spring onion and a little sesame oil. As usual, no quantities, but I recommend a _lot_ of spring onions: use about half in the marinade and half at the end (I used 10 spring onions for 6 trout fillets).

Serve with rice, of course, and a vegetable (I did leeks).

27 Mar 2010

Capability Operating Systems

Filed under: Caja,Capabilities,Security — Ben @ 16:31

Now that we’ve deployed the most successful capability language ever, it’s time to start thinking about the rest of the stack, and one end of that stack is the operating system.

Luckily, thinking in this area has been going on a long time – indeed, capabilities were invented in the context of the OS, though for a long time were thought to be the exclusive domain of specialised hardware. Some of that hardware ended up being extremely widely deployed, too, so don’t think this is the stuff of lab experiments only. Sadly, though, despite the hardware-supported capabilities, these were not generally exposed up to the level of the kernel/userland interface; they were thought to be useful only within the kernel (with one notable, but not very well known or widely used, exception).

However, more recently it has been realised that capabilities are not only useful in userland, but also can be implemented on top of commodity hardware, resulting in a crop of new capability operating systems. But these still suffer from the problems that traditional capability languages have suffered from: they need the world to be completely reinvented before you can use them. Because the capability paradigm is fundamentally different from the ambient authority ACL-based world we live in, no existing software can fully enjoy the benefits of capabilities without at least some rewriting.
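The gap between ambient authority and capability discipline shows up even in miniature: in the first style, code names a global resource itself; in the second, it can only use references it has explicitly been handed. Python file objects stand in for capabilities here purely as an illustration of the paradigm, not as any particular OS’s API:

```python
import os
import tempfile

# Ambient authority: the function reaches out and names a global
# resource itself. Any code in the process can do the same to any
# file the process can access, so confining it is hopeless.
def log_ambient(path: str, msg: str) -> None:
    with open(path, "a") as f:
        f.write(msg + "\n")

# Capability style: the function receives an already-open file object,
# an unforgeable reference. It can write to that file but cannot name
# or open anything else; confining it means controlling what you pass.
def log_capability(logfile, msg: str) -> None:
    logfile.write(msg + "\n")

path = os.path.join(tempfile.mkdtemp(), "app.log")
with open(path, "a") as f:
    log_capability(f, "hello")   # authority flows in via the argument
log_ambient(path, "world")       # authority is grabbed from the environment
with open(path) as f:
    print(f.read())              # hello / world
```

This is exactly why existing software needs at least some rewriting: code written in the first style has the “name whatever you like” assumption baked into every `open()` call.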

So, the interesting research question has now become: how can we move toward this world without having to rewrite everything on day one? Some progress has been made with mapping POSIX onto capabilities. Heading in a completely different direction is the idea of running existing OSes as guests on a capability system. Yet another approach is to apply capabilities to more restricted domains: one that I have been involved in is the idea of hosting untrusted software “in the cloud”, in the same vein as Google App Engine. Because this software is all new, changing the way it has to work is not a big deal.

But the thing that interests me most is the work being done on FreeBSD, which allows capability-based code to coexist with (or even be contained within) existing POSIX code. This provides a genuine, believably workable, migration path from existing systems to a brave new capability world. We can move one application (or even one library) at a time, without breaking anything. Which is why I am pleased to be able to say I am involved in this work, too. What’s even better is this work is by no means specific to FreeBSD – the same principles could be applied to any POSIX system (so Linux and Mac OS X would be good targets). Just as we have seen success with Caja it seems to me that this route can deliver success at the OS level, because it allows a gradual, piecemeal migration.

Unusually for me, I have not interrupted my narrative flow by naming or saying too much directly about the various things I link to – however, I appreciate that following links in the middle of reading can get distracting, so here are many of the links again with some explanation…

Caja: a capability version of Javascript. I have written about it before.

CAP computer: the project that invented capabilities.

IBM System/38: more capability hardware.

AS/400: derived from the System/38. Although this had capabilities, they were not exposed to userland. Very widely used commercially.

KeyKOS: a commercial capability operating system.

Amoeba: an experimental capability system – like Caja, it tends to advertise its other virtues rather than describing itself as a capability system.

EROS: another experimental capability OS – originally intended to provide robustness, not security. The first to run on a standard PC.

CapROS: when EROS was discontinued, it lived on as CapROS. Google has recently sponsored the development of a web-hosting experiment on top of CapROS.

Coyotos: by the original designer of EROS. Now also discontinued (can you spot a trend here?).

Plash: the Principle of Least Authority Shell. This shell runs on Linux, figures out from the command line what any particular invocation of an executable should have access to, creates a sandbox with access to only those things, then maps POSIX calls onto the sandboxed things.

L4: a modern capability-based microkernel.

L4Linux: Linux running on top of L4. Although this is nice for things like driver isolation, it seems like the wrong direction because it does not assist with exposing capabilities to userland.

FreeBSD Capsicum: a capability mode for FreeBSD. Whole executables can opt in to this mode, coexisting with POSIX binaries. Even more interestingly, libraries can spawn off capability-mode subprocesses whilst effectively remaining in POSIX mode themselves. This allows the transparent implementation of privilege separation. This project has also been sponsored by Google.

