Ben Laurie blathering

26 May 2008

Preprint: The Neb

Filed under: Security — Ben @ 18:38

Actually, this paper has a longer and sillier title, “Choose the Red Pill and the Blue Pill”. It was born at NDSS in a conversation with Abe Singer, my co-author.

The basic idea is that we have a choice between an operating system we can trust and one that is usable. A trustable system would be very bland and grey (the red pill) and a usable system would be full of fun and colour – but security would be a fantasy (the blue pill). In the paper, we discuss how to have your cake and eat it.

“The Neb” is the secure operating system we propose (short, of course, for the Nebuchadnezzar), btw.

23 May 2008

Preprint: (Under)mining Privacy in Social Networks

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 15:11

Actually, I’m not sure if this one ends up in print or not. But anyway, I think its content is obvious from the title.

My colleagues Monica Chew and Dirk Balfanz did all the hard work on this paper.

22 May 2008

Preprint: Access Control

Filed under: Capabilities,Programming,Security — Ben @ 17:07

I have three currently unpublished papers that may be of interest. This one has been submitted but not yet accepted. As you can guess from the title, it’s about access control, particularly in the area of mashups, gadgets and web applications.

This is the introduction:

Access control is central to computer security. Traditionally, we wish to restrict the user to exactly what he should be able to do, no more and no less.

You might think that this only applies to legitimate users: where do attackers fit into this worldview? Of course, an attacker is a user whose access should be limited just like any other. Increasingly, of course, computers expose services that are available to anyone — in other words, anyone can be a legitimate user.

As well as users there are also programs we would like to control. For example, the program that keeps the clock correctly set on my machine should be allowed to set the clock and talk to other time-keeping programs on the Internet, and probably nothing else\footnote{Perhaps it should also be allowed a little long-term storage, for example to keep its calculation of the drift of the native clock.}.

Increasingly we are moving towards an environment where users choose what is installed on their machines, where their trust in what is installed is highly variable\footnote{A user probably trusts their
operating system more than their browser, their browser more than the pages they browse to and some pages more than others.} and where “installation” of software is an increasingly fluid concept,
particularly in the context of the Web, where merely viewing a page can cause code to run.

In this paper I explore an alternative to the traditional mechanisms of roles and access control lists. Although I focus on the use case of web pages, mashups and gadgets, the technology is applicable to all access control.

And the paper is here.

Regular readers will not be surprised to hear I am talking about capabilities.
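The clock example from the introduction can be sketched in object-capability style: instead of running with ambient authority, the time-keeping program is handed exactly the objects it needs and can reach nothing else. This is my own illustrative Python sketch, not code from the paper; all the names here are invented.

```python
class Clock:
    """The system clock; setting it is a privileged operation."""
    def __init__(self):
        self.now = 0

class DriftStore:
    """A little long-term storage, as the paper's footnote suggests."""
    def __init__(self):
        self.drift = 0

def make_timekeeper(clock, store):
    """Grant the timekeeper exactly two capabilities. The returned
    function closes over them and has no way to name the filesystem,
    the network stack, or any other object in the system."""
    def sync(network_time):
        store.drift = network_time - clock.now  # record observed drift
        clock.now = network_time                # the one privileged act
    return sync

clock, store = Clock(), DriftStore()
sync = make_timekeeper(clock, store)
sync(1000)
```

The contrast with ACLs: an access control list asks "who is this user and what are they allowed to do?", while a capability asks "what references has this code actually been given?" — a better fit for mashups and gadgets, where the principal is a piece of code rather than a person.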

21 May 2008

Modern Mail Clients

Filed under: General,Lazyweb — Ben @ 12:54

Way back when, I used to use Pine to read my email. After it had marked everything I read as unread again once too many times (admittedly not entirely its fault, but it did leave everything ’til the last minute), I switched to, well, something else. I don’t remember exactly what. But after a long series of experiments I ended up with Thunderbird, which I mostly like – or at least hate less than all other clients I’ve tried.

But, it really doesn’t handle big mailboxes very well. I’m lazy when it comes to tidying up, as my wife will testify, and so I tend to find myself with 100,000 read messages lying around and a similar number unread.

Thunderbird can read mailboxes like that (which is an improvement – earlier versions couldn’t), but it really doesn’t handle deleting them very well. Select even a small number, like a thousand or so, and hit delete, and watch Thunderbird go away for a very long sleep.

In the end I had to go back to Pine to tidy my mailbox. Incidentally, I tried mutt, but it couldn’t handle more than a few thousand messages at a time. Pine seems to manage whatever I throw at it, though its UI can only be described as arcane.

So, my question to the lazyweb: is there an answer to this? A modern open source client that can do graphical stuff, is nice to use and can handle big IMAP mailboxes? Or is my Thunderbird/Pine hybrid as good as it gets?

20 May 2008

Picnic Cous Cous

Filed under: Recipes — Ben @ 9:42

I invented this to eat at Glyndebourne, doncha know.

Olive oil
Onion, chopped
Cumin, dry fried and ground
Coriander, dry fried and ground
Ginger, finely chopped
Chicken breast (might also be nice with thigh), roughly chopped
Dried apricots, soaked
Raisins
Chicken stock
Pistachios, shelled and roasted
Pine nuts, roasted
Coriander leaf
Parsley
Cous cous

Although I describe the nuts as roasted I actually did them in a dry frying pan over a low heat, stirring constantly. Likewise the spices (of which you need a lot).

Gently fry the onions in olive oil until soft. Add the ground spices and ginger, increase the heat a little, stir and fry for a couple of minutes. Add the chopped chicken breast and cook, stirring occasionally, until mostly done. Add the liquid from the soaked apricots, roughly chop the apricots and add them, plus the raisins and a little concentrated chicken stock (I use a liquid concentrate). If there isn’t enough water, then add some more, but you don’t need to even cover the chicken. Salt and pepper to taste. Bring to the boil, cover and simmer, stirring occasionally and breaking up the larger chicken pieces with your spoon. After about 30 minutes, turn off and leave to cool (overnight if you wish).

Then prepare the cous cous according to its instructions. Once it is ready, mix in the cooled (and thickened, there should be no free liquid by now) chicken stuff, the nuts, lots of chopped coriander and a little chopped parsley. Then wrap it up and take it to your picnic. We had a green salad and a potato salad with it.

I was planning to offer lime wedges for squeezing over it, but I forgot.

Also nice microwaved if you have leftovers.

15 May 2008

Exploiting Network Cards

Filed under: Security — Ben @ 16:07

A friend of mine, Arrigo Triulzi (no web page that he wants to admit to), has just posted this fantastically scary missive to the Robust Open Source mailing list (no public archive, so I will quote it in its entirety):

I’ve been working on firmware for the past two and a bit years, in particular in the field of firmware viruses.

Without needlessly boring everyone with the various steps allow me to share an interesting observation: drivers often assume the hardware is misbehaved but never malicious. It is fascinating to discover what can be done by making the hardware malicious.

Summarising briefly my work, as yet unpublished except the obligatory notices to the affected vendors (in what follows please read NIC as strictly wired, no wireless cards):

1) there are remarkably naive “protection” methods to prevent malicious users from overwriting NIC firmware with something of their choice,

2) as an extension to 1) above it is amazing to discover how simply firmware can be updated over the wire on specific NICs,

3) from 1 & 2 above, after about two years, I’ve reached my goal of writing a totally transparent firewall bypass engine for those firewalls which are PC-based: you simply overwrite the firmware in both NICs and then perform PCI-to-PCI transfers between the two cards for suitably formatted IP packets (modern NICs have IP “offload engines” in hardware and therefore can trigger on incoming and outgoing packets). The resulting “Jedi Packet Trick” (sorry, couldn’t resist) fools, amongst others, CheckPoint FW-1, Linux-based Strongwall, etc. This is of course obvious as none of them check PCI-to-PCI transfers,

4) I have extended the technique to provide VM escape support: one writes packets from a bridged guest into the network which initiates the NIC firmware update, updates the firmware and then the NIC firmware is used to inject code into the underlying VM host. The requirement to write to the network is then dropped as all that is required is the pivoting in the NIC firmware.

This scares the crap out of me, just as it stands. But he’s missed a trick, IMO: because of the nature of the PCI bus, you can use the same technique on any machine with a vulnerable NIC to read all of RAM. You might even be able to read disk, too, depending on the disk controller.

Oh boy, this is going to be a can of worms once exploits start appearing (if they haven’t already, that is).

Debian and OpenSSL: The Last Word?

Filed under: Open Source,Programming,Security — Ben @ 15:59

I am reliably informed that, despite my previous claim, at least one member of the OpenSSL team does read openssl-dev religiously. For which he should be commended. I read it sometimes, too, but not religiously.

So, forget I said that you don’t reach the OpenSSL developers by posting on openssl-dev.

14 May 2008

Debian and OpenSSL: The Aftermath

Filed under: Open Source,Programming,Security — Ben @ 10:09

There have been an astonishing number of comments on my post about the Debian OpenSSL debacle; clearly this is a subject people have strong feelings about. But some of the points raised need addressing, so here we go.

Firstly, many, many people seem to think that I am opposed to removing the use of uninitialised memory. I am not. As has been pointed out, this leads to undefined behaviour – and whilst that’s probably not a real issue given the current state of compiler technology, I can certainly believe in a future where compilers are clever enough to work out that on some calls the memory is not initialised and take action that might be unfortunate. I would also note in passing that my copy of K&R (second edition) does not discuss this issue, and ISO/IEC 9899, which some have quoted in support, rather post-dates the code in OpenSSL. To be clear, I am now in favour of addressing this issue correctly.

And this leads me to the second point. Many people seem to be confused about what change was actually made. There were, in fact, two changes. The first concerned a function called ssleay_rand_add(). As a developer using OpenSSL you would never call this function directly, but it is usually (unless a custom PRNG has been substituted, as happens in FIPS mode, for example) called indirectly via RAND_add(). This call is the only way entropy can be added to the PRNG’s pool. OpenSSL calls RAND_add() on buffers that may not have been initialised in a couple of places, and this is the cause of the valgrind warnings. However, rather than fix the calls to RAND_add(), the Debian maintainer instead removed the code that added the buffer handed to ssleay_rand_add() to the pool. This meant that the pool ended up with essentially no entropy. Clearly this was a very bad idea.

The second change was in ssleay_rand_bytes(), a function that extracts randomness from the pool into a buffer. Again, applications would access this via RAND_bytes() rather than directly. In this function, the contents of the buffer before it is filled are added to the pool. Once more, this could be uninitialised. The Debian developer also removed this call, and that is fine.
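To make the mechanics concrete, here is a toy model of the pool in Python. This is my own sketch of the structure, not OpenSSL's actual code: add() plays the role of ssleay_rand_add() (mixing fresh entropy into the state) and bytes() the role of ssleay_rand_bytes() (stretching the state into output). The flag models the effect of the first Debian change.

```python
import hashlib

class ToyPool:
    """A toy entropy pool, purely for illustration."""
    def __init__(self, mixing_enabled=True):
        self.state = b"\x00" * 32
        self.mixing_enabled = mixing_enabled

    def add(self, data):
        # The mixing step: fold new material into the pool state.
        # This is what the Debian patch effectively removed.
        if self.mixing_enabled:
            self.state = hashlib.sha256(self.state + data).digest()

    def bytes(self, n):
        # Stretch the current state into n output bytes.
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(self.state + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

# With mixing intact, different seeds give different output.
a, b = ToyPool(), ToyPool()
a.add(b"seed one"); b.add(b"seed two")
assert a.bytes(16) != b.bytes(16)

# With mixing removed, every machine produces identical output,
# no matter how much "entropy" is fed in.
c, d = ToyPool(mixing_enabled=False), ToyPool(mixing_enabled=False)
c.add(b"seed one"); d.add(b"seed two")
assert c.bytes(16) == d.bytes(16)
```

The second assertion is the whole disaster in miniature: once the mixing line is gone, feeding entropy to the pool is a no-op.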

The third point: several people have come to the conclusion that OpenSSL relies on uninitialised memory for entropy. This is not so. OpenSSL gets its entropy from a variety of platform-dependent sources. Uninitialised memory is merely a bonus source of potential entropy, and is not counted as “real” entropy.

Fourthly, I said in my original post that if the Debian maintainer had asked the developers, then we would have advised against such a change. About 50% of the comments on my post point to this conversation on the openssl-dev mailing list. In this thread, the Debian maintainer states his intention to remove for debugging purposes a couple of lines that are “adding an unintialiased buffer to the pool”. In fact, the first line he quotes is the first one I described above, i.e. the only route to adding anything to the pool. Two OpenSSL developers responded, the first saying “use -DPURIFY” and the second saying “if it helps with debugging, I’m in favor of removing them”. Had they been inspired to check carefully what these lines of code actually were, rather than believing the description, then they would, indeed, have noticed the problem and said something, I am sure. But their response can hardly be taken as unconditional endorsement of the change.

Fifthly, I said that openssl-dev was not the way to ensure you had the attention of the OpenSSL team. Many have pointed out that the website says it is the place to discuss the development of OpenSSL, and this is true, it is what it says. But it is wrong. The reality is that the list is used to discuss application development questions and is not reliably read by the development team.

Sixthly, my objection to the fix Debian put in place has been misunderstood. The issue is not that they did not fully reverse their previous patch – as I say above, the second removal is actually fine. My issue is that it was committed to a public repository five days before an advisory was issued. Only a single attacker has to notice that and realise its import in order to start exploiting vulnerable systems – and I will be surprised if that has not happened.

I think that’s about enough clarification. The question is: what should we do to avoid this happening again? Firstly, if package maintainers think they are fixing a bug, then they should try to get it fixed upstream, not fix it locally. Had that been done in this case, none of this would have happened. Secondly, it seems clear that we (the OpenSSL team) need to find a way for people to reliably communicate with us in these kinds of cases.

The problem with the second is that there are a lot of people who think we should assist them, and OpenSSL is spectacularly underfunded compared to most other open source projects of its importance. No-one that I am aware of is paid by their employer to work full-time on it. Despite the widespread use of OpenSSL, almost no-one funds development on it. And, indeed, many commercial companies who absolutely depend on it refuse to even acknowledge publicly that they use it, despite the requirements of the licence, let alone contribute towards it in any way.

I welcome any suggestions to improve this situation.

Incidentally, some of the comments are not exactly what I would consider appropriate, and there’s a lot of repetition. I moderate comments on my blog, but only to remove spam (and the occasional cockup, such as people posting twice, not realising they are being moderated). I do not censor the comments, so don’t blame me for their content!

13 May 2008

Vendors Are Bad For Security

Filed under: Open Source,Programming,Security — Ben @ 14:09

I’ve ranted about this at length before, I’m sure – even in print, in O’Reilly’s Open Sources 2. But now Debian have proved me right (again) beyond my wildest expectations. Two years ago, they “fixed” a “problem” in OpenSSL reported by valgrind[1] by removing any possibility of adding any entropy to OpenSSL’s pool of randomness[2].

The result of this is that for the last two years (from Debian’s “Etch” release until now), anyone doing pretty much any crypto on Debian (and hence Ubuntu) has been using easily guessable keys. This includes SSH keys, SSL keys and OpenVPN keys.
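The reason “easily guessable” is fatal: with the mixing removed, the per-process variation left in the pool was reportedly little more than the process ID, leaving only tens of thousands of possible keys per key type – a space an attacker can enumerate and precompute in full. A toy illustration (weak_key() here is an invented stand-in, not OpenSSL's real key generation):

```python
import hashlib

def weak_key(pid):
    # Stand-in for key generation whose only "entropy" is a 15-bit PID.
    # Illustrative only; real attacks precomputed the actual weak keys.
    return hashlib.sha256(pid.to_bytes(2, "big")).digest()

def crack(public_key):
    # Enumerate every possible PID until the derived key matches.
    for pid in range(1, 32768):
        if weak_key(pid) == public_key:
            return pid
    return None

victim = weak_key(12345)
assert crack(victim) == 12345  # recovered in at most 32767 tries
```

A keyspace that small falls to a laptop in seconds; against SSH or SSL it means an attacker can simply try each candidate key until one works.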

What can we learn from this? Firstly, vendors should not be fixing problems (or, really, anything) in open source packages by patching them locally – they should contribute their patches upstream to the package maintainers. Had Debian done this in this case, we (the OpenSSL Team) would have fallen about laughing, and once we had got our breath back, told them what a terrible idea this was. But no, it seems that every vendor wants to “add value” by getting in between the user of the software and its author.

Secondly, if you are going to fix bugs, then you should install this maxim of mine firmly in your head: never fix a bug you don’t understand. I’m not sure I’ve ever put that in writing before, but anyone who’s worked with me will have heard me say it multiple times.

Incidentally, while I am talking about vendors who are bad for security, it saddens me to have to report that FreeBSD, my favourite open source operating system, are also guilty. Not only do they have local patches in their ports system that should clearly be sent upstream, but they also install packages without running the self-tests. This has bitten me twice by installing broken crypto, most recently in the py-openssl package.

[1] Valgrind is a wonderful tool, I recommend it highly.

[2] Valgrind tracks the use of uninitialised memory. Usually it is bad to have any kind of dependency on uninitialised memory, but OpenSSL happens to include a rare case when it’s OK, or even a good idea: its randomness pool. Adding uninitialised memory to it can do no harm and might do some good, which is why we do it. It does cause irritating errors from some kinds of debugging tools, though, including valgrind and Purify. For that reason, we do have a flag (PURIFY) that removes the offending code. However, the Debian maintainers, instead of tracking down the source of the uninitialised memory, chose to remove any possibility of adding memory to the pool at all. Clearly they had not understood the bug before fixing it.

P.S. I’d link to the offending patch in Debian’s source repository. If I could find a source repository. But I can’t.


Thanks to Cat Okita, I have now found the repo. Here’s the offending patch. But I have to admit to being astonished again by the fix, which was committed five days before the advisory! Do these guys have no clue whatsoever?

12 May 2008

The World Without “Identity” or “Federation” is Already Here

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 12:24

My friend Alec Muffett thinks we should do away with “Big I” Identity. I’m all for that … but Alec seems to be quite confused.

Firstly, his central point, that all modern electronic identity requires the involvement of third parties, is just plain wrong. OpenID, which he doesn’t mention, is all about self-asserted identity – I put stuff on webpages I own and that’s my identity. Cardspace, to the extent it is used at all, is mostly used with self-signed certificates – I issue a new one for each site I want to log in to, and each time I visit that site I prove again that I own the corresponding private key. And, indeed, this is a pretty general theme through the “user-centric” identity community.

Secondly, the idea that you can get away with no third party involvement is just unrealistic. If everyone were honest, then sure, why go beyond self-assertion? But everyone is not. How do we deal with bad actors? Alec starts off down that path himself, with his motorcycling example: obviously conducting a driving test on the spot does not scale well – when I took my test, it took around 40 minutes to cover all the aspects considered necessary to establish sufficient skill, and I’d hesitate to argue that it could be reduced. The test used to be much shorter, and the price we paid was a very high death rate amongst young motorcyclists; stronger rules have made big inroads into that statistic. It is not realistic to expect either me or the police to spend 40 minutes establishing my competence every time it comes into question. Alec appears to recognise this problem by suggesting that the officer might instead rely on the word of my local bike club. But this has two problems: firstly, I am now relying on a third party (the club) to certify me, which is exactly counter to Alec’s stated desires; and secondly, how does one deal with clubs whose only purpose is to certify people who actually should not be allowed to drive (because they’re incompetent or dangerous, for example)?

The usual answer one will get at this point from those who have not worked their way through the issues yet is “aha, but we don’t need a central authority to fix this problem, instead we can rely on some kind of reputation system”. The trouble is no-one has figured out how you build a reputation system in cyberspace (and perhaps in meatspace, too) that is not easily subverted by people creating networks of “fake” identities purely in order to boost their own reputations – at least, not without some kind of central authority attesting to identity.
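The subversion is cheap to demonstrate. Here is my own toy sketch of a naive average-based reputation scheme (not any real deployed system) and the Sybil attack that defeats it: without an authority vouching for identities, fake accounts cost the attacker nothing.

```python
from collections import defaultdict

ratings = defaultdict(list)  # identity -> scores received from others

def rate(target, score):
    ratings[target].append(score)

def reputation(identity):
    # Naive scheme: reputation is just the average of all ratings,
    # with every identity's vote counted equally.
    r = ratings[identity]
    return sum(r) / len(r) if r else 0.0

# Honest users give the scammer the score he deserves...
rate("scammer", 1)
rate("scammer", 1)

# ...but he mints a thousand fake identities that all rate him 5.
for _ in range(1000):
    rate("scammer", 5)

assert reputation("scammer") > 4.9  # fakes drown out honest ratings
```

Weighting votes by the rater's own reputation doesn't save the scheme either: the fake identities can first rate each other up, which is exactly the "networks of fake identities" problem above.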

Yet another issue that has to be faced is what to do about negative attributes (e.g. “this guy is a bad risk, don’t lend him money because he never pays it back”). No-one is going to willingly make those available to others. Once more, we end up having to invoke some kind of authority.

Of course, there are many cases where self-assertion is perfectly fine, so I have no argument with Alec there. And yes, there is a school of thought that says any involvement with self-issued stuff is a ridiculous idea, but you mostly run into that amongst policy people, who like to think that we’re all too stupid to look after ourselves, and corporate types who love silos (we find a lot of those in the Liberty Alliance and the ITU and such-like places, in my experience).

But the bottom line is that a) what he wants is insufficient to completely deal with the problems of identity and reputation and b) it is nothing that plenty of us haven’t been saying (and doing) all along – at least where it works.

Once you’ve figured that out, you realise how wrong

I am also here not going to get into the weirdness of Identity wherein the goal is to centralise your personal information to make management of it convenient, and then expend phenomenal amounts of brainpower implementing limited-disclosure mechanisms and other mathematica, in order to re-constrain the amount of information that is shared; e.g. “prove you are old enough to buy booze without disclosing how old you are”. Why consolidate the information in the first place, if it’s gonna be more work to keep it secret henceforth? It’s enough to drive you round the twist, but it’ll have to wait for a separate rant.

is. Consolidation is not what makes it necessary to use selective disclosure – that is driven by the need for the involvement of third parties. Obviously I can consolidate self-asserted attributes without any need for selective disclosure – if I want to prove something new or less revealing, I just create a new attribute. Whether it’s stored “centrally” (what alternative does Alec envision, I wonder?) or not is entirely orthogonal to the question.

Incidentally, the wit that said “Something you had, Something you forgot, Something you were” was the marvellous Nick Mathewson, one of the guys behind the Tor project. Also, Alec, if you think identity theft is fraud (as I do), then I recommend not using the misleading term preferred by those who want to shift blame, and call it “identity fraud” – in fraud, the victim is the person who believes the impersonator, not the person impersonated. Of course the banks would very much like you to believe that identity fraud is your problem, but it is not: it is theirs.

5 May 2008

Petition Against Unfair Motorcycle Tax

Filed under: Motorbikes — Ben @ 12:37

Not much I can add to the petition’s own words! Sign up here.

Changes to the law mean cars emitting less than 100g of CO2 per kilometre travelled would be exempt from paying Vehicle Excise Duty (road tax), while motorcycles are still required to pay.

This was outlined by your Chancellor Alistair Darling in his first budget last week, under the auspices of rewarding motorists for driving ‘green’ vehicles.

Despite Darling’s aim, the rate of road tax paid by motorcyclists is set to double in 2009, with the annual charge for a typical 125cc commuter bike set to grow from £15 per year at present, to £33 in 2009.

This makes a nonsense of the revised rates of vehicle excise duty, as motorcycles tend to emit less CO2 and use less fuel than cars, with the average CO2 output from motorcycles at 110g/km.

So why do those who ride greener two-wheeled vehicles, use less road space and do not contribute to congestion get penalised, whilst four-wheeled motorists whose vehicles emit under 100g/km are exempt from road tax?
