Ben Laurie blathering

12 Jun 2012

Venison and Chestnut Stew

Filed under: Recipes — Ben @ 14:57

Cooked this last night, it was pretty yummy.

venison (not sure what cut, we used a casserole mix from here)
onion, chopped
bacon, chopped
flour
red wine
chicken stock
carrot
celery
bouquet garni
chestnuts (vacuum-packed are fine)
garlic
jelly (e.g. redcurrant, medlar)

Fry the chopped onion and bacon for a while. Add the venison chunks and fry until browned (note that proper recipes give you a lot of hooey about coating in flour and frying in batches – whilst I am as much a fan of the Maillard reaction as the next guy, I don’t think it is necessary to do it to every bit of meat in the whole stew, and nor can I discern an advantage in burning the flour). Unless your hob is a lot hotter than mine, what will almost certainly happen is it’ll brown a bit and then release a bunch of juices which it’ll boil in. If you want more browning, then give in to convention and fry in small batches. Anyway, once it’s boiled a bit, throw in some flour, stir until mixed, then add wine and stock, about half and half, to cover the meat. Add thinly sliced carrot, celery and the bouquet garni at this point.

Leave to boil gently for 1 hour, then add the chestnuts (I used vacuum-packed ones). I added some crushed garlic, too, because I forgot it earlier. Add the medlar/redcurrant/whatever jelly, too (I used both of those). Boil for another hour. Eat.

I served it with herby mashed potato and leek and peas in saffron cream. Pretty damn good.

22 May 2012

Factoring RSA

Filed under: Crypto,Open Source,Security — Ben @ 13:13

Apparently I have not blogged about factoring weak RSA keys before. Well, I guess I have now 🙂

One thing I’ve been wondering ever since that research was done is: is there anything OpenSSL could do about this? I’ve been assuming OpenSSL was used to generate at least some of those keys.

So, I was interested to read this analysis. First off, it shows that it is highly likely that the bad keys were generated by OpenSSL and one other (proprietary) implementation. However, I have to argue with some details in an otherwise excellent writeup.

Firstly, this canard irritates me:

Until version 0.9.7 (released on Dec 31, 2002) OpenSSL relied exclusively on the /dev/urandom source, which by its very definition is non-blocking. If it does not have enough entropy, it will keep churning out pseudo-random numbers possibly of very poor quality in terms of their unpredictability or uniqueness.

By definition? Whose definition? When did Linux man pages become “by definition”? In FreeBSD, which, IMO, has a much sounder approach to randomness, urandom does block until it has sufficient entropy. Is poor design of the OS OpenSSL’s fault?

Which brings me to

FreeBSD prior to version 5 posed its own problem, since its /dev/random source silently redirected to /dev/urandom.

Well. Modern FreeBSD versions link /dev/urandom to /dev/random. That doesn’t seem like a material change to me. I’m pretty sure that the implementation changed, too – perhaps that’s more important than filenames?

Finally, in the summary:

Some unfortunate choices by the OpenSSL library didn’t help either.

Oh really? So the fact that a 10-year-old version of OpenSSL used a device that in some OSes is not very well designed is contributing to this problem? I’m finding this a little hard to swallow. Also, “choices”? What choices? Only one choice is mentioned.

The real problem is, IMNSHO: if you provide a weak random number source, then people will use it when they shouldn’t. The problem here is with the OS that is providing the randomness, not the OpenSSL library. So, why is the OS (which I am prepared to bet is Linux) not even mentioned?
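For concreteness, the attack that turned those weak keys into factored keys is just a GCD over pairs of public moduli: two keys generated with too little entropy can end up sharing a prime factor, and a GCD recovers it instantly. A minimal sketch with toy primes (real scans run a product tree over millions of 1024-bit moduli, but the principle is identical):

```python
from math import gcd

# Two "poorly seeded" key generators happen to pick the same prime p.
# Toy primes purely for illustration; real moduli are 1024+ bits.
p, q1, q2 = 101, 103, 107
n1 = p * q1        # first victim's public modulus
n2 = p * q2        # second victim's modulus, sharing the prime p

# Anyone who collects both public keys can factor both at once:
shared = gcd(n1, n2)
assert shared == p
print(n1 // shared, n2 // shared)   # prints: 103 107
```

No amount of clever arithmetic in the library saves you here – if the OS hands out predictable randomness, the primes collide, which is why the blame belongs with the entropy source.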

28 Apr 2012

Using Capsicum For Sandboxing

Filed under: Capabilities,General,Programming,Security — Ben @ 18:07

FreeBSD 9.0, released in January 2012, has experimental Capsicum support in the kernel, disabled by default. In FreeBSD 10, Capsicum will be enabled by default.

But unless code uses it, we get no benefit. So far, very little code uses Capsicum, mostly just experiments we did for our paper. I figured it was time to start changing that. Today, I’ll describe my first venture – sandboxing bzip2. I chose bzip2 partly because Ilya Bakulin had already done some of the work for me, but mostly because a common failure mode in modern software is mistakes made in complicated bit twiddling code, such as decompressors and ASN.1 decoders.

These can often lead to buffer overflows or integer over/underflows – and these often lead to remote code execution. Which is bad. bzip2 is no stranger to this problem: CVE-2010-0405 describes an integer overflow that could lead to remote code execution. The question is: would Capsicum have helped – and if it would, how practical is it to convert bzip2 to use Capsicum?

The answers are, respectively, “yes” and “fairly practical”.

First of all, how does Capsicum mitigate this problem? The obvious way to defend a decompressor is to run the decompression engine in a separate process with no privilege beyond that needed to get its job done – which is the ability to read the input and write the output. In Capsicum, this is easy to achieve: once the appropriate files are open, fork the process and enter capability mode in the child. Discard all permissions except the ability to read the input and write the output (in Capsicum, this means close all other file descriptors and limit those two to read and write), and then go ahead and decompress. Should there be a bug in the decompressor, what does the attacker get? Well, pretty much what he had already: the ability to read the input file (he supplied it, so no news there!) and the ability to write arbitrary content to the output file (he already had that, since he could have chosen arbitrary input and compressed it). He also gets to burn CPU and consume memory. But that’s it – no access to your files, the network, any other running process, or anything else interesting.

I think that’s pretty neat.
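Capsicum itself is a C API, so what follows is only a sketch of the open-first, fork, then drop pattern, written in Python for brevity. Everything below (function name included) is my own illustration, not the actual bzip2 patch; the cap_enter() call that would make the confinement kernel-enforced on FreeBSD appears as a comment:

```python
import bz2
import os

def sandboxed_decompress(in_path, out_path):
    # Open both files with full privilege in the parent...
    in_fd = os.open(in_path, os.O_RDONLY)
    out_fd = os.open(out_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    pid = os.fork()
    if pid == 0:
        # ...then run the untrusted decompression engine in a child that
        # holds nothing but the input, the output and stderr.
        os.dup2(in_fd, 0)             # stdin  <- compressed input
        os.dup2(out_fd, 1)            # stdout -> decompressed output
        os.closerange(3, 256)         # discard every other descriptor
        # On FreeBSD, this is where the child would call cap_enter() (and
        # cap_rights_limit() on the remaining fds) to enter capability mode.
        data = b""
        while chunk := os.read(0, 1 << 16):
            data += chunk
        os.write(1, bz2.decompress(data))
        os._exit(0)
    os.close(in_fd)                   # the parent keeps no copy
    os.close(out_fd)
    _, status = os.waitpid(pid, 0)
    if status != 0:
        raise RuntimeError("decompressor child failed")
```

A bug in the decompressor now yields the attacker a process that can read its own input and write its own output, and nothing else – which is exactly the point made above.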

But how hard is it to do? I answer that question in a series of diffs on GitHub, showing a step-by-step transformation of bzip2 into the desired form. I used a technique I like to call error-driven development; the idea is you attempt to make changes that will cause compilation to fail until you have completely accomplished your goal. This is a useful way to reassure yourself that you have made all necessary updates and there’s nothing hiding away you didn’t take care of. If you follow along by building the various stages, you’ll see how it works.

It turns out that in bzip2 this matters – it isn’t very beautifully written, and the code that looks like it might cleanly just take an input file and an output file and do the work in isolation, actually interacts with the rest of the code through various function calls and globals. This causes a problem: once you’ve forked, those globals and functions are now in the wrong process (i.e. the child) and so it is necessary to use RPC to bridge any such things back to the parent process. Error-driven development assures us that we have caught and dealt with all such cases.

So how did this work out in practice? Firstly, it turns out we have to give the compressor a little more privilege: it writes to stderr if there are problems, so we need to also grant write on stderr (note that we could constrain what it writes with a bit more effort). The callbacks we have to provide do not, I think, let it do anything interesting: cause the program to exit, make the output file’s permissions match the input file’s, and remove the input or output files (ok, removing the input file is slightly interesting – but note that bzip2 does this anyway).

Secondly, because we have not yet decided on an RPC mechanism, this particular conversion involves quite a bit of boilerplate: wrapping and unwrapping arguments for RPCs, wiring them up and all that, all of which would be vastly reduced by a proper RPC generator. Try not to let it put you off 🙂

Finally, the system has (at least) one global, errno. I have not dealt with that so far, which means some failures will report the wrong error – but it would not be particularly hard to fix.

So, on to the diffs. This is something of an experimental way to present a piece of development, so I’d be interested in feedback. Here they are, in order:

And there you are: bzip2 is now rendered safe from decompressor exploits, and it was only a few hours work. As we refine the support infrastructure, it will be even less work.

20 Apr 2012

Persian Pulled Lamb

Filed under: Recipes — Ben @ 10:37

I don’t usually link to existing recipes, but this was so good, I had to: We only let it marinate for one day, which seemed to work fine.

5 Apr 2012

Salmon and Peas in a Saffron Cream Sauce

Filed under: Recipes — Ben @ 19:42

An impromptu and fast recipe that worked really well.

olive oil
butter
mixed herbs
saffron
salmon steak fillets
frozen peas
cream
salt and pepper

Put the saffron in a small amount of hot water. Get the butter and oil hot enough to bubble, add salt, pepper, mixed herbs. Shortly after, add the salmon, skin side down. Fry until the skin is crispy, then turn onto a side. Fry for a couple of minutes, turn again until all four sides are done. Throw in the frozen peas and mix with the fat. Add the saffron (and water, of course). Bring to the boil, then add cream. Bring to the boil again, season and serve. Try to keep one side of the salmon above the waterline throughout.

We had it with pasta. Start the pasta before the salmon; it really is that quick!

3 Apr 2012

EFF Finally Notice 0day Market

Filed under: Security — Ben @ 13:37

Six years after I first blogged about it, the EFF have decided that selling 0days may not be so great.

Maybe they should be reading my blog? 🙂

1 Mar 2012

Certificate Transparency: Spec and Working Code

Filed under: Certificate Transparency,Crypto,Open Source — Ben @ 17:29

Quite a few people have said to me that Certificate Transparency (CT) sounds like a good idea, but they’d like to see a proper spec.

Well, there’s been one of those for quite a while, you can find the latest version in the code repository, or for your viewing convenience, I just made an HTML version.

Today, though, to go with that spec, I’m happy to announce working code for a subset of the protocol. This covers the trickiest part – a fully backwards compatible SSL handshake between servers and clients. The rest of the protocol will necessarily all be new code for interacting with the log server and other new components, and so should not have these issues.

If you build the code according to the README, then you will find instructions in test/README for the demo.

What this does, in short, is the following:

  • Run a CT log server. Currently this has no persistence across runs, but does keep a full log in memory.
  • Issue a self-signed server certificate. A CA issued certificate would also be fine, but not so easy to automate for a demo.
  • Use the CT client to register that certificate with the log server and to obtain a log proof for it.
  • Use the CT client to convert that proof into a fake “certificate” which can be included in the certificate chain in the TLS handshake.
  • Run an Apache 2.2 instance to serve the self-signed certificate and the log proof certificate. Note that Apache is unmodified, all that is needed is appropriate configuration.
  • Use the CT client to connect to the Apache instance and verify the presented log proof.
  • You can also connect to Apache with an existing browser to check that you can still access the site despite the presence of the log proof.

There’s plenty more to be done, but this is the part that needs the earliest scrutiny, since we are bending the rules to get back compatibility and avoid the need to change server software. Client software has to change anyway to provide any benefit to users, so that’s less of a worry.

We welcome discussion, suggestions and questions on the mailing list.

How “Free” Leads to Closed

Filed under: Open Source — Ben @ 11:48

The FSF is fond of banging on about how the GPL is more “free” than other open source licences, even though it is actually a more restrictive licence than many others (for example, the Apache Licence).

So I find it ironic that the much anticipated Raspberry Pi is about as un-free as it is possible to be. Yes, it runs Linux. Can you run anything else? No: because the chipset is not documented, it is impossible to write drivers for any other OS. It’s hard to imagine what would have happened if the dominant open OS was BSD or Apache licensed, but it is interesting to speculate: would this have happened in that world? Possibly not – one of the reasons the ASF adopted a more free licence was precisely because it is business-friendly. Would chipmakers obsessively protect their chip specs in that world? Who knows, but I like to think not.

In any case, if I were building a device like the Pi, I would not be using undocumented chips.

4 Feb 2012

Certificate Transparency Sites

Filed under: Crypto,Security — Ben @ 22:50

I may not have said much more about Certificate Transparency, but we’ve been working on it. So, those interested in following along (or joining in) are welcome to look at…


  • Mailing list.
  • Code repository.

The code repository also includes the spec, in xml2rfc format.

29 Nov 2011

Fixing CAs

Filed under: Security — Ben @ 12:58

Adam Langley and I have a proposal to bolster up the rather fragile Certificate Authority infrastructure.

TL;DR: certificates are registered in a public audit log. Servers present proofs that their certificate is registered, along with the certificate itself. Clients check these proofs and domain owners monitor the logs. If a CA mis-issues a certificate then either

  • There is no proof of registration, so the browser rejects the certificate, or
  • There is a proof of registration and the certificate is published in the log, in which case the domain owner notices and complains, or
  • There is a proof of registration but the certificate does not appear in the log, in which case the proof is now proof that the log misbehaved and should be struck off.

And that, as they say, is that.
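To make “proof of registration” concrete, here is a toy Merkle audit proof: the log commits to its contents with a single root hash, and a short path of sibling hashes proves a given certificate is included. This is only the flavour of the real thing – the actual tree hashing is defined in the CT spec, and all names here are my own:

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def _level_up(level):
    # Hash adjacent pairs; an unpaired last node is carried up unchanged.
    nxt = [h(b"\x01" + level[i] + level[i + 1]) for i in range(0, len(level) - 1, 2)]
    if len(level) % 2:
        nxt.append(level[-1])
    return nxt

def merkle_root(leaves):
    level = [h(b"\x00" + leaf) for leaf in leaves]   # 0x00 marks a leaf
    while len(level) > 1:
        level = _level_up(level)
    return level[0]

def audit_path(leaves, index):
    # Collect the sibling hashes needed to recompute the root from one leaf.
    level, path = [h(b"\x00" + leaf) for leaf in leaves], []
    while len(level) > 1:
        sibling = index ^ 1
        if sibling < len(level):
            path.append(level[sibling])
        level, index = _level_up(level), index // 2
    return path

def verify(leaf, index, path, root, n):
    # Recompute the root from the leaf and its audit path; n is the log size.
    node, sibs = h(b"\x00" + leaf), iter(path)
    while n > 1:
        if index ^ 1 < n:            # the node has a sibling at this level
            sib = next(sibs)
            node = h(b"\x01" + sib + node) if index % 2 else h(b"\x01" + node + sib)
        index, n = index // 2, n // 2 + n % 2
    return node == root
```

A server would present something like audit_path() output alongside its certificate; a client needs only the log’s signed root to run verify(). The domain owner’s job is then just to watch the leaves as they are published.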

Update: Adam has blogged, exploring the design space.

1 Oct 2011

Open Source Transcription Software Developer

Filed under: Open Data,Open Source,Programming — Ben @ 18:06

Since we set up FreeBMD, FreeREG and FreeCEN things have come a long way, and so we’re revisiting how we do transcription. Those great guys at Zooniverse have released their Scribe transcription software, which they developed to use with Old Weather and Ancient Lives (and more to come), as open source.

We are working with them to develop a new transcription platform for genealogical records, based on Scribe, and we want to hire a developer to help us with it. Scribe itself is written in Ruby, so some familiarity with that would help. We also use Python and EC2, so knowing about those would be good, too. And the front-end is sure to be using JavaScript, so there’s another tickbox to tick.

Finally, we intend to open source everything, and so a developer used to working in an open source community would be helpful.

Everything is negotiable. FreeBMD does not have offices, so this would be “work from home” (or the beach, or whatever suits you).

If you’re interested, send email to Feel free to forward this post, of course.

19 Sep 2011

Lessons Not Learned

Filed under: Identity Management,Security — Ben @ 15:50

Anyone who has not had their head under a rock knows about the DigiNotar fiasco.

And those who’ve been paying attention will also know that DigiNotar’s failure is only the most recent in a long series of proofs of what we’ve known for a long time: Certificate Authorities are nothing but a money-making scam. They provide us with no protection whatsoever.

So imagine how delighted I am that we’ve learnt the lessons here (not!) and are now proceeding with an even less-likely-to-succeed plan using OpenID. Well, the US is.

If the plan works, consumers who opt in might soon be able to choose among trusted third parties — such as banks, technology companies or cellphone service providers — that could verify certain personal information about them and issue them secure credentials to use in online transactions.

Does this sound familiar? Rather like “websites that opt in can choose among trusted third parties – Certificate Authorities – that can verify certain information about them and issue them secure credentials to use in online transactions”, perhaps? We’ve seen how well that works. And this time there’s not even a small number of vendors (i.e. the browser vendors) who can remove a “trusted third party” who turns out not to be trustworthy. This time you have to persuade everyone in the world who might rely on the untrusted third party to remove them from their list. Good luck with that (good luck with even finding out who they are).

What is particularly poignant about this article is that even though its title is “Online ID Verification Plan Carries Risks” the risks we are supposed to be concerned about are mostly privacy risks, for example

people may not want the banks they might use as their authenticators to know which government sites they visit


the government would need new privacy laws or regulations to prohibit identity verifiers from selling user data or sharing it with law enforcement officials without a warrant.

Towards the end, if anyone gets there, is a small mention of some security risk

Carrying around cyber IDs seems even riskier than Social Security cards, Mr. Titus says, because they could let people complete even bigger transactions, like buying a house online. “What happens when you leave your phone at a bar?” he asks. “Could someone take it and use it to commit a form of hyper identity theft?”

Dude! If only the risk were that easy to manage! The real problem comes when someone sets up an account as you with one of these “banks, technology companies or cellphone service providers” (note that CAs are technology companies). Then you are going to get your ass kicked, and you won’t even know who issued the faulty credential or how to stop it.

And, by the way, don’t be fooled by the favourite get-out-of-jail-free clause beloved by policymakers and spammers alike, “opt in”. It won’t matter whether you opt in or not, because the proof you’ve opted in will be down to these “trusted” third parties. And the guy stealing your identity will have no compunction about that particular claim.

12 Sep 2011

DNSSEC on the Google Certificate Catalog

Filed under: DNSSEC,Security — Ben @ 14:47

I mentioned my work on the Google Certificate Catalog a while back. Now I’ve updated it to sign responses with DNSSEC.

I also updated the command-line utility to verify DNSSEC responses – and added a little utility to fetch the root DNSSEC keys and verify a PGP signature on them.

As always, feedback is welcome.

23 Jul 2011

An Efficient and Practical Distributed Currency

Filed under: Anonymity,Crypto,Security — Ben @ 15:51

Now that I’ve said what I don’t like about Bitcoin, it’s time to talk about efficient alternatives.

In my previous paper on the subject I amused myself by hypothesizing an efficient alternative to Bitcoin based on whatever mechanism it uses to achieve consensus on checkpoints. Whilst this is fun, it is pretty clear that no such decentralised mechanism exists. Bitcoin enthusiasts believe that I have made an error by discounting proof-of-work as the mechanism, for example

I believe Laurie’s paper is missing a key element in bitcoin’s reliance on hashing power as the primary means of achieving consensus: it can survive attacks by governments.

If bitcoin relied solely on a core development team to establish the authoritative block chain, then the currency would have a Single Point of Failure, that governments could easily target if they wanted to take bitcoin down. As it is, every one in the bitcoin community knows that if governments started coming after bitcoin’s development team, the insertion of checkpoints might be disrupted, but the block chain could go on.

Checkpoints are just an added security measure, that are not essential to bitcoin’s operation and that are used as long as the option exists. It is important for the credibility of a decentralized currency that it be possible for it to function without such a relatively easy to disrupt method of establishing consensus, and bitcoin, by relying on hashing power, can.


Ben, your analysis reads as though you took your well-known and long-standing bias against proof-of-work and reverse engineered that ideology to fit into an ad hoc criticism of bitcoin cryptography. You must know that bitcoin represents an example of Byzantine fault tolerance in use and that the bitcoin proof-of-work chain is the key to solving the Byzantine Generals’ Problem of synchronising the global view.

My response is simple: yes, I know that proof-of-work, as used in Bitcoin, is intended to give Byzantine fault tolerance, but my contention is that it doesn’t. And, furthermore, that it fails in a spectacularly inefficient way. I can’t believe I have to keep reiterating the core point, but here we go again: the flaw in proof-of-work as used in Bitcoin is that you have to expend 50% of all the computing power in the universe, for the rest of time in order to keep the currency stable (67% if you want to go for the full Byzantine model). There are two problems with this plan. Firstly, there’s no way you can actually expend 50% (67%), in practice. Secondly, even if you could, it’s far, far too high a price to pay.

In any case, in the end, control of computing power is roughly equivalent to control of money – so why not cut out the middleman and simply buy Bitcoins? It would be just as cheap and it would not burn fossil fuels in the process.

Finally, if the hash chain really works so well, why do the Bitcoin developers include checkpoints? The currency isn’t even under attack and yet they have deemed them necessary. Imagine how much more needed they would be if there were deliberate disruption of Bitcoin (which seems quite easy to do to me).

But then the question would arise: how do we efficiently manage a distributed currency? I present an answer in my next preprint: “An Efficient Distributed Currency”.

2 Jul 2011

Decentralised Currencies Are Probably Impossible (But Let’s At Least Make Them Efficient)

Filed under: General — Ben @ 20:04

How time flies. Following my admittedly somewhat rambling posts on Bitcoin, I decided to write a proper paper about the problem. So, here’s a preprint of “Decentralised Currencies Are Probably Impossible (But Let’s At Least Make Them Efficient)”. It’s short! Enjoy.

I may submit this to a conference, I haven’t decided yet. Suggestions of where are welcome.

By the way, Bitcoin fanboys: I see I have been taken to task for my heretic views on the Bitcoin forums. Since those have cunningly been closed to anyone who does not already have some kind of track record of conforming to the standards of the forums (presumably meaning “don’t diss Bitcoin”) I am unable to respond to comments there, but I would like to note, for the record, that I have not deleted a single non-spam comment on my Bitcoin posts, contrary to claims I see there.

21 May 2011

Bitcoin is Slow Motion

Filed under: Anonymity,Crypto,General,Privacy,Security — Ben @ 5:32

OK, let’s approach this from another angle.

The core problem Bitcoin tries to solve is how to get consensus in a continuously changing, free-for-all group. It “solves” this essentially insoluble problem by making everyone walk through treacle, so it’s always evident who is in front.

But the problem is, it isn’t really evident. Slowing everyone down doesn’t take away the core problem: that someone with more resources than you can eat your lunch. Right now, with only modest resources, I could rewrite all of Bitcoin history. By the rules of the game, you’d have to accept my longer chain and just swallow the fact you thought you’d minted money.
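That rewrite is easy to demonstrate with a toy proof-of-work chain (the difficulty and transaction strings are invented for the illustration): whoever grinds more hashes simply presents a longer chain, and the longest-chain rule obliges everyone to accept it.

```python
import hashlib

DIFFICULTY = 12   # leading zero bits required; tiny, so the demo is fast

def mine(prev_hash, payload, difficulty=DIFFICULTY):
    # Grind nonces until the block hash has `difficulty` leading zero bits.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prev_hash}:{payload}:{nonce}".encode()).hexdigest()
        if int(digest, 16) >> (256 - difficulty) == 0:
            return digest, nonce
        nonce += 1

def build_chain(payloads):
    chain, prev = [], "genesis"
    for payload in payloads:
        prev, nonce = mine(prev, payload)
        chain.append((payload, nonce, prev))
    return chain

# The honest network has minted some history...
honest = build_chain(["alice pays bob", "bob pays carol"])
# ...but anyone willing to burn more cycles can fork from genesis and
# out-mine it; by the longest-chain rule this replaces that history.
rewrite = build_chain(["attacker pays attacker"] * 3)
assert len(rewrite) > len(honest)
```

The treacle slows everyone equally, but it does not stop the party with the most resources from winning; it just makes the race expensive.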

If you want to avoid that, then you have to have some other route to achieve a consensus view of history. Once you have a way to achieve such a consensus, then you could mint coins by just sequentially numbering them instead of burning CPU on slowing yourself down, using the same consensus mechanism.

Now, I don’t claim to have a robust way to achieve consensus; any route seems open to attacks by people with more resources. But I make this observation: as several people have noted, currencies are founded on trust: trust that others will honour the currency. It seems to me that there must be some way to leverage this trust into a mechanism for consensus.

Right now, for example, in the UK, I can only spend GBP. At any one time, in a privacy preserving way, it would in theory be possible to know who was in the UK and therefore formed part of the consensus group for the GBP. We could then base consensus on current wielders of private keys known to be in the UK, the vast majority of whom would be honest. Or their devices would be honest on their behalf, to be precise. Once we have such a consensus group, we can issue coins simply by agreeing that they are issued. No CPU burning required.

20 May 2011

Bitcoin 2

Filed under: Anonymity,Crypto,Security — Ben @ 16:32

Well, that got a flood of comments.

Suppose I take 20 £5 notes, burn them and offer you a certificate for the smoke for £101. Would you buy the certificate?

This is the value proposition of Bitcoin. I don’t get it. How does that make sense? Why would you burn £100 worth of non-renewable resources and then use it to represent £100 of buying power? Really? That’s just nuts, isn’t it?

I mean, it’s nice for the early adopters, so long as new suckers keep coming along. But in the long run it’s just a pointless waste of stuff we can never get back.

Secondly, the point of referencing “Proof-of-work Proves Not to Work” was just to highlight that cycles are much cheaper for some people than others (particularly botnet operators), which makes them a poor fit for defence.

Finally, consensus is easy if the majority are honest. And then coins become cheap to make. Just saying.

17 May 2011

Bitcoin
Filed under: Anonymity,Distributed stuff,Security — Ben @ 17:03

A friend alerted me to a sudden wave of excitement about Bitcoin.

I have to ask: why? What has changed in the last 10 years to make this work when it didn’t in, say, 1999, when many other related systems (including one of my own) were causing similar excitement? Or in the 20 years since the wave before that, in 1990?

As far as I can see, nothing.

Also, for what it’s worth, if you are going to deploy electronic coins, why on earth make them expensive to create? That’s just burning money – the idea is to make something unforgeable as cheaply as possible. This is why all modern currencies are fiat currencies instead of being made out of gold.

Bitcoins are designed to be expensive to make: they rely on proof-of-work. It is far more sensible to use signatures over random numbers as a basis, as asymmetric encryption gives us the required unforgeability without any need to involve work. This is how Chaum’s original system worked. And the only real improvement since then has been Brands’ selective disclosure work.

If you want to limit supply, there are cheaper ways to do that, too. And proof-of-work doesn’t, anyway (it just gives the lion’s share to the guy with the cheapest/biggest hardware).
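Chaum’s construction, for reference, gets its unforgeability from an RSA signature rather than from work: the mint signs a blinded coin, so minting costs one modular exponentiation and the mint never links the coin to the withdrawal. A textbook sketch with toy parameters (all names and sizes are mine; real keys are 2048-bit):

```python
import math
import secrets

# Toy RSA mint key - far too small to be secure, for illustration only.
p, q = 1009, 1013
n, phi = p * q, (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)                     # mint's private exponent (Python 3.8+)

# Client: invent a coin serial and blind it with a random factor r.
serial = secrets.randbelow(n - 2) + 2
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:
        break
blinded = (serial * pow(r, e, n)) % n   # the mint never sees `serial`

# Mint: sign the blinded value (and debit the client's account).
blind_sig = pow(blinded, d, n)

# Client: strip the blinding; what remains is a valid signature on
# `serial` that the mint cannot connect to this withdrawal.
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == serial         # anyone can verify the coin
```

Double-spending is then prevented by the mint keeping a list of serials it has already seen – cheap bookkeeping, no hashing race required.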

Incidentally, Lucre has recently been used as the basis for a fully-fledged transaction system, Open Transactions. Note: I have not used this system, so make no claims about how well it works.

(Edit: background reading – “Proof-of-Work” Proves Not to Work)

8 May 2011

Checking SSL Certificates

Filed under: Security — Ben @ 12:31

I mentioned my work on the Google Certificate Catalog recently. One thing I forgot is a command-line utility I wrote to perform the check for you automatically.

You can find it here.

29 Apr 2011

Pepper-crusted Tuna

Filed under: Recipes — Ben @ 10:53

I came across this on my frequent travels to the US (where they tend to call it pepper-crusted ahi, or even, rather redundantly, pepper-crusted ahi tuna). I don’t think I’ve ever seen it in the UK, but it is fantastically easy to cook. And delicious.

Tuna steaks (nice and fresh, so you can leave them rare)
Black peppercorns
Szechuan pepper (optional)
Olive oil
Sea salt

Crush the peppercorns in a pestle and mortar (or mortar and pestle if you’re American). They don’t need to be particularly finely divided, but try to at least split each one in half. Spread half the mix over one side of your tuna steaks and press it in – it sticks surprisingly well. Turn over and repeat. Then fry in hot olive oil for about 4 minutes a side (up to 8 if you’re too chicken for rare tuna, but I promise it tastes/feels much better rare). Do not keep turning them over; turn them just once. Sprinkle on some sea salt when done.

That’s it.

I often serve with plain boiled rice and pak choi.
