Ben Laurie blathering

17 May 2011


Filed under: Anonymity,Distributed stuff,Security — Ben @ 17:03

A friend alerted me to a sudden wave of excitement about Bitcoin.

I have to ask: why? What has changed in the last 10 years to make this work when it didn’t in, say, 1999, when many other related systems (including one of my own) were causing similar excitement? Or in the 20 years since the wave before that, in 1990?

As far as I can see, nothing.

Also, for what it’s worth, if you are going to deploy electronic coins, why on earth make them expensive to create? That’s just burning money – the idea is to make something unforgeable as cheaply as possible. This is why all modern currencies are fiat currencies instead of being made out of gold.

Bitcoins are designed to be expensive to make: they rely on proof-of-work. It is far more sensible to use signatures over random numbers as a basis, as asymmetric cryptography gives us the required unforgeability without any need to involve work. This is how Chaum’s original system worked. And the only real improvement since then has been Brands’ selective disclosure work.
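To make the contrast concrete, here is a minimal sketch of the signature-over-a-random-number idea – not Chaum’s actual (blind-signature) protocol, and with toy RSA parameters chosen purely for illustration: the mint signs a random serial number, anyone can verify a coin, and only the holder of the private key can issue new ones.

```python
import hashlib
import secrets

# Toy RSA mint key. A real system would use ~2048-bit keys and, for
# payer privacy, Chaum's *blind* signature variant; this sketch only
# shows why signatures give unforgeability without proof-of-work.
P, Q = 61, 53                       # toy primes -- NOT secure
N = P * Q
E = 17                              # public exponent (everyone knows it)
D = pow(E, -1, (P - 1) * (Q - 1))   # private exponent (mint only)

def coin_digest(serial: bytes) -> int:
    return int.from_bytes(hashlib.sha256(serial).digest(), "big") % N

def mint_coin() -> tuple[bytes, int]:
    """Mint picks a random serial and signs it -- cheap to issue."""
    serial = secrets.token_bytes(16)
    return serial, pow(coin_digest(serial), D, N)

def verify_coin(serial: bytes, sig: int) -> bool:
    """Anyone can check a coin; forging one needs the private key."""
    return pow(sig, E, N) == coin_digest(serial)

serial, sig = mint_coin()
assert verify_coin(serial, sig)
```

Supply can then be limited simply by the mint refusing to issue more coins, and double-spending caught by keeping a list of redeemed serials – no work required.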

If you want to limit supply, there are cheaper ways to do that, too. And proof-of-work doesn’t, anyway (it just gives the lion’s share to the guy with the cheapest/biggest hardware).

Incidentally, Lucre has recently been used as the basis for a fully-fledged transaction system, Open Transactions. Note: I have not used this system, so make no claims about how well it works.

(Edit: background reading – “Proof-of-Work” Proves Not to Work)

21 Dec 2010

Is Openleaks The Next Haystack?

As everyone who’s even half-awake knows by now, a bunch of people who used to work on Wikileaks have got together to work on Openleaks. From what I hear, Openleaks is going to be so much better than Wikileaks – it will have no editorial role, it will strongly protect people who submit leaks, it’s not about the people who run it, it’ll be distributed and encrypted.

But where’s the design to back up this rhetoric? Where are the security reviews from well-known authorities? They seem to be missing. Instead we have excited articles in mainstream media about how wonderful it is going to be, and how many hours the main man has spent on it.

This sounds very familiar indeed. And we all know what happened last time round.

Of course, Openleaks may be fine, but I strongly suggest that those who are working on it publish their plan and subject it to scrutiny before they put contributors at risk.

As always, I offer my services in this regard. I am sure I am not alone.

3 Dec 2008

Red Pill/Blue Pill

Filed under: Distributed stuff,Security — Ben @ 15:28

As I have mentioned before, Abe Singer and I wrote a paper on giving up on general purpose operating system security, and instead performing all your security-important online operations from a separate device.

Anyway, we presented this at NSPW, and based on feedback we got there, we’ve now revised (actually, rewritten) “Take the Red Pill and the Blue Pill”.

20 Nov 2008

You Need Delegation, Too

Kim wants to save the world from itself. In short, he is talking about yet another incident where some service asks for username and password to some other service, in order to glean information from your account to do something cool. Usually this turns out to be “harvest my contacts so I don’t have to find all my friends yet again on the social network of the month”, but in this case it was to calculate your “Twitterank”. Whatever that is. Kim tells us

The only safe solution for the broad spectrum of computer users is one in which they cannot give away their secrets. In other words: Information Cards (the advantage being they don’t necessarily require hardware) or Smart Cards. Can there be a better teacher than reality?

Well, no. There’s a safer way that’s just as useful: turn off your computer. Since what Kim proposes means that I simply can’t get my Twitterank at all (oh, the humanity!), why even bother with Infocards or any other kind of authentication I can’t give away? I may as well just watch TV instead.

Now, the emerging answer to this problem is OAuth, which protects your passwords, if you authenticate that way. Of course, OAuth is perfectly compatible with the notion of signing in at your service provider with an Infocard, just as it is with signing in with a password. But where is the advantage of Infocards? Once you have deployed OAuth, you have removed the need for users to reveal their passwords, so now the value add for Infocards seems quite small.

But if Infocards (or any other kind of signature-based credential) supported delegation, this would be much cooler. Then the user could sign a statement saying, in effect, “give the holder of key X access to my contacts” (or whatever it is they want to give access to) using the private key of the credential they use for logging in. Then they give Twitterank a copy of their certificate and a copy of the new signed delegation certificate. Twitterank presents the chained certificates and proves they have private key X. Twitter checks the signature on the chained delegation certificate and that the user certificate is the one corresponding to the account Twitterank wants access to, and then gives access to just the data specified in the delegation certificate.

The beauty of this is it can be sub-delegated, a facility that is entirely missing from OAuth, and one that I confidently expect to be the next problem in this space (but apparently predicting such things is of little value – no-one listens until they hit the brick wall that the lack of the facility puts in their way).
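The certificate chain described above can be sketched like this – toy RSA signatures over tiny primes, with hypothetical names (user, app, sub-delegate) standing in for the real parties:

```python
import hashlib

E = 17  # shared toy public exponent

def make_key(p, q):
    """Toy RSA keypair -- tiny primes, for illustration only."""
    n = p * q
    d = pow(E, -1, (p - 1) * (q - 1))
    return n, d  # n is the public key, d the private key

def sign(msg: bytes, n: int, d: int) -> int:
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)

def verify(msg: bytes, sig: int, n: int) -> bool:
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(sig, E, n) == h

user_n, user_d = make_key(61, 53)   # the account holder
app_n, app_d = make_key(67, 71)     # "Twitterank"
sub_n, sub_d = make_key(73, 79)     # a service Twitterank sub-delegates to

# The user signs a delegation statement naming the app's public key.
grant = f"key {app_n} may read my contacts".encode()
grant_sig = sign(grant, user_n, user_d)

# Sub-delegation: the app extends the chain with its own signature.
subgrant = f"key {sub_n} may read contacts via {app_n}".encode()
subgrant_sig = sign(subgrant, app_n, app_d)

# The service (Twitter) checks each link of the chain.
def chain_ok() -> bool:
    return (verify(grant, grant_sig, user_n)
            and verify(subgrant, subgrant_sig, app_n))

assert chain_ok()
```

A real deployment would also make the holder prove possession of the named private key with a challenge, and put scope and expiry into each statement; this sketch only shows the chained-signature structure.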

10 Jul 2008

ACTA, The Pirate Bay and BTNS

Doc Searls just pointed me at a couple of articles. The first is about ACTA.

ACTA, first unveiled after being leaked to the public via Wikileaks, has sometimes been lauded by its supporters as “The Pirate Bay-killer,” due to its measures to criminalize the facilitation of copyright infringement on the internet – text arguably written specifically to beat pirate BitTorrent trackers. The accord will add internet copyright enforcement to international law, force national ISPs to respond to international information requests, and subject iPods and other electronic devices to ex parte searches at international borders.

Obviously this is yet another thing we must resist. The Pirate Bay’s answer to this

IPETEE would first test whether the remote machine is supporting the crypto technology; once that’s confirmed it would then exchange encryption keys with the machine before transmitting your actual request and sending the video file your way. All data would automatically be unscrambled once it reaches your machine, so there would be no need for your media player or download manager to support any new encryption technologies. And if the remote machine didn’t know how to handle encryption, the whole transfer would fall back to an unencrypted connection.

is a great idea, but … it’s already been done by the IETF BTNS (Better-Than-Nothing Security) Working Group.

The WG has the following specific goals:

a) Develop an informational framework document to describe the motivation and goals for having security protocols that support anonymous keying of security associations in general, and IPsec and IKE in particular

Hmmm. I guess I should figure out how I switch this on. Anyone?

25 Apr 2008

Yet Another Version Control System (and an Apache Module)

Filed under: Distributed stuff,Open Source — Ben @ 22:32

I recently finished off mod_digest for Canonical – to you, the guys that make Ubuntu.

In the process I was forced to use yet another distributed version control system, Bazaar. Once I’d figured out that the FreeBSD port was devel/bazaar-ng and not devel/bazaar, I quite liked it. All these systems are turning out to be pretty much the same, so it’s the bells and whistles that matter. In the case of Bazaar the bell (or whistle) I liked was this

$ bzr push
Using saved location: s

Yes! In Monotone, I’m permanently confused about branches and repos and, well, stuff. Mercurial makes me edit a config file to set a default push location. Bazaar remembers what I did last time. How obvious is that?

25 Mar 2008

Federated Messaging Meets Federated Identity

Filed under: Distributed stuff,Identity Management — Ben @ 23:51

XMPP, OAuth and OpenID. Social networking in real-time. Interesting. Peter Saint-Andre thinks we should talk about it.

Sign up here.

11 Mar 2008

RFC 5155

Filed under: Anonymity/Privacy,Crypto,Distributed stuff — Ben @ 11:43

After nearly 4 years of mind-bending minutiae of DNS (who would’ve thought it could be so complicated?), political wrangling and the able assistance of many members of the DNSSEC Working Group, particularly my co-authors, Roy Arends, Geoff Sisson and David Blacka, the Internet Draft I started in April 2004, “DNSSEC NSEC2 Owner and RDATA Format (or; avoiding zone traversal using NSEC)” now known as “DNS Security (DNSSEC) Hashed Authenticated Denial of Existence” has become RFC 5155. Not my first RFC, but my first Standards Track RFC. So proud!

Matasano Chargen explain why this RFC is needed, complete with pretty pictures. They don’t say why it’s complicated, though. The central problem is that although we all think of DNS as a layered system neatly corresponding to the dots in the name, it isn’t.

So, you might like to think, and it is often explained this way, that when I look up a.b.example.com I first ask the root servers who the nameserver for com is. Then I ask the com nameservers where the nameservers for example.com are, who I then ask for the nameservers for b.example.com, and finally ask them for the address of a.b.example.com.

But it isn’t as easy as that. In fact, the example.com zone can contain an entry a.b.example.com without delegating b.example.com. This makes proving the non-existence of a name by showing the surrounding pair rather more challenging. The non-cryptographic version (NSEC) solved it by cunningly ordering the names so that names that were “lower” in the tree came immediately after their parents. Like this:

example.com
b.example.com
a.b.example.com
c.example.com

So, proving that, say, bb.example.com doesn’t exist means showing the pair (a.b.example.com, c.example.com). Note that this pair does not prove the nonexistence of a.c.example.com, as you might expect from a simple lexical ordering. Unfortunately, once you’ve hashed a name, you’ve lost information about how many components there were in the name and so forth, so this cunning trick doesn’t work for NSEC3.
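Assuming some illustrative names in a hypothetical example.com zone, the canonical ordering (RFC 4034 style: compare labels starting from the rightmost, case-insensitively, so a name sorts immediately after its parent) can be sketched as:

```python
def canonical_key(name: str):
    """DNSSEC canonical ordering key: reversed, lowercased labels,
    so comparison starts at the parent end of the name (RFC 4034 s6.1,
    approximated here with string comparison rather than raw bytes)."""
    return tuple(reversed(name.lower().split(".")))

def surrounding_pair(zone_names, absent):
    """The NSEC-style pair that proves `absent` does not exist."""
    ordered = sorted(zone_names, key=canonical_key)
    key = canonical_key(absent)
    for prev, nxt in zip(ordered, ordered[1:]):
        if canonical_key(prev) < key < canonical_key(nxt):
            return prev, nxt
    return None

zone = ["example.com", "b.example.com", "a.b.example.com", "c.example.com"]

# Canonically, a child sorts right after its parent:
assert sorted(zone, key=canonical_key) == [
    "example.com", "b.example.com", "a.b.example.com", "c.example.com"]

# bb.example.com falls between a.b.example.com and c.example.com:
assert surrounding_pair(zone, "bb.example.com") == (
    "a.b.example.com", "c.example.com")

# But that pair cannot prove a.c.example.com absent -- canonically it
# sorts *after* c.example.com, despite what lexical ordering suggests:
assert surrounding_pair(zone, "a.c.example.com") is None
```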

It turns out that in general, to prove the nonexistence of a name using NSEC you have to show at most two records, one to prove the name itself doesn’t exist, and the other to show that you didn’t delegate some parent of it. Often the same record can do both.

In NSEC3, it turns out, you have to show at most three records. And if you can understand why, then you understand DNS better than almost anyone else on the planet.

23 Feb 2008


Filed under: Distributed stuff,Identity Management,Security — Ben @ 13:38

Pamela is freaked out by sites that gather all your logins. So am I. But this is exactly why a group of us got together to create OAuth. OAuth allows you to delegate access to your various accounts without revealing your username and password. All we need now is for all these sites to start using it.

26 Aug 2007

Mapping Crypto to Capabilities

Filed under: Crypto,Distributed stuff,Security — Ben @ 7:00

I’ve been thinking.

Let me preface this by suggesting: if there were a globally trusted capability-secure computing fabric, we would have no need of (some kinds of) crypto.

Why? What do we do with crypto? We sign things, and we encrypt things. How do we do this with our GTCSCF? Easy. To sign something, I hand that something to the relying party. He then exercises his capability to me that checks authenticity of somethings, and I respond that it is authentic.

Encryption is even easier – I just send the something down a capability leading to the intended recipient.
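A toy model of the mapping, with every name hypothetical – capabilities as plain Python callables handed from one party to another:

```python
# Toy sketch: in a capability world, "signature verification" becomes
# exercising a capability back to the claimed author, and "encryption"
# becomes sending data down a capability that leads only to the
# intended recipient. No crypto anywhere.

class Party:
    def __init__(self, name):
        self.name = name
        self._authored = set()  # private; reachable only via capabilities
        self.inbox = []

    def author(self, something):
        self._authored.add(something)
        return something

    def check_capability(self):
        """Capability a relying party holds: 'did you author this?'"""
        return lambda something: something in self._authored

    def deliver_capability(self):
        """Capability that delivers a message to this party alone."""
        return self.inbox.append

alice, bob = Party("alice"), Party("bob")
alice.author("the contract")

# "Signing": the relying party exercises Alice's authenticity capability.
is_from_alice = alice.check_capability()
assert is_from_alice("the contract")
assert not is_from_alice("a forgery")

# "Encryption": send the something down a capability that reaches only Bob.
send_to_bob = bob.deliver_capability()
send_to_bob("secret")
assert bob.inbox == ["secret"]
```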

So, I claim, there is effectively a mapping between crypto (at least for signing and encrypting) and a great capability machine in the sky (i.e. the GTCSCF).

Considering this idea further, it seems to me that this is essentially the core idea behind Universal Composability. If I can show that my crypto system does indeed map to a GTCSCF, then I have a crypto system that can clearly be composed with other crypto systems, and only have the consequences we would expect from a capability-secure system that implemented the same functionality.

What would it mean to make such a proof? My, perhaps amateur, understanding is that you would have to show that the corresponding capabilities have the properties we expect of them: that they are opaque, unforgeable, and only obtainable by being handed them in some way.

This sounds doable to me, modulo assumptions about the hardness of the discrete log problem, and the like.

15 Apr 2007


Filed under: Distributed stuff,Security — Ben @ 18:20

It isn’t often that I get to mention some of the weirder things I do at work, but here’s a write-up of something I was peripherally involved in, a bot that attempted to do distributed click fraud.

2 Mar 2007

Scalable Internet Architectures

Filed under: Distributed stuff,Open Source — Ben @ 15:22

My friend Theo Schlossnagle has for many years run one of the most popular tutorials at ApacheCon, Scalable Internet Architectures. In this tutorial he covers the ways he has built successful, resilient and very large-scale server farms.

Theo doesn’t believe in the traditional big-iron, expensive-database scheme for making these things, preferring tools like Spread, Backhand and Wackamole. Anyway, to cut to the chase, he has, at long last, written a book about it – which I’ve just finished reading. If you’ve ever built a server farm, you’ll find the first few chapters easy going, but after that he dives into the technical details rapidly and lucidly. I recommend it highly.

5 Feb 2007

Microformats, Decoupling and Self-Contained Standards

Filed under: Distributed stuff,Rants — Ben @ 4:03

Perhaps I don’t get microformats. I keep hearing people wanting to invent their own format for things for which we already have half a dozen known standards. When pressed, the justification is either that it is too complicated, or that they want to “decouple” from whatever-it-is that the existing formats are “supposed” to be for.

Sometimes this is fair comment, but often it seems to me to entirely miss the point. When a standard format is self-contained (that is, it doesn’t rely on being embedded in a whole mess of infrastructure in order to be meaningful) there’s no reason to associate it with its normal environment. Because it is self-contained you can just pick it up and use it elsewhere. There are many formats like this, at all levels of the stack; examples are OpenPGP, iCal, vCard, practically all XML, and, if you get right down to it, most of TCP/IP (witness amusing standards like IP over carrier pigeon – no, really, RFC 1149 – and it’s even been implemented).

How about complicated? Well, I contend that any widely used standard format has libraries that can parse it, and if it doesn’t, then software engineers need to put their software architect heads on occasionally, dammit.

So, neither of these arguments stands up, as far as I can see. Which leads me to wonder: what are microformats all about? Why do people want to decouple? Are they just lazy? Or do they hate the communities that make the standards so much they want nothing to do with them? Or are they merely misguided?

Or have I totally missed the point, and microformats are actually only used where there’s no existing self-contained standard?

Answers on a postcard, please!

20 Jan 2007

OpenID and Phishing: Episode II

I do intend to write about mitigation at some point in the near future, but in the meantime points have been raised that I want to respond to.

First, the ridiculous (I hope there’s some sublime somewhere): HeresTomWithTheWeather says

criticizing the openid spec for not addressing phishing seems to be no different than criticizing the ip protocol because it doesn’t provide reliable, ordered data delivery.

Strangely, people want unreliable, unordered data delivery: it’s a useful service, unlike phishing. He goes on to say

this is old news. yawn.

So are murder and children starving to death.

Gary Krall says

A fair bit of time and consideration was given to some of the issues you raised here at VeriSign. Hans Granqvist drove alot of this in the use of Security Profiles.

and Hans Granqvist whines (his own words, I promise)

There have been, since October 2006, a set of defined OpenID security profiles. The lion part of the profiles have been incorporated into the core spec.

Firstly, the really important stuff: Gary, it’s “a lot”, and Hans, “the lion’s share”. But seriously, if these address the phishing issue I’m obviously missing something major. From the security profiles document

By agreeing on one or several such security profiles, an OpenID relying party and identity provider can decide the security properties for their mutual OpenID communication before such communication takes place.

Phishing needs security between the identity provider (OP, actually, in OpenID parlance – I wish they’d be consistent) and the user. Can’t really see how security between the RP and the OP is going to address this. How about the “lion’s share” that’s gone into the main document?

A special type of man-in-the-middle attack is one where the Relying Party is a rogue party acting as a MITM. The RP would perform discovery on the End User’s Claimed Identifier and instead of redirecting the User Agent to the OP, would instead proxy the OP through itself. This would thus allow the RP to capture credentials the End User provides to the OP. While there are multiple ways to prevent this sort of attack, the specifics are outside the scope of this document. Each method of prevention requires that the OP establish a secure channel with the End User.

I think this is rather poorly expressed, but clearly describes the attack I have in mind. Almost consistently, it nearly punts on the issue. The one piece of information it adds: “each method of prevention requires that the OP establish a secure channel with the End User” strikes me as unsound, unless you take a rather wide view of the meaning of “secure channel”. There are, for example, zero-knowledge protocols that will not reveal any credentials to a man-in-the-middle, but do not require a secure channel for their execution. In any case, no useful advice is offered, despite claims to the contrary.

My friend, Adam Shostack, muses

It seems to me that if my local software knows who my ID providers are, rather than being told, then the problem goes away?

Indeed, but OpenID’s ground rules are that you should not need local software, and this is the nub of the issue.

Authentication on the web is broken, and has been for a long time. The OpenID fanboys want OpenID to work on any old platform using only standard software, and so therefore are doomed to live in the world of broken authentication. This is fine if what you protect with your OpenID is worthless, but it seems clear that these types of protocol are going to be used to authenticate for things of value. Why else would Verisign be in this game, for example? Or, indeed, Microsoft? Or IBM, HP and T-Mobile?

This is the root of the problem: if you want to protect anything of value, you have to do better than existing Web solutions. You need better client-side software. In an ideal world, this would be a standard component of browsers, but it isn’t. Why? Well, the reason is fairly apparent: the best general way to handle this problem is through zero-knowledge proofs. SRP is an often-quoted example, but there are many simpler ones. However, various (already rich) greedy bastards have successfully blocked wide deployment of these protocols in a cynical attempt to profit from patents that (allegedly) cover them. Sad, I think, that the world continues to suffer whilst a few seek a drop in their ocean of money. Since these general (and, I should add, very simple) solutions cannot be deployed, we end up with purpose-specific plugins instead of general-purpose mechanisms.
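For the curious, one of those simpler protocols is Schnorr-style identification. A toy sketch, with tiny illustrative parameters (real deployments use large groups): the user proves knowledge of a secret x without ever sending it, so a man-in-the-middle relaying the exchange captures nothing reusable.

```python
import secrets

# Tiny demonstration group: G generates a subgroup of prime order Q
# modulo P. The user's long-term secret is x; the public key y = G^x
# is what the identity provider stores.
P, Q, G = 23, 11, 2

x = secrets.randbelow(Q)   # the user's secret -- never transmitted
y = pow(G, x, P)           # registered public key

def commit():
    r = secrets.randbelow(Q)
    return r, pow(G, r, P)          # keep r secret, send the commitment

def respond(r, challenge):
    return (r + challenge * x) % Q

def check(commitment, challenge, response):
    return pow(G, response, P) == (commitment * pow(y, challenge, P)) % P

r, t = commit()                     # 1. prover commits
c = secrets.randbelow(Q)            # 2. verifier picks a random challenge
s = respond(r, c)                   # 3. prover responds
assert check(t, c, s)               # 4. verifier accepts; x stayed home
```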

Finally, if I might go all Slashdot on you for a moment, from the light-at-the-end-of-the-tunnel department, David Recordon of Verisign Labs (and an editor of the OpenID specs) says

we’d love to spend time working with you to figure out what it would take to resolve your issues with the spec. With that said, I really do think that it will come from browser plugins and such.

which is nice. I will accept.

15 Dec 2006

Jobs for the Boys: DHS and the Root Zone

Filed under: Distributed stuff,Security — Ben @ 15:56

The Department of Homeland Security have a spec for signing the root. I’m sure they didn’t intend it to be (given the “NOT FOR FURTHER DISTRIBUTION” notice), but it is publicly available in a mailing list archive. In this spec they include the staffing requirements, which come to an astonishing 20 full-timers. Yes, 20 people to sign a zone that currently contains 2,470 entries for 1,193 names (most of which are glue), delegating a whole 265 domains.

Another part I find amusing (OK, I’m easily amused) is section 7.6 “Non-Scheduled Operations”.

A change in the KSK [Key Signing Key – the key everything else depends on] on the other hand takes a longer time as the new KSK has to be configured into resolvers all over the world, which can only take place after the operators of the resolvers have been convinced that the new KSK is valid.

So, “takes a longer time” is one way of putting it. Takes forever would be, perhaps, more accurate. I have a much better solution for this. But I guess it won’t be popular since it clearly makes the root redundant, and I’m sure ICANN, the DHS and the Department of Commerce wouldn’t like that. On the other hand, I think making the root irrelevant would fix a huge pile of stupidity that’s currently going on. And that would be a Good Thing.

2 Aug 2006

Physical Onion Routing

One of the recurring themes in my musings about identity management is my desire for unlinkability – if every transaction (in the broadest sense of that word) is independent of every other then it makes it difficult (I’d like to say impossible, but I’m a cynic) for anyone to build up a picture about you (for whatever value of “you” you’d like to choose).

But the thing that drives a coach and horses through this worthy goal is physical goods. All too often you end up wanting something delivered – a book, a CD, beer – and it has to go to somewhere linkable to you.

So, it occurred to me that you could arrange the physical equivalent of onion routing. Choose a friend, encrypt your address with his public key. Then choose another and encrypt friend one’s address and your encrypted address with his key, and a third and encrypt the second’s address, friend one’s encrypted address and your doubly-encrypted address to him. Give your provider of goods the third’s address and the encrypted package.

The provider then wraps your parcel up three times. On the outside of the third wrapper he puts the address of the third friend and the encrypted package. When it arrives at friend three, he decrypts the package, getting friend two’s address and a new encrypted package, which he then applies to the outside of the parcel and sends it on. Friends two and one repeat the process, the parcel arrives at your house, no-one knows where it came from and who it went to. Yes, friend one knows you got something, but has no idea where it came from. Friend three knows where it came from but not who it went to, and friend two separates them.
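A sketch of the layered wrapping, using an XOR stream construction as a stand-in for each friend’s public-key encryption (all the names and keys here are hypothetical):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher as a stand-in; the real scheme would encrypt
    # each layer to the corresponding friend's *public* key.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR is its own inverse

# Build the onion inside-out: your address sits in the innermost layer.
packet = encrypt(b"key-friend1", b"addr: your house")
packet = encrypt(b"key-friend2", b"next: friend1|" + packet)
packet = encrypt(b"key-friend3", b"next: friend2|" + packet)

# Each friend peels one layer and learns only the next hop.
layer3 = decrypt(b"key-friend3", packet)
assert layer3.startswith(b"next: friend2|")
layer2 = decrypt(b"key-friend2", layer3.split(b"|", 1)[1])
assert layer2.startswith(b"next: friend1|")
layer1 = decrypt(b"key-friend1", layer2.split(b"|", 1)[1])
assert layer1 == b"addr: your house"
```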

Any volunteers?

2 Mar 2006

The BBC Thinks RC4 is Crackable

Newsnight got a ton of flak over describing file sharing as theft. But, they whine, the real point is that encryption is being used, like, all over the place! And this means that the good folk at GCHQ will have a terrible time decrypting it all. Which they need to do to catch all the paedophiles and terrorists, obviously.

What they’ve totally missed is that the volume is not the issue, the strength of the encryption is. Newsnight’s self-styled “resident ubergeek”, Adam Livingstone, thinks RC4 is weak and could be cracked if only those pesky BitTorrenters wouldn’t clutter up the ‘net with their encrypted copies of broadcast TV (which, of course, they shouldn’t be sharing anyway – just because anyone can watch it, it doesn’t mean anyone can watch it, now does it? That stands to reason).

Mr. Livingstone should try consulting some real geeks before he opens his big mouth again.

Oh, and they also sob:

What we’d really like to hear is a debate on the issue we did raise. If the ISPs can’t now detect torrent data, then how will the security services manage it? And if they do figure it out, won’t RnySmile and company just up the ante again?

If you want a debate on that, dude, then provide somewhere to debate it. Or just read my blog – that’s your kind of debate: unidirectional.

19 Feb 2006

Distributed Hash Tables Revisited

Filed under: Anonymity/Privacy,Distributed stuff — Ben @ 22:47

I said it probably wasn’t original, and I was right. Beehive from Cornell is a concrete implementation of something very like the technique I described. It’s been used for various interesting projects, including P2P DNS, something that’s made possible, or even plausible, by DNSSEC.

The cool thing about using P2P for DNS is that it’s DoS resistant. And, apparently, you can get acceptable response speeds out of a P2P system.

I’d like to see more experimentation with P2P for infrastructure. It’s certainly one way to improve the preservation of our privacy.

12 Feb 2006

Distributed Hash Tables and the Long Tail

Filed under: Distributed stuff — Ben @ 18:04

I spent some time with my friend Ben Hyde recently, and we got talking about distributed hash tables, and his favourite topic, power law distributions. Apparently if you are part of, say, a file-sharing network, and you happen to be the node that has the hash for some fantastically popular file, then you suffer a lot of pain: everyone requesting that file has to talk to you to find out where to get it from and this kills your ‘net connection.

So, I had this idea, which was probably not original, but since Ben thought it might work and the blogosphere is the new peer-reviewed journal, here it is.

At each node that is “responsible” for a hash, measure the traffic to that hash (i.e. number of requests). Take the log of the traffic and combine it with the hash, giving a new hash. Then the nodes that serve that hash should be determined by the new hash. The higher the log of the traffic, the more nodes should serve it. When participating nodes detect that the traffic has changed sufficiently, they should (obviously) hand off to the resulting new hash.

To search for a hash in this scheme, clients should start with the highest possible traffic and pick a random node (or two or three) to query that would serve the hash at that traffic level. If this fails, decrease the log and try again.
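The scheme can be sketched as follows, under some assumed details (a fixed membership list, a replica count tied to the traffic level, and a simulated “did a candidate answer” check – none of these are prescribed above):

```python
import hashlib
import math

NODES = [f"node{i}" for i in range(50)]  # hypothetical DHT membership
MAX_LEVEL = 30                           # highest log2(traffic) considered

def owners(content_hash: str, level: int) -> list[str]:
    """Nodes serving a hash at a given traffic level: combine the hash
    with log2(traffic), and let the level also set the replica count."""
    replicas = 1 + level  # more traffic -> more serving nodes
    picked = []
    for r in range(replicas):
        h = hashlib.sha256(f"{content_hash}|{level}|{r}".encode()).digest()
        picked.append(NODES[int.from_bytes(h, "big") % len(NODES)])
    return picked

def publish(content_hash: str, traffic: int) -> tuple[int, list[str]]:
    level = int(math.log2(max(traffic, 1)))
    return level, owners(content_hash, level)

def lookup(content_hash: str, actual_level: int) -> list[str]:
    """Client walks down from the highest level until someone answers."""
    for level in range(MAX_LEVEL, -1, -1):
        candidates = owners(content_hash, level)
        if level == actual_level:  # stand-in for "a candidate answered"
            return candidates
    return []

level, serving = publish("deadbeef", traffic=1_000_000)
assert lookup("deadbeef", level) == serving
assert len(serving) == 1 + level  # popular hashes spread over more nodes
```

When traffic crosses a power-of-two boundary the level changes, the owning set moves to a fresh hash, and the old owners hand off – which is the extra global cost the text mentions.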

This should (at the cost of more global load) reduce local load.

It adds some complication, of course, and probably increases the chances of a false negative.

16 Sep 2005

Splash! Startup Fixed

Filed under: Distributed stuff — Ben @ 14:56

For those that don’t know, Splash! is a distributed masterless database, based on Spread, which is a group communication system.

Anyway, it has long suffered from a problem: if the database gets big, then when a new instance of Splash! starts it has a tendency to die. This is because Spread will simply drop a client that sends messages too fast. Wrongheaded, IMNSHO, but then, Spread does a fair few wrongheaded things. As of yesterday, it’s fixed: what I did in the end was make the database refresh over a unicast link. The sender of the refresh forks so they don’t block, and the receiver doesn’t care about blocking, coz it’s dead in the water until it gets refreshed anyway.

Seems to work.

I have a long-term ambition to replace both Spread and Splash! with something much cooler. So do other people I know – but I’d love to hear from people with similar ambitions.

Powered by WordPress