Links

Ben Laurie blathering

24 Jan 2013

Up-Goer 5 Capabilities

Filed under: Capabilities,Security — Ben @ 23:42

Just for fun, I tried to explain capabilities using only the ten hundred most used words. Here’s what I came up with.

They are a way to allow people to use the things they are allowed to use and not the things they are not allowed to use by giving them a key for each thing they are allowed to use. Every thing has its own key. If you are shown a key you can make another key for the same thing. Keys should be very, very, very hard to guess.

For capability purists: yes, I am describing network capabilities.

25 Nov 2012

Clocks and Security

Filed under: Security — Ben @ 19:48

I keep running into this.

People want to design protocols where replay attacks are prevented. To prevent replay attacks, you have to keep track of what’s already been said. But you don’t want to do this forever, so the smart thing to do is include time in the protocol. That is, the message is accepted if both the timestamp is recent and the nonce (or whatever) has not previously been used. This means that the server can discard nonces after a while and not worry about allowing replays of very old packets.
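
To make that concrete, here is a rough sketch of the server-side check. Everything in it – names, window length, cache size – is illustrative, not taken from any particular protocol: accept a message only if its timestamp is within a freshness window of the server’s clock and its nonce has not been seen; nonces older than the window can safely be forgotten.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>
    #include <time.h>

    #define WINDOW_SECS 300          /* illustrative freshness window */
    #define MAX_NONCES  4096         /* toy fixed-size cache */
    #define NONCE_LEN   16

    struct seen_nonce {
        uint8_t nonce[NONCE_LEN];
        time_t  stamp;
    };

    static struct seen_nonce cache[MAX_NONCES];
    static size_t cache_used;

    /* Accept iff the timestamp is recent and the nonce is fresh. Entries
     * whose timestamps have aged out of the window are recycled, so the
     * cache never needs to grow without bound. */
    bool accept_message(const uint8_t nonce[NONCE_LEN], time_t msg_time)
    {
        time_t now = time(NULL);

        if (msg_time < now - WINDOW_SECS || msg_time > now + WINDOW_SECS)
            return false;                 /* too old, or too far in the future */

        size_t free_slot = MAX_NONCES;
        for (size_t i = 0; i < cache_used; i++) {
            if (cache[i].stamp < now - WINDOW_SECS) {
                free_slot = i;            /* expired entry, reusable */
                continue;
            }
            if (memcmp(cache[i].nonce, nonce, NONCE_LEN) == 0)
                return false;             /* replay */
        }

        if (free_slot == MAX_NONCES) {
            if (cache_used == MAX_NONCES)
                return false;             /* toy cache full; size it to the window in real code */
            free_slot = cache_used++;
        }
        memcpy(cache[free_slot].nonce, nonce, NONCE_LEN);
        cache[free_slot].stamp = msg_time;
        return true;
    }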

But then the problem is that clients don’t have the right time.

And so people jump through hoops to take account of this fact – extra round trips, time offsets, all sorts of nonsense.

Can we stop dancing around the problem and just fix the damn client’s clock? If the user wants the clock set wrong, fine – all we need is a correct clock for protocols. The machine can continue to show the user whatever wrong time he wants to see.

So how do we get this correct clock? Well, that doesn’t seem hard – we have NTP already, and it works pretty well. If we have mutually distrusting parties that don’t want to rely on each other’s clocks, then it doesn’t seem hard to have clocks with signatures (NTP already supports this) so each distrusting group can nominate its trusted time servers and multiple clocks can be maintained for them.

This seems like an entirely soluble problem, yet every time I review a protocol that needs it, it is thrown up as completely insoluble. It really seems like it’s time to bite that bullet – it’s not even hard (a rubber bullet?)!

Note: this does not solve any problems to do with untrusted clients. You still need to design your protocols to resist clients that want to mess with you. But at least you could stop worrying about time skew.

4 Oct 2012

What Is SHA-3 Good For?

Filed under: Crypto,Security — Ben @ 10:40

Cryptographers are excited because NIST have announced the selection of SHA-3. There are various reasons to like SHA-3, perhaps most importantly because it uses a different design from its predecessors, so attacks that work against them are unlikely to work against it.

But if I were paranoid, there’d be something else I’d be thinking about: SHA-3 is particularly fast in hardware. So what’s bad about that? Well, in practice, on most platforms, this is not actually particularly useful: it is quite expensive to get data out of your CPU and into special-purpose hardware – so expensive that hardware offload of hashing is completely unheard of. In fact, even more expensive crypto is hardly worth offloading, which is why specialist crypto hardware manufacturers tend to concentrate on the lucrative HSM market, rather than on accelerators, these days.

So, who benefits from high speed hardware? In practice, it mostly seems to be attackers – for example, if I want to crack a large number of hashed passwords, then it is useful to build special hardware to do so.

It is notable, at least to the paranoid, that the other recent crypto competition by NIST, AES, was also hardware friendly – but again, in a way useful mostly to attackers. In particular, AES is very fast to key – this is a property that is almost completely useless for defence, but, once more, great if you have some encrypted stuff that you are hoping to crack.

The question is, who stands to benefit from this? Well, there is a certain agency who are building a giant data centre who might just like us all to be using crypto that’s easy to attack if you have sufficient resource, and who have a history of working with NIST.

Just sayin’.

20 Sep 2012

Compression Violates Semantic Security

Filed under: Brain Function,Crypto,Security — Ben @ 16:24

There’s been quite a lot of noise about the still not-fully-disclosed CRIME attack on TLS recently. But, fully disclosed or not, I think we can say with certainty that compression is a problem.

The interesting thing, to me at least, is that, in retrospect, this is completely obvious. In cryptography, we have standards that we hold encryption algorithms to, and one of these is semantic security. In short, this means that an attacker should learn nothing (other than length[1]) about a plaintext, given its ciphertext. One way this is often phrased is as a game: given two plaintexts of equal lengths, and one ciphertext made from one of the two plaintexts, then an attacker, who knows everything about the algorithm other than the key, should not be able to guess better than chance which of the two plaintexts was used.

It is obvious that, in general, if compression is used, this game can only go the attacker’s way: the length of the ciphertext must reveal something about the content of the plaintext. This is because, in general, not all texts can compress – indeed, if some plaintexts come out shorter, there must also be some that come out longer. So, since the attacker knows what compression algorithm is in use, he can tell which of the two plaintexts was used by the length of the ciphertext, in general (note that there may be pairs of plaintexts for which this is not true, but in general, there are pairs where the lengths are different). And thus he wins the game, which shows that compression simply cannot be used in a system giving semantic security[2].
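
A toy demonstration of the point, using zlib (the strings and the setup are mine, not taken from the CRIME work): two plaintexts of exactly the same length compress to different lengths, so the length of the ciphertext alone is enough to tell them apart.

    /* Compile with: cc leak.c -lz */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    static unsigned long deflated_size(const char *s)
    {
        uLong src_len = (uLong)strlen(s);
        uLongf dst_len = compressBound(src_len);
        Bytef dst[1024];
        if (compress2(dst, &dst_len, (const Bytef *)s, src_len, 9) != Z_OK)
            return 0;
        return (unsigned long)dst_len;
    }

    int main(void)
    {
        /* Two plaintexts of exactly the same length... */
        const char *a = "secret=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"; /* compresses well  */
        const char *b = "secret=q8Zr3kLpV0xN7tWj2bYfHs5dGmC9eKuA"; /* compresses badly */

        printf("plaintext lengths: %zu and %zu\n", strlen(a), strlen(b));
        printf("compressed lengths: %lu and %lu\n", deflated_size(a), deflated_size(b));
        /* The compressed lengths differ, so an attacker who sees only
         * ciphertext lengths wins the semantic security game. */
        return 0;
    }

The two compressed lengths differ, and that difference is all the attacker in the game needs.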

And we expect TLS to give semantic security, just like all modern crypto. So, it should’ve been obvious from the start that compression was a non-runner. Why did we not realise? I think the answer to that question would be very interesting indeed. Also, what else do we do now that obviously violates semantic security?

[1] Sometimes even academics admit to real world constraints!

[2] Pedants might argue that actually, yes, you can use compression: just pad everything to the longest a plaintext could compress to. As I’ve noted above, if the compression works at all (that is, some texts are reduced in length), then some texts must actually expand. Which means that you must pad to longer than the original length. So, yeah, pedant, you can use compression, but only if it actually expands!

11 Sep 2012

Revocation Transparency and Sovereign Keys

Filed under: Certificate Transparency,Security — Ben @ 16:09

In line with Certificate Transparency (note, updated version, 2.1a), we’ve been thinking about how to do something similar for revocation. Not because we have any particular plan but because as soon as we mention CT, people always say “what about revocation?”. Which is, admittedly, in a bit of a pickle, and it isn’t at all obvious how to fix it. But however it’s fixed, we think it’s a good idea to have transparency – for everyone to be assured that they are seeing revocation state that is the same as everyone else is seeing, and for revocations to be auditable – just as we think certificate issuance should be.

So, we’re quite excited that recently we came up with not one, but two, mechanisms. One of them (Sparse Merkle Trees) even appears to be novel. There’s a brief write-up here.

Also, it turns out, Sparse Merkle Trees can be used to solve a problem that has been bugging me with Sovereign Keys since day one. The issue is that in SK the relying party needs to trust mirrors to tell it what the current status of any particular domain is (i.e. what the current key is), because the only other way to be sure is to download the entire database, which will be many gigabytes long. Using Sparse Merkle Trees plus a CT-like append-only log (as described in the RT document), this is no longer the case. Instead, we can generate a sparse tree containing leaves corresponding to the hashes of domain names. The value stored at the leaf is the domain’s current key (or whatever we want to store there). The sparse tree allows us to verify efficiently that we are, indeed, seeing the latest version, and the append-only log prevents abuse of the change mechanism to make temporary changes shown only to a subset of relying parties.
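
The write-up has the details, but purely to illustrate why a sparse tree over 2^256 leaves is workable at all, here is a toy sketch of the central trick (my own code, not from the RT document): the hash of a completely empty subtree at each height can be precomputed once, so computing the root of a tree with a handful of real leaves only ever takes a few hundred hash operations, however astronomically large the tree nominally is.

    /* Sketch only: a depth-256 sparse Merkle tree in which every leaf is
     * "empty" except one. Uses OpenSSL's SHA256; compile with -lcrypto. */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>

    #define DEPTH 256
    #define H_LEN SHA256_DIGEST_LENGTH

    static void hash_pair(const unsigned char *l, const unsigned char *r,
                          unsigned char out[H_LEN])
    {
        unsigned char buf[2 * H_LEN];
        memcpy(buf, l, H_LEN);
        memcpy(buf + H_LEN, r, H_LEN);
        SHA256(buf, sizeof(buf), out);
    }

    int main(void)
    {
        /* dflt[d] = hash of an entirely empty subtree of height d */
        static unsigned char dflt[DEPTH + 1][H_LEN];
        memset(dflt[0], 0, H_LEN);                      /* the empty leaf */
        for (int d = 1; d <= DEPTH; d++)
            hash_pair(dflt[d - 1], dflt[d - 1], dflt[d]);

        /* One real leaf: index = SHA256(domain name), value = whatever we store there */
        const char *domain = "example.com";
        unsigned char index[H_LEN], node[H_LEN];
        SHA256((const unsigned char *)domain, strlen(domain), index);
        SHA256((const unsigned char *)"current-key-material",
               strlen("current-key-material"), node);

        /* Walk from the leaf to the root; every sibling on the path is an
         * empty subtree, so its hash is just the precomputed default. */
        for (int d = 0; d < DEPTH; d++) {
            int bit = (index[H_LEN - 1 - d / 8] >> (d % 8)) & 1;
            if (bit)
                hash_pair(dflt[d], node, node);         /* we are the right child */
            else
                hash_pair(node, dflt[d], node);         /* we are the left child  */
        }

        for (int i = 0; i < H_LEN; i++)
            printf("%02x", node[i]);
        printf("  <- root of a nominally 2^256-leaf tree, from a few hundred hashes\n");
        return 0;
    }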

23 Aug 2012

Who Remembers VASCO?

Filed under: Certificate Transparency,Security — Ben @ 15:28

When I talk to people about what I’m doing, I usually mention the DigiNotar fiasco. I’m often surprised by how many people remember it, especially those not involved in security – and often not particularly technical.

DigiNotar, of course, no longer exists as a result of this incident. But who remembers VASCO, the company that owned DigiNotar? No-one, as far as I can tell. Apparently they suffer not at all from their incompetence.

I particularly love their press release:

VASCO expects the impact of the breach of DigiNotar’s SSL and EVSSL business to be minimal. Through the first six months of 2011, revenue from the SSL and EVSSL business was less than Euro 100,000. VASCO does not expect that the DigiNotar security incident will have a significant impact on the company’s future revenue or business plans.

Well, they were not wrong there!

14 Aug 2012

Verifiable Logs: Solving The “Cryptocat Problem”

Filed under: Certificate Transparency,Open Source,Security — Ben @ 15:43

There’s been a lot of heat about Cryptocat lately. But not much light. In summary, one side says you can’t trust software you download from the ‘net to not reveal all your secrets, and the other side says that’s all we got, so suck it up. So, how do we fix this problem?

First off, let’s take a look at the core of the problem: if you download something from the ‘net, how can you be sure what you got is what was advertised? One of the much-lauded benefits of open source is that it can be reviewed – experts can take a look and see whether it really does what it says. So, that deals with half the problem. But how do we know we got what the experts reviewed?

I propose that the answer is publicly verifiable logs. The idea is that anyone can operate a log of “stuff” that can be verified by anyone else. What do I mean by “verified”? I mean that if two people see the log, they can mutually check that they saw the same thing. Of course, this is trivial if you are prepared to send the whole log to each other – just check they’re identical. The trick is to do this verification efficiently.

Luckily we have a way to do that: Merkle Trees. These allow us to summarise the log with a short chunk of binary (the “root”). If we both get the same root, then we both have the same log. What’s more, they also allow an efficient proof that any particular item is in the log – given the item, the log can show a chain of hashes leading to the root. This chain proves that the item actually is in the log summarised by the root.

What’s more, with only a bit more cunningness, we can also efficiently show that any version of the log (with more data appended) contains any prior version. In other words, we can show that the log never deletes anything, but only grows by adding new things at the end.

Got it? To reiterate: it is possible to create a log that can demonstrate that everyone sees the same version, and that as it grows, everyone continues to see the same data added to it. What’s more, these things can be done efficiently[1].
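
To make “efficiently” a little more concrete, here is a sketch of checking an inclusion proof (mine, not the CT code): the verifier re-hashes its way up from the leaf, combining with the sibling hashes the log supplied, and checks that it arrives at the root it already has.

    /* Sketch of Merkle audit-path verification, using OpenSSL's SHA256
     * (compile with -lcrypto). The proof is a list of sibling hashes plus,
     * for each, whether the sibling sits to the left or the right. */
    #include <stdbool.h>
    #include <string.h>
    #include <openssl/sha.h>

    #define H_LEN SHA256_DIGEST_LENGTH

    struct proof_node {
        unsigned char hash[H_LEN];
        bool sibling_is_left;        /* position of the sibling at this level */
    };

    bool verify_inclusion(const unsigned char leaf_hash[H_LEN],
                          const struct proof_node *path, size_t path_len,
                          const unsigned char expected_root[H_LEN])
    {
        unsigned char cur[H_LEN], buf[2 * H_LEN];
        memcpy(cur, leaf_hash, H_LEN);

        for (size_t i = 0; i < path_len; i++) {
            if (path[i].sibling_is_left) {
                memcpy(buf, path[i].hash, H_LEN);
                memcpy(buf + H_LEN, cur, H_LEN);
            } else {
                memcpy(buf, cur, H_LEN);
                memcpy(buf + H_LEN, path[i].hash, H_LEN);
            }
            SHA256(buf, sizeof(buf), cur);   /* move one level up the tree */
        }
        /* log_2(n) iterations later, cur should be the published root */
        return memcmp(cur, expected_root, H_LEN) == 0;
    }

For a log with a billion entries, path_len is around 30, which is where the “less than 1 kB” figure in the footnote comes from.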

Now we have that piece of machinery, how do we use it to solve the “Cryptocat problem”? Simple: every time Cryptocat does a new release, it pushes a copy of the source into the verifiable log. Every time you download Cryptocat, you verify that the version you are given is in the public log, and refuse to run it if not. And we’re done.

If Cryptocat ever decides to release a version that, say, reveals your keys, or decrypts your chats for a third party, then that version is on display for all to see. Cryptocat will get caught – and likely caught quite quickly. If Cryptocat tries to avoid this publication, then you won’t run it, so you’ll be safe.

Admittedly this does not actually _prevent_ Cryptocat from shafting you, but it does mean it is very unlikely to get away with it, and having done it once, it will probably not get the chance to do it to anyone again…

Note that it doesn’t matter if the author of Cryptocat is the one who made the change, or someone who hacked his site, or a man-in-the-middle. If they do not publish source, then you won’t run it. And if they do publish source, they get caught.

Incidentally, I originally proposed publicly verifiable logs for fixing PKI but they have many uses. Also, for Certificate Transparency, we are implementing a publicly verifiable log. I would be very happy to help with a version for logging software instead of certificates.

[1] To get an idea of what I mean by “efficiently” a proof that two log versions are consistent or that a particular item is in a particular log version consists of log_2(n) hashes, where n is the number of items in the log. So, for a log with a billion items, this proof would have around 30 entries, each, say, 32 bytes long. So, it takes me less than 1 kB for a proof about a log with a billion entries. How about a trillion? Just ten more entries, i.e. under 1,300 bytes.

31 Jul 2012

Certificate Transparency Version 2

Filed under: Certificate Transparency,Security — Ben @ 23:46

A lot of people didn’t like that the original version had a delay before you could issue a new certificate. So, we redesigned the protocol to avoid that problem.

In a nutshell, a new certificate is sent to the log, which immediately returns a signed hash of the certificate, indicating that the cert will be included in the log. It is required to actually appear in the log before a certain amount of time has passed. Other than that, everything proceeds along the same lines as before, though there are many detailed changes.
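
Purely as a sketch of the resulting client and auditor logic – the real wire structures, field names and signature details are in the spec, not here:

    /* Illustrative only: the log hands back a signed promise of inclusion,
     * the client accepts immediately, and an auditor later checks that the
     * promise was kept within the maximum merge delay (MMD). */
    #include <stdbool.h>
    #include <stdint.h>

    struct log_promise {
        uint64_t timestamp_ms;   /* when the log signed the promise        */
        uint64_t mmd_ms;         /* how long it has to include the cert    */
    };

    /* At handshake time: the certificate is acceptable straight away if the
     * log's signature over (timestamp, certificate hash) checks out. */
    bool accept_at_handshake(bool signature_valid)
    {
        return signature_valid;
    }

    /* Later: the certificate must actually have appeared in the log once
     * the merge delay has elapsed; otherwise the signed promise is itself
     * evidence of the log's misbehaviour. */
    bool log_kept_promise(const struct log_promise *p, uint64_t now_ms,
                          bool cert_is_in_log)
    {
        if (now_ms < p->timestamp_ms + p->mmd_ms)
            return true;         /* too early to tell; nothing to flag yet */
        return cert_is_in_log;
    }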

As always, comments welcome.

22 May 2012

Factoring RSA

Filed under: Crypto,Open Source,Security — Ben @ 13:13

Apparently I have not blogged about factoring weak RSA keys before. Well, I guess I have now 🙂

One thing I’ve been wondering ever since that research was done is: is there anything OpenSSL could do about this? I’ve been assuming OpenSSL was used to generate at least some of those keys.

So, I was interested to read this analysis. First off, it shows that it is highly likely that the bad keys were generated by OpenSSL and one other (proprietary) implementation. However, I have to argue with some details in an otherwise excellent writeup.

Firstly, this canard irritates me:

Until version 0.9.7 (released on Dec 31, 2002) OpenSSL relied exclusively on the /dev/urandom source, which by its very definition is non-blocking. If it does not have enough entropy, it will keep churning out pseudo-random numbers possibly of very poor quality in terms of their unpredictability or uniqueness.

By definition? Whose definition? When did Linux man pages become “by definition”? In FreeBSD, which, IMO, has a much sounder approach to randomness, urandom does block until it has sufficient entropy. Is poor design of the OS OpenSSL’s fault?

Which brings me to

FreeBSD prior to version 5 posed its own problem, since its /dev/random source silently redirected to /dev/urandom.

Well. Modern FreeBSD versions link /dev/urandom to /dev/random. That doesn’t seem like a material change to me. I’m pretty sure that the implementation changed, too – perhaps that’s more important than filenames?

Finally, in the summary:

Some unfortunate choices by the OpenSSL library didn’t help either.

Oh really? So the fact that a 10-year-old version of OpenSSL used a device that in some OSes is not very well designed is contributing to this problem? I’m finding this a little hard to swallow. Also, “choices”? What choices? Only one choice is mentioned.

The real problem is, IMNSHO: if you provide a weak random number source, then people will use it when they shouldn’t. The problem here is with the OS that is providing the randomness, not the OpenSSL library. So, why is the OS (which I am prepared to bet is Linux) not even mentioned?

28 Apr 2012

Using Capsicum For Sandboxing

Filed under: Capabilities,General,Programming,Security — Ben @ 18:07

FreeBSD 9.0, released in January 2012, has experimental Capsicum support in the kernel, disabled by default. In FreeBSD 10, Capsicum will be enabled by default.

But unless code uses it, we get no benefit. So far, very little code uses Capsicum, mostly just experiments we did for our paper. I figured it was time to start changing that. Today, I’ll describe my first venture – sandboxing bzip2. I chose bzip2 partly because Ilya Bakulin had already done some of the work for me, but mostly because a common failure mode in modern software is mistakes made in complicated bit twiddling code, such as decompressors and ASN.1 decoders.

These can often lead to buffer overflows or integer over/underflows – and these often lead to remote code execution. Which is bad. bzip2 is no stranger to this problem: CVE-2010-0405 describes an integer overflow that could lead to remote code execution. The question is: would Capsicum have helped – and if it would, how practical is it to convert bzip2 to use Capsicum?

The answers are, respectively, “yes” and “fairly practical”.

First of all, how does Capsicum mitigate this problem? The obvious way to defend a decompressor is to run the decompression engine in a separate process with no privilege beyond that needed to get its job done – which is the ability to read the input and write the output. In Capsicum, this is easy to achieve: once the appropriate files are open, fork the process and enter capability mode in the child. Discard all permissions except the ability to read the input and write the output (in Capsicum, this means close all other file descriptors and limit those two to read and write), and then go ahead and decompress. Should there be a bug in the decompressor, what does the attacker get? Well, pretty much what he had already: the ability to read the input file (he supplied it, so no news there!) and the ability to write arbitrary content to the output file (he already had that, since he could have chosen arbitrary input and compressed it). He also gets to burn CPU and consume memory. But that’s it – no access to your files, the network, any other running process, or anything else interesting.

I think that’s pretty neat.
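
Here is a skeleton of that shape, stripped of everything bzip2-specific (the real conversion is in the diffs below, and also has to grant write on stderr and bridge a few callbacks back to the parent). It uses the later cap_rights_limit() interface from <sys/capsicum.h>; FreeBSD 9.0’s original API was slightly different.

    /* FreeBSD-only sketch of the open-fork-sandbox pattern described above. */
    #include <sys/capsicum.h>
    #include <sys/wait.h>
    #include <err.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Stand-in for the real decompressor: reads in_fd, writes out_fd. */
    static void decompress(int in_fd, int out_fd);

    int main(int argc, char **argv)
    {
        if (argc != 3)
            errx(1, "usage: %s input output", argv[0]);

        int in_fd = open(argv[1], O_RDONLY);
        int out_fd = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in_fd < 0 || out_fd < 0)
            err(1, "open");

        pid_t pid = fork();
        if (pid < 0)
            err(1, "fork");

        if (pid == 0) {
            /* Child: restrict the two descriptors to read and write, then
             * enter capability mode. After cap_enter(), no new descriptors
             * can be opened and global namespaces (filesystem, network,
             * other processes) are out of reach. */
            cap_rights_t rights;
            if (cap_rights_limit(in_fd, cap_rights_init(&rights, CAP_READ)) < 0 ||
                cap_rights_limit(out_fd, cap_rights_init(&rights, CAP_WRITE)) < 0 ||
                cap_enter() < 0)
                err(1, "capsicum");
            decompress(in_fd, out_fd);   /* a bug in here now gains the attacker very little */
            _exit(0);
        }

        /* Parent: keeps its privileges and just waits for the worker. */
        int status;
        waitpid(pid, &status, 0);
        return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
    }

    /* Placeholder so the sketch compiles: a plain copy loop standing in for
     * the real decompression engine. */
    static void decompress(int in_fd, int out_fd)
    {
        char buf[8192];
        ssize_t n;
        while ((n = read(in_fd, buf, sizeof(buf))) > 0)
            write(out_fd, buf, (size_t)n);
    }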

But how hard is it to do? I answer that question in a series of diffs on GitHub, showing a step-by-step transformation of bzip2 into the desired form. I used a technique I like to call error-driven development; the idea is you attempt to make changes that will cause compilation to fail until you have completely accomplished your goal. This is a useful way to reassure yourself that you have made all necessary updates and there’s nothing hiding away you didn’t take care of. If you follow along by building the various stages, you’ll see how it works.

It turns out that in bzip2 this matters – it isn’t very beautifully written, and the code that looks like it might cleanly just take an input file and an output file and do the work in isolation, actually interacts with the rest of the code through various function calls and globals. This causes a problem: once you’ve forked, those globals and functions are now in the wrong process (i.e. the child) and so it is necessary to use RPC to bridge any such things back to the parent process. Error-driven development assures us that we have caught and dealt with all such cases.

So how did this work out in practice? Firstly, it turns out we have to give the compressor a little more privilege: it writes to stderr if there are problems, so we need to also grant write on stderr (note that we could constrain what it writes with a bit more effort). The callbacks we have to provide do not, I think, let it do anything interesting: cause the program to exit, make the output file’s permissions match the input file’s, and remove the input or output files (ok, removing the input file is slightly interesting – but note that bzip2 does this anyway).

Secondly, because we have not yet decided on an RPC mechanism, this particular conversion involves quite a bit of boilerplate: wrapping and unwrapping arguments for RPCs, wiring them up and all that, all of which would be vastly reduced by a proper RPC generator. Try not to let it put you off 🙂

Finally, the system has (at least) one global, errno. I did not deal with that so far, which means some errors will report the wrong error – but it is not particularly hard to do so.

So, on to the diffs. This is something of an experimental way to present a piece of development, so I’d be interested in feedback. Here they are, in order:

And there you are: bzip2 is now rendered safe from decompressor exploits, and it was only a few hours work. As we refine the support infrastructure, it will be even less work.

3 Apr 2012

EFF Finally Notice 0day Market

Filed under: Security — Ben @ 13:37

Six years after I first blogged about it, the EFF have decided that selling 0days may not be so great.

Maybe they should be reading my blog? 🙂

1 Mar 2012

Certificate Transparency: Spec and Working Code

Filed under: Certificate Transparency,Crypto,Open Source — Ben @ 17:29

Quite a few people have said to me that Certificate Transparency (CT) sounds like a good idea, but they’d like to see a proper spec.

Well, there’s been one of those for quite a while, you can find the latest version in the code repository, or for your viewing convenience, I just made an HTML version.

Today, though, to go with that spec, I’m happy to announce working code for a subset of the protocol. This covers the trickiest part – a fully backwards compatible SSL handshake between servers and clients. The rest of the protocol will necessarily all be new code for interacting with the log server and other new components, and so should not have these issues.

If you build the code according to the README, then you will find instructions in test/README for the demo.

What this does, in short, is the following:

  • Run a CT log server. Currently this has no persistence across runs, but does keep a full log in memory.
  • Issue a self-signed server certificate. A CA issued certificate would also be fine, but not so easy to automate for a demo.
  • Use the CT client to register that certificate with the log server and to obtain a log proof for it.
  • Use the CT client to convert that proof into a fake “certificate” which can be included in the certificate chain in the TLS handshake.
  • Run an Apache 2.2 instance to serve the self-signed certificate and the log proof certificate. Note that Apache is unmodified, all that is needed is appropriate configuration.
  • Use the CT client to connect to the Apache instance and verify the presented log proof.
  • You can also connect to Apache with an existing browser to check that you can still access the site despite the presence of the log proof.

There’s plenty more to be done, but this is the part that needs the earliest scrutiny, since we are bending the rules to get backwards compatibility and avoid the need to change server software. Client software has to change anyway to provide any benefit to users, so that’s less of a worry.

We welcome discussion, suggestions and questions on the mailing list.

4 Feb 2012

Certificate Transparency Sites

Filed under: Crypto,Security — Ben @ 22:50

I may not have said much more about Certificate Transparency, but we’ve been working on it. So, those interested in following along (or joining in) are welcome to look at…

Website.

Mailing list.

Code repository.

The code repository also includes the spec, in xml2rfc format.

29 Nov 2011

Fixing CAs

Filed under: Security — Ben @ 12:58

Adam Langley and I have a proposal to bolster up the rather fragile Certificate Authority infrastructure.

TL;DNR: certificates are registered in a public audit log. Servers present proofs that their certificate is registered, along with the certificate itself. Clients check these proofs and domain owners monitor the logs. If a CA mis-issues a certificate then either

  • There is no proof of registration, so the browser rejects the certificate, or
  • There is a proof of registration and the certificate is published in the log, in which case the domain owner notices and complains, or
  • There is a proof of registration but the certificate does not appear in the log, in which case the proof is now proof that the log misbehaved and should be struck off.

And that, as they say, is that.

Update: Adam has blogged, exploring the design space.

19 Sep 2011

Lessons Not Learned

Filed under: Identity Management,Security — Ben @ 15:50

Anyone who has not had their head under a rock knows about the DigiNotar fiasco.

And those who’ve been paying attention will also know that DigiNotar’s failure is only the most recent in a long series of proofs of what we’ve known for a long time: Certificate Authorities are nothing but a money-making scam. They provide us with no protection whatsoever.

So imagine how delighted I am that we’ve learnt the lessons here (not!) and are now proceeding with an even less-likely-to-succeed plan using OpenID. Well, the US is.

If the plan works, consumers who opt in might soon be able to choose among trusted third parties — such as banks, technology companies or cellphone service providers — that could verify certain personal information about them and issue them secure credentials to use in online transactions.

Does this sound familiar? Rather like “websites that opt in can choose among trusted third parties – Certificate Authorities – that can verify certain information about them and issue them secure credentials to use in online transactions”, perhaps? We’ve seen how well that works. And this time there’s not even a small number of vendors (i.e. the browser vendors) who can remove a “trusted third party” who turns out not to be trustworthy. This time you have to persuade everyone in the world who might rely on the untrusted third party to remove them from their list. Good luck with that (good luck with even finding out who they are).

What is particularly poignant about this article is that even though its title is “Online ID Verification Plan Carries Risks”, the risks we are supposed to be concerned about are mostly privacy risks, for example

people may not want the banks they might use as their authenticators to know which government sites they visit

and

the government would need new privacy laws or regulations to prohibit identity verifiers from selling user data or sharing it with law enforcement officials without a warrant.

Towards the end, if anyone gets there, is a small mention of some security risk

Carrying around cyber IDs seems even riskier than Social Security cards, Mr. Titus says, because they could let people complete even bigger transactions, like buying a house online. “What happens when you leave your phone at a bar?” he asks. “Could someone take it and use it to commit a form of hyper identity theft?”

Dude! If only the risk were that easy to manage! The real problem comes when someone sets up an account as you with one of these “banks, technology companies or cellphone service providers” (note that CAs are technology companies). Then you are going to get your ass kicked, and you won’t even know who issued the faulty credential or how to stop it.

And, by the way, don’t be fooled by the favourite get-out-of-jail-free clause beloved by policymakers and spammers alike, “opt in”. It won’t matter whether you opt in or not, because the proof you’ve opted in will be down to these “trusted” third parties. And the guy stealing your identity will have no compunction about that particular claim.

12 Sep 2011

DNSSEC on the Google Certificate Catalog

Filed under: DNSSEC,Security — Ben @ 14:47

I mentioned my work on the Google Certificate Catalog a while back. Now I’ve updated it to sign responses with DNSSEC.

I also updated the command-line utility to verify DNSSEC responses – and added a little utility to fetch the root DNSSEC keys and verify a PGP signature on them.

As always, feedback is welcome.

23 Jul 2011

An Efficient and Practical Distributed Currency

Filed under: Anonymity,Crypto,Security — Ben @ 15:51

Now that I’ve said what I don’t like about Bitcoin, it’s time to talk about efficient alternatives.

In my previous paper on the subject I amused myself by hypothesizing an efficient alternative to Bitcoin based on whatever mechanism it uses to achieve consensus on checkpoints. Whilst this is fun, it is pretty clear that no such decentralised mechanism exists. Bitcoin enthusiasts believe that I have made an error by discounting proof-of-work as the mechanism, for example

I believe Laurie’s paper is missing a key element in bitcoin’s reliance on hashing power as the primary means of achieving consensus: it can survive attacks by governments.

If bitcoin relied solely on a core development team to establish the authoritative block chain, then the currency would have a Single Point of Failure, that governments could easily target if they wanted to take bitcoin down. As it is, every one in the bitcoin community knows that if governments started coming after bitcoin’s development team, the insertion of checkpoints might be disrupted, but the block chain could go on.

Checkpoints are just an added security measure, that are not essential to bitcoin’s operation and that are used as long as the option exists. It is important for the credibility of a decentralized currency that it be possible for it to function without such a relatively easy to disrupt method of establishing consensus, and bitcoin, by relying on hashing power, can.

or

Ben, your analysis reads as though you took your well-known and long-standing bias against proof-of-work and reverse engineered that ideology to fit into an ad hoc criticism of bitcoin cryptography. You must know that bitcoin represents an example of Byzantine fault tolerance in use and that the bitcoin proof-of-work chain is the key to solving the Byzantine Generals’ Problem of synchronising the global view.

My response is simple: yes, I know that proof-of-work, as used in Bitcoin, is intended to give Byzantine fault tolerance, but my contention is that it doesn’t. And, furthermore, that it fails in a spectacularly inefficient way. I can’t believe I have to keep reiterating the core point, but here we go again: the flaw in proof-of-work as used in Bitcoin is that you have to expend 50% of all the computing power in the universe, for the rest of time in order to keep the currency stable (67% if you want to go for the full Byzantine model). There are two problems with this plan. Firstly, there’s no way you can actually expend 50% (67%), in practice. Secondly, even if you could, it’s far, far too high a price to pay.

In any case, in the end, control of computing power is roughly equivalent to control of money – so why not cut out the middleman and simply buy Bitcoins? It would be just as cheap and it would not burn fossil fuels in the process.

Finally, if the hash chain really works so well, why do the Bitcoin developers include checkpoints? The currency isn’t even under attack and yet they have deemed them necessary. Imagine how much more needed they would be if there were deliberate disruption of Bitcoin (which seems quite easy to do to me).

But then the question would arise: how do we efficiently manage a distributed currency? I present an answer in my next preprint: “An Efficient Distributed Currency”.

21 May 2011

Bitcoin is Slow Motion

Filed under: Anonymity,Crypto,General,Privacy,Security — Ben @ 5:32

OK, let’s approach this from another angle.

The core problem Bitcoin tries to solve is how to get consensus in a continuously changing, free-for-all group. It “solves” this essentially insoluble problem by making everyone walk through treacle, so it’s always evident who is in front.

But the problem is, it isn’t really evident. Slowing everyone down doesn’t take away the core problem: that someone with more resources than you can eat your lunch. Right now, with only modest resources, I could rewrite all of Bitcoin history. By the rules of the game, you’d have to accept my longer chain and just swallow the fact you thought you’d minted money.

If you want to avoid that, then you have to have some other route to achieve a consensus view of history. Once you have a way to achieve such a consensus, then you could mint coins by just sequentially numbering them instead of burning CPU on slowing yourself down, using the same consensus mechanism.

Now, I don’t claim to have a robust way to achieve consensus; any route seems open to attacks by people with more resources. But I make this observation: as several people have noted, currencies are founded on trust: trust that others will honour the currency. It seems to me that there must be some way to leverage this trust into a mechanism for consensus.

Right now, for example, in the UK, I can only spend GBP. At any one time, in a privacy preserving way, it would in theory be possible to know who was in the UK and therefore formed part of the consensus group for the GBP. We could then base consensus on current wielders of private keys known to be in the UK, the vast majority of whom would be honest. Or their devices would be honest on their behalf, to be precise. Once we have such a consensus group, we can issue coins simply by agreeing that they are issued. No CPU burning required.

20 May 2011

Bitcoin 2

Filed under: Anonymity,Crypto,Security — Ben @ 16:32

Well, that got a flood of comments.

Suppose I take 20 £5 notes, burn them and offer you a certificate for the smoke for £101. Would you buy the certificate?

This is the value proposition of Bitcoin. I don’t get it. How does that make sense? Why would you burn £100 worth of non-renewable resources and then use it to represent £100 of buying power? Really? That’s just nuts, isn’t it?

I mean, it’s nice for the early adopters, so long as new suckers keep coming along. But in the long run it’s just a pointless waste of stuff we can never get back.

Secondly, the point of referencing “Proof-of-work Proves Not to Work” was just to highlight that cycles are much cheaper for some people than others (particularly botnet operators), which makes them a poor fit for defence.

Finally, consensus is easy if the majority are honest. And then coins become cheap to make. Just saying.

17 May 2011

Bitcoin

Filed under: Anonymity,Distributed stuff,Security — Ben @ 17:03

A friend alerted to me to a sudden wave of excitement about Bitcoin.

I have to ask: why? What has changed in the last 10 years to make this work when it didn’t in, say, 1999, when many other related systems (including one of my own) were causing similar excitement? Or in the 20 years since the wave before that, in 1990?

As far as I can see, nothing.

Also, for what it’s worth, if you are going to deploy electronic coins, why on earth make them expensive to create? That’s just burning money – the idea is to make something unforgeable as cheaply as possible. This is why all modern currencies are fiat currencies instead of being made out of gold.

Bitcoins are designed to be expensive to make: they rely on proof-of-work. It is far more sensible to use signatures over random numbers as a basis, as asymmetric encryption gives us the required unforgeability without any need to involve work. This is how Chaum’s original system worked. And the only real improvement since then has been Brands’ selective disclosure work.
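
For concreteness, here is a toy version of Chaum-style RSA blinding – the sort of thing I mean by “signatures over random numbers”. It is purely illustrative: no hashing, no padding, a small key, and it is not the construction Lucre actually uses (it sticks to the OpenSSL 1.1 RSA API).

    /* Toy Chaum-style RSA blind signature. Compile with -lcrypto. */
    #include <stdio.h>
    #include <openssl/bn.h>
    #include <openssl/rsa.h>

    int main(void)
    {
        BN_CTX *ctx = BN_CTX_new();

        /* Mint's RSA key: n and e are public, d stays with the mint. */
        RSA *rsa = RSA_new();
        BIGNUM *e = BN_new();
        BN_set_word(e, 65537);
        RSA_generate_key_ex(rsa, 1024, e, NULL);
        const BIGNUM *n, *pub_e, *d;
        RSA_get0_key(rsa, &n, &pub_e, &d);

        /* Client: pick a random coin serial m and blind it: m' = m * r^e mod n.
         * (r must be invertible mod n, which a random r < n essentially always is.) */
        BIGNUM *m = BN_new(), *r = BN_new(), *blinded = BN_new(), *tmp = BN_new();
        BN_rand_range(m, n);
        BN_rand_range(r, n);
        BN_mod_exp(tmp, r, pub_e, n, ctx);
        BN_mod_mul(blinded, m, tmp, n, ctx);

        /* Mint: signs the blinded value without ever seeing m: s' = (m')^d mod n. */
        BIGNUM *blind_sig = BN_new();
        BN_mod_exp(blind_sig, blinded, d, n, ctx);

        /* Client: unblind, s = s' * r^-1 mod n, which equals m^d mod n. */
        BIGNUM *r_inv = BN_new(), *sig = BN_new();
        BN_mod_inverse(r_inv, r, n, ctx);
        BN_mod_mul(sig, blind_sig, r_inv, n, ctx);

        /* Anyone: the coin (m, s) verifies if s^e mod n == m. The mint never
         * saw m, so it cannot link the coin to the withdrawal. */
        BIGNUM *check = BN_new();
        BN_mod_exp(check, sig, pub_e, n, ctx);
        printf("coin verifies: %s\n", BN_cmp(check, m) == 0 ? "yes" : "no");

        RSA_free(rsa);
        BN_free(e);
        BN_CTX_free(ctx);
        return 0;
    }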

If you want to limit supply, there are cheaper ways to do that, too. And proof-of-work doesn’t, anyway (it just gives the lion’s share to the guy with the cheapest/biggest hardware).

Incidentally, Lucre has recently been used as the basis for a fully-fledged transaction system, Open Transactions. Note: I have not used this system, so make no claims about how well it works.

(Edit: background reading – “Proof-of-Work” Proves Not to Work)
