Identity, Anonymity, Reputation, eCash and the Sybil Attack

I was musing the other day about identity management and anonymity. One of the problems with anonymity is that it implies that anyone can create a new “identity” for themselves at any time. Of course, these identities tend to rapidly become pseudonymous rather than anonymous, because of the limitations of current technology, but that hardly matters, since you can create new ones whenever you need to.

This is all fine and dandy from the privacy perspective, but it leads to problems for people who are trying to provide services that aggregate user input. A great example came about when Wikipedia met Tor. As one can see from Wikipedia’s entry on Tor, it was not the heartwarmingest experience ever, and led to crazy email exchanges like this, wherein Jimmy Wales proposed to “solve” the problem thusly:

end user -> tor cloud -> authentication server -> trusted user tor cloud -> wikipedia

end user -> tor cloud -> authentication server -> untrusted user tor cloud -> no wikipedia

Simple.

Simple, but useless. This doesn’t even need a Sybil attack (yes, I will get around to those shortly) to defeat it. Let’s say we got this all implemented, and some clown defaces Wikipedia via the trusted user cloud. Now what? Well, you just track down the miscreant and remove their Tor privileges. Oops. Slight issue there: Tor anonymises the users. So, you know that some trusted user was responsible, but which one? There’s no way to tell. So, what’s the next step? That’s right, ban the trusted user Tor cloud from Wikipedia, and we’re no further forward. As Jimmy so aptly put it:

I completely fail to comprehend why Tor server operators consistently refuse to take responsibility for their crazed users.

Tor server operators don’t refuse to take responsibility for their users, crazed or otherwise; they can’t take responsibility, because they don’t know who they are. That’s the whole point – it’s an anonymous network.

So, the next point one usually gets to when wrestling with this problem is to say “Aha! But we don’t need to know who the user is, we just need to make sure they’re well-behaved. So, what we’ll do is we’ll only let people with accounts post to Wikipedia and if they misbehave, we’ll switch the account off. But here’s the clever bit – we’ll let people create accounts anonymously so we still don’t know who they are, but we can distinguish the good guys from the bad guys by their behaviour.”

This works, of course, just fine. But it doesn’t really solve the problem, because of the Sybil attack. The bad guys just create an account, post abusively once, and then create the next account. Because you’ve got no way to link these two accounts (this is the definition of anonymity, remember), you have no way to defend against this behaviour.
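
Just to spell the failure out in code, here’s a toy of my own devising – it isn’t any actual Wikipedia or Tor mechanism. Banning operates on an account, but an anonymous attacker can mint fresh accounts faster than you can ban them, so every ban arrives one post too late.

```python
class Wiki:
    def __init__(self):
        self.banned = set()
        self.next_id = 0

    def create_account(self):
        # Anonymous signup: nothing links this account to any other.
        self.next_id += 1
        return self.next_id

    def post(self, account, text):
        # Posting is allowed for any account that hasn't been banned yet.
        return account not in self.banned

    def ban(self, account):
        self.banned.add(account)

# The Sybil attack: each abusive post comes from a brand-new account,
# so every ban is applied after the damage is already done.
wiki = Wiki()
for _ in range(3):
    vandal = wiki.create_account()
    wiki.post(vandal, "defacement")
    wiki.ban(vandal)  # futile: the next post uses a fresh account
```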

OK, so once you’ve got the Sybil attack on board, generally your next port of call is reputation. What would fix this is if only people with good reputations could post. But there’s a problem here: if I can’t post, how do I get a good reputation? And if I can post, then aren’t we right back to the Sybil attack? One is tempted to start thinking about some global pseudonymous reputation, but that really blows the anonymity (and true pseudonymity) out of the water. In order to draw on your existing good reputation, you have to make some kind of link between your shiny new pseudonym and your global reputation. And there goes your anonymity.

A tempting thought at this point is anonymous cash: suppose you get a coin for every “good” post. Then you can spend those coins anonymously on new identities that have no reputation as yet. This scheme might even work (there’s a rough sketch of it after the list below), but there are some obvious problems with it:

  • Some of the worst abusers are actually perfectly reasonable much of the time, and so would have plentiful cash reserves for when the mood takes them (I don’t have any references for this, but I know it’s true from personal experience)
  • Abusers that don’t currently behave like this can easily do so – just post occasional nice comments from your “good” account and then use the resulting coins in your “evil” account.
  • Because the cash is (necessarily) anonymous, there’ll be a market for it. Nice people will sell it in exchange for stuff that’s more useful to them, and nasty people will buy it.
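
Here’s the rough sketch promised above. The names (Mint, Forum and so on) are mine, and a real eCash scheme would use blind signatures so the mint can’t link issuing a coin to spending it; a bare random token stands in for that here. The point is the second and third bullets: nothing ties a coin to the identity that earned it, so coins earned by a well-behaved nym can bootstrap a disposable abusive one.

```python
import secrets

class Mint:
    """Issues anonymous, single-use tokens as rewards for "good" posts."""
    def __init__(self):
        self.valid_tokens = set()

    def issue(self):
        # In a real eCash scheme this would be a blind signature, so the
        # mint couldn't link the token to the identity it rewarded.
        token = secrets.token_hex(16)
        self.valid_tokens.add(token)
        return token

    def redeem(self, token):
        # Tokens are single-use: redeeming one burns it.
        if token in self.valid_tokens:
            self.valid_tokens.remove(token)
            return True
        return False

class Forum:
    def __init__(self, mint):
        self.mint = mint

    def reward_good_post(self, wallet):
        wallet.append(self.mint.issue())

    def create_identity(self, token):
        # A fresh pseudonym costs one coin; nothing links it to the
        # identity that earned the coin -- which is the whole problem.
        return {"reputation": 0} if self.mint.redeem(token) else None

# The abuse pattern from the list above: behave well with one nym,
# then spend the proceeds on a throwaway "evil" nym.
mint = Mint()
forum = Forum(mint)
good_wallet = []
forum.reward_good_post(good_wallet)                   # earned by the "good" account
evil_nym = forum.create_identity(good_wallet.pop())   # spent by the "evil" one
print(evil_nym is not None)                           # True
```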

All this led George Danezis and me to propose a fantastically elaborate scheme where you were anonymously vouched for, and the shit only hit the fan when you misbehaved.

Anyway, now we’ve got the background out of the way (yes, really, that was all background, and, to be honest, probably the more interesting part of this post), here’s what I was thinking…

Wouldn’t it be nice if you could let people create truly pseudonymous (that is, unlinkable to any other pseudonym) identities and solve the “first post” problem without resorting to a ton of crypto? It occurred to me that there’s a great example of a system that could do that, without substantial modification: Digg. Combine Digg with Slashdot and Wikipedia and you might have a workable solution.

What I mean is this: first-time users’ posts go into a queue. Other users can signal approval or disapproval of stuff in the queue (this is like Digg); as stuff gets dug enough, the user’s reputation improves (this is like Slashdot), until eventually they can post without needing approval (and where the posts would go is Wikipedia, in this example). Of course, even people with good reputations can turn into asshats overnight, so you’d still need to be able to undig posts (and consequently cause their removal and the downgrade of the nym’s reputation).
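
Here’s a minimal sketch of that queue. The thresholds and names (APPROVAL_THRESHOLD, TRUSTED_REPUTATION and so on) are entirely my own invention, not anything Digg, Slashdot or Wikipedia actually do: low-reputation posts wait for approvals, enough approvals publish the post and bump the author’s reputation, trusted nyms skip the queue, and an undig pulls the post and downgrades the nym.

```python
APPROVAL_THRESHOLD = 3    # diggs needed to publish a queued post (arbitrary)
TRUSTED_REPUTATION = 5    # reputation at which posts skip the queue (arbitrary)

class Nym:
    def __init__(self, name):
        self.name = name
        self.reputation = 0

class Post:
    def __init__(self, author, text):
        self.author = author
        self.text = text
        self.diggs = 0
        self.published = False

class ModerationQueue:
    def __init__(self):
        self.queue = []
        self.published = []

    def submit(self, author, text):
        post = Post(author, text)
        if author.reputation >= TRUSTED_REPUTATION:
            # Established nyms post directly (the "Wikipedia" end).
            self._publish(post)
        else:
            # New or low-reputation nyms wait for approval (the "Digg" end).
            self.queue.append(post)
        return post

    def digg(self, post):
        # Approval from another user; enough approvals publish the post
        # and improve the author's reputation (the "Slashdot" end).
        post.diggs += 1
        if not post.published and post.diggs >= APPROVAL_THRESHOLD:
            self.queue.remove(post)
            self._publish(post)

    def undigg(self, post):
        # Even trusted nyms can turn into asshats: pull the post and
        # downgrade the reputation that let it through.
        if post.published:
            self.published.remove(post)
            post.published = False
        post.author.reputation -= 1

    def _publish(self, post):
        post.published = True
        self.published.append(post)
        post.author.reputation += 1
```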

2 Comments

  1. how does this help at all?

can’t the bad people just create more accounts to perform approval of their silly piece of editing? or alternatively write a relevant piece or two, approve it up, and then get the status of ‘bypass queue’ and then post the silliness.

also, you would need different approvers for different content. say i approve all articles about german history, because i am a historian. is it appropriate i use my good rating to approve articles about elliptic curve cryptography? probably not. thusly i’ve had to classify myself in some fashion, hence i’m not anonymous.

this also leads to the indication that you could possibly find out the identity of certain approvers by the content that they approve.

    overall, i don’t think this is a good idea at all.

    Comment by lok — 5 Sep 2006 @ 6:56

  2. The problem with Digg is that users join together in ‘tribes’ who all promote each other’s stories regardless and it becomes a turf war.

    Let’s face it, the problem is human nature not a technical one.

    Perhaps a rotating dictatorship would work rather better than any attempt at democracy?

    Or run the whole thing by committee? Hahaha 😀

    Comment by Steve Lord — 9 Sep 2006 @ 2:09
