Ben Laurie blathering


ƃuıʇsılʞɔɐlq uʍop-ǝpısd∩

A well-known problem with anonymity is that it allows trolls to ply their unwelcome trade. So, pretty much every decent cryptographic anonymity scheme proposed has some mechanism for blacklisting. Basically these work by some kind of zero-knowledge proof that you aren’t on the blacklist – and once you’ve done that you can proceed.
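Roughly speaking (and this is just one common way of phrasing the statement, not the construction of any particular scheme), if the blacklist is B = {b_1, …, b_M} and the user’s identity under the scheme is some value k, the user shows in zero knowledge that

\[ \prod_{i=1}^{M} (k - b_i) \neq 0, \]

i.e. that k differs from every blacklisted entry, without revealing k itself.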

However, this scheme suffers from the usual problem with trolls: as soon as they’re blacklisted, they create a new account and carry on. Solving this problem ultimately leads to a need for strong identification for everyone so you can block the underlying identity. Obviously this isn’t going to happen any time soon, and ideally never, so blacklists appear to be fundamentally and fatally flawed, except perhaps in closed user groups (where you can, presumably, find a way to do strong-enough identification, at least sometimes) – for example, members of a club, or employees of a company.

So lately I’ve been thinking about using “blacklists” for reputation. That is, rather than complain about someone’s behaviour and get them blacklisted, instead when you see someone do something you like, add them to a “good behaviour blacklist”. Membership of the “blacklist” then proves the (anonymous) user has a good reputation, which could then be used, for example, to allow them to moderate posts, or could be shown to other users of the system (e.g. “the poster has a +1 reputation”), or all sorts of other things, depending on what the system in question does.

The advantage of doing it this way is that misbehaviour can then be used to remove reputation, and the traditional fallback of trolls no longer works: a new account is just as useless as the one they already have.
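To make the flow concrete, here is a toy sketch in Python. It is purely illustrative: the plain-token lookup stands in for a real anonymous-credential or zero-knowledge layer (a real system would verify an unlinkable proof of membership, not look up the token itself), and all the names are hypothetical.

```python
# Toy sketch of a "good behaviour blacklist", i.e. a reputation whitelist.
# The plain-token lookup below is a stand-in for verifying an unlinkable
# zero-knowledge proof of membership; it is not real cryptography.
import secrets


class GoodList:
    """Service-side list of tokens that have earned a good reputation."""

    def __init__(self):
        self._members = set()

    def add(self, token):
        """Called when a user does something the community likes."""
        self._members.add(token)

    def remove(self, token):
        """Misbehaviour costs reputation: the entry is simply removed."""
        self._members.discard(token)

    def verify(self, token):
        """Stand-in for checking a zero-knowledge membership proof."""
        return token in self._members


def can_moderate(goodlist, token):
    # A brand-new throwaway account has no entry on the list, so creating
    # one gains a troll nothing: reputation has to be earned first.
    return goodlist.verify(token)


goodlist = GoodList()
alice = secrets.token_hex(16)                # pseudonymous user token
goodlist.add(alice)                          # Alice earned some reputation

print(can_moderate(goodlist, alice))                  # True
print(can_moderate(goodlist, secrets.token_hex(16)))  # fresh account: False
```

Even in this toy form the key property holds: removing an entry removes reputation, and a fresh account starts with nothing.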

There is one snag that I can see, though, which is that at least some anonymity systems with blacklisting (e.g. Nymble, which I’ve somehow only recently become aware of) have the side-effect of making every login by a blacklisted person linkable. This is not good, of course. I wonder if there are systems immune to this problem?

Given that Jan Camenisch et al. have a presentation on upside-down blacklisting (predating my thinking by quite a long way – one day I’ll get there first!), I assume there are – however, according to Henry, Henry and Goldberg, Camenisch’s scheme is not very efficient compared to Nymble or Nymbler.

11 Comments

  1. Hi Ben – two thoughts – (1) isn’t this basically Slashdot? (2) ‘ƃuıʇsılʞɔɐlq uʍop-ǝpısd∩’ == Whitelisting, no?

    Comment by Pat Patterson — 18 Dec 2010 @ 18:58

  2. 1) Anonymous slashdot – the idea is that the actions of participants should be unlinkable.

    2) Whitelists are generally the inverse of blacklists, rather than the opposite. If you see what I mean. But yes, you do end up with a whitelist, so why not.

    Comment by Ben — 18 Dec 2010 @ 19:05

  3. another two thoughts:

    The “you have to earn reputation before you can do certain things” idea is exactly the scheme implemented on stackoverflow.com (where it seems to work out quite well), though they don’t do this anonymously.

    Also: this sounds analogous to anonymous payment systems. Reputation seems to map onto being able to prove that you have a certain account balance, and “whitelisting” maps to paying someone some amount of reputation-points.

    Comment by levinalex — 18 Dec 2010 @ 23:24

  4. You may find our new tech report interesting: http://www.cacr.math.uwaterloo.ca/techreports/2010/cacr2010-23.pdf

    Among other things, it adds more flexibility in the blacklisting, and uses a technique due to Brands et al. to prove you’re not on a blacklist of size M using O(sqrt(M)) exponentiations. (And this proof isn’t done on the critical path while a user is trying to authenticate to the website.)

    Comment by Ian — 19 Dec 2010 @ 2:24

    You need a system that automatically awards small amounts of credit for performing ordinary social interactions: as in ordinary life, saying please and thank you, paying bills on time, etc.

    Trolls don’t do those things.

    That’s what happens on eBay – people earn credit by paying for goods, by delivering goods on time and to specification. If they foul up they lose credit. It encourages everyone to behave well.

    If you do something brilliant you get a Nobel or a VC and that excuses a certain amount of dirty fingernails or wiping your nose on your sleeve.

    Comment by Ben's Dad — 19 Dec 2010 @ 8:38

    Semi-obvious bit o’ synergy: use machine learning (e.g. a Bayesian classifier) to scrutinise contributions of the reputationless. (The behaviour of those with reputation gives the classifier an idea of what constitutes good behaviour.)
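    A minimal sketch of that idea, using scikit-learn’s naive Bayes text classifier (the training posts and labels here are made up purely for illustration):

    ```python
    # Sketch: train a naive Bayes classifier on posts from users who already
    # have reputation ("good") vs. posts that moderators flagged ("bad"),
    # then use it to pre-screen posts from users with no reputation yet.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    posts = [
        "Here is a patch that fixes the bug, with a test case.",
        "Thanks, that explanation cleared things up for me.",
        "BUY CHEAP PILLS NOW!!! click here",
        "you are all idiots and this site is garbage",
    ]
    labels = ["good", "good", "bad", "bad"]

    vectorizer = CountVectorizer()
    classifier = MultinomialNB()
    classifier.fit(vectorizer.fit_transform(posts), labels)

    def screen(post):
        """Predict a label for a post from a user with no reputation."""
        return classifier.predict(vectorizer.transform([post]))[0]

    print(screen("Thanks for the patch, I added a test case."))
    ```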

    Comment by ti — 19 Dec 2010 @ 14:00

  7. there are two basic systems I’ve seen that work really well against troll abuse, and they both involve limiting the ability for trolls to register an account for free.

    the first is an invitation-only model, where accounts can only be had by knowing someone else on the site. trolling has substantive consequences, resulting in you, your inviter, and your children all being banned.

    the other is requiring some pittance of money to register an account. Pinboard does this and it works very well. a troll is like a car thief, and will happily take his bile somewhere cheaper and easier when faced with a paywall.

    both of these systems require the user to have some sort of account, but that doesn’t mean it has to be tied to their actual identity – they can be whoever they want to be, as long as they play by the rules.

    Comment by numist — 19 Dec 2010 @ 20:45

  8. I suggest using blacklists to push anti-social behavior to the edge of the resource utilization graph: if you’re on the blacklist, your account is moved to the slowest server, running an old, buggy version of the software. Sometimes requests don’t complete, images randomly don’t load, it feels like a 56K modem, etc.

    The idea is to never let on that the account is blacklisted, but to degrade service to the point where being a troll just isn’t fun anymore.
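    Something like this, say (a deliberately crude sketch; the blacklist, delays and failure rate are all made up):

    ```python
    # Sketch of stealth degradation: blacklisted accounts get artificial
    # latency and occasional failed requests, with no visible hint that
    # they have been singled out. All names and numbers are illustrative.
    import random
    import time

    BLACKLIST = {"troll123"}

    def degrade(handler):
        def wrapper(user, request):
            if user in BLACKLIST:
                time.sleep(random.uniform(2.0, 8.0))  # feels like a 56K modem
                if random.random() < 0.3:             # some requests just fail
                    raise TimeoutError("upstream timed out")
            return handler(user, request)
        return wrapper

    @degrade
    def serve(user, request):
        return "200 OK: " + request

    print(serve("alice", "/index"))         # normal service
    try:
        print(serve("troll123", "/index"))  # slow, and sometimes fails
    except TimeoutError:
        print("request failed (as intended, occasionally)")
    ```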

    Comment by Chris Snyder — 20 Dec 2010 @ 14:51

    I’m becoming increasingly concerned that companies like Akismet are quietly curtailing innocent people’s freedom of speech without crime, notice, question or right to redress.
    I tried to start a civilised dialogue with them to find out how they define spam or spammer and have only received a resounding silence.
    Looking further into their practice I found that there used to be a site where people could check if they were in Akismet’s bad books but they had it closed down!
    As far as I’m concerned companies can whitelist or blacklist but they should be transparent otherwise they can inadvertently erode very precious freedoms.

    Comment by Mr Appy — 20 Dec 2010 @ 17:42

  10. I’ve been thinking along similar lines recently, although I call it distributed moderation rather than blacklisting.

    Imagine Usenet implemented over a social network. Each person keeps a private reputation value for each pseudonymous author, and the reputation values influence how far messages travel across the social network. If you consider an author to have a positive reputation then you forward their messages, resetting the TTL to the maximum value. If you consider them to have a neutral reputation (including unknown/unrated authors), you forward their messages but decrement the TTL. If you consider them to have a negative reputation, you drop their messages.

    A person can use any number of pseudonyms, which somewhat mitigates the linkability problem, although each pseudonym must independently earn a good reputation in order to reach a wide audience. The maximum TTL determines how many people unknown authors and pseudonym-hopping spammers can reach.

    For even better unlinkability, we could consider allowing fully anonymous messages, which must be moderated individually: a positive rating means forward the message, neutral/unrated means don’t forward, negative means delete. Anonymous messages require more effort from users than pseudonymous ones, and they propagate more slowly (each hop requires manual intervention).

    This can be made somewhat less arduous through a backward inclusion rule: if you forward a given message, also forward the message to which it replies (if any). That would allow an author with a good reputation to ‘bump’ an anonymous or pseudonymous message by replying to it, causing it to propagate automatically to everyone in the bumper’s audience.
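    A small sketch of the forwarding rule described above (reputations and TTLs are per-node, and the numbers are only illustrative):

    ```python
    # Sketch of TTL-based forwarding: positive reputation resets the TTL,
    # neutral/unknown decrements it, negative drops the message outright.
    MAX_TTL = 5

    def forward_ttl(reputation, ttl):
        """Return the outgoing TTL for a message, or None to drop it.

        reputation: +1 (positive), 0 (neutral or unknown), -1 (negative)
        ttl: the TTL carried by the incoming message
        """
        if reputation > 0:
            return MAX_TTL                       # reset: liked authors travel far
        if reputation == 0:
            return ttl - 1 if ttl > 1 else None  # fade out after MAX_TTL hops
        return None                              # drop disliked authors' messages

    print(forward_ttl(+1, 1))   # 5
    print(forward_ttl(0, 3))    # 2
    print(forward_ttl(-1, 5))   # None
    ```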

    Comment by Michael — 21 Dec 2010 @ 11:52

  11. “good behavior blacklist”!

    You mean, whitelist.

    Comment by James A. Donald — 12 Jan 2011 @ 22:25
