Ben Laurie blathering

21 May 2011

Bitcoin is Slow Motion

Filed under: Anonymity,Crypto,General,Privacy,Security — Ben @ 5:32

OK, let’s approach this from another angle.

The core problem Bitcoin tries to solve is how to get consensus in a continuously changing, free-for-all group. It “solves” this essentially insoluble problem by making everyone walk through treacle, so it’s always evident who is in front.

But the problem is, it isn’t really evident. Slowing everyone down doesn’t take away the core problem: that someone with more resources than you can eat your lunch. Right now, with only modest resources, I could rewrite all of Bitcoin history. By the rules of the game, you’d have to accept my longer chain and just swallow the fact you thought you’d minted money.
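To see why, here is a toy model of the chain-selection rule (my own sketch, nothing like Bitcoin's real data structures): every node adopts whichever valid chain embodies the most work, so whoever mines blocks fastest decides what history is.

    import hashlib

    def mine(prev_hash, payload, difficulty=2):
        # Toy proof-of-work: grind nonces until the hash starts with
        # `difficulty` zero hex digits (the treacle).
        nonce = 0
        while True:
            h = hashlib.sha256(f"{prev_hash}|{payload}|{nonce}".encode()).hexdigest()
            if h.startswith("0" * difficulty):
                return h
            nonce += 1

    def build_chain(length, miner):
        chain, prev = [], "genesis"
        for i in range(length):
            prev = mine(prev, f"{miner}-block-{i}")
            chain.append(prev)
        return chain

    # The consensus rule every node follows: longest valid chain wins
    # (at equal difficulty, length stands in for cumulative work).
    honest = build_chain(3, "honest")
    attacker = build_chain(4, "attacker")  # more resources, more blocks
    consensus = max(honest, attacker, key=len)
    assert consensus is attacker           # the honest history just vanished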

If you want to avoid that, then you have to have some other route to achieve a consensus view of history. Once you have a way to achieve such a consensus, then you could mint coins by just sequentially numbering them instead of burning CPU on slowing yourself down, using the same consensus mechanism.

Now, I don’t claim to have a robust way to achieve consensus; any route seems open to attacks by people with more resources. But I make this observation: as several people have noted, currencies are founded on trust: trust that others will honour the currency. It seems to me that there must be some way to leverage this trust into a mechanism for consensus.

Right now, for example, in the UK, I can only spend GBP. At any one time, in a privacy preserving way, it would in theory be possible to know who was in the UK and therefore formed part of the consensus group for the GBP. We could then base consensus on current wielders of private keys known to be in the UK, the vast majority of whom would be honest. Or their devices would be honest on their behalf, to be precise. Once we have such a consensus group, we can issue coins simply by agreeing that they are issued. No CPU burning required.

24 Feb 2011

Who the Hell are 2o7?

Filed under: Privacy,Security — Ben @ 13:38

My friend Adriana pointed me to this cool track-blocking extension for Chrome.

Back in the day, I used to do this kind of blocking “by hand” – i.e. by manually deciding which cookies to block and which to allow. This is far from an exact science – it’s fairly easy to block some sites into uselessness – so I’m pleased to see an automated alternative.

In any case, it all came to an end when Chrome decided (without any explanation I ever saw) to drop the ability to control cookies, so extensions are probably the only way now.

Anyway, it reminded me of something I kept meaning to look into but never really got very far, which is 2o7.net. This domain crops up all the time if you start monitoring cookies, and clearly is some massive tracking operation. But I’ve never heard of it, and nor has anyone else I know.

So … who the hell are 2o7? (And yes, I can do whois, which leads me to Omniture. Not much the wiser. Except they now seem to be owned by Adobe – mmm – looking forward to mixing all that tracking data with Adobe’s careful attention to security).

Note, btw, the cool track-blocking extension doesn’t appear to have heard of 2o7 either. In my experience you can just block all their cookies without harm.

21 Dec 2010

Is Openleaks The Next Haystack?

As everyone who’s even half-awake knows by now, a bunch of people who used to work on Wikileaks have got together to work on Openleaks. From what I hear, Openleaks is going to be so much better than Wikileaks – it will have no editorial role, it will strongly protect people who submit leaks, it’s not about the people who run it, it’ll be distributed and encrypted.

But where’s the design to back up this rhetoric? Where are the security reviews from well-known authorities? They seem to be missing. Instead we have excited articles in mainstream media about how wonderful it is going to be, and how many hours the main man has spent on it.

This sounds very familiar indeed. And we all know what happened last time round.

Of course, Openleaks may be fine, but I strongly suggest that those who are working on it publish their plan and subject it to scrutiny before they put contributors at risk.

As always, I offer my services in this regard. I am sure I am not alone.

18 Dec 2010

ƃuıʇsılʞɔɐlq uʍop-ǝpısd∩

Filed under: Anonymity,Crypto,Identity Management,Lazyweb,Privacy — Ben @ 14:54

A well-known problem with anonymity is that it allows trolls to ply their unwelcome trade. So, pretty much every decent cryptographic anonymity scheme proposed has some mechanism for blacklisting. Basically these work by some kind of zero-knowledge proof that you aren’t on the blacklist – and once you’ve done that you can proceed.

However, this scheme suffers from the usual problem with trolls: as soon as they’re blacklisted, they create a new account and carry on. Solving this problem ultimately leads to a need for strong identification for everyone so you can block the underlying identity. Obviously this isn’t going to happen any time soon, and ideally never, so blacklists appear to be fundamentally and fatally flawed, except perhaps in closed user groups (where you can, presumably, find a way to do strong-enough identification, at least sometimes) – for example, members of a club, or employees of a company.

So lately I’ve been thinking about using “blacklists” for reputation. That is, rather than complain about someone’s behaviour and get them blacklisted, instead when you see someone do something you like, add them to a “good behaviour blacklist”. Membership of the “blacklist” then proves the (anonymous) user has a good reputation, which could then be used, for example, to allow them to moderate posts, or could be shown to other users of the system (e.g. “the poster has a +1 reputation”), or all sorts of other things, depending on what the system in question does.

The advantage of doing it this way is that misbehaviour can then be used to remove reputation, and the traditional fallback of trolls no longer works: a new account is just as useless as the one they already have.
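Stripped of all the cryptography, the incentive structure looks like this (a toy model only; a real scheme would let users prove list membership in zero knowledge, without being linkable):

    # Toy model of a "good behaviour blacklist": privileges flow only from
    # membership, so a troll's fresh account is no better than the old one.
    good_list = set()

    def reward(account):
        good_list.add(account)          # seen doing something you like

    def punish(account):
        good_list.discard(account)      # misbehaviour removes reputation

    def may_moderate(account):
        return account in good_list

    reward("alice")
    assert may_moderate("alice")

    punish("troll")                     # knocks the troll off the list...
    assert not may_moderate("troll")
    assert not may_moderate("fresh-troll-account")  # ...and a new account gains nothing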

There is one snag that I can see, though, which is at least some anonymity systems with blacklisting (e.g. Nymble, which I’ve somehow only recently become aware of) have the side-effect of making every login by a blacklisted person linkable. This is not good, of course. I wonder if there are systems immune to this problem?

Given that Jan Camenisch et al have a presentation on upside-down blacklisting (predating my thinking by quite a long way – one day I’ll get there first!), I assume there are – however, according to Henry, Henry and Goldberg, Camenisch’s scheme is not very efficient compared to Nymble or Nymbler.

27 Nov 2010

Why Identity Is Still Just Login

Filed under: Identity Management,Privacy — Ben @ 15:14

It seems everyone now agrees that Internet identity is about a collection of assertions relating to a person (or thing, if you want to go there). Some of these are assertions one makes about oneself, for example “I like brussels sprouts”; others are assertions made about you by third parties, for instance “Ben is a UK citizen, signed the UK Government”. These assertions are essentially invariant, or slowly varying, for the most part. So what makes an identity, we agree, is some collection of these assertions.

But we also agree that people need to assert different identities: there’s Ben at work, Ben as a member of the OpenSSL team, Ben at home and so on. Each of these identities, we like to think, corresponds to a different collection of assertions.

All we need to do, we think, is map these identities onto collections of electronic assertions, and we’ll have solved The Internet Identity Problem. People will no longer be required to type in their credit card number five times a day, endlessly write down their address (and correct it when it changes) and so on. The ‘net will become one lovely, seamless experience of auto-communicated factoids about me that are just right for every circumstance and I’ll never fill in a form again.

You can probably see where I’m going. The more I think about it, the more I realise that every situation is different. My “identity” is contextual, and different for each context. We know this from endless studies of human behaviour.

So, what was the point of doing what every electronic identity system wants me to do, namely aggregating various assertions about me into various identities, and then choosing the right identity to reveal? To match this to what I do in the real world, I will need a different identity for each context.

So, what was the point of even talking about identities? Why not simply talk about assertions, and find better ways for me to quickly make the assertions I want to make? Cut out the middleman and do away with the notion of identity.

In practice, of course, this is exactly what has happened. The only vestige of this electronic identity that makes any sense is the ability to log in as some particular “person”. After that all my decisions are entirely contextual, and “identity” doesn’t help me at all. And so what we see is that “identity” has simply become “login”. And will always be so.

In a sense this is exactly what my Belay research project is all about – allowing me to decide exactly what I reveal in each individual context. In Belay, the ability to log in to a particular site will become the same kind of thing as any other assertion – a fine-grained permission I can grant or not grant.
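As a sketch of what that looks like (hypothetical names and data, not Belay’s actual API), disclosure becomes a per-assertion, per-context decision, with no “identity” bundle in the middle:

    # Hypothetical sketch: assertions granted individually, per context.
    assertions = {
        "name": "Ben",
        "country": "UK",
        "postcode_area": "EC1",        # a deliberately coarse version of a finer fact
        "card_number": "<card>",
        "likes_sprouts": True,
    }

    # Each context gets exactly the assertions it needs, nothing more.
    grants = {
        "webshop-checkout": {"name", "card_number", "postcode_area"},
        "forum-login": set(),          # logging in needs no personal facts at all
    }

    def reveal(context):
        allowed = grants.get(context, set())
        return {k: v for k, v in assertions.items() if k in allowed}

    print(reveal("webshop-checkout"))  # only what checkout needs
    print(reveal("forum-login"))       # {}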

Note: I am not against bundles of assertions – for example, I think lines one and two of my address clearly belong together (though for some purposes the first half of my postcode, or just my country, should suffice) or, less facetiously, when I use my credit card I almost always need to send a bunch of other stuff along with the credit card number. What I doubt is that the notion of a bundle that corresponds to an “identity” is a useful one. This implies that UIs where you pick “an identity” are doomed to failure – some other approach is needed.

25 Oct 2010

Firesheep: Session Hijacking for Morons

Filed under: Crypto,Privacy,Security — Ben @ 13:35

OK, we’ve all known forever that using any kind of credential over an unencrypted connection is a Bad Idea(tm). However, we also know that pretty much every website does an Obi-wan over session cookies, which typically travel over HTTP. “These are not the credentials you are looking for” they tell us.

Firesheep proves that comprehensively wrong. Surf your favourite login-requiring site on an open network, and *BANG*, you’re pwned. Awesome piece of work. Eric Butler, the author, says:

Websites have a responsibility to protect the people who depend on their services. They’ve been ignoring this responsibility for too long, and it’s time for everyone to demand a more secure web. My hope is that Firesheep will help the users win.
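For the record, the fix Firesheep is agitating for is simple enough: serve the whole site over TLS and make sure session cookies can never travel in clear. A minimal sketch, assuming Flask (my choice of framework, not Butler’s; the principle is framework-independent):

    from flask import Flask

    app = Flask(__name__)
    app.config.update(
        SESSION_COOKIE_SECURE=True,    # cookie is only ever sent over TLS
        SESSION_COOKIE_HTTPONLY=True,  # and never exposed to page JavaScript
    )

    @app.after_request
    def force_https(response):
        # HSTS: tell browsers to refuse plain HTTP for this site entirely.
        response.headers["Strict-Transport-Security"] = "max-age=31536000"
        return response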

2 Oct 2010

Aims not Mechanisms

Filed under: Privacy,Rants — Ben @ 22:18

I’m a big fan of the EFF, so it comes as a bit of a surprise when I see them say things that don’t make any sense.

A while back the EFF posted a bill of privacy rights for social network users. Whilst I totally sympathise with what the EFF is trying to say here, I’m disappointed that they go the way of policymakers by ignoring inconvenient technical reality and proposing absurd policies.

In particular, I refer you to this sentence:

The right to control includes users’ right to decide whether their friends may authorize the service to disclose their personal information to third-party websites and applications.

In other words, if I post something to a “social network” (whatever that is: yes, I have an informal notion of what it means, and I’m sure you do, too, but is, say, my blog part of a “social network”? Email?) then I should be able to control whether you, a reader of the stuff I post, can do so via a “third-party application”. For starters, as stated, this is equivalent to determining whether you can read my post at all in most cases, since you do so via a browser, which is a “third-party application”. If I say “no” to my friends using “third-party applications” then I am saying “no” to my friends reading my posts at all.

Perhaps, then, they mean specific third-party applications? So I should be able to say, for example, “my friends can read this with a browser, but not with evil-rebroadcaster-app, which not only reads the posts but sends them to their completely public blog”? Well, perhaps, but how is the social network supposed to control that? This is only possible in the fantasy world of DRM and remote attestation.

Do the EFF really want DRM? Really? I assume not. So they need to find a better way to say what they want. In particular, they should talk about the outcome and not the mechanism. Talking about mechanisms is exactly why most technology policy turns out to be nonsense: mechanisms change and there are far more mechanisms available than any one of us knows about, even those of us whose job it is to know about them. Policy should not talk about the means employed to achieve an aim, it should talk about the aim.

The aim is that users should have control over where their data goes, it seems. Phrased like that, this is clearly not possible, nor even desirable. Substitute “Disney” for “the users” and you can immediately see why. If you solve this problem, then you solve the DRM “problem”. No right-thinking person wants that.

So, it seems like EFF should rethink their aims, as well as how they express them.

16 Aug 2010

It’s All About Blame

Filed under: Anonymity,Crypto,Privacy,Security — Ben @ 17:57

I do not represent my employer in this post.

Eric Schmidt allegedly said:

“The only way to manage this is true transparency and no anonymity. In a world of asynchronous threats, it is too dangerous for there not to be some way to identify you. We need a [verified] name service for people. Governments will demand it.”

I don’t care whether he actually said it, but it neatly illustrates my point. The trouble with allowing policy makers, CEOs and journalists to define technical solutions is that their ability to do so is constrained by their limited understanding of the available technologies. At Google (who I emphatically do not represent in this post), we have this idea that engineers should design the systems they work on. I approve of this idea, so, speaking as a practising engineer in the field of blame (also known as security), I contend that what Eric really should have allegedly said was that the only way to manage this is true ability to blame. When something goes wrong, we should be able to track down the culprit. Governments will demand it.

Imagine if, the next time you got on a plane, instead of showing your passport, you instead handed over an envelope with a fancy seal on it, containing your ID, with windows showing just enough to get you on the plane (e.g. your ticket number and photo). The envelope could be opened on the order of a competent court, should it turn out you did something naughty whilst travelling, but otherwise you would remain unidentified. Would this not achieve the true aim that Eric allegedly thinks should be solved by universal identification? And is it not, when spread to everything, a better answer?

Of course, in the physical world this is actually quite hard to pull off, tamper-proof and -evident seals being what they are (i.e. crap), but in the electronic world we can actually do it. We have the crypto.
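Here is roughly what the electronic version looks like (a sketch using PyNaCl’s aptly named SealedBox; the “windows” are just fields sent in clear alongside the sealed blob):

    from nacl.public import PrivateKey, SealedBox

    court_key = PrivateKey.generate()   # held by the court, not the airline

    def make_envelope(full_identity: bytes, windows: dict):
        # Seal the full identity under the court's public key; only the
        # court's private key can ever open it.
        sealed = SealedBox(court_key.public_key).encrypt(full_identity)
        return {"windows": windows, "sealed_identity": sealed}

    envelope = make_envelope(
        b"passport=...;name=...",                           # unreadable in transit
        {"ticket": "<ticket number>", "photo": "<photo>"},  # enough to board
    )

    # Only on a court order does the identity come out:
    print(SealedBox(court_key).decrypt(envelope["sealed_identity"]))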

Just sayin’.

8 Jun 2010

XAuth: Who Should Know What?

Filed under: Anonymity,Privacy,Security — Ben @ 11:26

Note that I am not speaking for my employer in this post.

I’ve been following the debate around XAuth with interest. Whilst the debate about whether centralisation is an acceptable stepping stone to an in-browser service is interesting, I am concerned about the functionality of either solution.

As it stands, XAuth reveals to each relying party all of my identity providers, so that it can then present UI to allow me to choose one of them to authenticate to the RP. Why? What business of the RP is it where I have accounts? All that should be revealed is the IdP I choose to reveal (if any). This seems easy enough to accomplish, even in the existing centralised version: all that has to happen is for the script that xauth.org serves to include the UI for IdP choice.

This is not just privacy religion (or theatre): as the EFF vividly illustrated with their Panopticlick experiment, it is surprisingly easy to uniquely identify people from signals you would have thought were not at all identifying, such as browser version and configuration information. Indeed, a mere 33 IdPs would provide enough information (if evenly distributed) to uniquely identify every person in the world. Meebo had no difficulty at all coming up with 15 of them for page one of many in their introductory blog post.

[Screenshot: 15 IdPs on page 1 of many]
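The arithmetic behind the 33-IdP claim takes one line: each provider’s presence or absence is one bit, and 33 bits is more than enough to give everyone on Earth a distinct pattern.

    import math

    world_population = 7e9                    # roughly, as of 2010
    print(math.log2(world_population))        # ~32.7 bits needed
    print(2 ** 33)                            # 8589934592 distinct patterns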

4 Mar 2010

Selective Disclosure, At Last?

Filed under: Anonymity,Crypto,Privacy,Security — Ben @ 5:34

Apparently it’s nearly five years since I first wrote about this and now it finally seems we might get to use selective disclosure.

I’m not going to reiterate what selective disclosure is good for, and apparently my friend Ben Hyde has spared me from the need to be cynical, though I think (I am not a lawyer!) he is wrong: the OSP applies to each individual specification – you are not required to use them in the context of each other.

So, for now, I will just celebrate the fact that Microsoft has finally made good on its promise to open up the technology, including BSD-licensed code. Though I guess I will have to inject one note of cynicism: a quick glance at the specification (you can get it here) suggests that they have only opened up the most basic use of the technology: the ability to assert a subset of the signed claims. There’s a lot more there. I hope they plan to open that up, too (how long will we have to wait, though?).
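To make the basic capability concrete, “assert a subset of the signed claims” can be approximated with nothing fancier than salted commitments – this is just the flavour, not U-Prove’s actual cryptography (which also gives you unlinkability between shows):

    import hashlib
    import secrets
    from nacl.signing import SigningKey

    def commit(claim: str, salt: bytes) -> bytes:
        return hashlib.sha256(salt + claim.encode()).digest()

    claims = {"age": "over-18", "name": "Ben", "country": "UK"}
    salts = {k: secrets.token_bytes(16) for k in claims}
    commitments = b"".join(sorted(commit(f"{k}={v}", salts[k])
                                  for k, v in claims.items()))

    issuer = SigningKey.generate()
    credential = issuer.sign(commitments)   # one signature covers every claim

    # The user discloses only the age claim, withholding name and country:
    disclosed_claim, disclosed_salt = "age=over-18", salts["age"]

    # The verifier checks the issuer's signature, then that the revealed
    # claim's commitment is among the signed ones.
    blob = issuer.verify_key.verify(credential)
    signed_commitments = {blob[i:i + 32] for i in range(0, len(blob), 32)}
    assert commit(disclosed_claim, disclosed_salt) in signed_commitments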

10 Dec 2009

How To Keep Your Facebook Stuff Private

Filed under: Privacy — Ben @ 15:27

Apparently, it is Facebook’s considered opinion that the way to avoid sharing data you don’t want shared is to not enter it:

Barry Schnitt, a Facebook spokesman, said users could avoid revealing some information to non-friends by leaving gender and location fields blank.

I guess they’d agree, then, that the best option is to not use Facebook at all.

8 Dec 2009

Encryption Is Not Anonymisation

Filed under: Anonymity,Privacy — Ben @ 19:45

I was surprised to see

the encrypted (and thus anonymised) customer identity

in Richard Clayton’s analysis of Detica.

As we know from the AOL and Netflix fiascos, this is far from the truth. If you want anonymity, you also need unlinkability. But I’ve said that before.
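A toy demonstration of why (using a keyed hash to stand in for whatever encryption was actually used): deterministically transforming an identifier just renames it, and the renamed records remain perfectly linkable.

    import hashlib
    import hmac

    key = b"operator-secret"

    def pseudonymise(customer_id: str) -> str:
        # Same input, same key, same output: a pseudonym, not anonymity.
        return hmac.new(key, customer_id.encode(), hashlib.sha256).hexdigest()[:12]

    records = [("alice", "visited-site-A"),
               ("bob", "visited-site-B"),
               ("alice", "visited-site-C")]

    for customer, event in records:
        print(pseudonymise(customer), event)
    # alice's two visits carry the same token, so her whole history links
    # together -- exactly the property that undid the AOL and Netflix data sets.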

1 Sep 2009

Kim Cameron Explains Why Hoarding Is Not Hoarding

Filed under: Crypto,Open Source,Open Standards,Privacy — Ben @ 14:13

I’ve been meaning for some time to point out that it’s been well over a year since Microsoft bought Credentica and still no sign of any chance for anyone to use it. Kim Cameron has just provided me with an appropriate opportunity.

Apparently the lack of action is because Microsoft need to get a head start on implementation. Because if they haven’t got it all implemented, they can’t figure out the appropriate weaseling on the licence to make sure they keep a hold of it while appearing to be open.

if you don’t know what your standards and implementations might look like, you can’t define the intellectual property requirements.

Surely the requirements are pretty simple, if your goal is to not hoard? You just let everyone use it however they want. But clearly this is not what Microsoft have in mind. They want it “freely” used on their terms. Not yours.

6 Jul 2009

Phormlessness

Filed under: Privacy — Ben @ 12:10

BT have canned Phorm. I don’t really have anything to add to that, except … yay!

30 May 2009

Wave Trust Patterns

Filed under: Crypto,Open Source,Open Standards,Privacy,Security — Ben @ 6:04

Ben Adida says nice things about Google Wave. But I have to differ with

… follows the same trust patterns as email …

Wave most definitely does not follow the same trust patterns as email; that is something we have explicitly tried to improve upon. In particular, the crypto we use in the federation protocol ensures that the origin of all content is known and that the relaying server did not cheat by omitting or re-ordering messages.
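The flavour of the mechanism, though emphatically not Wave’s actual wire format, is easy to sketch: each message is signed by its origin and bound to the hash of its predecessor, so a relay that drops or reorders anything breaks verification.

    import hashlib
    from nacl.signing import SigningKey

    origin = SigningKey.generate()

    def append(history, payload: bytes):
        # Bind this message to its predecessor, then sign the whole thing.
        prev = hashlib.sha256(history[-1]).digest() if history else b"\x00" * 32
        history.append(origin.sign(prev + payload))

    def verify(history, verify_key):
        prev = b"\x00" * 32
        for signed in history:
            message = verify_key.verify(signed)   # raises if forged
            if message[:32] != prev:
                raise ValueError("relay omitted or re-ordered a message")
            prev = hashlib.sha256(signed).digest()

    chain = []
    append(chain, b"hello")
    append(chain, b"world")
    verify(chain, origin.verify_key)       # intact history verifies...
    try:
        verify(chain[1:], origin.verify_key)
    except ValueError as err:
        print("detected:", err)            # ...a tampered one does not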

I should note, before anyone gets excited about privacy, that the protocol is a server-to-server protocol and so does not identify you any more than your email address does. You have to trust your server not to lie to you, though – and that is similar to email. I run my own mail server. Just saying.

I should also note that, as always, this is my personal blog, not Google’s.

4 May 2009

Why Privacy Will Always Lose

Filed under: Identity Management,Privacy — Ben @ 17:05

In social networks, that is.

I hear a lot about how various social networks have privacy that sucks, and how, if only they got their user interaction act together, users would do so much better at choosing options that protect their privacy. This seems obviously untrue to me, and here’s why…

Imagine that I have two otherwise identical social networking sites, one with great privacy protection (GPPbook) and one that has privacy controls that suck (PCTSbook). What will my experience be on these two sites?

When I sign up on GPPbook, having jumped through whatever privacy-protecting hoops there are for account setup, what’s the next thing I want to do? Find my friends, of course. So, how do I do that? Well, I search for them, using, say, their name or their email address. But wait – GPPbook won’t let me see the names or email addresses of people who haven’t confirmed they are my friends. So, I’m screwed.

OK, so clearly that isn’t going to work, let’s relax the rules a little and use the not-quite-so-great site, NQSGPPbook, which will show names. After all, they’re rarely unique, so that seems pretty safe, right? And anyway, even if they are unique, what have I revealed? That someone signed up for the site at some point in the past – but nothing more. Cool, so now I can find my friends, great, so I look up my friend John Smith and I find ten thousand of them. No problem, just check the photos, where he lives, his birthday, his friends and so forth, and I can tell which one is my John Smith. But … oh dear, no friend lists, no photos, no date of birth – this is the privacy preserving site, remember? So, once more I’m screwed.

So how am I going to link to my friends? Pretty clearly the only privacy preserving way to do this is to contact them via some channel of communication I have already established with them, say email or instant messaging, and do the introduction over that. Similarly with any friends of friends. And so on.

Obviously the experience on PCTSbook is quite different. I look up John Smith, home in on the ones that live in the right place, are the right age, have the right friends and look right in their photos and I click “add friend” and I’m done.

So, clearly, privacy is a source of friction in social networking, slowing down the spread of GPPbook and NQSGPPbook in comparison to PCTSbook. And as we know, paralleling Dawkins on evolution, what spreads fastest is what we find around. So what we find around is social networks that are bad at protecting privacy.

This yields a testable hypothesis, like all good science, and here it is: the popularity of a social networking site will be in inverse proportion to the goodness of its privacy controls. I haven’t checked, but I’ll bet it turns out to be true.

And since I’ve mentioned evolution, here’s another thing that I’ve been thinking about in this context: evolution does not yield optimal solutions. As we know, evolution doesn’t even drive towards locally optimal solutions; it drives towards evolutionarily stable strategies instead. And this is the underlying reason that we end up with systems that everyone hates – because they are determined by evolution, not optimality.

So, is there any hope? I was chatting with my friends Adriana and Alec, co-conspirators in The Mine! Project, about this theory, and they claimed their baby was immune to this issue, since it includes no mechanism for finding your friends. I disagree: this means it is as bad as it is possible for it to be in terms of “introduction friction”. But thinking further – the reason there is friction in introductions is because the mechanisms are still very clunky. I have to cut’n’paste and navigate to web pages that turn up in my email (and hope I’m not being phished) and so forth to complete the introduction. But if the electronic channels of communication were as smooth and natural as, say, talking, then it would be a different story. All of a sudden using existing communications channels would not be a source of friction – instead not using them would be.

So, if you want to save the world, then what you need to do is improve how we use the ‘net to communicate. Make it as easy and natural (and private) as talking.

28 Mar 2009

Mining Is Easy

Filed under: Privacy,Security — Ben @ 13:33

I’ve written before about the risks involved in exposing the social graph. Now there’s a nice video showing just how easy it is to mine that graph, and other data we give away so freely, using Maltego2. Scary stuff.

1 Feb 2009

A Good Use of the TPM?

Filed under: Anonymity,Privacy,Security — Ben @ 20:33

Back when the TPM was called Palladium I made myself unpopular in some circles by pointing out that there were good uses for it, too, such as protecting my servers from attackers.

Whether that is practical is still an interesting question – it’s a very big step from a cheap device that does some cunning crypto to a software stack that can reliably attest to what is running (which is probably all that has saved us from the more evil uses of the TPM) – but at a recent get-together for privacy and anonymity researchers that George Danezis and I ran, Mark Ryan presented an interesting use case.

He proposes using the TPM to hold sensitive data such that the guy holding it can read it – but if he does, then it becomes apparent to the person who gave him the data. Or, the holder can choose to “give the data back” by demonstrably destroying his own ability to read it.

Why would this be useful? Well, consider MI5’s plan to trawl through the Oyster card records. Assuming that government fails to realise that this kind of thing is heading us towards a police state, wouldn’t it be nice if we could check afterwards that they have behaved themselves and only accessed data that they actually needed to access? This kind of scheme is a step towards having that kind of assurance.
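As a toy model of the property (my rendering of the idea, not Mark Ryan’s actual TPM protocol), think of data whose key lives in the TPM: every read irreversibly bumps a counter that the data’s owner can audit later, and the holder can instead destroy the key to demonstrably give the data back.

    class EscrowedData:
        # Stand-ins: the counter models a TPM monotonic counter, and
        # give_back() models deleting a key that only the TPM holds.
        def __init__(self, secret: bytes):
            self._secret = secret
            self.read_counter = 0      # auditable by whoever supplied the data

        def read(self) -> bytes:
            if self._secret is None:
                raise RuntimeError("access was verifiably destroyed")
            self.read_counter += 1     # reading always leaves evidence
            return self._secret

        def give_back(self):
            self._secret = None

    oyster = EscrowedData(b"journey history")
    oyster.give_back()
    assert oyster.read_counter == 0    # afterwards we can check: nobody looked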

18 Dec 2008

Microsoft Show a Complete Lack of Respect for Privacy

Filed under: Anonymity,Privacy — Ben @ 20:24

I am astonished to read that Microsoft suddenly removed anonymity from blog posts. Retroactively. WTF? That’s just crazy, not to mention rude. I can’t begin to imagine who got outed by this and how annoyed they must be.

I also wonder whether, when they were pretending that posts were anonymous, they revealed that they were actually tracking who wrote them all? And finally I wonder, given the litigious nature of their homeland, who is going to sue them first?

3 Dec 2008

Podcast With Dave Birch

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 16:07

A few weeks ago, Dave Birch of Consult Hyperion interviewed me about Digital Identity. Because he weirdly doesn’t give a way to link to an individual podcast, here’s the M4a (whatever that is) and the MP3.

This was the first podcast I’ve done that I actually dared listen to. I think it came out pretty well.
