Ben Laurie blathering

17 Nov 2008

Identification Is Not Security

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 16:50

Kim writes about minimal disclosure. Funnily enough my wife, Camilla, spontaneously explained minimal disclosure to me a couple of nights ago. She was incensed that she ended up having to “prove” who she was in order to pay a bill over the phone.

First of all, they asked her for her password. Of course, she has no idea what her password might be with this particular company, so their suggestion was she guess. Camilla surprised me by telling me that she had, of course, declined to guess, because by guessing she would be revealing all her passwords that she might use elsewhere. So, they then resorted to the usual stupidity: mother’s maiden name, postcode, phone number and so forth. Camilla said she was happy to provide that information because she didn’t feel it was in any way secret – which, of course, means it doesn’t really authenticate her, either.

Anyway, her point was that in order to pay a bill she really shouldn’t have to authenticate to the payee – what do they care who pays the money, so long as it gets paid? In fact, really, we want the authentication to be the other way round – the payee should prove to her that they are really the payee. It would also be nice if they provided some level of assurance that she is paying the right bill. But they really don’t need to have any clue who she is, so long as she can hand over money somehow (which might, of course, include authenticating somehow to some money-handling middleman).

But what seems to be happening now is that everyone is using identity as a proxy for security. If we know who you are, then everything else springs from that.

Now, if what you want to do is to determine whether someone is authorised to do something, then certainly this is an approach that works. I find out who you are, then I look you up in my Big Table of Everything Everyone Is Allowed To Do, and I’m done. However, and now I finally circle back to Kim’s post, for many, if not most, purposes, identification is far more than is really needed. For example, Equifax just launched the Over 18 I-Card. I hope Equifax got this right and issued a card that doesn’t reveal anything else about you – but even if they didn’t, clearly it could be done – and clearly there’s value in proving you’re over 18, and therefore authorised to do some things you might not otherwise be able to do. Though I’d note that I am not over 18 in Equifax’ view because I do not have an SSN!

Anyway, current deficiencies aside, this is a great example of where minimal disclosure works better than identification – rather than everyone having a lookup table containing everyone in the world and whether they are over 18, someone who has the information anyway does the lookup once and then signs the statement “yep, the bearer is over 18”.
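The flow is easy to sketch. Here’s a toy model in Python, using an HMAC as a stand-in for the issuer’s signature (a real deployment would use public-key or blind signatures, so that verifiers don’t have to share the issuer’s secret); all the names are invented for the example:

```python
import hmac
import hashlib

# The issuer (e.g. a credit bureau) checks its records once, then signs
# the bare statement "bearer is over 18". HMAC is a stand-in here: it
# illustrates the flow, but it means verifiers share the issuer's secret,
# which a real public-key scheme avoids.
ISSUER_KEY = b"issuer-secret-key"

def issue_over18_token(nonce: bytes) -> bytes:
    """Issuer signs the minimal claim, bound to a nonce to prevent replay."""
    return hmac.new(ISSUER_KEY, b"over-18:" + nonce, hashlib.sha256).digest()

def verify_over18_token(nonce: bytes, token: bytes) -> bool:
    """A merchant checks the claim without learning name, age or address."""
    expected = hmac.new(ISSUER_KEY, b"over-18:" + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

token = issue_over18_token(b"session-123")
print(verify_over18_token(b"session-123", token))  # True
print(verify_over18_token(b"session-456", token))  # False: wrong session
```

The point is that the verifier learns exactly one bit – over 18 or not – rather than getting a copy of the lookup table.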

But in many other cases identification doesn’t work at all – after all, the premise of the ID card is that it is supposed to improve our security against terrorists. But it’s pretty obvious that identifying people really isn’t going to help – you can work that out just by thinking about it, but, even more importantly, in several recent terrorist attacks everyone has been very thoroughly identified, and it hasn’t helped one bit.

And in the case of my wife trying to pay a bill, identification was completely without purpose. Yet everyone wants to do it. As Kim says, we really need to rethink the world in terms of minimal disclosure – and as I show above, sometimes this is actually the easiest way to think about it – my one area of disagreement is that we should not call this “identity” or even “contextual identity”. We need a term that makes it clear it has nothing to do with identification. I prefer to think in terms of “proof of entitlement” or “proof of authority” – but those don’t exactly roll off the tongue … ideas?

10 Aug 2008

NYT Doesn’t Quite Get It, Hilarity From OpenID

Filed under: General,Identity Management,Privacy,Security — Ben @ 13:31

The New York Times’ Randy Stross has a piece about passwords and what a bad idea they are (sorry, behind a loginwall). So far, so good (and I’ll admit to bias here: I was interviewed for this piece, and whilst there’s no attribution, what I was saying is clearly reflected in the article), but Stross weirdly focuses on OpenID as the continuing cause of our password woes, because, he says, it is blocking the deployment of information cards, which will save us all.

Now, I am no fan of OpenID, but Stross is dead wrong here. OpenID says nothing about how you log in. It is not OpenID’s fault that the login is generally done with a password – that blame we must all accept collectively.

And whilst I firmly believe that the only way out of this mess is strong authentication, information cards are hardly the be-all and end-all of that game. They certainly have a way to go in usability before they’re going to be taking the world by storm. Don’t blame OpenID for that.

In the meantime, Scott Kveton, chair of the OpenID Foundation board, reacts:

The OpenID community has identified two key issues it needs to address in 2008 that Randy mentioned in his column: security and usability.

I just have to giggle. I mean, apart from those two minor issues, OpenID is pretty good, right? He forgot to mention privacy, though.

18 Jul 2008

Analysing Data Loss

Filed under: Privacy,Security — Ben @ 14:39

My colleague, Steve Weis, has an interesting article analysing the Dataloss Database. With pictures!

Within accidental disclosures, 36% were due to improper disposal of media or computers. Surprisingly, 30% were due to leaks via snail mail.

10 Jul 2008

ACTA, The Pirate Bay and BTNS

Doc Searls just pointed me at a couple of articles. The first is about ACTA.

ACTA, first unveiled after being leaked to the public via Wikileaks, has sometimes been lauded by its supporters as “The Pirate Bay-killer,” due to its measures to criminalize the facilitation of copyright infringement on the internet – text arguably written specifically to beat pirate BitTorrent trackers. The accord will add internet copyright enforcement to international law, force national ISPs to respond to international information requests, and subject iPods and other electronic devices to ex parte searches at international borders.

Obviously this is yet another thing we must resist. The Pirate Bay’s answer to this

IPETEE would first test whether the remote machine is supporting the crypto technology; once that’s confirmed it would then exchange encryption keys with the machine before transmitting your actual request and sending the video file your way. All data would automatically be unscrambled once it reaches your machine, so there would be no need for your media player or download manager to support any new encryption technologies. And if the remote machine didn’t know how to handle encryption, the whole transfer would fall back to an unencrypted connection.

is a great idea, but … it’s already been done by the IETF BTNS (Better-Than-Nothing Security) Working Group.

The WG has the following specific goals:

a) Develop an informational framework document to describe the motivation and goals for having security protocols that support anonymous keying of security associations in general, and IPsec and IKE in particular

Hmmm. I guess I should figure out how I switch this on. Anyone?
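The fallback behaviour both IPETEE and BTNS describe – encrypt if the peer can, carry on in the clear if it can’t – comes down to a very small negotiation. Here’s a toy model (the function and capability names are my invention, not either protocol):

```python
# Toy model of opportunistic encryption: prefer an encrypted transport,
# but fall back to plaintext rather than refusing to talk at all.
# Peer capabilities are modelled as a simple set of strings.

def negotiate(peer_capabilities: set[str]) -> str:
    """Pick the best transport both sides support; never fail closed."""
    for preferred in ("encrypted", "plaintext"):
        if preferred in peer_capabilities:
            return preferred
    return "plaintext"  # opportunistic: assume a legacy peer

print(negotiate({"encrypted", "plaintext"}))  # encrypted
print(negotiate({"plaintext"}))               # plaintext
```

Note the obvious caveat, which is right there in the Working Group’s name: because an active attacker can simply strip the encryption offer and force the fallback, this is better than nothing – but only that.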

24 Jun 2008

Information Card Foundation Launched

Yet another industry alliance launches today: the Information Card Foundation (yes, I know that’s currently a holding page: as always, the Americans think June 24th starts when they wake up).

I have agreed to be a “Community Steering Member”, which means I sit on the board and get a vote on what the ICF does. Weirdly, I am also representing Google on the ICF board. I guess I brought that on myself.

I am not super-happy with the ICF’s IPR policy, though it is slightly better than the OpenID Foundation’s. I had hoped to get that fixed before launch, but there are only so many legal reviews the various founders can put up with at short notice, so I will have to continue to tinker post-launch.

It is also far from clear how sincere Microsoft are about all this. Will they behave, or will they be up to their usual shenanigans? We shall see (though the adoption of a fantastically weak IPR policy is not the best of starts)! And on that note, I am still waiting for any sign of movement at all on the technology Microsoft acquired from Credentica – which they have kinda, sorta, maybe committed to making generally available. This is key, IMO, to the next generation of identity management systems, and it will only flourish if people can freely experiment with it. So what are they waiting for?

(More news reports than you can shake a stick at.)

23 May 2008

Preprint: (Under)mining Privacy in Social Networks

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 15:11

Actually, I’m not sure if this one ends up in print or not. But anyway, I think its content is obvious from the title.

My colleagues Monica Chew and Dirk Balfanz did all the hard work on this paper.

12 May 2008

The World Without “Identity” or “Federation” is Already Here

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 12:24

My friend Alec Muffett thinks we should do away with “Big I” Identity. I’m all for that … but Alec seems to be quite confused.

Firstly, his central point, that all modern electronic identity requires the involvement of third parties, is just plain wrong. OpenID, which he doesn’t mention, is all about self-asserted identity – I put stuff on webpages I own and that’s my identity. Cardspace, to the extent it is used at all, is mostly used with self-signed certificates – I issue a new one for each site I want to log in to, and each time I visit that site I prove again that I own the corresponding private key. And, indeed, this is a pretty general theme through the “user-centric” identity community.

Secondly, the idea that you can get away with no third-party involvement is just unrealistic. If everyone were honest, then sure, why go beyond self-assertion? But everyone is not. How do we deal with bad actors? Alec starts off down that path himself with his motorcycling example: obviously, conducting a driving test on the spot does not scale well – when I took my test, it took around 40 minutes to cover all the aspects considered necessary to establish sufficient skill, and I’d hesitate to argue that it could be reduced. The test used to be much shorter, and the price we paid was a very high death rate amongst young motorcyclists; stronger rules have made big inroads into that statistic. It is not realistic to expect either me or the police to spend 40 minutes establishing my competence every time it comes into question. Alec appears to recognise this problem by suggesting that the officer might instead rely on the word of my local bike club. But this has two problems: firstly, I am now relying on a third party (the club) to certify me, which runs exactly counter to Alec’s stated desires; and secondly, how does one deal with clubs whose only purpose is to certify people who should not, in fact, be allowed to drive (because they’re incompetent or dangerous, for example)?

The usual answer one will get at this point from those who have not worked their way through the issues yet is “aha, but we don’t need a central authority to fix this problem, instead we can rely on some kind of reputation system”. The trouble is that no-one has figured out how to build a reputation system in cyberspace (and perhaps in meatspace, too) that is not easily subverted by people creating networks of “fake” identities purely in order to boost their own reputations – at least, not without some kind of central authority attesting to identity.
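A toy illustration of why naive reputation fails: if identities are free to create, an attacker simply mints their own endorsers. The scoring rule and all the names below are invented for the example:

```python
# Sybil attack on a naive reputation system: score is just the number
# of distinct identities endorsing you, and identities cost nothing.

def reputation(endorsements: dict[str, set[str]], who: str) -> int:
    """Score = number of distinct identities endorsing `who`."""
    return len(endorsements.get(who, set()))

endorsements = {
    "mallory": {f"sock-puppet-{i}" for i in range(100)},  # 100 fake accounts
    "alice": {"bob", "carol"},                            # 2 real endorsers
}

print(reputation(endorsements, "mallory"))  # 100 -- the fakes win
print(reputation(endorsements, "alice"))    # 2
```

Weighting endorsements by the endorser’s own reputation doesn’t escape the problem either – the sock-puppets can endorse each other first – which is why the known fixes all end up anchoring identity in something that is expensive to forge, i.e. some kind of authority.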

Yet another issue that has to be faced is what to do about negative attributes (e.g. “this guy is a bad risk, don’t lend him money because he never pays it back”). No-one is going to willingly make those available to others. Once more, we end up having to invoke some kind of authority.

Of course, there are many cases where self-assertion is perfectly fine, so I have no argument with Alec there. And yes, there is a school of thought that says any involvement with self-issued stuff is a ridiculous idea, but you mostly run into that amongst policy people, who like to think that we’re all too stupid to look after ourselves, and corporate types who love silos (we find a lot of those in the Liberty Alliance and the ITU and such-like places, in my experience).

But the bottom line is that a) what he wants is insufficient to completely deal with the problems of identity and reputation and b) it is nothing that plenty of us haven’t been saying (and doing) all along – at least where it works.

Once you’ve figured that out, you realise how wrong

I am also here not going to get into the weirdness of Identity wherein the goal is to centralise your personal information to make management of it convenient, and then expend phenomenal amounts of brainpower implementing limited-disclosure mechanisms and other mathematica, in order to re-constrain the amount of information that is shared; e.g. “prove you are old enough to buy booze without disclosing how old you are”. Why consolidate the information in the first place, if it’s gonna be more work to keep it secret henceforth? It’s enough to drive you round the twist, but it’ll have to wait for a separate rant.

is. Consolidation is not what makes it necessary to use selective disclosure – that is driven by the need for the involvement of third parties. Obviously I can consolidate self-asserted attributes without any need for selective disclosure – if I want to prove something new or less revealing, I just create a new attribute. Whether it’s stored “centrally” (what alternative does Alec envision, I wonder?) or not is entirely orthogonal to the question.

Incidentally, the wit that said “Something you had, Something you forgot, Something you were” was the marvellous Nick Mathewson, one of the guys behind the Tor project. Also, Alec, if you think identity theft is fraud (as I do), then I recommend not using the misleading term preferred by those who want to shift blame, and call it “identity fraud” – in fraud, the victim is the person who believes the impersonator, not the person impersonated. Of course the banks would very much like you to believe that identity fraud is your problem, but it is not: it is theirs.

26 Apr 2008

Do We Need Credentica?

Filed under: Anonymity,Crypto,Open Source,Privacy,Security — Ben @ 20:22

I read that IBM have finally contributed Idemix to Higgins.

But … I am puzzled. Everyone knows that the reason Idemix has not been contributed sooner is because it infringes the Credentica patents. At least, so says Stefan – I wouldn’t know, I haven’t checked. But it seems plausible that at least IBM think that’s true.

So, what’s changed? Have IBM decided that Idemix does not infringe? Or did Microsoft let them publish? Or what?

If it’s the former, then do others agree? And if it’s the latter, then in what sense is this open source? If IBM have some kind of special permission with regard to the patents, that is of no assistance to the rest of us.

It seems to me that someone needs to do some explaining. But if the outcome is that Idemix really is open source, then what is the relevance of Credentica?

Incidentally, I wanted to take a look at what it is that IBM have actually released, but there doesn’t seem to be anything there.

Can Phorm Intercept SSL?

Filed under: Crypto,Open Source,Privacy — Ben @ 18:24

Someone asked me to comment on a thread over at BadPhorm on SSL interception.

In short, the question is: can someone in Phorm’s position decrypt SSL somehow? The fear is driven by the existence of appliances that do just this. But these appliances need to do one of two special things to work.

The first possibility is where the appliance is deployed in a corporate network to monitor traffic going from browsers inside the corporation to SSL servers outside. In this case, the SSL appliance acts as a CA, and its CA certificate is installed in each browser’s store of trusted CAs. When the appliance sees an SSL request go past, it quickly creates (some would say “forges”) a certificate for the server the request is destined for and, instead of routing the connection on to the real server, answers it itself, using the newly created certificate. Because the browser trusts the appliance’s CA, this all looks perfectly fine and the browser proceeds without a warning. The appliance then creates an outgoing connection to the real server and acts as a proxy between browser and server, thus getting access to the plaintext of the interaction.
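The trust logic that makes this work can be modelled in a few lines. This is only a sketch – certificates are plain dicts and no real X.509 or cryptography is involved – but it shows why the browser cannot tell the forged certificate from the genuine one:

```python
# Toy model of browser certificate checking: the browser accepts any
# certificate that chains to *some* CA in its trust store. Once the
# appliance's CA has been installed, its on-the-fly certificates are
# just as acceptable as the real ones.

browser_trust_store = {"RealCA", "ApplianceCA"}  # admin installed ApplianceCA

def browser_accepts(cert: dict) -> bool:
    """Accept iff the issuing CA is trusted (chain-building elided)."""
    return cert["issuer"] in browser_trust_store

genuine = {"subject": "example.com", "issuer": "RealCA"}
forged = {"subject": "example.com", "issuer": "ApplianceCA"}   # minted by appliance
untrusted = {"subject": "example.com", "issuer": "EvilCA"}     # no install, no luck

print(browser_accepts(genuine))    # True
print(browser_accepts(forged))     # True -- indistinguishable to the user
print(browser_accepts(untrusted))  # False -- this is the warning case
```

Which is exactly why the whole trick hinges on getting that one CA certificate into the trust store in the first place.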

I’d note in passing that in Netronome’s diagram they show a “trust relationship” between the webserver and the SSL appliance. This is not correct. There need be no relationship at all between the webserver and the appliance – indeed it would be fair to say that many a webserver operator would view what the appliance is doing as downright sneaky. Or dishonest, even.

But, in any case, inside the corporation this behaviour seems fair enough to me – they’re paying for the browser, the machine it runs on, the network connection and the employee’s time. I guess they have a right to see the data.

Could Phorm do this? Well, they could try to persuade anyone stupid enough to install a CA certificate of theirs in their browser, and then yes, indeed, this trick would work for them. Moral of the story: don’t install such certificates. Note that, last time I looked, if you wanted to register to do online VAT returns you had to install one of these things. Oops!

Or, they could get certified as a CA and get automatically installed in everyone’s browser. I’m pretty sure, however, that such a use of a CA key would find them in breach of the conditions attached to their certification.

So, in short, Phorm can only do this to people who don’t understand what’s going on – i.e. 99% of Internet users. But not me.

The second scenario is to deploy the SSL interception appliance at the webserver end of the network (at least, this is how it’s usually done), and have it sniff incoming connections to the webserver. However, to break these connections it needs to have a copy of the webserver’s private key. I’m reasonably confident that the vast majority of webserver operators will not be handing over their private keys to Phorm, so even “99%” users are safe from this attack.

By the way, if you want to see this one in action, then you can: the excellent network sniffer, Wireshark, can do it. Full instructions can be found here. No need to buy an expensive appliance.

23 Apr 2008

Why You Should Always Use End-to-End Encryption

Filed under: Anonymity/Privacy,Crypto,Privacy,Security — Ben @ 17:46

A Twitter user has had all her private messages exposed to the world. This is one of the reasons I try to avoid sending private messages (at least, ones that I would like to remain private) over any system that does not employ end-to-end encryption.

At least then my only exposure is to my correspondent, not the muppets that run the messaging service I used.

One service this poor unfortunate has done for the world, though, is to provide an excellent example of why you should use cryptography routinely: you need not have any more to hide than your embarrassment.

Incidentally, I am going to stop using the combined tag “Anonymity/Privacy” after this post – clearly they are not always both applicable.
