Links

Ben Laurie blathering

19 Sep 2011

Lessons Not Learned

Filed under: Identity Management,Security — Ben @ 15:50

Anyone who has not had their head under a rock knows about the DigiNotar fiasco.

And those who’ve been paying attention will also know that DigiNotar’s failure is only the most recent in a long series of proofs of what we’ve known for a long time: Certificate Authorities are nothing but a money-making scam. They provide us with no protection whatsoever.

So imagine how delighted I am that we’ve learnt the lessons here (not!) and are now proceeding with an even less-likely-to-succeed plan using OpenID. Well, the US is.

If the plan works, consumers who opt in might soon be able to choose among trusted third parties — such as banks, technology companies or cellphone service providers — that could verify certain personal information about them and issue them secure credentials to use in online transactions.

Does this sound familiar? Rather like “websites that opt in can choose among trusted third parties – Certificate Authorities – that can verify certain information about them and issue them secure credentials to use in online transactions”, perhaps? We’ve seen how well that works. And this time there’s not even a small number of vendors (i.e. the browser vendors) who can remove a “trusted third party” who turns out not to be trustworthy. This time you have to persuade everyone in the world who might rely on the untrusted third party to remove them from their list. Good luck with that (good luck with even finding out who they are).

What is particularly poignant about this article is that even though its title is “Online ID Verification Plan Carries Risks”, the risks we are supposed to be concerned about are mostly privacy risks, for example

people may not want the banks they might use as their authenticators to know which government sites they visit

and

the government would need new privacy laws or regulations to prohibit identity verifiers from selling user data or sharing it with law enforcement officials without a warrant.

Towards the end, if anyone gets there, is a small mention of some security risk

Carrying around cyber IDs seems even riskier than Social Security cards, Mr. Titus says, because they could let people complete even bigger transactions, like buying a house online. “What happens when you leave your phone at a bar?” he asks. “Could someone take it and use it to commit a form of hyper identity theft?”

Dude! If only the risk were that easy to manage! The real problem comes when someone sets up an account as you with one of these “banks, technology companies or cellphone service providers” (note that CAs are technology companies). Then you are going to get your ass kicked, and you won’t even know who issued the faulty credential or how to stop it.

And, by the way, don’t be fooled by the favourite get-out-of-jail-free clause beloved by policymakers and spammers alike, “opt in”. It won’t matter whether you opt in or not, because the proof you’ve opted in will be down to these “trusted” third parties. And the guy stealing your identity will have no compunction about that particular claim.

18 Dec 2010

ƃuıʇsılʞɔɐlq uʍop-ǝpısd∩

Filed under: Anonymity,Crypto,Identity Management,Lazyweb,Privacy — Ben @ 14:54

A well-known problem with anonymity is that it allows trolls to ply their unwelcome trade. So, pretty much every decent cryptographic anonymity scheme proposed has some mechanism for blacklisting. Basically these work by some kind of zero-knowledge proof that you aren’t on the blacklist – and once you’ve done that you can proceed.

However, this scheme suffers from the usual problem with trolls: as soon as they’re blacklisted, they create a new account and carry on. Solving this problem ultimately leads to a need for strong identification for everyone so you can block the underlying identity. Obviously this isn’t going to happen any time soon, and ideally never, so blacklists appear to be fundamentally and fatally flawed, except perhaps in closed user groups (where you can, presumably, find a way to do strong-enough identification, at least sometimes) – for example, members of a club, or employees of a company.

So lately I’ve been thinking about using “blacklists” for reputation. That is, rather than complain about someone’s behaviour and get them blacklisted, instead when you see someone do something you like, add them to a “good behaviour blacklist”. Membership of the “blacklist” then proves the (anonymous) user has a good reputation, which could then be used, for example, to allow them to moderate posts, or could be shown to other users of the system (e.g. “the poster has a +1 reputation”), or all sorts of other things, depending on what the system in question does.

The advantage of doing it this way is that misbehaviour can then be used to remove reputation, and the traditional fallback of trolls no longer works: a new account is just as useless as the one they already have.
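The incentive shift is easy to see in a toy model (the account names, reputation counts and threshold below are all made up for illustration; no real scheme works exactly like this):

```python
# Toy model contrasting a ban-list with a "good behaviour blacklist".
# Everything here is illustrative, not any real anonymity scheme.

class Forum:
    def __init__(self):
        self.banned = set()       # traditional blacklist of account ids
        self.reputation = {}      # account id -> earned reputation

    def register(self, account):
        self.reputation.setdefault(account, 0)

    def can_moderate(self, account):
        # Privileges flow from accumulated reputation, not mere membership.
        return self.reputation.get(account, 0) >= 3

forum = Forum()

# A well-behaved user earns reputation over time.
forum.register("helpful")
for _ in range(3):
    forum.reputation["helpful"] += 1   # credited for good behaviour
assert forum.can_moderate("helpful")

# Traditional response to a troll: ban the account.
forum.register("troll-1")
forum.banned.add("troll-1")

# The classic failure: a fresh account sidesteps the ban entirely.
forum.register("troll-2")
assert "troll-2" not in forum.banned

# Under the reputation scheme, though, the fresh account gains nothing:
# it starts at zero, exactly as useless as the one that was banned.
assert not forum.can_moderate("troll-2")
```

The point of the sketch is the last assertion: under a ban-list the troll’s fresh account is a clean slate, while under a reputation list a clean slate is precisely what makes it worthless.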

There is one snag that I can see, though, which is at least some anonymity systems with blacklisting (e.g. Nymble, which I’ve somehow only recently become aware of) have the side-effect of making every login by a blacklisted person linkable. This is not good, of course. I wonder if there are systems immune to this problem?

Given that Jan Camenisch et al have a presentation on upside-down blacklisting (predating my thinking by quite a long way – one day I’ll get there first!), I assume there are – however, according to Henry, Henry and Goldberg, Camenisch’s scheme is not very efficient compared to Nymble or Nymbler.

27 Nov 2010

Why Identity Is Still Just Login

Filed under: Identity Management,Privacy — Ben @ 15:14

It seems everyone now agrees that Internet identity is about a collection of assertions relating to a person (or thing, if you want to go there). Some of these assertions are assertions one makes about oneself, for example “I like brussels sprouts”, and assertions others make about you, for instance “Ben is a UK citizen, signed the UK Government”. These assertions are essentially invariant, or slowly varying, for the most part. So what makes an identity, we agree, is some collection of these assertions.

But we also agree that people need to assert different identities: there’s Ben at work, Ben as a member of the OpenSSL team, Ben at home and so on. Each of these identities, we like to think, corresponds to a different collection of assertions.

All we need to do, we think, is map these identities onto collections of electronic assertions, and we’ll have solved The Internet Identity Problem. People will no longer be required to type in their credit card number five times a day, endlessly write down their address (and correct it when it changes) and so on. The ‘net will become one lovely, seamless experience of auto-communicated factoids about me that are just right for every circumstance and I’ll never fill in a form again.

You can probably see where I’m going. The more I think about it, the more I realise that every situation is different. My “identity” is contextual, and different for each context. We know this from endless studies of human behaviour.

So, what was the point of doing what every electronic identity system wants me to do, namely aggregating various assertions about me into various identities, and then choosing the right identity to reveal? To match this to what I do in the real world, I will need a different identity for each context.

So, what was the point of even talking about identities? Why not simply talk about assertions, and find better ways for me to quickly make the assertions I want to make. Cut out the middleman and do away with the notion of identity.

In practice, of course, this is exactly what has happened. The only vestige of this electronic identity that makes any sense is the ability to log in as some particular “person”. After that all my decisions are entirely contextual, and “identity” doesn’t help me at all. And so what we see is that “identity” has simply become “login”. And will always be so.

In a sense this is exactly what my Belay research project is all about – allowing me to decide exactly what I reveal in each individual context. In Belay, the ability to log in to a particular site will become the same kind of thing as any other assertion – a fine-grained permission I can grant or not grant.

Note: I am not against bundles of assertions – for example, I think lines one and two of my address clearly belong together (though for some purposes the first half of my postcode, or just my country, should suffice) or, less facetiously, when I use my credit card I almost always need to send a bunch of other stuff along with the credit card number. What I doubt is that the notion of a bundle that corresponds to an “identity” is a useful one. This implies that UIs where you pick “an identity” are doomed to failure – some other approach is needed.

14 Sep 2010

Experimenting With Client Certificates

Filed under: Crypto,Identity Management — Ben @ 16:30

I was recently contacted about yet another attempt to use client certificates for authentication. As anyone paying attention knows, this has some attractions but is pretty much unusable in browsers because of their diabolical UIs. So, I was fascinated to learn that this particular demo completely avoids that issue by implementing TLS entirely in Javascript! This strikes me as a hugely promising approach: now we have complete freedom to experiment with UI, whilst the server side can continue to use off-the-shelf software and standard configurations.

Once a UI has been found that works well, I would hope that it would migrate to become part of the browser; it seems pretty clear that doing this on the webpage is not likely to lead to a secure solution in the long run. But in the meantime, anyone can have a crack at their own UI, and all they need is Javascript (OK, for non-coders that might sound like a problem, but believe me, the learning curve is way shallower than any browser I’ve played with).

Anyway, pretty much end-of-message, except for some pointers.

I am very interested in finding competent JS/UI people who would be interested in banging harder on this problem – I can do all the crypto stuff, but I confess UI is not my forte! Anyone out there?

Note, by the way, that the focus on browsers as the “home of authentication” is also a barrier to change – applications also need to authenticate. This is why “zero install” solutions that rely on browsers (e.g. OpenID) are likely doomed to ultimate failure – by the time you’ve built all that into an application (which is obviously not “zero install”), you might as well have just switched it to using TLS and a client certificate…

23 May 2010

Nigori: Protocol Details

As promised, here are the details of the Nigori protocol (text version). I intend to publish libraries in (at least) C and Python. At some point, I’ll do a Stupid version, too.

Comments welcome, of course, and I should note that some details are likely to change as we get experience with implementation.

4 May 2009

Why Privacy Will Always Lose

Filed under: Identity Management,Privacy — Ben @ 17:05

In social networks, that is.

I hear a lot about how various social networks have privacy that sucks, and how, if only they got their user interaction act together, users would do so much better at choosing options that protect their privacy. This seems obviously untrue to me, and here’s why…

Imagine that I have two otherwise identical social networking sites, one with great privacy protection (GPPbook) and one that has privacy controls that suck (PCTSbook). What will my experience be on these two sites?

When I sign up on GPPbook, having jumped through whatever privacy-protecting hoops there are for account setup, what’s the next thing I want to do? Find my friends, of course. So, how do I do that? Well, I search for them, using, say, their name or their email address. But wait – GPPbook won’t let me see the names or email addresses of people who haven’t confirmed they are my friends. So, I’m screwed.

OK, so clearly that isn’t going to work, let’s relax the rules a little and use the not-quite-so-great site, NQSGPPbook, which will show names. After all, they’re rarely unique, so that seems pretty safe, right? And anyway, even if they are unique, what have I revealed? That someone signed up for the site at some point in the past – but nothing more. Cool, so now I can find my friends, great, so I look up my friend John Smith and I find ten thousand of them. No problem, just check the photos, where he lives, his birthday, his friends and so forth, and I can tell which one is my John Smith. But … oh dear, no friend lists, no photos, no date of birth – this is the privacy preserving site, remember? So, once more I’m screwed.

So how am I going to link to my friends? Pretty clearly the only privacy preserving way to do this is to contact them via some channel of communication I have already established with them, say email or instant messaging, and do the introduction over that. Similarly with any friends of friends. And so on.

Obviously the experience on PCTSbook is quite different. I look up John Smith, home in on the ones that live in the right place, are the right age, have the right friends and look right in their photos and I click “add friend” and I’m done.

So, clearly, privacy is a source of friction in social networking, slowing down the spread of GPPbook and NQSGPPbook in comparison to PCTSbook. And as we know, paralleling Dawkins on evolution, what spreads fastest is what we find around us. So what we find around us is social networks that are bad at protecting privacy.

This yields a testable hypothesis, like all good science, and here it is: the popularity of a social networking site will be in inverse proportion to the goodness of its privacy controls. I haven’t checked, but I’ll bet it turns out to be true.

And since I’ve mentioned evolution, here’s another thing that I’ve been thinking about in this context: evolution does not yield optimal solutions. As we know, evolution doesn’t even drive towards locally optimal solutions, it drives towards evolutionary stable strategies instead. And this is the underlying reason that we end up with systems that everyone hates – because they are determined by evolution, not optimality.

So, is there any hope? I was chatting with my friends Adriana and Alec, co-conspirators in The Mine! Project, about this theory, and they claimed their baby was immune to this issue, since it includes no mechanism for finding your friends. I disagree: this means it is as bad as it is possible for it to be in terms of “introduction friction”. But thinking further – the reason there is friction in introductions is that the mechanisms are still very clunky. I have to use cut’n’paste, and navigate to web pages that turn up in my email (and hope I’m not being phished), and so forth to complete the introduction. But if the electronic channels of communication were as smooth and natural as, say, talking, then it would be a different story. All of a sudden using existing communications channels would not be a source of friction – instead not using them would be.

So, if you want to save the world, then what you need to do is improve how we use the ‘net to communicate. Make it as easy and natural (and private) as talking.

4 Dec 2008

What Does “Unphishable” Mean?

Filed under: Crypto,General,Identity Management,Security — Ben @ 7:36

About a week ago, I posted about password usability. Somewhere in there I claimed that if passwords were unphishable, then you could use the same password everywhere.

Since then, I have had a steady stream of people saying I am obviously wrong. So, let’s take them one at a time…

…as long as the password I type in there is sent over (encrypted of course) to the backend and recoverable there as plaintext password, you have to trust it is stored/used securely there.

This does assume that everywhere you use it actually secures your password, and doesn’t just store it as plain text.

…there are many attacks to finding your password — an administrator at Facebook could look it up in the password database…

OK, OK, that’s three, but they say the same thing. This one is easily dismissed – obviously if we are using an unphishable protocol the password is not sent at all and it is not kept in Facebook’s database. If it were, then clearly a phisher would easily be able to get your password once he tricked you into typing it in on his site.

Even with perfect or near-perfect hardware, somebody will always find a way to game the system via social engineering.

Don’t forget that we are in a utopia here where users only ever type their passwords into the unphishable password gadget. I think it’s pretty reasonable to assume that if we’ve trained users to do that, we have also trained them to never reveal their password at all anywhere else, including in person, over the phone, via video-conference or during a teledildonics session. Yes, this does mean changing the world, but … utopia, remember?

Mythical crypto-gadgets simply won’t save the day. All somebody has to do is replace your crypto-gadget with an identical-looking crypto-gadget of their own making and now it becomes the new “password” input field that is so phishable

This seems to be more a criticism of the idea that we can ever get to the password utopia, which is a fair comment, but doesn’t make my argument incorrect. I will offer, though, hardware devices (such as the one I wrote about recently) as an answer. Clearly much harder to replace with “an identical-looking crypto-gadget of their own making” than software.

There is also the notion of the “trusted path” which, if anyone ever figures out how to implement it in software, would make such a replacement equally difficult even if we don’t use hardware. However, if you read the Red Pill/Blue Pill paper, you’ll see I don’t hold out much hope for this.

you could have a weak password that the hacker could attack via brute force

This one is actually correct! Yes, it’s true that an unphishable password must be strong. Clearly no system relying solely on a password can defend against an attacker guessing the password and seeing if it works. The only defence against this is to make it infeasible for the attacker to guess it in reasonable time. So, yes, you must use a strong password. Sorry about that.

The primary reason one should not use the same password everywhere is that once that password is discovered at one location, then it can be reused at other locations

I feel that we’re veering off into philosophy slightly with this one, particularly since, in the same post, Conor says

I also look forward to being able to login once at the start of my day and maintain that state in a reasonably secure fashion for the entire day without having to re-authenticate every few minutes

which is an interesting piece of doublethink – surely if whatever provides this miraculous experience (one I also look forward to) is compromised then you are just as screwed – so wouldn’t the argument be that I should have a large number of these things, which I have to log into separately?

Nevertheless, I will have a go at it. In our utopia, remember, our password is only ever revealed to trusted widgets (whether hardware, software or something else is immaterial). This means, of course, that the password can’t be “discovered at one location” – this is the nature of unphishability! Therefore, I claim that the criticism is a priori invalid. Isn’t logic wonderful?

I don’t follow.

Because I can’t be fooled into divulging some credential where I shouldn’t means that it is appropriate that I use it everywhere? Are there not other attack vectors that would drool at the thought?

I include this for completeness. Clearly, this is a rhetorical device. When Paul comes up with an actual attack, rather than suggesting that there surely must be one, I shall respond.

Finally…

Conversely, that the fact that I can use the same credential everywhere is somehow a necessary aspect of ‘unphishability’?

Indeed it is. If it were unsafe to use the same credential everywhere, then the protocol must somehow reveal something to the other side that can be used to impersonate you (generally known as a “password equivalent” – for example, HTTP Digest Auth enrollment reveals a password equivalent that is not your password). This would make the protocol phishable. Therefore, it is a necessary requirement that an unphishable protocol allows you to use the same password everywhere.

Even more finally, for those whose heads exploded at the notion that I can log in with a password without ever revealing the password or a password equivalent, I offer you SRP.
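For those who’d like to see the shape of it, here is a deliberately toy rendition of the SRP idea in Python. The parameters are simplified to the point of being insecure (a bare Mersenne prime rather than an RFC 5054 group, naive string hashing), so treat it purely as an illustration of the claim above: the server stores only a verifier, and neither the password nor a password equivalent ever crosses the wire.

```python
import hashlib
import secrets

# Toy parameters -- far too weak for real use; real SRP uses RFC 5054 groups.
N = 2**521 - 1  # a Mersenne prime, standing in for a proper safe-prime group
g = 3

def H(*parts) -> int:
    """Naive hash-to-integer; real SRP is much fussier about encodings."""
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), 'big')

# Enrolment: the server stores only (salt, verifier), never the password.
password = "correct horse battery staple"
salt = secrets.randbits(64)
x = H(salt, password)
v = pow(g, x, N)                    # the verifier

# Login: both sides exchange ephemeral values...
k = H(N, g)
a = secrets.randbelow(N); A = pow(g, a, N)                 # client sends A
b = secrets.randbelow(N); B = (k * v + pow(g, b, N)) % N   # server sends B
u = H(A, B)

# ...and derive the same shared secret, the client from the password,
# the server from the verifier. The password itself is never transmitted.
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow(A * pow(v, u, N) % N, b, N)
assert S_client == S_server
```

Note that a phisher who steals the server’s database gets `v = g^x`, from which recovering the password still requires a discrete log or a brute-force guess – which is why, as conceded above, the password must be strong.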

3 Dec 2008

Podcast With Dave Birch

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 16:07

A few weeks ago, Dave Birch of Consult Hyperion interviewed me about Digital Identity. Because he weirdly doesn’t give a way to link to an individual podcast, here’s the M4a (whatever that is) and the MP3.

This was the first podcast I’ve done that I actually dared listen to. I think it came out pretty well.

2 Dec 2008

Multi-layered Authentication

Filed under: Crypto,Identity Management,Security — Ben @ 18:28

Although I don’t usually represent Google when I post on this blog (it being my personal blog), I figured readers might be interested in a blog post and an article I wrote with my colleague, Eric Sachs, about combining different authentication mechanisms in order to improve the user experience of strong authentication without having to change all the software in the world.

Of course, there are almost certainly better ways to do all this if you can change the software – that’s our next mission!

26 Nov 2008

Do Passwords Scale?

Filed under: Crypto,Identity Management,Security — Ben @ 11:40

Today I spent an hour with a bunch of academics. Each of the panellists had to talk for a few minutes to set the scene. I decided to talk about the worst usability disaster ever, namely passwords.

The problem with passwords is that we pin all our security on them. Although I can imagine a future world where we pin our security on other things, like strong credentials, I still wonder how that world really looks?

In particular, when I buy the latest Mac, how do I get it all signed up with all these credentials? It seems to me that the only usable answer to that question is that I fetch them from the cloud. How do I do that? Ultimately, with a password of some kind. Yes, you can back it up with a dongle or something, but when I lose that, how do I get a new one? I call you and give you … a password (of course I include my mother’s maiden name, my postcode, my date of birth, the name of my first and most beloved hamster and all that other nonsense as passwords). And if I only ever do this every couple of years, how easy is it to persuade me to do it wrong? Pretty damn easy, if you ask me. And you did.

So, where does this leave us? Users must have passwords, so why fight it? Why not admit that it’s where we have to be and make it a familiar (but secure) process, so that users can actually safely use passwords, phishing-free?

The answer to this is deeply sad. It is because we have done a fantastic job on usability of passwords. They’re so usable that anyone will type their password anywhere they see the word “password” with a box next to it. Phishing is utterly trivial because we have trained the world to expect to be phished every time they see a new website.

Of course, we can fix this cryptographically – that’s easy. But let’s say we did that. How do we stop the user from ever typing their password into a phishable box from this day forward? So long as they only ever type the password into the crypto gadget that does the unphishable protocol, they are safe, no matter who asks them to log in. But as soon as they type it into a text box on a web page, they’re screwed.

So, this is why passwords are the worst usability disaster ever.

Anyway, now to the title. Suppose we get to this utopia, an academic suggested, we’d still be screwed because passwords don’t scale. Because, of course, we need a different password for each site we log in to.

Well, no. If your password is unphishable, then it is obviously the case that it can be the same everywhere. Or it wouldn’t be unphishable. The only reason you need a password for each site is because we’re too lame to fix the real problem. Passwords scale just fine. If it wasn’t for those pesky users (that we trained to do the wrong thing), that is.

20 Nov 2008

You Need Delegation, Too

Kim wants to save the world from itself. In short, he is talking about yet another incident where some service asks for username and password to some other service, in order to glean information from your account to do something cool. Usually this turns out to be “harvest my contacts so I don’t have to find all my friends yet again on the social network of the month”, but in this case it was to calculate your “Twitterank”. Whatever that is. Kim tells us

The only safe solution for the broad spectrum of computer users is one in which they cannot give away their secrets. In other words: Information Cards (the advantage being they don’t necessarily require hardware) or Smart Cards. Can there be a better teacher than reality?

Well, no. There’s a safer way that’s just as useful: turn off your computer. Since what Kim proposes means that I simply can’t get my Twitterank at all (oh, the humanity!), why even bother with Infocards or any other kind of authentication I can’t give away? I may as well just watch TV instead.

Now, the emerging answer to this problem is OAuth, which protects your passwords, if you authenticate that way. Of course, OAuth is perfectly compatible with the notion of signing in at your service provider with an Infocard, just as it is with signing in with a password. But where is the advantage of Infocards? Once you have deployed OAuth, you have removed the need for users to reveal their passwords, so now the value add for Infocards seems quite small.

But if Infocards (or any other kind of signature-based credential) supported delegation, this would be much cooler. Then the user could sign a statement saying, in effect, “give the holder of key X access to my contacts” (or whatever it is they want to give access to) using the private key of the credential they use for logging in. Then they give Twitterank a copy of their certificate and a copy of the new signed delegation certificate. Twitterank presents the chained certificates and proves they have private key X. Twitter checks the signature on the chained delegation certificate and that the user certificate is the one corresponding to the account Twitterank wants access to, and then gives access to just the data specified in the delegation certificate.

The beauty of this is that it can be sub-delegated, a facility that is entirely missing from OAuth, and one that I confidently expect to be the next problem in this space (but apparently predicting such things is of little value – no-one listens until they hit the brick wall that the lack of the facility puts in their way).
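To make the shape of that concrete, here is a toy sketch in Python. The “signatures” are a drastically simplified Schnorr-style construction over unsafe parameters, and the delegation “certificate” is just a string, so none of this resembles a real credential format – it only shows the chain-checking logic described above.

```python
import hashlib
import secrets

# Toy Schnorr-style signatures over unsafely simple parameters -- shape only.
p = 2**521 - 1   # Mersenne prime; real systems use standardised groups
g = 3

def H(*parts) -> int:
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), 'big')

def keygen():
    x = secrets.randbelow(p - 1)
    return x, pow(g, x, p)            # (private, public)

def sign(x, msg):
    r = secrets.randbelow(p - 1)
    R = pow(g, r, p)
    return R, r + H(R, msg) * x       # toy: s left as a plain integer

def verify(y, msg, sig):
    R, s = sig
    return pow(g, s, p) == R * pow(y, H(R, msg), p) % p

# The user delegates read access to their contacts to Twitterank's key.
user_priv, user_pub = keygen()
tr_priv, tr_pub = keygen()

delegation = f"key {tr_pub} may read contacts of {user_pub}"
delegation_sig = sign(user_priv, delegation)

# Twitterank proves it holds the delegated key by signing a fresh challenge.
challenge = secrets.randbits(128)
proof = sign(tr_priv, challenge)

# Twitter checks the chain: the delegation is signed by the account's own
# key, and the presenter controls the key named in the delegation.
assert verify(user_pub, delegation, delegation_sig)
assert verify(tr_pub, challenge, proof)
```

Sub-delegation then falls out naturally: the holder of the delegated key signs a further, narrower delegation to yet another key, and the verifier walks the chain link by link.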

17 Nov 2008

Identification Is Not Security

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 16:50

Kim writes about minimal disclosure. Funnily enough my wife, Camilla, spontaneously explained minimal disclosure to me a couple of nights ago. She was incensed that she ended up having to “prove” who she was in order to pay a bill over the phone.

First of all, they asked her for her password. Of course, she has no idea what her password might be with this particular company, so their suggestion was she guess. Camilla surprised me by telling me that she had, of course, declined to guess, because by guessing she would be revealing all her passwords that she might use elsewhere. So, they then resorted to the usual stupidity: mother’s maiden name, postcode, phone number and so forth. Camilla said she was happy to provide that information because she didn’t feel it was in any way secret – which, of course, means it doesn’t really authenticate her, either.

Anyway, her point was that in order to pay a bill she really shouldn’t have to authenticate to the payee – what do they care who pays the money, so long as it gets paid? In fact, really, we want the authentication to be the other way round – the payee should prove to her that they are really the payee. It would also be nice if they provided some level of assurance that she is paying the right bill. But they really don’t need to have any clue who she is, so long as she can hand over money somehow (which might, of course, include authenticating somehow to some money-handling middleman).

But what seems to be happening now is that everyone is using identity as a proxy for security. If we know who you are, then everything else springs from that.

Now, if what you want to do is to determine whether someone is authorised to do something, then certainly this is an approach that works. I find out who you are, then I look you up in my Big Table of Everything Everyone Is Allowed To Do, and I’m done. However, and now I finally circle back to Kim’s post, for many, if not most, purposes, identification is far more than is really needed. For example, Equifax just launched the Over 18 I-Card. I hope Equifax got this right and issued a card that doesn’t reveal anything else about you – but even if they didn’t, clearly it could be done – and clearly there’s value in proving you’re over 18, and therefore authorised to do some things you might not otherwise be able to do. Though I’d note that I am not over 18 in Equifax’ view because I do not have an SSN!

Anyway, current deficiencies aside, this is a great example of where minimal disclosure works better than identification – rather than everyone having a lookup table containing everyone in the world and whether they are over 18, someone who has the information anyway does the lookup once and then signs the statement “yep, the bearer is over 18”.

But in many other cases identification doesn’t work at all – after all, the premise of the ID card is that it is supposed to improve our security against terrorists. But it’s pretty obvious that identifying people really isn’t going to help – you can work that out just by thinking about it, but even more importantly, in several recent terrorist attacks everyone has been very thoroughly identified, but it hasn’t helped one bit.

And in the case of my wife trying to pay a bill, identification was completely without purpose. Yet everyone wants to do it. As Kim says, we really need to rethink the world in terms of minimal disclosure – and as I show above, sometimes this is actually the easiest way to think about it – my one area of disagreement is that we should not call this “identity” or even “contextual identity”. We need a term that makes it clear it has nothing to do with identification. I prefer to think in terms of “proof of entitlement” or “proof of authority” – but those don’t exactly roll off the tongue … ideas?

15 Oct 2008

Federated Login Usability Studies

Filed under: Identity Management — Ben @ 15:14

Over the last few weeks, both Google and Yahoo! have released federated login usability studies.

Google’s study proposes a flow very similar to login on Amazon, only changing “I’m a new customer” to “Help me log in” and “Do you have a foo.com account?” to “Do you have a foo.com password?”. Amazingly, this is enough for users to get themselves logged in without any training.

An interesting data point, though: users found their second login more confusing than the first. This is because they are used to having a password after the first login, whereas with a federated login, the experience is the same every time. Fortunately, although they’re not quite sure what’s going on, what they do ends up with them logged in anyway. My feeling is that if we start doing federated login widely this confusion will soon evaporate.

Yahoo!, on the other hand, focused on OpenID. This seems to have been a much less happy experience for users, which certainly comes as no surprise to me – it’s always been clear that the average user is not going to understand the idea of logging in with a URL. Plus, they’re damned unwieldy (i.e. big and hard to remember). So, their conclusion was one that doesn’t scale well: use per-IdP buttons.

This backs up my view that OpenID will never really work until it uses email addresses as user IDs.

11 Aug 2008

Call Me Nostradamus!

Filed under: Identity Management,Security — Ben @ 19:28

Looking for links for the previous article on OpenID, I came across this post, from May 2007.

Sun’s House of Cards?

Sun have a plan. In short, they’re going to have an OpenID provider which authenticates Sun employees only.

That is, so long as you trust your DNS. Or, in other words, if you aren’t using any untrusted networks. How often does that happen?

And in the comments we find

Well, obviously it all has to run over TLS to be useful. Which should address those issues, right?

Comment by Tim Bray — 8 May 2007 @ 22:43

“Obviously”. Yes, that’s obvious to you and me, but really you need to write down the rules.

Plus, of course, X.509 certs haven’t proved to be the most invulnerable things in the world.

Comment by Ben — 10 May 2007 @ 8:10

Now, if that isn’t prophetic, I don’t know what is.

10 Aug 2008

NYT Doesn’t Quite Get It, Hilarity From OpenID

Filed under: General,Identity Management,Privacy,Security — Ben @ 13:31

The New York Times’ Randy Stross has a piece about passwords and what a bad idea they are (sorry, behind a loginwall). So far, so good (and I’ll admit to bias here: I was interviewed for this piece, and whilst there’s no attribution, what I was saying is clearly reflected in the article), but Stross weirdly focuses on OpenID as the continuing cause of our password woes, because, he says, it is blocking the deployment of information cards, which will save us all.

Now, I am no fan of OpenID, but Stross is dead wrong here. OpenID says nothing about how you log in. It is not OpenID’s fault that the login is generally done with a password – that blame we must all accept collectively.

And whilst I firmly believe that the only way out of this mess is strong authentication, information cards are hardly the be-all and end-all of that game. They certainly have a way to go in usability before they’re going to be taking the world by storm. Don’t blame OpenID for that.

In the meantime, Scott Kveton, chair of the OpenID Foundation board, reacts:

The OpenID community has identified two key issues it needs to address in 2008 that Randy mentioned in his column; security and usability.

I just have to giggle. I mean, apart from those two minor issues, OpenID is pretty good, right? He forgot to mention privacy, though.

8 Aug 2008

OpenID/Debian PRNG/DNS Cache Poisoning

Filed under: Identity Management,Security — Ben @ 12:36

Where “P” stands for “Predictable”.

Richard Clayton and I today released a security advisory showing how three independent vulnerabilities combine to make a rather scary mess, mitigated only by the fact that no-one protects anything very valuable with OpenID anyway. But just think how much worse it could have been (on which I shall write more soon)!

24 Jun 2008

Information Card Foundation Launched

Yet another industry alliance launches today: the Information Card Foundation (yes, I know that’s currently a holding page: as always, the Americans think June 24th starts when they wake up).

I have agreed to be a “Community Steering Member”, which means I sit on the board and get a vote on what the ICF does. Weirdly, I am also representing Google on the ICF board. I guess I brought that on myself.

I am not super-happy with the ICF’s IPR policy, though it is slightly better than the OpenID Foundation’s. I had hoped to get that fixed before launch, but there are only so many legal reviews the various founders could put up with at short notice, so I will have to continue to tinker post-launch.

It is also far from clear how sincere Microsoft are about all this. Will they behave, or will they be up to their usual shenanigans? We shall see (though the adoption of a fantastically weak IPR policy is not the best of starts)! And on that note, I still wait for any sign of movement at all on the technology Microsoft acquired from Credentica – which they have kinda, sorta, maybe committed to making generally available. This is key, IMO, to the next generation of identity management systems and will only flourish if people can freely experiment with it. So what are they waiting for?

(More news reports than you can shake a stick at.)

20 Jun 2008

Using OpenID Responsibly

Filed under: Identity Management,Security — Ben @ 12:46

Some guy called Thomas asks the very reasonable question (where “this problem” is the OpenID phishing problem):

Too much of all of this discussion around OpenID focuses around whether or not it’s OpenID’s job to solve this problem, whether it is insecure, whether it promotes phishing, and so on. But none of the discussion focuses on what you should actually *do* when you care about making it easy for people to use your site while keeping security good enough.

Someone smart on the topic care to tell me what I should be doing as a website maker, and as a potential OpenID user on other websites ?

So, the answer to this is: you should only accept OpenID logins from providers that use unphishable authentication. How can you know what authentication they use? Well, right now you can’t, but a group of us are about to work on the OpenID Provider Authentication Policy Extension (a.k.a. PAPE) which will enable you to find out.
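Once PAPE exists, the relying-party side of that check might look something like the sketch below. The policy URI is the phishing-resistant one from the PAPE draft; the dictionary stands in for a parsed, signature-verified OpenID response, and the function name is mine, not anything from a real library.

```python
# PAPE's phishing-resistant authentication policy URI (from the draft spec).
PHISHING_RESISTANT = (
    "http://schemas.openid.net/pape/policies/2007/06/phishing-resistant"
)

def login_acceptable(response_params: dict) -> bool:
    """Accept the login only if the provider's (signature-checked)
    response asserts a phishing-resistant policy was applied.
    auth_policies is a space-separated list of policy URIs."""
    policies = response_params.get("openid.pape.auth_policies", "").split()
    return PHISHING_RESISTANT in policies

# A provider that asserts nothing useful gets rejected.
assert not login_acceptable({"openid.pape.auth_policies": "none"})
assert login_acceptable({"openid.pape.auth_policies": PHISHING_RESISTANT})
```

The point being that the burden sits on the relying party: trust the assertion only as far as you trust the provider making it, since PAPE is a claim, not a proof.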

Until then, my answer continues to be “just say no”, if you are a website maker. If you are an OpenID user, then the answer is to find a provider that supports unphishable authentication – at least you will be safe, even if the rest of the world continues to suffer.

23 May 2008

Preprint: (Under)mining Privacy in Social Networks

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 15:11

Actually, I’m not sure if this one ends up in print or not. But anyway, I think its content is obvious from the title.

My colleagues Monica Chew and Dirk Balfanz did all the hard work on this paper.

12 May 2008

The World Without “Identity” or “Federation” is Already Here

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 12:24

My friend Alec Muffett thinks we should do away with “Big I” Identity. I’m all for that … but Alec seems to be quite confused.

Firstly, his central point, that all modern electronic identity requires the involvement of third parties, is just plain wrong. OpenID, which he doesn’t mention, is all about self-asserted identity – I put stuff on webpages I own and that’s my identity. Cardspace, to the extent it is used at all, is mostly used with self-signed certificates – I issue a new one for each site I want to log in to, and each time I visit that site I prove again that I own the corresponding private key. And, indeed, this is a pretty general theme through the “user-centric” identity community.

Secondly, the idea that you can get away with no third party involvement is just unrealistic. If everyone were honest, then sure, why go beyond self-assertion? But everyone is not. How do we deal with bad actors? Alec starts off down that path himself, with his motorcycling example: obviously conducting a driving test on the spot does not scale well – when I took my test, it took around 40 minutes to cover all the aspects considered necessary to establish sufficient skill, and I’d hesitate to argue that it could be reduced. The test used to be much shorter, and the price we paid was a very high death rate amongst young motorcyclists; stronger rules have made big inroads on that statistic. It is not realistic to expect either me or the police to spend 40 minutes establishing my competence every time it comes into question. Alec appears to be recognising this problem by suggesting that the officer might instead rely on the word of my local bike club. But this has two problems: firstly, I am now relying on a third party (the club) to certify me, which is exactly counter to Alec’s stated desires, and secondly, how does one deal with clubs whose only purpose is to certify people who actually should not be allowed to drive (because they’re incompetent or dangerous, for example)?

The usual answer one will get at this point from those who have not worked their way through the issues yet is “aha, but we don’t need a central authority to fix this problem, instead we can rely on some kind of reputation system”. The trouble is no-one has figured out how you build a reputation system in cyberspace (and perhaps in meatspace, too) that is not easily subverted by people creating networks of “fake” identities purely in order to boost their own reputations – at least, not without some kind of central authority attesting to identity.

Yet another issue that has to be faced is what to do about negative attributes (e.g. “this guy is a bad risk, don’t lend him money because he never pays it back”). No-one is going to willingly make those available to others. Once more, we end up having to invoke some kind of authority.

Of course, there are many cases where self-assertion is perfectly fine, so I have no argument with Alec there. And yes, there is a school of thought that says any involvement with self-issued stuff is a ridiculous idea, but you mostly run into that amongst policy people, who like to think that we’re all too stupid to look after ourselves, and corporate types who love silos (we find a lot of those in the Liberty Alliance and the ITU and such-like places, in my experience).

But the bottom line is that a) what he wants is insufficient to completely deal with the problems of identity and reputation and b) it is nothing that plenty of us haven’t been saying (and doing) all along – at least where it works.

Once you’ve figured that out, you realise how wrong

I am also here not going to get into the weirdness of Identity wherein the goal is to centralise your personal information to make management of it convenient, and then expend phenomenal amounts of brainpower implementing limited-disclosure mechanisms and other mathematica, in order to re-constrain the amount of information that is shared; e.g. “prove you are old enough to buy booze without disclosing how old you are”. Why consolidate the information in the first place, if it’s gonna be more work to keep it secret henceforth? It’s enough to drive you round the twist, but it’ll have to wait for a separate rant.

is. Consolidation is not what makes it necessary to use selective disclosure – that is driven by the need for the involvement of third parties. Obviously I can consolidate self-asserted attributes without any need for selective disclosure – if I want to prove something new or less revealing, I just create a new attribute. Whether it’s stored “centrally” (what alternative does Alec envision, I wonder?) or not is entirely orthogonal to the question.

Incidentally, the wit that said “Something you had, Something you forgot, Something you were” was the marvellous Nick Mathewson, one of the guys behind the Tor project. Also, Alec, if you think identity theft is fraud (as I do), then I recommend not using the misleading term preferred by those who want to shift blame, and call it “identity fraud” – in fraud, the victim is the person who believes the impersonator, not the person impersonated. Of course the banks would very much like you to believe that identity fraud is your problem, but it is not: it is theirs.
