Ben Laurie blathering

4 Mar 2010

Selective Disclosure, At Last?

Filed under: Anonymity,Crypto,Privacy,Security — Ben @ 5:34

Apparently it’s nearly five years since I first wrote about this and now it finally seems we might get to use selective disclosure.

I’m not going to re-iterate what selective disclosure is good for and apparently my friend Ben Hyde has spared me from the need to be cynical, though I think (I am not a lawyer!) he is wrong: the OSP applies to each individual specification – you are not required to use them in the context of each other.

So, for now, I will just celebrate the fact that Microsoft has finally made good on its promise to open up the technology, including BSD-licensed code. Though I guess I will have to inject one note of cynicism: a quick glance at the specification (you can get it here) suggests that they have only opened up the most basic use of the technology: the ability to assert a subset of the signed claims. There’s a lot more there. I hope they plan to open that up, too (how long will we have to wait, though?).
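The basic capability they have opened up — asserting a subset of the signed claims — can be sketched roughly like this: the issuer signs salted hashes of every claim, and the user reveals only the claims (and salts) of their choosing. This is my own illustration of the general idea, not U-Prove's actual protocol; all key material and claim names are made up, and note that the signature here would itself remain a correlation handle, which is exactly the kind of thing the more advanced parts of the technology address.

```python
# Sketch of subset disclosure via salted claim hashes (illustrative
# only; an HMAC stands in for a real issuer signature).
import hashlib, hmac, secrets

issuer_key = b"issuer-signing-key"          # hypothetical

claims = {"name": "Alice", "age": "34", "over_18": "yes"}
salts = {k: secrets.token_hex(8) for k in claims}
digests = sorted(hashlib.sha256((salts[k] + v).encode()).hexdigest()
                 for k, v in claims.items())
signature = hmac.new(issuer_key, "".join(digests).encode(),
                     hashlib.sha256).hexdigest()

# The user discloses only "over_18"; the verifier recomputes that one
# digest and checks it appears in the signed set.
disclosed_value, disclosed_salt = "yes", salts["over_18"]
d = hashlib.sha256((disclosed_salt + disclosed_value).encode()).hexdigest()
assert d in digests     # claim verifies without revealing name or age
```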

3 Jun 2007

Kim Cameron on Me on Selective Disclosure

Filed under: Anonymity/Privacy,Identity Management — Ben @ 4:18

Kim has a couple of posts responding to my paper on selective disclosure and my claim that CardSpace does not obey his fourth law.

First off, Kim thinks I shouldn’t say unlinkability and verifiability are things that an identity system should be required to support, on the basis that sometimes you want to be linked, and sometimes you don’t need verification. Well, of course … perhaps he didn’t notice that I said “here are three properties assertions must be able to have” – I didn’t say that every assertion should have these properties.

He also does not like this statement

Note a subtle but important difference between Kim’s laws and mine – he talks about identifiers whereas I talk about assertions. In an ideal world, assertions would not be identifiers; but it turns out that in practice they often are.

He claims that, in fact, his laws are about assertions. Allow me to quote his fourth law in full

A universal identity system must support both “omni-directional” identifiers for use by public entities and “unidirectional” identifiers for use by private entities, thus facilitating discovery while preventing unnecessary release of correlation handles.

You will note that this law talks about identifiers. Not identities. Not assertions. The point I am trying to make is that assertions are identifiers when they are signed using conventional signatures. This is because each time a signed assertion is presented it is identical to the last time it was presented, and different from all other signed assertions (since it must be linked to the identity of the subject of the assertion, or it is useless). The very core of my argument is that unless assertions are unlinkable, then they are identifiers – and, what’s more, they are omnidirectional identifiers. Therefore the “identity metasystem” as currently implemented cannot obey Kim’s fourth law.
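The argument can be made concrete in a few lines. An HMAC stands in for a conventional signature below, and the key and claim are made up; the point is only that the signed token is bit-for-bit identical every time it is presented, so any party that sees it holds a ready-made omnidirectional identifier.

```python
# A conventionally signed assertion is identical at every presentation,
# so its hash works as a perfect correlation handle.
import hashlib, hmac

idp_key = b"identity-provider-signing-key"   # hypothetical IdP key
assertion = b'{"subject":"alice","claim":"over 18"}'
token = assertion + hmac.new(idp_key, assertion, hashlib.sha256).digest()

# Two unrelated relying parties each record the token they were shown.
handle_at_site_a = hashlib.sha256(token).hexdigest()
handle_at_site_b = hashlib.sha256(token).hexdigest()
assert handle_at_site_a == handle_at_site_b   # trivially linkable
```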

Finally, he attempts to show that I am wrong about this claim, with the following argument

How does CardSpace hide the identity of the relying party? It associates some random information – unknown to the identity provider – with each Information Card. Then it hashes this random information (let’s call it a “salt”) with the identity of the site being visited. That is conveyed to the identity provider instead of the identity of the site. We call it the “Client Pseudonym”. Unlike a Liberty Alliance client pseudonym, the identity provider doesn’t know what relying party a client pseudonym is associated with.

The identity provider can use this value to determine that the user is returning to some site she has visited before, but has no idea which site that would be. Two users going to the same site would have cards containing different random information. Meanwhile, the Relying Party does not see the client pseudonym and has no way of calculating what client pseudonym is associated with a given user.
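The scheme as described can be sketched like this (names and encoding are my own illustration, not CardSpace's actual wire format):

```python
# Client pseudonym sketch: a per-card salt, unknown to the identity
# provider, hashed together with the relying party's identity.
import hashlib, secrets

card_salt = secrets.token_bytes(16)    # stored in the Information Card

def client_pseudonym(salt, relying_party):
    return hashlib.sha256(salt + relying_party.encode()).hexdigest()

p1 = client_pseudonym(card_salt, "shop.example")
p2 = client_pseudonym(card_salt, "shop.example")
p3 = client_pseudonym(card_salt, "bank.example")
assert p1 == p2    # IdP can tell the user is revisiting *some* site...
assert p1 != p3    # ...but different sites yield unrelated values
```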

Of course, if the identity provider and the relying party never talk to each other, then this works just fine. But clearly it is easy for the two of them to put their heads together and find out who the user is. I require unlinkability even if everyone gangs up to track the user. So, this argument totally fails to satisfy my requirement.

I’m looking forward to the next post in Kim’s series…

The question now becomes that of how identity providers behave. Given that suddenly they have no visibility onto the relying party, is linkability still possible? I’ll discuss this next.

2 May 2007

Selective Disclosure

Filed under: Crypto,Identity Management,Security — Ben @ 19:10

I seem to have to explain selective disclosure about once a week – and I’m going to try to explain it again in about 5 minutes flat, next week at an OECD workshop on identity management.

So, I figured it was finally time to write a paper on it. Feedback welcome!

25 Jun 2007

Stefan Brands on Minimal Disclosure

Filed under: Anonymity/Privacy,Identity Management,Security — Ben @ 18:50

Stefan Brands writes eloquently about the spectrum of uses available when selective disclosure is employed, which I might paraphrase as ranging from “anonymous” to “completely privacy invading”, contrary to many peoples’ perceptions. Selective disclosure is often seen as a purely privacy-preserving technology; but that misses the point. Selective disclosure allows the full spectrum of options – from nothing at all to everything. Other signature mechanisms and technologies do not. It’s as simple as that.

One thing that intrigues me, though, is his statement at the end: that the issuer has the ability to control what is revealed. I’m dubious about the value of this property. The user should be aware of this control and therefore able to choose whether to show the certificate at all. Similarly, the relying party can refuse to continue the transaction unless his requirements for disclosure are satisfied. What did the identity provider add by having a hand in this decision?

17 May 2011

Bitcoin

Filed under: Anonymity,Distributed stuff,Security — Ben @ 17:03

A friend alerted me to a sudden wave of excitement about Bitcoin.

I have to ask: why? What has changed in the last 10 years to make this work when it didn’t in, say, 1999, when many other related systems (including one of my own) were causing similar excitement? Or in the 20 years since the wave before that, in 1990?

As far as I can see, nothing.

Also, for what it’s worth, if you are going to deploy electronic coins, why on earth make them expensive to create? That’s just burning money – the idea is to make something unforgeable as cheaply as possible. This is why all modern currencies are fiat currencies instead of being made out of gold.

Bitcoins are designed to be expensive to make: they rely on proof-of-work. It is far more sensible to use signatures over random numbers as a basis, as asymmetric encryption gives us the required unforgeability without any need to involve work. This is how Chaum’s original system worked. And the only real improvement since then has been Brands’ selective disclosure work.
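The Chaum approach is cheap to sketch. In this toy version the mint signs a coin's serial number blindly, so coins are unforgeable (only the mint knows the private exponent) without any work being burned, and the mint cannot link a spent coin back to the withdrawal. The RSA parameters are tiny and purely illustrative; a real system would use a proper cryptographic library and full-domain hashing.

```python
# Toy Chaum-style blind signature: unforgeability from asymmetric
# crypto, no proof-of-work required. INSECURE demo parameters.
import secrets
from math import gcd

p, q = 104729, 1299709            # small well-known primes, NOT secure
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # mint's private exponent

def blind(m):
    """User blinds serial number m before sending it to the mint."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            return (m * pow(r, e, n)) % n, r

def mint_sign(blinded):
    """Mint signs blindly; it never learns m."""
    return pow(blinded, d, n)

def unblind(sig, r):
    return (sig * pow(r, -1, n)) % n

m = secrets.randbelow(n)           # coin serial number
blinded, r = blind(m)
coin_sig = unblind(mint_sign(blinded), r)
assert pow(coin_sig, e, n) == m    # anyone can verify with the public key
```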

If you want to limit supply, there are cheaper ways to do that, too. And proof-of-work doesn’t, anyway (it just gives the lion’s share to the guy with the cheapest/biggest hardware).

Incidentally, Lucre has recently been used as the basis for a fully-fledged transaction system, Open Transactions. Note: I have not used this system, so make no claims about how well it works.

(Edit: background reading – “Proof-of-Work” Proves Not to Work)

12 May 2008

The World Without “Identity” or “Federation” is Already Here

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 12:24

My friend Alec Muffett thinks we should do away with “Big I” Identity. I’m all for that … but Alec seems to be quite confused.

Firstly, his central point, that all modern electronic identity requires the involvement of third parties, is just plain wrong. OpenID, which he doesn’t mention, is all about self-asserted identity – I put stuff on webpages I own and that’s my identity. Cardspace, to the extent it is used at all, is mostly used with self-signed certificates – I issue a new one for each site I want to log in to, and each time I visit that site I prove again that I own the corresponding private key. And, indeed, this is a pretty general theme through the “user-centric” identity community.

Secondly, the idea that you can get away with no third party involvement is just unrealistic. If everyone were honest, then sure, why go beyond self-assertion? But everyone is not. How do we deal with bad actors? Alec starts off down that path himself, with his motorcycling example: obviously conducting a driving test on the spot does not scale well – when I took my test, it took around 40 minutes to cover all the aspects considered necessary to establish sufficient skill, and I’d hesitate to argue that it could be reduced. The test used to be much shorter, and the price we paid was a very high death rate amongst young motorcyclists; stronger rules have made big inroads on that statistic. It is not realistic to expect either me or the police to spend 40 minutes establishing my competence every time it comes into question. Alec appears to be recognising this problem by suggesting that the officer might instead rely on the word of my local bike club. But this has two problems: firstly, I am now relying on a third party (the club) to certify me, which is exactly counter to Alec’s stated desires; and secondly, how does one deal with clubs whose only purpose is to certify people who actually should not be allowed to drive (because they’re incompetent or dangerous, for example)?

The usual answer one will get at this point from those who have not worked their way through the issues yet is “aha, but we don’t need a central authority to fix this problem, instead we can rely on some kind of reputation system”. The trouble is no-one has figured out how you build a reputation system in cyberspace (and perhaps in meatspace, too) that is not easily subverted by people creating networks of “fake” identities purely in order to boost their own reputations – at least, not without some kind of central authority attesting to identity.

Yet another issue that has to be faced is what to do about negative attributes (e.g. “this guy is a bad risk, don’t lend him money because he never pays it back”). No-one is going to willingly make those available to others. Once more, we end up having to invoke some kind of authority.

Of course, there are many cases where self-assertion is perfectly fine, so I have no argument with Alec there. And yes, there is a school of thought that says any involvement with self-issued stuff is a ridiculous idea, but you mostly run into that amongst policy people, who like to think that we’re all too stupid to look after ourselves, and corporate types who love silos (we find a lot of those in the Liberty Alliance and the ITU and such-like places, in my experience).

But the bottom line is that a) what he wants is insufficient to completely deal with the problems of identity and reputation and b) it is nothing that plenty of us haven’t been saying (and doing) all along – at least where it works.

Once you’ve figured that out, you realise how wrong

I am also here not going to get into the weirdness of Identity wherein the goal is to centralise your personal information to make management of it convenient, and then expend phenomenal amounts of brainpower implementing limited-disclosure mechanisms and other mathematica, in order to re-constrain the amount of information that is shared; e.g. “prove you are old enough to buy booze without disclosing how old you are”. Why consolidate the information in the first place, if it’s gonna be more work to keep it secret henceforth? It’s enough to drive you round the twist, but it’ll have to wait for a separate rant.

is. Consolidation is not what makes it necessary to use selective disclosure – that is driven by the need for the involvement of third parties. Obviously I can consolidate self-asserted attributes without any need for selective disclosure – if I want to prove something new or less revealing, I just create a new attribute. Whether it’s stored “centrally” (what alternative does Alec envision, I wonder?) or not is entirely orthogonal to the question.

Incidentally, the wit that said “Something you had, Something you forgot, Something you were” was the marvellous Nick Mathewson, one of the guys behind the Tor project. Also, Alec, if you think identity theft is fraud (as I do), then I recommend not using the misleading term preferred by those who want to shift blame, and call it “identity fraud” – in fraud, the victim is the person who believes the impersonator, not the person impersonated. Of course the banks would very much like you to believe that identity fraud is your problem, but it is not: it is theirs.

6 Mar 2008

Microsoft Buys Credentica

Kim and Stefan blog about Microsoft’s acquisition of Stefan’s selective disclosure patents and technologies, which I’ve blogged about many times before.

This is potentially great news, especially if one interprets Kim’s

Our goal is that Minimal Disclosure Tokens will become base features of identity platforms and products, leading to the safest possible internet. I don’t think the point here is ultimately to make a dollar. It’s about building a system of identity that can withstand the ravages that the Internet will unleash.

in the most positive way. Unfortunately, comments such as this from Stefan

Microsoft plans to integrate the technology into Windows Communication Foundation and Windows Cardspace.

and this from Microsoft’s Privacy folk

When this technology is broadly available in Microsoft products (such as Windows Communication Foundation and Windows Cardspace), enterprises, governments, and consumers all stand to benefit from the enhanced security and privacy that it will enable.

sound more like the Microsoft we know and love.

I await developments with interest.

29 Jun 2007

(Lack of) Knowledge Limits the Imagination

I was poking around Jacqui Smith’s website, since she is our new Home Secretary, and I was disappointed (but not surprised) to find this. I won’t bore you with yet another diatribe about ID cards, I’m sure you know what I’d say anyway. But I was interested to note this:

In addition, when we provide benefits or people receive free treatment on the NHS it is important that we know who the recipients of these services are. ID Cards will help us ensure that only those who are entitled to these services get them.

As I have been saying for a while, the imagination of policy makers is limited by what they think is possible. Notice that in the second sentence she talks about entitlement – but in the first about identity. Policy makers tend to think you must have strong identity assertions in order to judge whether there is entitlement. But this is not so: in fact, it is possible to prove entitlement without any hint of identity at all, using, of course, selective disclosure. Because policy makers don’t, in general, know about selective disclosure, they find it impossible to think in terms of anything other than what they are used to, which is, of course, various forms of identity combined with access control lists.

This is why I tend to go on about selective disclosure every time I am in a room with policy makers.

17 May 2007

Is Liberty Inherently User-Centric?

Filed under: Anonymity/Privacy,Identity Management — Ben @ 14:49

I have already stated that I believe that Liberty can be used in a user-centric way, but I am still being beaten up by Liberty proponents. They appear to want me to believe that Liberty discovery is only about user-centric identity.

I’m not buying it. Firstly, statements made by people involved in Liberty lead me to believe that they are interested in discovery of services that are not visible to users. But that’s just hearsay, so here’s some of Liberty’s own words, from the Liberty ID-WSF Security and Privacy Overview

• Notice.

Public-facing Liberty-enabled providers should provide the Principal clear notice of who is collecting the information, how they are collecting it (e.g., directly or through cookies, etc.), whether they disclose this information to other entities, etc.

• Choice.

Public-facing Liberty-enabled providers should offer Principals choice, to the extent appropriate given the circumstances, regarding how Personally Identifiable Information (PII) is collected and used beyond the use for which the information was provided. Providers should allow Principals to review, verify, or modify consents previously given. Liberty-enabled providers should provide for “usage directives” for data through contractual arrangements or through the use of Rights Expression Languages.

• Principal Access to Personally Identifiable Information (PII).

Consistent with, and as required by, relevant law, public-facing Liberty-enabled providers that maintain PII should offer a Principal reasonable access to view the non-proprietary PII that it collects from the Principal or maintains about the Principal.

• Correctness.

Public-facing Liberty-enabled provider should permit Principals the opportunity to review and correct PII that the entities store.

• Relevance.

Liberty-enabled providers should use PII for the purpose for which it was collected and consistent with the uses for which the Principal has consented.

• Timeliness.

Liberty-enabled providers should retain PII only so long as is necessary or requested and consistent with a retention policy accepted by the Principal.

• Complaint Resolution.

Liberty-enabled providers should offer a complaint resolution mechanism for Principals who believe their PII has been mishandled.

• Security.

Liberty-enabled providers should provide an adequate level of security for PII.

All good principles. If only terms like “public-facing Liberty-enabled providers” and “non-proprietary PII” had not been used, I would be totally buying that Liberty is all about user control.

As it is, I’m not sure why we’re arguing. Liberty seems, quite clearly, to have mechanisms that are aimed at allowing businesses to coordinate data they have on people, without the people being involved. It also has mechanisms that do allow the people to participate. This is good, and I’m sure many of us want to encourage their use in the latter mode. What’s more, I’m sure we’d all like to see Liberty adhere to its principles (for example, from the same document, “Avoiding collusion between identity provider and service provider”) by adopting, for example, selective disclosure techniques, so that when it is used in these modes (and perhaps in others) it better protects the important people. That is, you.

In short, I think the people who are beating me up are on the same page as me, so can we stop arguing and do something constructive, please?

13 May 2007

Is Liberty User-Centric?

Paul Madsen and Pat Patterson berate me for suggesting that Liberty is all about silos. They’re right, of course. You can use Liberty to support user-centric identity management, if you want to. But I’m not buying their argument that Liberty is all about user-centric. Paul Madsen says that Liberty is built on the assumption that users keep their identity where they want to; if that were really true it would be a very strange assumption indeed, since it’s pretty clear that users currently do not have any control at all over where their identity is kept, to speak of.

So, I’ll definitely buy a modified version of Paul’s assumptions:

  1. Users’ identity will be kept in multiple places.
  2. The ‘where’ can be 3rd party identity providers as well as local storage (e.g. devices).
  3. It’s highly unlikely that all aspects of identity will be maintained at the same provider, i.e. there will be multiple ‘wheres’.
  4. Most users don’t want to be responsible for facilitating identity sharing by themselves providing the ‘where’.
  5. Experts will misinterpret 1-4 to suit whatever is their current competitive positioning.

I don’t see how changing the first assumption (from “users keep their identity where they want to”) makes any difference to the architecture of appropriate solutions, once you’ve combined it with the fourth assumption. Of course, if you drop the fourth assumption, it makes a huge difference, because you’ll architect a solution where the user is in control.

But Liberty cannot drop the fourth assumption: then facilities for discovery of data the user has no control over would not be needed.

Or, in other words, the base assumption of user-centric identity management is that users do want to control the “where”. If Liberty really were a user-centric architecture, it would have this assumption built in. And need I point out that assumption five applies to Liberty members just as well as anyone else?

Detractors will point out the dumbness of this idea

Ben, you want to remember where the various pieces of your identity are located, go for it. Write down the addresses on sticky notes, email them to yourselves, scribble them on your palm, be my guest. Should you be available when some provider seeks your identity, you can sort through the list of equivalent providers and specify your choice. How very user-centric.

Of course, the users won’t be managing their data by such primitive means. Their computer(s) or their chosen service provider(s) will do all the legwork. How dumb would I sound if I said Liberty couldn’t work because the sysadmins couldn’t possibly keep track of all the post-it notes they’d need for all that identity data?

Pat says

In any case, user privacy, consent and control has always been foremost

As I have explained in my paper on selective disclosure, user privacy is simply not possible to guarantee using the mechanisms that Liberty currently uses. Since user privacy is foremost, I look forward to Liberty’s adoption of selective disclosure.

Finally, Paul thinks he has taken the moral high ground by linking to this, so I feel obliged to point out once more that this blog does not reflect Google’s views on anything.

11 May 2007

How CardSpace Breaks the Rules

Daniel Bartholomew wants to know which of Kim’s laws CardSpace breaks, and Chris Bunio thinks the OECD workshop was not the correct forum for a detailed discussion.

How fortunate, then, that this blog exists! I can answer Daniel’s question, and Chris can educate us all on why I am wrong.

In fact, there are many ways CardSpace could violate the laws, but there is one which it is currently inherently incapable of satisfying, which is the 4th law – the law of directed identity – which says, once you’ve fought your way through the jargon, that your data should not be linkable. I explain this in some detail in my paper, “Selective Disclosure” (now at v0.2!), so, Chris and Daniel, I suggest you read it.

28 Mar 2007

Dilemmas of Privacy and Surveillance

The Royal Academy of Engineering has published an almost sensible paper on privacy and surveillance. They get off to a good start

There is a challenge to engineers to design products and services which can be enjoyed whilst their users’ privacy is protected. Just as security features have been incorporated into car design, privacy protecting features should be incorporated into the design of products and services that rely on divulging personal information.

but then wander off into cuckooland

sensitive personal information stored electronically could potentially be protected from theft or misuse by using digital rights management technology.

Obviously this is even more loony than trying to protect music with DRM. Another example

Another issue is whether people would wish others to have privacy in this arena – for example, the concern might arise that anonymous digital cash was used by money launderers or terrorists seeking to hide their identity. Thus this technology represents another dilemma – should anonymous payment be allowed for those who wish to protect their privacy, or should it be strictly limited so that it is not available to criminals?

Riiight – because we have these infallible methods for figuring out who is a criminal.

Also, as usual, no mention whatever of zero-knowledge or selective disclosure proofs. But even so, better than most of the policy papers out there. Perhaps next time they might consider consulting engineers with relevant knowledge?

(via ORG)

1 Mar 2007

Government Consultation on Information Assurance

The government is running a consultation on its e–Government framework for Information Assurance. The thing I find most disappointing about it is the complete inability to see beyond identification as a means of access control. I believe it was at PET 2005 that someone claimed that an analysis of citizens’ interactions with government in Australia showed that in over 90% of cases there was no need for the individual to be identified – all that was needed was a proof of entitlement. This can be achieved quite easily even using the kind of conventional cryptography the framework advocates, though this will still allow a citizen’s interactions to be linked with each other – which we all know is not desirable. Even better to use zero knowledge or selective disclosure proofs, as discussed ad nauseam in this blog. Yet, despite this, there is not a single mention of any access control method other than complete identification.

If you do nothing else, I encourage you to make this point in any submission you make.

15 Aug 2006

Identity Isn’t Just Identity Management, Anonymity Isn’t Privacy

Filed under: Anonymity/Privacy,Crypto,Identity Management — Ben @ 12:32

There’s been more comment on identity management and anonymity. It seems there are two points that are commonly being overlooked or ignored.

Firstly, when I say anonymity should be the substrate I am not just talking about the behaviour of identity management systems, I also mean that the network itself must support anonymity. For example, currently, wherever you go you reveal your IP address. Any information you give away can be correlated via that address. People sometimes argue that this isn’t true where you have a dynamic address, but in practice that isn’t the case: most dynamic addresses change rarely, if ever – certainly they tend not to change unless you go offline, and the rise of always-on broadband makes this increasingly unusual. Even if the address does change occasionally, you only need to reveal enough information in the two sessions to link them together and then you are back to being correlated again.
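The correlation is trivial to perform; with made-up data, it amounts to nothing more than a group-by on the address:

```python
# Illustration of network-level linkability: sessions that look
# pseudonymous collapse into one profile per stable IP address.
# All data here is invented.
from collections import defaultdict

sessions = [
    {"ip": "203.0.113.7",  "site": "forum.example", "pseudonym": "user_a"},
    {"ip": "203.0.113.7",  "site": "shop.example",  "pseudonym": "user_b"},
    {"ip": "198.51.100.2", "site": "forum.example", "pseudonym": "user_c"},
]

by_ip = defaultdict(list)
for s in sessions:
    by_ip[s["ip"]].append(s["pseudonym"])

# Two "different" pseudonyms are now linked to the same person.
linked = by_ip["203.0.113.7"]
```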

Secondly, people seem to think that privacy is an adequate substitute for anonymity. I don’t believe this: privacy is all about voluntarily not linking stuff you could link. Anonymity is about making such linking impossible. Microsoft’s Cardspace claims to provide anonymity where, in fact, it is providing privacy. Stefan Brands comes close with his selective disclosure certificates, but they are still linkable, sadly. These systems only provide privacy if people agree to not make the links they could make. Anonymity provides privacy regardless of people’s attempts to undermine it. That’s why you need to have anonymity as your bottom layer, on which you build whatever level of privacy you can sustain; remember that until physical onion routing becomes commonplace you give the game away as soon as you order physical goods online, and there are many other ways to make yourself linkable.

30 Sep 2005

Ben’s Laws of Identity

Filed under: Crypto,Identity Management — Ben @ 21:46

Kim Cameron has his Laws of Identity, so why can’t I have mine? Mine are simpler and probably not complete, but they arose from the paper I wrote with Mary Rundle as a better way to explain what I’m getting at.

I claim that for an identity management system to be both useful and privacy preserving, there are three properties assertions must be able to have. They must be:

  • Verifiable
There’s often no point in making a statement unless the relying party has some way of checking it is true. Note that this isn’t always a requirement – I don’t have to prove my address is mine to Amazon, because it’s up to me where my goods get delivered. But I may have to prove I’m over 18 to get the porn delivered.
  • Minimal
    This is the privacy preserving bit – I want to tell the relying party the very least he needs to know. I shouldn’t have to reveal my date of birth, just prove I’m over 18 somehow.
  • Unlinkable
    If the relying party or parties, or other actors in the system, can collude to link together my various assertions, then I’ve blown the minimality requirement out of the water.

OK. So now we’re all on the same page, I gave my shortest talk ever recently at Stanford – under three minutes – on why X.509 (and all methods of making verifiable assertions I know of that are widely used) doesn’t work for identity management. The essence is this: standard X.509 statements are verifiable, but not minimal nor unlinkable. You can try to fix the minimality by having some third party issue single use certificates with minimal assertions in them on the basis of X.509 certificates you already have in your hand – but these are still not unlinkable, so bad people can get together to link everything back together again. Or, you can try self-signed certificates – minimal and unlinkable, but sadly not verifiable. I’m not aware of any other options. QED.
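The failure of the single-use-certificate fix can be shown in a few lines: even if each certificate carries only a minimal assertion, the issuer's records plus the relying parties' records are enough to re-link everything. The serial numbers, names and sites below are invented for illustration.

```python
# Why minimal single-use certificates are still linkable: collusion
# between the issuer and the relying parties joins on the certificate.
issuer_log = {                 # issuer knows who got each certificate
    "cert-001": "alice",
    "cert-002": "alice",
    "cert-003": "bob",
}
rp_records = [                 # relying parties saw only minimal claims
    ("porn.example", "cert-001", "over 18"),
    ("bank.example", "cert-002", "UK resident"),
]

# Putting their heads together: join on the certificate identifier.
linked = [(site, issuer_log[serial], claim)
          for site, serial, claim in rp_records]
assert all(user == "alice" for _, user, _ in linked)
```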

Another important point often glossed over is that unless the underlying network provides anonymity, then you are screwed anyway, since everything is linkable.

Of course, methods do exist that don’t have the problems of X.509 certificates – the best, IMO, being zero knowledge and selective disclosure proofs. But no-one uses them (yet).

Also, there’s hope for anonymity, in the shape of onion routing.

Incidentally, at the same workshop Dick Hardt gave the most fun presentation on identity management I’ve ever seen. Check it out.
