Ben Laurie blathering

18 Aug 2007


Filed under: Anonymity/Privacy,Identity Management — Ben @ 5:56

Dick Hardt draws my attention to an article about the dangers of user-centric identity in something called informIT. As Dick says, the article tells us that, duh, if we screw up our websites then we screw up our users, too.

But it seems to me that there’s an even more fundamental issue. If, as the author correctly, if somewhat ungrammatically, claims, “the average users usually reuse the same username/password pairs for most of their accounts”, why, exactly, is it worse if the user types this same username and password into the same place every time (and probably far less often) than if the user is obliged to type it whenever he sees a login page?

It seems to me that the user stands a far better chance of being sure that he is typing his password in the correct place if there is only one correct place instead of several hundred.

14 Aug 2007

A Motivating Example

My friend, Carrie Gates (of CA Labs), posed me the following problem.

Let us imagine two services. The first we’ll call Facebook. Facebook is yet another of those obnoxious social networking services. The second we’ll call Flickr. Flickr lets me upload pictures and also acts as yet another, perhaps slightly less obnoxious, social network.

Flickr, being a kind, generous and forward-thinking sort of service, is happy to allow other services to build on top of it. It will let them link accounts for their users to Flickr accounts and show their users Flickr photos from those accounts. Flickr also allows me to choose who can see my photos. I can let just anyone see them, I can restrict access to my friends or I can make my pictures entirely private, so that only I can see them.

Facebook doesn’t let me upload pictures. But they’re smart – they’ve offloaded that bit of tedium to Flickr. You can tell Facebook what your Flickr account is, and then Facebook will display your Flickr pictures as if they were Facebook’s very own. Whether this is cheap, cunning or just good for the user I leave open to debate, but this is how these services work.

The interesting question arises when a friend wants to see my Flickr pictures on my Facebook pages (again, whether this is a good or bad idea I leave aside, but let’s just agree that people want to do this).

Now we have an interesting quandary. In fact, two interesting quandaries. Or maybe even three. The first arises if my friend is a Flickr friend. That is, I have told Flickr that his Flickr account is allowed to see my “friends only” pictures. The second arises if my friend is a Facebook friend. That is, I have told Facebook that his Facebook account is allowed to see my “friends only” pictures. The third arises when I trust Flickr more than Facebook, but this one I will have to explain later.

In the first case, Facebook is not itself aware that my friend is allowed to see these pictures. OK, you say, that’s pretty easy – Flickr knows, so all Facebook has to do is tell Flickr which Flickr account is trying to view my pictures, and hey presto! my friend can see my “friends only” pictures. But what if my friend has not told Facebook what his Flickr account is? And why, indeed, should he? Then, of course, he can’t see my pictures (or perhaps he can, see the third quandary).

In the second case, Facebook knows he is my friend, but how does it tell this to Flickr? Flickr doesn’t expose APIs for saying who is a friend – Flickr takes the view that this would probably be insecure and certainly be quite confusing. Of course, Facebook has access to my Flickr account (obviously it is to my benefit to be able to manage my Flickr photos without leaving Facebook), so it could take matters into its own hands and show him my pictures anyway. Unfortunately, this would also give access to my completely private pictures, which I think I would take a dim view of.

And this leads to the third quandary. If I trust Flickr more than I trust Facebook, then by even indulging in this whole game I have reduced my security, as illustrated above.

OK, so now that I have set the scene, and, I hope, filled you with fear for the poor victims (err, I mean, “users”) of these services, the question arises: is there a way to do this properly? Can we achieve everything we desire and still leave everyone secure and with privacy intact?

One answer is to demand that every Facebook user must give their Flickr account to Facebook. Good luck with that. Clearly this sucks for all sorts of reasons, not least of which is that it totally fails to scale to the case of hundreds of Flickrs and Facebooks. It is also a disaster waiting to happen from a security and privacy point of view.

Obviously there must be better answers. I have some thoughts on this, but before I write them up I’m interested to hear what the blogosphere can come up with.

Feynman once said that if you could understand the two-slit experiment, then you would understand the whole of quantum mechanics. This example is probably not quite as fundamental, but it seems to me to be, in some way, the two-slit experiment of identity.

BTW, all services in this blog post are fictional and any resemblance between them and real services is entirely coincidental.

29 Jun 2007

(Lack of) Knowledge Limits the Imagination

I was poking around Jacqui Smith’s website, since she is our new Home Secretary, and I was disappointed (but not surprised) to find this. I won’t bore you with yet another diatribe about ID cards, I’m sure you know what I’d say anyway. But I was interested to note this:

In addition, when we provide benefits or people receive free treatment on the NHS it is important that we know who the recipients of these services are. ID Cards will help us ensure that only those who are entitled to these services get them.

As I have been saying for a while, the imagination of policy makers is limited by what they think is possible. Notice that in the second sentence she talks about entitlement – but in the first about identity. Policy makers tend to think you must have strong identity assertions in order to judge whether there is entitlement. But this is not so: in fact, it is possible to prove entitlement without any hint of identity at all, using, of course, selective disclosure. Because policy makers don’t, in general, know about selective disclosure, they find it impossible to think in terms of anything other than what they are used to, which is, of course, various forms of identity combined with access control lists.
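To make the distinction concrete, here is a toy sketch (the scheme, names and token format are all mine, purely for illustration: real selective disclosure uses blind signatures or zero-knowledge proofs, so that not even the issuer can link presentations). The issuer certifies a bare entitlement, and the verifier learns nothing about who holds it:

```python
import hashlib
import hmac
import json
import os

ISSUER_KEY = os.urandom(32)  # held by the issuer; the verifier trusts issuer-certified tokens

def issue_entitlement(entitlement: str) -> dict:
    # The token carries only the entitlement and a random serial --
    # no name, no date of birth, no identifier tied to the citizen.
    token = {"entitlement": entitlement, "serial": os.urandom(16).hex()}
    msg = json.dumps(token, sort_keys=True).encode()
    token["mac"] = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return token

def verify_entitlement(token: dict, required: str) -> bool:
    # Check the issuer's tag over the claim, then check the claim itself.
    claim = {k: v for k, v in token.items() if k != "mac"}
    msg = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(token["mac"], expected) and token["entitlement"] == required

t = issue_entitlement("nhs-free-treatment")
assert verify_entitlement(t, "nhs-free-treatment")
```

The verifier establishes entitlement without a shred of identity; swap the toy MAC for an unlinkable signature scheme and the policy goal – benefits without identification – is met.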

This is why I tend to go on about selective disclosure every time I am in a room with policy makers.

25 Jun 2007

Stefan Brands on Minimal Disclosure

Filed under: Anonymity/Privacy,Identity Management,Security — Ben @ 18:50

Stefan Brands writes eloquently about the spectrum of uses available when selective disclosure is employed, which I might paraphrase as ranging from “anonymous” to “completely privacy invading”, contrary to many people’s perceptions. Selective disclosure is often seen as a purely privacy-preserving technology; but that misses the point. Selective disclosure allows the full spectrum of options – from nothing at all to everything. Other signature mechanisms and technologies do not. It’s as simple as that.

One thing that intrigues me, though, is his statement at the end: that the issuer has the ability to control what is revealed. I’m dubious about the value of this property. The user should be aware of this control and therefore able to choose whether to show the certificate at all. Similarly, the relying party can refuse to continue the transaction unless his requirements for disclosure are satisfied. What did the identity provider add by having a hand in this decision?

3 Jun 2007

Kim Cameron on Me on Selective Disclosure

Filed under: Anonymity/Privacy,Identity Management — Ben @ 4:18

Kim has a couple of posts responding to my paper on selective disclosure and my claim that CardSpace does not obey his fourth law.

First off, Kim thinks I shouldn’t say unlinkability and verifiability are things that an identity system should be required to support, on the basis that sometimes you want to be linked, and sometimes you don’t need verification. Well, of course … perhaps he didn’t notice that I said “here are three properties assertions must be able to have” – I didn’t say that every assertion should have these properties.

He also does not like this statement

Note a subtle but important difference between Kim’s laws and mine – he talks about identifiers whereas I talk about assertions. In an ideal world, assertions would not be identifiers; but it turns out that in practice they often are.

He claims that, in fact, his laws are about assertions. Allow me to quote his fourth law in full

A universal identity system must support both “omni-directional” identifiers for use by public entities and “unidirectional” identifiers for use by private entities, thus facilitating discovery while preventing unnecessary release of correlation handles.

You will note that this law talks about identifiers. Not identities. Not assertions. The point I am trying to make is that assertions are identifiers when they are signed using conventional signatures. This is because each time a signed assertion is presented it is identical to the last time it was presented, and different from all other signed assertions (since it must be linked to the identity of the subject of the assertion, or it is useless). The very core of my argument is that unless assertions are unlinkable, they are identifiers – and, what’s more, they are omnidirectional identifiers. Therefore the “identity metasystem” as currently implemented cannot obey Kim’s fourth law.
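The point is easy to demonstrate. In this sketch (illustrative only: an HMAC stands in for the identity provider’s conventional RSA signature, but the property at issue – determinism – is the same), the assertion’s bytes never change, so two relying parties can link a user merely by comparing what they received:

```python
import hashlib
import hmac
import os

IDP_KEY = os.urandom(32)  # the identity provider's signing key

def signed_assertion(subject: str, claim: str) -> bytes:
    # A conventionally signed assertion: same input, same bytes, every time.
    body = f"{subject}|{claim}".encode()
    tag = hmac.new(IDP_KEY, body, hashlib.sha256).hexdigest().encode()
    return body + b"|" + tag

# The user presents "the same assertion" to two unrelated sites.
seen_at_site_a = signed_assertion("user-1234", "over-18")
seen_at_site_b = signed_assertion("user-1234", "over-18")

# The signature is invariant, so the two sites can correlate their visitors
# by comparing bytes: the assertion *is* an omnidirectional identifier.
assert seen_at_site_a == seen_at_site_b
```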

Finally, he attempts to show that I am wrong about this claim, with the following argument

How does CardSpace hide the identity of the relying party? It associates some random information – unknown to the identity provider – with each Information Card. Then it hashes this random information (let’s call it a “salt”) with the identity of the site being visited. That is conveyed to the identity provider instead of the identity of the site. We call it the “Client Pseudonym”. Unlike a Liberty Alliance client pseudonym, the identity provider doesn’t know what relying party a client pseudonym is associated with.

The identity provider can use this value to determine that the user is returning to some site she has visited before, but has no idea which site that would be. Two users going to the same site would have cards containing different random information. Meanwhile, the Relying Party does not see the client pseudonym and has no way of calculating what client pseudonym is associated with a given user.

Of course, if the identity provider and the relying party never talk to each other, then this works just fine. But clearly it is easy for the two of them to put their heads together and find out who the user is. I require unlinkability even if everyone gangs up to track the user. So, this argument totally fails to satisfy my requirement.
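A hypothetical sketch of how little “putting their heads together” takes (the log formats and names are invented for illustration): the client pseudonym itself stays opaque to everyone, but the token that both the identity provider and the relying party handle is a perfectly good join key:

```python
import hashlib
import os
import secrets

def pseudonym(card_salt: bytes, site: str) -> str:
    # CardSpace-style client pseudonym: the IdP sees this, never the site name.
    return hashlib.sha256(card_salt + site.encode()).hexdigest()

salt = os.urandom(16)   # random per-card value, unknown to the IdP
idp_log = []            # what the identity provider records
rp_log = []             # what the relying party records

# One login: the IdP issues a token; the relying party receives that same token.
token = secrets.token_hex(16)
idp_log.append({"user": "alice",
                "pseudonym": pseudonym(salt, "flickr.example"),
                "token": token})
rp_log.append({"site": "flickr.example", "token": token})

# Separately, neither log names both user and site. But join the two logs
# on the shared token and the user/site link falls straight out.
linked = [(i["user"], r["site"])
          for i in idp_log for r in rp_log if i["token"] == r["token"]]
assert linked == [("alice", "flickr.example")]
```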

I’m looking forward to the next post in Kim’s series…

The question now becomes that of how identity providers behave. Given that suddenly they have no visibility onto the relying party, is linkability still possible? I’ll discuss this next.

17 May 2007

Is Liberty Inherently User-Centric?

Filed under: Anonymity/Privacy,Identity Management — Ben @ 14:49

I have already stated that I believe that Liberty can be used in a user-centric way, but I am still being beaten up by Liberty proponents. They appear to want me to believe that Liberty discovery is only about user-centric identity.

I’m not buying it. Firstly, statements made by people involved in Liberty lead me to believe that they are interested in discovery of services that are not visible to users. But that’s just hearsay, so here are some of Liberty’s own words, from the Liberty ID-WSF Security and Privacy Overview:

• Notice.

Public-facing Liberty-enabled providers should provide the Principal clear notice of who is collecting the information, how they are collecting it (e.g., directly or through cookies, etc.), whether they disclose this information to other entities, etc.

• Choice.

Public-facing Liberty-enabled providers should offer Principals choice, to the extent appropriate given the circumstances, regarding how Personally Identifiable Information (PII) is collected and used beyond the use for which the information was provided. Providers should allow Principals to review, verify, or modify consents previously given. Liberty-enabled providers should provide for “usage directives” for data through contractual arrangements or through the use of Rights Expression Languages.

• Principal Access to Personally Identifiable Information (PII).

Consistent with, and as required by, relevant law, public-facing Liberty-enabled providers that maintain PII should offer a Principal reasonable access to view the non-proprietary PII that it collects from the Principal or maintains about the Principal.

• Correctness.

Public-facing Liberty-enabled providers should permit Principals the opportunity to review and correct PII that the entities store.

• Relevance.

Liberty-enabled providers should use PII for the purpose for which it was collected and consistent with the uses for which the Principal has consented.

• Timeliness.

Liberty-enabled providers should retain PII only so long as is necessary or requested and consistent with a retention policy accepted by the Principal.

• Complaint Resolution.

Liberty-enabled providers should offer a complaint resolution mechanism for Principals who believe their PII has been mishandled.

• Security.

Liberty-enabled providers should provide an adequate level of security for PII.

All good principles. If only terms like “public-facing Liberty-enabled providers” and “non-proprietary PII” had not been used, I would be totally buying that Liberty is all about user control.

As it is, I’m not sure why we’re arguing. Liberty seems, quite clearly, to have mechanisms that are aimed at allowing businesses to coordinate data they have on people, without the people being involved. It also has mechanisms that do allow the people to participate. This is good, and I’m sure many of us want to encourage their use in the latter mode. What’s more, I’m sure we’d all like to see Liberty adhere to its principles (for example, from the same document, “Avoiding collusion between identity provider and service provider”) by adopting, for example, selective disclosure techniques, so that when it is used in these modes (and perhaps in others) it better protects the important people. That is, you.

In short, I think the people who are beating me up are on the same page as me, so can we stop arguing and do something constructive, please?

13 May 2007

Is Liberty User-Centric?

Paul Madsen and Pat Patterson berate me for suggesting that Liberty is all about silos. They’re right, of course. You can use Liberty to support user-centric identity management, if you want to. But I’m not buying their argument that Liberty is all about user-centric. Paul Madsen says that Liberty is built on the assumption that users keep their identity where they want to; if that were really true it would be a very strange assumption indeed, since it’s pretty clear that users currently do not have any control at all over where their identity is kept, to speak of.

So, I’ll definitely buy a modified version of Paul’s assumptions:

  1. Users’ identity will be kept in multiple places.
  2. The ‘where’ can be 3rd party identity providers as well as local storage (e.g. devices).
  3. It’s highly unlikely that all aspects of identity will be maintained at the same provider, i.e. there will be multiple ‘wheres’.
  4. Most users don’t want to be responsible for facilitating identity sharing by themselves providing the ‘where’.
  5. Experts will misinterpret 1-4 to suit whatever is their current competitive positioning.

I don’t see how changing the first assumption (from “users keep their identity where they want to”) makes any difference to the architecture of appropriate solutions, once you’ve combined it with the fourth assumption. Of course, if you drop the fourth assumption, it makes a huge difference, because you’ll architect a solution where the user is in control.

But Liberty cannot drop the fourth assumption: if it did, its facilities for discovering data the user has no control over would be unnecessary.

Or, in other words, the base assumption of user-centric identity management is that users do want to control the “where”. If Liberty really were a user-centric architecture, it would have this assumption built in. And need I point out that assumption five applies to Liberty members just as well as anyone else?
Detractors will point out the dumbness of this idea:

Ben, you want to remember where the various pieces of your identity are located, go for it. Write down the addresses on sticky notes, email them to yourselves, scribble them on your palm, be my guest. Should you be available when some provider seeks your identity, you can sort through the list of equivalent providers and specify your choice. How very user-centric.

Of course, the users won’t be managing their data by such primitive means. Their computer(s) or their chosen service provider(s) will do all the legwork. How dumb would I sound if I said Liberty couldn’t work because the sysadmins couldn’t possibly keep track of all the post-it notes they’d need for all that identity data?

Pat says

In any case, user privacy, consent and control has always been foremost

As I have explained in my paper on selective disclosure, user privacy is just not possible to guarantee using the mechanisms that Liberty currently uses. Since user privacy is foremost, I look forward to Liberty’s adoption of selective disclosure.

Finally, Paul thinks he has taken the moral high ground by linking to this, so I feel obliged to point out once more that this blog does not reflect Google’s views on anything.

11 May 2007

Liberty Loves Silos

Filed under: Anonymity/Privacy,Identity Management — Ben @ 12:03

At both the recent Identity Open Space in Brussels, and the OECD workshop on identity management Liberty folk talked about the urgent need for protocols to discover identity services.

At the time, I was bemused: why would anyone need to discover services? Surely they would be communicated to you as they were needed? But last night I realised the truth: Liberty thinks you need discovery because they think it is both inevitable and correct that all your data should live in silos, beyond your control, and ideally where you can’t see it. Of course, in this case, you can’t assist in the process of locating information about you. Nor can you detect, let alone correct, inconsistency and incorrectness.

This is clearly so much better than user-centric identity (where, in case it isn’t obvious, discovery would be unnecessary – you would just ask me where to look). I can see why Liberty is so keen.

How CardSpace Breaks the Rules

Daniel Bartholomew wants to know which of Kim’s laws CardSpace breaks, and Chris Bunio thinks the OECD workshop was not the correct forum for a detailed discussion.

How fortunate, then, that this blog exists! I can answer Daniel’s question, and Chris can educate us all on why I am wrong.

In fact, there are many ways CardSpace could violate the laws, but there is one which it is currently inherently incapable of satisfying, which is the 4th law – the law of directed identity – which says, once you’ve fought your way through the jargon, that your data should not be linkable. I explain this in some detail in my paper, “Selective Disclosure” (now at v0.2!), so, Chris and Daniel, I suggest you read it.

10 May 2007

CardSpace and the Seven Laws (again)

At this OECD workshop on identity management, Fred Carter, of the Office of the Information and Privacy Commissioner, Ontario, spoke on “Functional Requirements for Privacy Enhancing Systems”. At one point he listed privacy protecting identity management systems, which he broadly defined as those following Kim’s seven laws. The list was short, just PRIME and Credentica … note the absence of CardSpace. So, I just had to ask: “does this mean that you believe CardSpace does not obey the seven laws?”. His reply? “Yes”.

Chris Bunio, a Senior Architect for Microsoft, was present. He did not dispute the claim.

2 May 2007

Laws of Identity, Revised

Filed under: Anonymity/Privacy,Identity Management — Ben @ 10:28

Many moons ago, I wrote my Laws of Identity. Yesterday, my friend Cat Okita pointed out a deficiency, so here’s an update…
I claim that for an identity management system to be both useful and privacy preserving, there are three properties assertions must be able to have. They must be:

  • Verifiable
    There’s often no point in making a statement unless the relying party has some way of checking it is true. Note that this isn’t always a requirement – I don’t have to prove my address is mine to Amazon, because it’s up to me where my goods get delivered. But I may have to prove I’m over 18 to get the alcohol delivered.
  • Minimal
    This is the privacy preserving bit – I want to tell the relying party the very least he needs to know. I shouldn’t have to reveal my date of birth, just prove I’m over 18 somehow.
  • Unlinkable
    If the relying party or parties, or other actors in the system, can, either on their own or in collusion, link together my various assertions, then I’ve blown the minimality requirement out of the water.

What’s changed? Cat pointed out that collusion isn’t necessary for linkability, people can do it all on their own.

18 Apr 2007

Privacy Preserving Road Usage Charging

Filed under: Anonymity/Privacy,Civil Liberties,Crypto — Ben @ 12:06

I recently attended a conference on “Respecting Privacy in Global Networks“. One of the talks was about road usage charging – the general idea being that instead of paying a flat fee related to your vehicle type, you pay for the roads you actually use. Of course, the obvious ways to implement this (either using a GPS to log a trail to some kind of secure device which is periodically examined to determine fees, or by collecting car details with roadside receivers) are stupendously privacy invading.

But, it occurs to me, we have the technology at our fingertips to make this system anonymous (except for defaulters) quite easily. All we need to do is fit cars with a device that can spend anonymous digital cash as they pass checkpoints. Cars that don’t fork out get their numberplate photographed. Obviously you have to back this up with legislation that forbids checking numberplates except on defaults, but that seems easy enough.
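To give a flavour of the machinery, here is a textbook-RSA sketch of a Chaum-style blind signature, the classic building block for anonymous digital cash (the parameters and protocol framing are mine, for illustration only – real e-cash needs proper key generation, padding, and double-spend detection):

```python
import hashlib
import math
import secrets

# Textbook RSA over two Mersenne primes -- fine for a sketch, NOT for real money.
p, q = 2**521 - 1, 2**607 - 1
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # the bank's private exponent

def withdraw(serial: bytes):
    """User blinds a coin serial; the bank signs without ever seeing it."""
    m = int.from_bytes(hashlib.sha256(serial).digest(), "big")
    while True:
        r = secrets.randbelow(n - 2) + 2    # blinding factor, coprime to n
        if math.gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n
    signed_blind = pow(blinded, d, n)        # bank signs the blinded value
    sig = (signed_blind * pow(r, -1, n)) % n  # user unblinds: sig == m^d mod n
    return serial, sig

def accept(serial: bytes, sig: int) -> bool:
    """A checkpoint verifies the bank's signature; it learns nothing about who withdrew."""
    m = int.from_bytes(hashlib.sha256(serial).digest(), "big")
    return pow(sig, e, n) == m

coin, sig = withdraw(b"coin-0001")
assert accept(coin, sig)
```

Checkpoints would record spent serials to catch reuse; the bank cannot tie a serial back to any withdrawal, because it only ever saw the blinded value.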

Of course in London we should have this system for congestion charging, which already monitors everyone’s movements.

1 Apr 2007

Anonymity on IP Networks

Filed under: Anonymity/Privacy — Ben @ 14:13

I attended a security summit for OLPC recently (which I will write up more fully when I have time). One of the issues that was raised was the trackability of the laptops via their MAC address – this is of particular concern because they participate in the mesh network even when powered off(!).

So, you can change the MAC address occasionally, and that will help – though the sudden disappearance of MAC A and appearance of MAC B will give the game away in low density situations. But when you are online, that hardly matters, since your IP address isn’t going to change – and changing it will cause your connections to drop.

This got me thinking about anonymity on IP networks. And I suddenly realised I know how to do it! A while back, I created Apres, a system for anonymous presence (which, incidentally, there is actual code for). The essence of Apres is that the two parties who want to exchange presence have a shared secret. They derive an ephemeral identifier from that secret and the current time. They can then use that ephemeral identifier to find each other, in the case of Apres via a rendezvous server. Obviously, one could use a similar system to derive an IP address known to both parties but to no-one else.
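A sketch of the derivation (the slot length and use of HMAC-SHA256 are my choices for illustration; Apres itself may differ in detail):

```python
import hashlib
import hmac
import time

def ephemeral_id(shared_secret: bytes, at=None, slot_seconds=300):
    # Both parties derive the same identifier from the secret and the
    # current time slot; to everyone else it looks like random noise,
    # and it changes every slot, so it cannot be tracked over time.
    t = time.time() if at is None else at
    slot = int(t // slot_seconds)
    return hmac.new(shared_secret, str(slot).encode(), hashlib.sha256).hexdigest()[:16]

secret = b"agreed out of band"
now = time.time()
# The two parties compute the identifier independently and meet at the rendezvous.
assert ephemeral_id(secret, now) == ephemeral_id(secret, now)
# Next slot, the identifier changes.
assert ephemeral_id(secret, now) != ephemeral_id(secret, now + 300)
```

For the IP-address variant, the low 64 bits of the digest could serve as an IPv6 interface identifier, rotating each slot.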

What’s more, unlike Apres, you could do this entirely on the fly, at least for client/server systems – you could contact the server on some well-known port and agree a secret, and then start using it for connections with that server.

Obviously this is going to be somewhat tricky to implement on an IPv4 network, and if you are on a fixed machine, obfuscating your IP address isn’t particularly going to help. But with IPv6 and mesh networking, it becomes quite an interesting proposition. Now, if only I knew of a laptop that did mesh networking and IPv6…

28 Mar 2007

Dilemmas of Privacy and Surveillance

The Royal Academy of Engineering has published an almost sensible paper on privacy and surveillance. They get off to a good start

There is a challenge to engineers to design products and services which can be enjoyed whilst their users’ privacy is protected. Just as security features have been incorporated into car design, privacy protecting features should be incorporated into the design of products and services that rely on divulging personal information.

but then wander off into cuckooland

sensitive personal information stored electronically could potentially be protected from theft or misuse by using digital rights management technology.

Obviously this is even more loony than trying to protect music with DRM. Another example

Another issue is whether people would wish others to have privacy in this arena – for example, the concern might arise that anonymous digital cash was used by money launderers or terrorists seeking to hide their identity. Thus this technology represents another dilemma – should anonymous payment be allowed for those who wish to protect their privacy, or should it be strictly limited so that it is not available to criminals?

Riiight – because we have these infallible methods for figuring out who is a criminal.

Also, as usual, no mention whatever of zero-knowledge or selective disclosure proofs. But even so, better than most of the policy papers out there. Perhaps next time they might consider consulting engineers with relevant knowledge?

(via ORG)

1 Mar 2007

Government Consultation on Information Assurance

The government is running a consultation on its e-Government framework for Information Assurance. The thing I find most disappointing about it is the complete inability to see beyond identification as a means of access control. I believe it was at PET 2005 that someone claimed that an analysis of citizens’ interactions with government in Australia showed that in over 90% of cases there was no need for the individual to be identified – all that was needed was a proof of entitlement. This can be achieved quite easily even using the kind of conventional cryptography the framework advocates, though this will still allow a citizen’s interactions to be linked with each other – which we all know is not desirable. Even better to use zero knowledge or selective disclosure proofs, as discussed ad nauseam in this blog. Yet, despite this, there is not a single mention of any access control method other than complete identification.
If you do nothing else, I encourage you to make this point in any submission you make.

19 Feb 2007

CardSpace Cannot Provide Privacy

Kim Cameron writes about “token independence” and how SAML doesn’t have it. As far as I can see, token independence is yet another word for unlinkability – that is, if I present a token twice, the two presentations should not be linkable. Of course, MS have a new word for this, too – they call it “non-auditing”.

However, Kim continues to be in denial about the impossibility of achieving this with traditional crypto. As I point out at every opportunity I get, a signed assertion using any traditional method is inherently linkable, because the signature itself is invariant. Scott Cantor points this out in a comment on Kim’s blog

I don’t think it’s enough to remove a couple of XML attributes to avoid the correlation attack you’re talking about. I think it requires non-traditional cryptography to present a signed claim of anything from a third party in such a way that the whole bag of bits can’t be used as a correlatable handle

Kim tries to wriggle out of this by saying

You don’t need special cryptography as long as you are willing to employ “bearer tokens” to convey non-unique assertions. You do need blind signatures once you want to associate tokens with proof keys.

I think by “bearer tokens” he means self-asserted tokens. This is surely a completely incorrect use of the term, but I assume it’s what he meant, since he seems to be saying the other possibility is a token signed by a “proof key” – whatever that is; presumably a key the relying party trusts in some way. Assuming all my guesses at his terminology are correct, then this argument is self-defeating – if the tokens are self-asserted, then they can be constructed on the fly each time they are needed, and so SAML will work just as well as any other way of expressing tokens, since the correlating fields can be changed each time.
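If my reading of his terminology is right, the re-minting argument looks like this in miniature (names and fields invented for illustration): a self-asserted token can be rebuilt on every presentation, so nothing invariant survives to correlate on:

```python
import json
import os
import time

def mint_self_asserted(claims: dict) -> str:
    # A self-asserted token is rebuilt on every presentation: fresh nonce,
    # fresh timestamp, so no field survives from one showing to the next.
    token = dict(claims, nonce=os.urandom(16).hex(), issued_at=time.time())
    return json.dumps(token, sort_keys=True)

t1 = mint_self_asserted({"over_18": True})
t2 = mint_self_asserted({"over_18": True})
assert t1 != t2   # same claims, uncorrelatable presentations
```

This only works because no third party’s signature is required; the moment an identity provider signs the token conventionally, the invariant signature bytes become the correlation handle again.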

If Microsoft are really serious about providing “non-audit” (i.e. unlinkable) modes for CardSpace, then they need to get with the program and stop trying to pretend that they can do this with RSA signatures. It’s a shame that they’re going to such lengths to make CardSpace good but can’t quite seem to go the last mile and make their claims actually true. Perhaps they don’t want to?

6 Feb 2007

The Tories Hate ID Cards

They don’t work, they cost an arm and a leg, and they create a surveillance state. In short.

4 Dec 2006

Big Brother Knows Best

Filed under: Anonymity/Privacy,Civil Liberties — Ben @ 13:18

The Guardian printed a coupon last week that you could fill in and mail to the NHS asking to not have your medical records included on the NHS’ Spine.

In keeping with their policy of establishing consensus through advertising, the Department of Health have apparently responded that

nobody could have genuine grounds for claiming “substantial and unwarranted distress” as a result of having their intimate medical details included on a national computer system

Since, it seems, the criterion for being allowed to opt out is that you must demonstrate “substantial and unwarranted distress”, this means that no-one can opt out!

No doubt in a few months the fact that no-one has opted out will be quoted as evidence that there is widespread public support for the Spine.

Update: According to The Register, Lord Warner, health minister said:

“Patients will be informed in advance about new ways in which their information will be held and shared and they will be told they have the right to dissent – or ‘opt out’ – of having information shared.”

which doesn’t really tally with a statement by another minister, John Hutton:

“The Data Protection Act also provides patients with a right, where they are suffering substantial damage or distress, to object to processing of their data, including to prevent their data being held at all in an identifiable form, though this is expected to be a very rare event. We are currently considering how this right should apply to implementation of the NHS care record.”

If you care about this stuff, you might want to take a look at The Big Opt Out.

8 Nov 2006

If You Have Laws, Are You A Politician?

Filed under: Anonymity/Privacy,Crypto,Identity Management — Ben @ 12:14

A while back, I posted about Ontario’s love affair with Cardspace (I notice, btw, that Ann Cavoukian is so hip to this ‘net thing that she’s broken the link to her white paper, which is now here – confidence inspiring). In that post, I said that there was a false claim that the laws were “developed through an open consensus process”.

Kim, being a smart guy, responded thusly

there were many people who interacted with me when I was articulating the laws. I listed them all in the laws – and no one asked not to be mentioned, so far! I’ve actually been under the impression that there is general consensus that the laws move us forward. Even you seem to agree.

So I don’t get your point. You don’t want people like Anne Cavoukian to get involved? You don’t think the laws are a good handle for doing so? You don’t think the laws have had a defining role in the emergence of user centric approaches? Or are you arguing that there is no consensus because we didn’t take a formal vote?

I actually thought the laws were a good way for the privacy community to hook up with those of us doing identity.

This strikes me as a politician’s response. I didn’t say there wasn’t a consensus that the laws move us forward. I didn’t say I disagreed with them. I didn’t say Ann Cavoukian should not get involved. I didn’t say the laws were a bad handle for involvement. I didn’t say that the laws have not had a defining role. I didn’t say there was no consensus because we didn’t vote.

What I did say is that the laws were not evolved through an open consensus process. Kim wrote them down. Many people said they were cool, including me. Kim may have made minor changes in response to discussion, but they are not the result of some kind of groupthink.

For example, I have often pointed out that the laws do not include the requirement for unlinkability, but they have not been updated to include it (presumably either because Kim doesn’t think it’s a requirement, or, perhaps more realistically, because Cardspace does not support unlinkability). I and others have pointed out that law 4 is practically unreadable on its own – what are “omnidirectional” and “unidirectional” identifiers? Indeed, what are “public” and “private” entities – now I think about it, this law needs serious redrafting to make any sense.

Kim also says I should read Cavoukian’s version of the laws, and he’s right. She’s redrafted law 4 rather well:

A universal identity metasystem must be capable of supporting a range of identifiers with varying degrees of observability and privacy. Unidirectional identifiers are used by the user exclusively for the other party, and support an individual’s right to minimize data linkage across different sites. This is consistent with privacy principles that place limitations on the use and disclosure of one’s personal information. At the same time, users must also be able to make use of omnidirectional identifiers provided by public entities in order to confirm who they are dealing with online and, thereby, ensure that their personal information is being disclosed appropriately. To further promote openness and accountability in business practices, other types of identifiers may be necessary to allow for appropriate oversight through the creation of audit trails.

I’m particularly interested by “unidirectional identifiers are used by the user exclusively for the other party, and support an individual’s right to minimize data linkage across different sites” – in other words, she recognizes the need for unlinkability. And it’s true that unidirectional identifiers support the right to minimize data linkage – but they don’t achieve it on their own, and this is where Cardspace currently falls down: unidirectional credentials are issued through a process that is entirely linkable. Unlinkability is achieved only if everyone agrees not to link.
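The idea behind unidirectional identifiers is easy to sketch. The following hypothetical example (this is not Cardspace’s actual mechanism, and the secret and site names are invented for illustration) derives a distinct, stable identifier per relying party from one master secret, so the identifiers themselves carry no cross-site linkage – though, as I say, that buys you nothing if the issuing process is linkable anyway:

```python
import hmac
import hashlib

def unidirectional_id(master_secret: bytes, relying_party: str) -> str:
    """Derive a stable, per-site identifier from a master secret.

    Each relying party sees a different identifier, so two sites
    cannot link a user just by comparing identifiers. Illustrative
    only: real systems must also keep issuance unlinkable.
    """
    return hmac.new(master_secret, relying_party.encode(), hashlib.sha256).hexdigest()

secret = b"user-master-secret"  # hypothetical per-user secret

id_a = unidirectional_id(secret, "site-a.example")
id_b = unidirectional_id(secret, "site-b.example")

# Different sites see unrelated identifiers...
assert id_a != id_b
# ...but each site sees the same identifier every time.
assert id_a == unidirectional_id(secret, "site-a.example")
```

The point of the sketch is the failure mode, not the HMAC: even with perfectly pairwise identifiers, if both sites obtained their credentials through a single linkable issuing transaction, the issuer (or anyone colluding with it) can still join the dots.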

Kim also says that no-one has asked to be removed from his list of “contributors”. This is totally unsurprising – many of them have a vested interest in staying friendly with Microsoft – but I know from private communications that not all of them actually agree that they have contributed.

Kim could easily refute my claim with facts rather than rhetoric. All he needs to do is point to the “wide-ranging conversation documented at” and show how his laws evolved through that conversation. I’ve looked – indeed, I’ve followed the discussion – and I haven’t found any evidence at all that supports this claim, let alone contributions by each of the listed people.

Incidentally, I notice that Kim links to practically every blog out there that talks about identity – but not mine. Is that because he doesn’t want to link to anyone that’s not 100% positive about Cardspace?

Germany Is Good For Privacy

Filed under: Anonymity/Privacy,Civil Liberties — Ben @ 5:03

A German court has ruled that ISPs have no right to store IP logs. You have to complain in order to get them erased, but that’s still way better than the usual situation.

Now all we need is the same ruling in the UK.

