Links

Ben Laurie blathering

23 Apr 2008

Why You Should Always Use End-to-End Encryption

Filed under: Anonymity/Privacy,Crypto,Privacy,Security — Ben @ 17:46

A Twitter user has had all her private messages exposed to the world. This is one of the reasons I try to avoid sending private messages (at least, ones that I would like to remain private) over any system that does not employ end-to-end encryption.

At least then my only exposure is to my correspondent, not the muppets that run the messaging service I used.

One service this poor unfortunate has done for the world, though, is to provide an excellent example of why you should use cryptography routinely: you need not have any more to hide than your embarrassment.

Incidentally, I am going to stop using the combined tag “Anonymity/Privacy” after this post – clearly they are not always both applicable.

Phorm Legal Analysis

Filed under: Anonymity/Privacy,Security — Ben @ 17:37

FIPR’s Nick Bohm has written a fascinating legal analysis of Phorm’s proposed system. It’s nice that RIPA’s effects are not all bad, but it turns out that, in Nick’s opinion, Phorm are on the hook for a number of other offences under various acts…

  • The Regulation of Investigatory Powers Act 2000
  • The Fraud Act 2006
  • The Data Protection Act 1998

He also beats up Simon Watkin of the Home Office (well-known in UK privacy circles for spending a great deal of energy trying to persuade us all that RIPA [then known as RIP] was going to be alright, really), for a note he wrote which suggested that Phorm’s business model was just fine under RIPA. Simon stays true to form by pointing out that the note wasn’t actually advice, and was not based on paying any attention at all to what Phorm were actually proposing. One has to wonder, then, what the point of writing it was?

Perhaps more disturbingly, Nick also talks about what may be the first attempt at enforcement against Phorm. Not surprisingly, the police say they’re too busy and it’s the Home Office’s problem, and the Home Office say it’s not their job to investigate offences under RIPA. Isn’t it lucky, then, that we are doing their investigating for them?

I’m also pleased to see that Nick supports my view that the consent of both the user and the web server must be obtained for Phorm’s interception to be legal under RIPA:

RIPA s3(1) makes it lawful if the interception has the consent of both sender and recipient (or if the interceptor has reasonable grounds for believing that it does). This raises the question of whose consent is required for the interception of communications of those using web browsers.

I’m also intrigued by Nick’s analysis of Phorm’s obligations under the Data Protection Act. Where sensitive personal data is processed by Phorm, the user’s consent must be obtained. Nick argues that Phorm will see information relating to

• their racial or ethnic origin,
• their political opinions,
• their religious or similar beliefs,
• whether they are members of a trade union,
• their physical or mental health or condition,
• their sexual life,
• the commission or alleged commission by them of any offence, or
• any proceedings for any offence committed or alleged to have been committed by them, the disposal of such proceedings or the sentence of any court in such proceedings

It occurs to me that Nick has missed a trick here: the user might also view sensitive data relating to a third party – for example, they might participate in a closed web forum where, say, sexual preferences are discussed. In this case, it seems to me, the consent of that third party would need to be obtained by Phorm.

31 Mar 2008

More Bullshit from Phorm

Filed under: Anonymity/Privacy,Digital Rights,Security — Ben @ 14:54

Phorm continue to sob that we whining privacy advocates are misrepresenting their system:

Phorm’s chairman and chief executive, Kent Ertugrul, said yesterday the firm was the victim of misinformation. “What is so strange about this is that if you were to put on a board what we do and what has been written about us and map the two, you would find there is very little correlation,” he said.

I’d be more than happy to compare what I’ve said to what their system actually does, only … when the Open Rights Group nominated me to be briefed by Phorm (in my capacity as both a director of ORG and a subject matter expert) they declined, on the basis that I work for a competitor, despite my assurance that I would not be acting for Google in any way, as is always the case when I do stuff for ORG. But, hey, trust is a one-way street, apparently, if you are Phorm – as one of the surveilled, I must trust them, but that’s no reason they should trust me, is it?

Strangely they were quite happy to brief two of my colleagues in detail, without any NDA – and my colleagues are planning to produce a full, public report of that briefing. With a bit of luck, they’ll have addressed all my concerns, but who knows? I wasn’t there to assist in that process.

Interestingly, they go on to say

“What we would like to do is issue a challenge to the privacy community to select some of their most technically savvy representatives and form an inspection committee. We would be delighted, on a recurring basis, to give those people the ability to spot inspect what it is we do.”

which rather emphasizes one of the core problems with their system: it requires everyone to trust that all this data they have gathered without consent is actually handled as they claim it is handled.

I do hope Phorm will be paying the going rate for this valuable service – but probably I won’t find out because I expect that, despite my obvious qualifications, I will be excluded from such a group. It wouldn’t do to have anyone too expert looking at their system, after all.

30 Mar 2008

Microsoft Implement The Evil Bit

Filed under: Anonymity/Privacy,Security — Ben @ 18:41

Thanks to the Shindig mailing list, I’ve just noticed this gem from Microsoft.

The essence here is that third party sites inside frames might invade your privacy by setting cookies, so IE6, by default, doesn’t let them set cookies. But, if they promise to be good, then it will allow them to be bad. Isn’t that marvellous?

What I think is particularly excellent about Microsoft’s support article is that they tell you how to suppress the behaviour by setting an appropriate P3P policy … but they don’t tell you what this policy really means, nor suggest that you should only set the policy if you actually conform to it.

Of course, you can tell it’s a Microsoft protocol because it takes 21 bytes to do what the original proposal could do in a single bit.
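For reference, the mechanism is nothing more than an HTTP response header carrying a P3P “compact policy”. Here’s a minimal sketch of a server making the promise; the CP tokens are illustrative only, and note that nothing whatsoever checks them, which is rather the point:

    # Minimal sketch: a third-party server declaring a P3P compact policy
    # so that IE6 will accept its cookies. The CP tokens are illustrative;
    # nothing verifies that the site actually behaves as they claim.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PromisingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # "We promise to be good" -- taken entirely on trust.
            self.send_header('P3P', 'CP="NOI DSP NID PSA OUR"')
            self.send_header('Set-Cookie', 'track=12345; path=/')
            self.send_header('Content-Type', 'text/plain')
            self.end_headers()
            self.wfile.write(b"cookie set\n")

    if __name__ == '__main__':
        HTTPServer(('localhost', 8080), PromisingHandler).serve_forever()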

20 Mar 2008

Interoperability

Despite Kim’s promise in his blog

That doesn’t mean it is trivial to figure out the best legal mechanisms for making the intellectual property and even the code available to the ecosystem. Lawyers are needed, and it takes a while. But I can guarantee everyone that I have zero intention of hoarding Minimal Disclosure Tokens or turning U-Prove into a proprietary Microsoft technology silo.

Like, it’s 2008, right? Give me a break, guys!

I’ve now heard through several different channels that Microsoft want to “ensure interoperability”. Well. Interoperability with what, I ask? In order for things to be interoperable, they must adhere to a standard. And for Microsoft to ensure interoperability, they must both license the intellectual property such that it can only be used in conformance with that standard, and control the standard itself.

I don’t know about you, but that sure sounds like a “proprietary Microsoft technology silo” to me.

13 Mar 2008

Bad Phorm?

Filed under: Anonymity/Privacy,Security — Ben @ 14:28

As anyone even half-awake knows, there has been a storm of protest over Phorm. I won’t reiterate the basic arguments, but I am intrigued by a couple of inconsistencies and/or misleading statements I’m spotting from Phorm’s techies.

In an interview in The Register, Phorm’s “top boffin” Marc Burgess says

What the profiler does is it first cleans the data. It’s looking at two sets of information: the information in the request that’s sent to the website and then information in the page that comes back.

From the request it pulls out the URL, and if that URL is a well known search engine such as Google or Yahoo! it’ll also look for the search terms that are in the request.

And then from the information returned by the website, the profiler looks at the content. The first thing it does is it ignores several classes of information that could potentially be sensitive. So there’s no form fields, no numbers, no email addresses (that is something containing an “@”) and anything containing a title like Mr or Mrs.

he says “there’s no form fields”. But this is in the response from the webserver. Form fields in the request sent to the webserver are fair game, it seems. In other words, Phorm are quite happy to spy on what you type, but will ignore form fields sent to you by the server – well, that’s big of them: those fields are usually empty. It’s interesting that many people have picked this up as a contradiction (that is, how can there be no form fields if you are looking at search terms, which are entered into a form field?) – but it has been carefully worded so that it is not contradictory, just easy to misinterpret.

Phorm can completely adhere to this public statement and yet still look at everything you typed. Note also that they only talk about filtering sensitive data in the response and not in the request. So nothing, it seems, is really sacred.

Incidentally, they are silent about what they do with the body of the request (usually when you submit a form, the fields end up in the body rather than the URL). That fills me with curiosity.
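To make the distinction concrete, here’s a sketch of the same form submitted both ways (the search term is invented):

    # Sketch: the same form field, submitted two ways. With GET the field
    # travels in the URL, which Phorm admits to mining; with POST it
    # travels in the request body, about which they say nothing.
    from urllib.parse import urlencode

    fields = {"q": "embarrassing medical condition"}

    get_request = "GET /search?" + urlencode(fields) + " HTTP/1.1"

    body = urlencode(fields)
    post_request = ("POST /search HTTP/1.1\r\n"
                    "Content-Type: application/x-www-form-urlencoded\r\n"
                    "Content-Length: " + str(len(body)) + "\r\n"
                    "\r\n" + body)

    print(get_request)   # fields visible in the URL
    print(post_request)  # fields visible only in the body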

Even ORG swallow this bit of propaganda (from ORG’s post)

Phorm assigns a user’s browser a unique identifying number, which, it is claimed, nobody can associate with your IP address, not even your ISP.

Of course, this is nonsense. The ISP can easily associate the identifying number with your IP address – all they have to do is watch the traffic and see which IP address sends the cookie with the Phorm ID in it. In fact, they could probably use the Phorm box for this, since it already sees all the data.

and Phorm’s CEO, Kent Ertugrul, again in the interview with The Register:

It’s important to understand the distinction between actually recording stuff and concluding stuff. All of our systems sit inside BT’s network. Phorm has no way of going into the system and querying “what was cookie 1000062 doing?”. And even if we did we have no way of knowing who 1000062 was. And even if we did all we could pull out of it is product categories. There’s just no way of understanding where you’ve been, what you’ve done, what you’ve searched for.

They say this, but we have to take their word for it. Obviously the fact it sits inside BT’s network is no barrier to them connecting to it. Clearly they could just look at the traffic traversing the system and know exactly what cookie 1000062 is doing. And which IP address is doing it, which doesn’t tell you who is doing it, but certainly narrows it down. Analysis of the data will almost certainly allow identification of the individual concerned, of course.
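A sketch of just how easy that association is; the traffic records below are invented stand-ins for what an interception box sees, and the cookie name is hypothetical:

    # Sketch: associating the "anonymous" Phorm UID with an IP address by
    # watching traffic. The records and the cookie name are hypothetical.
    observed = [
        ("86.1.2.3", "Cookie: PHORM_UID=1000062"),
        ("86.9.8.7", "Cookie: PHORM_UID=2000017"),
        ("86.1.2.3", "Cookie: PHORM_UID=1000062"),
    ]

    uid_to_ips = {}
    for src_ip, header in observed:
        if "PHORM_UID=" in header:
            uid = header.split("PHORM_UID=")[1].split(";")[0]
            uid_to_ips.setdefault(uid, set()).add(src_ip)

    print(uid_to_ips)  # {'1000062': {'86.1.2.3'}, '2000017': {'86.9.8.7'}}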

Taking people’s word for their privacy practices is not, of course, unacceptable – it is pretty much unavoidable. What I object to is Phorm’s attempts to convince us that it is impossible for them to misbehave. Of course, it is not.

Now let’s take a look at BT’s FAQ

Is my data still viewed when I am not participating?

No, when you don’t participate or switch the system off — it’s off. 100%. No browsing data whatsoever is looked at or processed by BT Webwise. We should be clear: the Phorm servers are located in BT’s network and browsing data is not transmitted outside. Even if you are opted out, websites will still show you ads (as they do now) but these will not be ads from the Phorm service and they will not be more relevant to your browsing. In addition, you will also not get extra protection from fraudulent websites.

This is just obviously a lie. Since opt-out is controlled by a cookie, the system must look at your browsing data in order to determine whether you have the opt-out cookie or not. Naughty BT.
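The logic is unavoidable; a sketch (the cookie name is hypothetical):

    # Sketch: honouring an opt-out cookie requires reading the request
    # headers -- that is, looking at your browsing data. The cookie name
    # is hypothetical.
    def should_profile(request_headers):
        cookies = request_headers.get("Cookie", "")
        # By the time this test runs, the "switched off" system has
        # already received and parsed your request.
        return "webwise_optout=1" not in cookies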

Furthermore, it is difficult to imagine how they could architect a system where your data did not traverse some box doing interception, though it may, of course, decide not to look at that data. But once more we’d have to take their word for it. How can we ever be sure they are not? Only by having our data not go to the box at all.

Talk Talk say they are going to architect their system in this way, somewhere in the comments on this post. I await details with interest – I can’t see how they can do it, except by either pushing the traffic through some other interception box, which doesn’t really change the situation at all, or by choosing whether to send to the Phorm box on the basis of IP address – which does not identify the user, so, for example, I could find myself opted-in by my children, without my knowledge!

All these worries apply to the system working as intended. What would happen if the Phorm box got pwned, I dread to think. I hope they’ve done their homework on hardening it! Of course, since they have “no access to the system”, it’ll be interesting to see how they plan to keep it up-to-date as attacks against it evolve.

11 Mar 2008

RFC 5155

Filed under: Anonymity/Privacy,Crypto,Distributed stuff — Ben @ 11:43

After nearly 4 years of mind-bending minutiae of DNS (who would’ve thought it could be so complicated?), political wrangling and the able assistance of many members of the DNSSEC Working Group, particularly my co-authors, Roy Arends, Geoff Sisson and David Blacka, the Internet Draft I started in April 2004, “DNSSEC NSEC2 Owner and RDATA Format (or; avoiding zone traversal using NSEC)” now known as “DNS Security (DNSSEC) Hashed Authenticated Denial of Existence” has become RFC 5155. Not my first RFC, but my first Standards Track RFC. So proud!

Matasano Chargen explain why this RFC is needed, complete with pretty pictures. They don’t say why it’s complicated, though. The central problem is that although we all think of DNS as a layered system neatly corresponding to the dots in the name, it isn’t.

So, you might like to think, and it is often explained this way, that when I look up a.b.example.com I first ask the servers for the root (.) who the nameservers for com are. Then I ask the com nameservers who the nameservers for example.com are, then ask those for the nameservers for b.example.com, and finally ask them for the address of a.b.example.com.

But it isn’t as easy as that. In fact, the example.com zone can contain an entry a.b.example.com without delegating b.example.com. This makes proving the non-existence of a name by showing the surrounding pair rather more challenging. The non-cryptographic version (NSEC) solved it by cunningly ordering the names so that names that were “lower” in the tree came immediately after their parents. Like this:

a.example.com
b.example.com
a.b.example.com
g.example.com
z.example.com

So, proving that, say, d.example.com doesn’t exist means showing the pair (a.b.example.com, g.example.com). Note that this pair does not prove the nonexistence of b.example.com as you might expect from a simple lexical ordering. Unfortunately, once you’ve hashed a name, you’ve lost information about how many components there were in the name and so forth, so this cunning trick doesn’t work for NSEC3.
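The cunning ordering is easy to reproduce: compare names label by label, rightmost label first, so children sort immediately after their parents. A quick sketch:

    # Sketch of NSEC's canonical ordering: sort by labels taken right to
    # left, so names lower in the tree come immediately after their
    # parents.
    def canonical_key(name):
        return tuple(reversed(name.lower().split(".")))

    names = ["a.example.com", "b.example.com", "a.b.example.com",
             "g.example.com", "z.example.com"]

    print(sorted(names, key=canonical_key))
    # ['a.example.com', 'b.example.com', 'a.b.example.com',
    #  'g.example.com', 'z.example.com']
    # d.example.com would fall between a.b.example.com and g.example.com,
    # so that pair proves its nonexistence, exactly as in the text.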

It turns out that in general, to prove the nonexistence of a name using NSEC you have to show at most two records, one to prove the name itself doesn’t exist, and the other to show that you didn’t delegate some parent of it. Often the same record can do both.

In NSEC3, it turns out, you have to show at most three records. And if you can understand why, then you understand DNS better than almost anyone else on the planet. (Roughly: one record to match the closest enclosing name that does exist, one to cover the nonexistent name just below it, and one to cover the wildcard that might otherwise have matched.)

6 Mar 2008

Microsoft Buys Credentica

Kim and Stefan blog about Microsoft’s acquisition of Stefan’s selective disclosure patents and technologies, which I’ve blogged about many times before.

This is potentially great news, especially if one interprets Kim’s

Our goal is that Minimal Disclosure Tokens will become base features of identity platforms and products, leading to the safest possible internet. I don’t think the point here is ultimately to make a dollar. It’s about building a system of identity that can withstand the ravages that the Internet will unleash.

in the most positive way. Unfortunately, comments such as this from Stefan

Microsoft plans to integrate the technology into Windows Communication Foundation and Windows Cardspace.

and this from Microsoft’s Privacy folk

When this technology is broadly available in Microsoft products (such as Windows Communication Foundation and Windows Cardspace), enterprises, governments, and consumers all stand to benefit from the enhanced security and privacy that it will enable.

sound more like the Microsoft we know and love.

I await developments with interest.

25 Feb 2008

If You Have Cardspace, Why Use OpenID?

Filed under: Anonymity/Privacy,Identity Management,Security — Ben @ 14:31

Kim Cameron writes about fixing OpenID’s phishing problems by using Cardspace. Certainly I agree that using strong authentication to the OpenID provider fixes the phishing problem – but if you have strong authentication, why bother to use OpenID at all? Why not strongly authenticate to the site you are really trying to log into, instead?

Of course, Cardspace is a pretty heavyweight solution for this, so perhaps that’s what Kim’s getting at? It also doesn’t work well if you have more than one machine – moving your credentials around is not something Cardspace does well.

In my view, there’s a sweeter spot for solving this problem than Cardspace (or OpenID, obviously) – and that is to do strong authentication based purely on a password. That way, you can use the same password everywhere, so no problem with moving between machines, but can still resist phishing attacks and don’t have to make yourself linkable across all sites. Obviously supporting this would be way easier than taking the whole of Cardspace on board, but would have all of the immediate advantages. Clearly it would get you nowhere with advanced identity management, but it’s not like we don’t already have protocols for that, and nor does there seem to be much demand for it yet.
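To give a flavour of what I mean, here’s a sketch; the names and parameters are invented, and a real design would use a proper password-authenticated key exchange rather than this bare construction:

    # Sketch: one master password, per-site unlinkable credentials,
    # phishing-resistant challenge-response. Illustrative only -- a real
    # scheme would use an actual PAKE protocol.
    import hashlib, hmac, os

    def site_key(master_password, site):
        # Derive a per-site key: the same password everywhere, but the
        # derived keys are unlinkable across sites.
        return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                                   b"site:" + site.encode(), 200_000)

    def respond(master_password, site, challenge):
        # Neither the password nor the derived key crosses the wire, so
        # a phishing site learns nothing it can reuse elsewhere.
        return hmac.new(site_key(master_password, site), challenge,
                        hashlib.sha256).digest()

    challenge = os.urandom(16)  # sent by the genuine site
    proof = respond("correct horse battery staple", "bank.example", challenge)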

23 Feb 2008

Wikileaks

Filed under: Anonymity/Privacy,Civil Liberties,Crypto — Ben @ 14:15

The Guardian has a nice article about Wikileaks today. This was triggered by bizarre behaviour on the part of Bank Julius Baer‘s lawyers, Lavely and Singer (“Attack Dogs of L.A. Law”), who asked Wikileaks to remove documents without specifying what documents or who their client was and then got an injunction to have the wikileaks.org domain deleted.

The documents are still available, of course.

One thing I should correct, though. The article says

Those behind Wikileaks include … Ben Laurie, a mathematician living in west London who is on the advisory board.

I’m not a mathematician (any more), and I’m not behind Wikileaks. I think it’s a good idea, and I did comment on an early design for the technical infrastructure (which, I must say, was cool), but I am otherwise uninvolved. Everyone thinks this is just a cunning ploy to distance myself from it, but really, it’s true.

13 Jan 2008

Be Careful With The Social Graph

Filed under: Anonymity/Privacy,Identity Management,Security — Ben @ 19:50

Bob Blakley is concerned that if we open up the social graph, then we’ll kill social networking (if I were you I’d skip the rather complicated and irrelevant analogy he kicks off with: to mangle my friend Jennifer Granick‘s oft-given advice, we should talk about the thing itself and not what it is like). His core point is that it’s not OK for Scoble to move his relationship data from one system to another because he doesn’t own that data – it is jointly owned by him and those with whom he has relationships.

Whilst I agree that it may not be OK to move such data around, I think Bob is wrong about the details. Plus he picked a terrible example: it hardly matters what Scoble did with his friends list because anyone can already see it.

And this precisely illustrates what seems most important to me: when I share social data, I do so under certain conditions, both explicit and implicit. What I care about, really, is that those conditions continue to be met. I don’t really mind who does the enforcing, so long as it is enforced. So, it seems to me that it’s OK to create the social graph, you just have to be exceedingly careful what you do with it.

This presents two, in my view, enormous technical challenges. The first is dealing with a variety of different conditions applying to different parts of the graph. Even representing what those conditions are in any usable way is a huge task, but then you also need to figure out how to combine them, both when multiple conditions apply to the same piece of data (for example, because you figured it out twice in different ways) and when the combination of various pieces of data, each with its own conditions, yields something new.

Once you’ve done that you are faced with a much larger problem: working out what the implicit conditions were and enforcing those, too. The huge adverse reaction we saw to Facebook’s Beacon feature shows that such implicit conditions can be unobvious.

Anyway, the bottom line is that those in favour of the social graph tend to see it as some nodes, representing people, and edges, representing relationships. What they ignore is the vast cloud of intertwined agreements and understandings woven around all those edges and nodes. But those are absolutely vital to the social graph. Without them, as Bob says

Opening the social graph will destroy social networks, and turn them into sterile public spaces in which formation of meaningful and intimate relationships is not possible.

So, by all means, open the social graph but do it really carefully.

One thing I’ll note in passing: it is very common, in human relationships, to reveal far more than you are supposed to – under condition that the recipient of the revelation maintains absolute secrecy about it. For example, everyone knows that Alice is bonking Bob except Alice’s husband and Bob’s wife. This is because a series of “absolute secrecy” conditions and careful thought have neatly partitioned the world with respect to this piece of information. Usually. Should a good social graph emulate this?

12 Jan 2008

Me-ville Versus The Global Village

Thanks to Adriana, I just came across an intriguing post on VRM. In it, two completely different versions of VRM are presented (he thinks he presented four, but I claim that the “vendor control” end of the spectrum is CRM, not VRM).

In Me-Ville, everything is anonymous and reputation/value-based. In the Global Village, it’s all about long-term relationships. I think this divide is interesting and sums up the difference between the approach taken by techies like Alec Muffett and me, and the approach fluffier, more social people like Adriana Lukas and Doc Searls would like to take.

Who’s right? Well, normally I’d say I am, but I’m not sure I really know in this case. But recognition is the first step towards reconciliation.

7 Jan 2008

Presence is a Privacy Problem

Filed under: Anonymity/Privacy,Crypto,Security — Ben @ 20:16

I don’t know why I’ve never written about this before. One thing that’s always bugged me about instant messaging is that I can’t choose who sees my presence and who doesn’t. As a result, I don’t advertise presence, as people who IM with me will know.

Why do I care? Mostly because I am being a purist. But the purist point is this: by my presence information I give away information that can be correlated across channels. To take Kim Cameron’s favourite example, if my alter ego LeatherBoy always comes online at the same time as me, someone who can view both alter egos can eventually make the correlation. There are other channels – for example if LeatherBoy is always online when I buy something at Amazon, then, again, one can start to entertain the notion that we are the same.
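The correlation itself is trivial arithmetic; a sketch with invented timestamps:

    # Sketch: correlating two "separate" identities from presence timing
    # alone. The timestamps (seconds) are invented.
    ben = [100, 5000, 9000, 14000]         # times Ben comes online
    leatherboy = [103, 5004, 9002, 13998]  # times LeatherBoy comes online

    def overlap(a, b, window=10):
        return sum(any(abs(x - y) <= window for y in b) for x in a) / len(a)

    print(overlap(ben, leatherboy))  # 1.0 -- probably the same person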

There are people I wouldn’t mind assisting in organising their time by advertising my presence to. And probably others to whom I’d like it to be fabricated. But I can’t do that. IM is broken.

I did toy with turning it on, but with the definition of idle turned up really high (like, after 100 minutes). The problem there is that you can work out my actual idle time from my advertised one, and likewise the time I come back online. Clients don’t (currently) offer the option of being somewhat random about when they start to advertise a status change.
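The client-side fix is hardly rocket science; a sketch of the missing option:

    # Sketch of the missing client feature: advertise a status change
    # only after a random delay, so the advertised times can't be used
    # to clock the real ones.
    import random, time

    def advertise_later(set_status, new_status, min_s=60, max_s=1800):
        time.sleep(random.uniform(min_s, max_s))  # blur the timing channel
        set_status(new_status)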

At least, though, I can fix that problem by modifying the client code. The selective presence problem is less tractable: the protocols do not support it.

24 Dec 2007

Handling Private Data with Capabilities

Filed under: Anonymity/Privacy,Capabilities,Programming,Security — Ben @ 7:10

A possibility I’ve been musing about that Caja enables is to give gadgets capabilities to sensitive (for example, personal) data which are opaque to the gadgets but nevertheless render appropriately when shown to the user.

This gives rise to some interesting, perhaps non-obvious consequences. One is that a sorted list of these opaque capabilities would itself have to be opaque, otherwise the gadget might be able to deduce things from the order. That is, the capabilities held in the sorted list would have to be unlinkable to the original capabilities (I think that’s the minimum requirement). This is because sort order reveals data – say the capabilities represented age or sexual preference and the gadget knows, for some other reason, what that is for one member of the list. It would then be able to deduce information about people above or below that person in the list.
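A sketch of the shape of this (in Python standing in for Caja’s JavaScript, with all names invented): the gadget holds opaque handles, only the host can resolve them for display, and sorting mints fresh handles precisely so the order can’t be linked back to the originals:

    # Sketch (names invented; Python standing in for Caja/JavaScript).
    import os

    class Host:
        def __init__(self):
            self._store = {}

        def seal(self, value):
            handle = os.urandom(8).hex()  # opaque to the gadget
            self._store[handle] = value
            return handle

        def render(self, handle):
            # Only ever shown to the user, never returned to the gadget.
            return str(self._store[handle])

        def sorted_handles(self, handles):
            # Re-seal the sorted values: the new handles are unlinkable
            # to the ones the gadget already holds, so the sort order
            # reveals nothing about them.
            return [self.seal(v)
                    for v in sorted(self._store[h] for h in handles)]

    host = Host()
    ages = [host.seal(a) for a in (42, 17, 29)]  # gadget sees only hex blobs
    ranked = host.sorted_handles(ages)           # fresh, unlinkable handles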

Interestingly, you could allow the gadget to do arbitrary processing on the contents of the opaque capabilities, so long as it gave you (for example) a piece of code that could be confined only to do processing and no communication. Modulo wall-banging, Caja could make that happen. Although it might initially sound a bit pointless, this would allow the gadget to produce output that could be displayed to the user, despite the gadget itself not being allowed to know that output.

Note that because of covert channels, it should not be thought that this prevents the leakage of sensitive data – to do that, you would have to forbid any processing by the gadget of the secret data. But what this does do is prevent inadvertent leakage of data by (relatively) benign gadgets, whilst allowing them a great deal of flexibility in what they do with that data from the user’s point of view.

14 Dec 2007

Notification on Personal Data Breaches

Filed under: Anonymity/Privacy,Civil Liberties,Security — Ben @ 14:17

The government waited nearly a month before revealing that they had lost personal data on 25 million UK citizens. Presumably they could have waited forever if they’d thought they’d get away with it.

If you agree that there ought to be a law obliging organisations to reveal such breaches, then the petition for you is right here.

10 Nov 2007

Shirley Williams on the Identity Card

Filed under: Anonymity/Privacy,Civil Liberties — Ben @ 17:11

I listened to Shirley Williams today speak about the identity card on the always excellent “Any Questions” programme on Radio 4. She is not a fan. First of all she made it clear that she believed the LSE’s estimate of the cost, at £19 billion, rather than the government’s, at £5.6 billion. But then she got really quite outspoken:

I think the ID cards are much more serious than people realise … the absolute key thing, and I can’t stress this enough, is that the level of data that the government proposes to collect under the ID bill … adds up, in my view, to a Big Brother scheme of the most terrifying kind.

Because it is so expensive, our government will sell our data to commercial interests

It will be a record of where you’ve been, what you’ve done, who you’ve talked to, and I think it’s a terrifying scheme and I’m another person who’s prepared to say I wouldn’t cooperate with it in any way at all (lots of applause)

When asked if she would court jail in her resistance to ID cards, she responded

Of course … My view is that the identity card will undermine individual civil liberty so seriously that one is entitled to say that one won’t cooperate with it. I have not suggested I would use violence, I am suggesting I wouldn’t cooperate with it, nor will I.

Yes, yes, Shirley, but there’s no need to beat about the bush – tell us what you really think! 🙂

I wonder if Shirley supports No2ID?

5 Nov 2007

Self-issued Cards Are More Secure

Filed under: Anonymity/Privacy,Identity Management,Security — Ben @ 21:01

Pamela Dingle takes some Liberty dudes to task for being obsessed with the letter of the spec. Her perfectly reasonable stance is that if she chooses to link a self-issued infocard to her bank account, then that’s at least as secure as any other means we know of for authenticating. Of course, she’s left out of this equation how she gets to make that association, and, of course, the Liberty dudes think you should only make such associations via the middleman of some kind of certificate issuer.

But there’s no reason to involve any card issuers in this at all – we have to have a relationship with the bank to get this off the ground in the first place, regardless of authentication mechanism, and, however that relationship works, we can use it to inform the bank about our self-issued card. Once we’ve done that we have strong authentication with the bank, no need for IdPs, CAs or any of that stuff. In fact, our authentication is stronger than if we had involved a third party – with a self-issued card, no-one else is in a position to make a forgery.
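A sketch of the whole dance, using the Python “cryptography” package (names invented): the “card” is just a keypair, enrolled over the relationship we already have:

    # Sketch using the "cryptography" package. The self-issued card is a
    # keypair; the bank learns the public half over the existing
    # relationship, then verifies challenges. No IdP or CA anywhere, and
    # nobody but the holder can forge it.
    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Enrolment: done once, over an already-authenticated channel
    # (in-branch, or a logged-in session).
    card = Ed25519PrivateKey.generate()  # the "self-issued card"
    bank_records = card.public_key()

    # Login: the bank sends a fresh challenge; the card signs it.
    challenge = os.urandom(32)
    signature = card.sign(challenge)
    bank_records.verify(signature, challenge)  # raises if forged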

And, of course, we’ve removed a potential correlator of our activities from the equation. Score one for privacy.

30 Oct 2007

BBC on the iPlayer

Filed under: Anonymity/Privacy,Digital Rights,Open Source — Ben @ 13:56

An interesting podcast with Ashley Highfield, Director Future Media & Technology.

We’re not doing enough [about open source] and it is something I want to turn up the heat on

Well, that’s a good start, but he then goes on to say

The problem at the moment, there is no open source DRM. It’s almost a contradiction in terms, if you have DRM how can you have it open source? Because open source people will be able to find out how it works and get round it.

Oh, dear. Because, of course, no-one will work out how the Microsoft DRM works, just like they haven’t worked out all the other DRMs out there. Not.

In any case, this entirely misses the point: there is no DRM on the broadcast signal, nor was there on old-fashioned video tapes. Why are downloads different? Why is it not sufficient to rely on the law, as has always happened in the past? Why not assume that your users are mostly honest rather than treat them like criminals?

Clearly there’s a vast amount of money to be made by selling “DRM” solutions to gullible old media companies. It is sad that the BBC, who don’t even have to protect their profits, do not have the collective brains to see through this scam.

Perhaps there is light at the end of the tunnel?

Where do we go from here? … The solution then is to say either we look at a future beyond DRM or we’re going to find it very hard to put our content onto open source solutions.

But he is just teasing – they don’t actually look at this future, so I guess their choice is to not put their content onto open source solutions!

On eating your bandwidth

We do make people aware of it

so that’s alright then. He goes on to say

We’ve also got to … work better with the ISPs to ensure that they don’t throttle … iPlayer type content

I think he needs to add Parliament to his list of people to work better with, after the recent lunacy from Lord Triesman.

They go on to try to justify the use of DRM in terms of maintaining contact with their audience and their responsibility for the quality of the broadcasts – others could, it seems, put out crappy versions of their free stuff. But hold on, why would anyone download the crappy version when you could have the good version for free from the BBC? Not explained, I suppose it must be obvious.

But it’ll all be alright in the future broadcasting panopticon, when omniscient and omnipotent Auntie can rule, godlike, over all use of “their” content.

Once we get to that stage, where the content, wherever it goes, can have all the rules associated with how it should behave, and once it’s able to tell us who’s viewing it, where they’re viewing it … then it doesn’t really matter where the content goes

Oh goody! So if I lie back and allow total privacy rape, then kind, generous Auntie will consider relaxing DRM.

28 Sep 2007

Has Cardspace Become Passport?

I reviewed an article about identity management the other day. It got me thinking about what is really used out there, and what for?

People like to hail OpenID as a huge success, but as far as I can see its popularity is entirely on the provider side. There are no consumers of note.

Similarly, Cardspace appears to live in its own little world, supported only by Microsoft products.

Funnily enough, the only thing that seems to really be used much is SAML, widely used in enterprise SSO and in Shibboleth.

So why does this make Cardspace like Passport? Well, the fear with Passport was that Microsoft would control all your identity. The end result was that Microsoft was the only serious consumer of Passport. When Cardspace is deployed such that all providers and consumers of identity are really the same entity, then all its alleged privacy advantages evaporate. As I have pointed out many times before, when consumers and providers collude, nothing is secret in Cardspace (and all other standard signature-based schemes). So, there’s no practical difference between Cardspace and Passport right now.
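The collusion problem is embarrassingly simple to state; a sketch with invented names and logs:

    # Sketch (all names and logs invented): in any scheme where the
    # IdP's signature travels to the relying party, colluding parties
    # can simply join their logs on that signature.
    idp_log = {"sig-8f3a": "alice@example.com",
               "sig-77c1": "bob@example.com"}
    rp_log = {"sig-8f3a": "browsed support-group forum, 03:14"}

    for sig, activity in rp_log.items():
        if sig in idp_log:
            print(idp_log[sig], "->", activity)
    # alice@example.com -> browsed support-group forum, 03:14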

(Sorry, no links today, I’m in a hurry)

19 Aug 2007

Brad Fitzpatrick on the Social Graph

Filed under: Anonymity/Privacy,Identity Management,Security — Ben @ 0:22

Brad Fitzpatrick writes about a problem that is essentially the same as my motivating example. His proposal avoids what I consider the interesting problems by only dealing with public data, though I think I would dispute that by so doing he solves 90% of the problem.

I also worry about whose perception of public is the correct one. If I have, say, a Facebook and a Flickr account, and a friend who knows what they both are, will I be happy if that friend broadcasts the fact that they’re both me? Possibly not.

In any case, interesting reading.

