But should I? Does anyone care?
A long time ago, I tried to extend Python to support capabilities. It didn’t work out well: by the time Python has been compiled, too much information has been lost to enforce the confinement that capabilities require. Also, the Python developers don’t seem all that interested in capabilities – nor, apparently, in security, since the restricted execution mode is no longer maintained.
Anyway, much later I realised that modifying the interpreter wasn’t the way to go – it is much better to compile a modified version of the language into the standard language, which turns out to be far easier.
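To make the idea concrete, here is a toy sketch of confinement by source-to-source translation – this is an illustration in Python, not CaPerl’s actual implementation, and the guard function name is invented for the example. Every attribute access in the untrusted source is rewritten into a call to a run-time guard before the code is compiled, so the confined code never gets raw attribute access at all:

```python
# Illustrative sketch: confine a language by rewriting its source
# rather than by modifying the interpreter. A tiny Python-to-Python
# transform turns every attribute read into a guarded call.
# (guarded_getattr and the underscore policy are invented for this example.)
import ast

class AttributeGuard(ast.NodeTransformer):
    """Rewrite obj.attr (in load position) into guarded_getattr(obj, 'attr')."""
    def visit_Attribute(self, node):
        self.generic_visit(node)
        if isinstance(node.ctx, ast.Load):
            return ast.Call(
                func=ast.Name(id='guarded_getattr', ctx=ast.Load()),
                args=[node.value, ast.Constant(node.attr)],
                keywords=[],
            )
        return node

def guarded_getattr(obj, name):
    # The policy enforced at run time: deny "private" attributes,
    # through which confined code could escape its sandbox.
    if name.startswith('_'):
        raise PermissionError(f'access to {name!r} denied')
    return getattr(obj, name)

def confine(source):
    """Compile untrusted source through the rewriter."""
    tree = ast.fix_missing_locations(AttributeGuard().visit(ast.parse(source)))
    return compile(tree, '<confined>', 'exec')

env = {'guarded_getattr': guarded_getattr, 'result': None}
# Ordinary attribute access still works...
exec(confine('result = (3+4j).imag'), env)
print(env['result'])  # 4.0
# ...but the rewritten code cannot reach dunder attributes.
try:
    exec(confine('x = (3+4j).__class__'), env)
except PermissionError as e:
    print(e)
```

The point is that the interpreter runs entirely standard code; all the enforcement was injected at translation time.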
So, I did this for Perl, on the basis that if you can secure Perl you can surely secure anything. I’ve given a couple of talks about it, but so far haven’t released any code. I finally got off my arse and did the first release. Very poorly documented, I’m afraid, but there is at least a mailing list!
You can find CaPerl here.
Pretty much every time someone starts talking about computer security they soon get around to talking about trust. But trust is such a bad way to describe what’s going on. Let’s look at a few examples…
TCG specifications will enable more secure computing environments without compromising functional integrity, privacy, or individual rights.
Well, OK, they could be used that way. But will they be? Of course not: this is all about Disney owning your computer.
Anyway, I could go on, but luckily I don’t have to. A friend pointed me to a rather good presentation on the subject by Dieter Gollmann, which I mostly agree with.
Cory Doctorow points to some dude called Michael Arrington who talks about how the world needs a better online backup product. Clearly he hasn’t done the sums.
My default policy is to back up everything – in my experience, trying to choose what to back up is a great way to miss something vital. So, let’s say I have a rather modest 100 GB disk, and I have the usual ADSL link, i.e. 128 kb/s up. How long would it take me to do the first backup? 75 days. Really.
So, assume I get smart and can winnow that down to only 10% of my disk: then it still takes 7.5 days. Assume further that by some miracle only 10% of the files change each day; then daily backups take 0.75 of a day. That is, my uplink is maxed out most of the time.
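The arithmetic is easy to check (assuming decimal GB and kb, and ignoring protocol overhead, which is why the raw figure comes out a little under the 75 days quoted above):

```python
# Sanity check of the backup arithmetic: 100 GB disk, 128 kb/s uplink.
disk_bits = 100e9 * 8          # 100 GB (decimal) in bits
uplink_bps = 128e3             # 128 kb/s upstream

full_backup_days = disk_bits / uplink_bps / 86400
print(round(full_backup_days, 1))        # ~72.3 days before any overhead

# Winnowed down to 10% of the disk:
print(round(full_backup_days / 10, 2))   # ~7.23 days
# If only 10% of that changes each day, every incremental backup takes:
print(round(full_backup_days / 100, 2))  # ~0.72 of a day: uplink nearly maxed
```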
Clearly this sucks.
He goes on to suggest that whoever produces this non-viable product should also offer the use of 500 GB for $20 a year. I’m not sure where he buys disks, but where I buy them this means I might recover the original investment in, oh, 10 years or so – so long as the rest of the hardware is free and I buy in huge bulk. Great business model.
An interesting question is: will this get any better? In other words, does uplink speed per dollar improve faster than disk size per dollar? I’m sure someone has the historical data for that … let me know!
Apparently Ian Grigg is troubled by the word.
Mind you, Ben claims that x.509 is not suitable because “standard X.509 statements are verifiable, but not minimal nor unlinkable.” I’m troubled by that word “verifiable.” Either an x.509 cert points to somewhere else and therefore it in itself is not verifiable, just a reliable pointer to somewhere else, or the somewhere else is included in which case we are no longer talking about x.509.
By “verifiable” I mean you can check the signature, nothing deeper.
In response to my claim that no-one knows whether Credentica is supported, he says…
Actually, I’ve been working with Stefan to ensure that Credentica (the name of Stefan’s system) can work within the InfoCard model. I’ve said publicly that if it can’t, our implementation needs to be fixed.
This is a fascinating debating technique – respond to criticism by agreeing with it. Yes, Kim, it’s good that you’re prepared to fix it if it’s broken (though that does make the interesting assumption that it can be fixed), but that is exactly what I said – you don’t know whether it is supported.
In what seems to be a response to my assertion that Law 4 is broken he says…
Beyond this, the basic InfoCard implementation allows the blinding of the identity provider to the identity of the relying party by putting that identity through a one-way function with per-user salt. Any identity provider can then manufacture unidirectional identities and sign assertions without knowing what site they are being submitted to.
This doesn’t fix the problem. Clearly the site they are submitted to will know who the identity provider is, and so collusion between providers and sites is still possible.
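For clarity, here is my reconstruction of the blinding Kim describes – a one-way function over the relying party’s identity with a per-user salt. The function name and the use of SHA-256 are my assumptions, not details from the InfoCard specification:

```python
# Sketch (my reconstruction, not actual InfoCard code) of blinding the
# identity provider to the relying party: the site's identity is passed
# through a one-way function with a per-user salt, so the provider sees
# only an opaque, per-user identifier.
import hashlib
import secrets

def unidirectional_id(per_user_salt: bytes, relying_party: str) -> bytes:
    # One-way: the provider cannot invert this to recover the site name,
    # and different users yield unrelated values for the same site.
    return hashlib.sha256(per_user_salt + relying_party.encode()).digest()

salt = secrets.token_bytes(16)  # held by the user agent, not the provider
blinded = unidirectional_id(salt, 'https://example.com')
print(blinded.hex())

# The provider signs an assertion scoped to `blinded` without learning
# the site. But, as argued above, the *site* still learns which provider
# signed the assertion, so provider-site collusion can re-link the user.
```

This is why the blinding, even taken at face value, only hides the site from the provider; it does nothing about collusion in the other direction.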
He goes on to misunderstand what I said about Sxip…
To the extent that sxip wants its own unique user experience that has nothing to do with the user experience of other identity systems, then any common UI is “wrong for Sxip”. But Sxip should be able to distinguish between offering a basic identity experience within the framework of a metasystem (for example, working with InfoCard), and providing a unique value-add through its own supplementary UI (such value-add is a good and great idea).
My point was that Sxip apparently needs to interact with the user at a point where InfoCard is not prepared to do interaction, not that it is trying to offer a unique experience. However, since I don’t know Sxip well enough to be sure I should stop hammering on this point and leave it to those that do.
In summary, he says…
Nothing is being swept under the carpet. My goal is to deliver increasing clarity as we move forward.
Traditional certificates are linkable. But InfoCard Identity Providers can easily produce unlinkable identity assertions.
I keep hearing this but I don’t hear any satisfactory explanation of how. If the “unlinkable assertion” is in the form of a traditional certificate, then it is linkable by the Identity Provider, and the Identity Provider is known to the relying party, of necessity (since the certificate is signed by the provider, who must be trusted by the relying party), and so they can collude to reveal the original certificate (or whatever other assertion was made).
Finally, Kim says:
I need to write in a systematic way about the design decisions and capabilities of the Identity Metasystem proposal. Hopefully as that happens we can zero in on things that need to be fixed and extended going forward.
Indeed, this would be a good thing. But this is further evidence of the incompleteness of InfoCard. Yet we’re told it’ll be released to the public in a few months. Do I believe it will deliver on the promises by then? Not on the evidence so far. Are assurances of future detailed explanation supporting the unsupportable claims reassuring? No.
Don’t get me wrong, InfoCard has potential to be a very good thing – but only if it’s done right and not rushed out the door accompanied by empty promises.
I just posted two old papers that I’d somehow managed to not put up before.
The first is on Apres, a system for anonymous presence and the second is Minx, an anonymous packet format that defeats traffic marking attacks by making all packets valid (even corrupt ones), which I wrote with George Danezis.
I also have an implementation of Apres, as a Perl library, and two instantiations of it. The first is a pair of command-line tools that communicate using TCP/IP (over something like Tor, of course) and the second is an IRC bot that acts as the server and an xchat plugin that is the client. I have not bundled those up and put them somewhere yet, but I will someday (sooner if anyone actually wants to play with them: nag me).
My friend and fellow Shmoo, Bruce Potter, has an amusing (and correct) rant about choosing an OS, which he gave at Defcon this year. Anyway, he’s turned it into an article. The article isn’t as much fun as the talk, but you missed the talk, didn’t you?