Ben Laurie blathering

29 Apr 2008

Fun With FreeBSD and gmirror

Filed under: General — Ben @ 20:49

A while ago I moved a lot of my stuff from a very ancient box to a quite new one. For some reason the new one has three disks in it, and so we (that is, the ultra-patient Lemon and I) decided to mirror two of them. Not really having need of a third enormous disk, we left it spare for now (possibly this was unwise, in retrospect).

Since I run FreeBSD on my server boxes, we used gmirror. Being adventurous, we also decided we were going to mirror the root partition – slightly nerve-wracking, because when FreeBSD boots, it boots from the (unmirrored) root partition. But the theory is this works fine with mirrored disks.

So, we had three disks, which FreeBSD saw as ad4 (ata2-master), ad5 (ata2-slave) and ad6 (ata3-master). We figured that ad4 and ad6 should be the mirrors, since they are on different controllers. So that’s what we did and it all works fine.
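For anyone attempting the same, the setup runs roughly along these lines – a sketch only, using our device names, so check gmirror(8) and the FreeBSD Handbook before trying it on disks you care about:

```shell
# Sketch of mirroring ad4 and ad6 as gm0 (the device names and the
# round-robin balance algorithm reflect our setup, nothing more).
gmirror label -v -b round-robin gm0 /dev/ad4

# Load the mirror module at boot:
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

# Point /etc/fstab at the mirror instead of the raw disk, e.g.
# /dev/ad4s1a becomes /dev/mirror/gm0s1a, then reboot.

# Once running from the mirror, attach the second disk and let it sync:
gmirror insert gm0 /dev/ad6
gmirror status
```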

Fast forward several months and it’s time to upgrade the kernel. We’re moving from FreeBSD 6.x to FreeBSD 7.x, so it’s slightly nerve-wracking, but I do what I always do, which is to build the system from source, following the time-honoured

1. `cd /usr/src' (or to the directory containing your source tree).
2. `make buildworld'
3. `make buildkernel KERNCONF=YOUR_KERNEL_HERE' (default is GENERIC).
4. `make installkernel KERNCONF=YOUR_KERNEL_HERE' (default is GENERIC).
[steps 3. & 4. can be combined by using the "kernel" target]
5. `reboot' (in single user mode: boot -s from the loader prompt).
6. `mergemaster -p'
7. `make installworld'
8. `make delete-old'
9. `mergemaster'
10. `reboot'

(btw, mergemaster -U is good medicine for step 9). Everything goes fine – until I realise uname -a is still reporting we’re running FreeBSD 6.2! WTF?

Well, to make a long story short, for some reason the BIOS thinks ata2-slave (i.e. our “spare” disk!) is the “first” disk, and so this is what it boots off. Presumably during the build the system got installed on “disk 1”, whichever that happened to be (we didn’t actually do the base build ourselves). Then when mirroring was set up, our “disk 1” didn’t match the BIOS’s, and confusion reigned.

The happy ending to this story, though, is that

  • You can run FreeBSD 7 userland on a FreeBSD 6.2 kernel!
  • Switching the BIOS to boot off “disk 2” (i.e. ata2-master) made everything work as it should

I am recording this episode not because I think it is very interesting but because I hope it’ll be useful to someone else.

26 Apr 2008

Do We Need Credentica?

Filed under: Anonymity,Crypto,Open Source,Privacy,Security — Ben @ 20:22

I read that IBM have finally contributed Idemix to Higgins.

But … I am puzzled. Everyone knows that the reason Idemix has not been contributed sooner is because it infringes the Credentica patents. At least, so says Stefan – I wouldn’t know, I haven’t checked. But it seems plausible that at least IBM think that’s true.

So, what’s changed? Have IBM decided that Idemix does not infringe? Or did Microsoft let them publish? Or what?

If it’s the former, then do others agree? And if it’s the latter, then in what sense is this open source? If IBM have some kind of special permission with regard to the patents, that is of no assistance to the rest of us.

It seems to me that someone needs to do some explaining. But if the outcome is that Idemix really is open source, then what is the relevance of Credentica?

Incidentally, I wanted to take a look at what it is that IBM have actually released, but there doesn’t seem to be anything there.

Can Phorm Intercept SSL?

Filed under: Crypto,Open Source,Privacy — Ben @ 18:24

Someone asked me to comment on a thread over at BadPhorm on SSL interception.

In short, the question is: can someone in Phorm’s position decrypt SSL somehow? The fear is driven by the existence of appliances that do just this. But these appliances need to do one of two special things to work.

The first possibility is where the appliance is deployed in a corporate network to monitor traffic going from browsers inside the corporation to SSL servers outside. In this case, what you do is have the SSL appliance act as a CA, and you install its CA certificate in each browser’s store of trusted CAs. Then when the appliance sees an SSL request go past, it quickly creates (some would say “forges”) a certificate for the server the request is destined for and, instead of routing the connection on to the real server, answers it itself, using the newly created certificate. Because the browser trusts the appliance’s CA this all looks perfectly fine and it will proceed without a warning. The appliance then creates an outgoing connection to the real server and acts as a proxy between the browser and server, thus getting access to the plaintext of the interaction.
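The certificate-minting step is nothing exotic – it can be sketched with stock openssl (the names and file paths here are invented for illustration; real appliances do this on the fly, per server):

```shell
# 1. The appliance's own CA, which victims must somehow be induced
#    to trust (names and paths invented for illustration):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout ca.key -out ca.crt -subj "/CN=Sneaky Appliance CA"

# 2. On seeing a request for www.example.com, mint a fresh key and a
#    certificate request in that server's name:
openssl req -newkey rsa:2048 -nodes \
    -keyout forged.key -out forged.csr -subj "/CN=www.example.com"

# 3. Sign it with the appliance's CA. A browser that trusts ca.crt
#    will now accept forged.crt as the real server's certificate:
openssl x509 -req -days 365 -in forged.csr \
    -CA ca.crt -CAkey ca.key -set_serial 1 -out forged.crt
```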

I’d note in passing that in Netronome’s diagram they show a “trust relationship” between the webserver and the SSL appliance. This is not correct. There need be no relationship at all between the webserver and the appliance – indeed it would be fair to say that many a webserver operator would view what the appliance is doing as downright sneaky. Or dishonest, even.

But, in any case, inside the corporation this behaviour seems fair enough to me – they’re paying for the browser, the machine it runs on, the network connection and the employee’s time. I guess they have a right to see the data.

Could Phorm do this? Well, they could try to persuade anyone stupid enough to install a CA certificate of theirs in their browser, and then yes, indeed, this trick would work for them. Moral of the story: don’t install such certificates. Note that last time I looked, if you wanted to register to do online returns for VAT you had to install one of these things. Oops!

Or, they could get certified as a CA and get automatically installed in everyone’s browser. I’m pretty sure, however, that such a use of a CA key would find them in breach of the conditions attached to their certification.

So, in short, Phorm can only do this to people who don’t understand what’s going on – i.e. 99% of Internet users. But not me.

The second scenario is to deploy the SSL interception appliance at the webserver end of the network (at least, this is how it’s usually done), and have it sniff incoming connections to the webserver. However, to break these connections it needs to have a copy of the webserver’s private key. I’m reasonably confident that the vast majority of webserver operators will not be handing over their private keys to Phorm, so even “99%” users are safe from this attack.

By the way, if you want to see this one in action, then you can: the excellent network sniffer, Wireshark, can do it. Full instructions can be found here. No need to buy an expensive appliance.
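For reference, Wireshark’s decryption hinges on a single preference naming the server’s RSA key – something like the following, where the address and path are placeholders (and this only works for plain RSA key exchange, not ephemeral Diffie-Hellman suites):

```
# Format: ip,port,upper-layer-protocol,private-key-file
ssl.keys_list: 192.0.2.1,443,http,/path/to/server.key
```

Pass that with -o to tshark while reading a capture of the server’s traffic and out comes the plaintext.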

25 Apr 2008

Yet Another Version Control System (and an Apache Module)

Filed under: Distributed stuff,Open Source — Ben @ 22:32

I recently finished off mod_digest for Canonical. To you: the guys that make Ubuntu.

In the process I was forced to use yet another distributed version control system, Bazaar. Once I’d figured out that the FreeBSD port was devel/bazaar-ng and not devel/bazaar, I quite liked it. All these systems are turning out to be pretty much the same, so it’s the bells and whistles that matter. In the case of Bazaar the bell (or whistle) I liked was this

$ bzr push
Using saved location: s

Yes! In Monotone, I’m permanently confused about branches and repos and, well, stuff. Mercurial makes me edit a config file to set a default push location. Bazaar remembers what I did last time. How obvious is that?
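As far as I can tell, the remembered location simply lives in the branch’s own configuration – something like this (the URL is made up):

```
# .bzr/branch/branch.conf after the first explicit push:
push_location = sftp://example.org/srv/bzr/mod_digest
```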

23 Apr 2008

Why You Should Always Use End-to-End Encryption

Filed under: Anonymity/Privacy,Crypto,Privacy,Security — Ben @ 17:46

A Twitter user has had all her private messages exposed to the world. This is one of the reasons I try to avoid sending private messages (at least, ones that I would like to remain private) over any system that does not employ end-to-end encryption.

At least then my only exposure is to my correspondent, not the muppets that run the messaging service I used.

One service this poor unfortunate has done for the world, though, is to provide an excellent example of why you should use cryptography routinely: you need not have any more to hide than your embarrassment.

Incidentally, I am going to stop using the combined tag “Anonymity/Privacy” after this post – clearly they are not always both applicable.

Phorm Legal Analysis

Filed under: Anonymity/Privacy,Security — Ben @ 17:37

FIPR‘s Nick Bohm has written a fascinating legal analysis of Phorm’s proposed system. It’s nice that RIPA’s effects are not all bad, but it turns out that, in Nick’s opinion, Phorm are on the hook for a number of other offences under various acts…

  • The Regulation of Investigatory Powers Act 2000
  • The Fraud Act 2006
  • The Data Protection Act 1998

He also beats up Simon Watkin of the Home Office (well-known in UK privacy circles for spending a great deal of energy trying to persuade us all that RIPA [then known as RIP] was going to be alright, really), for a note he wrote which suggested that Phorm’s business model was just fine under RIPA. Simon stays true to form by pointing out that the note wasn’t actually advice, and was not based on paying any attention at all to what Phorm were actually proposing. One has to wonder, then, what the point of writing it was?

Perhaps more disturbingly, Nick also talks about what may be the first attempt at enforcement against Phorm. Not surprisingly, the police say they’re too busy and it’s the Home Office’s problem, and the Home Office say it’s not their job to investigate offences under RIPA. Isn’t it lucky, then, that we are doing their investigating for them?

I’m also pleased to see that Nick supports my view that the consent of both the user and the web server must be obtained for Phorm’s interception to be legal under RIPA:

RIPA s3(1) makes it lawful if the interception has the consent of both
sender and recipient (or if the interceptor has reasonable grounds for believing
that it does). This raises the question of whose consent is required for the
interception of communications of those using web browsers.

I’m also intrigued by Nick’s analysis of Phorm’s obligation under the Data Protection Act. Where sensitive personal data is processed by Phorm, then the user’s consent must be obtained. Nick argues that Phorm will see information relating to

• their racial or ethnic origin,
• their political opinions,
• their religious or similar beliefs,
• whether they are members of a trade union,
• their physical or mental health or condition,
• their sexual life,
• the commission or alleged commission by them of any offence, or
• any proceedings for any offence committed or alleged to have been
committed by them, the disposal of such proceedings or the sentence of
any court in such proceedings

It occurs to me that Nick has missed a trick here: the user might also view sensitive data relating to a third party – for example, they might participate in a closed web forum where, say, sexual preferences are discussed. In this case, it seems to me, the consent of that third party would need to be obtained by Phorm.

20 Apr 2008

Oyster is Toast

Filed under: Crypto,Security — Ben @ 12:55

The MiFare stream cipher, as used in Oyster cards, has been comprehensively cracked. The researchers claim they can recover the key in well under 5 minutes after observing a single transaction.

When will people learn that making up your own ciphers is a fantastically bad idea?

16 Apr 2008

Nice Review of Caja

Filed under: Capabilities,Open Source,Programming,Security — Ben @ 1:41

Tim Oren posted about Caja.

…this adds up to a very good chance that something that’s right now fairly obscure could turn into a major force in Web 2.0 within months, not years. Because Caja modifies the de facto definition of JavaScript, it would have an immediate impact on any scripts and sites that are doing things regarded as unsafe in the new model. If you’ve got a Web 2.0 based site, get ready for a project to review for ‘Caja-safety’. If the Caja model spreads, then the edges of the sandbox are going to get blurry. Various users and sites will be able to make choices to allow more powerful operations, and figuring out which ones are significant and allow enhanced value could be a fairly torturous product management challenge, and perhaps allow market entry chances for more powerful forms of widgets and Facebook-style ‘apps’.

End of message.

6 Apr 2008

Conflicting Roles

Filed under: Identity Management,Security — Ben @ 23:51

Pamela Dingle writes about the problems of people having conflicting roles. Funnily enough I’m working on a paper about roles, too, but more on that later. Right now I wanted to observe that the problem she describes

There is no simple way to say that John is a broker 100% of the time, but 50% of the time he represents Client A and only Client A, and the other 50% he solely represents Client B. There is no way to represent mutual exclusivity of roles in a single user profile (that I’m aware of).

can be handled in an interesting way in SE-Linux: there you can make the rule that once the user (or rather, a program acting on behalf of the user) has accessed any resource corresponding to Client A, he is no longer allowed to access resources corresponding to Client B, and vice versa. Of course, leaping from this to the idea that you’ve built a real Chinese Wall between the two clients is falling foul of one of the fallacies of DRM: of course the user can find ways to transport data across that wall. But, nevertheless, SE-Linux is a system in which it is possible to express such policies.
