Ben Laurie blathering

27 Jan 2009

Caja on the Yahoo! Developer Network

Filed under: Caja,Programming — Ben @ 4:40

Here’s a nice (and brief) article about developing for the Yahoo! Application Platform, and, in the process, covering some of the benefits and gotchas of Caja.

Caja enforces the W3C standards for JavaScript, HTML, and CSS, regardless of browser (!).
Yes, you read that correctly: if your code cajoles, it will run in both Firefox and IE. This is amazing, and its value cannot be overestimated.

18 Jan 2009

United Are Bastards

Filed under: Rants — Ben @ 20:41

I just accidentally went through the whole booking process on United’s US site. There were many seats in Economy Plus. But I couldn’t book because I don’t have a US address. So, I went through it all again on the UK site. Guess what? Economy Plus is allegedly booked solid. Bastards.

16 Jan 2009

Will GPLv3 Kill GPL?

Filed under: Open Source,Rants — Ben @ 14:49

I started looking at the LLVM project today, which is a replacement for the widely used gcc compiler for C and C++. My interest in this was prompted by thinking once more about static analysis, for which it is pretty much impossible to use gcc, and likely to remain so, because Stallman opposes the features that would enable it.

Anyway, being an optimist, I thought perhaps the interest in LLVM and clang (the C/C++ front end) was prompted by a sudden surge of interest in open source static analysis, but asking around, it seems it is not so.

The primary motivator appears to be GPLv3. Why? Well, here are a few facts.

  • GPLv3 is not compatible with GPLv2. Don’t take my word for it, believe Richard.
  • Linux is, of course, famously GPLv2 without the upgrade clause, and hence GPLv3 incompatible.
  • FreeBSD, for example, are unlikely to accept software into the core that is GPLv3. No new licence can be used without core team approval and I am told this has not been given for GPLv3 and is not likely to be.
  • Commercial users of open source have always been a bit twitchy about GPLv2, but they’re very twitchy indeed about GPLv3. And don’t tell me commercial users are not important: these days they are the ones financing the development of open source software.

GCC is, apparently, going to move to GPLv3 – it says here that GCC 4.2.1 would be the last version released under GPLv2 (which is a bit rum, because I just checked GCC 4.4 and it is GPLv2. What gives?).

So, pretty clearly, there’s a need for a C/C++ compiler that is not GPLv3, and this, it would seem, is the real driver for LLVM.

Obviously this issue is not confined to GCC. As more software moves to GPLv3, what will the outcome be? Will the friction between GPL and other licences finally start persuading projects that free != GPL, and that BSD-style licences better suit their needs? Or will it just be that GPLv3 fails to make headway? We can only hope for the former outcome.

14 Jan 2009

Verisign Demonstrate Their Lack of Integrity

Filed under: Security — Ben @ 14:29

On the 7th of May, 2008, Debian fixed the now famous OpenSSL Weak PRNG bug. So, I’m pretty stunned to read, over 9 months later, Verisign’s newsletter saying

Earlier this year, the Debian organization discovered a vulnerability that weakened the system’s Random Number Generator, making SSL encryption predictable

Err, earlier last year. A lot earlier.

They go on to say

We have identified which customers were affected. If one of your customers has a weak certificate, we’ll send you details on how to resolve the situation, with templates for talking to customers about the problem and policies for replacing weak certificates.

Affected certificates must be replaced as soon as possible. VeriSign will begin revoking any certificates that are still affected by this vulnerability in early 2009.

Well, that’s mighty big of you, Verisign! You’ve left your customers exposed to this problem for over 9 months – even if they replace the certificates, they are still vulnerable to attack, of course – and now that 75% of vulnerable certificates have expired, you’re beginning to think you might start revoking them. Soon. Let me guess – does “early 2009” mean May? I’m sure you wouldn’t be so cynical as to wait until there were no certificates to revoke, would you?

By the way, if anyone gets the details on “how to resolve the situation” from Verisign I’d be very interested to see them. I wonder how much the resolution will cost?

Update: A former Verisign employee pointed out Verisign’s values

…We exercise integrity in all aspects of our business. And with ferocious drive, we take the initiative to carry out all actions with exceptional execution by acting decisively…

If this is ferocious, I wonder what Verisign’s version of laid-back looks like?

12 Jan 2009

Radio 4 on Open Source and Creative Commons

Filed under: Open Source — Ben @ 13:32

The Radio 4 programme “In Business” was all about free stuff and Creative Commons last night. Because of Auntie’s lameness, this link will presumably only work for a week.

Update: find the podcast here.

10 Jan 2009

Jabber Pain

Filed under: Programming,Troubleshooting — Ben @ 17:59

For a while, it’s been apparent to me that Jabber was occasionally dropping messages. Last week I finally got annoyed enough to investigate it in earnest.

Unfortunately, I started off on entirely the wrong track, and blamed GTalk (sorry, guys!) – but much investigation later, with help from some very patient friends (you know who you are: thanks!), I found that it was my own Jabber server that was to blame.

However, it was not an easy journey. First of all, how do you tell messages are being dropped? I am pretty certain my server has been dropping messages since before Christmas – i.e. at least four weeks, and I am fairly certain it has been doing it ever since I first built it – which must be a year or two now. Could it be that it could drop messages for that long and no-one noticed? It seems to me, in retrospect, that it could! A wise friend of mine once said, “you know, 90% of what we say to each other could be completely different and it would make no difference”. This is even more true for IM. We send messages out. Sometimes we get answers. Sometimes we don’t. If we don’t, well, the other guy was away, or not interested, or got busy and forgot to respond. It’s fine. It was probably one of the 90%. When it’s one of the 10%, well, then we say it again. And this time we get an answer, and we’re both happy. So, you can go on for years and not notice that stuff is missing.

It wasn’t until I started badgering my friends to tell me when they thought messages were going missing that it became clear that they were, indeed. And not just a few – a lot! I now know that it was dropping about 50% of incoming messages (i.e. messages sent to me) and no outgoing messages. God knows what kind of rude bastard my friends think I am by now! An interesting feature is that it would drop them in batches – i.e. drop for 5 minutes, forward for 5 minutes, drop for 5 minutes and so on. If it had been every second message it would have been apparent sooner, I suspect, because the conversation would be quite choppy.

But even knowing that messages were being dropped was not the end of the story. How do you figure out what is to blame? In the typical scenario, because I run my own server, there are at least 3 connections and 4 pieces of software that could be at fault.

  1. The other guy’s client,
  2. the connection from that to their server,
  3. their server,
  4. the connection from their server to my server,
  5. my server,
  6. the connection from my server to my client,
  7. my client

As I said above, I started at the wrong end – with GTalk. With some help, it became apparent that GTalk was unlikely to be to blame (and because it was upstream from the other guy’s client, we could eliminate that, too). So the next easiest target to look at was my server – which I did, with the help of tcpdump and Wireshark, though investigation was complicated by both OTR and SSL, which make it very hard to interpret and track messages. Luckily the server-to-server connection was in plain text (which is one reason I use OTR), so it could be done, with difficulty – particularly since it turned out that my jabber daemon was the culprit – so I could see messages coming in in the traces, and no corresponding activity in the server-to-client connection. Sometimes.

To cut a long story short, after much poking at my existing jabber server, which was jabberd14, I decided to replace it with jabberd2. But before I did that, I wanted to be really sure that jabberd14 was to blame, and that jabberd2 would fix it. So, I wanted the Jabber equivalent of ping. To my amazement, there appears to be no such thing! There is a Jabber ping extension, but I can’t find anything that uses it. Which is the final reason I am writing this blog post: I wrote a pair of scripts that will do a Jabber ping test, so feel free to use them. And if you are using jabberd14, I’d really like to know if you, too, get message drops…
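For the curious, the ping extension in question (XEP-0199) is tiny: it is just an IQ stanza of type “get” carrying an empty `<ping/>` element in the `urn:xmpp:ping` namespace, which the far end answers with an IQ of type “result” and the same id. Here is a minimal sketch of building the two stanzas in Python – the function names are mine, and a real ping script would of course also need an XMPP connection and authentication, which this omits:

```python
import uuid
import xml.etree.ElementTree as ET

PING_NS = "urn:xmpp:ping"

def make_ping_iq(from_jid: str, to_jid: str) -> str:
    """Build an XEP-0199 ping: <iq type='get'><ping xmlns='urn:xmpp:ping'/></iq>."""
    iq = ET.Element("iq", {
        "from": from_jid,
        "to": to_jid,
        "id": uuid.uuid4().hex,  # unique id, echoed back in the pong
        "type": "get",
    })
    ET.SubElement(iq, "{%s}ping" % PING_NS)
    return ET.tostring(iq, encoding="unicode")

def make_pong(ping_xml: str) -> str:
    """Answer a ping: swap from/to, keep the id, type becomes 'result'."""
    ping = ET.fromstring(ping_xml)
    pong = ET.Element("iq", {
        "from": ping.get("to"),
        "to": ping.get("from"),
        "id": ping.get("id"),
        "type": "result",
    })
    return ET.tostring(pong, encoding="unicode")
```

Matching pongs to pings by id is also how you ignore answers from a stale session, as my Ping script does.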

I was planning to make them count and produce statistics and such, but I got lazy. Since you can see both ends, eyeballing them is enough to let you know what’s going on – Ping does count how many it got back, though, so you can leave it running without watching it all the time. To run them, you need two Jabber accounts, one on the suspect server and the other elsewhere. You can run them like this:

./ account1 password1
./ account2 password2

Pong will actually answer multiple Pings running simultaneously. Ping pings every 10 seconds. Output should be reasonably obvious. Because Jabber does store-and-forward, Ping will ignore Pongs from a different session. And because they use different resources, you can use the same account at both ends, if you want. Like I say, I’d be really interested to hear from anyone that experiences drops – a couple of hundred pings was always enough to show them when I was testing.

Oh yeah, and the good news: jabberd2 has now answered over 500 pings without a single drop. So, if you felt ignored, I hope things will improve!

9 Jan 2009

CodeCon Comes Back

Filed under: General — Ben @ 11:20

After an absence of several years (the last one was in 2006), CodeCon is back!

CodeCon has long been one of my favourite low-cost conferences, focusing on stuff that actually works. And I hear this year it won’t be in a nightclub, which may be bad from a beer point of view, but is good from the sticky floor perspective…

Check out the CFP.

8 Jan 2009

OpenPGP:SDK V0.9 Released

Filed under: Crypto,Open Source — Ben @ 23:14

A long time ago my sometimes collaborator, Rachel Willmer, and I started work on a BSD-licensed OpenPGP library, sponsored by Nominet.

Things slowed down a bit when, shortly after getting the initial code done, I got hired by Google – that was a bit distracting. But Rachel has been plugging away ever since, with occasional interference from me. So, I’m pleased to announce that we’ve reached the point of a somewhat feature complete release, for some value of “feature complete”, OpenPGP:SDK V0.9.

Of course, it’s an open source project, so contributions and bug reports are welcome. More to the point, if anyone would rather use a library than shell out to gpg, I’d be happy to help them figure out how.

7 Jan 2009

Yet Another Serious Bug That’s Been Around Forever

Filed under: Crypto,Open Source,Programming,Rants,Security — Ben @ 17:13

Late last year the Google Security Team found a bug in OpenSSL that’s been there, well, forever. That is, nearly 10 years in OpenSSL and, I should think, for as long as SSLeay existed, too. This bug means that anyone can trivially fake DSA and ECDSA signatures, which is pretty damn serious. What’s even worse, numerous other packages copied (or independently invented) the same bug.
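The advisory described the bug class as callers mis-checking the return value of the signature verification routine: it returns 1 for a good signature, 0 for a bad one, and -1 for an error (such as a malformed signature), but the buggy callers treated anything nonzero as success. A minimal sketch of the pattern in Python, with illustrative function names (the real code is C, in OpenSSL and its callers):

```python
def verify_final(sig_ok: bool, internal_error: bool) -> int:
    """Stand-in for a C-style verify routine: 1 = good, 0 = bad, -1 = error."""
    if internal_error:
        return -1
    return 1 if sig_ok else 0

def broken_check(ret: int) -> bool:
    # The bug: any nonzero return, including the -1 error case,
    # is accepted as a valid signature.
    return ret != 0

def correct_check(ret: int) -> bool:
    # Only an explicit 1 means the signature verified.
    return ret == 1
```

So an attacker who can provoke the error path (say, with a deliberately malformed signature) gets their “signature” accepted by the broken check – exactly the kind of tristate-return confusion a decent static analyser ought to flag.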

I’m not sure what to say about this, except to reiterate that it seems people just aren’t very good at writing or reviewing security-sensitive code. It seems to me that we need better static analysis tools to catch this kind of obvious error – and so I should bemoan, once more, that there’s really no-one working seriously on static analysis in the open source world, which is a great shame. I’ve even offered to pay real money to anyone (credible) who wants to work in this area, and still, nothing. The closed source tools aren’t that great, either – OpenSSL is using Coverity’s free-for-open-source service, and it gets a lot of false positives. And it didn’t find this rather obvious (and obviously statically analysable) bug.

Oh, I should also say that we (that is, the OpenSSL Team) worked with oCERT for the first time on coordinating a response with other affected packages. It was a very easy and pleasant experience, I recommend them highly.

6 Jan 2009

The Atheist Bus

Filed under: General — Ben @ 19:37

I’m pretty amused by this very successful appeal, which has put on buses all over England

Atheist bus advert

and I’m really impressed that they raised £135,000 (against a target of £5,500) to do it!

As an atheist humanist myself, I can only disagree with them in one respect: the use of the word “probably”. The website they’re reacting to, after all, doesn’t leave much room for doubt. They don’t say “You will probably be condemned to everlasting separation from God and then you might spend all eternity in torment in hell”, do they?

(via Boing Boing)

4 Jan 2009

Ken Hom’s “Chinese Cookery”

Filed under: Recipes — Ben @ 19:39

I cooked a snack this afternoon, crab and sweetcorn soup, a classic dish which I always cook from the fantastic book “Chinese Cookery” by Ken Hom. As we ate it my older son, who is home from university for a few weeks, complained yet again that you can’t get the book anymore. So, I decided to have a look to see if I could find a secondhand copy. To my amazement it is in print again, as of a few days ago, and you can get it for about a tenner from Amazon!

Of all Ken Hom’s books this is my favourite – great food with simple recipes that actually work, and can be achieved without too many specialist ingredients or tools. My own copy has been used so much that it is in three pieces.
