Links

Ben Laurie blathering

30 Aug 2009

Porting Chromium to FreeBSD

Filed under: Open Source,Programming — Ben @ 19:27

In my copious spare time, I’ve started work on a FreeBSD port of Chromium. So far it doesn’t even build, but I’ve identified a few problem areas people might like to pitch in on.

  • NSS port: the FreeBSD NSS port needs to be upgraded to at least NSS 3.12, and ideally to the latest version, since that fixes some security holes.
  • Sound: mostly whatever Linux does is likely to be the right thing for FreeBSD, too, so much of the porting work at this stage is pretty mechanical. But sound isn’t – the Linux port uses ALSA, which FreeBSD has no truck with. I am informed that the right thing to use on FreeBSD is OSS. But someone needs to do that.
  • Build farm: building this sucker is slow. I wouldn’t mind having access to a nice big build farm.

So far none of the work has been committed, but I am optimistic that some will be soon. In the meantime, you’ll be needing some patches.

There’ll probably be another patch soon – I’m going through the rest of the code right now getting the parts that don’t need NSS or ALSA building.

In order to build this lot, you’ll need to configure like this:

cd src && GYP_GENERATORS=make python tools/gyp/gyp_chromium -D 'OS=freebsd' -D 'use_system_libxml=1' build/all.gyp

and build using gmake:

cd src && gmake chrome

Right now I’m just manually removing anything that fails on NSS or ALSA from the .mk files.
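
If you get bored of doing that by hand, something like the following rough sketch might help. It just comments out any line in the generated .mk files that mentions NSS or ALSA – the pattern is a guess on my part, so expect to have to be more surgical in places:

#!/usr/bin/env python
# Sketch: comment out lines mentioning NSS or ALSA in generated .mk
# files. Crude, but saves some hand-editing; check the results.
import os
import re

PATTERN = re.compile(r'alsa|nss', re.IGNORECASE)

def neuter(path):
    lines = open(path).readlines()
    out = []
    changed = False
    for line in lines:
        if PATTERN.search(line) and not line.lstrip().startswith('#'):
            out.append('# ' + line)
            changed = True
        else:
            out.append(line)
    if changed:
        open(path, 'w').writelines(out)

for dirpath, dirnames, filenames in os.walk('src'):
    for name in filenames:
        if name.endswith('.mk'):
            neuter(os.path.join(dirpath, name))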

You’ll also need a bunch of ports installed; here’s what my notes currently say (there’s a quick version-check sketch after the list):

Python >= 2.4       (lang/python24, lang/python25, lang/python26, lang/python30)
Perl >= 5.x         (lang/perl5.6, lang/perl5.8, lang/perl5.10)
gcc/g++ >= 4.2      (lang/gcc42, lang/gcc43, lang/gcc44, lang/gcc45)
g++-multilib >=4.2  (?)
bison >= 2.3        (devel/bison == 2.4.1)
flex >= 2.5.34      (base system: 2.5.4 - oops?)
gperf >= 3.0.3      (lang/gperf - NB: base system has /usr/bin/gperf == 2.7.2, port installs /usr/local/bin/gperf)
pkg-config >= 0.20  (devel/pkg-config == 0.23)
libnss3-dev >= 3.12 (security/nss == 3.11.9_2 - hand install?)
libgconf2-dev       (devel/gconf2)
libglib2.0-dev      (devel/glib20)
libgtk2.0-dev       (x11-toolkits/gtk20)
libnspr4-0d >= 4.7.1+1.9-0ubuntu0.8.04.5 (ubuntu0.8.04.1 causes duplicate dtoa references)   (devel/nspr?)
libnspr4-dev >= 4.7.1+1.9-0ubuntu0.8.04.5  (devel/nspr)
msttcorefonts (Microsoft fonts)
freetype-dev        (print/freetype)
libcairo2-dev       (no port!)
libdbus-1-dev       (devel/dbus)
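
Since it’s easy to end up with the base system’s ancient gperf (or some other stale tool) first on your PATH, here’s a quick sanity check I’d suggest running before kicking off a long build. The commands and minima come straight from the notes above, so treat it as a sketch:

#!/usr/bin/env python
# Print the first line of each tool's version output so you can eyeball
# it against the minima in the notes above (gperf is the usual trap:
# /usr/bin/gperf is 2.7.2, the port's /usr/local/bin/gperf is 3.x).
import subprocess

TOOLS = [
    ['gperf', '--version'],       # want >= 3.0.3
    ['bison', '--version'],       # want >= 2.3
    ['flex', '--version'],        # want >= 2.5.34
    ['pkg-config', '--version'],  # want >= 0.20
]

for cmd in TOOLS:
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT, universal_newlines=True)
    print('%-12s %s' % (cmd[0], p.communicate()[0].split('\n')[0]))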

Get in touch if you want to help. Or join the chromium-dev mailing list and pitch in.

22 Aug 2009

I Hope You’re Not In A Hurry

Filed under: Art/Music — Ben @ 23:58

The aforementioned Felix just pointed me at this superb comic art, by someone who is learning how to make a comic by making a comic. In my opinion, this guy is a graphic novel god.

The Waldo Project

He thinks it might take him some time. If it’s going to look this good, I can wait.

Japanese Curry

Filed under: Recipes — Ben @ 19:04

Last night I cooked an absolutely delicious (if I say so myself) Japanese curry for my guests – no recipe, because just copying out somebody else’s seems uncool, though I believe it is technically permitted. You can, however, find the recipe in Madhur Jaffrey’s Ultimate Curry Bible.

I had no idea the Japanese even ate curry until I came across this recipe, but anyway, some comments…

Firstly, it was fantastically quick to cook, 10 minutes from start to end – far, far faster than any other curry I know how to make. It did take a while to prepare, mostly because slicing enough beef for eight people into 1/8 inch thick pieces takes quite some time. Even with a ceramic knife.

Secondly, it’s not really a Japanese curry, it’s Madhur’s guess at how to cook one without the specialist Japanese ingredients.

Thirdly, I served it with plain boiled rice and hot and sour aubergine (my older son, Felix, cooked it), which comes from another masterpiece of Madhur Jaffrey’s: “World Vegetarian Cookbook” – which no kitchen is complete without. Incidentally, this is the dish for which I am most often asked to provide a recipe. Again, sorry, you’ll have to get the book (by the way, there’s an American version of this book which is strangely different, but equally good, as far as I can tell – I don’t know if this recipe is in there).

Fourthly, Felix and I were of the opinion that just stir-frying the marinated meat would make a pretty fine dish, too. I haven’t tried it yet, but I intend to.

Fifthly, my younger son informs me that this recipe is highly inauthentic because it contains garlic, which is practically banned in Japan, he claims. I don’t care, it was lovely.

Finally, I used double cream instead of whipping cream, since that’s what I had. Seemed to work just fine.

16 Aug 2009

Useful Security

Filed under: Rants,Security — Ben @ 15:59

A while back, I had a bash at formal methods. Reasonably enough, some people had a bash back. I feel I should respond.

Michael asked “what about Tokeneer?”

The description on that page makes some great points for me. Firstly, and I think most importantly

At each stage in the process verification activities were undertaken to ensure that no errors had been introduced. These activities included review and semi-formal verification techniques applicable to the entities being developed.

In other words: “we can’t actually apply formal methods to the entire process, so we did some ad hoc stuff, too”. Core to this problem is that you have to somehow describe what it is you are trying to do. In order to disguise the problem, formal methods folk like to call this a specification – but when you get down to it, it’s a program. Which can have bugs in it. Which you can only diagnose by thinking about it and testing.

Next, from the overview, section 5

Since release 7.4, SPARK has included an “accept” annotation that can be used to justify expected warnings and errors. These annotations have been added as appropriate.

In other words, verification fails, but these failures are “justified”. Hmmm.

Again from the overview (section 5.3): even after formal verification a bug was found, an elementary integer overflow. I would hope any competent programmer would have spotted this as they were writing the code, but apparently it was beyond all this expensive and painful infrastructure.

Finally (there’s more, but I have other things to write about, so I’ll stop here), again from the summary

# lines of code: 9939
# total effort (days): 260

Wow. That’s a lot of days for not very much code.

Toby asked “how would you feel about a proposal that asked for a range of standard software modules whose design had been subjected to formal analysis, at some semi-useful and reasonable level of abstraction, of some of its key functional/security properties?”

I guess I feel about this rather as Gandhi felt about Western civilisation: it would be a good idea.

More positively, I imagine there are actually some modules that are sufficiently small in scope that one could convince oneself that the specification was actually correct, and maybe even prove that the implementation matched the specification. For example, things like arrays, sets and maps might be implementable correctly. Where it all falls apart, in my view, is when you try to make a system that actually does something useful: then you get into the realm where debugging the specification is the hard problem.

Ian Brown asked “do you think a formally verified microkernel that enforces security controls within an OS is achievable/desirable?”

I think this actually might be achievable, and perhaps even desirable. But I’m not so sure it would be truly useful. A microkernel architecture inherently punts on all the interesting security questions and simply enforces whatever incorrect decisions you have made outside the kernel. So, you are still left with all the real-world security problems, but at least you have a place to stand when you claim you are enforcing whatever security properties you think your code implements (that is, you can discount the microkernel as a source of problems and only have to worry about the other 99% of the code).

I also strongly suspect that a team of skilled engineers could carefully write a secure microkernel that was just as robust without any need for formal verification. More quickly and with less swearing.

Finally, Anil Madhavapeddy writes “Modern type systems spring from a solid formal footing, and mainstream languages are adopting functional features”.

This is a great point, and I actually agree. I’m a big fan of type safety, as anyone who’s seen some of the hoops I jumped through in both the Apache module system and more recently in OpenSSL will know. I really like things to break at compile-time rather than run-time if at all possible, and type safety is one way to achieve this (this is one reason I prefer C++ to C, despite its many defects). I guess functional languages also have interesting properties from that point of view but I feel like I understand them less. I really must learn Erlang (or Haskell, I suppose, but I’ve tried and failed a few times already – it seems there are no good tutorials out there).

Anil also says “Even for C, static analysis is increasingly used to track down bugs (with products like Coverity which are very effective these days)”.

Sorry, but no. I thought for quite a while that there was a future in static analysis. But the more I am exposed to it, the less I think it is true. The false positive rate is still fantastically high, even in Coverity, which is probably the best system I’ve played with, and even correct hits tend to be on the “academically correct but not actually useful” side.

I do continue to suspect that static analysis combined with annotation might be useful, though (e.g. as in Deputy). But really, this is just trying to bolt strong typing onto weakly typed languages and isn’t truly static analysis as we hoped it might be.

Finally, he says “If things continue like they have been, then we’ll continue to cherry pick the practical developments from the formal methods community into mainstream languages, and reap the benefits.”

I certainly hope so, and I don’t want to discourage further research into formal methods. I just object to the notion that they are practical to the extent that we should be trying to use them wholesale to build real systems. They really aren’t.

I was intending to also talk a bit about things I think actually are useful for security, but I think I’ll leave that for a later post.

Ceramic Knife

Filed under: Recipes — Ben @ 14:43

I’ve been lusting after a ceramic knife for ages now. I finally got around to buying a Kyocera 16cm knife. Absolutely awesome, I’ve never had such a sharp knife, really nice to use, and very light, too.

2 Aug 2009

Rigour

Filed under: Rants,Security — Ben @ 17:45

I know this is ancient history now, but I was busy, OK?

A while back, Schneier said something that really annoyed me:

Commenting on Google’s claim that Chrome was designed to be virus-free, I said:

Bruce Schneier, the chief security technology officer at BT, scoffed at Google’s promise. “It’s an idiotic claim,” Schneier wrote in an e-mail. “It was mathematically proved decades ago that it is impossible — not an engineering impossibility, not technologically impossible, but the 2+2=3 kind of impossible — to create an operating system that is immune to viruses.”

What I was referring to, although I couldn’t think of his name at the time, was Fred Cohen’s 1986 Ph.D. thesis where he proved that it was impossible to create a virus-checking program that was perfect. That is, it is always possible to write a virus that any virus-checking program will not detect.

Now, if what you’re interested in is PR, then it seems you can get away with these kinds of statements; certainly I have not seen a single public challenge. But if you care about rigour, you have to do rather better, since Schneier’s claim is demonstrably wrong. Why? Well, here goes …

Cohen’s proof relies on computer science’s only trick: diagonalisation[1]. Basically, I assume that I have some perfect virus detector, V. If I give V a program, p, it returns true or false, depending on whether p is a virus or not. Let’s be charitable and assume that we can define what a virus is well enough to allow such a program to exist. Let’s also define what is meant by “perfect” – by that we mean that any program that exhibits virus behaviour will be classified as a virus and any that does not will be classified as not a virus.

Then Cohen says: fine, write a program, c, like this:

if(V(c))
  do_nothing();
else
  be_evil();

Now, if V(c) returns true (i.e. c is a virus), then c does nothing, and therefore V is wrong. Similarly, if it returns false, then c behaves evilly, and once more V is wrong. QED, no perfect virus checker is possible. So far, we are in agreement.
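
In case the self-reference looks like sleight of hand, here’s the same construction as a Python sketch. The checker V is a hypothetical import and be_evil() is a stand-in; the only real trick is that c can read its own source and feed it to V:

#!/usr/bin/env python
# Cohen's diagonal program, c: ask the supposedly perfect checker V
# about our own source, then do the opposite of what it predicts.
import sys

from virus_checker import V  # hypothetical: V(source) -> True if virus

def be_evil():
    pass  # stand-in for actual virus behaviour

my_source = open(sys.argv[0]).read()
if V(my_source):
    pass          # V says we're a virus; we do nothing, so V is wrong
else:
    be_evil()     # V says we're clean; we act as a virus, so V is wrong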

Can we go from this to “it is always possible to write a virus that any virus-checking program will not detect”? No, because the proof only talks about perfect virus-checking programs. If the virus checker is allowed to be wrong sometimes, then the proof no longer works. In particular, if the virus checker can return false positives (i.e. claim that innocent programs are viruses) but is not allowed to return false negatives, then we can, indeed, have a virus checker that would keep our system free of viruses. Why? Because the virus checker will always detect a virus, by definition, but the diagonalisation proof no longer works – in particular, the case where V(c) is true no longer leads to a contradiction.

If we want to go a little further and show that such a program can, in fact, exist, we can actually do that quite easily. For example, consider the program V that always returns true: this would prevent any programs at all from running, so our OS wouldn’t be all that useful, but it would be virus-free. Less frivolously, we could have a list of non-virus programs, and V could return false for any program in the list and true for all others. Even less frivolously, it is possible to imagine an analysis that’s thorough enough for some restricted set of cases to permit reasonably general programs to pass the test without allowing any viruses (obviously we would also disallow many perfectly innocent programs, too), but at this point we’d have to define “virus” to drill down into what that analysis might be – but it could, for example, require that the program be written in some restricted, easily-analyzed language, and avoid constructs that are hard to deal with.
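
To make the whitelisting version concrete, here’s a minimal sketch; the audited paths are placeholders, and of course the whole scheme stands or falls on the whitelist actually being virus-free:

import hashlib

def digest(path):
    # sha256 of the program's bytes
    f = open(path, 'rb')
    try:
        return hashlib.sha256(f.read()).hexdigest()
    finally:
        f.close()

# Hypothetical whitelist: digests of programs we have audited by hand.
KNOWN_GOOD = set(digest(p) for p in ['/bin/ls', '/bin/cat'])

def V(path):
    # True iff we classify the program as a virus. Anything not on the
    # whitelist counts as a virus: false positives are allowed, false
    # negatives are not. Cohen's diagonal program gains nothing: if it
    # isn't on the list, V says "virus" and we refuse to run it, and V
    # being "wrong" in that direction breaks no guarantee.
    return digest(path) not in KNOWN_GOOD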

So, sorry, Schneier. It has not been shown that it is impossible, in the 2 + 2 = 3 sense, to write a virus-free OS. Indeed, it has been shown that it is, in fact, possible – though I would certainly agree that it is an open question how hard it would be to create an OS that’s both useful and virus-free.

[1] Don’t get me wrong; it’s a good trick.
