Links

Ben Laurie blathering

9 Oct 2012

I Hate Java, or How To Make Ant Run Javascript

Filed under: Open Source,Programming — Ben @ 18:29

I am writing this up because it took me all day to figure out, and the Interwebz was surprisingly unhelpful!

So, if you’re trying to configure EJBCA, or maybe something else whose author thought that including Javascript in a makefile was a really great idea, you may run into this error, when you run ant:

/disk1.1/usr/home/ben/software/unpacked/ejbca_4_0_12/propertyDefaults.xml:11: Unable to create javax script engine for javascript

If you Google for that, maybe you’ll now find this article, but as I write, you get a lot of hits on people with this problem, and a paucity of useful advice. So, here’s how I fixed it…

First off, the core problem is that Ant needs BSF (the Bean Scripting Framework) in order to run scripts at all, plus Rhino in order to run Javascript. And to complicate matters further, BSF needs Jakarta Commons Logging. It turns out you can get these in various ways, some of which I’ve tested. In particular, you can generally download zips of jars (and other junk) from the projects’ sites (described as binaries, for some reason), or you might be able to install packages or ports. In practice I got BSF from its own site, logging by installing a port, and I tried Rhino both ways. So, once you’ve got the relevant jars, the next bit of magic is to let Ant know about them.

You can either install them in Ant’s library directory, wherever that is, or name them in a -lib argument on the command line. Like this:

ant -lib '../bsf-2.4.0/lib/bsf.jar:/usr/local/share/java/classes/commons-logging.jar:/usr/local/share/java/rhino/rhino.jar' <target>

And Robert, as they say, is a close relative. Update: putting them on the classpath does work, so long as you remember the classpath is separated by colons and not semicolons. On some platforms. However, this does not make jrunscript work.

So, this solved my immediate problem, but I’m now really curious to know how jrunscript can be made to understand JS. Coz right now, despite having all the stuff I need to make Ant do it, I get

script engine for language js can not be found

How about it, lazyweb?

BTW, I hate Java.

28 Apr 2012

Using Capsicum For Sandboxing

Filed under: Capabilities,General,Programming,Security — Ben @ 18:07

FreeBSD 9.0, released in January 2012, has experimental Capsicum support in the kernel, disabled by default. In FreeBSD 10, Capsicum will be enabled by default.

But unless code uses it, we get no benefit. So far, very little code uses Capsicum, mostly just experiments we did for our paper. I figured it was time to start changing that. Today, I’ll describe my first venture – sandboxing bzip2. I chose bzip2 partly because Ilya Bakulin had already done some of the work for me, but mostly because a common failure mode in modern software is mistakes made in complicated bit twiddling code, such as decompressors and ASN.1 decoders.

These can often lead to buffer overflows or integer over/underflows – and these often lead to remote code execution. Which is bad. bzip2 is no stranger to this problem: CVE-2010-0405 describes an integer overflow that could lead to remote code execution. The question is: would Capsicum have helped – and if it would, how practical is it to convert bzip2 to use Capsicum?

The answers are, respectively, “yes” and “fairly practical”.

First of all, how does Capsicum mitigate this problem? The obvious way to defend a decompressor is to run the decompression engine in a separate process with no privilege beyond that needed to get its job done – which is the ability to read the input and write the output. In Capsicum, this is easy to achieve: once the appropriate files are open, fork the process and enter capability mode in the child. Discard all permissions except the ability to read the input and write the output (in Capsicum, this means close all other file descriptors and limit those two to read and write), and then go ahead and decompress. Should there be a bug in the decompressor, what does the attacker get? Well, pretty much what he had already: the ability to read the input file (he supplied it, so no news there!) and the ability to write arbitrary content to the output file (he already had that, since he could have chosen arbitrary input and compressed it). He also gets to burn CPU and consume memory. But that’s it – no access to your files, the network, any other running process, or anything else interesting.
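
In code, the shape of it is something like this – a minimal sketch of the idea rather than the actual patch, using the newer cap_rights_limit() interface (FreeBSD 10 onwards; FreeBSD 9 spelt this differently), with the closing of stray descriptors, the stderr right and the RPC machinery discussed below all omitted, and decompress() standing in for the real engine:

#include <sys/capsicum.h>

#include <err.h>
#include <unistd.h>

extern void decompress(int in_fd, int out_fd);  /* the bit-twiddling engine */

static void sandboxed_decompress(int in_fd, int out_fd)
    {
    cap_rights_t rights;

    if (fork() == 0)
        {
        /* Child: cut the two open files down to the bare minimum... */
        if (cap_rights_limit(in_fd, cap_rights_init(&rights, CAP_READ)) < 0)
            err(1, "cap_rights_limit(input)");
        if (cap_rights_limit(out_fd, cap_rights_init(&rights, CAP_WRITE)) < 0)
            err(1, "cap_rights_limit(output)");
        /* ...then irrevocably give up access to everything else. */
        if (cap_enter() < 0)
            err(1, "cap_enter");
        decompress(in_fd, out_fd);
        _exit(0);
        }
    /* Parent: wait for the child and service its RPCs (not shown). */
    }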

I think that’s pretty neat.

But how hard is it to do? I answer that question in a series of diffs on GitHub, showing a step-by-step transformation of bzip2 into the desired form. I used a technique I like to call error-driven development: the idea is to make changes that cause compilation to fail anywhere the code still needs attention, so that the build only succeeds once you have completely accomplished your goal. This is a useful way to reassure yourself that you have made all necessary updates and there’s nothing hiding away you didn’t take care of. If you follow along by building the various stages, you’ll see how it works.

It turns out that in bzip2 this matters – it isn’t very beautifully written, and the code that looks like it might cleanly just take an input file and an output file and do the work in isolation, actually interacts with the rest of the code through various function calls and globals. This causes a problem: once you’ve forked, those globals and functions are now in the wrong process (i.e. the child) and so it is necessary to use RPC to bridge any such things back to the parent process. Error-driven development assures us that we have caught and dealt with all such cases.
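
If you want the flavour of the technique without reading the diffs, it amounts to this (illustrative C – the names here are made up, not from the actual patches): make the thing you must no longer touch unusable, and let the compiler enumerate the work for you.

/* Step 1: this global now lives in the parent process, so poison its
   name; every remaining direct use in the child is a compile error. */
#define bz_global  bz_global_must_go_via_rpc

/* Step 2: fix each error the compiler reports by replacing the direct
   access with an RPC to the parent: */
extern int rpc_get_bz_global(void);

static int child_reads_global(void)
    {
    return rpc_get_bz_global();   /* was: return bz_global; */
    }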

So how did this work out in practice? Firstly, it turns out we have to give the compressor a little more privilege: it writes to stderr if there are problems, so we need to also grant write on stderr (note that we could constrain what it writes with a bit more effort). The callbacks we have to provide do not, I think, let it do anything interesting: cause the program to exit, make the output file’s permissions match the input file’s, and remove the input or output files (ok, removing the input file is slightly interesting – but note that bzip2 does this anyway).

Secondly, because we have not yet decided on an RPC mechanism, this particular conversion involves quite a bit of boilerplate: wrapping and unwrapping arguments for RPCs, wiring them up and all that, all of which would be vastly reduced by a proper RPC generator. Try not to let it put you off 🙂

Finally, the system has (at least) one global, errno. I did not deal with that so far, which means some errors will report the wrong error – but it is not particularly hard to do so.
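
(The fix will look something like this – carry errno in each RPC reply and restore it on the far side; a hypothetical sketch, not anything in the diffs:)

#include <errno.h>

/* Hypothetical reply structure: the result plus the remote errno. */
struct rpc_reply
    {
    int ret;
    int remote_errno;
    };

static int rpc_finish(const struct rpc_reply *reply)
    {
    errno = reply->remote_errno;   /* restore errno from the other process */
    return reply->ret;
    }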

So, on to the diffs. This is something of an experimental way to present a piece of development, so I’d be interested in feedback. Here they are, in order:

And there you are: bzip2 is now rendered safe from decompressor exploits, and it was only a few hours work. As we refine the support infrastructure, it will be even less work.

1 Oct 2011

Open Source Transcription Software Developer

Filed under: Open Data,Open Source,Programming — Ben @ 18:06

Since we set up FreeBMD, FreeREG and FreeCEN things have come a long way, and so we’re revisiting how we do transcription. Those great guys at Zooniverse have released their Scribe transcription software, which they developed to use with Old Weather and Ancient Lives (and more to come), as open source.

We are working with them to develop a new transcription platform for genealogical records, based on Scribe, and we want to hire a developer to help us with it. Scribe itself is written in Ruby, so some familiarity with that would help. We also use Python and EC2, so knowing about those would be good, too. And the front-end is sure to be using Javascript, so there’s another tickbox to tick.

Finally, we intend to open source everything, and so a developer used to working in an open source community would be helpful.

Everything is negotiable. FreeBMD does not have offices, so this would be “work from home” (or the beach, or whatever suits you).

If you’re interested, send email to freebmd-sd@links.org. Feel free to forward this post, of course.

14 Dec 2010

Grown-up Arduino Programming

Filed under: Arduino/Freeduino,Programming — Ben @ 13:38

As I mentioned in a previous post, I am not a big fan of the Arduino IDE. Since writing that, I’ve discovered I like it even less, because it does some ad-hoc mangling of what you write to turn it from a nearly-C language into genuine C. As a result, it is possible to write some C++ and get away with it, but whether C++ stuff works or not seems entirely random.

It may sound nuts to want to write C++ for a processor so small. But read on – it turns out you can do some nice things at essentially zero cost. But first you need a proper C++ toolchain. One way to get hold of one is, in fact, to install the Arduino IDE, since it uses one under the hood. But I did it from scratch on my FreeBSD system. This turns out to be mostly easy, but there were a couple of wrinkles worth writing down for the greater good.

Firstly, on FreeBSD the compiler is available as a port, so I just installed it with my favourite ports tool

portmaster devel/avr-gcc

(note that there are several variants based on different versions of the compiler available – this is the default variant which, at the time of writing, is based on gcc 4.2.4).

Sadly, although the libc part of the toolchain is available as a port, too, at the time of writing both versions (devel/avr-libc and devel/avr-libc-devel) are broken because they depend on defunct source code. So, I had to build this one by hand, starting with avr-libc-1.6.8.tar.bz2 from http://download.savannah.gnu.org/releases/avr-libc/. This is not too hard, just a slightly customised configuration followed by the usual make commands:

./configure --prefix=/usr/local/avr --host=avr
make
make install

Setting the prefix to /usr/local/avr is advisable as some things get installed immediately below the prefix and so could conflict with native compilers and libraries. However, it does cause some things to end up in /usr/local/avr/avr. Oh, well.

Next up, a test program is a good idea. avr-libc comes with demo code, which can be found in /usr/local/avr/share/doc/avr-libc-1.6.8/examples/, but none of it is particularly well suited to an Arduino. So, I stole the Makefile from the demo sample and used this code instead of demo.c

#include <avr/interrupt.h>
#include <avr/io.h>

#define FLASH		PB5  // "Pin 13" (Arduino pin) - ATmega168 pin 19
#define CONTROL_PORT	PORTB
#define CONTROL_DDR	DDRB

static void ioinit(void)
    {
    CONTROL_DDR = _BV(FLASH);
    }

int main(void)
    {
    long n;

    ioinit();

    for( ; ; )
	{
	CONTROL_PORT &= ~_BV(FLASH);
	for(n=0; n < 300000; ++n)
	    ;
	CONTROL_PORT |= _BV(FLASH);
	for(n=0; n < 300000; ++n)
	    ;
	}
    }

and modified the Makefile to remove optimisation (essential, otherwise the delay loops get optimised away), select the right CPU (atmega168) and to modify these two lines

DEFS = -I /usr/local/avr/avr/include
LIBS = -B /usr/local/avr/avr/lib

-B is a new flag to me: it specifies where binaries and the crt0 files are found. The last ingredient is a way to upload to the Arduino. The utility avrdude can do this for you

avrdude -p m168 -P /dev/cuaU0 -c arduino -b 19200 -U flash:w:yourstuff.hex

Of course, delay loops are horrible, so my second attempt does this properly, using a timer interrupt. And this is where the C++ comes in: the “standard” way to set up the CPU is to write code like

#define FLASH		PB5  // "Pin 13" (Arduino pin) - ATmega168 pin 19
#define CONTROL_PORT	PORTB
#define CONTROL_DDR	DDRB

static void ioinit(void)
    {
    // PWM, 10-bit, phase-correct
    TCCR1A = _BV(WGM10) | _BV(WGM11);
    // Pre-scaler set to 1024
    TCCR1B = _BV(CS12) | _BV(CS10);
    // Set flash pin to output
    CONTROL_DDR = _BV(FLASH);
    // Enable timer 1 overflow interrupt
    TIMSK1 = _BV(TOIE1);
    sei();
    }

which is pretty revolting and involves a lot of manual-reading to understand. So, as is my habit when dealing with hardware, I tried wrapping it up in nice C++ classes to see what the run-time cost is. I won’t show the C++ classes here as they’re quite verbose and are a work in progress, but the net effect on the setup code is that it now looks like this

#define FLASH		PortB5  // "Pin 13" (Arduino pin) - ATmega168 pin 19

static void ioinit(void)
    {
    Control c;

    c.tc1.OC1ADisconnected();
    c.tc1.OC1BDisconnected();
    c.tc1.PWMPhaseCorrect10Bit();
    c.tc1.Prescaler1024();
    c.tc1.OverflowInterruptEnable();

    FLASH::Out(&c);

    c.Set();

    sei();
    }

which I hope you’ll agree is much more readable. The amazing thing is that, despite the increased verbosity, there’s no cost at all: this produces almost exactly the same assembler as the original code (it is in a slightly different order, though even that could be fixed if needed). The wonders of optimisation.

Note, by the way, that the use of a PWM mode is simply because the demo code I borrowed from actually did use PWM – but pin 19 (where the LED is on a standard Arduino/Freeduino) isn’t a PWM pin, so my code just uses the timer interrupt to time when to turn the LED on or off. The PWM is not really needed, but the timer has to be in some mode, so I haven’t yet bothered to figure out a more appropriate one.
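
(For reference, the interrupt handler itself is the easy part – something like this, in avr-libc terms; the sketch is mine, not the work-in-progress code:)

#include <avr/interrupt.h>
#include <avr/io.h>

/* Runs on each timer 1 overflow: toggle the LED pin. */
ISR(TIMER1_OVF_vect)
    {
    PORTB ^= _BV(PB5);
    }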

When I’ve got more stuff encapsulated in C++ I’ll start sharing the code.

10 May 2010

Programming Languages

Filed under: Programming — Ben @ 19:00

I don’t often go in for the reblogging thing, but this made me laugh out loud in several places. For example:

1986 – Brad Cox and Tom Love create Objective-C, announcing “this language has all the memory safety of C combined with all the blazing speed of Smalltalk.” Modern historians suspect the two were dyslexic.

24 Feb 2010

Stupid Mailing List

Filed under: Open Source,Programming,Security — Ben @ 12:49

It seems Stupid just won’t go away – Ben Clifford and I have been gradually expanding it – it now has functions and even primitive I/O. I’m working on a new test harness and we have regular discussions on language philosophy. So I figured it was time for a mailing list for developers (and other interested parties). Here it is.

7 Feb 2010

Perhaps Not So Stupid, After All?

Filed under: Crypto,Open Source,Programming — Ben @ 17:04

Stupid now generates correct (single-block, still) SHA-256 code in C. It has functions. We’re starting to wonder about adding structures, and the semantics of arrays – particularly whether an array passed for output can also be used for input (or vice versa). I’m inclining towards making that illegal – if you want a function that, say, fills in every second entry in an array, then you’d need to pass in the array to be filled in, and return a second array which would be the result. The function would have to copy the input array to the output before filling in the new values (or copy the parts it isn’t going to fill in). It seems to me this makes analysis simpler, but can easily be optimised by smart compilers, too.
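
In C terms, the every-second-entry example would come out something like this (my illustration – compute_entry() is a hypothetical stand-in for whatever generates the new values):

#include <stddef.h>
#include <stdint.h>

extern uint32_t compute_entry(size_t i);   /* hypothetical */

/* in and out must be distinct: out gets new values at the even
   indices and a copy of in everywhere else, and in is never written.
   That read-only/write-only split is what makes analysis simple. */
static void fill_even_entries(const uint32_t *in, uint32_t *out, size_t n)
    {
    size_t i;

    for (i = 0; i < n; ++i)
        out[i] = (i % 2 == 0) ? compute_entry(i) : in[i];
    }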

I guess it’s time we started writing some of this down! I’d also like to add generators for some common scripting languages, like Perl, Python and PHP.

The thing I’m a little scared of is that eventually, if I’m going to take this seriously, we’re going to need a bignum implementation – not too hard to do if you don’t care about efficiency, I guess.

1 Feb 2010

Writing Compilers Quickly

Filed under: Programming — Ben @ 14:10

James Donald asks

How did you bring up a compiler so quickly – what environment, what development system is the compiler written in?

Firstly, I’ve had a lot of practice – I’ve been writing (small) compilers for at least 20 years now. I’ve been writing code for over 35 years. That helps. But to talk about the reproducible parts of what I did…

Firstly, I wrote the compiler in Perl. I like Perl for quick hacks, and I also like to write Perl code “properly” – that is, I use modules and object oriented programming, not great monolithic blocks of Perl gibberish.

Secondly, I used a compiler-compiler, in this case, Parse::Yapp. This uses a specification language rather like BNF, and exactly like its predecessors, yacc and bison, so there was zero learning curve for me there.

I do remember finding yacc extremely puzzling when I first started with it, so if you are new to this, I would highly recommend Antlr and Antlrworks. The only reason I did not use Antlr is that its Perl support seems to be pretty experimental, and I didn’t want to spend time fighting the tools. Otherwise, Antlr is vastly superior to the whole yacc/bison/yapp thing. Antlrworks lets you interactively explore your parsing, complete with diagrams. It really is quite awesome and I love it.

Thirdly, I avoided using a lexer generator. In my experience, these are fiddly and more trouble than they’re worth, especially if you’re writing in Perl, where you have very nice pattern matching tools that allow you to write a lexer in not many lines of code (about 60 in the current version of Stupid).

Fourthly, I used monkey-patching to inject the language-dependent output part of the code. I’ve never really (consciously) used this technique before – I first came across it as a formal notion when we were wrestling with early versions of Caja, as it is very widely used in Javascript libraries. Although it is kinda evil and caused us untold pain with Caja, it does make for a very nice separation between components.

Fifthly, I kept the grammar of Stupid simple – I make life harder for the programmer in order to make the parser simpler. This is for two reasons: firstly, I was in a hurry; but more importantly, it is a design aim of Stupid that it should be clear to the programmer exactly what is happening. Getting clever with the grammar does not aid that process.

Sixthly, keeping your head clear is good! Parsers naturally produce parse trees with great ease, so building a parse tree and then later processing that is the way to go. Trying to do it all in one go rarely works well (single pass compilers are generally rather limited, though Stupid is technically mostly single pass at the moment). Getting used to recursion helps with processing parse trees.
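
(If you haven’t done this before, the shape of the thing is simply a recursive walk – sketched here in C rather than the Perl the compiler is actually written in:)

#include <stdio.h>

struct node
    {
    const char *text;            /* operator, or value for a leaf */
    struct node *left, *right;   /* NULL for leaves */
    };

/* Post-order walk: emit children, then the operator - which is
   already most of a code generator for a stack machine. */
static void emit(const struct node *n)
    {
    if (n == NULL)
        return;
    emit(n->left);
    emit(n->right);
    printf("%s\n", n->text);
    }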

Finally, I had a previous project, CaPerl, where I’d used Parse::Yapp, so I could crib from that to get off the ground rapidly.

As for the rest of the environment: FreeBSD for the operating system, emacs for the editor (wish there were a yapp/yacc mode – and even better, a Stupid mode!), Mercurial for version control.

24 Jan 2010

Stupid: A Metalanguage For Cryptography

Filed under: Crypto,General,Open Source,Programming,Security — Ben @ 20:32

Various threads lately have got me thinking about implementing cryptography and cryptographic protocols. As I have mentioned before, this is hard. But obviously the task itself is the same every time, by its very nature – if I want to interoperate with others, then I must implement effectively the same algorithm as them. So why do we ever implement anything more than once? There are various reasons, varying as to their level of bogosity. Here’s a few

  • Trust: “I don’t want to trust anyone else’s code”. This, in my view is mostly bogus. If you don’t trust others to write your crypto, then you’ve got some pretty big problems on your hands…
    • You’re likely to be using some pretty heavyweight stuff like SSL and/or X.509, and reimplementing those is a seriously major job. Are you really going to do that?
    • Unless you are also a cryptographer, then you’re trusting the guys that designed the crypto you’re implementing anyway.
    • Ditto protocol designer.
  • Languages: an implementation in Java isn’t going to work so well in Python. And although it’s true that you can plug C implementations into almost everything, there are legitimate and not-so-legitimate reasons for not wanting to do so…
    • You don’t trust native code: see above.
    • It makes your distribution hard to build and/or use and tricky to install.
    • You are running on a platform that doesn’t allow native code, for example, a web hosting company, or Google’s App Engine.
    • Native code has buffer overflows and MyFavouriteLanguage doesn’t: true, but probably the least of your worries, given all the other things that you can get wrong, at least if the native code is widely used and well tested.
  • Efficiency: you are not in the long tail of users whose transactions per second are measured in fractions. In this case, you may well want specialised implementations that exploit every ounce of power in your platform.

Of these, reimplementation for efficiency clearly needs a completely hand-crafted effort. Trust issues are, in my view, largely bogus, but if you really want to go that way, be my guest. So what does that leave? People who want it in their chosen language, are quite happy to have someone else implement it and are not in need of the most efficient implementation ever. However, they would like correctness!

This line of thinking led me to spend the weekend implementing a prototype of a language I call “Stupid”. The idea is to create a language that will permit the details of cryptography and cryptographic protocols to be specified unambiguously, down to the bits and bytes, and then compile that language into the language of your choice. Because we want absolute clarity, Stupid does not want to be like advanced languages, like OCaml and Haskell, or even C, where there’s all sorts of implicit type conversions and undefined behaviour going on – it wants it to be crystal clear to the programmer (or reviewer) exactly what is happening at every stage. This also aids the process of compiling into the target language, of course. So, the size of everything wants to be measured in bits, not vague things like “long” or “size_t”. Bits need to be in known places (for example, big-endian). Operations need to take known inputs and produce known outputs. Sign extension and the like do not want to happen magically. Overflow and underflow should be errors, unless you specifically stated that they were not, and so on.
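
To make that concrete, here’s the kind of thing C does silently that Stupid would make you spell out, or reject outright (my example, not Stupid output):

#include <inttypes.h>
#include <stdio.h>

int main(void)
    {
    uint8_t a = 200, b = 100;
    uint8_t sum = a + b;   /* promoted to int, then silently wrapped: 44 */
    int8_t s = -1;
    uint32_t u = s;        /* silently sign-extended: 0xffffffff */

    printf("%u %08" PRIx32 "\n", (unsigned)sum, u);
    return 0;
    }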

To that end, I wrote just enough compiler to take as input a strawman Stupid grammar sufficient to do SHA-256, and produce various languages as output, in order to get a feel for what such a language might look like, and how hard it would be to implement.

The result is: you can do something rough in a weekend 🙂

Very rough – but it seems clear to me that proceeding down this road with more care would be very useful indeed. We could write all the cryptographic primitives in Stupid, write relatively simple language plugins for each target language and we’d have substantially improved the crypto world. So, without further ado, what does my proto-Stupid look like? Well, here’s SHA-256, slightly simplified (it only processes one block, I was going cross-eyed before I got round to multiple blocks). Note, I haven’t tested this yet, but I am confident that it implements (or can be easily extended to implement) everything needed to make it work – and the C output the first language plugin produces builds just fine with gcc -Wall -Werror. I will test it soon, and generate another language, just to prove the point. In case the code makes your eyes glaze over, see below for some comments on it…

"This code adapted from Wikipedia pseudocode";

"Note 2: All constants in this pseudo code are in big endian";

"Initialize variables";
"(first 32 bits of the fractional parts of the square roots of the first 8 primes 2..19):";
uint32 h0 = 0x6a09e667;
uint32 h1 = 0xbb67ae85;
uint32 h2 = 0x3c6ef372;
uint32 h3 = 0xa54ff53a;
uint32 h4 = 0x510e527f;
uint32 h5 = 0x9b05688c;
uint32 h6 = 0x1f83d9ab;
uint32 h7 = 0x5be0cd19;

"Initialize table of round constants";
"(first 32 bits of the fractional parts of the cube roots of the first 64 primes 2..311):";
array(uint32, 64) k =
(0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5,
0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc,
0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7,
0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967,
0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13,
0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,
0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3,
0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5,
0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,
0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208,
0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2);

"For now, dummy in the message instead of declaring a function wrapper";
"Also, for now, allow enough room in the input for padding, etc, to simplify the loop";
uint32 message_bits = 123;
array(uint8, 64) message =
(0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0,
0x0f, 0xed, 0xcb, 0xa9, 0x87, 0x65, 0x43, 0x21);
uint32 pad_byte = 0;
uint32 pad_bit = 0;
uint32 tmp = 0;
uint32 tmp2 = 0;
array(uint32, 64) w = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
uint32 i = 0;
uint32 s0 = 0;
uint32 s1 = 0;
uint32 a = 0;
uint32 b = 0;
uint32 c = 0;
uint32 d = 0;
uint32 e = 0;
uint32 f = 0;
uint32 g = 0;
uint32 h = 0;
uint32 maj = 0;
uint32 t1 = 0;
uint32 t2 = 0;
uint32 ch = 0;

"Pre-processing:";
"append the bit '1' to the message";

"note that we're using a 32-bit length for now";
"all the op32, op8 etc are _without_ wrap (where applicable) - i.e. wrap is an error";
"they also require left and right to both be the correct type and size";
"also, we have no precedence, it is up to you to bracket things";
"rshift is with zero padding";

pad_bit = 7 minus32 (message_bits mod32 8);
pad_byte = message_bits rshift32 3;
message[pad_byte] = message[pad_byte] or8 (1 lshift8 pad_bit);

"append k bits '0', where k is the minimum number >= 0 such that the
resulting message length (in bits) is congruent to 448 (mod 512)";

"eq32 and friends return a boolean value (which is not even a bit)";

if (pad_bit eq32 0) {
pad_bit = 7;
pad_byte = pad_byte plus32 1;
} else {
pad_bit = pad_bit minus32 1;
}

"bor is like C || (i.e. RHS is only executed if LHS is false)";

"448/8 = 56";
while (((pad_byte mod32 512) ne32 56) bor (pad_bit ne32 7)) {
message[pad_byte] = message[pad_byte] and8 (not8 (1 lshift8 pad_bit));
if (pad_bit eq32 0) {
pad_bit = 7;
pad_byte = pad_byte plus32 1;
} else {
pad_bit = pad_bit minus32 1;
}
}

"append length of message (before pre-processing), in bits, as 64-bit big-endian integer";

message[pad_byte] = 0;
message[pad_byte plus32 1] = 0;
message[pad_byte plus32 2] = 0;
message[pad_byte plus32 3] = 0;

message[pad_byte plus32 7] = mask32to8 message_bits;
tmp = message_bits rshift32 8;
message[pad_byte plus32 6] = mask32to8 tmp;
tmp = tmp rshift32 8;
message[pad_byte plus32 5] = mask32to8 tmp;
tmp = tmp rshift32 8;
message[pad_byte plus32 4] = mask32to8 tmp;

"for each chunk (we only have one, so don't bother with the loop for now)";

" break chunk into sixteen 32-bit big-endian words w[0..15]";
tmp = 0;
while(tmp ne32 16) {
tmp2 = tmp lshift32 2;
w[tmp] = ((widen8to32 message[tmp2]) lshift32 24)
plus32 ((widen8to32 message[tmp2 plus32 1]) lshift32 16)
plus32 ((widen8to32 message[tmp2 plus32 2]) lshift32 8)
plus32 (widen8to32 message[tmp2 plus32 3]);
tmp = tmp plus32 1;
}

" Extend the sixteen 32-bit words into sixty-four 32-bit words";
i = 16;
while(i ne32 64) {
s0 = (w[i minus32 15] rrotate32 7) xor32 (w[i minus32 15] rrotate32 18) xor32 (w[i minus32 15] rshift32 3);
s1 = (w[i minus32 2] rrotate32 17) xor32 (w[i minus32 2] rrotate32 19) xor32 (w[i minus32 2] rshift32 10);
w[i] = w[i minus32 16] plus32 s0 plus32 w[i minus32 7] plus32 s1;
i = i plus32 1;
}

" Initialize hash value for this chunk:";

a = h0;
b = h1;
c = h2;
d = h3;
e = h4;
f = h5;
g = h6;
h = h7;

" Main loop:";

i = 0;
while(i ne32 64) {
s0 = (a rrotate32 2) xor32 (a rrotate32 13) xor32 (a rrotate32 22);
maj = (a and32 b) xor32 (a and32 c) xor32 (b and32 c);
t2 = s0 plus32 maj;
s1 = (e rrotate32 6) xor32 (e rrotate32 11) xor32 (e rrotate32 25);
ch = (e and32 f) xor32 ((not32 e) and32 g);
t1 = h plus32 s1 plus32 ch plus32 k[i] plus32 w[i];
h = g;
g = f;
f = e;
e = d plus32 t1;
d = c;
c = b;
b = a;
a = t1 plus32 t2;
i = i plus32 1;
}

" Add this chunk's hash to result so far:";

h0 = h0 plus32 a;
h1 = h1 plus32 b;
h2 = h2 plus32 c;
h3 = h3 plus32 d;
h4 = h4 plus32 e;
h5 = h5 plus32 f;
h6 = h6 plus32 g;
h7 = h7 plus32 h;

"end of outer loop (when we do it)";

"Obviously I can also do this part, but right now I am going cross-eyed";
"Produce the final hash value (big-endian):
digest = hash = h0 append h1 append h2 append h3 append h4 append h5 append h6 append h7";

Notice that every operator specifies the input and output sizes. For example plus32 means add two 32-bit numbers to get a 32-bit result, with wrap being an error (this probably means, by the way, that the last few plus32s should be plus32_with_overflow, since SHA-256 actually expects overflow for these operations). So far we only deal with unsigned quantities; some “overflows” are actually expected when dealing with negative numbers, so that would have to be specified differently. Also, I didn’t deal with the size of constants, because I wasn’t sure about a good notation, though I am leaning towards 23_8 to mean an 8-bit representation of 23 (subscripted, like in TeX).

Because Stupid really is stupid, it should be very easy to write static analysis code for it, enabling checks to be omitted sometimes – for example, the fact that we only subtract 1 from pad_bit if pad_bit is non-zero means that we would not have to check for underflow in that case.
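
For instance, the C generator could emit minus32 in checked form along these lines (a sketch of plausible output, not what the current compiler produces), and the analyser’s job would be to prove the check away wherever, as with pad_bit, it cannot fire:

#include <stdint.h>
#include <stdlib.h>

/* minus32: 32-bit subtraction where wrap (underflow) is an error. */
static uint32_t minus32(uint32_t left, uint32_t right)
    {
    if (left < right)
        abort();   /* underflow was not asked for, so it is fatal */
    return left - right;
    }

/* With analysis, 'pad_bit minus32 1' above becomes a bare subtract,
   because that line is only reached when pad_bit != 0. */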

Anyway, I’m a bit bemused after writing a lot of rather repetitive code for the compiler, so I think I’ll wait for reactions before commenting further – but it does seem to me that this is a project worth pursuing. The compiler itself, whilst somewhat crude, particularly since it doesn’t yet do most of the checks I suggest should be there, is pretty small and easily understood: less than 1,500 lines of Perl and YAPP. I won’t bore you with the details, but if you want a peek, here’s a tarball.

19 Jan 2010

Debugging for Caja

Filed under: Caja,Capabilities,Programming,Security — Ben @ 15:28

One of the hardest parts about using Caja (which, by the way, is now far and away the most successful capability project ever) is debugging. Because of the transforms Caja must do to render your code safe, even something simple like

x.a = y.b + z.c();

becomes

$v.s($v.ro('x'), 'a', $v.r($v.ro('y'), 'b') + $v.cm($v.ro('z'), 'c', [ ]));

if we ignore all the wrapping code that is generated. Whilst you can certainly get used to reading this and translating it back into your original source in your head, so you can use, say, Firebug to debug, it’s pretty painful at best.

So I was pleased to see that the Closure Inspector now supports Caja debugging.

By the way, if you want to play with Caja, it’s now easier than ever, using the new Appspot-based Caja Playground.

5 Jan 2010

Security Is Hard: Live With It

Filed under: Open Source,Open Standards,Programming,Rants,Security — Ben @ 17:59

I’ve been meaning to summon the energy to write about OAuth WRAP. It’s hard to do, because like OpenID, OAuth WRAP is just so obviously a bad idea, it’s difficult to know where to start. So I was pleased to see that Ben Adida saved me the trouble.

I understand. Security is hard. Getting those timestamps and nonces right, making sure you’ve got the right HMAC algorithm… it’s non-trivial, and it slows down development. But those things are there for a reason. The timestamp and nonce prevent replay attacks. The signature prevents repurposing the request for something else entirely. That we would introduce a token-as-password web security protocol in 2010 is somewhat mind-boggling.

Exactly. The idea that security protocols should be so simple that anyone can implement them is attractive, but as we’ve seen, wrong. But does the difficulty of implementing them mean they can’t be used? Of course not – SSL is fantastically hard to implement. But it is also fantastically widely deployed. Why? Because there are several open source libraries that do everything for you. Likewise every crypto algorithm under the sun is hard to implement, but there’s no shortage of libraries for them, either.

Clearly the way forward for OAuth is not to dumb it down to the point where any moron can implement it; the way forward is to write libraries that implement a properly secure version, and have everyone use them.
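
(To be clear about the scale of the thing: the part everyone trips over fits in a handful of library calls. Here is a sketch of the timestamp-nonce-HMAC core using OpenSSL – mine, and much simplified, since real OAuth signing also canonicalises the request:)

#include <string.h>
#include <openssl/hmac.h>

/* Sign "METHOD&url&timestamp&nonce" with HMAC-SHA256: the timestamp
   and nonce defeat replay, the MAC stops the request being repurposed.
   md must have room for EVP_MAX_MD_SIZE bytes. */
static unsigned int sign_request(const char *base, const void *key,
                                 int key_len, unsigned char *md)
    {
    unsigned int md_len;

    HMAC(EVP_sha256(), key, key_len,
         (const unsigned char *)base, strlen(base), md, &md_len);
    return md_len;
    }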

If the amount of effort that has been wasted on OAuth WRAP (and OpenID) had instead been put into writing code for the various platforms, then we would probably now have pretty much universal support for OAuth and no-one would be whining that it’s too hard to implement.

Instead, we will spend the next decade or two clearing up the mess that we seem to be intent on creating. It makes me tired.

23 Sep 2009

seL4

Filed under: Capabilities,Open Source,Programming,Security — Ben @ 13:13

In response to my not-so-recent rants on formal methods I was told to go and read up on seL4.

I’m glad I did, it’s really quite fascinating. To set the stage a little, L4 is a microkernel architecture with various implementations. Last time I looked it was mostly used by academics and was not very mature, but it seems to have moved on – for example, L4Linux is a version of it with Linux hosted on top, providing benefits like driver isolation.

seL4 is a variant of L4 that is based around a capability architecture (I confess I don’t know enough about L4 to understand how one variant can be capability-based without them all being, pointers welcome), which would be interesting in itself. But far more interesting than that – seL4 has been formally verified.

The approach is somewhat convoluted. First of all, they write a prototype of the kernel in Haskell, including all of the low-level hardware control (note that because this is a microkernel, this does not include most of the device drivers). This prototype is then linked to an emulator derived from QEMU, which allows user code to run in a binary compatible simulated environment. When user code calls the kernel, the emulator instead branches into the Haskell code, runs it, and then returns to the user code. This in itself would be pretty cool, but there’s more.

Next, there’s the abstract specification of the kernel. This describes the binary layout of arguments to system calls, the effect of each system call and of interrupts and traps (e.g. page faults), without describing the implementation. From the Haskell, they generate what they call an executable specification. This is done by an automatic translation from the subset of Haskell they use into a theorem proving language. They then prove that the executable specification is a refinement of the abstract specification. Which means, informally, that everything that is true of the abstract specification is also true of the executable specification. In other words, the executable specification implements what the abstract specification specifies.

However, we’re not there yet. The real kernel is written in C, by hand. This is required if you want a kernel that is appropriately optimised – automatic generation of C code from the Haskell would be possible, but unlikely to run fast. The C implementation is then verified by automatically translating it into the theorem proving language once more, and showing that this is a refinement of the executable specification. This automatic translation is another remarkable achievement, though they do not claim to be able to translate arbitrary C: it must be written to conform with some rules – for example, no passing addresses of local variables to functions. Clearly these rules are not too onerous: they managed to write a whole kernel that conformed to them.

Note that this does not rely on the correctness of the Haskell code! Since that is only used to generate the executable specification, which stands between the abstract specification and the C implementation, all we really care about is the correctness of the executable specification, which is proved. Although this strongly implies that the Haskell code is correct, it is in no way relied upon.

So, the end result of all this is a number of fun things.

  • Most importantly, a kernel written in C that has a formal proof of correctness.
  • A Haskell prototyping environment for the kernel (I guess I’d like this part even more if I could figure out how to learn Haskell without sitting next to a Haskell guru).
  • The tools needed to adopt this approach for other code.
  • A nice write-up of the whole thing.
  • Detailed costs for doing this kind of proof, and the costs of making changes. See the paper for more, but the big picture is the whole thing took 20 man-years, of which 9 were development of (presumably reusable) tools and 11 were the formal seL4 proof itself.

14 Sep 2009

FreeBSD Chromium, Part 2

Filed under: Open Source,Programming — Ben @ 20:28

Over the weekend I got chromium building and linking on FreeBSD. It doesn’t run for long, but this seems like a major milestone! I also got commit and submitted the first round of patches, by the way.

However, I discovered in the process that I wasn’t building with debugging info. Now that I am, I can’t link on my FreeBSD machine, because ld runs out of RAM. If someone out there has a nice big (and fast would be nice, it takes a really long time to build) box that I could ssh or, even better, VNC or NX into, now’s the time to let me know! Mine is dying with a dataseg limit of 716800 – which I could increase, I guess, but it’s a pretty old box, so probably not by much before I get into thrashing…

Anyway, for anyone who wants to try, the primary patch is here:

http://codereview.chromium.org/199105

There are a couple of other patches referenced there (for V8 and WebCore – they’re tiny so I hope I can push them upstream soon), and you still need to build as specified here.

3 Sep 2009

Why Open Is Better Than Proprietary?

Filed under: Open Source,Programming — Ben @ 12:46

First of all watch this fascinating TED talk by Dan Pink. Watch it all the way to the end: I promise it is worth it. Then consider this…

I’ve long argued that open source provides a clear economic benefit (and hence incentive). However, I’ve always had a bit of a nagging feeling that there’s more to it than that but have never been satisfied by sociologists’ lame attempts at explanations. Perhaps Dan Pink’s observations fill in that missing piece. Autonomy, mastery and purpose – open source development provides all three of these to developers, in copious quantities.

  • Autonomy: you choose what you work on, when you work on it, and how you work on it.
  • Mastery: putting all your work out there in public view gets you great feedback – and many studies have shown that people don’t improve without external feedback. Furthermore, seeing what other people have done is a fantastic learning resource.
  • Purpose: most open source projects have a purpose that goes beyond the mere development of the software – for example, Apache exists to serve up web pages better than anything else does – and the higher purpose is greater democratisation and freedom for everyone, not just the developers. OpenSSL exists to protect people from invasions of their privacy and theft of their data. It’s not just a geek toy, it’s critical infrastructure for the new world we are moving into.

It seems that economics is not the only thing that makes open source better.

30 Aug 2009

Porting Chromium to FreeBSD

Filed under: Open Source,Programming — Ben @ 19:27

In my copious spare time, I’ve started work on a FreeBSD port of Chromium. So far it doesn’t even build, but I’ve identified a few problem areas people might like to pitch in on.

  • NSS port: the FreeBSD NSS port needs to be upgraded to at least NSS 3.12. Ideally to the latest version because it fixes some security holes.
  • Sound: mostly, whatever Linux does is likely to be the right thing for FreeBSD, too, so much of the porting work at this stage is pretty mechanical. But sound isn’t – the Linux port uses ALSA, which FreeBSD has no truck with. I am informed that the right thing to use on FreeBSD is OSS. But someone needs to do that.
  • Build farm: building this sucker is slow. I wouldn’t mind having access to a nice big build farm.

So far none of the work has been committed, but I am optimistic that some will be soon. In the meantime, you’ll be needing some patches.

There’ll probably be another patch soon – I’m going through the rest of the code right now getting the parts that don’t need NSS or ALSA building.

In order to build this lot, you’ll need to configure like this:

cd src && GYP_GENERATORS=make python tools/gyp/gyp_chromium -D 'OS=freebsd' -D 'use_system_libxml=1' build/all.gyp

and build using gmake:

cd src && gmake chrome

Right now anything that fails on NSS or ALSA I’m just manually removing from the .mk files.

You’ll also need a bunch of ports installed, here’s what my notes currently say

Python >= 2.4       (lang/python24, lang/python25, lang/python26, lang/python30)
Perl >= 5.x         (lang/perl5.6, lang/perl5.8, lang/perl5.10)
gcc/g++ >= 4.2      (lang/gcc42, lang/gcc43, lang/gcc44, lang/gcc45)
g++-multilib >=4.2  (?)
bison >= 2.3        (devel/bison == 2.4.1)
flex >= 2.5.34      (base system: 2.5.4 - oops?)
gperf >= 3.0.3      (lang/gperf - NB: base system has /usr/bin/gperf == 2.7.2, port installs /usr/local/bin/gperf)
pkg-config >= 0.20  (devel/pkg-config == 0.23)
libnss3-dev >= 3.12 (security/nss == 3.11.9_2 - hand install?)
libgconf2-dev       (devel/gconf2)
libglib2.0-dev      (devel/glib20)
libgtk2.0-dev       (x11-toolkits/gtk20)
libnspr4-0d >= 4.7.1+1.9-0ubuntu0.8.04.5 (ubuntu0.8.04.1 causes duplicate dtoa references)   (devel/nspr?)
libnspr4-dev >= 4.7.1+1.9-0ubuntu0.8.04.5  (devel/nspr)
msttcorefonts (Microsoft fonts)
freetype-dev        (print/freetype)
libcairo2-dev       (no port!)
libdbus-1-dev       (devel/dbus)

Get in touch if you want to help. Or join the chromium-dev mailing list and pitch in.

12 Jun 2009

Ignorance Transfer Network

Filed under: Programming,Security — Ben @ 12:41

SSDSIG recognises that some commonly used languages (e.g. C, php etc.) allow, or even encourage, programming practices that introduce security vulnerabilities. Accepting that in time market forces may encourage the adoption of safer alternatives some members feel that the process needs to be accelerated. The reasons for the continued use of ‘unsafe’ ‘languages and the near-term feasibility of alternatives for commercial systems of modest criticality are complex and ill-understood. This also applies to the slow uptake of more formal methods Further data on this is required.

This is a gem from “Secure Software Development – a White Paper: Software Security Failures: who should correct them and how” by Bill Whyte and John Harrison, from the Cyber Security Ignorance (Knowledge, shurely? Ed) Transfer Network, presumably at the taxpayer’s expense. I hear through the grapevine that they’re planning to spend more of our money to set up a “Secure Software Development Panel” to deliberate on the deep thinking exemplified above. Awesome.

So, what’s wrong with that statement? Firstly, I think we’ve got past the idea that there’s something extra special about buffer overflows as a security issue. Yes, there are many languages that prevent them completely (e.g. PHP, amusingly), but they don’t magically produce secure programs either. Indeed, pretty much all languages used for web development are “safe” in this respect, and yet the web is a cesspit of security problems, so how did that help?

Secondly, the claim that the “reasons are … complex and poorly understood” is a great one to make if you want to spend your life wasting your time on government money, but, well, not exactly true. C is widely used because it is fast, portable, can do anything and has a vast amount of software already written in it that is otherwise difficult to get at. Which is, of course, why PHP is widely used: because it’s one way for the less capable programmer to get at all that C out there. As for “near-term feasibility of alternatives”, well, name an alternative and I’m pretty sure anyone knowledgeable in the field could give you a thorough rundown on its near-term feasibility in an hour or so.

Thirdly, talking about “unsafe” languages implies that there might be “safe” ones. Which is nonsense.

Fourthly, formal methods. Really? The reason there’s slow uptake is because they don’t work. Get with the program, guys!

20 May 2009

ECMAScript 5

Filed under: Open Standards,Programming,Security — Ben @ 4:35

When I started working on Caja I had not really plumbed the depths of Javascript (or, as it is more correctly called, ECMAScript 3) and I was very surprised to learn how powerful it actually is. I was also pretty startled by some of the nasty gotchas lurking for the unwary (or even wary) programmer (had I known, perhaps I would never have tried to get Caja off the ground!).

For some time now, the ECMAScript committee has been working on a new version of Javascript which fixes many of these problems without breaking all the existing Javascript that is out there. This seems to me a remarkable achievement; Mark Miller, Mike Samuel (both members of the Caja team) and Waldemar Horwat gave a very interesting talk about these gotchas and how the ES5 spec manages to wriggle around them. I recommend it highly. Slides are available for those who don’t want to sit through the presentation, though I would say it is worth the effort.

7 Apr 2009

CodeCon Is Back!

Filed under: General,Open Source,Programming — Ben @ 10:43

Unfortunately, I can’t be there, but the lineup looks great. The bio-hacking track looks particularly fun.

Not long to go now, less than two weeks. Sign up!

11 Feb 2009

Crypto Craft Knowledge

Filed under: Crypto,Programming,Rants,Security — Ben @ 17:50

From time to time I bemoan the fact that much of good practice in cryptography is craft knowledge that is not written down anywhere, so it was with interest that I read a post by Michael Roe about hidden assumptions in crypto. Of particular interest is this

When we specify abstract protocols, it’s generally understood that the concrete encoding that gets signed or MAC’d contains enough information to unambigously identify the field boundaries: it contains length fields, a closing XML tag, or whatever. A signed message {Payee, Amount} K_A should not allow a payment of $3 to Bob12 to be mutated by the attacker into a payment of $23 to Bob1. But ISO 9798 (and a bunch of others) don’t say that. There’s nothing that says a conforming implementation can’t send the length field without authentication.

No of course, an implementor probably wouldn’t do that. But they might.

Actually, in my extensive experience of reviewing security-critical code, this particular error is extremely common. Why does Michael assume that they probably wouldn’t? Because he is steeped in the craft knowledge around crypto. But most developers aren’t. Most developers don’t even have the right mindset for secure coding, let alone correct cryptographic coding. So, why on Earth do we expect them to follow our unwritten rules, many of which are far from obvious even if you understand the crypto?
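
So, in the spirit of writing the rules down, here is that one: everything under the MAC must be self-delimiting, which in practice means length-prefixing every field before you sign. A sketch in C (my illustration, not anything from ISO 9798):

#include <stdint.h>
#include <string.h>

/* Encode one field as a 4-byte big-endian length followed by the
   data: "Bob1","23" and "Bob12","3" now MAC differently. */
static size_t encode_field(uint8_t *out, const uint8_t *data, uint32_t len)
    {
    out[0] = (uint8_t)(len >> 24);
    out[1] = (uint8_t)(len >> 16);
    out[2] = (uint8_t)(len >> 8);
    out[3] = (uint8_t)len;
    memcpy(out + 4, data, len);
    return 4 + (size_t)len;
    }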

27 Jan 2009

Caja on the Yahoo! Developer Network

Filed under: Caja,Programming — Ben @ 4:40

Here’s a nice (and brief) article about developing for the Yahoo! Application Platform, and, in the process, covering some of the benefits and gotchas of Caja.

Caja enforces the W3C standards for JavaScript, HTML, and CSS, regardless of browser (!).
Yes, you read correctly, if your code cajoles, it will run in Firefox and IE. This is amazing and its value cannot be overestimated.
