Ben Laurie blathering

24 Jan 2013

Up-Goer 5 Capabilities

Filed under: Capabilities,Security — Ben @ 23:42

Just for fun, I tried to explain capabilities using only the ten hundred most used words. Here’s what I came up with.

They are a way to allow people to use the things they are allowed to use and not the things they are not allowed to use by giving them a key for each thing they are allowed to use. Every thing has its own key. If you are shown a key you can make another key for the same thing. Keys should be very, very, very hard to guess.

For capability purists: yes, I am describing network capabilities.
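Dropping the ten-hundred-word constraint for a moment: the model above (unguessable keys, one per thing, copyable by anyone who is shown one) can be sketched in a few lines of Python. The class and method names here are mine, not from any real capability library:

```python
import secrets

class CapabilityTable:
    """Things guarded by unguessable keys: hold a key, use the thing."""

    def __init__(self):
        self._things = {}

    def grant(self, thing):
        # Every thing gets its own very, very, very hard to guess key.
        key = secrets.token_hex(32)
        self._things[key] = thing
        return key

    def use(self, key):
        # Presenting a key is the only way to reach the thing.
        return self._things[key]

    def copy(self, key):
        # "If you are shown a key you can make another key for the same thing."
        return self.grant(self._things[key])
```

Here `use` stands in for whatever operation the key authorises; in a real network-capability system the key would be a secret carried in a message rather than an index into a local table.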

28 Apr 2012

Using Capsicum For Sandboxing

Filed under: Capabilities,General,Programming,Security — Ben @ 18:07

FreeBSD 9.0, released in January 2012, has experimental Capsicum support in the kernel, disabled by default. In FreeBSD 10, Capsicum will be enabled by default.

But unless code uses it, we get no benefit. So far, very little code uses Capsicum, mostly just experiments we did for our paper. I figured it was time to start changing that. Today, I’ll describe my first venture – sandboxing bzip2. I chose bzip2 partly because Ilya Bakulin had already done some of the work for me, but mostly because a common failure mode in modern software is mistakes made in complicated bit twiddling code, such as decompressors and ASN.1 decoders.

These can often lead to buffer overflows or integer over/underflows – and these often lead to remote code execution. Which is bad. bzip2 is no stranger to this problem: CVE-2010-0405 describes an integer overflow that could lead to remote code execution. The question is: would Capsicum have helped – and if it would, how practical is it to convert bzip2 to use Capsicum?

The answers are, respectively, “yes” and “fairly practical”.

First of all, how does Capsicum mitigate this problem? The obvious way to defend a decompressor is to run the decompression engine in a separate process with no privilege beyond that needed to get its job done – which is the ability to read the input and write the output. In Capsicum, this is easy to achieve: once the appropriate files are open, fork the process and enter capability mode in the child. Discard all permissions except the ability to read the input and write the output (in Capsicum, this means close all other file descriptors and limit those two to read and write), and then go ahead and decompress. Should there be a bug in the decompressor, what does the attacker get? Well, pretty much what he had already: the ability to read the input file (he supplied it, so no news there!) and the ability to write arbitrary content to the output file (he already had that, since he could have chosen arbitrary input and compressed it). He also gets to burn CPU and consume memory. But that’s it – no access to your files, the network, any other running process, or anything else interesting.
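The shape of that defence can be sketched in Python. Capsicum itself is a FreeBSD kernel facility driven from C (cap_enter(2), plus descriptor rights limiting), so this is only the structure of the pattern, not Capsicum: open both files up front, fork, discard every other descriptor in the child, then run the risky code:

```python
import bz2
import os

def sandboxed_decompress(in_path, out_path):
    """Run the decompressor in a child that holds only a read handle on
    the input and a write handle on the output. This is a sketch of the
    Capsicum pattern; real Capsicum would also enter capability mode
    with cap_enter() and restrict the two descriptors to read/write."""
    in_fd = os.open(in_path, os.O_RDONLY)
    out_fd = os.open(out_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    pid = os.fork()
    if pid == 0:
        # Child: drop every descriptor except the two we were granted
        # (and stdio). A compromised decompressor can now read its
        # input, write its output, burn CPU, and nothing else.
        keep = {0, 1, 2, in_fd, out_fd}
        for fd in range(3, 1024):
            if fd not in keep:
                try:
                    os.close(fd)
                except OSError:
                    pass
        data = bz2.decompress(os.read(in_fd, 1 << 24))  # sketch: single read
        os.write(out_fd, data)
        os._exit(0)
    os.close(in_fd)
    os.close(out_fd)
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)
```

Note the ordering: all the privileged work (naming and opening files) happens before the fork, so the child never needs the authority to do it.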

I think that’s pretty neat.

But how hard is it to do? I answer that question in a series of diffs on GitHub, showing a step-by-step transformation of bzip2 into the desired form. I used a technique I like to call error-driven development; the idea is you attempt to make changes that will cause compilation to fail until you have completely accomplished your goal. This is a useful way to reassure yourself that you have made all necessary updates and there’s nothing hiding away you didn’t take care of. If you follow along by building the various stages, you’ll see how it works.

It turns out that in bzip2 this matters – it isn’t very cleanly written, and the code that looks like it might simply take an input file and an output file and do the work in isolation actually interacts with the rest of the code through various function calls and globals. This causes a problem: once you’ve forked, those globals and functions are now in the wrong process (i.e. the child), so it is necessary to use RPC to bridge any such things back to the parent process. Error-driven development assures us that we have caught and dealt with all such cases.

So how did this work out in practice? Firstly, it turns out we have to give the compressor a little more privilege: it writes to stderr if there are problems, so we need to also grant write on stderr (note that we could constrain what it writes with a bit more effort). The callbacks we have to provide do not, I think, let it do anything interesting: cause the program to exit, make the output file’s permissions match the input file’s, and remove the input or output files (ok, removing the input file is slightly interesting – but note that bzip2 does this anyway).

Secondly, because we have not yet decided on an RPC mechanism, this particular conversion involves quite a bit of boilerplate: wrapping and unwrapping arguments for RPCs, wiring them up and all that, all of which would be vastly reduced by a proper RPC generator. Try not to let it put you off 🙂
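For the curious, the boilerplate in question looks roughly like this (a hand-rolled, hypothetical Python sketch rather than the C in the diffs): every parent-side operation the child needs gets marshalled over a pipe and dispatched by name, which is exactly the kind of code an RPC generator would emit for you:

```python
import json
import os

def rpc_call(wfd, rfd, name, *args):
    # Child side: wrap the arguments, send the request, wait for the reply.
    os.write(wfd, (json.dumps({"call": name, "args": list(args)}) + "\n").encode())
    return json.loads(_read_line(rfd))["result"]

def rpc_serve_one(rfd, wfd, handlers):
    # Parent side: unwrap one request, dispatch it, send the result back.
    req = json.loads(_read_line(rfd))
    result = handlers[req["call"]](*req["args"])
    os.write(wfd, (json.dumps({"result": result}) + "\n").encode())

def _read_line(fd):
    # Read one newline-terminated message from a raw descriptor.
    buf = b""
    while not buf.endswith(b"\n"):
        chunk = os.read(fd, 1)
        if not chunk:
            break
        buf += chunk
    return buf.decode()
```

Each callback the sandboxed bzip2 needs (exit, fix up permissions, remove a file) would be one named handler on the parent side.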

Finally, the system has (at least) one global, errno. I did not deal with that so far, which means some errors will report the wrong error – but it is not particularly hard to do so.

So, on to the diffs. This is something of an experimental way to present a piece of development, so I’d be interested in feedback. Here they are, in order:

And there you are: bzip2 is now rendered safe from decompressor exploits, and it was only a few hours work. As we refine the support infrastructure, it will be even less work.

9 Mar 2011

Capsicum Wins Cambridge Ring Award

Filed under: Capabilities,Security — Ben @ 19:28

Of course, I know that capabilities are really important, and that the work we (I say we as if I did much – the hard graft is down to Robert Watson and Jon Anderson) have done on adding capabilities to FreeBSD is particularly awesome. But I continue to be amazed at the community reaction to it.

The latest accolade is the rather unwieldy Cambridge Ring Hall of Fame Award for Best Publication of the Year.

You know, I’m beginning to think we might actually make some serious progress with capabilities in the next year or two. Watch this space, there’s a lot going on in this field!

14 Aug 2010

FreeBSD Capsicum

Filed under: Capabilities,Security — Ben @ 12:34

I mentioned FreeBSD Capsicum in my roundup of capability OSes earlier this year without mentioning that I am involved in the project. Since then we’ve managed to port and sandbox Chromium, using less code than any other Chromium sandbox (100 lines), as well as a number of other applications. Also impressive, I think, is the fact that Robert Watson managed to write this sandbox in just two days, having never seen the Chromium codebase before – this is as much a testament to Robert’s coding skills and the clean Chromium codebase as it is to Capsicum, but nevertheless worth a mention.

Anyway, at USENIX Security this week, we won Best Student Paper. A PC member described the paper to me as “excellent” and “very important”. Robert has also blogged about it rather more eloquently than I can manage at this time in the morning.

You can read the paper, too, if you want.

Even more exciting, FreeBSD 9 will include the Capsicum capability framework, allowing the peaceful coexistence of capability and POSIX programs. Although this has been attempted before, as far as I am aware all previous versions have put a POSIX emulation layer on top of a capability system, rather than grafting capabilities onto POSIX. Since Capsicum is highly efficient and FreeBSD is a perfectly sound and portable system (and my server OS of choice), this opens up the possibility of a gradual migration to capabilities, something that has been a problem up to now.

Robert and I (and a host of others) are continuing our research into practical capability systems, Robert at Cambridge and me at Google. Work is also in progress to port Capsicum to Linux.

27 Mar 2010

Capability Operating Systems

Filed under: Caja,Capabilities,Security — Ben @ 16:31

Now that we’ve deployed the most successful capability language ever, it’s time to start thinking about the rest of the stack, and one end of that stack is the operating system.

Luckily, thinking in this area has been going on a long time – indeed, capabilities were invented in the context of the OS, though for a long time they were thought to be the exclusive domain of specialised hardware. Some of that hardware ended up being extremely widely deployed, too, so don’t think this is the stuff of lab experiments only. Sadly, though, despite the hardware-supported capabilities, these were not generally exposed up to the level of the kernel/userland interface; they were thought to be useful only within the kernel (with one notable, but not very well known or widely used, exception).

However, more recently it has been realised that capabilities are not only useful in userland, but also can be implemented on top of commodity hardware, resulting in a crop of new capability operating systems. But these still suffer from the problems that traditional capability languages have suffered from: they need the world to be completely reinvented before you can use them. Because the capability paradigm is fundamentally different from the ambient authority ACL-based world we live in, no existing software can fully enjoy the benefits of capabilities without at least some rewriting.

So, the interesting research question has now become: how can we move toward this world without having to rewrite everything on day one? Some progress has been made with mapping POSIX onto capabilities. Heading in a completely different direction is the idea of running existing OSes as guests on a capability system. Yet another approach is to apply capabilities to more restricted domains: one that I have been involved in is the idea of hosting untrusted software “in the cloud”, in the same vein as Google App Engine. Because this software is all new, changing the way it has to work is not a big deal.

But the thing that interests me most is the work being done on FreeBSD, which allows capability-based code to coexist with (or even be contained within) existing POSIX code. This provides a genuine, believably workable, migration path from existing systems to a brave new capability world. We can move one application (or even one library) at a time, without breaking anything. Which is why I am pleased to be able to say I am involved in this work, too. What’s even better is this work is by no means specific to FreeBSD – the same principles could be applied to any POSIX system (so Linux and Mac OS X would be good targets). Just as we have seen success with Caja it seems to me that this route can deliver success at the OS level, because it allows a gradual, piecemeal migration.

Unusually for me, I have not interrupted my narrative flow by naming or saying too much directly about the various things I link to – however, I appreciate that following links in the middle of reading can get distracting, so here are many of the links again with some explanation…

Caja: a capability version of Javascript. I have written about it before.

CAP computer: the project that invented capabilities.

IBM System/38: more capability hardware.

AS/400: derived from the System/38. Although this had capabilities, they were not exposed to userland. Very widely used commercially.

KeyKOS: a commercial capability operating system.

Amoeba: an experimental capability system – like Caja, it tends to advertise its other virtues rather than describing itself as a capability system.

EROS: another experimental capability OS – originally intended to provide robustness, not security. The first to run on a standard PC.

CapROS: when EROS was discontinued, it lived on as CapROS. Google has recently sponsored the development of a web-hosting experiment on top of CapROS.

Coyotos: by the original designer of EROS. Now also discontinued (can you spot a trend here?).

Plash: the Principle of Least Authority Shell. This shell runs on Linux, figures out from the command line what any particular invocation of an executable should have access to, creates a sandbox with access to only those things, then maps POSIX calls onto the sandboxed things.

L4: a modern capability-based microkernel.

L4Linux: Linux running on top of L4. Although this is nice for things like driver isolation, it seems like the wrong direction because it does not assist with exposing capabilities to userland.

FreeBSD Capsicum: a capability mode for FreeBSD. Whole executables can opt in to this mode, coexisting with POSIX binaries. Even more interestingly, libraries can spawn off capability-mode subprocesses whilst effectively remaining in POSIX mode themselves. This allows the transparent implementation of privilege separation. This project has also been sponsored by Google.

19 Jan 2010

Debugging for Caja

Filed under: Caja,Capabilities,Programming,Security — Ben @ 15:28

One of the hardest parts about using Caja (which, by the way, is now far and away the most successful capability project ever) is debugging. Because of the transforms Caja must do to render your code safe, even something simple like

x.a = y.b + z.c();

becomes

$v.s($v.ro('x'), 'a', $v.r($v.ro('y'), 'b') + $v.cm($v.ro('z'), 'c', [ ]));

if we ignore all the wrapping code that is generated. Whilst you can certainly get used to reading this and translating it back into your original source in your head, so you can use, say, Firebug to debug, it’s pretty painful at best.

So I was pleased to see that the Closure Inspector now supports Caja debugging.

By the way, if you want to play with Caja, it’s now easier than ever, using the new Appspot-based Caja Playground.

6 Dec 2009

Intent Is The Problem

Filed under: Capabilities,Security — Ben @ 18:29

Of late, I keep banging into the problem that people want systems to be “secure by default”: they don’t want to pester the user about security. They want the system to just do the right thing. The problem is, this just isn’t possible. One example I like to give is “rm -rf *”. Clearly this command is sometimes a very bad idea, and sometimes exactly what you want to do. If some piece of code I mistakenly trusted runs that command on my behalf, I might be very sad about it. Therefore, any system that wants to be “secure” has to somehow know that when I move to some directory and type rm -rf * I mean it, and when I run a piece of code I’m expecting to (say) edit some text, I don’t mean it, and it should not be allowed to do it.

How can the system discover this? Clearly it must be through some user action. The user must behave differently in some way in the two cases, so that the system can discover his intent. Therefore it is impossible to be “secure” without, in some way, consulting the user about his intent.

Obviously we can try to minimise the intrusiveness of the consultation – for example, this is the impetus behind the “designation is authorisation” paradigm that is so natural in capability systems. But we cannot make it go away.
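A file-open dialog is the classic example of designation-as-authorisation: the act of picking the file is the grant. As a hypothetical Python sketch (the function names are mine):

```python
def powerbox_open(path_the_user_picked):
    """The trusted shell's file dialog. The user's choice *is* the
    authorisation, so the application receives an open handle to that
    one file – no filename, and no ambient access to the filesystem."""
    return open(path_the_user_picked, "rb")

def untrusted_app(handle):
    # The app can read the file it was handed, and nothing else.
    return handle.read()
```

The user is consulted exactly once, as part of an action they were taking anyway; no separate security question is ever asked.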

ChromeOS provides us with some interesting examples. If we are going to have an operating system that only lets you use a browser, then clearly we’re going to have to let that browser do some things we would not normally expect a browser to do, like access the webcam or interact with your USB devices. There is simply no way to have those operations be secure by default – some web pages should have access to the camera and some should not, and there’s no way to tell which is which without involving the user.

Of course, we’ve traditionally allowed any program we install on a conventional operating system to access these things if it wants to, but the stupidity of that practice becomes very clear when we instead worry about what a web page can do. Why do we continue to grant these broad permissions to executables? Once more, it is largely because we don’t want to bother the user with these microdecisions (we saw what a great idea that was with Vista), but hopefully the increasing power of the web will force us to figure out good ways to discern intent without getting in the user’s way. It seems to me that one opportunity we have with web interfaces is that we can place the APIs at a higher level, which allows us to ask the user more meaningful questions than when the security boundary is at the system call level. And obviously by “ask questions” I include ways to discern the intent of the user without explicitly asking him, as is done, for example, in a file open dialog: clearly what is indicated is a single file which the user wants to open. Modern browsers enforce that decision transparently, whereas modern operating systems just provide the file name as a hint to the executable – which can open any file it pleases.

Will the web teach us a better way? I don’t know, but one thing is clear: we can’t ignore these problems in the browser. “Stupid user shouldn’t have installed that evil executable” does not translate well into “stupid user shouldn’t have visited that evil web page”. We’re going to have to find some way to consult the user; we won’t be able to brush the problem under the table as we have done in operating systems.

One approach I am very interested in is to somehow use collective behaviour to make smarter default decisions. But more on that another time.

A final thought on the subject: what lunacy caused us to design systems where “cat foo” gets any more privilege than a read handle to foo plus write handles to stdout and stderr?
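A capability-style cat, by contrast, would be handed exactly those three things and nothing more (a hypothetical sketch):

```python
def cap_cat(src, out, err):
    """cat with precisely the authority the job needs: a read handle on
    the input and write handles for output and errors. It cannot name,
    open, or touch anything else on the system."""
    try:
        out.write(src.read())
    except OSError as exc:
        err.write("cat: %s\n" % exc)
```

The shell that invokes it would open `foo` (designation-as-authorisation again) and pass the handles in.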

23 Sep 2009

seL4

Filed under: Capabilities,Open Source,Programming,Security — Ben @ 13:13

In response to my not-so-recent rants on formal methods I was told to go and read up on seL4.

I’m glad I did, it’s really quite fascinating. To set the stage a little, L4 is a microkernel architecture with various implementations. Last time I looked it was mostly used by academics and was not very mature, but it seems to have moved on – for example, L4Linux is a version of it with Linux hosted on top, providing benefits like driver isolation.

seL4 is a variant of L4 that is based around a capability architecture (I confess I don’t know enough about L4 to understand how one variant can be capability-based without them all being, pointers welcome), which would be interesting in itself. But far more interesting than that – seL4 has been formally verified.

The approach is somewhat convoluted. First of all, they write a prototype of the kernel in Haskell, including all of the low-level hardware control (note that because this is a microkernel, this does not include most of the device drivers). This prototype is then linked to an emulator derived from QEMU, which allows user code to run in a binary compatible simulated environment. When user code calls the kernel, the emulator instead branches into the Haskell code, runs it, and then returns to the user code. This in itself would be pretty cool, but there’s more.

Next, there’s the abstract specification of the kernel. This describes the binary layout of arguments to system calls, the effect of each system call and of interrupts and traps (e.g. page faults), without describing the implementation. From the Haskell, they generate what they call an executable specification. This is done by an automatic translation from the subset of Haskell they use into a theorem proving language. They then prove that the executable specification is a refinement of the abstract specification. Which means, informally, that everything that is true of the abstract specification is also true of the executable specification. In other words, the executable specification implements what the abstract specification specifies.

However, we’re not there yet. The real kernel is written in C, by hand. This is required if you want a kernel that is appropriately optimised – automatic generation of C code from the Haskell would be possible, but unlikely to run fast. The C implementation is then verified by automatically translating it into the theorem proving language once more, and showing that this is a refinement of the executable specification. This automatic translation is another remarkable achievement, though they do not claim to be able to translate arbitrary C: it must be written to conform with some rules – for example, no passing addresses of local variables to functions. Clearly these rules are not too onerous: they managed to write a whole kernel that conformed to them.

Note that this does not rely on the correctness of the Haskell code! Since that is only used to generate the executable specification, which stands between the abstract specification and the C implementation, all we really care about is the correctness of the executable specification, which is proved. Although this strongly implies that the Haskell code is correct, it is in no way relied upon.
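The reason the Haskell drops out can be caricatured in a few lines: model each level as the set of behaviours it permits, read “refines” as subset inclusion, and observe that subset inclusion is transitive. This is only my toy illustration, nothing like the real Isabelle/HOL proofs:

```python
def refines(impl, spec):
    # impl refines spec if every behaviour impl permits, spec permits too.
    return impl <= spec  # subset inclusion on sets of behaviours

# Toy behaviour sets for the three levels of the seL4 proof chain.
abstract_spec = {"syscall handled", "page fault handled", "error returned"}
executable_spec = {"syscall handled", "page fault handled"}  # from the Haskell
c_implementation = {"syscall handled", "page fault handled"}
```

The proofs establish the two inner refinements; transitivity then gives the result that actually matters, that the C refines the abstract specification, without ever trusting the Haskell.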

So, the end result of all this is a number of fun things.

  • Most importantly, a kernel written in C that has a formal proof of correctness.
  • A Haskell prototyping environment for the kernel (I guess I’d like this part even more if I could figure out how to learn Haskell without sitting next to a Haskell guru).
  • The tools needed to adopt this approach for other code.
  • A nice write-up of the whole thing.
  • Detailed costs for doing this kind of proof, and the costs of making changes. See the paper for more, but the big picture is the whole thing took 20 man-years, of which 9 were development of (presumably reusable) tools and 11 were the formal seL4 proof itself.

10 Mar 2009

Capabilities for Python

Filed under: Capabilities,Security — Ben @ 16:13

Guido van Rossum has never been a big fan of this idea, and he recently unloaded a pile of reasoning as to why. Much of this really boils down to the unsuitability of existing Python implementations as a platform for a capability version of the language, though clearly there are language features that must go, too. There’s more on this point from tav, but perhaps his idea of translating Capability Python into Cajita is a more fruitful course…

Anyway, what intrigued me more than the specifics was this statement from Guido

The only differences are at the library level: you cannot write to the filesystem, you cannot create sockets or pipes, you cannot create threads or processes, and certain built-in modules that would support backdoors have been disabled (in a few cases, only the insecure APIs of a module have been disabled, retaining some useful APIs that are deemed safe). All these are eminently reasonable constraints given the goal of App Engine. And yet almost every one of these restrictions has caused severe pain for some of our users.

Securing App Engine has required a significant use of internal resources, and yet the result is still quite limiting. Now consider that App Engine’s security model is much simpler than that preferred by capability enthusiasts: it’s an all-or-nothing model that pretty much only protects Google from being attacked by rogue developers (though it also helps to prevent developers from attacking each other). Extrapolating, I expect that a serious capability-based Python would require much more effort to secure, and yet would place many more constraints on developers. It would have to have a very attractive “killer feature” to make developers want to use it…

There are two important mistakes in this.

Firstly, capability enthusiasts don’t prefer a security model in the sense that Guido is suggesting; we prefer a way of enforcing a security model. App Engine does this enforcement through layers of sandboxing whereas capability languages do it by not providing the untrusted code with the undesirable capabilities. Of course, a side effect of this approach is that capabilities allow far more subtle security models (e.g. “you can only write to this part of the file system” or “you can only write files a user has specifically designated” or “you can create sockets, but only for these destinations”) without much extra work, and so capability enthusiasts have a tendency to talk about and think in terms of those subtler models. However, Guido’s all-or-nothing model can be implemented easily with capabilities – we don’t have to be subtle if he doesn’t want us to be!
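One of those subtler models (“you can only write to this part of the file system”) is just an attenuated capability, which in a capability-safe language is a few lines of code. A hypothetical Python sketch (Python itself is not capability-safe, so this only illustrates the shape):

```python
import os

def make_subtree_writer(root):
    """A write capability attenuated to one directory subtree. Untrusted
    code is handed this closure instead of open(); the closure is then
    the whole of its filesystem authority."""
    root = os.path.realpath(root)

    def write(relpath, data):
        target = os.path.realpath(os.path.join(root, relpath))
        if not target.startswith(root + os.sep):
            raise PermissionError("outside the granted subtree")
        with open(target, "wb") as f:
            f.write(data)

    return write
```

Granting the all-or-nothing model is the degenerate case: hand out no closures at all.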

This fallacy causes the second error – because the security model does not have to be subtler, there’s no particular reason to imagine it should take any longer to implement. Nor need it place many extra constraints on developers (I will concede that it must place some constraints because not all of Python is capability-safe). Developers are really only constrained by capability languages in the intended sense: they can’t do the things we don’t want them to do. If the security models are the same, the constraints will be the same, regardless of whether you use sandboxes or capabilities.

Incidentally, I tried to sell the idea of capabilities to the App Engine team several years ago. Given how far we’ve come with Caja in a year, working on a language that is definitely less suited to capabilities than Python is, I would be very surprised if we could not have done the same for Python by now.

16 Dec 2008

Caja Goes Live on Yahoo!

Filed under: Caja,Capabilities,Security — Ben @ 14:28

Yahoo! yesterday launched their new development platform for My Yahoo! and Yahoo! Mail, which uses Caja to protect users from malicious gadgets. This means Caja suddenly got 275,000,000 users. Wow! I guess this makes Caja the most widely used capability language ever.

But what I’m most excited about is that there’s virtually no mention of Caja at all. Why? Because Caja gives you security without getting in your way – the fact that end users don’t need to know about it at all – and even developers hardly have to care – is a great success for Caja.

29 Oct 2008

Yahoo, Caja, OpenSocial

Filed under: Caja,Capabilities,Open Source,Programming,Security — Ben @ 13:01

I’m very excited that Yahoo! have launched their gadget platforms, including an OpenSocial platform. Why am I excited? Because Yahoo! require all gadgets to use Caja so that they can be sure the gadgets behave themselves without review. Caja allows the container (i.e. Yahoo!’s platform, in this case) to completely confine the untrusted Javascript (i.e. the gadget, in this case), only allowing it to perform “safe” operations. All other platforms either have to manually review gadgets or take the risk that the gadgets will do something evil to their users.

22 May 2008

Preprint: Access Control

Filed under: Capabilities,Programming,Security — Ben @ 17:07

I have three currently unpublished papers that may be of interest. This one has been submitted but not yet accepted. As you can guess from the title, it’s about access control, particularly in the area of mashups, gadgets and web applications.

This is the introduction:

Access control is central to computer security. Traditionally, we wish to restrict the user to exactly what he should be able to do, no more and no less.

You might think that this only applies to legitimate users: where do attackers fit into this worldview? Of course, an attacker is a user whose access should be limited just like any other. Increasingly, of course, computers expose services that are available to anyone — in other words, anyone can be a legitimate user.

As well as users there are also programs we would like to control. For example, the program that keeps the clock correctly set on my machine should be allowed to set the clock and talk to other time-keeping programs on the Internet, and probably nothing else\footnote{Perhaps it should also be allowed a little long-term storage, for example to keep its calculation of the drift of the native clock.}.

Increasingly we are moving towards an environment where users choose what is installed on their machines, where their trust in what is installed is highly variable\footnote{A user probably trusts their operating system more than their browser, their browser more than the pages they browse to and some pages more than others.} and where “installation” of software is an increasingly fluid concept, particularly in the context of the Web, where merely viewing a page can cause code to run.

In this paper I explore an alternative to the traditional mechanisms of roles and access control lists. Although I focus on the use case of web pages, mashups and gadgets, the technology is applicable to all access control.

And the paper is here.

Regular readers will not be surprised to hear I am talking about capabilities.

16 Apr 2008

Nice Review of Caja

Filed under: Capabilities,Open Source,Programming,Security — Ben @ 1:41

Tim Oren posted about Caja.

…this adds up to a very good chance that something that’s right now fairly obscure could turn into a major force in Web 2.0 within months, not years. Because Caja modifies the de facto definition of JavaScript, it would have an immediate impact on any scripts and sites that are doing things regarded as unsafe in the new model. If you’ve got a Web 2.0 based site, get ready for a project to review for ‘Caja-safety’. If the Caja model spreads, then the edges of the sandbox are going to get blurry. Various users and sites will be able to make choices to allow more powerful operations, and figuring out which ones are significant and allow enhanced value could be a fairly torturous product management challenge, and perhaps allow market entry chances for more powerful forms of widgets and Facebook-style ‘apps’.

End of message.

5 Feb 2008

Caja in the News

Filed under: Capabilities,Open Source,Programming,Security — Ben @ 20:07

It seems MySpace’s developer launch today is causing Caja to get splattered all over the place.

24 Dec 2007

Handling Private Data with Capabilities

Filed under: Anonymity/Privacy,Capabilities,Programming,Security — Ben @ 7:10

A possibility I’ve been musing about that Caja enables is to give gadgets capabilities to sensitive (for example, personal) data which are opaque to the gadgets but nevertheless render appropriately when shown to the user.

This gives rise to some interesting, perhaps non-obvious consequences. One is that a sorted list of these opaque capabilities would itself have to be opaque, otherwise the gadget might be able to deduce things from the order. That is, the capabilities held in the sorted list would have to be unlinkable to the original capabilities (I think that’s the minimum requirement). This is because sort order reveals data – say the capabilities represented age or sexual preference and the gadget knows, for some other reason, what that is for one member of the list. It would then be able to deduce information about people above or below that person in the list.
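A sketch of what such a sorting service might look like, with the unlinkability requirement made explicit (entirely hypothetical; Caja itself provides no such API):

```python
import secrets

class OpaqueStore:
    """Sensitive values live behind opaque handles; a gadget can pass
    handles around but never see through them."""

    def __init__(self):
        self._values = {}

    def seal(self, value):
        handle = secrets.token_hex(16)
        self._values[handle] = value
        return handle

    def sorted_handles(self, handles):
        # Sort by the hidden values, then re-seal every element: the
        # fresh handles are unlinkable to the originals, so the gadget
        # cannot learn the order of values it already holds handles to.
        ordered = sorted(handles, key=lambda h: self._values[h])
        return [self.seal(self._values[h]) for h in ordered]
```

The trusted container can render either the old handles or the new ones for the user; only the correlation between the two lists is withheld from the gadget.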

Interestingly, you could allow the gadget to do arbitrary processing on the contents of the opaque capabilities, so long as it gave you (for example) a piece of code that could be confined only to do processing and no communication. Modulo wall-banging, Caja could make that happen. Although it might initially sound a bit pointless, this would allow the gadget to produce output that could be displayed to the user, despite the gadget itself not being allowed to know that output.

Note that because of covert channels, it should not be thought that this prevents the leakage of sensitive data – to do that, you would have to forbid any processing by the gadget of the secret data. But what this does do is prevent inadvertent leakage of data by (relatively) benign gadgets, whilst allowing them a great deal of flexibility in what they do with that data from the user’s point of view.

3 Dec 2007

Caja and OpenSocial

Filed under: Capabilities,Open Source,Programming,Security — Ben @ 2:03

An obvious place to use Caja is, of course, in OpenSocial. So, a bunch of us at Google have been experimenting with this use case and the first outcome is an update to the container sample which allows you to try running your gadget Caja-ised (gotta think of a better name for that). We even have instructions on how to Caja-ise your gadget.

We haven’t tried many gadgets yet, but the good news is the example gadgets worked with (almost[1]) no change. It seems clear that more complex gadgets are not likely to survive without at least some change, but we don’t yet know how hard that’s going to be. Feedback, as always, welcome! And don’t forget to join the mailing list to discuss it.

[1] Because Caja-ised code gets pushed into its own sandbox, any functions that need to be visible to the rest of the page (for example, functions that get called when you click a button) must be exported. Right now, you have to perform that export explicitly, but we expect to be able to remove that requirement.
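The export requirement can be sketched as follows. This is not Caja’s actual mechanism, and all the names are mine: think of the sandboxed gadget as receiving a table, and only what it explicitly publishes into that table is visible to the enclosing page.

```javascript
// Hypothetical sketch of the export requirement; names are illustrative,
// not Caja's real API.
function loadGadget(exportsTable) {
  // --- start of sandboxed gadget code ---
  function onButtonClick() {
    return 'clicked';
  }
  function internalHelper() { // never exported: stays invisible to the page
    return 'secret';
  }
  // Explicit export: only what is published crosses the sandbox boundary.
  exportsTable.onButtonClick = onButtonClick;
  // --- end of sandboxed gadget code ---
}

const pageExports = {};
loadGadget(pageExports);
// The page can now wire pageExports.onButtonClick to a button;
// internalHelper remains inaccessible from outside the sandbox.
```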

14 Nov 2007

Caja Code is Available

Filed under: Capabilities,Open Source,Programming,Security — Ben @ 15:51

Yesterday we put the initial (incomplete) version of the Caja code up at

From now on, all development will be done out in the open. External developers are welcome to come and play, too. Join the mailing list. Write code! Find bugs! Laugh at my mistakes! Have fun!

1 Nov 2007

Caja: Capability Javascript

Filed under: Capabilities,Open Source,Programming,Security — Ben @ 11:44

I’ve been running a team at Google for a while now, implementing capabilities in Javascript. Fans of this blog will remember that long ago I did a thing called CaPerl. The idea in CaPerl was to compile a slightly modified version of Perl into Perl, enforcing capability security in the process.

Caja follows a similar path, except rather than modify Javascript, we restrict it to a large subset. This means that a Caja program will run without modification on a standard Javascript interpreter – though it won’t be secure, of course! When it is compiled, the result, as with CaPerl, is standard Javascript that enforces capability security. What does this mean? It means that Web apps can embed untrusted third party code without concern that it might compromise either the application’s or the user’s security.
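The discipline Caja compiles in can be illustrated in ordinary Javascript using closures: untrusted code is handed exactly the objects it is meant to use, and has no way to reach anything else. This is a toy sketch of the principle, not Caja output, and the names are mine.

```javascript
// A closure-based capability: the log's internal state is reachable
// only through the two functions it chooses to hand out.
function makeLogCapability() {
  const lines = []; // private state, closed over
  return {
    write: msg => { lines.push(String(msg)); },
    readAll: () => lines.slice(),
  };
}

// The "gadget" receives only a write facet. With nothing else passed in,
// it has no authority over anything else.
function runGadget(writeLog) {
  writeLog('hello from the gadget');
}

const log = makeLogCapability();
runGadget(log.write); // hand over only the write facet, not readAll
```

The point of compiling to a restricted subset is to guarantee that the gadget really is confined to what it was handed – that it cannot, say, climb out via globals or prototype chains.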

Caja will be open source, under the Apache License. We’re still debating whether to use our existing code as a starting point or to take a different approach, but in any case, there’s plenty to be done.

Although the site has been up for a while, I was reluctant to talk about it until there was some way for you to be involved. Now there is – we have a public mailing list. Come along, read the docs (particularly the Halloween version of the spec) and join in the discussions. I’m very excited about this project and the involvement of some world class capability experts, including Mark Miller (of E fame) who is a full-time member of the Caja development team.

5 Mar 2007

Delegation and Identity Management

Filed under: Capabilities,Crypto,Identity Management,Security — Ben @ 9:27

Kim Cameron has written two pieces about delegation. The only thing I have to add is that when you delegate authority, in an ideal world you should also be able to restrict what you delegate. We could have a long discussion about that, but all I have to say right now is “capabilities“.

4 Dec 2005

Capabilities versus Jails

Filed under: Capabilities — Ben @ 15:23

In responses to a post mentioning CaPerl, the relative merits of jails and capabilities are touched upon, prompting my (somewhat tangential, I’ll admit) thought:

Capabilities have at least two obvious superiorities over jails.

The first is that designation is authorisation – that is, I don’t have to first tell the jail what the untrusted code can use, and then tell the code to use it.

The second is that when using capabilities it is easy to restrict resources in custom ways, since a capability is essentially code that wraps the mediated resource.
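To make the second point concrete, here is a toy sketch (all names mine) of a capability as code wrapping a mediated resource, enforcing a custom policy – in this case, a read budget – that a jail would struggle to express:

```javascript
// An attenuated capability: wraps a resource and permits only a fixed
// number of reads. The wrapper *is* the policy; no write is exposed at all.
function attenuateReads(resource, maxReads) {
  let used = 0;
  return {
    read: key => {
      if (used >= maxReads) throw new Error('read budget exhausted');
      used += 1;
      return resource[key];
    },
  };
}

const files = { motd: 'hello' }; // the mediated resource
const cap = attenuateReads(files, 2); // hand this, not `files`, to untrusted code
```

Untrusted code holding `cap` can read twice and nothing more; it never sees `files` itself.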

I will admit, though, that the CaPerl style of capability system can’t (neatly) control CPU usage. For that, you need a capability system that’s built into the operating system.

Since we’re talking about capabilities and Python, I’m reminded that some of the Twisted guys spent some time getting excited about caps with me during, hmm, PyCon, I think – and also wanted to control CPU – must be some kind of Python meme.

Finally, it’s been pointed out to me that Twisted incorporates capabilities somewhere in its guts (look for the Perspective Broker).
