Links

Ben Laurie blathering

30 Jul 2008

Is Your DNS Really Safe?

Filed under: Lazyweb,Security — Ben @ 6:26

Ever since the recent DNS alert, people have been testing their DNS servers with various cute things that measure how many source ports you use and how “random” they are. Not forgetting the command line versions, of course:

dig +short porttest.dns-oarc.net TXT
dig +short txidtest.dns-oarc.net TXT

which yield output along the lines of

"aaa.bbb.ccc.ddd is GREAT: 27 queries in 12.7 seconds from 27 ports with std dev 15253"

But just how GREAT is that, really? Well, we don’t know. Why? Because there isn’t actually a way to test for randomness. Your DNS resolver could be using some easily predicted random number generator like, say, a linear congruential one, as is common in the rand() library function, but DNS-OARC would still say it was GREAT. Believe them when they say it isn’t GREAT, though! Non-randomness we can test for.
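
To see why a statistical pass proves nothing, here is a minimal sketch (mine, not any resolver’s actual code) of a textbook linear congruential generator of the rand() variety: its output would sail through a ports-and-spread check, yet anyone who recovers the internal state can predict every port that follows.

# Textbook linear congruential generator, purely illustrative -- the constants
# are the classic a=1103515245, c=12345 pair, not any particular resolver's
# or libc's exact implementation.
class LCG:
    def __init__(self, seed):
        self.state = seed

    def next_port(self):
        self.state = (1103515245 * self.state + 12345) % (2 ** 31)
        # Map the state into the ephemeral port range 1024..65535.
        return 1024 + self.state % (65536 - 1024)

resolver = LCG(seed=123456789)
observed = [resolver.next_port() for _ in range(27)]
print("27 plausible-looking ports:", observed)

# An attacker who knows the algorithm and has recovered the internal state
# (assumed here for brevity) reproduces the entire future sequence.
attacker = LCG(seed=123456789)
assert [attacker.next_port() for _ in range(27)] == observed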

So, how do you tell? The only way to know for sure is to review the code (or the silicon, see below). If someone tells you “don’t worry, we did statistical checks and it’s random” then make sure you’re holding on to your wallet – he’ll be selling you a bridge next.

But, you may say, we already know all the major caching resolvers have been patched and use decent randomness, so why is this an issue?

It is an issue because of NAT. If your resolver lives behind NAT (which is probably way more common since this alert, as many people’s reaction [mine included] was to stop using their ISP’s nameservers and stand up their own to resolve directly for them) and the NAT is doing source port translation (quite likely), then you are relying on the NAT gateway to provide your randomness. But random ports are not the best strategy for a NAT gateway: it wants to avoid re-using ports too soon, so it tends to use an LRU queue instead. Pretty clearly an LRU queue can be probed and manipulated into predictability.
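
To make the LRU point concrete, here is a hypothetical allocator (not modelled on any vendor’s NAT) that always hands out the least-recently-used external port; a short burst of attacker-triggered queries is enough to learn which port the victim’s next query will use.

from collections import deque

# Hypothetical NAT source-port allocator: hand out the least-recently-used
# external port so ports aren't re-used too soon. Tiny range for illustration.
class LRUPortAllocator:
    def __init__(self, low=1024, high=1034):
        self.queue = deque(range(low, high))

    def allocate(self):
        port = self.queue.popleft()   # least recently used
        self.queue.append(port)       # back of the queue for later re-use
        return port

nat = LRUPortAllocator()

# The attacker induces a few queries through the NAT (e.g. lookups of names
# they control) and watches the source ports cycle in a fixed order...
probed = [nat.allocate() for _ in range(5)]
print("probed ports:   ", probed)

# ...so the port of the victim's next query is no longer a mystery.
print("predicted next: ", nat.queue[0])
print("actual next:    ", nat.allocate())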

So, if your NAT vendor is telling you not to worry, because the statistics say they are “random”, then I would start worrying a lot: your NAT vendor doesn’t understand the problem. It’s also pretty unhelpful for the various testers out there not to mention this issue, I must say.

Incidentally, I’m curious how much this has impacted the DNS infrastructure in terms of traffic – anyone out there got some statistics?

Oh, and I should say that number of ports and standard deviation are not a GREAT way to test for “randomness”. For example, the sequence 1000, 2000, …, 27000 has 27 ports and a standard deviation of over 7500, which looks pretty GREAT to me. But not very “random”.
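
If you want to check that arithmetic, here is a quick sanity check in Python (mine, nothing more):

import statistics

ports = list(range(1000, 27001, 1000))     # 1000, 2000, ..., 27000
print(len(ports))                          # 27 distinct ports
print(round(statistics.pstdev(ports)))     # ~7789 (population std dev)
print(round(statistics.stdev(ports)))      # ~7937 (sample std dev)
# Comfortably "GREAT" by the ports-plus-std-dev metric, and yet the
# sequence is perfectly regular and utterly predictable.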

27 Jul 2008

Why Not W3C or IETF?

Filed under: Open Standards — Ben @ 12:46

Ralf Bendrath asks what’s wrong with the W3C and the IETF that the OWF is trying to solve. So, to be very brief…

The W3C is a pay-to-play cartel that increasingly gets nothing done. Open source developers can’t even participate, as a rule. It also has an IPR policy that’s just as crap as everything else we’re trying not to emulate. So, not a realistic alternative.

The IETF is much better, but its main problem is that it has no IPR policy at all, other than “tell us what you know”. In practice this often works out OK, but there have been some notable instances where the outcome was pretty amazingly ungood, such as RSA’s stranglehold over SSL and TLS for years – a position Certicom are now trying to emulate with ECC, also via the IETF.

A more minor objection to the IETF that I hope the OWF will solve similarly to the ASF is that it is actually too inclusive. Anyone is allowed to join a working group and have as much say as anyone else. This means that any fool with time on their hands can completely derail the process for as long as they feel like. In my view, a functional specification working group should give more weight to those that are actually going to implement the specification and those who have a track record of actually being useful, much as the ASF pays more attention to contributors, committers and members, in that order.

24 Jul 2008

Open Web Foundation

Filed under: Open Source,Open Standards — Ben @ 18:41

I’m very pleased that we’ve launched the Open Web Foundation today. As Scott Kveton says

The OWF is an organization modeled after the Apache Software Foundation; we wanted to use a model that has been working and has stood the test of time.

When we started the ASF, we wanted to create the best possible place for open source developers to come and share their work. As time went by, it became apparent that the code wasn’t the only problem – standards were, too. The ASF board (and members, I’m sure) debated the subject several times whilst I was serving on it, and no doubt still does, but we always decided that we should focus on a problem we knew we could solve.

So, I’m extra-happy that finally a group of community-minded volunteers have come together to try to do the same thing for standards.

23 Jul 2008

Getting At Public Data

Filed under: Civil Liberties,Digital Rights — Ben @ 14:46

The government has quietly launched two quite fascinating initiatives. I have no idea why there wasn’t more fanfare. I was even at OpenTech, where one was announced, and I didn’t know!

Firstly, Show Us A Better Way

Ever been frustrated that you can’t find out something that ought to be easy to find? Ever been baffled by league tables or ‘performance indicators’? Do you think that better use of public information could improve health, education, justice or society at large?

The UK Government wants to hear your ideas for new products that could improve the way public information is communicated.

And 20 grand for the best ideas, too.

Secondly, The Public Sector Unlocking Service (Beta). I love that they put “Beta” in there. Tell them about crown copyright data some bureaucrat is hoarding, and they’ll read them the riot act. Awesome.

The Register on Security

Filed under: Security — Ben @ 4:54

So, The Register has a story on Mozilla doing security metrics. Which is cool.

But what tickles me is that The Register thinks I should download an Excel file to read more about the project. Yeah, right.

19 Jul 2008

Caja Security Review

Filed under: Programming,Security — Ben @ 16:00

A few weeks ago, we invited a group of external security experts to come and spend a week trying to break Caja. As we expected, they did. Quite often. In fact, I believe a team member calculated that they filed a new issue every 5 minutes throughout the week.

The good news, though, was that nothing they found was too hard to fix. Also, their criticism has led to some rethinking of aspects of our approach, which we hope will make the next security review easier and Caja more robust.

You can read a summary of their findings.

18 Jul 2008

Analysing Data Loss

Filed under: Privacy,Security — Ben @ 14:39

My colleague, Steve Weis, has an interesting article analysing the Dataloss Database. With pictures!

Within accidental disclosures, 36% were due to improper disposal of media or computers. Surprisingly, 30% were due to leaks via snail mail.

10 Jul 2008

ACTA, The Pirate Bay and BTNS

Doc Searls just pointed me at a couple of articles. The first is about ACTA.

ACTA, first unveiled after being leaked to the public via Wikileaks, has sometimes been lauded by its supporters as “The Pirate Bay-killer,” due to its measures to criminalize the facilitation of copyright infringement on the internet – text arguably written specifically to beat pirate BitTorrent trackers. The accord will add internet copyright enforcement to international law, force national ISPs to respond to international information requests, and subject iPods and other electronic devices to ex parte searches at international borders.

Obviously this is yet another thing we must resist. The Pirate Bay’s answer to this

IPETEE would first test whether the remote machine is supporting the crypto technology; once that’s confirmed it would then exchange encryption keys with the machine before transmitting your actual request and sending the video file your way. All data would automatically be unscrambled once it reaches your machine, so there would be no need for your media player or download manager to support any new encryption technologies. And if the remote machine didn’t know how to handle encryption, the whole transfer would fall back to an unencrypted connection.

is a great idea, but … it’s already been done by the IETF BTNS (Better-Than-Nothing Security) Working Group.

The WG has the following specific goals:

a) Develop an informational framework document to describe the motivation and goals for having security protocols that support anonymous keying of security associations in general, and IPsec and IKE in particular
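
For illustration only, here is a toy, application-level sketch of the opportunistic flavour both IPETEE and BTNS are after, using TLS as a stand-in for the IPsec/IKE machinery BTNS actually specifies (the function and its behaviour are mine, not from either spec): try unauthenticated encryption first, and fall back to plaintext if the far end can’t play.

import socket, ssl

def connect_opportunistically(host, port):
    # Toy illustration of opportunistic, unauthenticated encryption with
    # plaintext fallback. Real BTNS does this at the IPsec/IKE layer, not
    # with TLS sockets.
    raw = socket.create_connection((host, port), timeout=5)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE        # anonymous keying: no authentication
    try:
        return ctx.wrap_socket(raw, server_hostname=host), "encrypted"
    except ssl.SSLError:
        raw.close()                        # peer can't do TLS: fall back to the clear
        return socket.create_connection((host, port), timeout=5), "plaintext"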

Hmmm. I guess I should figure out how I switch this on. Anyone?

9 Jul 2008

FreeBMD Gets New Boots

Filed under: General — Ben @ 22:04

FreeBMD recently moved its servers from one of The Bunker‘s data centres to the other.

Our marvellous sysadmin posted some pictures. It never fails to amaze me how much tin it takes to keep that crazy idea running.

3 Jul 2008

ORG Report on E-counting

Filed under: Civil Liberties,Crypto,Digital Rights,Security — Ben @ 13:47

It seems like a long time since I spent a very long afternoon (and evening) observing the electronic count of the London Elections. Yesterday, the Open Rights Group released its report on the count. The verdict?

there is insufficient evidence available to allow independent observers to state reliably whether the results declared in the May 2008 elections for the Mayor of London and the London Assembly are an accurate representation of voters’ intentions.

There was lots of nice machinery and pretty screens to watch, but in my view three more things were needed to ensure confidence in the vote.

  • A display that showed (a random selection of) ballots alongside the vote that had been automatically recorded for each.
  • No machines connected to the network that could not be observed.
  • A commitment to the vote (I mean this in the cryptographic sense), followed by a manual recount of randomly selected ballot boxes.

The last point is technically tricky to do properly, but I think it could be achieved. For example, take the hash of each ballot box’s count, then form a Merkle tree from those. Publish the root of the tree as the commitment, then after the manual recount, show that the hashes of the (electronic) counts for those boxes (which you would have to reveal anyway to verify the recount) are consistent with the tree.
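
Here is a minimal sketch of that commitment and the later consistency check, with made-up box names and counts (it has nothing to do with the software actually used at the count):

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    # All levels of the Merkle tree, leaves first, root last.
    levels = [leaves[:]]
    while len(levels[-1]) > 1:
        cur = levels[-1][:]
        if len(cur) % 2:
            cur.append(cur[-1])            # duplicate the last node on odd levels
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def audit_path(levels, index):
    # Sibling hashes needed to recompute the root from leaf `index`.
    path = []
    for level in levels[:-1]:
        lvl = level[:] + ([level[-1]] if len(level) % 2 else [])
        sibling = index ^ 1                # the other node in our pair
        path.append((lvl[sibling], sibling < index))
        index //= 2
    return path

def verify(leaf, path, root):
    node = leaf
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# Hypothetical per-box electronic counts (names and numbers made up).
counts = {"box-001": b"A:312,B:287,C:44",
          "box-002": b"A:198,B:305,C:61",
          "box-003": b"A:401,B:122,C:97"}
names = sorted(counts)
leaves = [h(n.encode() + b"|" + counts[n]) for n in names]
levels = build_levels(leaves)
root = levels[-1][0]
print("published commitment:", root.hex())   # publish this *before* the recount

# Later: box-002 is recounted by hand; reveal its electronic count and its
# audit path, and anyone can check consistency with the published root.
i = names.index("box-002")
assert verify(leaves[i], audit_path(levels, i), root)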
