
Ben Laurie blathering


Is Your DNS Really Safe?

Ever since the recent DNS alert, people have been testing their DNS servers with various cute tools that measure how many source ports you use and how “random” they are. Not forgetting the command-line versions, of course:

dig +short porttest.dns-oarc.net TXT
dig +short txidtest.dns-oarc.net TXT

which yield output along the lines of

"aaa.bbb.ccc.ddd is GREAT: 27 queries in 12.7 seconds from 27 ports with std dev 15253"

But just how GREAT is that, really? Well, we don’t know. Why? Because there isn’t actually a way to test for randomness. Your DNS resolver could be using some easily predicted random number generator like, say, a linear congruential one, as is common in the rand() library function, but DNS-OARC would still say it was GREAT. Believe them when they say it isn’t GREAT, though! Non-randomness we can test for.
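To make the point concrete, here is a sketch (in Python, with illustrative LCG constants borrowed from Numerical Recipes; the port-selection scheme is invented for the example) of a resolver whose ports would look plausibly spread out to a port-count/std-dev test, yet carry zero entropy for anyone who knows or recovers the generator’s state:

```python
def lcg_next(state):
    # Classic linear congruential step; constants are illustrative only.
    return (1664525 * state + 1013904223) % 2**32

# A naive resolver picks "random" ports by truncating successive LCG states.
state = 123456789
ports = []
for _ in range(27):
    state = lcg_next(state)
    ports.append(1024 + state % 64512)

# Statistically these 27 ports look varied and widely spread -- a
# port-count/std-dev test would happily call this GREAT.

# But the sequence is fully determined by the (public) recurrence:
# anyone who recovers the internal state once can replay every port.
state2 = 123456789
predicted = []
for _ in range(27):
    state2 = lcg_next(state2)
    predicted.append(1024 + state2 % 64512)

assert predicted == ports  # attacker's prediction matches exactly
```

The statistical test and the attacker are measuring different things: the test sees the output distribution, the attacker exploits the state transition.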

So, how do you tell? The only way to know for sure is to review the code (or the silicon, see below). If someone tells you “don’t worry, we did statistical checks and it’s random” then make sure you’re holding on to your wallet – he’ll be selling you a bridge next.

But, you may say, we already know all the major caching resolvers have been patched and use decent randomness, so why is this an issue?

It is an issue because of NAT. If your resolver lives behind NAT (which is probably far more common since this alert, as many people’s reaction [mine included] was to stop using their ISP’s nameservers and stand up their own to resolve directly) and the NAT is doing source port translation (quite likely), then you are relying on the NAT gateway to provide your randomness. But random ports are not the best strategy for a NAT: it wants to avoid re-using ports too soon, so it tends to use an LRU queue instead. Pretty clearly an LRU queue can be probed and manipulated into predictability.
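A deliberately simplified sketch of that probing attack (the allocator class and port range are invented for illustration; a real NAT has churn from other traffic, but the attacker can often drive enough connections to dominate the queue order anyway):

```python
from collections import deque

# Toy NAT port allocator: hand out the least-recently-used port,
# recycle released ports to the tail of the queue.
class LruPortAllocator:
    def __init__(self, ports):
        self.free = deque(ports)      # head = least recently used
    def allocate(self):
        return self.free.popleft()
    def release(self, port):
        self.free.append(port)

nat = LruPortAllocator(range(30000, 30064))

# An attacker who can trigger outbound connections through the NAT
# (e.g. via a web page the victim loads) observes the allocated ports:
probes = [nat.allocate() for _ in range(3)]

# Allocation follows the queue deterministically, so the port the
# victim's next DNS query gets is predictable (here, for a freshly
# seeded queue, simply the next port in sequence):
predicted_next = probes[-1] + 1
actual_next = nat.allocate()
assert actual_next == predicted_next
```

The point is not the exact prediction rule, but that an LRU queue turns port choice into a deterministic function of observable history, which is exactly what the attacker needs.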

So, if your NAT vendor is telling you not to worry, because the statistics say they are “random”, then I would start worrying a lot: your NAT vendor doesn’t understand the problem. It’s also pretty unhelpful for the various testers out there not to mention this issue, I must say.

Incidentally, I’m curious how much this has impacted the DNS infrastructure in terms of traffic – anyone out there got some statistics?

Oh, and I should say that number of ports and standard deviation are not a GREAT way to test for “randomness”. For example, the sequence 1000, 2000, …, 27000 has 27 ports and a standard deviation of over 7500, which looks pretty GREAT to me. But not very “random”.
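You can check that claim in a couple of lines (using the population standard deviation; the sample standard deviation comes out slightly higher, around 7900):

```python
import statistics

# The 27-port sequence from the text: 1000, 2000, ..., 27000.
ports = list(range(1000, 28000, 1000))

distinct = len(set(ports))           # 27 distinct ports
spread = statistics.pstdev(ports)    # population std dev, roughly 7789

# "GREAT" by the port-count/std-dev metric, yet perfectly sequential.
assert distinct == 27 and spread > 7500
```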

4 Comments

  1. Why not set up your resolver without going through NAT? I’ve been meaning to set up my own as OpenDNS has been annoying me (returning pages instead of ‘not found’).

    Since I already have a webserver, I can simply run it on the same box.

    Comment by Charles Darke — 30 Jul 2008 @ 9:13

  2. You can easily configure OpenDNS to return ‘not found’ in earnest.

    Comment by Peter van Dijk — 31 Jul 2008 @ 6:02

  3. @Ben: I agree with William Allen Simpson (see http://www.mail-archive.com/cryptography%40metzdowd.com/msg09561.html) that the term ‘unpredictability’ is preferable over ‘randomness’.

    @Charles: ISPs may use multiple unpredictable source IP-addresses for submitting non-recursive DNS queries, which further complicates attacks.

    W.r.t. OpenDNS: spoofed DNS answers seemingly originating from an OpenDNS server IP-address, regardless of their actual origin, are not likely to be blocked at your ISP’s perimeter, while spoofed answers seemingly originating from your ISP’s DNS server IP-address likely are. When using OpenDNS, you’d better make sure the DNS requests _you_ send to OpenDNS have unpredictable transaction ID and source port.

    Comment by Bitwiper — 1 Aug 2008 @ 16:31

  4. Don’t forget to mention firewalls – pretty much anything that cheerfully rewrites your packets for you is problematic…

    Comment by Cat — 6 Aug 2008 @ 22:53
