Links

Ben Laurie blathering

24 Feb 2009

Doing DNSSEC Right

Filed under: DNSSEC,Security — Ben @ 16:36

Since posting about DNSSEC, I’ve had lots of great feedback. So, in no particular order…

Various people have pointed out that DLV is not as bad as I suggested:

  • DLV is only activated for queries that cannot be proved secure in the cache
  • DLV employs aggressive negative caching – it works out whether existing cached NSEC (and NSEC3?) records would prove nonexistence of a record before bothering to query it
  • DLV is not used for domains that have trusted keys

Although the second measure is, as I remember it, strictly speaking against the rules (one is not supposed to calculate negative responses from the cache), clearly it can be stipulated that a DLV server must behave when serving NSEC records. Anyway, the net result is that the overhead of DLV is actually quite reasonable. I still say it should be run by every DLVed domain for every other, though. In any case, I am going to switch it on in my own caching resolver.
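The three mitigations above amount to a short-circuit decision procedure. Here's an illustrative sketch of that logic — this is not BIND's actual implementation, and all the names, structures, and the cache representation are invented for the example:

```python
# Hypothetical sketch of the DLV short-circuit logic described above.
# The cache is modelled as a plain dict; real resolvers are more subtle.

def should_query_dlv(name, cache, trusted_keys):
    """Decide whether a lookaside (DLV) query is needed for `name`."""
    # 1. No DLV lookup for domains already covered by a trust anchor.
    for anchor in trusted_keys:
        if name == anchor or name.endswith("." + anchor):
            return False
    # 2. No DLV lookup if the cache can already prove the answer secure.
    if cache.get(name, {}).get("proven_secure"):
        return False
    # 3. Aggressive negative caching: a cached NSEC whose owner/next pair
    #    brackets `name` already proves the DLV entry does not exist.
    #    (Toy comparison: real NSEC ordering is canonical DNS ordering.)
    for nsec in cache.get("nsec_records", []):
        if nsec["owner"] < name < nsec["next"]:
            return False
    return True
```

Only when all three checks fail does the resolver pay for an actual lookaside query, which is why the overhead turns out to be reasonable.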

One thing I wanted to achieve is that a DNSSEC-ignorant resolver downstream of my caching resolver would only get validated results. I tried to do this with the dnssec-must-be-secure configuration option – but this is wrong. That option requires everything to be signed, whereas in DNSSEC it is perfectly OK for a zone to be unsigned so long as its parent delegates to it with no keys (bear in mind that with DNSSEC the nonexistence of the keys is provable, and so this is secure). In fact, BIND 9.3 behaves as I want it to with just DNSSEC enabled. In BIND 9.4 onwards I will have to switch it on with the dnssec-validation option (gee, thanks, ISC, for making a backward-incompatible change!).

Jelte Jansen operates a domain with various broken entries – this is very handy for testing and I now include its key in my configuration. Note that if you want to see a record that fails validation, then you need to set the CD bit (with dig, +cd or +cd +dnssec if you want to see the DNSSEC records).

Paul Hoffman wonders why I would prefer a signature (for anchors2keys) to download over HTTPS. The reason is that HTTPS download doesn’t really prove the file hasn’t been interfered with – the server will serve anything that happens to be in the filesystem over HTTPS, of course. A signature would be done with a key that I would hope is very strictly supervised, and so is far more trustworthy.

Incidentally, for DNSSEC newbies, one of the interesting features of DNSSEC is that it can be done entirely with offline keys. Proving negatives (i.e. the nonexistence of names) with such a constraint is an interesting problem – and one that I spent three years working on, leading in the end to RFC 5155.

I’m sure everyone is tired of reading my config and makefile, so there’s a tarball here.

Finally, thanks very much to all the experts for the excellent feedback.

22 Feb 2009

DNSSEC With DLV

Filed under: DNSSEC,Security — Ben @ 18:38

Tony asks “what about DLV?”.

DLV is Domain Lookaside Validation. The idea is that if your resolver can’t find a trust anchor for foo.bar.example.com, then it can go and look in a lookaside zone, hosted at, say, dlv.isc.org, for trust anchors. So, it would first look for com.dlv.isc.org and then example.com.dlv.isc.org and so forth.
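The name construction is mechanical — a sketch (the function name and defaults are mine, for illustration):

```python
def dlv_lookaside_names(qname, dlv_zone="dlv.isc.org"):
    """Candidate lookaside names for `qname`, shortest suffix first:
    com.dlv.isc.org, then example.com.dlv.isc.org, and so forth."""
    labels = qname.rstrip(".").split(".")
    # Walk from the shortest suffix (TLD) down to the full name.
    return [".".join(labels[i:]) + "." + dlv_zone
            for i in range(len(labels) - 1, -1, -1)]
```

Each of those candidate names is a potential extra query — which is where the efficiency complaint below comes from.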

So, what do I think of this? It’s another way to solve the problem of having the root not signed.

How does it compare to IANA’s ITAR?

  1. It’s much less efficient – all those extra lookups for every query.
  2. It covers more than just TLDs – ITAR could, too, but it doesn’t, for whatever reason.
  3. There doesn’t seem to be a way to force it, like there is for ITAR. That is, I would like to configure my caching server to force DNSSEC for every domain that exists in DLV, but I don’t believe I can. This makes DLV practically useless, since now only clients that check the AD bit will be aware of the failure.

Also, I think it would be organisationally better if all the participating domains would run DLV for each other, rather than have any single party running it.

Anyway, I modified my setup to also use DLV. Here’s the new Makefile:

all: run

run: named.root rndc.key itar-trusted-keys.conf force-dnssec.conf isc-dlv.conf
	named -c named.conf -d 10 -g

named.root!
	rm -f named.root
	wget ftp://ftp.rs.internic.net/domain/named.root

rndc.key:
	rndc-confgen -a -c rndc.key

itar-trusted-keys.conf: anchors2keys anchors.xml
	./anchors2keys < anchors.xml > /tmp/itar-trusted-keys
	mv /tmp/itar-trusted-keys itar-trusted-keys.conf

anchors.xml! iana-pgp-keys
# appears to break without -v!
	rsync -v rsync.iana.org::itar/anchors.xml rsync.iana.org::itar/anchors.xml.sig .
	gpg --no-default-keyring --keyring ./iana-pgp-keys --verify anchors.xml.sig anchors.xml

anchors2keys:
	wget --no-check-certificate https://itar.iana.org/_misc/anchors2keys
	chmod +x anchors2keys

iana-pgp-keys:
	html2text -nobs http://www.icann.org/en/general/pgp-keys.htm > iana-pgp-keys.tmp
# IANA's PGP keys suck. Clean them up...
	awk '/^>/ { print substr($$0,2,100); next; } /^Version:/ { print; print ""; next; } { print }' < iana-pgp-keys.tmp > iana-pgp-keys.tmp2
	gpg --import iana-pgp-keys.tmp2
	gpg --export 81D464F4 | gpg --no-default-keyring --keyring ./iana-pgp-keys --import
	rm iana-pgp-keys.tmp*

force-dnssec.conf: itar-trusted-keys.conf
	awk '/^"/ { gsub(/"/, "", $$1); print "dnssec-must-be-secure \"" $$1 "\" true;"; }' < itar-trusted-keys.conf | sort -u > force-dnssec.conf

isc-pgp-keys:
	rm -f 363
	wget --no-check-certificate https://www.isc.org/node/363
	html2text < 363 > isc-key.tmp
	awk '/^Version:/ { print; print ""; next; } { print }' < isc-key.tmp > isc-key.tmp2
	gpg --import isc-key.tmp2
	gpg --export 1BC91E6C | gpg --no-default-keyring --keyring ./isc-pgp-keys --import
	rm isc-key.tmp* 363

isc-dlv.conf: isc-pgp-keys
	rm -f dlv.isc.org.named.conf
	wget http://ftp.isc.org/www/dlv/dlv.isc.org.named.conf http://ftp.isc.org/www/dlv/dlv.isc.org.named.conf.asc
	gpg --no-default-keyring --keyring ./isc-pgp-keys --verify dlv.isc.org.named.conf.asc dlv.isc.org.named.conf
	mv dlv.isc.org.named.conf isc-dlv.conf

test:
	dig -p5453 +dnssec www.dnssec.se @localhost

and here’s named.conf:

options {
  listen-on port 5453 { 127.0.0.1; };
  pid-file "named.pid";
  dnssec-enable true;
  dnssec-lookaside . trust-anchor dlv.isc.org.;
  include "force-dnssec.conf";
};

// obtain this file from ftp://ftp.rs.internic.net/domain/named.root
zone "." { type hint; file "named.root"; };

// include the rndc key
include "rndc.key";
controls {
  inet 127.0.0.1 port 1953
    allow { 127.0.0.1; }
    keys { "rndc-key"; };
};

// include ITAR trust anchors
include "itar-trusted-keys.conf";

// include ISC DLV trust anchor
include "isc-dlv.conf";

Enjoy.

Incidentally, I have enabled “forced ITAR” on my main resolver, so we’ll see how that goes. I haven’t added DLV because, like I say, failure would not be noticed, so what’s the point of all the overhead?

What Is DNSSEC Good For?

Filed under: Crypto,DNSSEC,Security — Ben @ 18:24

A lot of solutions to all our problems begin with “first find a public key for the server”, for example, signing XRD files. But where can we get a public key for a server? Currently the only even slightly sane way is by using an X.509 certificate for the server. However, there are some problems with this approach:

  1. If you are going to trust the key, then the certificate must come from a trusted CA, and hence costs money.
  2. Because the certificate is a standard X.509 certificate, it can be used (with the corresponding private key, of course) to validate an HTTPS server – but you may not want to trust the server with that power.
  3. The more we (ab)use X.509 certificates for this purpose, the more services anyone with a certificate can masquerade as (for the certified domain, of course).

One obvious way to fix these is to add extensions to the certificates that prevent their use for inappropriate services. Of course, then we would have to get the CAs to support these extensions and figure out how to validate certificate requests that used them.

But I have to wonder why we’re involving CAs in this process at all. All the CA does is establish that the person requesting the certificate is the owner of the corresponding domain. But why do we need that service? Why could the owner of the domain not simply include the certificate in the DNS – after all, only the owner of the domain can do that, so what further proof is required?

Obviously the answer is: DNS is not secure! This would allow anyone to easily spoof certificates for any domain. Well, yes – that’s why you need DNSSEC. Forgetting the details of DNSSEC, the interesting feature is that the owner of a domain also owns a private key that can sign entries in that domain (and no-one else does, if the owner is diligent). So, the domain owner can include any data they want in their zone and the consumer of the data can be sure, using DNSSEC, that the data is valid.

So, when the question “what is the public key for service X on server Y?” arises, the answer should be “look it up in the DNS with DNSSEC enabled”. The answer is every bit as secure as current CA-based certificates, and, what’s more, once the domain owner has set up his domain, there is no further cost to him – any new keys he needs he can just add to his zone and he’s done.
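For illustration, a domain owner could publish such a key with an existing record type like CERT (RFC 4398) and sign it along with the rest of the zone. This fragment is entirely invented — the names, key tags, timestamps, and base64 blobs are placeholders, shown only to give the shape of the idea:

```
; hypothetical fragment of a signed zone -- all values are placeholders
www.example.com.  3600 IN CERT  PKIX 12345 5 MIIBIjANBgkq...
www.example.com.  3600 IN RRSIG CERT 5 3 3600 20090322000000 (
                      20090222000000 62658 example.com.
                      aBcDsIgNaTuRe...= )
```

The RRSIG over the CERT record is what lets a DNSSEC-aware consumer check, with no CA involved, that the key really came from the zone's owner.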

Does DNSSEC have any other uses? OK, it would be nice to know that the A record you just got back corresponds to the server you were looking for, but if you trust a connection just on the basis that you used the right address, you are dead meat – you’ll need some key checking on top of it (for example, by using TLS) to avoid attacks by evil proxies (such as rogue wifi hotspots) or routing attacks and so forth. For me, the real value in DNSSEC is cryptographic key distribution.

Using DNSSEC Today

Filed under: DNSSEC,Security — Ben @ 15:54

It’s been a while since I’ve properly paid attention to developments in the DNSSEC world, so I was surprised to learn that IANA now has an “Interim Trust Anchor Repository”. No need to wait any longer for the root to be signed, you can configure yourself to do DNSSEC right now.

Here’s how. First off, grab this Makefile:

all: run

run: named.root rndc.key itar-trusted-keys.conf force-dnssec.conf
	named -c named.conf -d 10 -g

named.root!
	rm -f named.root
	wget ftp://ftp.rs.internic.net/domain/named.root

rndc.key:
	rndc-confgen -a -c rndc.key

itar-trusted-keys.conf: anchors2keys anchors.xml
	./anchors2keys < anchors.xml > /tmp/itar-trusted-keys
	mv /tmp/itar-trusted-keys itar-trusted-keys.conf

anchors.xml! iana-pgp-keys
# appears to break without -v!
	rsync -v rsync.iana.org::itar/anchors.xml rsync.iana.org::itar/anchors.xml.sig .
	gpg --no-default-keyring --keyring ./iana-pgp-keys --verify anchors.xml.sig anchors.xml

anchors2keys:
	wget --no-check-certificate https://itar.iana.org/_misc/anchors2keys
	chmod +x anchors2keys

iana-pgp-keys:
	html2text -nobs http://www.icann.org/en/general/pgp-keys.htm > iana-pgp-keys.tmp
# IANA's PGP keys suck. Clean them up...
	awk '/^>/ { print substr($$0,2,100); next; } /^Version:/ { print; print ""; next; } { print }' < iana-pgp-keys.tmp > iana-pgp-keys.tmp2
	gpg --import iana-pgp-keys.tmp2
	gpg --export 81D464F4 | gpg --no-default-keyring --keyring ./iana-pgp-keys --import
	rm iana-pgp-keys.tmp*

force-dnssec.conf: itar-trusted-keys.conf
	awk '/^"/ { gsub(/"/, "", $$1); print "dnssec-must-be-secure \"" $$1 "\" true;"; }' < itar-trusted-keys.conf | sort -u > force-dnssec.conf

and this named.conf:

options {
  listen-on port 5453 { 127.0.0.1; };
  pid-file "named.pid";
  dnssec-enable true;
  include "force-dnssec.conf";
};

// obtain this file from ftp://ftp.rs.internic.net/domain/named.root
zone "." { type hint; file "named.root"; };

// include the rndc key
include "rndc.key";
controls {
  inet 127.0.0.1 port 1953
    allow { 127.0.0.1; }
    keys { "rndc-key"; };
};

// include ITAR trust anchors
include "itar-trusted-keys.conf";

and run make. After a while, with luck, you’ll have named with trust anchors configured (you can see which ones by looking at itar-trusted-keys.conf) running on port 5453 (and the rndc control channel on port 1953).

You can test it with, for example:

$ dig -p5453 +dnssec www.dnssec.se @localhost      

; <<>> DiG 9.3.5-P2 <<>> -p5453 +dnssec www.dnssec.se @localhost
;; global options:  printcmd
;; connection timed out; no servers could be reached
[ben@euphrates ~/svn-work/peim/doc]
$ dig -p5453 +dnssec www.dnssec.se @localhost

; <<>> DiG 9.3.5-P2 <<>> -p5453 +dnssec www.dnssec.se @localhost
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41806
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
;; QUESTION SECTION:
;www.dnssec.se.                 IN      A

;; ANSWER SECTION:
www.dnssec.se.          300     IN      CNAME   dnssec.iis.se.
www.dnssec.se.          300     IN      RRSIG   CNAME 5 3 300 20090302080001 20090220080001 62658 dnssec.se. y0JcIxVunryZRaccDX2PteGyxCQ2dlfeoeDYNcoPKCryBa9vGWuNJwNa MhqzMmLLr5N4SbQsIL8YrQ8+l/wBFebXB6I8dJ8OWDmz6OqihSzkDYB/ qFwEWLQi49RfCuE6Qai/PnPh0Om+7guyL15fLTMh3PtZso4axt23/vqG 5RI=
dnssec.iis.se.          3600    IN      CNAME   www.iis.se.
dnssec.iis.se.          3600    IN      RRSIG   CNAME 5 3 3600 20090302135501 20090220135501 6477 iis.se. ebDsJcmZRHkq5Y+SLTIC2Iey3fNBj7r3bk3TAeyJPXtgFE6YJqAtJmv4 m5Sn1jDZhidnI0NWyPz5dUwDFfzVnJN/DH+CZJuiynKQge4inIGt8Dzk ybaq7JSoFkHABAu+IBbVKwR4+TW92tzv2CgzdtBIsQnuOn+CQMpmuz+N rFk=
www.iis.se.             3600    IN      A       212.247.7.210
www.iis.se.             3600    IN      RRSIG   A 5 3 3600 20090302135501 20090220135501 6477 iis.se. aIOT7U/CRcFi3CcgaHp6EqV8JHkODodQM0Pg7CKh1gby4/8pGnqABDiU +4bg8/zDlAzVUz6o4j5sjIg5uS2A1ODJzp+UodXyVL9/Q8eBfZGSDuOa FPwK9jUxj6P1iXIqoMyeAS1PG1rFgSim/xpZLhJK2l5ScQ/1+Pq6SG8T Lgc=

;; Query time: 1236 msec
;; SERVER: 127.0.0.1#5453(127.0.0.1)
;; WHEN: Sun Feb 22 14:07:14 2009
;; MSG SIZE  rcvd: 602

Because we have forced DNSSEC on the trusted domains (this is done in the included file force-dnssec.conf), the fact we get an answer at all tells us that the signatures checked out – but also the fact that the AD bit is set (“;; flags: ... ad ...”) tells us that the signatures were validated by the caching server we just set up. Note that pretty much no application will check for the AD bit, so DNSSEC is only really going to help if you have it forced on, as in this configuration. Obviously if you want to run this for real, you should run it on standard ports and point all your resolvers at it. Then you get the benefits of DNSSEC today with no application changes.
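For anything that does want to check, the AD bit is easy to read programmatically: it lives in the flags word of the 12-byte DNS header. A sketch (the helper name is mine; bit values are from RFC 1035/4035):

```python
import struct

AD = 0x0020  # "authentic data": set when the resolver validated the answer
CD = 0x0010  # "checking disabled": validation was skipped at our request

def response_is_validated(message):
    """True if a raw DNS response's header has the AD bit set, i.e. the
    resolver it came from claims to have validated the signatures."""
    if len(message) < 12:
        raise ValueError("truncated DNS header")
    _id, flags = struct.unpack(">HH", message[:4])
    return bool(flags & AD)
```

The flags shown in the dig transcript above (qr rd ra ad) correspond to a flags word of 0x81a0, which this check accepts.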

Note that ordinary queries (i.e. ones that don’t request DNSSEC) are also validated by the server, but we don’t see the DNSSEC stuff in the response:

$ dig -p5453 www.dnssec.se @localhost

; <<>> DiG 9.3.5-P2 <<>> -p5453 www.dnssec.se @localhost
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2159
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 3, ADDITIONAL: 0

;; QUESTION SECTION:
;www.dnssec.se.                 IN      A

;; ANSWER SECTION:
www.dnssec.se.          100     IN      CNAME   dnssec.iis.se.
dnssec.iis.se.          3400    IN      CNAME   www.iis.se.
www.iis.se.             3400    IN      A       212.247.7.210

;; AUTHORITY SECTION:
iis.se.                 86200   IN      NS      ns3.nic.se.
iis.se.                 86200   IN      NS      ns.nic.se.
iis.se.                 86200   IN      NS      ns2.nic.se.

;; Query time: 1 msec
;; SERVER: 127.0.0.1#5453(127.0.0.1)
;; WHEN: Sun Feb 22 14:41:25 2009
;; MSG SIZE  rcvd: 147

I notice that the AD bit isn’t set, though, which seems like a bug.

16 Feb 2009

Identification Is Not Security

Filed under: General,Rants,Security — Ben @ 18:51

The New York Times have an article about the Stanford Clean Slate project. It concludes

Proving identity is likely to remain remarkably difficult in a world where it is trivial to take over someone’s computer from half a world away and operate it as your own. As long as that remains true, building a completely trustable system will remain virtually impossible.

As far as I can tell, Clean Slate itself doesn’t make this stupid claim, the NYT decided to add it for themselves. But why do they think identification is relevant? Possibly because we are surrounded by the same spurious claim. For example…

  • We need ID cards because they will prevent terrorism.
  • We shouldn’t run software on our Windows box that isn’t signed because that’ll prevent malware.
  • We should only connect to web servers that have certificates from well-known CAs because only they can be trusted.

But…

  • The guys who crashed the planes were all carrying ID. Didn’t help.
  • The guys who blew up the train in Spain were all carrying ID. Didn’t help.
  • People get hacked via their browser all the time. Did signing it help?
  • What does it take to sign code? A certificate, issued by a CA…
  • What does it take to get a certificate? Not much … proof that you own a domain, in fact. So, I can trust the server because the guy that owns it can afford to pay Joker $10? And I can trust the code he signed? Why?

Nope. Security is not about knowing who gave you the code that ate your lunch – security is about having a system that is robust against code that you don’t trust. The identity of the author of that code should be irrelevant.

11 Feb 2009

Crypto Craft Knowledge

Filed under: Crypto,Programming,Rants,Security — Ben @ 17:50

From time to time I bemoan the fact that much of good practice in cryptography is craft knowledge that is not written down anywhere, so it was with interest that I read a post by Michael Roe about hidden assumptions in crypto. Of particular interest is this

When we specify abstract protocols, it’s generally understood that the concrete encoding that gets signed or MAC’d contains enough information to unambiguously identify the field boundaries: it contains length fields, a closing XML tag, or whatever. A signed message {Payee, Amount} K_A should not allow a payment of $3 to Bob12 to be mutated by the attacker into a payment of $23 to Bob1. But ISO 9798 (and a bunch of others) don’t say that. There’s nothing that says a conforming implementation can’t send the length field without authentication.

No, of course, an implementor probably wouldn’t do that. But they might.

Actually, in my extensive experience of reviewing security-critical code, this particular error is extremely common. Why does Michael assume that they probably wouldn’t? Because he is steeped in the craft knowledge around crypto. But most developers aren’t. Most developers don’t even have the right mindset for secure coding, let alone correct cryptographic coding. So, why on Earth do we expect them to follow our unwritten rules, many of which are far from obvious even if you understand the crypto?
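Michael’s Bob12/Bob1 example is easy to demonstrate concretely: under naive concatenation the two messages are byte-for-byte identical, so their MACs collide, while length-prefixing each field restores unambiguous boundaries. A sketch (the key and function names are invented):

```python
import hashlib
import hmac

KEY = b"demo-key"  # illustrative shared key

def naive_mac(payee, amount):
    """Broken: MAC over bare concatenation, so field boundaries are
    ambiguous -- ("Bob12", "3") and ("Bob1", "23") both MAC b"Bob123"."""
    return hmac.new(KEY, payee.encode() + amount.encode(),
                    hashlib.sha256).hexdigest()

def framed_mac(payee, amount):
    """Fixed: length-prefix each field, so the boundary itself is
    covered by the MAC and the two messages can no longer collide."""
    def frame(s):
        b = s.encode()
        return len(b).to_bytes(4, "big") + b
    return hmac.new(KEY, frame(payee) + frame(amount),
                    hashlib.sha256).hexdigest()
```

The unwritten rule here — authenticate the encoding, not just the abstract fields — is exactly the kind of craft knowledge that standards like ISO 9798 fail to spell out.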

1 Feb 2009

A Good Use of the TPM?

Filed under: Anonymity,Privacy,Security — Ben @ 20:33

Back when the TPM was called Palladium I made myself unpopular in some circles by pointing out that there were good uses for it, too, such as protecting my servers from attackers.

Whether that is practical is still an interesting question – it’s a very big step from a cheap device that does some cunning crypto to a software stack that can reliably attest to what is running (which is probably all that has saved us from the more evil uses of the TPM) – but at a recent get-together for privacy and anonymity researchers that George Danezis and I ran, Mark Ryan presented an interesting use case.

He proposes using the TPM to hold sensitive data such that the guy holding it can read it – but if he does, then it becomes apparent to the person who gave him the data. Or, the holder can choose to “give the data back” by demonstrably destroying his own ability to read it.

Why would this be useful? Well, consider MI5’s plan to trawl through the Oyster card records. Assuming that government fails to realise that this kind of thing is heading us towards a police state, wouldn’t it be nice if we could check afterwards that they have behaved themselves and only accessed data that they actually needed to access? This kind of scheme is a step towards having that kind of assurance.
