Links

Ben Laurie blathering

26 Sep 2008

ICANN’s Never-ending Quest for Suckers

Filed under: Rants — Ben @ 19:43

In their latest attempt to answer the question “how can we get everyone with a domain name to pay for it again?” ICANN are apparently enthused about this stupid idea.

But wait … I have it … why don’t we create a TLD for every service? We obviously need .www, .smtp, .dhcp and so forth, or how will people know what service you are offering?

1 Sep 2008

Crypto Everywhere

Filed under: Crypto,Security — Ben @ 21:00

Recent events, and a post to the OpenID list got me thinking.

Apparently RFC 2817 allows an http URL to be used for https security.

Given that Apache seems to have that implemented [1] and that the
openid url is mostly used for server to server communication, would
this be a way out of the http/https problem?

I know that none of the browsers support it, but I suppose that if the
client does not support this protocol, the server can redirect to the
https url? This seems like it could be easier to implement than XRI.

Disclaimer: I don’t know much about rfc2817

Henry

[1] http://www.mail-archive.com/dev-tech-crypto@lists.mozilla.org/msg00251.html
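For reference, the RFC 2817 mechanism Henry is describing looks roughly like this on the wire: the client sends a plain-HTTP request carrying an Upgrade header, and a willing server answers 101 Switching Protocols before the TLS handshake begins on the same connection. A minimal client-side sketch (the function names are mine, and a real client would go on to run the TLS handshake over the socket after the 101):

```python
# Sketch of an RFC 2817 "Upgrade: TLS/1.0" exchange, client side.
# Illustrative only: builds the request bytes and checks the server's
# status line; a real client would then start TLS on the same socket.

def build_upgrade_request(host: str, path: str = "/") -> bytes:
    """Plain-HTTP request asking the server to upgrade this connection to TLS."""
    return (
        f"OPTIONS {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Upgrade: TLS/1.0\r\n"
        "Connection: Upgrade\r\n"
        "\r\n"
    ).encode("ascii")

def server_agreed_to_upgrade(status_line: bytes) -> bool:
    """RFC 2817: a willing server answers 101 Switching Protocols."""
    return status_line.startswith(b"HTTP/1.1 101")

req = build_upgrade_request("example.com")
print(req.decode().splitlines()[0])  # OPTIONS / HTTP/1.1
print(server_agreed_to_upgrade(b"HTTP/1.1 101 Switching Protocols\r\n"))
```

Note that the server is free to answer the OPTIONS request with an ordinary 200 instead, which is exactly the downgrade problem discussed below.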

The core issue is that HTTPS is used to establish end-to-end security, meaning, in particular, authentication and secrecy. If the MitM can disable the upgrade to HTTPS then he defeats this aim. The fact that the server declines to serve an HTTP page is irrelevant: it is the phisher that will be serving the HTTP page, and he will have no such compunction.

The traditional fix is to have the client require HTTPS, which the MitM is powerless to interfere with. Upgrades would work fine if the HTTPS protocol said “connect on port 80, ask for an upgrade, and if you don’t get it, fail”; as it stands, however, upgrades happen at the behest of the server, and therefore don’t work.
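The client-side requirement is mechanical: refuse any non-https URL before a request ever leaves the machine, so there is no plaintext connection for a MitM to tamper with. A minimal sketch (the function name is mine):

```python
from urllib.parse import urlparse

def require_https(url: str) -> str:
    """Reject any URL that is not https before a request is made.
    The MitM cannot downgrade a connection the client never opens in plaintext."""
    scheme = urlparse(url).scheme.lower()
    if scheme != "https":
        raise ValueError(f"refusing insecure scheme {scheme!r} for {url}")
    return url

require_https("https://example.com/openid")   # accepted
# require_https("http://example.com/openid")  # raises ValueError
```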

Of course, the client “requires” HTTPS because there was a link with a scheme of “https”. But why was that link followed? Because there was an earlier page with a trusted link (we hope) that was followed. (Note that this argument applies both to users clicking links and to OpenID servers following metadata.)

If that page was served over HTTP, then we are screwed, obviously (bearing in mind DNS cache poisoning and weak PRNGs).

This leads to the inescapable conclusion that we should serve everything over HTTPS (or other secure channels).

Why don’t we? Cost. It takes far more tin to serve HTTPS than HTTP. Even really serious modern processors can only handle a few thousand new SSL sessions per second. New plaintext sessions can be dealt with in their tens of thousands.
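The asymmetry is easy to demonstrate: the expensive part of a new SSL session is a public-key operation, at heart a modular exponentiation over a ~2048-bit modulus, while an established session does only symmetric-speed work. A crude, purely illustrative microbenchmark (numbers will vary wildly by machine, and this is not a real TLS benchmark):

```python
import hashlib
import os
import time

# Compare one RSA-sized modular exponentiation (the core of a
# private-key operation) against hashing 64 KiB of traffic, the kind
# of work an established session does. Illustrative only.

bits = 2048
n = int.from_bytes(os.urandom(bits // 8), "big") | 1   # stand-in modulus
d = int.from_bytes(os.urandom(bits // 8), "big")       # stand-in private exponent
m = int.from_bytes(os.urandom(bits // 8), "big") % n   # stand-in message

t0 = time.perf_counter()
pow(m, d, n)                                           # one "private-key op"
pk_cost = time.perf_counter() - t0

payload = os.urandom(64 * 1024)
t0 = time.perf_counter()
hashlib.sha256(payload).digest()                       # symmetric-speed work
sym_cost = time.perf_counter() - t0

print(f"modexp: {pk_cost * 1e3:.2f} ms, sha256 of 64 KiB: {sym_cost * 1e3:.2f} ms")
```

The public-key operation comes out orders of magnitude dearer, which is where the “few thousand new sessions per second versus tens of thousands” gap comes from.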

Perhaps we should focus on this problem: we need cheap end-to-end encryption. HTTPS solves this problem partially through session caching, but it can’t easily be shared across protocols, and sessions typically last on the order of five minutes, an insanely conservative figure.
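In Apache’s mod_ssl, for instance, the session cache and its lifetime are a two-line configuration; the default timeout is 300 seconds, which is the five-minute figure above. A sketch raising it to a day (the 86400 is my illustration of the point, not a mod_ssl recommendation):

```apache
# mod_ssl: shared-memory session cache, with a lifetime measured in
# hours rather than the default 300 seconds.
SSLSessionCache        shmcb:/var/run/ssl_scache(512000)
SSLSessionCacheTimeout 86400
```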

What we need is something like HTTPS, shareable across protocols, with caches that last at least hours, maybe days. And, for sites we have a particular affinity with, an SSH-like pairing protocol (with less public key crypto – i.e. more session sharing).
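An SSH-like pairing amounts to trust-on-first-use: remember a fingerprint of the server’s public key on first contact, and thereafter require it to match, so the public-key ceremony happens once per pairing rather than once per session. A toy sketch of the cache logic (class and method names, and the store format, are mine):

```python
import hashlib

# Toy trust-on-first-use ("pairing") store, SSH known_hosts style.
# First contact records the server key's fingerprint; later contacts
# must match it, so the expensive public-key exchange is done once per
# server rather than once per session.

class PairingStore:
    def __init__(self):
        self._known = {}   # host -> hex fingerprint

    @staticmethod
    def fingerprint(pubkey: bytes) -> str:
        return hashlib.sha256(pubkey).hexdigest()

    def check(self, host: str, pubkey: bytes) -> bool:
        """True if the host is newly paired or matches; False on mismatch."""
        fp = self.fingerprint(pubkey)
        if host not in self._known:
            self._known[host] = fp          # first use: pair
            return True
        return self._known[host] == fp      # later: must match

store = PairingStore()
print(store.check("example.com", b"server-public-key"))   # True: first use
print(store.check("example.com", b"server-public-key"))   # True: matches
print(store.check("example.com", b"some-other-key"))      # False: MitM?
```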

Having rehearsed this discussion many times, I know the next objection will be DoS on the servers: a bad guy can require the server to spend its life doing PK operations by pretending he has never connected before. Fine, relegate PK operations to the slow queue. Regular users will not be inconvenienced: they already have a session key. Legitimate new users will have to wait a little longer for initial load. Oh well.
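The “slow queue” is just priority scheduling keyed on the session cache: a connection presenting a cached session key takes the fast path, while anything demanding a fresh public-key operation waits behind a bounded queue. A sketch (the queue names, bound, and cache representation are mine):

```python
from collections import deque

# Sketch of "relegate PK operations to the slow queue": clients that
# present a cached session go straight to the fast queue (symmetric
# crypto only); fresh handshakes, which need a public-key operation,
# wait in a bounded slow queue. An attacker who pretends he has never
# connected before can only fill the slow queue; established users
# are untouched.

SLOW_QUEUE_LIMIT = 128          # illustrative bound

fast, slow = deque(), deque()
session_cache = {"alice"}       # clients holding a live session key

def admit(client: str) -> str:
    if client in session_cache:
        fast.append(client)     # cheap path: session key already shared
        return "fast"
    if len(slow) >= SLOW_QUEUE_LIMIT:
        return "dropped"        # DoS pressure lands here, not on regulars
    slow.append(client)         # expensive path: needs a PK operation
    return "slow"

print(admit("alice"))   # fast
print(admit("bob"))     # slow
```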
