Crypto Everywhere

Recent events, and a post to the OpenID list, got me thinking.

    Apparently RFC 2817 allows an http URL to be used for https security.

    Given that Apache seems to have that implemented [1], and that the
    OpenID URL is mostly used for server-to-server communication, would
    this be a way out of the http/https problem?

    I know that none of the browsers support it, but I suppose that if the
    client does not support this protocol, the server can redirect to the
    https URL? This seems like it could be easier to implement than XRI.

    Disclaimer: I don’t know much about RFC 2817.

    Henry

    [1] http://www.mail-archive.com/dev-tech-crypto@lists.mozilla.org/msg00251.html

The core issue is that HTTPS is used to establish end-to-end security, meaning, in particular, authentication and secrecy. If the MitM can disable the upgrade to HTTPS then he defeats this aim. The fact that the server declines to serve an HTTP page is irrelevant: it is the phisher that will be serving the HTTP page, and he will have no such compunction.

The traditional fix is to have the client require HTTPS, which the MitM is powerless to interfere with. Upgrades would work fine if the HTTPS protocol said “connect on port 80, ask for an upgrade, and if you don’t get it, fail”; as it is, however, upgrades work at the behest of the server, and therefore don’t work.
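
To make that concrete, here is a rough Python sketch of a client that enforces the upgrade itself: connect on port 80, ask for TLS, and fail closed if the server (or a man in the middle) refuses. The host name is a placeholder, and almost no deployed servers or clients actually speak RFC 2817, so treat it as an illustration of the policy rather than working practice.

    import socket
    import ssl

    def connect_with_mandatory_upgrade(host):
        """Connect in the clear, demand an upgrade to TLS, and fail closed."""
        sock = socket.create_connection((host, 80))
        # RFC 2817 client-initiated upgrade: ask the server to switch to TLS.
        request = (
            "OPTIONS * HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Upgrade: TLS/1.0\r\n"
            "Connection: Upgrade\r\n"
            "\r\n"
        )
        sock.sendall(request.encode("ascii"))
        reply = sock.recv(4096).decode("ascii", "replace")
        # The crucial policy: if the server does not switch protocols, give up.
        # Falling back to plaintext here is exactly what the MitM wants.
        if not reply.startswith("HTTP/1.1 101"):
            sock.close()
            raise ConnectionError("server refused the TLS upgrade; not continuing in the clear")
        context = ssl.create_default_context()
        return context.wrap_socket(sock, server_hostname=host)

    # Hypothetical usage:
    # tls_sock = connect_with_mandatory_upgrade("example.org")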

Of course, the client “requires” HTTPS because there was a link that had a scheme of “https”. But why was that link followed? Because there was an earlier page with a trusted link (we hope) that was followed. (Note that this argument applies both to users clicking links and to OpenID servers following metadata.)

If that page was served over HTTP, then we are screwed, obviously (bearing in mind DNS cache attacks and weak PRNGs).

This leads to the inescapable conclusion that we should serve everything over HTTPS (or other secure channels).

Why don’t we? Cost. It takes far more tin to serve HTTPS than HTTP. Even really serious modern processors can only handle a few thousand new SSL sessions per second. New plaintext sessions can be dealt with in their tens of thousands.

Perhaps we should focus on this problem: we need cheap end-to-end encryption. HTTPS solves this problem partially through session caching, but it can’t easily be shared across protocols, and sessions typically last on the order of five minutes, an insanely conservative figure.
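
As a concrete illustration of session caching, here is a rough Python sketch of TLS session resumption: the first connection pays for the full public-key handshake, and a later connection offers the cached session back to the server. The host is a placeholder, and whether (and for how long) the server honours the cached session is entirely its own policy.

    import socket
    import ssl

    HOST = "example.org"  # placeholder

    context = ssl.create_default_context()
    # Pin to TLS 1.2 for this illustration; TLS 1.3 hands out its session
    # tickets after the handshake, which complicates a minimal example.
    context.maximum_version = ssl.TLSVersion.TLSv1_2

    # First connection: full handshake, including the expensive public-key work.
    with context.wrap_socket(socket.create_connection((HOST, 443)),
                             server_hostname=HOST) as first:
        cached = first.session            # the session to offer next time

    # Later connection: offer the cached session instead of starting from scratch.
    with context.wrap_socket(socket.create_connection((HOST, 443)),
                             server_hostname=HOST, session=cached) as second:
        print("resumed:", second.session_reused)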

What we need is something like HTTPS, shareable across protocols, with caches that last at least hours, maybe days. And, for sites we have a particular affinity with, an SSH-like pairing protocol (with less public key crypto – i.e. more session sharing).

Having rehearsed this discussion many times, I know the next objection will be DoS on the servers: a bad guy can require the server to spend its life doing PK operations by pretending he has never connected before. Fine, relegate PK operations to the slow queue. Regular users will not be inconvenienced: they already have a session key. Legitimate new users will have to wait a little longer for initial load. Oh well.
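
A minimal sketch of that “slow queue” idea, with made-up names and thresholds: handshakes that present a cached session are cheap and get served first, while full public-key handshakes are only pulled off the slow queue when nothing else is waiting.

    import queue
    import threading

    fast = queue.Queue()   # clients presenting a cached session or ticket
    slow = queue.Queue()   # clients that need a full public-key handshake

    def classify(handshake):
        """Route an incoming handshake; offers_session is a hypothetical attribute."""
        (fast if handshake.offers_session else slow).put(handshake)

    def worker():
        while True:
            try:
                job = fast.get(timeout=0.01)      # returning users already hold a session key
            except queue.Empty:
                try:
                    job = slow.get(timeout=0.01)  # new users pay the PK cost when there is slack
                except queue.Empty:
                    continue
            job.run()

    threading.Thread(target=worker, daemon=True).start()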

3 Comments

  1. From my reading of the responses to your post on the mailing list, the answers seem to be

    + SSL was expensive in the mid-’90s. Computers are now so powerful that it no longer really makes a difference for medium-sized sites.
    + session caches are not that big:
    “a session cache entry isn’t really that big. It easily fits into 100 bytes on the server, so you can serve a million concurrent users for a measly 100M. Second, you can use CSSC/Tickets [RFC5077] to offload all the information onto the client.” (Eric Rescorla)
    – there is a problem with clients dropping sessions after 5 minutes, though (since the server cannot change that).
    me: What types of browsers are these? Browsers in public places?

    Comment by Henry Story — 2 Sep 2008 @ 8:13

  2. […] as ever about how to do security right, and coming to the age-old conclusion that it involves crypto everywhere. It’s often been said that we won the battle to make strong encryption freely available, but lost […]

    Pingback by Danny O’Brien’s Oblomovka » Blog Archive » hi mum! and crypto — 7 Sep 2008 @ 5:27

  3. @henry – yes, computer performance has improved since the 1990s. You seem to have overlooked that internet traffic has outpaced this. See the original author’s post, where he says a server can handle thousands of SSL connections but tens to hundreds of thousands of unencrypted ones.

    @author
    I’m assuming you mean client caching of SSL-transferred data, because intermediate servers (including those on the target site) can’t cache the transferred data — it’s encrypted, and cannot be replayed (SSL is designed to prevent replay attacks, which rules out replay as a feature, i.e. caching). What’s really needed is signed HTTP, where the client can trust the integrity of the data sent by the server (my bank logon page had better be from the bank, not spoofed) but where the client doesn’t care about privacy (after all, _everyone_ can see the same sign-in page anyway).

    Comment by aaron — 16 Dec 2008 @ 7:10
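
To put the “signed HTTP” idea from the last comment into code: a rough sketch using Ed25519 from the third-party cryptography package (this is not an existing HTTP mechanism, just the shape of the idea). The server signs the public page body; any client holding the site’s public key can verify integrity without any confidentiality.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Server side: sign the (public) page body; there is nothing secret here.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    body = b"<html>...bank sign-in page...</html>"
    signature = private_key.sign(body)   # would be shipped alongside the response

    # Client side: anyone with the site's public key can check integrity.
    try:
        public_key.verify(signature, body)
        print("page is authentic")
    except InvalidSignature:
        print("page was tampered with or spoofed")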
