Side-Channel Attacks and Security Theatre

OpenSSL fixed yet another side-channel attack recently. For those of you not in the know, a side-channel attack is one where process A figures out some aspect of what process B is doing by observing changes in its own behaviour. A trivial example would be to guess whether process B is running or idle by checking what percentage of the CPU process A is getting.
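
To make that trivial example concrete, here is a minimal sketch of such a probe (mine, not from any real attack code); it assumes only a POSIX system where A and B compete for the same CPU. Process A counts how much work it gets done per second, and a drop below the quiet-machine baseline suggests B is busy:

    #include <stdio.h>
    #include <time.h>

    /* Count how many busy-loop iterations fit into one wall-clock second.
     * Fewer iterations than the quiet-machine baseline means another
     * process is competing for the CPU. */
    static unsigned long probe_cpu_share(void)
    {
        struct timespec start, now;
        unsigned long iterations = 0;

        clock_gettime(CLOCK_MONOTONIC, &start);
        do {
            iterations++;
            clock_gettime(CLOCK_MONOTONIC, &now);
        } while ((long long)(now.tv_sec - start.tv_sec) * 1000000000LL
                 + (now.tv_nsec - start.tv_nsec) < 1000000000LL);

        return iterations;
    }

    int main(void)
    {
        for (;;)
            printf("iterations/sec: %lu\n", probe_cpu_share());
    }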

More advanced versions exploit various tricks CPUs use to make things go faster, such as caches and branch prediction. Somewhat surprisingly, these attacks can yield enough signal to recover secrets like RSA keys. This, of course, causes people who are trying to make a name for themselves to get quite excited – if they can claim to be able to steal secret keys, then that is news.
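
The canonical illustration of how a key leaks through timing and branch prediction is textbook square-and-multiply modular exponentiation, the core operation of RSA. A minimal sketch, using uint64_t purely to keep it short (real RSA needs multi-precision arithmetic, and this would overflow for realistic moduli):

    #include <stdint.h>

    /* Textbook square-and-multiply: scan the exponent bit by bit. */
    static uint64_t modexp(uint64_t base, uint64_t exp, uint64_t mod)
    {
        uint64_t result = 1;
        base %= mod;
        while (exp > 0) {
            if (exp & 1)                        /* secret-dependent branch */
                result = (result * base) % mod; /* only for 1 bits of exp  */
            base = (base * base) % mod;         /* every iteration         */
            exp >>= 1;
        }
        return result;
    }

Each 1 bit of the secret exponent costs an extra multiply and a taken branch, so both the running time and the branch predictor's state depend on the key; that dependence is what these attacks measure.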

However, this all seems rather silly to me. In order to mount most of these attacks the attacker must be local – that is, they have to be able to run code on the same machine as the one using the secret key. Now, every good security person knows that if your attacker has the ability to run stuff on your machine, it is game over, so why do we even care about these attacks? This is security theatre of exactly the type that we geeks like to accuse the TSA of on a regular basis – isn’t it time we started making fun of ourselves, too?

Why don’t we? Presumably for exactly the reasons that governments like security theatre. It’s good for business. We make people feel loved and protected. We keep people like CERT in jobs. Security companies can issue updates to products. Staff can spend lots of lovely overtime hours doing QA for the emergency rollout of a security update. The economy benefits!

Isn’t it time we stopped fixing these attacks? It isn’t as if the fixes come for free – they almost always make the crypto slower. And, as I said above, until we have platforms that are actually robust in the face of hostile users that can run code on them, there is absolutely no point in avoiding these attacks.

By the way, OpenSSL is far from being the only crypto library that’s vulnerable to this attack, but the advisory will only be about OpenSSL. Why? Diminishing returns, that’s why – OpenSSL is the most widely used crypto library. Once you’ve broken that, the theatrical value of the others is minimal, so why bother? Because you care about security, you say? I rest my case.

6 Comments

  1. What about smartcards? Side-channel attacks are a real problem there.

    Comment by Gavin — 2 Aug 2007 @ 11:47

  2. Many non-windows platforms do try their best to ensure that a remotely logged-in and unprivileged user cannot gain privileges. If you’re given a non-root account on a Linux box you’d expect not to be able to gain root access or read the memory of other users’ processes. At Red Hat we even treat a non-root but otherwise authorised user being able to cause a machine to crash as a security issue. I guess this flaw may worry those ISPs that do shared SSL web hosting (without using virtualisation), but I’m sure that for those machines these attacks will not be practical anyway given other stuff happening on the machine at the same time (by the very nature of them being ‘shared’).

    Comment by Mark Cox — 2 Aug 2007 @ 11:53

  3. what about https://www.kb.cert.org/vuls/id/997481

    side channel attack remotely stealing a private rsa key from apache running ssl in 2 hours over a local network? i will admit i haven’t looked at follow ups, but perhaps with a bunch of engineering (and a lot more requests), it’d be possible to do this over a larger network, or at least provide enough information to make a brute force attack more feasible.

    crypto operations should be black boxes which aren’t supposed to be leaking any information about private keys. if the timing of running the function does leak some information, the app developers are not going to be thinking about this, and that information is going to leak further and further until it is out and about on the network, albeit _very_ diluted. i’d put these in the same class of vulnerabilities as ‘never encrypt the same plaintext twice’ and other gotchas to app programmers that they’d hopefully not have to worry about.

    i started thinking about simple system-level ways to keep from leaking this information while still making the operations perform as fast as possible (throughput-wise), perhaps by making the response time of each request constant (a sketch of this idea follows this comment). but even in that scenario, it’s still going to be possible to submit a lot of queries, measure the actual throughput of the server, and infer how long the crypto ops are taking. that leaks a lot less information – but still a little.

    and these are all white hat attacks. who knows how advanced the black hat attacks are (nsa etc). the foundations of crypto haven’t been broken (as far as we know), so the goal is to make the actual implementations line up as closely with the theory as possible.

    ps – screw drm smartcards. if i’ve got a device in my possession, i should be able to somehow extract any keys on it, and any system that assumes otherwise is just going to (and should) be broken in the long run. (i will make allowances for tamper-evident cards and of course cards that stop the user from inadvertently giving up their key, but not ones whose goal is to explicitly disallow the user from ever obtaining information on the card)

    Comment by mind — 3 Aug 2007 @ 6:50
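
    A minimal sketch of the mitigation mind floats above: do the crypto, then sleep until a fixed deadline so every request appears to take the same time. The 10 ms figure is picked purely for illustration; the deadline must exceed the worst-case running time of the operation or the padding fails.

        #include <time.h>

        static void timespec_add_ms(struct timespec *t, long ms)
        {
            t->tv_nsec += ms * 1000000L;
            t->tv_sec  += t->tv_nsec / 1000000000L;
            t->tv_nsec %= 1000000000L;
        }

        void answer_request_padded(void (*do_crypto)(void))
        {
            struct timespec deadline;
            clock_gettime(CLOCK_MONOTONIC, &deadline);
            timespec_add_ms(&deadline, 10);  /* assumed worst-case bound */

            do_crypto();                     /* variable-time operation  */

            /* Sleep to the absolute deadline; retry if interrupted. */
            while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
                                   &deadline, NULL) != 0)
                ;
        }

    As the comment says, an attacker can still measure aggregate throughput, so this shrinks the leak rather than closing it.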

  4. On question one – smartcards – side-channel attacks have been known there for a long time. If you have a smartcard using the RSA algorithm, for example, timing how long the card takes to compute a signature can help an attacker find the key faster, especially if a common fast algorithm for modular exponentiation is used (such as the square-and-multiply sketched in the post above). This attack has been known since the mid-90s. Symmetric crypto can also be vulnerable if different keys, or different combinations of plaintext and key, take different amounts of time.

    Cryptologists have since attempted to design cryptographic algorithms that do not take different amounts of time depending on key and input, as well as to find implementations of existing algorithms that are not vulnerable to timing attacks. The recent problem is with s-boxes. These are tables used in the algorithms to substitute one number for another in a reversible way (to make decryption possible). They live in memory, which means they get cached. The cache is not manageable from the program; it is handled by the processor and the OS, and the timing of a cryptographic process can be altered by another process accessing memory in certain patterns. This can be used to derive the key (a sketch follows this comment). S-boxes were earlier believed to be invulnerable to attacks, so many cryptographic algorithms use them, including AES, DES, Twofish, and Blowfish.

    Is this a problem? If you have a web account on a shared server, you are vulnerable to this attack. Even with virtualisation you are vulnerable to this kind of attack, so as long as your code is not on a dedicated server, you are potentially vulnerable.

    And yes, these attacks are practical, unlike many other kinds of attacks. Attack programs that extract keys from real-world production code like OpenSSL were implemented by the researchers who discovered the problem.
    This is also possibly the reason CERT gave a security advisory: they could not ignore the very package the researchers used (hence the security theatre comment above), but also the OpenSSL people took the problem seriously. Other packages have not offered a fix, and the only thing CERT could say about Microsoft CAPI (the most used cryptographic package worldwide, I guess) would be to advise users to stop using IE, IIS and a lot of other Microsoft software until Microsoft has made a fix or convinced them that it is not vulnerable, which is not something CERT will do, I guess.

    Comment by Gisle — 3 Aug 2007 @ 11:20
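
    A minimal sketch of the s-box leak Gisle describes: the table index, and therefore the memory address and cache line touched, depends on secret data. The table contents are elided here; it stands in for the real AES or DES tables.

        #include <stdint.h>

        /* 256-entry substitution table (contents elided). */
        static const uint8_t sbox[256] = { 0 /* ... */ };

        uint8_t leaky_substitute(uint8_t plaintext_byte, uint8_t key_byte)
        {
            /* The index is secret, so which cache line this load fills
             * is secret too. A co-resident process that primes and
             * probes the cache can see which lines the victim touched
             * and work back towards the key. */
            return sbox[plaintext_byte ^ key_byte];
        }

    This is why hardened implementations avoid secret-indexed table lookups, for example by bitslicing or by computing the substitution arithmetically.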

  5. The future is many machines virtualised onto a single piece of hardware. Then the attacker doesn’t need to be running in the same OS instance to exploit this.

    Comment by James — 6 Aug 2007 @ 13:12

  6. A good point – I love the whole ecosystem of bug fixing 🙂

    The question I’ve got is: why did OpenSSL fix that particular bug?

    Joel Spolsky wrote ages ago on the economics of bug fixing:

    “Fixing bugs is only important when the value of having the bug fixed exceeds the cost of fixing it.”

    http://www.joelonsoftware.com/articles/fog0000000014.html

    Was there any contractual/monetary/other reason/value for fixing this particular bug?

    Comment by Igor Drokov — 29 Aug 2007 @ 13:30
