Factoring RSA

Apparently I have not blogged about factoring weak RSA keys before. Well, I guess I have now :-)
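
For context, the attack in that research is embarrassingly simple: if two RSA moduli were generated with bad randomness and happen to share a prime, a single GCD recovers it and factors both keys. Here is a minimal sketch of the idea using OpenSSL's BIGNUM API, with two toy moduli constructed to share a factor (real moduli are vastly larger, and the real attack runs pairwise GCDs over millions of harvested public keys):

```c
/* Sketch of the shared-factor attack on weak RSA keys, using
 * OpenSSL's BIGNUM API. The moduli below are toy values built
 * to share the prime 101; the real attack computes pairwise
 * GCDs over millions of harvested public keys. */
#include <stdio.h>
#include <openssl/bn.h>
#include <openssl/crypto.h>

int main(void) {
    BN_CTX *ctx = BN_CTX_new();
    BIGNUM *n1 = NULL, *n2 = NULL, *g = BN_new();

    BN_dec2bn(&n1, "10403"); /* 101 * 103 */
    BN_dec2bn(&n2, "10807"); /* 101 * 107 */

    /* A GCD greater than 1 means both moduli are factored. */
    BN_gcd(g, n1, n2, ctx);

    char *s = BN_bn2dec(g);
    printf("shared factor: %s\n", s); /* prints 101 */

    OPENSSL_free(s);
    BN_free(n1);
    BN_free(n2);
    BN_free(g);
    BN_CTX_free(ctx);
    return 0;
}
```

Compile against libcrypto (e.g. cc gcd.c -lcrypto).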

One thing I’ve been wondering ever since that research was done is: is there anything OpenSSL could do about this? I’ve been assuming OpenSSL was used to generate at least some of those keys.

So, I was interested to read this analysis. First off, it shows that it is highly likely that the bad keys were generated by OpenSSL and one other (proprietary) implementation. However, I have to argue with some details in an otherwise excellent writeup.

Firstly, this canard irritates me:

Until version 0.9.7 (released on Dec 31, 2002) OpenSSL relied exclusively on the /dev/urandom source, which by its very definition is non-blocking. If it does not have enough entropy, it will keep churning out pseudo-random numbers possibly of very poor quality in terms of their unpredictability or uniqueness.

By definition? Whose definition? When did Linux man pages become “by definition”? In FreeBSD, which, IMO, has a much sounder approach to randomness, urandom does block until it has sufficient entropy. Is poor design of the OS OpenSSL’s fault?
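
To make the blocking distinction concrete, here is a minimal sketch, assuming Linux-style urandom semantics: the read below returns immediately whether or not the pool has ever been properly seeded, which is precisely the early-boot, no-entropy scenario that produced the weak keys.

```c
/* Minimal illustration, assuming Linux-style semantics: a read
 * from /dev/urandom returns immediately whether or not the pool
 * has ever been properly seeded. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    unsigned char buf[16];
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* On Linux this never blocks, even right after boot on a
     * device that has gathered no entropy; a FreeBSD-style
     * urandom would instead wait until the pool was seeded. */
    ssize_t n = read(fd, buf, sizeof buf);
    printf("got %zd bytes without blocking\n", n);

    close(fd);
    return 0;
}
```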

Which brings me to

FreeBSD prior to version 5 posed its own problem, since its /dev/random source silently redirected to /dev/urandom.

Well. Modern FreeBSD versions link /dev/urandom to /dev/random. That doesn’t seem like a material change to me. I’m pretty sure that the implementation changed, too – perhaps that’s more important than filenames?

Finally, in the summary:

Some unfortunate choices by the OpenSSL library didn’t help either.

Oh really? So the fact that a 10-year-old version of OpenSSL used a device that in some OSes is not very well designed is contributing to this problem? I’m finding this a little hard to swallow. Also, “choices”? What choices? Only one choice is mentioned.

The real problem, IMNSHO, is this: if you provide a weak random number source, people will use it when they shouldn’t. The problem here is with the OS that is providing the randomness, not with the OpenSSL library. So why is the OS (which I am prepared to bet is Linux) not even mentioned?

2 Comments

  1. > By definition? Whose definition? When did Linux man pages become “by definition”? In FreeBSD, which, IMO, has a much sounder approach to randomness, urandom does block until it has sufficient entropy. Is poor design of the OS OpenSSL’s fault?

    Agreed; I’m far from calling this a “fault”, but perhaps OpenSSL should contain a high-quality, userland-based analog of /dev/(u)random?

    Comment by Milo — 24 May 2012 @ 11:39

  2. You know, Wikipedia appears to disagree with you on whether FreeBSD’s urandom ever blocks – so either Wikipedia is wrong, or else you are. I’m quite willing to believe it’s Wikipedia, mind, but it’d probably help mere mortals like me if you’d fact-check and correct the article.

    Comment by Dave Cridland — 12 Jun 2012 @ 13:24
