Links

Ben Laurie blathering

28 Mar 2011

Census FAIL

Filed under: Rants — Ben @ 13:19

Once every ten years, every household in the UK gets to fill in a census form. This year, for the first time ever, I think, you can do it online. So, imagine how delighted we are that I am the only person in my household whose name actually fits in the box. Yes, really, there’s a 50 character limit.

Why? Suppose they’d splashed out and allowed 500 characters instead. What would that cost? Well, let’s assume 100M names. That’s an extra 450 bytes × 100M = 45,000 MB of data, assuming they’re still using databases with fixed-width fields. 45 GB. That would’ve cost them nearly an extra £5 at today’s prices. Not £5 per person, or £5 per household. £5 total.
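For the pedants, the back-of-envelope sum as code – the 100M names and the ~10p/GB disk price are my assumptions, not census figures:

#include <stdio.h>

int main(void)
{
    const double names = 100e6;        /* assumed: 100M names */
    const double extra_bytes = 450;    /* a 500-char field instead of 50 */
    const double pounds_per_gb = 0.10; /* assumed 2011 disk price */

    double extra_gb = names * extra_bytes / 1e9;
    printf("extra storage: %.0f GB, total cost: ~GBP %.2f\n",
           extra_gb, extra_gb * pounds_per_gb);
    return 0;
}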

Thank god for government savings, eh?

BTW, my wife rang and asked what to do. Amazingly, they opt for the least useful possible answer: start at the beginning of your name and keep going ’til you run out of space. I’m sure future generations will be very happy to have complete middle names and no surname. Not.

16 Nov 2010

Apache vs. Oracle

Filed under: Open Source,Open Standards,Rants — Ben @ 11:49

As a founder of the Apache Software Foundation I have long been frustrated by the ASF’s reluctance to confront Sun and now Oracle head-on over their continued refusal to allow Apache Harmony to proceed as an open source Java implementation.

So I am very pleased to see the board finally draw a line in the sand, basically saying “honour the agreements or we block the next version of Java”. Oracle’s response is just ridiculous, including the blatant

Oracle provides TCK licenses under fair, reasonable, and non-discriminatory terms consistent with its obligations under the JSPA.

Why even bother to say that in response to the ASF’s charges? Do they think the ASF are suddenly going to say “oops, we see the light now, how wrong we were to expect a TCK licence under fair, reasonable and non-discriminatory terms consistent with your obligations under the JSPA, we see it all clearly now and humbly withdraw our unwarranted request”?

Well, whatever Oracle expected, the ASF’s response was short and sweet

Oracle statement regarding Java: “Now is the time for positive action (and) to move Java forward.”

The ball is in your court. Honor the agreement.

How this will play out is very uncertain, at least to me, but one thing is sure: delay and vacillation are over, at least from the ASF. Expect plenty of delay from Oracle, though.

2 Oct 2010

Aims not Mechanisms

Filed under: Privacy,Rants — Ben @ 22:18

I’m a big fan of the EFF, so it comes as a bit of a surprise when I see them say things that don’t make any sense.

A while back the EFF posted a bill of privacy rights for social network users. Whilst I totally sympathise with what the EFF is trying to say here, I’m disappointed that they go the way of policymakers, ignoring inconvenient technical reality and proposing absurd policies.

In particular, I refer you to this sentence:

The right to control includes users’ right to decide whether their friends may authorize the service to disclose their personal information to third-party websites and applications.

In other words, if I post something to a “social network” (whatever that is: yes, I have an informal notion of what it means, and I’m sure you do, too, but is, say, my blog part of a “social network”? Email?) then I should be able to control whether you, a reader of the stuff I post, can do so via a “third-party application”. For starters, as stated, this is equivalent to determining whether you can read my post at all in most cases, since you do so via a browser, which is a “third-party application”. If I say “no” to my friends using “third-party applications” then I am saying “no” to my friends reading my posts at all.

Perhaps, then, they mean specific third-party applications? So I should be able to say, for example, “my friends can read this with a browser, but not with evil-rebroadcaster-app, which not only reads the posts but sends them to their completely public blog”? Well, perhaps, but how is the social network supposed to control that? This is only possible in the fantasy world of DRM and remote attestation.

Do the EFF really want DRM? Really? I assume not. So they need to find a better way to say what they want. In particular, they should talk about the outcome and not the mechanism. Talking about mechanisms is exactly why most technology policy turns out to be nonsense: mechanisms change and there are far more mechanisms available than any one of us knows about, even those of us whose job it is to know about them. Policy should not talk about the means employed to achieve an aim, it should talk about the aim.

The aim is that users should have control over where their data goes, it seems. Phrased like that, this is clearly not possible, nor even desirable. Substitute “Disney” for “the users” and you can immediately see why. If you solve this problem, then you solve the DRM “problem”. No right-thinking person wants that.

So, it seems like EFF should rethink their aims, as well as how they express them.

26 Sep 2010

The Tragedy of the Uncommons

Filed under: Rants,Security — Ben @ 3:46

An interesting phenomenon seems to be emerging: ultra-hyped projects are turning out to be crap. I am, of course, speaking of Haystack and Diaspora (you should follow these links, I am not going to go over the ground they cover, much).

The pattern here is that some good self-promoters come up with a cool idea, hype it up to journalists, who cannot distinguish it from the other incomprehensible cool stuff we throw at them daily, who duly write about how it’ll save the world. The interesting thing is what happens next. The self-promoters now have to deliver the goods. But, for some reason, rather than enlisting the help of experts to assist them, they seem to be convinced that because they can persuade the non-experts with their hype they can therefore build this system they have been hyping. My instatheory[1] is that it’d dilute their fame if they shared the actual design and implementation. They’ve got to save the world, after all. Or we could be more charitable and follow Cialdini: it seems humans have a strong drive to be consistent with their past actions. Our heroes have said, very publicly, that they’re going to build this thing, so now they have a natural tendency to do exactly what they said[2].

But the end result, in my sample of two, is disastrous. Haystack has completely unravelled as fundamentally flawed. Diaspora seems to be deeply rooted in totally insecure design. I hope I am preaching to the choir when I say that security is not something that should be bolted on later, and that the best way to do security design is to have the design reviewed as widely as possible. In both Haystack and Diaspora’s cases that could, and should, have been a full public review. There is no excuse for this: it wastes a vast amount of enthusiasm and energy (and money) on ultimately destructive goals.

I don’t have any great ideas on how to fix this, though. Yes, reporters getting expert assistance will help. Many of the experts in the security field are quite outspoken, it isn’t hard to track them down. In Diaspora’s case, perhaps one could have expected that Kickstarter would take a more active role in guidance and mentoring. Or if they already do, get it right.

Natural selection gets you every time.

BTW, if any journalists are reading this, I am absolutely happy to take a call to explain, in English, technological issues.

[1] I love this word. Ben Hyde introduced me to it.

[2] This is known as “consistency” in the compliance trade.

5 Jan 2010

Security Is Hard: Live With It

Filed under: Open Source,Open Standards,Programming,Rants,Security — Ben @ 17:59

I’ve been meaning to summon the energy to write about OAuth WRAP. It’s hard to do, because like OpenID, OAuth WRAP is just so obviously a bad idea, it’s difficult to know where to start. So I was pleased to see that Ben Adida saved me the trouble.

I understand. Security is hard. Getting those timestamps and nonces right, making sure you’ve got the right HMAC algorithm… it’s non-trivial, and it slows down development. But those things are there for a reason. The timestamp and nonce prevent replay attacks. The signature prevents repurposing the request for something else entirely. That we would introduce a token-as-password web security protocol in 2010 is somewhat mind-boggling.

Exactly. The idea that security protocols should be so simple that anyone can implement them is attractive, but as we’ve seen, wrong. But does the difficulty of implementing them mean they can’t be used? Of course not – SSL is fantastically hard to implement. But it is also fantastically widely deployed. Why? Because there are several open source libraries that do everything for you. Likewise every crypto algorithm under the sun is hard to implement, but there’s no shortage of libraries for them, either.

Clearly the way forward for OAuth is not to dumb it down to the point where any moron can implement it; the way forward is to write libraries that implement a properly secure version, and have everyone use them.
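For illustration, here’s a minimal sketch of what such a library does under the hood – my own names and base-string format, not any real OAuth API: the timestamp and nonce make each request unique (no replay), and the HMAC binds the method and URL (no repurposing). Build with -lcrypto.

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/rand.h>

int main(void)
{
    const unsigned char key[] = "shared-secret"; /* illustrative only */
    unsigned char nonce[8], mac[EVP_MAX_MD_SIZE];
    unsigned int maclen;
    char base[512];

    if (RAND_bytes(nonce, sizeof nonce) != 1)
        return 1;

    /* Everything the server must be able to trust goes into the base
       string that gets MAC'd: method, URL, timestamp, nonce. */
    snprintf(base, sizeof base,
             "POST&https://api.example.com/pay&%ld&%02x%02x%02x%02x%02x%02x%02x%02x",
             (long)time(NULL), nonce[0], nonce[1], nonce[2], nonce[3],
             nonce[4], nonce[5], nonce[6], nonce[7]);

    HMAC(EVP_sha256(), key, (int)sizeof key - 1,
         (const unsigned char *)base, strlen(base), mac, &maclen);

    printf("signed base string: %s (%u-byte MAC)\n", base, maclen);
    return 0;
}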

If the amount of effort that has been wasted on OAuth WRAP (and OpenID) had instead been put into writing code for the various platforms then we would probably now have pretty much universal support for OAuth and no-one would be whining that it’s too hard to implement.

Instead, we will spend the next decade or two clearing up the mess that we seem to be intent on creating. It makes me tired.

16 Aug 2009

Useful Security

Filed under: Rants,Security — Ben @ 15:59

A while back, I had a bash at formal methods. Reasonably enough, some people had a bash back. I feel I should respond.

Michael asked “what about Tokeneer?”

The description on that page makes some great points for me. Firstly, and I think most importantly

At each stage in the process verification activities were undertaken to ensure that no errors had been introduced. These activities included review and semi-formal verification techniques applicable to the entities being developed.

In other words: “we can’t actually apply formal methods to the entire process, so we did some ad hoc stuff, too”. Core to this problem is that you have to somehow describe what it is you are trying to do. In order to disguise the problem, formal methods folk like to call this a specification – but when you get down to it, it’s a program. Which can have bugs in it. Which you can only diagnose by thinking about it and testing.

Next, from the overview, section 5

Since release 7.4, SPARK has included an “accept” annotation that can be used to justify expected warnings and errors. These annotations have been added as appropriate.

In other words, verification fails, but these failures are “justified”. Hmmm.

Again from the overview (section 5.3): even after formal verification a bug was found – an elementary integer overflow. I would hope any competent programmer would have spotted this as they were writing the code, but apparently it was beyond all this expensive and painful infrastructure.
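I don’t know the exact code, but the generic shape of an elementary overflow – a bounds check that wraps – is the kind of thing I mean (the function names are mine, purely for illustration):

#include <stdio.h>
#include <limits.h>

/* Buggy: the addition wraps modulo 2^32, so the check can pass
   when it shouldn't. */
int within_limit(unsigned used, unsigned requested, unsigned limit)
{
    return used + requested <= limit;
}

/* Fixed: rearranged so nothing can wrap. */
int within_limit_fixed(unsigned used, unsigned requested, unsigned limit)
{
    return requested <= limit && used <= limit - requested;
}

int main(void)
{
    printf("buggy: %d, fixed: %d\n",
           within_limit(UINT_MAX, 2, 100),       /* 1: wrap sneaks past */
           within_limit_fixed(UINT_MAX, 2, 100)); /* 0: rejected */
    return 0;
}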

Finally (there’s more, but I have other things to write about, so I’ll stop here), again from the summary

# lines of code: 9939
# total effort (days): 260

Wow. That’s a lot of days for not very much code.

Toby asked “how would you feel about a proposal that asked for a range of standard software modules whose design had been subjected to formal analysis, at some semi-useful and reasonable level of abstraction, of some of its key functional/security properties?”

I guess I feel about this rather as Gandhi felt about Western civilisation: it would be a good idea.

More positively, I imagine there are actually some modules that are sufficiently small in scope that one could convince oneself that the specification was actually correct, and maybe even prove that the implementation matched the specification. For example, things like arrays, sets and maps might be implementable correctly. Where it all falls apart, in my view, is when you try to make a system that actually does something useful: then you get into the realm where debugging the specification is the hard problem.

Ian Brown asked “do you think a formally verified microkernel that enforces security controls within an OS is achievable/desirable?”

I think this actually might be achievable, and perhaps even desirable. But I’m not so sure it would be truly useful. A microkernel architecture inherently punts on all the interesting security questions and simply enforces whatever incorrect decisions you have made outside the kernel. So, you are still left with all the real-world security problems, but at least you have a place to stand when you claim you are enforcing whatever security properties you think your code implements (that is, you can discount the microkernel as a source of problems and only have to worry about the other 99% of the code).

I also strongly suspect that a team of skilled engineers could carefully write a secure microkernel that was just as robust without any need for formal verification. More quickly and with less swearing.

Finally, Anil Madhavapeddy writes “Modern type systems spring from a solid formal footing, and mainstream languages are adopting functional features”.

This is a great point, and I actually agree. I’m a big fan of type safety, as anyone who’s seen some of the hoops I jumped through in both the Apache module system and more recently in OpenSSL will know. I really like things to break at compile-time rather than run-time if at all possible, and type safety is one way to achieve this (this is one reason I prefer C++ to C, despite its many defects). I guess functional languages also have interesting properties from that point of view but I feel like I understand them less. I really must learn Erlang (or Haskell, I suppose, but I’ve tried and failed a few times already – it seems there are no good tutorials out there).

Anil also says “Even for C, static analysis is increasingly used to track down bugs (with products like Coverity which are very effective these days)”.

Sorry, but no. I thought for quite a while that there was a future in static analysis. But the more I am exposed to it, the less I think it is true. The false positive rate is still fantastically high, even in Coverity, which is probably the best system I’ve played with, and even correct hits tend to be on the “academically correct but not actually useful” side.

I do continue to suspect that static analysis combined with annotation might be useful, though (e.g. as in Deputy). But really, this is just trying to bolt strong typing onto weakly typed languages and isn’t truly static analysis as we hoped it might be.

Finally, he says “If things continue like they have been, then we’ll continue to cherry pick the practical developments from the formal methods community into mainstream languages, and reap the benefits.”

I certainly hope so, and I don’t want to discourage further research into formal methods. I just object to the notion that they are practical to the extent that we should be trying to use them wholesale to build real systems. They really aren’t.

I was intending to also talk a bit about things I think actually are useful for security, but I think I’ll leave that for a later post.

2 Aug 2009

Rigour

Filed under: Rants,Security — Ben @ 17:45

I know this is ancient history now, but I was busy, OK?

A while back, Schneier said something that really annoyed me

Commenting on Google’s claim that Chrome was designed to be virus-free, I said:

Bruce Schneier, the chief security technology officer at BT, scoffed at Google’s promise. “It’s an idiotic claim,” Schneier wrote in an e-mail. “It was mathematically proved decades ago that it is impossible — not an engineering impossibility, not technologically impossible, but the 2+2=3 kind of impossible — to create an operating system that is immune to viruses.”

What I was referring to, although I couldn’t think of his name at the time, was Fred Cohen’s 1986 Ph.D. thesis where he proved that it was impossible to create a virus-checking program that was perfect. That is, it is always possible to write a virus that any virus-checking program will not detect.

Now, if what you’re interested in is PR, then it seems you can get away with these kinds of statements; certainly I have not seen a single public challenge. But if you care about rigour, you have to do rather better, since Schneier’s claim is demonstrably wrong. Why? Well, here goes … Cohen’s proof relies on computer science’s only trick: diagonalisation[1]. Basically, I assume that I have some perfect virus detector, V. If I give V a program, p, it returns true or false, depending on whether p is a virus or not. Let’s be charitable and assume that we can define what a virus is well enough to allow such a program to exist. Let’s also define what is meant by “perfect” – by that we mean that any program that exhibits virus behaviour will be classified as a virus and any that does not will be classified as not a virus.

Then Cohen says: fine, write a program, c, like this:

if(V(c))        /* V says c is a virus... */
  do_nothing(); /* ...so c is harmless, and V is wrong */
else            /* V says c is not a virus... */
  be_evil();    /* ...so c misbehaves, and V is wrong again */

Now, if V(c) returns true (i.e. c is a virus), then c does nothing, and therefore V is wrong. Similarly, if it returns false, then c behaves evilly, and once more V is wrong. QED, no perfect virus checker is possible. So far, we are in agreement.

Can we go from this to “it is always possible to write a virus that any virus-checking program will not detect”. No, because the proof only talks about perfect virus-checking programs. If the virus checker is allowed to be wrong sometimes, then the proof no longer works. In particular, if the virus checker can return false positives (i.e. claim that innocent programs are viruses) but is not allowed to return false negatives, then we can, indeed, have a virus checker that would keep our system free of viruses. Why? Because the virus checker will always detect a virus, by definition, but the diagonalisation proof no longer works – in particular, the case where V(c) is true no longer leads to a contradiction.

If we want to go a little further and show that such a program can, in fact, exist, we can actually do that quite easily. For example, consider the program V that always returns true: this would prevent any programs at all from running, so our OS wouldn’t be all that useful, but it would be virus-free. Less frivolously, we could have a list of non-virus programs, and V could return false for any program in the list and true for all others. Even less frivolously, it is possible to imagine an analysis that’s thorough enough, for some restricted set of cases, to permit reasonably general programs to pass the test without allowing any viruses (obviously we would also disallow many perfectly innocent programs), though at that point we’d have to define “virus” in order to drill down into what the analysis might be – it could, for example, require that the program be written in some restricted, easily-analysed language and avoid constructs that are hard to deal with.
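Such a false-positive-only V really is trivial to write. A minimal sketch of the list-based version (the allowlist is obviously illustrative):

#include <stdio.h>
#include <string.h>

static const char *allowlist[] = { "cat", "ls", "vi" };

/* Returns true ("is a virus") for anything not explicitly known good.
   False positives galore, but no false negatives by construction. */
int V(const char *program)
{
    size_t i;
    for (i = 0; i < sizeof allowlist / sizeof allowlist[0]; i++)
        if (strcmp(program, allowlist[i]) == 0)
            return 0;
    return 1;
}

int main(void)
{
    printf("ls: %d, evil_app: %d\n", V("ls"), V("evil_app"));
    return 0;
}

Run Cohen’s c against this V: if c isn’t on the list, V calls it a virus and c duly does nothing, so V’s mistake is a false positive – exactly the kind we said we’d permit.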

So, sorry, Schneier. It has not been shown that it is impossible, in the 2 + 2 = 3 sense, to write a virus-free OS. Indeed, it has been shown that it is, in fact, possible – though I would certainly agree that it is an open question how hard it would be to create an OS that’s both useful and virus-free.

[1] Don’t get me wrong; it’s a good trick.

7 Apr 2009

Trust Me, I’m Signed!

Filed under: Rants,Security — Ben @ 15:30

The W3C recently announced their spec for signing widgets. Signing things is a good idea, if you’d like to be assured that they come from where you think they come from, or you want to detect tampering. But I would have hoped we were way past statements like this

Widget authors and distributors can digitally sign widgets as a trust and quality assurance mechanism.

If trust and quality were assured by signatures then our lives would be so much easier – but sadly it is not so. Indeed, it is so much not so that CAs, in an amazing piece of marketing, have managed to persuade us that, since they work so poorly for trust, what we should do is pay them even more money to get more robust signatures (a.k.a. EV certificates)!

Anyway, I was sufficiently irritated by this stupidity that I felt it necessary to remark on it. Which prompted this absolutely brilliant response from my friend Peter Gutmann

From the report:

Of signed detected files, severity of the threats tended to be high or severe, with low and moderate threats comprising a much smaller number of files:

Severe 50819
High 73677
Moderate 42308
Low 1099

So there you go, signing definitely does provide a “trust and quality assurance mechanism”. If it’s a CA-certified signed rootkit or worm, you know you’ve been infected by the good stuff.

“the report”, by the way, is a large-scale study by Microsoft which makes for some interesting reading. In particular, they also acknowledge that even the promise that signatures would at least let you track down the evil bastard that wrote the code has proven empty

Though also intended to identify the signing parties, Microsoft has been unable to identify any authors of signed malware in cooperation with CAs because the malware authors exploit gaps in issuing practices and obtain certificates with fraudulent identities.

28 Mar 2009

More Banking Stupidity: Phished by Visa

Filed under: General,Rants,Security — Ben @ 14:21

Not content with destroying the world’s economies, the banking industry is also bent on ruining us individually, it seems. Take a look at Verified By Visa. Allegedly this protects cardholders – by training them to expect a process in which there’s absolutely no way to know whether you are being phished or not. Even more astonishing is that this is seen as a benefit!

Frame inline displays the VbV authentication page in the merchant’s main window with the merchant’s header. Therefore, VbV is seen as a natural part of the purchase process. It is recommended that the top frame include the merchant’s standard branding in a short and concise manner and keep the cardholder within the same look and feel of the checkout process.

Or, in other words

Please ensure that there is absolutely no way for your customer to know whether we are showing the form or you are. In fact, please train your customer to give their “Verified by Visa” password to anyone who asks for it.

Craziness. But it gets better – obviously not everyone is pre-enrolled in this stupid scheme, so they also allow for enrolment using the same inline flow. Now the phishers have the opportunity to also get information that will allow them to identify themselves to the bank as you. Yes, Visa have provided a very nicely tailored and packaged identity theft scheme. But, best of all, rather like Chip and PIN, they push all blame for their failures on to the customer

Verified by Visa helps protect you from fraudulent claims from cardholders – that they didn’t take part in, or authorise, a payment. Once you are up and running with Verified by Visa, you are no longer liable for chargebacks of this nature.

In other words, if the phisher uses your Verified by Visa password, then it’s going to be your fault – obviously the only way they could know it is if you told them! If you claim it was not you, then you are guilty of fraud; it says so, right there.

16 Feb 2009

Identification Is Not Security

Filed under: General,Rants,Security — Ben @ 18:51

The New York Times have an article about the Stanford Clean Slate project. It concludes

Proving identity is likely to remain remarkably difficult in a world where it is trivial to take over someone’s computer from half a world away and operate it as your own. As long as that remains true, building a completely trustable system will remain virtually impossible.

As far as I can tell, Clean Slate itself doesn’t make this stupid claim, the NYT decided to add it for themselves. But why do they think identification is relevant? Possibly because we are surrounded by the same spurious claim. For example…

  • We need ID cards because they will prevent terrorism.
  • We shouldn’t run software on our Windows box that isn’t signed because that’ll prevent malware.
  • We should only connect to web servers that have certificates from well-known CAs because only they can be trusted.

But…

  • The guys who crashed the planes were all carrying ID. Didn’t help.
  • The guys who blew up the train in Spain were all carrying ID. Didn’t help.
  • People get hacked via their browser all the time. Did signing it help?
  • What does it take to sign code? A certificate, issued by a CA…
  • What does it take to get a certificate? Not much … proof that you own a domain, in fact. So, I can trust the server because the guy that owns it can afford to pay Joker $10? And I can trust the code he signed? Why?

Nope. Security is not about knowing who gave you the code that ate your lunch – security is about having a system that is robust against code that you don’t trust. The identity of the author of that code should be irrelevant.

11 Feb 2009

Crypto Craft Knowledge

Filed under: Crypto,Programming,Rants,Security — Ben @ 17:50

From time to time I bemoan the fact that much of good practice in cryptography is craft knowledge that is not written down anywhere, so it was with interest that I read a post by Michael Roe about hidden assumptions in crypto. Of particular interest is this

When we specify abstract protocols, it’s generally understood that the concrete encoding that gets signed or MAC’d contains enough information to unambiguously identify the field boundaries: it contains length fields, a closing XML tag, or whatever. A signed message {Payee, Amount} K_A should not allow a payment of $3 to Bob12 to be mutated by the attacker into a payment of $23 to Bob1. But ISO 9798 (and a bunch of others) don’t say that. There’s nothing that says a conforming implementation can’t send the length field without authentication.

No, of course, an implementor probably wouldn’t do that. But they might.

Actually, in my extensive experience of reviewing security-critical code, this particular error is extremely common. Why does Michael assume that they probably wouldn’t? Because he is steeped in the craft knowledge around crypto. But most developers aren’t. Most developers don’t even have the right mindset for secure coding, let alone correct cryptographic coding. So, why on Earth do we expect them to follow our unwritten rules, many of which are far from obvious even if you understand the crypto?
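To make the trap concrete, here’s a sketch (my own toy encoding, not ISO 9798’s) showing why the field boundaries must be authenticated: the naive concatenation produces identical bytes for the two payments, so a MAC over one is a MAC over the other, while the length-prefixed encoding keeps them distinct.

#include <stdio.h>
#include <string.h>

/* Naive: payee || amount -- the field boundary is not recoverable. */
size_t encode_naive(char *buf, const char *payee, const char *amount)
{
    return (size_t)sprintf(buf, "%s%s", payee, amount);
}

/* Unambiguous: each field carries its length, and the lengths are
   part of the bytes that get MAC'd. */
size_t encode_lv(char *buf, const char *payee, const char *amount)
{
    return (size_t)sprintf(buf, "%zu:%s%zu:%s",
                           strlen(payee), payee, strlen(amount), amount);
}

int main(void)
{
    char a[64], b[64];

    encode_naive(a, "Bob12", "3");
    encode_naive(b, "Bob1", "23");
    printf("naive collide: %d\n", strcmp(a, b) == 0); /* 1: same bytes */

    encode_lv(a, "Bob12", "3");
    encode_lv(b, "Bob1", "23");
    printf("lv collide: %d\n", strcmp(a, b) == 0);    /* 0: distinct */
    return 0;
}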

18 Jan 2009

United Are Bastards

Filed under: Rants — Ben @ 20:41

I just accidentally went through the whole booking process on United’s US site. There were many seats in Economy Plus. But I couldn’t book because I don’t have a US address. So, I went through it all again on the UK site. Guess what? Economy Plus is allegedly booked solid. Bastards.

16 Jan 2009

Will GPLv3 Kill GPL?

Filed under: Open Source,Rants — Ben @ 14:49

I started looking at the LLVM project today, which is a replacement for the widely used gcc compiler for C and C++. My interest in this was prompted by thinking once more about static analysis, which it is pretty much impossible to use gcc for, and is likely to remain so, because Stallman opposes features which would enable it.

Anyway, being an optimist, I thought perhaps the interest in LLVM and clang (the C/C++ front end) was prompted by a sudden surge of interest in open source static analysis, but asking around, it seems it is not so.

The primary motivator appears to be GPLv3. Why? Well, here’s a few facts.

  • GPLv3 is not compatible with GPLv2. Don’t take my word for it, believe Richard.
  • Linux is, of course, famously GPLv2 without the upgrade clause, and hence GPLv3 incompatible.
  • FreeBSD, for example, are unlikely to accept software into the core that is GPLv3. No new licence can be used without core team approval and I am told this has not been given for GPLv3 and is not likely to be.
  • Commercial users of open source have always been a bit twitchy about GPLv2, but they’re very twitchy indeed about GPLv3. And don’t tell me commercial users are not important: these days they are the ones financing the development of open source software.

GCC is, apparently, going to move to GPLv3 – it says here that GCC 4.2.1 would be the last version released under GPLv2 (which is a bit rum, because I just checked GCC 4.4 and it is GPLv2. What gives?).

So, pretty clearly, there’s a need for a C/C++ compiler that is not GPLv3, and this, it would seem, is the real driver for LLVM.

Obviously this issue is not confined to GCC. As more software moves to GPLv3, what will the outcome be? Will the friction between GPL and other licences finally start persuading projects that free != GPL, and that BSD-style licences better suit their needs? Or will it just be that GPLv3 fails to make headway? We can only hope for the former outcome.

7 Jan 2009

Yet Another Serious Bug That’s Been Around Forever

Filed under: Crypto,Open Source,Programming,Rants,Security — Ben @ 17:13

Late last year the Google Security Team found a bug in OpenSSL that’s been there, well, forever. That is, nearly 10 years in OpenSSL and, I should think, for as long as SSLeay existed, too. This bug means that anyone can trivially fake DSA and ECDSA signatures, which is pretty damn serious. What’s even worse, numerous other packages copied (or independently invented) the same bug.

I’m not sure what to say about this, except to reiterate that it seems people just aren’t very good at writing or reviewing security-sensitive code. It seems to me that we need better static analysis tools to catch this kind of obvious error – and so I should bemoan, once more, that there’s really no-one working seriously on static analysis in the open source world, which is a great shame. I’ve even offered to pay real money to anyone (credible) that wants to work in this area, and still, nothing. The closed source tools aren’t that great, either – OpenSSL is using Coverity’s free-for-open-source service, and it gets a lot of false positives. And it didn’t find this rather obvious (and obviously statically analysable) bug.
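For the record, the shape of the bug (my paraphrase of the advisory, not the actual OpenSSL diff) is depressingly simple: verify functions return 1 for a good signature, 0 for a bad one and -1 for an error, and the broken callers treated “non-zero” as good.

#include <stdio.h>

int check_buggy(int ret)   /* ret as returned by a verify function */
{
    return ret != 0;       /* WRONG: -1 (error) is accepted as valid */
}

int check_correct(int ret)
{
    return ret == 1;       /* only an explicit "good" verdict passes */
}

int main(void)
{
    int ret = -1;          /* e.g. the result of a malformed signature */
    printf("buggy accepts: %d, correct accepts: %d\n",
           check_buggy(ret), check_correct(ret));
    return 0;
}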

Oh, I should also say that we (that is, the OpenSSL Team) worked with oCERT for the first time on coordinating a response with other affected packages. It was a very easy and pleasant experience, I recommend them highly.

30 Dec 2008

Morons Release Beautiful Attack

Filed under: Rants,Security — Ben @ 16:18

I’m in two minds whether to even talk about this, but I guess it’ll be all over the ‘net soon.

A rather lovely attack on X.509 certificates exploiting the weakness of MD5 was released today. Read the (very well written) paper for all the gory details, but the short version is you construct a pair of certificates with colliding MD5 hashes. One of these you send off to get signed, and the other you carefully arrange to have the “CA” bit set. This means the second certificate can now be used to sign any other certificate: in effect you have become a CA, using what is known as a chained CA certificate.

So why are they morons? Because they chose to 0day this attack. Why? Users could have been protected from this exploit quite easily – only browsers and CAs had to be notified, which is easily achievable without premature public disclosure. I have no idea why they chose not to do this, but they’ve certainly destroyed any trust I had in them – which is a shame, at least some of them were people I respected.

Ironically, their attack is rendered somewhat pointless right now, as it has recently been shown that Comodo will issue a certificate for any website to anyone at all, without verification.

4 Dec 2008

Making No Sense

Filed under: Rants,Security — Ben @ 20:27

Paul Madsen wants to continue to beat this dead horse. OK.

unphishable: impossible to phish; see phish.
…
phish: a fraudulent attempt to acquire sensitive information such as usernames, passwords, and credit card details by masquerading as a trustworthy entity in an electronic communication

This more inclusive definition does not guarantee (for some mechanisms, this would be the case) that there will be nothing on the authentication server that could be used by an insider to impersonate the user elsewhere. And so, this type of unphishable does not inevitably mean that it is appropriate to use the same credential everywhere.

It seems the plan here is to define certain ways of stealing your password as something other than phishing, and because I want to defend against them, I am therefore wrong. What a pointless argument!

OK, let’s call them unstealable instead of unphishable. Happy now?

2 Nov 2008

All Your Data Are Lost By Us

Filed under: Rants,Security — Ben @ 12:38

Don’t worry, if we put all our data into central government databases, it’ll all be fine. Except when it isn’t. Our esteemed Prime Minister says

“It is important to recognise we cannot promise that every single item of information will always be safe because mistakes are made by human beings. Mistakes are made in the transportation, if you like in the communication, of information.”

in the aftermath of yet another ridiculous data breach: this time, people’s passwords to the Government Gateway on a memory stick dropped in the road.

Perhaps it is uncouth to point this out, but … if the system had been designed by people with any security clue whatsoever there would have been no passwords to put on a memory stick in the first place.
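In case it isn’t obvious what having a security clue would have meant here: you store a salted, iterated hash of each password, and the password itself never exists server-side after enrolment. A minimal sketch using OpenSSL (the parameters are illustrative; build with -lcrypto):

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

int main(void)
{
    const char *password = "hunter2"; /* supplied once, at enrolment */
    unsigned char salt[16], hash[32];

    if (RAND_bytes(salt, sizeof salt) != 1)
        return 1;

    /* Only (salt, hash) is ever stored; whoever finds the memory
       stick gets nothing directly usable as a password. */
    PKCS5_PBKDF2_HMAC_SHA1(password, (int)strlen(password),
                           salt, sizeof salt, 10000, sizeof hash, hash);

    printf("stored: %zu salt bytes + %zu hash bytes, zero passwords\n",
           sizeof salt, sizeof hash);
    return 0;
}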

I notice that Gordon thinks the contractors in this case (Atos Origin) are responsible and action should be taken against them (though how he squares this with his statement that such events are inevitable only a politician can tell you). Well, sure. But why is he not taking action against the morons that designed and approved a system that made it possible for Atos Origin to have the passwords in the first place?

My theory? Policy makers think that it is beneath them to actually understand the technologies they make policy about, or to consult anyone who does. So, it has not occurred to Brown or any of his advisers that this is actually an avoidable error.

24 Oct 2008

WTF Does Open Source Have To Do With Business Models?

Filed under: Open Source,Rants — Ben @ 14:52

Unless you are Red Hat, the answer is: about as much as eating lunch has to do with business models.

Whatever business you are in, you need your lunch, or you’re going to die. Likewise, but perhaps not quite so urgently, you need software. Open source is about getting the software you want for the minimum investment. It is cheaper and more efficient for those who need particular functionality to group together and provide it for themselves than it is to pay five different companies to not quite provide it in five different ways.

That’s all there is. End of.

So, when you read something like this: “Report: Pure Open Source No Longer a Viable Business Model”, what you are supposed to do is hit the keyboard violently and scream, “It never was, you morons! Get with the program!”

Selling support for software is a business model. Whether it is open source or not. That’s the business Red Hat is in. Not the “open source business”. There isn’t one.

26 Sep 2008

ICANN’s Never-ending Quest for Suckers

Filed under: Rants — Ben @ 19:43

In their latest attempt to answer the question “how can we get everyone with a domain name to pay for it again?” ICANN are apparently enthused about this stupid idea.

But wait … I have it … why don’t we create a TLD for every service? We obviously need .www, .smtp, .dhcp and so forth, or how will people know what service you are offering?

27 Jun 2008

ICANN Create Domain Cash Cow

Filed under: Rants — Ben @ 12:45

Back when I used to serve on Nominet’s Policy Advisory Board, I used to find myself regularly arguing against the creation of new subdomains under .uk. Why? Because the only point I can actually see for creating a new subdomain is so that the registrars can make a huge pile of money while everyone scrambles to register in the new domain in order to protect their brand names.

Does anyone else benefit in any way? No. The registrants do not benefit: they already had domain names, they didn’t need any more. The public do not benefit: one domain name is quite sufficient for any Internet service.

So, given the complete pointlessness of doing this, I am not in the slightest surprised to hear that that most pointless of organisations, ICANN, has decided to allow approximately a zillion new TLDs.

In their usual egotistical style, they bill this piece of stupidity as…

Biggest Expansion to Internet in Forty Years Approved for Implementation

The only thing this expands is the wallets of registrars and, presumably, ICANN’s coffers. The Internet itself is not expanded one iota by this dumb move.

I guess the interesting thing to watch here is who manages to figure out the best TLDs to persuade people they need to register to protect themselves. “.trademark” sounds promising to me. “.name” would also be good. I invite your suggestions – perhaps we should form a consortium to register them, too.

Think I can get .ben? That would be cool.
