
Ben Laurie blathering

23 Jul 2011

An Efficient and Practical Distributed Currency

Filed under: Anonymity,Crypto,Security — Ben @ 15:51

Now that I’ve said what I don’t like about Bitcoin, it’s time to talk about efficient alternatives.

In my previous paper on the subject I amused myself by hypothesizing an efficient alternative to Bitcoin based on whatever mechanism it uses to achieve consensus on checkpoints. Whilst this is fun, it is pretty clear that no such decentralised mechanism exists. Bitcoin enthusiasts believe that I have made an error by discounting proof-of-work as the mechanism. For example:

I believe Laurie’s paper is missing a key element in bitcoin’s reliance on hashing power as the primary means of achieving consensus: it can survive attacks by governments.

If bitcoin relied solely on a core development team to establish the authoritative block chain, then the currency would have a Single Point of Failure, that governments could easily target if they wanted to take bitcoin down. As it is, everyone in the bitcoin community knows that if governments started coming after bitcoin’s development team, the insertion of checkpoints might be disrupted, but the block chain could go on.

Checkpoints are just an added security measure, that are not essential to bitcoin’s operation and that are used as long as the option exists. It is important for the credibility of a decentralized currency that it be possible for it to function without such a relatively easy to disrupt method of establishing consensus, and bitcoin, by relying on hashing power, can.

or

Ben, your analysis reads as though you took your well-known and long-standing bias against proof-of-work and reverse engineered that ideology to fit into an ad hoc criticism of bitcoin cryptography. You must know that bitcoin represents an example of Byzantine fault tolerance in use and that the bitcoin proof-of-work chain is the key to solving the Byzantine Generals’ Problem of synchronising the global view.

My response is simple: yes, I know that proof-of-work, as used in Bitcoin, is intended to give Byzantine fault tolerance, but my contention is that it doesn’t. And, furthermore, that it fails in a spectacularly inefficient way. I can’t believe I have to keep reiterating the core point, but here we go again: the flaw in proof-of-work as used in Bitcoin is that you have to expend 50% of all the computing power in the universe, for the rest of time in order to keep the currency stable (67% if you want to go for the full Byzantine model). There are two problems with this plan. Firstly, there’s no way you can actually expend 50% (67%), in practice. Secondly, even if you could, it’s far, far too high a price to pay.
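To make that concrete, here is a toy calculation of my own (following the gambler’s-ruin analysis in the Bitcoin paper itself, not any real Bitcoin code) of the chance that an attacker ever overtakes the honest chain:

```python
# A toy of the point above: an attacker holding a fraction q of the total
# hash power eventually overtakes a chain that leads by z blocks with the
# probability below. Below 50% the chance decays exponentially; at or
# above 50% the attacker wins with certainty - hence the need to keep
# expending a majority of the hash power forever.

def catch_up_probability(q, z):
    """Chance an attacker with hash-power share q rewrites a chain z blocks ahead."""
    p = 1.0 - q              # the honest network's share
    if q >= p:               # majority attacker: certain to overtake eventually
        return 1.0
    return (q / p) ** z      # minority attacker: exponentially unlikely

assert catch_up_probability(0.1, 6) < 1e-5    # a small attacker is priced out
assert catch_up_probability(0.5, 100) == 1.0  # 50% wins, however far behind
```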

In any case, in the end, control of computing power is roughly equivalent to control of money – so why not cut out the middleman and simply buy Bitcoins? It would be just as cheap and it would not burn fossil fuels in the process.

Finally, if the hash chain really works so well, why do the Bitcoin developers include checkpoints? The currency isn’t even under attack and yet they have deemed them necessary. Imagine how much more needed they would be if there were deliberate disruption of Bitcoin (which seems quite easy to do to me).

But then the question would arise: how do we efficiently manage a distributed currency? I present an answer in my next preprint: “An Efficient Distributed Currency”.

21 May 2011

Bitcoin is Slow Motion

Filed under: Anonymity,Crypto,General,Privacy,Security — Ben @ 5:32

OK, let’s approach this from another angle.

The core problem Bitcoin tries to solve is how to get consensus in a continuously changing, free-for-all group. It “solves” this essentially insoluble problem by making everyone walk through treacle, so it’s always evident who is in front.

But the problem is, it isn’t really evident. Slowing everyone down doesn’t take away the core problem: that someone with more resources than you can eat your lunch. Right now, with only modest resources, I could rewrite all of Bitcoin history. By the rules of the game, you’d have to accept my longer chain and just swallow the fact you thought you’d minted money.

If you want to avoid that, then you have to have some other route to achieve a consensus view of history. Once you have a way to achieve such a consensus, you could use that same mechanism to mint coins by simply numbering them sequentially, instead of burning CPU on slowing yourself down.

Now, I don’t claim to have a robust way to achieve consensus; any route seems to be open to attacks by people with more resources. But I make this observation: as several people have noted, currencies are founded on trust: trust that others will honour the currency. It seems to me that there must be some way to leverage this trust into a mechanism for consensus.

Right now, for example, in the UK, I can only spend GBP. At any one time, in a privacy preserving way, it would in theory be possible to know who was in the UK and therefore formed part of the consensus group for the GBP. We could then base consensus on current wielders of private keys known to be in the UK, the vast majority of whom would be honest. Or their devices would be honest on their behalf, to be precise. Once we have such a consensus group, we can issue coins simply by agreeing that they are issued. No CPU burning required.
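As a sketch of how cheap issuance becomes once such a group exists (entirely hypothetical: the names and the voting rule are illustrative, not a worked protocol, and the hard part of establishing the group is assumed away):

```python
# Hypothetical sketch: given a known consensus group, a coin is "issued"
# the moment a strict majority of the group says so. No CPU burning.

def issue_coin(coin_id, votes, group):
    """True iff a strict majority of the consensus group approves coin_id."""
    approvals = sum(1 for member in group if votes.get(member, False))
    return approvals * 2 > len(group)

group = {"alice", "bob", "carol", "dave", "eve"}
# Three of five members agree coin #1 exists: it is issued.
assert issue_coin(1, {"alice": True, "bob": True, "carol": True}, group)
# A lone (dishonest) member cannot issue coin #2 alone.
assert not issue_coin(2, {"eve": True}, group)
```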

20 May 2011

Bitcoin 2

Filed under: Anonymity,Crypto,Security — Ben @ 16:32

Well, that got a flood of comments.

Suppose I take 20 £5 notes, burn them and offer you a certificate for the smoke for £101. Would you buy the certificate?

This is the value proposition of Bitcoin. I don’t get it. How does that make sense? Why would you burn £100 worth of non-renewable resources and then use the result to represent £100 of buying power? Really? That’s just nuts, isn’t it?

I mean, it’s nice for the early adopters, so long as new suckers keep coming along. But in the long run it’s just a pointless waste of stuff we can never get back.

Secondly, the point of referencing “Proof-of-work Proves Not to Work” was just to highlight that cycles are much cheaper for some people than others (particularly botnet operators), which makes them a poor fit for defence.

Finally, consensus is easy if the majority are honest. And then coins become cheap to make. Just saying.

17 May 2011

Bitcoin

Filed under: Anonymity,Distributed stuff,Security — Ben @ 17:03

A friend alerted me to a sudden wave of excitement about Bitcoin.

I have to ask: why? What has changed in the last 10 years to make this work when it didn’t in, say, 1999, when many other related systems (including one of my own) were causing similar excitement? Or in the 20 years since the wave before that, in 1990?

As far as I can see, nothing.

Also, for what it’s worth, if you are going to deploy electronic coins, why on earth make them expensive to create? That’s just burning money – the idea is to make something unforgeable as cheaply as possible. This is why all modern currencies are fiat currencies instead of being made out of gold.

Bitcoins are designed to be expensive to make: they rely on proof-of-work. It is far more sensible to use signatures over random numbers as a basis, as asymmetric cryptography gives us the required unforgeability without any need to involve work. This is how Chaum’s original system worked. And the only real improvement since then has been Brands’ selective disclosure work.
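To illustrate what “signatures over random numbers” buys you, here is a textbook-RSA toy (insecure key sizes, no blinding; Chaum’s actual scheme also blinds the serial so the mint cannot link coins to withdrawals):

```python
# The mint signs a random serial number; anyone with the mint's public key
# can verify the coin with a single modular exponentiation. Unforgeability
# comes from the signature, not from burned work.
import secrets

p, q = 104729, 1299709             # the 10,000th and 100,000th primes: toy-sized only
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # mint's private exponent

def mint_coin():
    serial = secrets.randbelow(n)
    return serial, pow(serial, d, n)      # (serial, mint's signature)

def is_genuine(serial, sig):
    return pow(sig, e, n) == serial

serial, sig = mint_coin()
assert is_genuine(serial, sig)
assert not is_genuine((serial + 1) % n, sig)   # a tampered coin fails
```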

If you want to limit supply, there are cheaper ways to do that, too. And proof-of-work doesn’t, anyway (it just gives the lion’s share to the guy with the cheapest/biggest hardware).

Incidentally, Lucre has recently been used as the basis for a fully-fledged transaction system, Open Transactions. Note: I have not used this system, so make no claims about how well it works.

(Edit: background reading – “Proof-of-Work” Proves Not to Work)

21 Dec 2010

Is Openleaks The Next Haystack?

As everyone who’s even half-awake knows by now, a bunch of people who used to work on Wikileaks have got together to work on Openleaks. From what I hear, Openleaks is going to be so much better than Wikileaks – it will have no editorial role, it will strongly protect people who submit leaks, it’s not about the people who run it, it’ll be distributed and encrypted.

But where’s the design to back up this rhetoric? Where are the security reviews from well-known authorities? They seem to be missing. Instead we have excited articles in mainstream media about how wonderful it is going to be, and how many hours the main man has spent on it.

This sounds very familiar indeed. And we all know what happened last time round.

Of course, Openleaks may be fine, but I strongly suggest that those who are working on it publish their plan and subject it to scrutiny before they put contributors at risk.

As always, I offer my services in this regard. I am sure I am not alone.

18 Dec 2010

ƃuıʇsılʞɔɐlq uʍop-ǝpısd∩

Filed under: Anonymity,Crypto,Identity Management,Lazyweb,Privacy — Ben @ 14:54

A well-known problem with anonymity is that it allows trolls to ply their unwelcome trade. So, pretty much every decent cryptographic anonymity scheme proposed has some mechanism for blacklisting. Basically these work by some kind of zero-knowledge proof that you aren’t on the blacklist – and once you’ve done that you can proceed.

However, this scheme suffers from the usual problem with trolls: as soon as they’re blacklisted, they create a new account and carry on. Solving this problem ultimately leads to a need for strong identification for everyone so you can block the underlying identity. Obviously this isn’t going to happen any time soon, and ideally never, so blacklists appear to be fundamentally and fatally flawed, except perhaps in closed user groups (where you can, presumably, find a way to do strong-enough identification, at least sometimes) – for example, members of a club, or employees of a company.

So lately I’ve been thinking about using “blacklists” for reputation. That is, rather than complain about someone’s behaviour and get them blacklisted, instead when you see someone do something you like, add them to a “good behaviour blacklist”. Membership of the “blacklist” then proves the (anonymous) user has a good reputation, which could then be used, for example, to allow them to moderate posts, or could be shown to other users of the system (e.g. “the poster has a +1 reputation”), or all sorts of other things, depending on what the system in question does.
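As a data-structure sketch of the hypothetical scheme (the zero-knowledge membership proof a real system would need is replaced here by a bare set lookup):

```python
# Reputation attaches to the anonymous token a user already holds, so a
# freshly created token carries no reputation - which is precisely what
# defeats the troll's create-a-new-account fallback.

class GoodBehaviourList:
    def __init__(self):
        self._members = set()

    def reward(self, token):
        """Seen doing something good: add the token to the 'blacklist'."""
        self._members.add(token)

    def punish(self, token):
        """Misbehaviour removes reputation, rather than banning an account."""
        self._members.discard(token)

    def has_reputation(self, token):
        """Stands in for the zero-knowledge proof of membership."""
        return token in self._members

rep = GoodBehaviourList()
rep.reward("token-A")
assert rep.has_reputation("token-A")
rep.punish("token-A")
# After punishment, a brand-new token is no better than the old one:
assert not rep.has_reputation("token-A")
assert not rep.has_reputation("token-B")
```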

The advantage of doing it this way is that misbehaviour can then be used to remove reputation, and the traditional fallback of trolls no longer works: a new account is just as useless as the one they already have.

There is one snag that I can see, though, which is that at least some anonymity systems with blacklisting (e.g. Nymble, which I’ve somehow only recently become aware of) have the side-effect of making every login by a blacklisted person linkable. This is not good, of course. I wonder if there are systems immune to this problem?

Given that Jan Camenisch et al have a presentation on upside-down blacklisting (predating my thinking by quite a long way – one day I’ll get there first!), I assume there are – however, according to Henry, Henry and Goldberg, Camenisch’s scheme is not very efficient compared to Nymble or Nymbler.

16 Aug 2010

It’s All About Blame

Filed under: Anonymity,Crypto,Privacy,Security — Ben @ 17:57

I do not represent my employer in this post.

Eric Schmidt allegedly said

“The only way to manage this is true transparency and no anonymity. In a world of asynchronous threats, it is too dangerous for there not to be some way to identify you. We need a [verified] name service for people. Governments will demand it.”

I don’t care whether he actually said it, but it neatly illustrates my point. The trouble with allowing policy makers, CEOs and journalists to define technical solutions is that their ability to do so is constrained by their limited understanding of the available technologies. At Google (who I emphatically do not represent in this post), we have this idea that engineers should design the systems they work on. I approve of this idea, so, speaking as a practising engineer in the field of blame (also known as security), I contend that what Eric really should have allegedly said was that the only way to manage this is true ability to blame. When something goes wrong, we should be able to track down the culprit. Governments will demand it.

Imagine if, the next time you got on a plane, instead of showing your passport, you instead handed over an envelope with a fancy seal on it, containing your ID, with windows showing just enough to get you on the plane (e.g. your ticket number and photo). The envelope could be opened on the order of a competent court, should it turn out you did something naughty whilst travelling, but otherwise you would remain unidentified. Would this not achieve the true aim that Eric allegedly thinks should be solved by universal identification? And is it not, when spread to everything, a better answer?

Of course, in the physical world this is actually quite hard to pull off, tamper-proof and -evident seals being what they are (i.e. crap), but in the electronic world we can actually do it. We have the crypto.
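For instance, a bare-bones version of the envelope can be built from hash commitments (my illustration, with made-up field names; a real design would put the openings under a court-controlled escrow key):

```python
# Each identity field is committed to with a random nonce. The traveller
# reveals only the "window" fields; everything else stays sealed unless
# the corresponding opening is later released.
import hashlib, secrets

def seal(fields):
    """Return (envelope of per-field commitments, secret openings)."""
    openings = {k: secrets.token_bytes(16) for k in fields}
    envelope = {k: hashlib.sha256(openings[k] + fields[k].encode()).hexdigest()
                for k in fields}
    return envelope, openings

def verify_field(envelope, key, value, nonce):
    return envelope[key] == hashlib.sha256(nonce + value.encode()).hexdigest()

identity = {"name": "A. Traveller", "ticket": "BA-1234"}
envelope, openings = seal(identity)
# The gate agent checks the ticket window and nothing else:
assert verify_field(envelope, "ticket", "BA-1234", openings["ticket"])
# Without the right opening, the sealed name reveals nothing:
assert not verify_field(envelope, "name", "A. Traveller", openings["ticket"])
```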

Just sayin’.

8 Jun 2010

XAuth: Who Should Know What?

Filed under: Anonymity,Privacy,Security — Ben @ 11:26

Note that I am not speaking for my employer in this post.

I’ve been following the debate around XAuth with interest. Whilst the debate about whether centralisation is an acceptable stepping stone to an in-browser service is interesting, I am concerned about the functionality of either solution.

As it stands, XAuth reveals to each relying party all of my identity providers, so that it can then present UI to allow me to choose one of them to authenticate to the RP. Why? What business of the RP is it where I have accounts? All that should be revealed is the IdP I choose to reveal (if any). This seems easy enough to accomplish, even in the existing centralised version: all that has to happen is for the script that xauth.org serves to include the UI for IdP choice.

This is not just privacy religion (or theatre): as the EFF vividly illustrated with their Panopticlick experiment, it is surprisingly easy to uniquely identify people from signals you would have thought were not at all identifying, such as browser version and configuration information. Indeed, a mere 33 IdPs would provide enough information (if evenly distributed) to uniquely identify every person in the world. Meebo had no difficulty at all coming up with 15 of them for page one of many in their introductory blog post:

15 IdPs on page 1 of many
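The arithmetic behind the 33-IdP figure is simple enough to check mechanically:

```python
# Each IdP you do or don't hold an account with contributes one bit, so n
# IdPs distinguish up to 2**n people - assuming, generously, an even
# distribution across the population.
world_population = 7_000_000_000     # roughly, at the time of writing

assert 2 ** 33 > world_population    # 8,589,934,592 combinations: everyone is unique
assert 2 ** 32 < world_population    # and 32 IdPs would not quite be enough
```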

25 Apr 2010

Wikileaks: The Facts

Filed under: Anonymity,Civil Liberties — Ben @ 18:17

Apparently some reporters think it’s useful to make stupid claims about Wikileaks. I won’t bother to link, but just in case you mistook them for journalism: for the record, I am a member of Wikileaks’ advisory board and I am honoured to be. I don’t think Julian Assange is crazy, I think he’s a very talented guy. Yeah, he’s a little unusual, but that just adds to the fun. It is true, however, that I don’t know anything about how Wikileaks operates in detail and it is also true that I think that’s a good idea.

If you don’t know what I’m talking about, I hear there’s a search engine that might help. Or you could do something useful with your time.

4 Mar 2010

Selective Disclosure, At Last?

Filed under: Anonymity,Crypto,Privacy,Security — Ben @ 5:34

Apparently it’s nearly five years since I first wrote about this and now it finally seems we might get to use selective disclosure.

I’m not going to re-iterate what selective disclosure is good for and apparently my friend Ben Hyde has spared me from the need to be cynical, though I think (I am not a lawyer!) he is wrong: the OSP applies to each individual specification – you are not required to use them in the context of each other.

So, for now, I will just celebrate the fact that Microsoft has finally made good on its promise to open up the technology, including BSD-licensed code. Though I guess I will have to inject one note of cynicism: a quick glance at the specification (you can get it here) suggests that they have only opened up the most basic use of the technology: the ability to assert a subset of the signed claims. There’s a lot more there. I hope they plan to open that up, too (how long will we have to wait, though?).

8 Dec 2009

Encryption Is Not Anonymisation

Filed under: Anonymity,Privacy — Ben @ 19:45

I was surprised to see

the encrypted (and thus anonymised) customer identity

in Richard Clayton’s analysis of Detica.

As we know from the AOL and Netflix fiascos, this is far from the truth. If you want anonymity, you also need unlinkability. But I’ve said that before.
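A toy shows why (illustrative only, with hashing standing in for any deterministic encryption of the identity):

```python
# Hashing (or deterministically encrypting) a customer ID hides the name
# but maps it to the same pseudonym every time, so all of one customer's
# records remain linked - the property the AOL and Netflix
# de-anonymisations exploited.
import hashlib

def pseudonymise(customer_id):
    return hashlib.sha256(customer_id.encode()).hexdigest()

records = [("alice@example.com", "visited site A"),
           ("alice@example.com", "visited site B"),
           ("bob@example.com",   "visited site A")]
masked = [(pseudonymise(who), what) for who, what in records]

# The name is gone, but Alice's two rows still link to each other:
assert masked[0][0] == masked[1][0]
assert masked[0][0] != masked[2][0]
```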

1 Feb 2009

A Good Use of the TPM?

Filed under: Anonymity,Privacy,Security — Ben @ 20:33

Back when the TPM was called Palladium I made myself unpopular in some circles by pointing out that there were good uses for it, too, such as protecting my servers from attackers.

Whether that is practical is still an interesting question – it’s a very big step from a cheap device that does some cunning crypto to a software stack that can reliably attest to what is running (which is probably all that has saved us from the more evil uses of the TPM) – but at a recent get-together for privacy and anonymity researchers that George Danezis and I ran, Mark Ryan presented an interesting use case.

He proposes using the TPM to hold sensitive data such that the guy holding it can read it – but if he does, then it becomes apparent to the person who gave him the data. Or, the holder can choose to “give the data back” by demonstrably destroying his own ability to read it.
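In outline (my sketch of the idea as I understood it, not Mark’s actual protocol, with a Python object standing in for the TPM’s hardware guarantees):

```python
# Reading the held data is possible, but leaves evidence the giver can
# later inspect; alternatively the holder can demonstrably destroy his
# own ability to read it ("give the data back").

class EvidentStore:
    def __init__(self, secret):
        self._secret = secret
        self.read_evident = False          # inspectable by the data's giver

    def read(self):
        if self._secret is None:
            raise ValueError("ability to read was destroyed")
        self.read_evident = True           # reading cannot be hidden
        return self._secret

    def give_back(self):
        self._secret = None                # demonstrably destroy access

store = EvidentStore("Oyster card records")
assert not store.read_evident
store.read()
assert store.read_evident                  # the giver can tell it was read
```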

Why would this be useful? Well, consider MI5’s plan to trawl through the Oyster card records. Assuming that government fails to realise that this kind of thing is heading us towards a police state, wouldn’t it be nice if we could check afterwards that they have behaved themselves and only accessed data that they actually needed to access? This kind of scheme is a step towards having that kind of assurance.

18 Dec 2008

Microsoft Show a Complete Lack of Respect for Privacy

Filed under: Anonymity,Privacy — Ben @ 20:24

I am astonished to read that Microsoft suddenly removed anonymity from blog posts. Retroactively. WTF? That’s just crazy, not to mention rude. I can’t begin to imagine who got outed by this and how annoyed they must be.

I also wonder whether, when they were pretending that posts were anonymous, they revealed that they were actually tracking who wrote them all? And finally I wonder, given the litigious nature of their homeland, who is going to sue them first?

3 Dec 2008

Podcast With Dave Birch

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 16:07

A few weeks ago, Dave Birch of Consult Hyperion interviewed me about Digital Identity. Because he weirdly doesn’t give a way to link to an individual podcast, here’s the M4a (whatever that is) and the MP3.

This was the first podcast I’ve done that I actually dared listen to. I think it came out pretty well.

17 Nov 2008

Identification Is Not Security

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 16:50

Kim writes about minimal disclosure. Funnily enough my wife, Camilla, spontaneously explained minimal disclosure to me a couple of nights ago. She was incensed that she ended up having to “prove” who she was in order to pay a bill over the phone.

First of all, they asked her for her password. Of course, she has no idea what her password might be with this particular company, so their suggestion was she guess. Camilla surprised me by telling me that she had, of course, declined to guess, because by guessing she would be revealing all her passwords that she might use elsewhere. So, they then resorted to the usual stupidity: mother’s maiden name, postcode, phone number and so forth. Camilla said she was happy to provide that information because she didn’t feel it was in any way secret – which, of course, means it doesn’t really authenticate her, either.

Anyway, her point was that in order to pay a bill she really shouldn’t have to authenticate to the payee – what do they care who pays the money, so long as it gets paid? In fact, really, we want the authentication to be the other way round – the payee should prove to her that they are really the payee. It would also be nice if they provided some level of assurance that she is paying the right bill. But they really don’t need to have any clue who she is, so long as she can hand over money somehow (which might, of course, include authenticating somehow to some money-handling middleman).

But what seems to be happening now is that everyone is using identity as a proxy for security. If we know who you are, then everything else springs from that.

Now, if what you want to do is to determine whether someone is authorised to do something, then certainly this is an approach that works. I find out who you are, then I look you up in my Big Table of Everything Everyone Is Allowed To Do, and I’m done. However, and now I finally circle back to Kim’s post, for many, if not most, purposes, identification is far more than is really needed. For example, Equifax just launched the Over 18 I-Card. I hope Equifax got this right and issued a card that doesn’t reveal anything else about you – but even if they didn’t, clearly it could be done – and clearly there’s value in proving you’re over 18, and therefore authorised to do some things you might not otherwise be able to do. Though I’d note that I am not over 18 in Equifax’ view because I do not have an SSN!

Anyway, current deficiencies aside, this is a great example of where minimal disclosure works better than identification – rather than everyone having a lookup table containing everyone in the world and whether they are over 18, someone who has the information anyway does the lookup once and then signs the statement “yep, the bearer is over 18”.
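That flow can be sketched with textbook RSA standing in for a real credential scheme (toy key sizes, and a real scheme such as Brands’ would also unlink the bearer across shows):

```python
# Whoever already knows the date of birth checks it once and signs the
# bare statement; relying parties verify the signature and learn nothing
# else about the bearer.
import hashlib

p, q, e = 104729, 1299709, 65537   # toy issuer key: illustrative, insecure
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def issue(statement):
    digest = int.from_bytes(hashlib.sha256(statement.encode()).digest(), "big") % n
    return pow(digest, d, n)

def verify(statement, sig):
    digest = int.from_bytes(hashlib.sha256(statement.encode()).digest(), "big") % n
    return pow(sig, e, n) == digest

cred = issue("the bearer is over 18")
assert verify("the bearer is over 18", cred)
assert not verify("the bearer is over 21", cred)   # cannot be repurposed
```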

But in many other cases identification doesn’t work at all – after all, the premise of the ID card is that it is supposed to improve our security against terrorists. But it’s pretty obvious that identifying people really isn’t going to help – you can work that out just by thinking about it, but even more importantly, in several recent terrorist attacks everyone has been very thoroughly identified but it hasn’t helped one bit.

And in the case of my wife trying to pay a bill, identification was completely without purpose. Yet everyone wants to do it. As Kim says, we really need to rethink the world in terms of minimal disclosure – and as I show above, sometimes this is actually the easiest way to think about it – my one area of disagreement is that we should not call this “identity” or even “contextual identity”. We need a term that makes it clear it has nothing to do with identification. I prefer to think in terms of “proof of entitlement” or “proof of authority” – but those don’t exactly roll off the tongue … ideas?

24 Jun 2008

Information Card Foundation Launched

Yet another industry alliance launches today: the Information Card Foundation (yes, I know that’s currently a holding page: as always, the Americans think June 24th starts when they wake up).

I have agreed to be a “Community Steering Member”, which means I sit on the board and get a vote on what the ICF does. Weirdly, I am also representing Google on the ICF board. I guess I brought that on myself.

I am not super-happy with the ICF’s IPR policy, though it is slightly better than the OpenID Foundation’s. I had hoped to get that fixed before launch, but there are only so many legal reviews the various founders could put up with at short notice, so I will have to continue to tinker post-launch.

It is also far from clear how sincere Microsoft are about all this. Will they behave, or will they be up to their usual shenanigans? We shall see (though the adoption of a fantastically weak IPR policy is not the best of starts)! And on that note, I still wait for any sign of movement at all on the technology Microsoft acquired from Credentica – which they have kinda, sorta, maybe committed to making generally available. This is key, IMO, to the next generation of identity management systems and will only flourish if people can freely experiment with it. So what are they waiting for?

(More news reports than you can shake a stick at.)

23 May 2008

Preprint: (Under)mining Privacy in Social Networks

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 15:11

Actually, I’m not sure if this one ends up in print or not. But anyway, I think its content is obvious from the title.

My colleagues Monica Chew and Dirk Balfanz did all the hard work on this paper.

12 May 2008

The World Without “Identity” or “Federation” is Already Here

Filed under: Anonymity,Identity Management,Privacy,Security — Ben @ 12:24

My friend Alec Muffett thinks we should do away with “Big I” Identity. I’m all for that … but Alec seems to be quite confused.

Firstly, his central point, that all modern electronic identity requires the involvement of third parties, is just plain wrong. OpenID, which he doesn’t mention, is all about self-asserted identity – I put stuff on webpages I own and that’s my identity. Cardspace, to the extent it is used at all, is mostly used with self-signed certificates – I issue a new one for each site I want to log in to, and each time I visit that site I prove again that I own the corresponding private key. And, indeed, this is a pretty general theme through the “user-centric” identity community.

Secondly, the idea that you can get away with no third party involvement is just unrealistic. If everyone were honest, then sure, why go beyond self-assertion? But everyone is not. How do we deal with bad actors? Alec starts off down that path himself, with his motorcycling example: obviously conducting a driving test on the spot does not scale well – when I took my test, it took around 40 minutes to cover all the aspects considered necessary to establish sufficient skill, and I’d hesitate to argue that it could be reduced. The test used to be much shorter, and the price we paid was a very high death rate amongst young motorcyclists; stronger rules have made big inroads into that statistic. It is not realistic to expect either me or the police to spend 40 minutes establishing my competence every time it comes into question. Alec appears to be recognising this problem by suggesting that the officer might instead rely on the word of my local bike club. But this has two problems: firstly, I am now relying on a third party (the club) to certify me, which is exactly counter to Alec’s stated desires, and secondly, how does one deal with clubs whose only purpose is to certify people who actually should not be allowed to drive (because they’re incompetent or dangerous, for example)?

The usual answer one will get at this point from those who have not worked their way through the issues yet is “aha, but we don’t need a central authority to fix this problem, instead we can rely on some kind of reputation system”. The trouble is no-one has figured out how you build a reputation system in cyberspace (and perhaps in meatspace, too) that is not easily subverted by people creating networks of “fake” identities purely in order to boost their own reputations – at least, not without some kind of central authority attesting to identity.
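The subversion is depressingly easy to demonstrate (a deliberately naive scoring rule, for illustration):

```python
# A reputation score that merely counts distinct endorsers is inflated
# arbitrarily by a network of fake identities endorsing their creator.

def reputation(user, endorsements):
    """Naive score: how many distinct identities endorse `user`."""
    return len({voucher for voucher, target in endorsements if target == user})

honest = [("alice", "bob"), ("carol", "bob")]
sybils = [("sock-puppet-%d" % i, "mallory") for i in range(100)]

assert reputation("bob", honest + sybils) == 2
assert reputation("mallory", honest + sybils) == 100   # outranks everyone honest
```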

Yet another issue that has to be faced is what to do about negative attributes (e.g. “this guy is a bad risk, don’t lend him money because he never pays it back”). No-one is going to willingly make those available to others. Once more, we end up having to invoke some kind of authority.

Of course, there are many cases where self-assertion is perfectly fine, so I have no argument with Alec there. And yes, there is a school of thought that says any involvement with self-issued stuff is a ridiculous idea, but you mostly run into that amongst policy people, who like to think that we’re all too stupid to look after ourselves, and corporate types who love silos (we find a lot of those in the Liberty Alliance and the ITU and such-like places, in my experience).

But the bottom line is that a) what he wants is insufficient to completely deal with the problems of identity and reputation and b) it is nothing that plenty of us haven’t been saying (and doing) all along – at least where it works.

Once you’ve figured that out, you realise how wrong

I am also here not going to get into the weirdness of Identity wherein the goal is to centralise your personal information to make management of it convenient, and then expend phenomenal amounts of brainpower implementing limited-disclosure mechanisms and other mathematica, in order to re-constrain the amount of information that is shared; e.g. “prove you are old enough to buy booze without disclosing how old you are”. Why consolidate the information in the first place, if it’s gonna be more work to keep it secret henceforth? It’s enough to drive you round the twist, but it’ll have to wait for a separate rant.

is. Consolidation is not what makes it necessary to use selective disclosure – that is driven by the need for the involvement of third parties. Obviously I can consolidate self-asserted attributes without any need for selective disclosure – if I want to prove something new or less revealing, I just create a new attribute. Whether it’s stored “centrally” (what alternative does Alec envision, I wonder?) or not is entirely orthogonal to the question.

Incidentally, the wit that said “Something you had, Something you forgot, Something you were” was the marvellous Nick Mathewson, one of the guys behind the Tor project. Also, Alec, if you think identity theft is fraud (as I do), then I recommend not using the misleading term preferred by those who want to shift blame, and call it “identity fraud” – in fraud, the victim is the person who believes the impersonator, not the person impersonated. Of course the banks would very much like you to believe that identity fraud is your problem, but it is not: it is theirs.

26 Apr 2008

Do We Need Credentica?

Filed under: Anonymity,Crypto,Open Source,Privacy,Security — Ben @ 20:22

I read that IBM have finally contributed Idemix to Higgins.

But … I am puzzled. Everyone knows that the reason Idemix has not been contributed sooner is because it infringes the Credentica patents. At least, so says Stefan – I wouldn’t know, I haven’t checked. But it seems plausible that at least IBM think that’s true.

So, what’s changed? Have IBM decided that Idemix does not infringe? Or did Microsoft let them publish? Or what?

If it’s the former, then do others agree? And if it’s the latter, then in what sense is this open source? If IBM have some kind of special permission with regard to the patents, that is of no assistance to the rest of us.

It seems to me that someone needs to do some explaining. But if the outcome is that Idemix really is open source, then what is the relevance of Credentica?

Incidentally, I wanted to take a look at what it is that IBM have actually released, but there doesn’t seem to be anything there.
