The DJB & Tanja Lange team out of Technische Universiteit Eindhoven, Netherlands, have produced a set of curves to challenge the notion of verifiable randomness. Specifically, they seem to be aiming at the Brainpool curves, which had a stab at producing a new set of curves for elliptic curve cryptography (ECC).
Now, please note: if you don't understand ECC then don't worry, neither do I. But we do get to black-box it like any *useful technology to society*, and in that black-boxing we might ask, nay, we must ask the question: were the seeds fairly chosen? Or, as Certicom claimed:
Verifiably random parameters offer some additional conservative features. These parameters are chosen from a seed using SHA-1 as specified in ANSI X9.62 [X9.62]. This process ensures that the parameters cannot be predetermined. The parameters are therefore extremely unlikely to be susceptible to future special-purpose attacks, and no trapdoors can have been placed in the parameters during their generation. —Certicom SEC 2 2.0 (2010)
Which claim the team set out to challenge:
The name "BADA55" (pronounced "bad-ass") is explained by the appearance of the string BADA55 near the beginning of each BADA55 curve. This string is underlined in the Sage scripts above.
We actually chose this string in advance and then manipulated the curve choices to produce this string. The BADA55-VR curves illustrate the fact that, as pointed out by Scott in 1999, "verifiably random" curves do not stop the attacker from generating a curve with a one-in-a-million weakness. The BADA55-VPR curves illustrate the fact that "verifiably pseudorandom" curves with "systematic" seeds generated from "nothing-up-my-sleeve numbers" also do not stop the attacker from generating a curve with a one-in-a-million weakness.
We do not assert that the presence of the string BADA55 is a weakness. However, with a similar computation we could have selected a one-in-a-million weakness and produced curves with that weakness. Suppose, for example, that twist attacks were not publicly known but were known to us; a random 224-bit curve has one chance in a million of being extremely twist-insecure (attack cost below 2^30), and we could have generated a curve with this property, while pretending that this was a "verifiable" curve generated for maximum security.
Which highlights two problems we have with all prior sets of curves: were the curves (seeds) chosen at random, and/or were they chosen to exploit weaknesses we did not know about? The crux here is that if someone does know of a weakness, they can re-run their "verifiably random" process until they get the results they want.
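To see how cheap that re-running is, here is a hypothetical sketch in Python (not the team's actual Sage scripts): the X9.62-style seed-to-curve derivation is stood in for by a bare SHA-1 hash, and we grind seeds until a chosen string appears. The target is shortened to "bada" so the loop finishes quickly; the real scripts ground out the full "BADA55".

```python
import hashlib

# Hypothetical sketch of "seed grinding": re-run the "verifiably
# random" derivation until the output has a property of the
# generator's choosing. Here the property is a vanity hex string,
# but it could equally be a secret one-in-a-million weakness.
TARGET = "bada"  # shortened so the search finishes in ~2000 tries


def derive_parameter(seed: bytes) -> str:
    """Stand-in for the X9.62-style seed-to-curve step, which at
    its core is just a SHA-1 hash of the published seed."""
    return hashlib.sha1(seed).hexdigest()


def grind() -> tuple[bytes, str]:
    counter = 0
    while True:
        seed = counter.to_bytes(8, "big")
        candidate = derive_parameter(seed)
        if TARGET in candidate:
            # Publish this seed: anyone re-hashing it "verifies" it,
            # but the check says nothing about HOW it was chosen.
            return seed, candidate
        counter += 1


seed, param = grind()
print(seed.hex(), param)
```

The point of the sketch is that the published seed passes the "verification" perfectly; the hash computation checks out, yet the seed was anything but random.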
Is this realistic? Snowden says it is. The choosers of the main popular set of curves were the NSA & NIST, and as they ain't saying much other than to deny anything they've already been caught with, we have enough evidence to damn the NIST curves.
This is good stuff, BADA55 as a process highlights this very well. But:
We view the terminology "verifiably random" as deceptive. The claimed randomness (a uniform distribution) is not being verified; what is being verified is merely a hash computation. We similarly view the terminology "verifiably pseudorandom" and "nothing up my sleeves" as deceptive.
goes too far. They reproduced the process (presumably) and showed that it did not meet its own claimed standard, but they did not explore how to create a fair seed. We do know how to do this, and there is an entire business case for it: it is the root of a CA. Which gives us at least two answers.
In the CA industry the suggestion is that hard tech problems be outsourced to a thing called an HSM, or Hardware Security Module. This is a hardware device that is built and tested to strict standards to produce what we need. In this particular case, the generation of random numbers will be done in an HSM according to a NIST or equivalent standard, and tested according to their very harsh and expensive regimes.
That's the formal, popular, and safe answer, which most CAs use to pass audit. Except it creates a complicated, expensive process which can be perverted by the NSA & friends, as alleged by the Snowden revelations.
At CAcert we did something different. Because we knew that the HSM process was suspect enough to be unreliable, and there was no apparent way to mitigate this risk, we developed our own process. In short, this is what we do:
Wrap in some governance tricks such as reporting, observation, and construction of hardware & software on the spot with bog-standard components (for destruction later) and we have a complete process. Accepting the assumptions, this design ensures that the seed is random if at least one person has reliably delivered a good input.
Or so I claim: nothing up at least one person's sleeves means there isn't anything up our collective sleeve.
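The combining step behind that claim can be sketched as follows. This is an illustrative construction under my own assumptions, not CAcert's actual code: each participant supplies an input, and the seed is a hash over all of them, so the result is unpredictable as long as at least one input was honestly random.

```python
import hashlib

# Minimal sketch (not CAcert's actual code) of the "one honest
# contributor suffices" idea: hash every contribution together.
# If at least one input is unpredictable to the others, the
# combined seed is unpredictable to everyone. In a real ceremony
# the contributions would be committed to in advance, so the last
# contributor cannot grind the final result.
def combine(contributions: list[bytes]) -> bytes:
    h = hashlib.sha256()
    for c in contributions:
        h.update(len(c).to_bytes(4, "big"))  # length-prefix each input
        h.update(c)
    return h.digest()


# Hypothetical sample inputs: two predictable, one honestly random.
seed_a = combine([b"alice", b"bob", b"carol-random-0001"])
seed_b = combine([b"alice", b"bob", b"carol-random-0002"])
assert seed_a != seed_b  # one changed input changes the whole seed
```

The length-prefix on each contribution is there so that no two different sets of inputs can collapse into the same byte stream before hashing.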
Granted, there are limitations to this process. /Verifiability/ is a direct part of the process, but it is limited to being there, on the day. Thereafter, we are limited to trusting the reports of those who were there. Hence, it isn't a repeatable experiment in the scientific-method sense; for that we'd need a bit more work.
But quibbles aside about the precise semantics of verifiability, I claim this is good enough for the job. Or, it is as good as it gets. If you combine the Eindhoven process with the CAcert process, then you'll get a set of curves that are reliably and verifiably secure to known current standards.
As good as it gets? If you do that, we'll need a new name for a better, badder set of curves; sadly I can only think of 5A1A55 for Fat-Ass right now.