In a previous entry I suggested creating an AES-style competition for automated voting systems. The idea is to throw the design open to the world's expertise on complex systems, including universities, foundations and corporates, and manage the process in an open fashion to bring out the best result.
Several people asked, "Who would judge a contest for voting machines?" At first blush I thought this wasn't an issue, but others clearly think it is. Why is that? I wonder whether the AES experience surfaced more good stuff than is superficially apparent.
If you look at the AES competition, NIST/NSA decided who would be the winner. James points out in comments that the NSA is indeed competent to do this, but we also know that they are biased by their mission. So why did we trust them to judge honestly?
In this case, what happened is that NIST started off with an open round, which attracted 15 candidate submissions, and then whittled that field down to 5 finalists for a second round. Those 5 then battled it out under increased scrutiny. Then, on the basis of the open scrutiny, and some other not-so-open scrutiny, the NSA chose Rijndael to be the future AES standard.
Let's hypothesize that the NSA team had a solid incentive to choose the worst algorithm, and were minded to do that. What stopped them doing it?
Several things. Firstly, there were two rounds, and all the weaker algorithms were cleaned out in the first round. All five algorithms in the second round were more or less "good enough," so the NSA didn't have any easy material to work with. Secondly, they were up against the open scrutiny of the community, so any tricky choice was likely to cause muttering, which could spread mistrust in the future, and standards are susceptible to mistrust. Thirdly, by running a first round, fairly whittling the algorithms down on quality, and then leading into the second round, NIST created an expectation. Positively, this encouraged everyone to get involved, including those who would normally dismiss the experiment as just another government fraud waiting to reveal itself. At the more aggressive extreme, it created a precedent of fair dealing, which left the competition open to legal attack if the later rounds were not run as fairly.
These mechanisms worked hand in hand. Probably no one of them alone was sufficient to push the NSA into our camp, but together they locked down the choices. Once that was done, the NSA saw its natural incentives to cheat neutered by future costs and open scrutiny. As it could no longer justify the risk of cheating, its best strategy was to do the best job, in return for reputation.
The mechanism design of the competition created the incentives for the judge to vote the way we wanted -- for the best algorithm -- even if it didn't want to.
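To make that incentive argument concrete, here is a toy, risk-neutral payoff model. The framing and every number in it are my own invention, not anything from the actual competition, but they show how open scrutiny plus a reputation cost makes honest judging the better strategy:

```python
# Toy model: expected payoff to a judge weighing a deliberately weakened pick.
# All parameters are invented for illustration only.

def expected_payoff(cheat: bool,
                    backdoor_value: float = 10.0,
                    detection_prob: float = 0.9,    # two rounds of open review
                    reputation_cost: float = 100.0,
                    honest_reputation: float = 5.0) -> float:
    """Risk-neutral expected payoff for the judge."""
    if not cheat:
        return honest_reputation
    # Cheating only pays off if the open scrutiny misses it.
    return (1 - detection_prob) * backdoor_value - detection_prob * reputation_cost

print("honest judging:", expected_payoff(cheat=False))  # 5
print("cheating:      ", expected_payoff(cheat=True))   # about -89: a bad bet
```

Tweak the parameters however you like; as long as detection is likely and the reputation cost is large, honesty dominates.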
So, we can turn the original question around. Instead of asking who would judge such a competition, design a mechanism such that we don't care who judges it. Make it like the AES competition, where even if it had wanted to cheat, the NSA's best strategy was to choose the best. Set yourself the challenge: we get the right result even when the judge is our worst enemy.
Have a read of this. Quick summary: Altimo thinks Telenor may be using espionage tactics to cause problems.
Altimo alleges the interception of emails and tapping of telephone calls, surveillance of executives and shareholders, and payments to journalists to write damaging articles.
So instead of getting its knickers in a knot (court case or whatever) Altimo simply writes to Telenor and suggests that this is going on, and asks for confirmation that they know nothing about it, do not endorse it, etc.
[Image: "Who ya bluffin?"]
...Andrei Kosogov, Altimo's chairman, wrote an open letter to Telenor's chairman, Harald Norvik, asking him to explain what Telenor's role has been and "what activity your agents have directed at Altimo". He said that he was "reluctant to believe" that Mr Norvik or his colleagues would have sanctioned any of the activities complained of..... Mr Kosogov said he first wrote to Telenor in October asking if the company knew of the alleged campaign, but received no reply. In yesterday's letter to Mr Norvik, Mr Kosogov writes: "We would welcome your reassurance that Telenor's future dealings with Altimo will be conducted within a legal and ethical framework."
Think about it: this open disclosure locks down Telenor completely. It draws a firm line in time, and also gives Telenor a face-saving way to back out of any "exuberance" it might previously have "endorsed." If Telenor does not take this chance to stop the activity, it is negligent. If it is later found out that Telenor's board of directors knew, then it becomes a slam-dunk in court. And if Telenor is indeed innocent of any action, it engages them in the fight to chase the perpetrator. The bluff is called, as it were.
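To put that in toy form (my own sketch, with invented numbers): once the board is formally on notice, any later exposure costs far more, so the expected value of letting a covert campaign run flips from positive to sharply negative.

```python
# Toy sketch of why the open letter matters; all numbers are invented.

def expected_value_of_continuing(on_notice: bool,
                                 campaign_gain: float = 10.0,
                                 exposure_prob: float = 0.3,
                                 damages_if_deniable: float = 20.0,
                                 damages_if_on_notice: float = 200.0) -> float:
    """Expected value of letting the alleged campaign run, with or without the letter."""
    damages = damages_if_on_notice if on_notice else damages_if_deniable
    return campaign_gain - exposure_prob * damages

print("before the letter:", expected_value_of_continuing(on_notice=False))  # 10 - 0.3*20  = 4
print("after the letter: ", expected_value_of_continuing(on_notice=True))   # 10 - 0.3*200 = -50
```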
This is good use of game theory. Note also that the Advisory Board of Altimo includes some high-powered people:
Evidence of an alleged campaign was contained in documents sent to each member of Altimo's advisory board some time before October. The board is chaired by ex-GCHQ director Sir Francis Richards, and includes Lord Hurd, a former UK Foreign Secretary, and Sir Julian Horn-Smith, a founder of Vodafone.
We could speculate that those players -- the spooks and mandarins -- know how powerful open disclosure is in locking down the options of nefarious players. A salutary lesson!
One of the things that I've gradually come to believe is that secrecy in anything is more likely to be a danger to you and yours than a help. The reasons for this are many, but they all point the same way:
There are no good reasons for secrecy, only less bad ones. If we accept that proposition, and start unwinding the secrecy so common in organisations today, there appear to be two questions: how far do we open up, and how do we do it?
How far to open up appears to be something each person and organisation must settle for itself, and perhaps the easiest thing to do is to look at some examples. I've seen three in recent days which I'd like to share.
First, the intelligence agencies: in the USA, they are now winding back the concept of "need-to-know" and replacing it with "responsibility-to-share".
Implementing Intellipedia Within a "Need to Know" Culture
Sean Dennehy, Chief of Intellipedia Development, Directorate of Intelligence, U.S. Central Intelligence Agency
Sean will share the technical and cultural changes underway at the CIA involving the adoption of wikis, blogs, and social bookmarking tools. In 2005, Dr. Calvin Andrus published The Wiki and The Blog: Toward a Complex Adaptive Intelligence Community. Three years later, a vibrant and rapidly growing community has transformed how the CIA aggregates, communicates, and organizes intelligence information. These tools are being used to improve information sharing across the U.S. intelligence community by moving information out of traditional channels.
The way they are doing this is to run a community-wide suite of social network tools: blogs, wikis, YouTube clones, etc. Access is controlled at the session level by username/password over TLS, and at the person level by sponsoring. The latter means that even contractors can be sponsored in to access the tools, and all sorts of people in the field can contribute directly to the collection of information.
The big problem with this switch is that intelligence information is controlled not only by "need to know" but also in horizontal layers. For the sake of this discussion, there are three: TOP SECRET / SECRET / UNCLASSIFIED-CONTROLLED. The intel community's solution is to run three separate networks in parallel, one for each layer, and to control access to each of these. So in effect, contractors might easily be sponsored into the lowest level, but less likely into the others.
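As a rough sketch of that access model, as I understood it from the talk (the names and structure here are my own invention, not the real system):

```python
from dataclasses import dataclass, field

# The three horizontal layers, lowest first; each is its own parallel network.
LAYERS = ["UNCLASSIFIED-CONTROLLED", "SECRET", "TOP SECRET"]

@dataclass
class Person:
    name: str
    sponsored_into: set = field(default_factory=set)  # layers a sponsor has vouched for

def sponsor(person: Person, layer: str) -> None:
    """A cleared insider vouches for someone (e.g. a contractor) on one layer."""
    if layer not in LAYERS:
        raise ValueError(f"unknown layer: {layer}")
    person.sponsored_into.add(layer)

def can_contribute(person: Person, layer: str) -> bool:
    # Session-level auth (username/password over TLS) is assumed to have
    # already happened; this models only the person-level check.
    return layer in person.sponsored_into

contractor = Person("field contractor")
sponsor(contractor, "UNCLASSIFIED-CONTROLLED")
print(can_contribute(contractor, "UNCLASSIFIED-CONTROLLED"))  # True
print(can_contribute(contractor, "TOP SECRET"))               # False
```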
What happens in practice? The best coverage is found on the network with the largest number of people, which of course is the lowest, the UNCLASSIFIED-CONTROLLED network. So, regardless of the intention, most of the good stuff is found in there, and where higher-layer material adds value, little pointers are embedded showing where to find it.
In a nutshell, the result is that anyone who is "in" can see most everything, and modify everything. Anyone who is "out" cannot. Hence, a spectacular success if the mission was to share; it seems so obvious that one wonders why they didn't do it before.
As it turns out, the second example is quite similar: Google. A couple of chaps from there explained to me around the dinner table that the process is basically this: everyone inside Google can talk about any project to any other insider, but one should not talk about projects to outsiders (presumably there are some exceptions). It seems that SEC (the US Securities and Exchange Commission) provisions for a public corporation lead to some sensitivity, and rather than try to stop the internal discussion, Google chose to make it very simple and draw a boundary at the obvious place.
The third example is CAcert. In order to deal with various issues, the Board chose to go totally open last year. This means that all the decisions, all the strategies and all the processes should be published and discussable by all. Some things aren't out there, but they should be; if an exception is needed, it must be argued and written into policy.
The curious thing is why CAcert did not choose to set a boundary at some point, as Google and the intelligence agencies did. Unlike Google, it has no regulator to say "you must not reveal inside info of financial import." Unlike the CIA, CAcert is not engaged in a war with an enemy where the bad guys might be tipped off to some secret mission.
However, CAcert does have other problems, and it has one that tips the balance towards total disclosure: the presence of valuable and tempting privacy assets. These seem to attract a steady stream of interested parties, and some of those parties are after private gain. I have now counted 4 attempts at this in my time related to CAcert, and although each had its interesting differences, each in its own way sought to employ CAcert's natural secrecy to its own advantage. From a commercial perspective, this was fairly obvious: the interested parties sought to keep their negotiations confidential, and this allowed them to pursue the sales process and sell the insiders without wiser heads putting a stop to it. To the extent that various agencies have incentives to insert different agendas into the inner core, the CA needs a way to manage that process.
How to defend against that? Well, one way is to let the enemy of your enemy know whom you are talking to. Let's take a benign example which happened (sort of): a USB security stick manufacturer might want to ship extra stuff, like CAcert's roots, on the stick. Does he want the negotiations to be private because other competitors might deal for equal access, or does he want them private because wiser heads will figure out that he is really after CAcert's customer list? CAcert might care more about one than the other, but both are threats to someone. As the managers aren't smart enough to see every angle, every time, they need help. One defence is many eyeballs, and this is something that CAcert does have available to it. Perhaps if sufficient info about the emerging deal is published, the rest of the community can figure it out. Perhaps, if the enemy's enemy notices what is going on, he can explain the tactic.
A more poignant example might be someone seeking to pervert the systems and get some false certificates issued. In order to deal with those, CAcert's evolving Security Manual says all conflicts of interest have to be declared broadly and in advance, so that we can all mull over them and watch for how these might be a problem. This serves up a dilemma to the secret attacker: either keep private and lie, and risk exposure later on, or tell all upfront and lose the element of surprise.
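A minimal sketch of what such a declare-in-advance register might look like, assuming a simple public list (the record format and the names are my invention, not CAcert's actual Security Manual):

```python
import datetime

# The register is published openly, e.g. on a public wiki page.
coi_register = []

def declare_conflict(person: str, interest: str, note: str = "") -> dict:
    """Record a conflict of interest broadly and in advance, for all to mull over."""
    entry = {
        "person": person,
        "interest": interest,
        "declared_on": datetime.date.today().isoformat(),
        "note": note,
    }
    coi_register.append(entry)
    return entry

def was_declared(person: str, interest: str) -> bool:
    """Later on, anyone can check whether a conflict was on the table up front."""
    return any(e["person"] == person and e["interest"] == interest
               for e in coi_register)

declare_conflict("alice", "consults for a USB security stick vendor")
print(was_declared("alice", "consults for a USB security stick vendor"))  # True
print(was_declared("bob", "acts for a rival CA"))                         # False
```

The point of the public list is exactly the dilemma described above: declare up front and be watched, or stay quiet and risk the record showing later that you hid it.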
This method, if adopted, would involve sacrifices. It means that any agency that is looking to impact the systems is encouraged to open up, and this really puts the finger on them: are they trying to help us or themselves? Also, it means that all people in critical roles might have to sacrifice their privacy. This latter sacrifice, if made, is to preserve the privacy of others, and it is the greater for it.
I'm at LISA and just listened to this one:
The State of Electronic Voting, 2008
David Wagner, University of California, Berkeley

As electronic voting has seen a surge in growth in the U.S. in recent years, controversy has swirled. Are these systems trustworthy? Can we rely upon them to count our votes? In this talk, I will discuss what is known and what isn't. I will survey some of the most important developments and analyses of voting systems, including the groundbreaking top-to-bottom review commissioned by California Secretary of State Debra Bowen last year. I will take stock of where we stand today, the outlook for the future, and the role that technologists can play in improving elections.
The one-line summary seems to be that voting machines are in a mess, and while there are brave efforts (California's review cited), there are no easy answers. It's a mess. This accords with my own prejudices: it looks like it should be a mess, by architectural requirements. My advice is to keep away, but today I didn't follow that advice, and have a suggestion!
One thing that is frequently suggested is that if the Internet community can build an Internet, surely we should be able to build a secure voting system. We can do big secure systems on the net, right? The counter-examples here are IPSec, DNSSec and S/MIME: surely we should have been able to get those secure systems into widespread use, but we seem to have failed at every turn.
One reason why these things didn't work out is that the IETF committees who put them together got bogged down in details, as different stakeholders fought over different areas. The result is that familiar camel known as a secure but unusable architecture. Committees are at their best when they are retro-standardising an already successful design, such as SSL, because then they cannot each drag the design into their own areas; they are forced to focus on the existing, successful design.
Another suggestion is to use NIST or the NSA (the same thing in this context) to design the system for us. But this only works when we don't really care so much about the results. With encryption algorithms, for example, we the public get very suspicious about funny S-boxes and the like, and skepticism dogged the famous DES algorithm as well as Skipjack and the cryptophones. For hash designs, we are less fussed, because in application space much less can go wrong if there is a secret way of futzing the hash.
Now, in the late 1990s, NIST took these issues seriously and tried a novel path. It created a design competition for a new encryption algorithm, asking anyone and everyone to propose. Any team around the world could submit an algorithm, and the final winner came from Belgium. As well, all the teams were encouraged to review the others' designs and knock themselves out with criticisms. (By way of disclosure, Raif in my old Cryptix group created the Java framework for the AES proposals. The process was so open that they took in help from crazy net hackers like ourselves.)
This worked! People mutter about AES being a bit odd, but everyone admires the open design process, the free and open scrutiny, and the way the worldwide cryptography community rose to the challenge.
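By way of illustration (my addition, nothing to do with the original Cryptix framework): the competition's winner, Rijndael, is now a routine library call. A minimal sketch, assuming the third-party Python `cryptography` package:

```python
# pip install cryptography  -- assumed third-party dependency
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # an AES-256 key
nonce = os.urandom(12)                      # 96-bit nonce; never reuse with the same key
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"the winner from Belgium, in everyday use", None)
assert aesgcm.decrypt(nonce, ciphertext, None).startswith(b"the winner")
```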
Why can't we do that with voting machines? All the elements seem to rhyme: stakeholders who will bog it down, conspiracy theories in abundance, desperate need of the people to see a secure outcome, and lots and lots of students and academics who love a big design challenge. NIST seems to be the ring-in to manage the process, and the result could be a standard design, which avoids the tricky issue of "mandating use".
Just a thought! I don't know whether this will work or not, but I can't see why not.