Open is a big word these days. It started out with open source: the progression from AT&T's distribution of Unix, through BSD, to the GPL. For computer source code, open works well, so long as you are writing the code anyway and can't figure out how to sell it: instead of keeping it secret, share the source and profit on the service.
Or something - the economic ramifications are quite interesting and deserve much wider attention (especially given our new Hayekian and Misesian perspectives, and the changing economics of digital property).
People have also applied open to other things. I apply it to Governance; there is a group working on words and music called Open Commons, and this blog is under one of their licences. People have even prepared legal cases in open forums. The list of experiments in Open This and Open That is quite long. I want to apply it to FC, and indeed we've published many papers and much source code without seeing much if any openness in FC at the project level to date. So it remains a big question: just how far does open go?
One of the things we have found is that open source helps security. People have often made too much of this - claiming that open source is necessary for security. No such rule applies: it certainly helps a lot, but so do many other things, and there are plenty of secure systems with closed source. Strangely, open also clashes with the process of fixing bugs. Clearly, if there is a bug in your source, and you know it, you want it fixed before the bad guys find out. Telling everyone that there is a bug might not be the smartest thing to do.
So security is more than source; it's a process. The security process involves many elements. Patching and bug fixes, of course - these are the things that non-security projects know about, and the things the press reports on (like those silly Symantec comments on how many security advisories each competitor has issued).
But there is more, much more, and these are the things that projects with a Security Goal have developed. One of them is a relatively open process. What this means is that decisions on security are taken openly - in open forums. Even though this is uncomfortable and noisy, the result is better because the interests of all are heard, including those of the users, who normally aren't adept enough to enter these highly technical debates. Hopefully, if people know the process is open, someone will step in to represent the users.
The problem with the alternative is "agenda capture" (or co-option?). If a project conducts secret negotiations over some security arrangement, then you can bet your bottom dollar that the participants want it secret because they are attempting some sort of coup. They are trying to achieve something that won't survive exposure to the disinfectant of open sunlight. It infringes on the interests of one group or another; if it didn't, there wouldn't be any reason to keep it secret.
So it was with sadness that I discovered that the Mozilla Foundation had entered the smoke-filled rooms of secret negotiations over security changes. These negotiations are apparently about the security User Interface. They involve some other browser manufacturers - Microsoft was mentioned - and some of the CAs - though Verisign has not been mentioned, that I have heard.
There is no doubt that Mozilla has walked into an agenda-capture process. It specifically excluded one CA, CAcert.org, for what appear to be competitive reasons. Microsoft enters these things frequently for the purposes of (a) knowing what people are up to, and (b) controlling them. (Nothing wrong with that, unless you aren't Microsoft.) At least one of the participants is in the throes of selling a product to the others, one that just happens to leave itself in control. The membership itself is secret, as are the minutes, etc, etc.
The rooms were filled with smoke a month or two back, and now people are reportedly beavering away on the results, which are again secret. Normally, cartel theory will tell you that this sort of approach won't work out positively, because of the economics of game theory (check out the Prisoner's Dilemma). But in this case there was "A Result", and that "Result" is now being used as a justification for not addressing other initiatives in phishing. We don't know what it was, but it exists, and it has been accepted, secretly, without formal process or proposal, by Mozilla.
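To see why game theory predicts that, here is a minimal sketch of the Prisoner's Dilemma with standard textbook payoffs (illustrative numbers only, nothing specific to this case): whichever move the other party makes, each participant's best reply is to defect, so the secret agreement is unstable even though mutual cooperation pays more.

```python
# Prisoner's Dilemma with standard textbook payoffs (illustrative only).
# payoff[(my_move, their_move)] = my payoff
payoff = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

for theirs in ("cooperate", "defect"):
    best = max(("cooperate", "defect"), key=lambda mine: payoff[(mine, theirs)])
    print(f"If the other party plays {theirs!r}, my best reply is {best!r}")

# Both lines print 'defect': a dominant strategy. Yet (defect, defect)
# pays 1 each against 3 each for (cooperate, cooperate), which is why
# cartel agreements of this sort tend to unravel.
```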
I have therefore departed the Mozilla scene. This is a road to disaster for them, and it is blatantly obvious that they haven't the business acumen to realise what they've been sucked into. As the security process is well and truly compromised at this point, there is no hope for my original objectives, which were to encourage anti-phishing solutions within Firefox.
Personally, I had thought that the notion of open source was enough to assume that an open process in security would follow, and a sterling effort by Frank Hecker seemed to support this. But I was wrong; Mozilla runs a closed security process, and even Frank's openly negotiated CA ascendancy protocol is stalled in closed session. The actual bug-fixing process is documented as closed where security issues are involved, and from that one small exception the entire process has closed and, to my mind, stalled (1, 2). The team is a closed shop (you have to apply; they have to ignore the application). Any security decisions are taken in secret forums that we haven't been able to identify, and the whole process flips mysteriously between the security team, the security group, and the group sometimes known as "staff". Oh, and in their minds security is synonymous with PKI, so anything and anyone that challenges the PKI model is rejected out of hand. Which is an issue for those who suggest PKI is at the heart of phishing...
So the security process is either closed or not present, which to my mind amounts to the same thing, because one becomes the other in due course. And this is another reason that security processes have to be open: to eliminate groupthink and stay alive, a process must defend itself and regenerate itself, out in the open, on a regular basis.
My time there has still been very educational. Coming from the high-security world of payments and then entering the medium-security world of the browser, I've realised just how much there is yet to learn about security. I have a developing paper on what it means to be a security project, and I've identified some 19 factors. Comparing and contrasting Security Goal projects like the BSDs and the OpenPGPs with Mozilla has teased these factors out slowly, over a year or so. The persistence of security myths has led me on a search, via Adam's security-signals suggestion, to Michael Spence's job market signalling theory and into a new model for Silver Bullets, another paper that is evolving (draft in the normal place).
But I have to face facts. I'm interested in security, and specifically in the security of the browsing process under its validated threat of phishing. If the Mozilla Foundation cannot produce more than an alpha effort in certificate identification over the 2.5 years I have been involved in what passes for their open security forum, that has to suggest that Mozilla is never going to meet new and evolving objectives in security.
Posted by iang at June 29, 2005 08:00 AM | TrackBack

Did you expect the genesis of Netscape to do anything else?
Posted by: Jimbo at June 29, 2005 10:26 AM

Hopefully we can do better in developing improved security technologies in the Jabber world...
Posted by: stpeter at June 29, 2005 12:23 PM

Ian, a couple of brief comments: I apologize for not participating on the Mozilla security newsgroups recently; I've been involved in some other Mozilla volunteer work (not security related), and that plus work/family time has prevented me from keeping up with the newsgroup. I see I missed a lot of (sometimes contentious) discussions, and regret that I wasn't able to comment at the time.
There are three separate issues here:
1. The CA certificate policy. That's basically held up short of final adoption simply because Mitchell Baker and other folks at the Foundation (but mainly Mitchell) haven't had time to do final review on it. I'll ping Mitchell again on this, and if she doesn't have time in the short term I'll likely just adopt the new policy as an "interim" policy and go forward on that basis for now. (I've done this "interim policy" thing once before, and I have no problem doing it again if I need to.)
(Also, I've been stalled on CA-related work in general due to other demands on my time, so that's something I need to work on when I get time in the next few weeks.)
2. The Mozilla process for disclosing and fixing security vulnerabilities. That's basically the same process today as has existed for the past few years. Yes, there's a closed group that deals with such issues, and you can read my comments elsewhere for why that is; the membership of that group is mainly people with a good track record of finding Mozilla vulnerabilities and cooperating with the project, Mozilla developers who can fix vulnerabilities, and Mozilla distributors (e.g., companies/organizations creating Linux distributions) that need to ship the fixed versions.
Almost all of the vulnerabilities with which the group deals have nothing to do with PKI, so I think your comment that "in their minds, security is synonymous with PKI" is misleading at best.
However, I think it is fair to say that the security group dealing with vulnerabilities is first and foremost concerned with reacting to vulnerabilities, narrowly defined. The group was not originally chartered to do forward-looking design and development of security-related features, including anti-phishing features, and does not function that way. I agree that the project does need more central coordination of such security-related work; not having a single person to do that is a problem, one the Mozilla Foundation needs to address. (And to my knowledge the Foundation knows it needs to address it; it's mainly a matter of finding the right person or persons for the job. And I should add that the person is almost certainly not going to be me.)
My personal opinion is that it is a mistake to think of the Mozilla security group (the group dealing with vulnerabilities) as the preferred forum for dealing with design and development of new security features. It has never filled that role in the past, and IMO will not do so in the future. It is true that a lot of the people in the group also do (or at least have done) development of new security features, but that's independent of the group itself.
The problem again is that the lines of communication regarding security design issues, and the ways in which design and development decisions are made, are not clear, and depend a lot on informal relationships among and with the developers involved in the different parts of Mozilla/Firefox/Thunderbird. There was and is no overall intent to do stuff in "closed rooms", but in practice it is certainly not clear to people outside the process how and where to influence it. Again, I agree that work needs to be done here to centralize and clarify things, both within the project itself and for those outside it.
3. The CA/browser vendor discussions. To clarify a couple of things: the CAs who initiated these discussions and who are involved in them are all traditional commercial CAs. I don't think CAcert.org was singled out for exclusion; the discussions exclude government-associated CAs, other non-profit CAs, indeed every CA that's not in the stated business of selling SSL certs to ecommerce businesses.
What is basically going on, as I see it, is that these traditional CAs are trying to make themselves relevant to the phishing problem, along the lines of what we've publicly discussed in the Mozilla newsgroups: make CA branding more prominent, have different levels of certificates (including certs with significantly more subscriber vetting than is done today), make distinctions between different types of certs in the browser UIs, and so on. There are conflicts between CAs as to exactly how to do that, and of course they need the browser vendors' assistance if such distinctions are actually to end up in the browser UI.
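To make the idea concrete, here is a purely illustrative sketch of how a browser might map certificate vetting levels to distinct UI treatments. All OIDs, level names, and treatments below are hypothetical; they are not anything agreed in those discussions, nor any real CA's policy.

```python
from enum import Enum

class Vetting(Enum):
    DOMAIN_ONLY = 1      # automated domain-control check only
    ORGANISATION = 2     # business identity verified
    HIGH_ASSURANCE = 3   # significantly more subscriber vetting

# Hypothetical mapping from certificate policy OID to vetting level.
POLICY_TO_VETTING = {
    "1.3.6.1.4.1.99999.1": Vetting.DOMAIN_ONLY,
    "1.3.6.1.4.1.99999.2": Vetting.ORGANISATION,
    "1.3.6.1.4.1.99999.3": Vetting.HIGH_ASSURANCE,
}

UI_TREATMENT = {
    Vetting.DOMAIN_ONLY:    "padlock only",
    Vetting.ORGANISATION:   "padlock + verified organisation name",
    Vetting.HIGH_ASSURANCE: "padlock + organisation name + CA brand",
}

def ui_for(policy_oid: str) -> str:
    # Unknown or missing policies fall back to the lowest treatment,
    # never silently up to a higher one.
    level = POLICY_TO_VETTING.get(policy_oid, Vetting.DOMAIN_ONLY)
    return UI_TREATMENT[level]

print(ui_for("1.3.6.1.4.1.99999.3"))  # padlock + organisation name + CA brand
```

The design point would be the fail-safe default: a cert whose policy the browser does not recognise gets the least trusted treatment, so a CA cannot gain UI prominence merely by inventing a new policy identifier.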
I personally don't see this as solving the phishing problem in and of itself; it's just one more possibly useful approach among the various approaches that have been proposed. I think it's perfectly appropriate for work in other areas to go forward, and I think the Mozilla project should consider implementing multiple approaches.
As a final comment, I'm sorry you've become discouraged by your interactions with the Mozilla project in this area. As I've written to you before in other contexts, I think your concerns are valid in some areas and overstated in others. To the extent that I get involved in these issues, I will see what I can do to address the valid concerns of you and others.
Some supplementary thoughts to go with Ian's, on the CAcert Blog: http://blog.cacert.org/2005/07/81.html
Posted by: Duane at June 30, 2005 11:05 PM