Comments: What makes a Security Project?

Do you want commentary on the general scoring criteria, or me to point out errors in your results?

There is pretty much zero, let me repeat, zero evidence that open source products are more secure. More transparent perhaps, but not more secure. Including it as a useful metric for judging the security of a product is more than a little ridiculous given what we currently know about public defect rates in open vs. closed source products.

As much as I like the work the OpenBSD community has done, their stance on certain classes of vulnerabilities has been pretty weak/lame to date. Consider last year's kernel bug that the Core Impact guys found. At first, since it was "only a DoS attack", they refused to categorize it as a security bug. Well, if an attacker shuts most people down, they are going to consider it a big deal... oh, and it turns out it was exploitable to get root... hmm.

But enough about methodology... how about the data?

As much as I've never been a Microsoft apologist in the past, they are actually leading the pack in this area. Perhaps not in terms of absolute defect rate (they started with a bit of a self-imposed handicap there), but in terms of current SDL methodology and openness. Check out their SDL blog, Michael Howard's blog, and the MSRC blog for exhaustive details on a number of recent security defects. Of the commercial organizations, they are being more open than anyone about the causes of defects and how to counter them.

You've said that the chart is a bit dated; perhaps we could take a look at where things stand in 2008 and get a better picture?

Posted by Andy Steingruebl at May 10, 2008 10:44 PM

The Linux kernel has a broad set of crypto goodies built into it, of varying quality. Its randomness source is of questionable quality, its disk encryption modules are very good, its password encryption stuff is standard, high-quality, etc.
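
As a concrete illustration of the first of those facilities, here is a minimal C sketch (not part of the original comment, just an assumed example) that reads a few bytes from the kernel's randomness source via /dev/urandom:

    /* Illustrative sketch: pull 16 bytes from the Linux kernel's
     * randomness pool via /dev/urandom and print them as hex. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned char buf[16];
        FILE *f = fopen("/dev/urandom", "rb");

        if (f == NULL) {
            perror("fopen /dev/urandom");
            return EXIT_FAILURE;
        }
        if (fread(buf, 1, sizeof buf, f) != sizeof buf) {
            fprintf(stderr, "short read from /dev/urandom\n");
            fclose(f);
            return EXIT_FAILURE;
        }
        fclose(f);

        /* Print the random bytes as two hex digits each. */
        for (size_t i = 0; i < sizeof buf; i++)
            printf("%02x", (unsigned)buf[i]);
        putchar('\n');

        return EXIT_SUCCESS;
    }

Whether the bytes that come back are of adequate quality is exactly the kind of judgement call being made above.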

Also, I strongly disagree with Andy. The defect rate is a very poor metric, IMHO. Not all defects are equal. You cannot just add them up.

Posted by Daniel Nagy at May 11, 2008 02:18 AM

To answer Andy's questions on scoring: the criteria are more important, I think. As the scoring is three years old, I reckon pointing out errors is futile. As you suggest, a lot has changed, and I agree that Microsoft has done a whole lot more than other organisations in that time.

Should we take a look at the different activities and update it to 2008? I don't know ... my question would be, how do we ensure some sort of relatively unbiased approach? How do we avoid mobbing by the apologists for the manufacturers?

I think the commentary on the criteria is what I'm more interested in, either developing them, expanding them or trashing them. There are some unusual artifacts in there that make one think that maybe the criteria need more thought.

Posted by Iang at May 11, 2008 05:19 AM

Just to be clear, lots of companies hire me! To give one example - I wrote that JAOO didn't address software security

http://1raindrop.typepad.com/1_raindrop/2007/07/doctor-it-hurts.html

and then they asked me to lead a whole software security track. This kind of thing happens all the time. I am not critiquing companies as a loose cannon, but earnestly hoping they do a better job. I realize it's a new field. I criticize Sun in particular because I hold them to a *higher* standard than others.

Posted by Gunnar at May 12, 2008 08:21 AM